VISUAL CONTROL OF ROBOTS: High-Performance Visual Servoing Peter I. Corke CSIRO Division of Manufacturing Technology, Australia.



  • VISUAL CONTROL OF ROBOTS:

    High-Performance Visual Servoing

    Peter I. Corke
    CSIRO Division of Manufacturing Technology, Australia.

  • To my family, Phillipa, Lucy and Madeline.


  • Editorial foreword

    It is no longer necessary to explain the word 'mechatronics'. The world has become accustomed to the blending of mechanics, electronics and computer control. That does not mean that mechatronics has lost its 'art'.

    The addition of vision sensing to assist in the solution of a variety of problems is still very much a 'cutting edge' topic of research. Peter Corke has written a very clear exposition which embraces both the theory and the practical problems encountered in adding vision sensing to a robot arm.

    There is great value in this book, both for advanced undergraduate reading and for the researcher or designer in industry who wishes to add vision-based control.

    We will one day come to expect vision sensing and control to be a regular feature of mechatronic devices from machine tools to domestic appliances. It is research such as this which will bring that day about.

    John Billingsley
    University of Southern Queensland,
    Toowoomba, QLD 4350
    August 1996


  • Author's Preface

    Outline

    This book is about the application of high-speed machine vision for closed-loop position control, or visual servoing, of a robot manipulator. The book aims to provide a comprehensive coverage of all aspects of the visual servoing problem: robotics, vision, control, technology and implementation issues. While much of the discussion is quite general the experimental work described is based on the use of a high-speed binary vision system with a monocular 'eye-in-hand' camera.

    The particular focus is on accurate high-speed motion, where in this context 'high speed' is taken to mean approaching, or exceeding, the performance limits stated by the robot manufacturer. In order to achieve such high performance I argue that it is necessary to have accurate dynamical models of the system to be controlled (the robot) and the sensor (the camera and vision system). Despite the long history of research in the constituent topics of robotics and computer vision, the system dynamics of closed-loop visually guided robot systems have not been well addressed in the literature to date.

    I am a confirmed experimentalist and therefore this book has a strong theme of experimentation. Experiments are used to build and verify models of the physical system components such as robots, cameras and vision systems. These models are then used for controller synthesis, and the controllers are verified experimentally and compared with results obtained by simulation.

    Finally, the book has a World Wide Web home page which serves as a virtual appendix. It contains links to the software and models discussed within the book as well as pointers to other useful sources of information. A videotape, showing many of the experiments, can be ordered via the home page.

    Background

    My interest in the area of visual servoing dates back to 1984 when I was involved in two research projects: video-rate feature extraction¹, and sensor-based robot control. At that time it became apparent that machine vision could be used for closed-loop control of robot position, since the video-field rate of 50 Hz exceeded the position setpoint rate of the Puma robot, which is only 36 Hz. Around the same period Weiss and Sanderson published a number of papers on this topic [224–226, 273], in particular concentrating on control strategies and the direct use of image features — but only in simulation. I was interested in actually building a system based on the feature extractor and robot controller, but for a number of reasons this was not possible at that time.

    ¹This work resulted in a commercial unit — the APA-512 [261], and its successor the APA-512+ [25]. Both devices are manufactured by Atlantek Microsystems Ltd. of Adelaide, Australia.


  • In the period 1988–89 I was fortunate in being able to spend 11 months at the GRASP Laboratory, University of Pennsylvania on a CSIRO Overseas Fellowship. There I was able to demonstrate a 60 Hz visual feedback system [65]. Whilst the sample rate was high, the actual closed-loop bandwidth was quite low. Clearly there was a need to more closely model the system dynamics so as to be able to achieve better control performance. On return to Australia this became the subject of my PhD research [52].

    Nomenclature

    The most commonly used symbols in this book, and their units, are listed below. Note that some symbols are overloaded, in which case their context must be used to disambiguate them.

    v           a vector
    vx          a component of a vector
    A           a matrix
    x̂           an estimate of x
    x̃           error in x
    xd          demanded value of x
    A^T         transpose of A
    αx, αy      pixel pitch                                       pixels/mm
    B           viscous friction coefficient                      N·m·s/rad
    C           camera calibration matrix (3 × 4)
    C(q, q̇)     manipulator centripetal and Coriolis term         kg·m²/s
    ceil(x)     returns n, the smallest integer such that n ≥ x
    E           illuminance (lux)                                 lx
    f           force                                             N
    f           focal length                                      m
    F           f-number
    F(q̇)        friction torque                                   N·m
    floor(x)    returns n, the largest integer such that n ≤ x
    G           gear ratio
    φ           luminous flux (lumens)                            lm
    φ           magnetic flux (Webers)                            Wb
    G           gear ratio matrix
    G(q)        manipulator gravity loading term                  N·m
    i           current                                           A
    In          n × n identity matrix
    j           √−1
    J           scalar inertia                                    kg·m²


    J           inertia tensor, 3 × 3 matrix                      kg·m²
    ᴬJB         Jacobian transforming velocities in frame A to frame B
    k, K        constant
    Ki          amplifier gain (transconductance)                 A/V
    Km          motor torque constant                             N·m/A
    K           forward kinematics
    K⁻¹         inverse kinematics
    L           inductance                                        H
    L           luminance (nit)                                   nt
    mi          mass of link i                                    kg
    M(q)        manipulator inertia matrix                        kg·m²
    Ord         order of polynomial
    q           generalized joint coordinates
    Q           generalized joint torque/force
    R           resistance                                        Ω
    θ           angle                                             rad
    θ           vector of angles, generally robot joint angles    rad
    s           Laplace transform operator
    si          COM of link i with respect to the link i coordinate frame    m
    Si          first moment of link i, Si = mi si                kg·m
    σ           standard deviation
    t           time                                              s
    T           sample interval                                   s
    T           lens transmission constant
    Te          camera exposure interval                          s
    T           homogeneous transformation
    ᴬTB         homogeneous transform of point B with respect to frame A;
                if A is not given it is assumed relative to world coordinate
                frame 0. Note that ᴬTB = (ᴮTA)⁻¹
    τ           torque                                            N·m
    τC          Coulomb friction torque                           N·m
    v           voltage                                           V
    ω           frequency                                         rad/s
    x           3-D pose, x = [x y z rx ry rz]^T, comprising translation
                along, and rotation about, the X, Y and Z axes
    x, y, z     Cartesian coordinates
    X0, Y0      coordinates of the principal point                pixels
    ix, iy      camera image plane coordinates                    m
    iX, iY      camera image plane coordinates                    pixels
    iX          camera image plane coordinates, iX = [iX iY]^T    pixels
    iX̃          image plane error


    z           z-transform operator
    Z           Z-transform
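The nomenclature above notes that ᴬTB = (ᴮTA)⁻¹, i.e. swapping the reference and target frames inverts the homogeneous transform. As a quick numerical check — an illustrative sketch, not code from the book, and the helper `homog` and its argument values are hypothetical:

```python
import numpy as np

def homog(theta_z, t):
    """Homogeneous transform: rotation theta_z about the Z axis, translation t."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

ATB = homog(np.pi / 4, [1.0, 2.0, 0.5])  # pose of frame B with respect to frame A
BTA = np.linalg.inv(ATB)                 # pose of frame A with respect to frame B

# Composing a transform with its frame-swapped counterpart gives the identity
assert np.allclose(ATB @ BTA, np.eye(4))
```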

    The following conventions have also been adopted:

    Time domain variables are in lower case, frequency domain in upper case.

    Transfer functions will frequently be written using the notation

        K (a) [ζ, ωn] = K (s/a + 1) / ( s²/ωn² + 2ζs/ωn + 1 )

    A free integrator is an exception, and (0) is used to represent s.
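To make the shorthand concrete, each factor can be expanded into polynomial coefficients in s. The sketch below is illustrative only (the helper names are hypothetical); it assumes a real factor (a) expands to s/a + 1, with (0) a bare s, and a pair [ζ, ωn] to s²/ωn² + 2ζs/ωn + 1, matching the notation just defined:

```python
import numpy as np

def real_factor(a):
    """(a) denotes the factor s/a + 1; (0) denotes a bare s (free integrator)."""
    return np.array([1.0, 0.0]) if a == 0 else np.array([1.0 / a, 1.0])

def complex_pair(zeta, wn):
    """[zeta, wn] denotes the factor s^2/wn^2 + 2*zeta*s/wn + 1."""
    return np.array([1.0 / wn**2, 2.0 * zeta / wn, 1.0])

# Example: K (a) [zeta, wn] with K = 2, a = 5, zeta = 0.5, wn = 10
# expands to 2 (s/5 + 1) / (s^2/100 + s/10 + 1).
K = 2.0
num = K * real_factor(5.0)     # numerator coefficients, highest power of s first
den = complex_pair(0.5, 10.0)  # denominator coefficients, highest power of s first
```

The coefficient arrays are ordered highest power first, so they can be fed directly to a polynomial or transfer-function routine.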

    When specifying motor motion, inertia and friction parameters it is important that a consistent reference is used, usually either the motor or the load, denoted by the subscripts m or l respectively.

    For numeric quantities the units radm and radl are used to indicate the reference frame.
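The standard gearbox scalings behind this convention can be sketched as follows. This is an illustrative sketch only: `to_load_side` is a hypothetical helper, the numeric values are not from the book, and G is taken as the motor-to-load reduction ratio (θ_motor = G θ_load):

```python
def to_load_side(J_m, B_m, tau_m, omega_m, G):
    """Refer motor-referenced inertia, viscous friction, torque and velocity
    to the load side of a gearbox with reduction ratio G
    (theta_motor = G * theta_load)."""
    # Inertia and viscous friction scale by G^2, torque by G, velocity by 1/G.
    return G**2 * J_m, G**2 * B_m, G * tau_m, omega_m / G

# Illustrative motor-side values (hypothetical, not from the book)
J_l, B_l, tau_l, omega_l = to_load_side(
    J_m=200e-6,     # kg m^2, motor-referenced inertia
    B_m=1e-3,       # N m s/rad, motor-referenced viscous friction
    tau_m=0.5,      # N m, motor torque
    omega_m=100.0,  # rad/s, motor velocity
    G=60.0,
)
```

Mixing motor-referenced and load-referenced values in one equation is a common source of G² errors, which is why the book insists on a consistent reference.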

    In order to clearly distinguish results that were experimentally determined from simulated or derived results, the former will always be designated as 'measured' in the caption and index entry.

    A comprehensive glossary of terms and abbreviations is provided in Appendix A.


  • Acknowledgements

    The work described in this book is largely based on my PhD research [52] which was carried out, part time, at the University of Melbourne over the period 1991–94. My supervisors Professor Malcolm Good at the University of Melbourne, and Dr. Paul Dunn at CSIRO provided much valuable discussion and guidance over the course of the research, and critical comments on the draft text.

    That work could not have occurred without the generosity and support of my employer, CSIRO. I am indebted to Dr. Bob Brown and Dr. S. Ramakrishnan for supporting me in the Overseas Fellowship and PhD study, and making available the necessary time and laboratory facilities. I would like to thank my CSIRO colleagues for their support of this work, in particular: Dr. Paul Dunn, Dr. Patrick Kearney, Robin Kirkham, Dennis Mills, and Vaughan Roberts for technical advice and much valuable discussion; Murray Jensen and Geoff Lamb for keeping the computer systems running; Jannis Young and Karyn Gee, the librarians, for tracking down all manner of references; Les Ewbank for mechanical design and drafting; Ian Brittle's Research Support Group for mechanical construction; and Terry Harvey and Steve Hogan for electronic construction. The PhD work was partially supported by a University of Melbourne/ARC small grant. Writing this book was partially supported by the Cooperative Research Centre for Mining Technology and Equipment (CMTE), a joint venture between AMIRA, CSIRO, and the University of Queensland.

    Many others helped as well. Professor Richard (Lou) Paul, University of Pennsylvania, was there at the beginning and made facilities at the GRASP laboratory available to me. Dr. Kim Ng of Monash University and Dr. Rick Alexander helped in discussions on camera calibration and lens distortion, and also loaned me the SHAPE system calibration target used in Chapter 4. Vision Systems Ltd. of Adelaide, through their then US distributor Tom Seitzler of Vision International, loaned me an APA-512 video-rate feature extractor unit for use while I was at the GRASP Laboratory. David Hoadley proofread the original thesis, and my next door neighbour, Jack Davies, fixed lots of things around my house that I didn't get around to doing.


  • Contents

    1 Introduction
      1.1 Visual servoing
        1.1.1 Related disciplines
      1.2 Structure of the book

    2 Modelling the robot
      2.1 Manipulator kinematics
        2.1.1 Forward and inverse kinematics
        2.1.2 Accuracy and repeatability
        2.1.3 Manipulator kinematic parameters
      2.2 Manipulator rigid-body dynamics
        2.2.1 Recursive Newton-Euler formulation
        2.2.2 Symbolic manipulation
        2.2.3 Forward dynamics
        2.2.4 Rigid-body inertial parameters
        2.2.5 Transmission and gearing
        2.2.6 Quantifying rigid body effects
        2.2.7 Robot payload
      2.3 Electro-mechanical dynamics
        2.3.1 Friction
        2.3.2 Motor
        2.3.3 Current loop
        2.3.4 Combined motor and current-loop dynamics
        2.3.5 Velocity loop
        2.3.6 Position loop
        2.3.7 Fundamental performance limits
      2.4 Significance of dynamic effects
      2.5 Manipulator control
        2.5.1 Rigid-body dynamics compensation
        2.5.2 Electro-mechanical dynamics compensation
      2.6 Computational issues
        2.6.1 Parallel computation
        2.6.2 Symbolic simplification of run-time equations
        2.6.3 Significance-based simplification
        2.6.4 Comparison

    3 Fundamentals of image capture
      3.1 Light
        3.1.1 Illumination
        3.1.2 Surface reflectance
        3.1.3 Spectral characteristics and color temperature
      3.2 Image formation
        3.2.1 Light gathering and metering
        3.2.2 Focus and depth of field
        3.2.3 Image quality
        3.2.4 Perspective transform
      3.3 Camera and sensor technologies
        3.3.1 Sensors
        3.3.2 Spatial sampling
        3.3.3 CCD exposure control and motion blur
        3.3.4 Linearity
        3.3.5 Sensitivity
        3.3.6 Dark current
        3.3.7 Noise
        3.3.8 Dynamic range
      3.4 Video standards
        3.4.1 Interlacing and machine vision
      3.5 Image digitization
        3.5.1 Offset and DC restoration
        3.5.2 Signal conditioning
        3.5.3 Sampling and aspect ratio
        3.5.4 Quantization
        3.5.5 Overall MTF
        3.5.6 Visual temporal sampling
      3.6 Camera and lighting constraints
        3.6.1 Illumination
      3.7 The human eye

    4 Machine vision
      4.1 Image feature extraction
        4.1.1 Whole scene segmentation
        4.1.2 Moment features
        4.1.3 Binary region features
        4.1.4 Feature tracking
      4.2 Perspective and photogrammetry
        4.2.1 Close-range photogrammetry
        4.2.2 Camera calibration techniques
        4.2.3 Eye-hand calibration

    5 Visual servoing
      5.1 Fundamentals
      5.2 Prior work
      5.3 Position-based visual servoing
        5.3.1 Photogrammetric techniques
        5.3.2 Stereo vision
        5.3.3 Depth from motion
      5.4 Image based servoing
        5.4.1 Approaches to image-based visual servoing
      5.5 Implementation issues
        5.5.1 Cameras
        5.5.2 Image processing
        5.5.3 Feature extraction
        5.5.4 Visual task specification

    6 Modelling an experimental visual servo system
      6.1 Architectures and dynamic performance
      6.2 Experimental hardware and software
        6.2.1 Processor and operating system
        6.2.2 Robot control hardware
        6.2.3 ARCL
        6.2.4 Vision system
        6.2.5 Visual servo support software — RTVL
      6.3 Kinematics of camera mount and lens
        6.3.1 Camera mount kinematics
        6.3.2 Modelling the lens
      6.4 Visual feedback control
        6.4.1 Control structure
        6.4.2 "Black box" experiments
        6.4.3 Modelling system dynamics
        6.4.4 The effect of multi-rate sampling
        6.4.5 A single-rate model
        6.4.6 The effect of camera shutter interval
        6.4.7 The effect of target range
        6.4.8 Comparison with joint control schemes
        6.4.9 Summary

    7 Control design and performance
      7.1 Control formulation
      7.2 Performance metrics
      7.3 Compensator design and evaluation
        7.3.1 Addition of an extra integrator
        7.3.2 PID controller
        7.3.3 Smith's method
        7.3.4 State feedback controller with integral action
        7.3.5 Summary
      7.4 Axis control modes for visual servoing
        7.4.1 Torque control
        7.4.2 Velocity control
        7.4.3 Position control
        7.4.4 Discussion
        7.4.5 Non-linear simulation and model error
        7.4.6 Summary
      7.5 Visual feedforward control
        7.5.1 High-performance axis velocity control
        7.5.2 Target state estimation
        7.5.3 Feedforward control implementation
        7.5.4 Experimental results
      7.6 Biological parallels
      7.7 Summary

    8 Further experiments in visual servoing
      8.1 Visual control of a major axis
        8.1.1 The experimental setup
        8.1.2 Trajectory generation
        8.1.3 Puma 'native' position control
        8.1.4 Understanding joint 1 dynamics
        8.1.5 Single-axis computed torque control
        8.1.6 Vision based control
        8.1.7 Discussion
      8.2 High-performance 3D translational visual servoing
        8.2.1 Visual control strategy
        8.2.2 Axis velocity control
        8.2.3 Implementation details
        8.2.4 Results and discussion
      8.3 Conclusion

    9 Discussion and future directions
      9.1 Discussion
      9.2 Visual servoing: some questions (and answers)
      9.3 Future work

    Bibliography

    A Glossary

    B This book on the Web

    C APA-512

    D RTVL: a software system for robot visual servoing
      D.1 Image processing control
      D.2 Image features
      D.3 Timestamps and synchronized interrupts
      D.4 Real-time graphics
      D.5 Variable watch
      D.6 Parameters
      D.7 Interactive control facility
      D.8 Data logging and debugging
      D.9 Robot control
      D.10 Application program facilities
      D.11 An example — planar positioning
      D.12 Conclusion

    E LED strobe

    Index


  • List of Figures

    1.1 Generalstructureof hierarchicalmodel-basedrobotandvisionsystem. 4

    2.1 Dif ferentformsof Denavit-Hartenberg notation. . . . . . . . . . . . . 82.2 Detailsof coordinateframesusedfor Puma560 . . . . . . . . . . . . 132.3 Notationfor inversedynamics . . . . . . . . . . . . . . . . . . . . . 172.4 Measuredandestimatedgravity loadon joint 2. . . . . . . . . . . . . 252.5 Configurationdependentinertiafor joint 1 . . . . . . . . . . . . . . . 292.6 Configurationdependentinertiafor joint 2 . . . . . . . . . . . . . . . 292.7 Gravity loadon joint 2 . . . . . . . . . . . . . . . . . . . . . . . . . 302.8 Typical friction versusspeedcharacteristic. . . . . . . . . . . . . . . 322.9 Measuredmotorcurrentversusjoint velocity for joint 2 . . . . . . . . 342.10 Block diagramof motormechanicaldynamics. . . . . . . . . . . . . 362.11 Schematicof motorelectricalmodel. . . . . . . . . . . . . . . . . . . 362.12 Measuredjoint angleandvoltagedatafrom open-circuitteston joint 2. 392.13 Block diagramof motorcurrentloop . . . . . . . . . . . . . . . . . . 432.14 Measuredjoint 6 current-loopfrequency response. . . . . . . . . . . 442.15 Measuredjoint 6 motorandcurrent-looptransferfunction. . . . . . . 452.16 Measuredmaximumcurrentstepresponsefor joint 6. . . . . . . . . . 482.17 SIMULINK modelMOTOR . . . . . . . . . . . . . . . . . . . . . . 502.18 SIMULINK modelLMOTOR . . . . . . . . . . . . . . . . . . . . . 502.19 Velocity loopblockdiagram. . . . . . . . . . . . . . . . . . . . . . . 512.20 Measuredjoint 6 velocity-looptransferfunction . . . . . . . . . . . . 522.21 SIMULINK modelVLOOP . . . . . . . . . . . . . . . . . . . . . . 522.22 Unimationservo positioncontrolmode. . . . . . . . . . . . . . . . . 532.23 Block diagramof Unimationpositioncontrolloop. . . . . . . . . . . 542.24 Root-locusdiagramof positionloopwith no integralaction. . . . . . 562.25 Root-locusdiagramof positionloopwith integralactionenabled.. . . 562.26 SIMULINK modelPOSLOOP . . . . . . . . . . . . . . . . . . . . . 572.27 Unimationservo currentcontrolmode.. . . . . . . . . . . . 
. . . . . 572.28 Standardtrajectorytorquecomponents. . . . . . . . . . . . . . . . . 59

    xxi

  • xxii LIST OF FIGURES

    2.29 Computedtorquecontrolstructure.. . . . . . . . . . . . . . . . . . . 612.30 Feedforwardcontrolstructure. . . . . . . . . . . . . . . . . . . . . . 612.31 Histogramof torqueexpressioncoefficient magnitudes . . . . . . . . 68

    3.1 Stepsinvolvedin imageprocessing. . . . . . . . . . . . . . . . . . . 743.2 Luminositycurve for standardobserver. . . . . . . . . . . . . . . . . 753.3 Specularanddiffusesurfacereflectance . . . . . . . . . . . . . . . . 763.4 Blackbodyemissionsfor solarandtungstenillumination. . . . . . . . 773.5 Elementaryimageformation . . . . . . . . . . . . . . . . . . . . . . 793.6 Depthof field bounds. . . . . . . . . . . . . . . . . . . . . . . . . . 833.7 Centralperspective geometry. . . . . . . . . . . . . . . . . . . . . . . 873.8 CCDphotositechargewellsandincidentphotons.. . . . . . . . . . . 883.9 CCDsensorarchitectures. . . . . . . . . . . . . . . . . . . . . . . . 893.10 Pixel exposureintervals . . . . . . . . . . . . . . . . . . . . . . . . . 913.11 Cameraspatialsampling . . . . . . . . . . . . . . . . . . . . . . . . 923.12 Somephotositecaptureprofiles. . . . . . . . . . . . . . . . . . . . . 933.13 MTF for variouscaptureprofiles. . . . . . . . . . . . . . . . . . . . . 933.14 Exposureinterval of thePulnixcamera. . . . . . . . . . . . . . . . . 953.15 Experimentalsetupto determinecamerasensitivity. . . . . . . . . . . 973.16 Measuredresponseof AGCcircuit to changingillumination. . . . . . 983.17 Measuredresponseof AGCcircuit to stepilluminationchange.. . . . 993.18 Measuredspatialvarianceof illuminanceasa functionof illuminance. 1023.19 CCIRstandardvideowaveform . . . . . . . . . . . . . . . . . . . . 1033.20 Interlacedvideofields. . . . . . . . . . . . . . . . . . . . . . . . . . 1043.21 Theeffectsof field-shutteringona moving object. . . . . . . . . . . . 1063.22 Phasedelayfor digitizerfilter . . . . . . . . . . . . . . . . . . . . . . 1083.23 Stepresponseof digitizerfilter . . . . . . . . . . . . . . . . . . . . . 1083.24 Measuredcameraanddigitizerhorizontaltiming. . . . . . . . . . . . 1093.25 Cameraandimageplanecoordinatesystems. . . . . . . . . . . . . . 1123.26 Measuredcameraresponseto horizontalstepilluminationchange. . . 
1133.27 Measuredcameraresponseto verticalstepilluminationchange.. . . . 1143.28 Typicalarrangementof anti-aliasing(low-pass)filter, samplerandzero-

    orderhold. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1153.29 Magnituderesponseof cameraoutputversustargetmotion . . . . . . 1173.30 Magnituderesponseof cameraoutputto changingthreshold . . . . . 1173.31 Spreadsheetprogramfor cameraandlighting setup . . . . . . . . . . 1193.32 Comparisonof illuminancedueto aconventionalfloodlampandcam-

    eramountedLEDs . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

    4.1 Stepsinvolvedin sceneinterpretation . . . . . . . . . . . . . . . . . 1254.2 Boundaryrepresentationaseithercrackcodesor chaincode. . . . . . 126

  • LIST OF FIGURES xxiii

    4.3 Exaggeratedview showing circlecentroidoffsetin theimageplane. . 1284.4 Equivalentellipsefor anarbitraryregion. . . . . . . . . . . . . . . . 1294.5 Theidealsensorarrayshowing rectangularimageandnotation.. . . . 1314.6 Theeffect of edgegradientsonbinarizedwidth. . . . . . . . . . . . . 1344.7 Relevantcoordinateframes.. . . . . . . . . . . . . . . . . . . . . . . 1394.8 Thecalibrationtargetusedfor intrinsicparameterdetermination.. . . 1404.9 Thetwo-planecameramodel.. . . . . . . . . . . . . . . . . . . . . . 1424.10 Contourplot of intensityprofilearoundcalibrationmarker. . . . . . . 1464.11 Detailsof cameramounting. . . . . . . . . . . . . . . . . . . . . . . 1484.12 Detailsof camera,lensandsensorplacement. . . . . . . . . . . . . . 149

    5.1 Relevantcoordinateframes . . . . . . . . . . . . . . . . . . . . . . . 1525.2 Dynamicposition-basedlook-and-movestructure.. . . . . . . . . . . 1545.3 Dynamicimage-basedlook-and-movestructure.. . . . . . . . . . . . 1545.4 Position-basedvisualservo (PBVS)structureasperWeiss. . . . . . . 1555.5 Image-basedvisualservo (IBVS) structureasperWeiss. . . . . . . . 1555.6 Exampleof initial anddesiredview of acube . . . . . . . . . . . . . 162

    6.1 Photographof VME rack . . . . . . . . . . . . . . . . . . . . . . . . 1756.2 Overall view of theexperimentalsystem. . . . . . . . . . . . . . . . 1766.3 Robotcontrollerhardwarearchitecture. . . . . . . . . . . . . . . . . 1786.4 ARCL setpointandservo communicationtiming . . . . . . . . . . . 1796.5 MAXBUS andVMEbusdatapaths. . . . . . . . . . . . . . . . . . . 1806.6 Schematicof imageprocessingdataflow . . . . . . . . . . . . . . . . 1816.7 Comparisonof latency for frameandfield-rateprocessing. . . . . . . 1826.8 TypicalRTVL display. . . . . . . . . . . . . . . . . . . . . . . . . . 1836.9 A simplecameramount. . . . . . . . . . . . . . . . . . . . . . . . . 1866.10 Cameramountusedin thiswork . . . . . . . . . . . . . . . . . . . . 1866.11 Photographof cameramountingarrangement.. . . . . . . . . . . . . 1876.12 Target locationin termsof bearingangle . . . . . . . . . . . . . . . . 1896.13 Lenscenterof rotation . . . . . . . . . . . . . . . . . . . . . . . . . 1906.14 Coordinateandsignconventions . . . . . . . . . . . . . . . . . . . . 1936.15 Block diagramof 1-DOFvisualfeedbacksystem . . . . . . . . . . . 1946.16 Photographof squarewave responsetestconfiguration . . . . . . . . 1956.17 Experimentalsetupfor stepresponsetests.. . . . . . . . . . . . . . . 1956.18 Measuredresponseto 'visualstep'. . . . . . . . . . . . . . . . . . . . 1966.19 Measuredclosed-loopfrequency responseof single-axisvisualservo . 1976.20 Measuredphaseresponseanddelayestimate. . . . . . . . . . . . . . 1986.21 Temporalrelationshipsin imageprocessingandrobotcontrol . . . . . 1996.22 SIMULINK modelof the1-DOFvisualfeedbackcontroller. . . . . . 2006.23 Multi-ratesamplingexample . . . . . . . . . . . . . . . . . . . . . . 201

  • xxiv LIST OF FIGURES

    6.24 Analysisof samplingtimedelay∆21. . . . . . . . . . . . . . . . . . . 2026.25 Analysisof samplingtimedelay∆20. . . . . . . . . . . . . . . . . . . 2036.26 Comparisonof measuredandsimulatedstepresponses. . . . . . . . 2056.27 Root-locusof single-ratemodel. . . . . . . . . . . . . . . . . . . . . 2056.28 Bodeplot of closed-looptransferfunction. . . . . . . . . . . . . . . . 2066.29 Measuredeffect of motionblur onapparenttargetarea . . . . . . . . 2076.30 Measuredstepresponsesfor varyingexposureinterval . . . . . . . . 2086.31 Measuredstepresponsefor varyingtargetrange . . . . . . . . . . . . 209

    7.1 Visualfeedbackblock diagramshowing targetmotionasdisturbanceinput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212

7.2 Simulated tracking performance of visual feedback controller . . . . 215
7.3 Root locus for visual feedback system with additional integrator . . . . 216
7.4 Root locus for visual feedback system with PID controller . . . . 217
7.5 Simulated response with PID controller . . . . 218
7.6 Simulation of Smith's predictive controller . . . . 220
7.7 Visual feedback system with state-feedback control and estimator . . . . 221
7.8 Root locus for pole-placement controller . . . . 222
7.9 Simulation of pole-placement fixation controller . . . . 223
7.10 Experimental results with pole-placement compensator . . . . 224
7.11 Comparison of image-plane error for various visual feedback compensators . . . . 226
7.12 Simulation of pole-placement fixation controller for triangular target motion . . . . 227
7.13 Block diagram of generalized actuator and vision system . . . . 229
7.14 Root locus for visual servo with actuator torque controller . . . . 230
7.15 Root locus for visual servo with actuator velocity controller . . . . 230
7.16 Root locus for visual servo with actuator position controller . . . . 232
7.17 Root locus plot for visual servo with actuator position controller plus additional integrator . . . . 232
7.18 Simulation of time responses to target step motion for visual servo systems based on torque, velocity and position-controlled axes . . . . 233
7.19 Block diagram of visual servo with feedback and feedforward compensation . . . . 236
7.20 Block diagram of visual servo with feedback and estimated feedforward compensation . . . . 237
7.21 Block diagram of velocity feedforward and feedback control structure as implemented . . . . 238
7.22 Block diagram of Unimation servo in velocity-control mode . . . . 238
7.23 Block diagram of digital axis-velocity loop . . . . 239
7.24 Bode plot comparison of differentiators . . . . 241


7.25 SIMULINK model DIGVLOOP . . . . 242
7.26 Root-locus of digital axis-velocity loop . . . . 243
7.27 Bode plot of digital axis-velocity loop . . . . 243
7.28 Measured step response of digital axis-velocity loop . . . . 244
7.29 Simplified scalar form of the Kalman filter . . . . 248
7.30 Simulated response of α–β velocity estimator . . . . 250
7.31 Comparison of velocity estimators . . . . 251
7.32 Details of system timing for velocity feedforward controller . . . . 252
7.33 SIMULINK model FFVSERVO . . . . 253
7.34 Simulation of feedforward fixation controller . . . . 254
7.35 Turntable fixation experiment . . . . 254
7.36 Measured tracking performance for target on turntable . . . . 256
7.37 Measured tracking velocity for target on turntable . . . . 256
7.38 Ping-pong ball fixation experiment . . . . 257
7.39 Measured tracking performance for flying ping-pong ball . . . . 258
7.40 Oculomotor control system model by Robinson . . . . 259
7.41 Oculomotor control system model as feedforward network . . . . 259

8.1 Plan view of the experimental setup for major axis control . . . . 264
8.2 Position, velocity and acceleration profile of the quintic polynomial . . . . 266
8.3 Measured axis response for Unimate position control . . . . 267
8.4 Measured joint angle and camera acceleration under Unimate position control . . . . 268
8.5 Measured tip displacement . . . . 270
8.6 Measured arm structure frequency response function . . . . 271
8.7 Joint 1 current-loop frequency response function . . . . 272
8.8 Root-locus for motor and current loop with single-mode structural model . . . . 273
8.9 Schematic of simple two-inertia model . . . . 273
8.10 SIMULINK model CTORQUEJ1 . . . . 275
8.11 Measured axis velocity step response for single-axis computed-torque control . . . . 277
8.12 Measured axis response for single-axis computed-torque control . . . . 278
8.13 Measured axis response under hybrid visual control . . . . 280
8.14 Measured tip displacement for visual control . . . . 281
8.15 Plan view of the experimental setup for translational visual servoing . . . . 283
8.16 Block diagram of translational control structure . . . . 285
8.17 Task structure for translation control . . . . 288
8.18 Camera orientation geometry . . . . 289
8.19 Measured centroid error for translational visual servo control . . . . 291
8.20 Measured centroid error for translational visual servo control with no centripetal/Coriolis feedforward . . . . 291


8.21 Measured joint rates for translational visual servo control . . . . 292
8.22 Measured camera Cartesian position . . . . 293
8.23 Measured camera Cartesian velocity . . . . 293
8.24 Measured camera Cartesian path . . . . 295
8.25 Measured target Cartesian path estimate . . . . 295

C.1 APA-512 block diagram . . . . 327
C.2 APA region data timing . . . . 328
C.3 Perimeter contribution lookup scheme . . . . 330
C.4 Hierarchy of binary regions . . . . 331
C.5 Field mode operation . . . . 331

D.1 Schematic of the experimental system . . . . 334
D.2 Block diagram of video-locked timing hardware . . . . 335
D.3 Graphics rendering subsystem . . . . 336
D.4 Variable watch facility . . . . 337
D.5 On-line parameter manipulation . . . . 339

E.1 Photograph of camera-mounted solid-state strobe . . . . 343
E.2 Derivation of LED timing from vertical synchronization pulses . . . . 344
E.3 LED light output as a function of time . . . . 345
E.4 LED light output as a function of time (expanded scale) . . . . 345
E.5 Measured LED intensity as a function of current . . . . 346

List of Tables

2.1 Kinematic parameters for the Puma 560 . . . . 12
2.2 Comparison of computational costs for inverse dynamics . . . . 16
2.3 Link mass data . . . . 22
2.4 Link COM position with respect to link frame . . . . 23
2.5 Comparison of gravity coefficients from several sources . . . . 24
2.6 Comparison of shoulder gravity load models in cosine form . . . . 26
2.7 Link inertia about the COM . . . . 26
2.8 Relationship between motor and load referenced quantities . . . . 28
2.9 Puma 560 gear ratios . . . . 28
2.10 Minimum and maximum values of normalized inertia . . . . 31
2.11 Mass and inertia of end-mounted camera . . . . 31
2.12 Measured friction parameters . . . . 34
2.13 Motor inertia values . . . . 37
2.14 Measured motor torque constants . . . . 38
2.15 Measured and manufacturer's values of armature resistance . . . . 41
2.16 Measured current-loop gain and maximum current . . . . 44
2.17 Motor torque limits . . . . 47
2.18 Comparison of experimental and estimated velocity limits due to amplifier voltage saturation . . . . 49
2.19 Puma 560 joint encoder resolution . . . . 53
2.20 Measured step-response gains of velocity loop . . . . 55
2.21 Summary of fundamental robot performance limits . . . . 58
2.22 Ranking of terms for joint torque expressions . . . . 60
2.23 Operation count for Puma 560 specific dynamics after parameter value substitution and symbolic simplification . . . . 67
2.24 Significance-based truncation of the torque expressions . . . . 69
2.25 Comparison of computation times for Puma dynamics . . . . 70
2.26 Comparison of efficiency for dynamics computation . . . . 71

3.1 Common SI-based photometric units . . . . 75



3.2 Relevant physical constants . . . . 77
3.3 Photons per lumen for some typical illuminants . . . . 79
3.4 Angles of view for Pulnix CCD sensor and 35 mm film . . . . 81
3.5 Pulnix camera grey-level response . . . . 97
3.6 Details of CCIR horizontal timing . . . . 105
3.7 Details of CCIR vertical timing . . . . 105
3.8 Manufacturer's specifications for the Pulnix TM-6 camera . . . . 110
3.9 Derived pixel scale factors for the Pulnix TM-6 camera and DIGIMAX digitizer . . . . 111
3.10 Constraints in image formation . . . . 118

4.1 Comparison of camera calibration techniques . . . . 141
4.2 Summary of calibration experiment results . . . . 146

7.1 Simulated RMS pixel error for pole-placement controller . . . . 225
7.2 Effect of parameter variation on velocity mode control . . . . 235
7.3 Critical velocities for Puma 560 velocity estimators . . . . 240
7.4 Comparison of operation count for various velocity estimators . . . . 251

8.1 Peak velocity and acceleration for test trajectory . . . . 265
8.2 Comparison of implementational differences between controllers . . . . 279
8.3 Summary of task execution times . . . . 289

Chapter 1

Introduction

    1.1 Visual servoing

Visual servoing is a rapidly maturing approach to the control of robot manipulators that is based on visual perception of robot and workpiece location. More concretely, visual servoing involves the use of one or more cameras and a computer vision system to control the position of the robot's end-effector relative to the workpiece as required by the task.

Modern manufacturing robots can perform assembly and material handling jobs with speed and precision, yet compared to human workers robots are at a distinct disadvantage in that they cannot 'see' what they are doing. In industrial applications, considerable engineering effort is therefore expended in providing a suitable work environment for these blind machines. This entails the design and manufacture of specialized part feeders, jigs to hold the work in progress, and special purpose end-effectors. The resulting high non-recurrent engineering costs are largely responsible for robots failing to meet their initial promise of being versatile reprogrammable workers [84] able to rapidly change from one task to the next.

Once the structured work environment has been created, the spatial coordinates of all relevant points must then be taught. Ideally, teaching would be achieved using data from CAD models of the workplace; however, due to low robot accuracy manual teaching is often required. This low accuracy is a consequence of the robot's tool-tip pose being inferred from measured joint angles using a model of the robot's kinematics. Discrepancies between the model and the actual robot lead to tool-tip pose errors.

Speed, or cycle time, is the critical factor in the economic justification for a robot. Machines capable of extremely high tool-tip accelerations now exist but the overall cycle time is dependent upon other factors such as settling time and overshoot. High



speed and acceleration are often achieved at considerable cost since effects such as rigid-body dynamics, link and transmission flexibility become significant. To achieve precise end-point control using joint position sensors the robot must be engineered to minimize these effects. The AdeptOne manipulator for instance, widely used in high-speed electronic assembly, has massive links so as to achieve high rigidity, but this is at the expense of increased torque necessary to accelerate the links. The problems of conventional robot manipulators may be summarized as:

1. It is necessary to provide, at considerable cost, highly structured work environments for robots.

2. The limited accuracy of a robot frequently necessitates time-consuming manual teaching of robot positions.

3. Mechanical dynamics in the robot's structure and drive train fundamentally constrain the minimum cycle time.

A visually servoed robot does not need to know a priori the coordinates of its workpieces or other objects in its workspace. In a manufacturing environment visual servoing could thus eliminate robot teaching and allow tasks that were not strictly repetitive, such as assembly without precise fixturing and with incoming components that were unoriented or perhaps swinging on overhead transfer lines.

Visual servoing also provides the potential to relax the mechanical accuracy and stiffness requirements for robot mechanisms and hence reduce their cost. The deficiencies of the mechanism would be compensated for by a vision sensor and feedback so as to achieve the desired accuracy and endpoint settling time. Jägersand [133] for example shows how positioning accuracy of a robot with significant backlash was improved using visual servoing. Such issues are significant for ultra-fine pitch electronic assembly [126] where planar positioning accuracy of 0.5 µm and rotational accuracy of 0.1° will be required and settling time will be significant. Moore's Law¹ provides an economic motivation for this approach. Mechanical engineering is a mature technology and costs do not decrease with time. Sensors and control computers on the other hand have, and will continue to, exhibit dramatic improvement in performance to price ratio over time.

Visual servoing is also applicable to the unstructured environments that will be encountered by field and service robots. Such robots must accomplish tasks even though the exact location of the robot and workpiece are not known and are often not practicably measurable. Robotic fruit picking [206], for example, requires the robot, whose location is only approximately known, to grasp a fruit whose position is also unknown and perhaps varying with time.

¹Gordon Moore (co-founder of Intel) predicted in 1965 that the transistor density of semiconductor chips would double approximately every 18 months.


The use of vision with robots has a long history [291] and today vision systems are available from major robot vendors that are highly integrated with the robot's programming system. Capabilities range from simple binary image processing to more complex edge- and feature-based systems capable of handling overlapped parts [35]. The common characteristic of all these systems is that they are static, and typically image processing times are of the order of 0.1 to 1 second. In such systems visual sensing and manipulation are combined in an open-loop fashion, 'looking' then 'moving'.

The accuracy of the 'look-then-move' approach depends directly on the accuracy of the visual sensor and the robot manipulator. An alternative to increasing the accuracy of these sub-systems is to use a visual-feedback control loop which will increase the overall accuracy of the system. Taken to the extreme, machine vision can provide closed-loop position control for a robot end-effector — this is referred to as visual servoing. The term visual servoing appears to have been first introduced by Hill and Park [116] in 1979 to distinguish their approach from earlier 'blocks world' experiments where the robot system alternated between picture taking and moving. Prior to the introduction of this term, the less specific term visual feedback was generally used. For the purposes of this book, the task in visual servoing is to use visual information to control the pose² of the robot's end-effector relative to a target object or a set of target features (the task can also be defined for mobile robots, where it becomes the control of the vehicle's pose with respect to some landmarks). The great benefit of feedback control is that the accuracy of the closed-loop system can be made relatively insensitive to calibration errors and non-linearities in the open-loop system. However the inevitable downside is that introducing feedback admits the possibility of closed-loop instability, and this is a major theme of this book.

The camera(s) may be stationary or held in the robot's 'hand'. The latter case, often referred to as the eye-in-hand configuration, results in a system capable of providing endpoint relative positioning information directly in Cartesian or task space. This presents opportunities for greatly increasing the versatility and accuracy of robotic automation tasks.

Vision has not, to date, been extensively investigated as a high-bandwidth sensor for closed-loop control. Largely this has been because of the technological constraint imposed by the huge rates of data output from a video camera (around 10^7 pixels/s), and the problems of extracting meaning from that data and rapidly altering the robot's path in response. A vision sensor's raw output data rate, for example, is several orders of magnitude greater than that of a force sensor for the same sample rate. Nonetheless there is a rapidly growing body of literature dealing with visual servoing, though dynamic performance or bandwidth reported to date is substantially less than could be expected given the video sample rate. Most research seems to have concentrated on the computer vision part of the problem, with a simple controller sufficiently detuned to ensure stability. Effects such as tracking lag and tendency toward instability have

²Pose is the 3D position and orientation.

  • 4 Intr oduction

Figure 1.1: General structure of hierarchical model-based robot and vision system. The dashed line shows the 'short-circuited' information flow in a visual servo system. (The figure arranges blocks for pixel manipulation, feature extraction, joint control, trajectory generation, object motion planning, world model and task-level reasoning in a hierarchy spanning perception and reaction, with abstraction increasing and bandwidth decreasing toward the top.)

been noted almost in passing. It is exactly these issues, their fundamental causes and methods of overcoming them, that are the principal focus of this book.

Another way of considering the difference between conventional look-then-move and visual servo systems is depicted in Figure 1.1. The control structure is hierarchical, with higher levels corresponding to more abstract data representation and lower bandwidth. The highest level is capable of reasoning about the task, given a model of the environment, and a look-then-move approach is used. Firstly, the target location and grasp sites are determined from calibrated stereo vision or laser rangefinder images, and then a sequence of moves is planned and executed. Vision sensors have tended to be used in this fashion because of the richness of the data they can produce about the world, in contrast to an encoder or limit switch which is generally dealt with at the lowest level of the hierarchy. Visual servoing can be considered as a 'low level' shortcut through the hierarchy, characterized by high bandwidth but moderate image processing (well short of full scene interpretation). In biological terms this could be considered as reactive or reflexive behaviour.

However not all 'reactive' vision-based systems are visual servo systems. Andersson's well known ping-pong playing robot [17], although fast, is based on a real-time expert system for robot path planning using ball trajectory estimation and considerable domain knowledge. It is a highly optimized version of the general architecture


shown in Figure 1.1.

1.1.1 Related disciplines

Visual servoing is the fusion of results from many elemental disciplines including high-speed image processing, kinematics, dynamics, control theory, and real-time computing. Visual servoing also has much in common with a number of other active research areas such as active computer vision [9, 26], which proposes that a set of simple visual behaviours can accomplish tasks through action, such as controlling attention or gaze [51]. The fundamental tenet of active vision is not to interpret the scene and then model it, but rather to direct attention to that part of the scene relevant to the task at hand. If the system wishes to learn something of the world, rather than consult a model, it should consult the world by directing the sensor. The benefits of an active robot-mounted camera include the ability to avoid occlusion, resolve ambiguity and increase accuracy. Researchers in this area have built robotic 'heads' [157, 230, 270] with which to experiment with perception and gaze control strategies. Such research is generally motivated by neuro-physiological models and makes extensive use of nomenclature from that discipline. The scope of that research includes visual servoing amongst a broad range of topics including 'open-loop' or saccadic eye motion, stereo perception, vergence control and control of attention.

Literature related to structure from motion is also relevant to visual servoing. Structure from motion attempts to infer the 3D structure and the relative motion between object and camera from a sequence of images. In robotics however, we generally have considerable a priori knowledge of the target, and the spatial relationship between feature points is known. Aggarwal [2] provides a comprehensive review of this active field.

1.2 Structure of the book

Visual servoing occupies a niche somewhere between computer vision and robotics research. It draws strongly on techniques from both areas including image processing, feature extraction, control theory, robot kinematics and dynamics. Since the scope is necessarily broad, Chapters 2–4 present those aspects of robotics, image formation and computer vision respectively that are relevant to development of the central topic. These chapters also develop, through analysis and experimentation, detailed models of the robot and vision system used in the experimental work. They are the foundations upon which the later chapters are built.

Chapter 2 presents a detailed model of the Puma 560 robot used in this work that includes the motor, friction, current, velocity and position control loops, as well as the more traditional rigid-body dynamics. Some conclusions are drawn regarding the


significance of various dynamic effects, and the fundamental performance limiting factors of this robot are identified and quantified.

Image formation is covered in Chapter 3 with topics including lighting, image formation, perspective, CCD sensors, image distortion and noise, video formats and image digitization. Chapter 4 discusses relevant aspects of computer vision, building upon the previous chapter, with topics including image segmentation, feature extraction, feature accuracy, and camera calibration. The material for Chapters 3 and 4 has been condensed from a diverse literature spanning photography, sensitometry, video technology, sensor technology, illumination, photometry and photogrammetry.

A comprehensive review of prior work in the field of visual servoing is given in Chapter 5. Visual servo kinematics are discussed systematically using Weiss's taxonomy [273] of image-based and position-based visual servoing.

Chapter 6 introduces the experimental facility and describes experiments with a single-DOF visual feedback controller. This is used to develop and verify dynamic models of the visual servo system. Chapter 7 then formulates the visual servoing task as a feedback control problem and introduces performance metrics. This allows the comparison of compensators designed using a variety of techniques such as PID, pole-placement, Smith's method and LQG. Feedback controllers are shown to have a number of limitations, and feedforward control is introduced as a means of overcoming these. Feedforward control is shown to offer markedly improved tracking performance as well as great robustness to parameter variation.

Chapter 8 extends those control techniques and investigates visual end-point damping and 3-DOF translational manipulator control. Conclusions and suggestions for further work are given in Chapter 9.

The appendices contain a glossary of terms and abbreviations and some additional supporting material. In the interests of space the more detailed supporting material has been relegated to a virtual appendix which is accessible through the World Wide Web. Information available via the web includes many of the software tools and models described within the book, cited technical reports, links to other visual servoing resources on the internet, and errata. Ordering details for the accompanying video tape compilation are also available. Details on accessing this information are given in Appendix B.

Chapter 2

    Modelling the robot

This chapter introduces a number of topics in robotics that will be called upon in later chapters. It also develops models for the particular robot used in this work — a Puma 560 with a Mark 1 controller. Despite the ubiquity of this robot, detailed dynamic models and parameters are difficult to come by. Those models that do exist are incomplete, expressed in different coordinate systems, and inconsistent. Much emphasis in the literature is on rigid-body dynamics and model-based control, though the issue of model parameters is not well covered. This work also addresses the significance of various dynamic effects, in particular contrasting the classic rigid-body effects with those of non-linear friction and voltage saturation. Although the Puma robot is now quite old, and by modern standards has poor performance, this could be considered to be an 'implementation issue'. Structurally its mechanical design (revolute structure, geared servo motor drive) and controller (nested control loops, independent axis control) remain typical of many current industrial robots.

    2.1 Manipulator kinematics

Kinematics is the study of motion without regard to the forces which cause it. Within kinematics one studies the position, velocity and acceleration, and all higher order derivatives of the position variables. The kinematics of manipulators involves the study of the geometric and time-based properties of the motion, and in particular how the various links move with respect to one another and with time.

Typical robots are serial-link manipulators comprising a set of bodies, called links, in a chain, connected by joints¹. Each joint has one degree of freedom, either translational or rotational. For a manipulator with n joints numbered from 1 to n, there are

¹Parallel link and serial/parallel hybrid structures are possible, though much less common in industrial manipulators. They will not be discussed in this book.



Figure 2.1: Different forms of Denavit-Hartenberg notation: (a) standard form; (b) modified form. (Each panel shows links i−1 and i, joints i−1, i and i+1, the frames Ti−1 and Ti with their axes Xi, Yi, Zi, and the link lengths ai−1 and ai.)

n + 1 links, numbered from 0 to n. Link 0 is the base of the manipulator, generally fixed, and link n carries the end-effector. Joint i connects links i and i−1.

A link may be considered as a rigid body defining the relationship between two neighbouring joint axes. A link can be specified by two numbers, the link length and link twist, which define the relative location of the two axes in space. The link parameters for the first and last links are meaningless, but are arbitrarily chosen to be 0. Joints may be described by two parameters. The link offset is the distance from one link to the next along the axis of the joint. The joint angle is the rotation of one link with respect to the next about the joint axis.

To facilitate describing the location of each link we affix a coordinate frame to it — frame i is attached to link i. Denavit and Hartenberg [109] proposed a matrix


method of systematically assigning coordinate systems to each link of an articulated chain. The axis of revolute joint i is aligned with zi−1. The xi−1 axis is directed along the common normal from zi−1 to zi and for intersecting axes is parallel to zi−1 × zi. The link and joint parameters may be summarized as:

link length ai: the offset distance between the zi−1 and zi axes along the xi axis;
link twist αi: the angle from the zi−1 axis to the zi axis about the xi axis;
link offset di: the distance from the origin of frame i−1 to the xi axis along the zi−1 axis;
joint angle θi: the angle between the xi−1 and xi axes about the zi−1 axis.

For a revolute joint θi is the joint variable and di is constant, while for a prismatic joint di is variable and θi is constant. In many of the formulations that follow we use generalized coordinates, qi, where

    qi = θi  for a revolute joint
    qi = di  for a prismatic joint

and generalized forces

    Qi = τi  for a revolute joint
    Qi = fi  for a prismatic joint

The Denavit-Hartenberg (DH) representation results in a 4x4 homogeneous transformation matrix

    i−1Ai = | cosθi   −sinθi cosαi    sinθi sinαi    ai cosθi |
            | sinθi    cosθi cosαi   −cosθi sinαi    ai sinθi |
            |   0        sinαi          cosαi           di    |
            |   0          0              0              1    |        (2.1)

representing each link's coordinate frame with respect to the previous link's coordinate system; that is

    0Ti = 0Ti−1 i−1Ai        (2.2)

where 0Ti is the homogeneous transformation describing the pose of coordinate frame i with respect to the world coordinate system 0.

Two differing methodologies have been established for assigning coordinate frames, each of which allows some freedom in the actual coordinate frame attachment:

1. Frame i has its origin along the axis of joint i+1, as described by Paul [199] and Lee [96, 166].

  • 10 Modelling the robot

2. Frame i has its origin along the axis of joint i, and is frequently referred to as 'modified Denavit-Hartenberg' (MDH) form [69]. This form is commonly used in literature dealing with manipulator dynamics. The link transform matrix for this form differs from (2.1).

Figure 2.1 shows the notational differences between the two forms. Note that ai is always the length of link i, but is the displacement between the origins of frame i and frame i+1 in one convention, and frame i−1 and frame i in the other². This book will consistently use the standard Denavit and Hartenberg methodology³.

2.1.1 Forward and inverse kinematics

For an n-axis rigid-link manipulator, the forward kinematic solution gives the coordinate frame, or pose, of the last link. It is obtained by repeated application of (2.2)

    0Tn = 0A1 1A2 ⋯ n−1An        (2.3)
        = K(q)                   (2.4)

which is the product of the coordinate frame transform matrices for each link. The pose of the end-effector has 6 degrees of freedom in Cartesian space, 3 in translation and 3 in rotation, so robot manipulators commonly have 6 joints or degrees of freedom to allow arbitrary end-effector pose. The overall manipulator transform 0Tn is frequently written as Tn, or T6 for a 6-axis robot. The forward kinematic solution may be computed for any manipulator, irrespective of the number of joints or kinematic structure.
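The repeated product (2.3)–(2.4) mechanizes naturally as a loop over link transforms. A minimal sketch in Python/NumPy (the book's own tools are MATLAB-based; the function names and the two-link planar arm here are illustrative only, not the Puma of later sections):

```python
import numpy as np

def dh_link(theta, d, a, alpha):
    """Standard DH link transform, equation (2.1)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def fkine(q, dh_params):
    """Forward kinematic solution K(q), equations (2.3)-(2.4).

    dh_params is a list of (d, a, alpha) tuples for revolute links;
    q supplies the joint angles theta_i in radians.
    """
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(q, dh_params):
        T = T @ dh_link(theta, d, a, alpha)   # 0T_i = 0T_{i-1} (i-1)A_i
    return T

# Illustrative two-link planar arm: unit link lengths, zero twist and offset.
links = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0)]
T = fkine([np.pi / 2, -np.pi / 2], links)
```

With q = (90°, −90°) the first link points along the base Y-axis and the second folds back parallel to X, so the tip lies at (1, 1, 0) with the end-effector frame aligned with the base, which is easily checked by hand.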

Of more use in manipulator path planning is the inverse kinematic solution

    q = K⁻¹(T)        (2.5)

which gives the joint coordinates required to reach the specified end-effector position. In general this solution is non-unique, and for some classes of manipulator no closed-form solution exists. If the manipulator has more than 6 joints it is said to be redundant and the solution for joint coordinates is under-determined. If no solution can be determined for a particular manipulator pose that configuration is said to be singular. The singularity may be due to an alignment of axes reducing the effective degrees of freedom, or the point T being out of reach.

The manipulator Jacobian matrix, Jθ, transforms velocities in joint space to velocities of the end-effector in Cartesian space. For an n-axis manipulator the end-effector

²It is customary when tabulating the 'modified' kinematic parameters of manipulators to list ai−1 and αi−1 rather than ai and αi.

³It may be argued that the MDH convention is more 'logical', but for historical reasons this work uses the standard DH convention.


Cartesian velocity is

    0ẋn = 0Jθ q̇          (2.6)
    tnẋn = tnJθ q̇        (2.7)

in base or end-effector coordinates respectively, and where ẋ is the Cartesian velocity represented by a 6-vector [199]. For a 6-axis manipulator the Jacobian is square and provided it is not singular can be inverted to solve for joint rates in terms of end-effector Cartesian rates. The Jacobian will not be invertible at a kinematic singularity, and in practice will be poorly conditioned in the vicinity of the singularity, resulting in high joint rates. A control scheme based on Cartesian rate control

    q̇ = 0Jθ⁻¹ 0ẋn        (2.8)

was proposed by Whitney [277] and is known as resolved rate motion control. For two frames A and B related by ATB = [n o a p] the Cartesian velocity in frame A may be transformed to frame B by

    Bẋ = BJA Aẋ        (2.9)

where the Jacobian is given by Paul [200] as

    BJA = f(ATB) = | [n o a]T    [p×n  p×o  p×a]T |
                   |     0            [n o a]T    |        (2.10)
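Since (2.10) depends only on the elements n, o, a and p of ATB, it can be evaluated directly from the 4x4 homogeneous transform. A hedged Python/NumPy sketch (the helper name is mine, not from the book):

```python
import numpy as np

def frame_jacobian(T_AB):
    """Velocity transform BJ_A of equation (2.10), after Paul.

    T_AB is the 4x4 homogeneous transform [n o a p] of frame B with
    respect to frame A; the result maps a 6-vector velocity (linear
    then angular) expressed in A to the same velocity expressed in B.
    """
    R = T_AB[:3, :3]                      # columns are n, o, a
    p = T_AB[:3, 3]
    n, o, a = R[:, 0], R[:, 1], R[:, 2]
    # [p x n, p x o, p x a]^T : cross products as columns, then transpose
    pr = np.column_stack([np.cross(p, n),
                          np.cross(p, o),
                          np.cross(p, a)]).T
    J = np.zeros((6, 6))
    J[:3, :3] = R.T                       # [n o a]^T
    J[:3, 3:] = pr
    J[3:, 3:] = R.T
    return J
```

As a sanity check, the identity transform gives the 6x6 identity, and for a frame B displaced along A's rotation axis a pure rotation about that axis produces no linear velocity in B.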

    2.1.2 Accuracy and repeatability

In industrial manipulators the position of the tool tip is inferred from the measured joint coordinates and assumed kinematic structure of the robot

    T̂6 = K̂(qmeas)

Errors will be introduced if the assumed kinematic structure differs from that of the actual manipulator, that is, K̂ ≠ K. Such errors may be due to manufacturing tolerances in link length or link deformation due to load. Assumptions are also frequently made about parallel or orthogonal axes, that is, link twist angles are exactly 0° or exactly 90°, since this simplifies the link transform matrix (2.1) by introducing elements that are either 0 or 1. In reality, due to tolerances in manufacture, these assumptions are not valid and lead to reduced robot accuracy.

Accuracy refers to the error between the measured and commanded pose of the robot. For a robot to move to a commanded position, the inverse kinematics must be solved for the required joint coordinates

    q = \hat{K}^{-1}(T)


    Joint   αi     ai      di      θmin    θmax
      1      90      0        0     -180    180
      2       0    431.8      0     -170    165
      3     -90     20.3    125.4   -160    150
      4      90      0      431.8   -180    180
      5     -90      0        0      -10    100
      6       0      0       56.25  -180    180

    Table 2.1: Kinematic parameters and joint limits for the Puma 560. All angles in degrees, lengths in mm.

While the servo system may move very accurately to the computed joint coordinates, discrepancies between the kinematic model assumed by the controller and the actual robot can cause significant positioning errors at the tool tip. Accuracy typically varies over the workspace and may be improved by calibration procedures which seek to identify the kinematic parameters for the particular robot.
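The effect of kinematic model error on accuracy can be illustrated with a toy planar 2-link forward-kinematics sketch; all numbers here are invented (loosely Puma-like) for illustration:

```python
import numpy as np

def fkine(q, a1, a2):
    """Forward kinematics of a planar 2-link arm: tool-tip (x, y) position."""
    return np.array([a1*np.cos(q[0]) + a2*np.cos(q[0] + q[1]),
                     a1*np.sin(q[0]) + a2*np.sin(q[0] + q[1])])

q = np.array([0.5, -0.8])
actual  = fkine(q, a1=0.4318, a2=0.4331)   # 'true' link lengths (m), invented
nominal = fkine(q, a1=0.4318, a2=0.4320)   # controller's assumed model
# A 1.1 mm link-length error maps directly into tool-tip positioning error
print(np.linalg.norm(actual - nominal))
```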

Repeatability refers to the error with which a robot returns to a previously taught or commanded point. In general repeatability is better than accuracy, and is related to joint servo performance. However, to exploit this capability points must be manually taught by 'jogging' the robot, which is time-consuming and takes the robot out of production.

The AdeptOne manipulator, for example, has a quoted repeatability of 15 µm but an accuracy of 76 µm. The comparatively low accuracy, and the difficulty in exploiting repeatability, are two of the justifications for visual servoing discussed earlier in Section 1.1.

    2.1.3 Manipulator kinematic parameters

As already discussed, the kinematic parameters of a robot are important in computing the forward and inverse kinematics of the manipulator. Unfortunately, as shown in Figure 2.1, there are two conventions for describing manipulator kinematics. This book will consistently use the standard Denavit and Hartenberg methodology, and the particular frame assignments for the Puma 560 are as per Paul and Zhang [202]. A schematic of the robot and the axis conventions used is shown in Figure 2.2. For zero joint coordinates the arm is in a right-handed configuration, with the upper arm horizontal along the X-axis and the lower arm vertical. The upright or READY position4 is defined by q = (0, 90°, -90°, 0, 0, 0). Others such as Lee [166] consider the zero-angle pose as being left-handed.

4The Unimation VAL language defines the so-called 'READY position' where the arm is fully extended and upright.


Figure 2.2: Details of coordinate frames used for the Puma 560, shown here in its zero angle pose (drawing by Les Ewbank).

The non-zero link offsets and lengths for the Puma 560, which may be measured directly, are:

    the distance between the shoulder and elbow axes along the upper-arm link, a2;

    the distance from the elbow axis to the center of the spherical wrist joint, along the lower arm, d4;

    the offset between the axes of joint 4 and the elbow, a3;

    the offset between the waist and joint 4 axes, d3.

The kinematic constants for the Puma 560 are given in Table 2.1. These parameters are a consensus [60,61] derived from several sources [20,166,202,204,246]. There is some variation in the link lengths and offsets reported by various authors. Comparison of reports is complicated by the variety of different coordinate systems used. Some variations in parameters could conceivably reflect changes to the design or manufacture of the robot with time, while others are taken to be errors. Lee alone gives a value for d6, which is the distance from the wrist center to the surface of the mounting flange.

The kinematic parameters of a robot are important not only for forward and inverse kinematics as already discussed, but are also required in the calculation of manipulator dynamics as discussed in the next section. The kinematic parameters enter the dynamic equations of motion via the link transformation matrices of (2.1).

    2.2 Manipulator rigid-body dynamics

Manipulator dynamics is concerned with the equations of motion, the way in which the manipulator moves in response to torques applied by the actuators, or external forces. The history and mathematics of the dynamics of serial-link manipulators are well covered by Paul [199] and Hollerbach [119]. There are two problems related to manipulator dynamics that are important to solve:

    inverse dynamics, in which the manipulator's equations of motion are solved for given motion to determine the generalized forces, discussed further in Section 2.5; and

    direct dynamics, in which the equations of motion are integrated to determine the generalized coordinate response to applied generalized forces, discussed further in Section 2.2.3.

The equations of motion for an n-axis manipulator are given by

    Q = M(q)\ddot{q} + C(q, \dot{q})\dot{q} + F(\dot{q}) + G(q)    (2.11)

where

    q is the vector of generalized joint coordinates describing the pose of the manipulator;

    q̇ is the vector of joint velocities and q̈ is the vector of joint accelerations;

    M is the symmetric joint-space inertia matrix, or manipulator inertia tensor;

    C describes Coriolis and centripetal effects; centripetal torques are proportional to \dot{q}_i^2, while the Coriolis torques are proportional to \dot{q}_i \dot{q}_j;

    F describes viscous and Coulomb friction, and is not generally considered part of the rigid-body dynamics;

    G is the gravity loading;

    Q is the vector of generalized forces associated with the generalized coordinates q.
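For a single link, equation (2.11) reduces to scalars and can be evaluated directly. A sketch with invented parameters, lumping viscous plus Coulomb friction into F; the Coriolis/centripetal term C vanishes for one link:

```python
import numpy as np

# Invented single-link parameters: inertia, viscous coeff., Coulomb torque,
# mass, COM radius, gravitational acceleration
M, B, tau_c, m, r, g = 0.5, 0.1, 0.2, 2.0, 0.3, 9.81

def joint_torque(q, qd, qdd):
    """Scalar form of Q = M qdd + C + F(qd) + G(q); C = 0 for one link."""
    F = B*qd + tau_c*np.sign(qd)   # viscous + Coulomb friction
    G = m*g*r*np.cos(q)            # gravity loading about the joint axis
    return M*qdd + F + G

print(joint_torque(q=0.0, qd=0.0, qdd=0.0))  # pure gravity load, m*g*r
```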

The equations may be derived via a number of techniques, including Lagrangian (energy based), Newton-Euler, d'Alembert [96,167] or Kane's [143] method. The earliest reported work was by Uicker [254] and Kahn [140] using the Lagrangian approach. Due to the enormous computational cost, O(n^4), of this approach it was not possible to compute manipulator torque for real-time control. To achieve real-time performance many approaches were suggested, including table lookup [209] and approximation [29,203]. The most common approximation was to ignore the velocity-dependent term C, since accurate positioning and high speed motion are exclusive in typical robot applications. Others have used the fact that the coefficients of the dynamic equations do not change rapidly, since they are a function of joint angle, and thus may be computed at a fraction of the rate at which the equations are evaluated [149,201,228].

Orin et al. [195] proposed an alternative approach based on the Newton-Euler (NE) equations of rigid-body motion applied to each link. Armstrong [23] then showed how recursion might be applied, resulting in O(n) complexity. Luh et al. [177] provided a recursive formulation of the Newton-Euler equations with linear and angular velocities referred to link coordinate frames. They suggested a time improvement from 7.9 s for the Lagrangian formulation to 4.5 ms, and thus it became practical to implement 'on-line'. Hollerbach [120] showed how recursion could be applied to the Lagrangian form, and reduced the computation to within a factor of 3 of the recursive NE. Silver [234] showed the equivalence of the recursive Lagrangian and Newton-Euler forms, and that the difference in efficiency is due to the representation of angular velocity.

“Kane's equations” [143] provide another methodology for deriving the equations of motion for a specific manipulator. A number of 'Z' variables are introduced which, while not necessarily of physical significance, lead to a dynamics formulation with low computational burden. Wampler [267] discusses the computational costs of Kane's method in some detail.

The NE and Lagrange forms can be written generally in terms of the Denavit-Hartenberg parameters; however the specific formulations, such as Kane's, can have lower computational cost for the specific manipulator. Whilst the recursive forms are computationally more efficient, the non-recursive forms compute the individual dynamic terms (M, C and G) directly.


    Method                 Multiplications                             Additions                            For N=6: Mul    Add
    Lagrangian [120]       32½n⁴ + 86 5/12 n³ + 171¼n² + 53⅓n − 128    25n⁴ + 66⅓n³ + 129½n² + 42⅓n − 96    66,271          51,548
    Recursive NE [120]     150n − 48                                   131n − 48                            852             738
    Kane [143]                                                                                              646             394
    Simplified RNE [189]                                                                                    224             174

    Table 2.2: Comparison of computational costs for inverse dynamics from various sources. The last entry is achieved by symbolic simplification using the software package ARM.

A comparison of computation costs is given in Table 2.2. There are considerable discrepancies between sources [96,120,143,166,265] on the computational burdens of these different approaches. Conceivable sources of discrepancy include whether or not computation of link transform matrices is included, and whether the result is general, or specific to a particular manipulator.

2.2.1 Recursive Newton-Euler formulation

The recursive Newton-Euler (RNE) formulation [177] computes the inverse manipulator dynamics, that is, the joint torques required for a given set of joint coordinates, velocities and accelerations. The forward recursion propagates kinematic information — such as angular velocities, angular accelerations and linear accelerations — from the base reference frame (inertial frame) to the end-effector. The backward recursion propagates the forces and moments exerted on each link from the end-effector of the manipulator to the base reference frame5. Figure 2.3 shows the variables involved in the computation for one link.

The notation of Hollerbach [120] and Walker and Orin [265] will be used, in which the left superscript indicates the reference coordinate frame for the variable. The notation of Luh et al. [177] and later Lee [96,166] is considerably less clear.

5It should be noted that using MDH notation, with its different axis assignment conventions, the Newton-Euler formulation is expressed differently [69].

    Figure 2.3: Notation used for inverse dynamics, based on standard Denavit-Hartenberg notation.

Outward recursion, 1 ≤ i ≤ n.

If axis i+1 is rotational

    {}^{i+1}\omega_{i+1} = {}^{i+1}R_i \left( {}^{i}\omega_i + z_0 \dot{q}_{i+1} \right)    (2.12)

    {}^{i+1}\dot{\omega}_{i+1} = {}^{i+1}R_i \left( {}^{i}\dot{\omega}_i + z_0 \ddot{q}_{i+1} + {}^{i}\omega_i \times z_0 \dot{q}_{i+1} \right)    (2.13)

    {}^{i+1}v_{i+1} = {}^{i+1}\omega_{i+1} \times {}^{i+1}p^*_{i+1} + {}^{i+1}R_i \, {}^{i}v_i    (2.14)

    {}^{i+1}\dot{v}_{i+1} = {}^{i+1}\dot{\omega}_{i+1} \times {}^{i+1}p^*_{i+1} + {}^{i+1}\omega_{i+1} \times \left( {}^{i+1}\omega_{i+1} \times {}^{i+1}p^*_{i+1} \right) + {}^{i+1}R_i \, {}^{i}\dot{v}_i    (2.15)

If axis i+1 is translational

    {}^{i+1}\omega_{i+1} = {}^{i+1}R_i \, {}^{i}\omega_i    (2.16)

    {}^{i+1}\dot{\omega}_{i+1} = {}^{i+1}R_i \, {}^{i}\dot{\omega}_i    (2.17)

    {}^{i+1}v_{i+1} = {}^{i+1}R_i \left( z_0 \dot{q}_{i+1} + {}^{i}v_i \right) + {}^{i+1}\omega_{i+1} \times {}^{i+1}p^*_{i+1}    (2.18)

    {}^{i+1}\dot{v}_{i+1} = {}^{i+1}R_i \left( z_0 \ddot{q}_{i+1} + {}^{i}\dot{v}_i \right) + {}^{i+1}\dot{\omega}_{i+1} \times {}^{i+1}p^*_{i+1} + 2 \, {}^{i+1}\omega_{i+1} \times \left( {}^{i+1}R_i \, z_0 \dot{q}_{i+1} \right) + {}^{i+1}\omega_{i+1} \times \left( {}^{i+1}\omega_{i+1} \times {}^{i+1}p^*_{i+1} \right)    (2.19)

and for each link

    {}^{i}\dot{\bar{v}}_i = {}^{i}\dot{\omega}_i \times s_i + {}^{i}\omega_i \times \left( {}^{i}\omega_i \times s_i \right) + {}^{i}\dot{v}_i    (2.20)

    {}^{i}F_i = m_i \, {}^{i}\dot{\bar{v}}_i    (2.21)

    {}^{i}N_i = J_i \, {}^{i}\dot{\omega}_i + {}^{i}\omega_i \times \left( J_i \, {}^{i}\omega_i \right)    (2.22)

Inward recursion, n ≥ i ≥ 1.

    {}^{i}f_i = {}^{i}R_{i+1} \, {}^{i+1}f_{i+1} + {}^{i}F_i    (2.23)

    {}^{i}n_i = {}^{i}R_{i+1} \left[ {}^{i+1}n_{i+1} + \left( {}^{i+1}R_i \, {}^{i}p^*_i \right) \times {}^{i+1}f_{i+1} \right] + \left( {}^{i}p^*_i + s_i \right) \times {}^{i}F_i + {}^{i}N_i    (2.24)

    Q_i = \begin{cases} \left( {}^{i}n_i \right)^T \left( {}^{i}R_{i-1} z_0 \right) & \text{if link } i \text{ is rotational} \\ \left( {}^{i}f_i \right)^T \left( {}^{i}R_{i-1} z_0 \right) & \text{if link } i \text{ is translational} \end{cases}    (2.25)

where

    i is the link index, in the range 1 to n;

    J_i is the moment of inertia of link i about its COM;

    s_i is the position vector of the COM of link i with respect to frame i;

    \omega_i is the angular velocity of link i;

    \dot{\omega}_i is the angular acceleration of link i;

    v_i is the linear velocity of frame i;

    \dot{v}_i is the linear acceleration of frame i;

    \bar{v}_i is the linear velocity of the COM of link i;

    \dot{\bar{v}}_i is the linear acceleration of the COM of link i;

    n_i is the moment exerted on link i by link i−1;

    f_i is the force exerted on link i by link i−1;

    N_i is the total moment at the COM of link i;

    F_i is the total force at the COM of link i;

    Q_i is the force or torque exerted by the actuator at joint i;

    {}^{i-1}R_i is the orthonormal rotation matrix defining the orientation of frame i with respect to frame i−1. It is the upper 3×3 portion of the link transform matrix given in (2.1):

    {}^{i-1}R_i = \begin{bmatrix} \cos\theta_i & -\cos\alpha_i \sin\theta_i & \sin\alpha_i \sin\theta_i \\ \sin\theta_i & \cos\alpha_i \cos\theta_i & -\sin\alpha_i \cos\theta_i \\ 0 & \sin\alpha_i & \cos\alpha_i \end{bmatrix}    (2.26)

    {}^{i}R_{i-1} = \left( {}^{i-1}R_i \right)^{-1} = \left( {}^{i-1}R_i \right)^T    (2.27)

    {}^{i}p^*_i is the displacement from the origin of frame i−1 to frame i with respect to frame i:

    {}^{i}p^*_i = \begin{bmatrix} a_i \\ d_i \sin\alpha_i \\ d_i \cos\alpha_i \end{bmatrix}    (2.28)

    It is the negative of the translational part of \left( {}^{i-1}A_i \right)^{-1};

    z_0 is a unit vector in the Z direction, z_0 = [0\ 0\ 1]^T.

Note that the COM linear velocity given by equation (2.14) or (2.18) does not need to be computed, since no other expression depends upon it. Boundary conditions are used to introduce the effect of gravity by setting the acceleration of the base link

    \dot{v}_0 = -g    (2.29)

where g is the gravity vector in the reference coordinate frame, generally acting in the negative Z direction, downward. Base velocity is generally zero:

    v_0 = 0    (2.30)

    \omega_0 = 0    (2.31)

    \dot{\omega}_0 = 0    (2.32)
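The complete recursion is compact in code. Below is a minimal NumPy sketch for an all-revolute arm, following the structure of the equations above; the planar 2-link parameters used in the check at the end are invented:

```python
import numpy as np

def rne(dh, m, s, J, q, qd, qdd, gravity):
    """Inverse dynamics by recursive Newton-Euler for an all-revolute arm.
    dh: list of standard-DH (alpha, a, d); s[i]: link COM w.r.t. frame i;
    J[i]: 3x3 link inertia about the COM; gravity: e.g. [0, -9.81, 0]."""
    n = len(dh)
    z0 = np.array([0.0, 0.0, 1.0])
    R, pstar = [], []
    for (alpha, a, d), qi in zip(dh, q):
        ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(qi), np.sin(qi)
        R.append(np.array([[ct, -ca*st,  sa*st],     # link rotation matrix
                           [st,  ca*ct, -sa*ct],
                           [0.0,    sa,     ca]]))
        pstar.append(np.array([a, d*sa, d*ca]))       # frame displacement p*
    # Outward recursion: velocities and accelerations, base to tip
    w, wd = np.zeros(3), np.zeros(3)
    vd = -np.asarray(gravity, float)                  # gravity via base accel
    F, N = [], []
    for i in range(n):
        Rt = R[i].T
        wd = Rt @ (wd + z0*qdd[i] + np.cross(w, z0*qd[i]))
        w  = Rt @ (w + z0*qd[i])
        vd = np.cross(wd, pstar[i]) + np.cross(w, np.cross(w, pstar[i])) + Rt @ vd
        vdc = np.cross(wd, s[i]) + np.cross(w, np.cross(w, s[i])) + vd  # COM accel
        F.append(m[i]*vdc)                            # total force at COM
        N.append(J[i] @ wd + np.cross(w, J[i] @ w))   # total moment at COM
    # Inward recursion: forces and moments, tip to base
    f, nn, Q = np.zeros(3), np.zeros(3), np.zeros(n)
    for i in range(n-1, -1, -1):
        if i < n-1:
            f_in = R[i+1] @ f
            nn = R[i+1] @ nn + np.cross(pstar[i], f_in)
        else:
            f_in, nn = np.zeros(3), np.zeros(3)
        nn = nn + np.cross(pstar[i] + s[i], F[i]) + N[i]
        f = f_in + F[i]
        Q[i] = nn @ (R[i].T @ z0)                     # project onto joint axis
    return Q

# Planar 2-link check (invented parameters): at rest the result is the gravity load
dh = [(0.0, 1.0, 0.0), (0.0, 0.8, 0.0)]
m_ = [2.0, 1.0]
s_ = [np.array([-0.5, 0, 0]), np.array([-0.4, 0, 0])]  # COM mid-link
J_ = [np.zeros((3, 3)), np.zeros((3, 3))]              # point-mass links
Q = rne(dh, m_, s_, J_, [0, 0], [0, 0], [0, 0], gravity=[0, -9.81, 0])
print(Q)
```

At zero configuration and zero motion the returned torques reduce to the familiar planar gravity terms, which gives a quick sanity check of the recursion.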

2.2.2 Symbolic manipulation

The RNE algorithm is straightforward to program and efficient to execute in the general case, but considerable savings can be made for the specific manipulator case. The general form inevitably involves many additions with zero and multiplications with 0, 1 or −1 in the various matrix and vector operations. The zeros and ones are due to the trigonometric terms6 in the orthonormal link transform matrices (2.1), as well as zero-valued kinematic and inertial parameters. Symbolic simplification gathers common factors and eliminates operations with zero, reducing the run-time computational load at the expense of a once-only off-line symbolic computation. Symbolic manipulation can also be used to gain insight into the effects and magnitudes of various components of the equations of motion.

Early work in symbolic manipulation for manipulators was performed with special tools, generally written in Fortran or LISP, such as ARM [189], DYMIR [46] and EMDEG [40]. Later development of general purpose computer algebra tools such as Macsyma, REDUCE and MAPLE has made this capability more widely available.

In this work a general purpose symbolic algebra package, MAPLE [47], has been used to compute the torque expressions in symbolic form via a straightforward implementation of the RNE algorithm. Compared to symbolic computation using the Lagrangian approach, computation of the torque expressions is very efficient. These expressions are in sum-of-product form, and can be extremely long. For example the expression for the torque on the first axis of a Puma 560 is

    \tau_1 = C_{23} I_{yz3} \ddot{q}_2 + s_{x1}^2 m_1 \ddot{q}_1 + S_{23} I_{yz3} \dot{q}_{23}^2 + I_{yy2} C_2^2 \ddot{q}_1 + C_2 I_{yz2} \ddot{q}_2 + \cdots

and continues on for over 16,000 terms. Such expressions are of little value for on-line control, but are appropriate for further symbolic manipulation to analyze and interpret the significance of various terms. For example the symbolic elements of the M, C and G terms can be readily extracted from the sum-of-product form, overcoming what is frequently cited as an advantage of the Lagrangian formulation: that the individual terms are computed directly.
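The extraction of M, C and G from a sum-of-products form can be mimicked at the term level: classify each term by whether it contains a joint acceleration or a velocity product. A schematic stand-in, with invented terms rather than the actual MAPLE output:

```python
# Each term of a sum-of-products torque expression, tagged with the
# generalized-coordinate derivatives it contains (schematic, invented terms).
terms = [
    {"expr": "Iyy2*C2**2*qdd1", "has_qdd": True,  "has_qd": False},
    {"expr": "S23*Iyz3*qd2**2", "has_qdd": False, "has_qd": True},
    {"expr": "m2*g*sx2*C2",     "has_qdd": False, "has_qd": False},
]

def partition(terms):
    """Split terms into inertia (M), velocity (C) and gravity (G) groups."""
    M = [t["expr"] for t in terms if t["has_qdd"]]
    C = [t["expr"] for t in terms if t["has_qd"] and not t["has_qdd"]]
    G = [t["expr"] for t in terms if not (t["has_qd"] or t["has_qdd"])]
    return M, C, G

M, C, G = partition(terms)
print(G)  # gravity terms contain neither qd nor qdd
```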

Evaluating symbolic expressions in this simple-minded way results in a loss of the factorization inherent in the RNE procedure. However, with appropriate factorization during symbolic expansion, a computationally efficient form for run-time torque computation can be generated; see Section 2.6.2. MAPLE can then produce the 'C' language code corresponding to the torque expressions, for example automatically generating code for computed-torque control of a specific manipulator. MAPLE is also capable of generating LaTeX style equations for inclusion in documents.

6Common manipulators have link twists of 0°, 90° or −90°, leading to zero or unity trigonometric results.


As discussed previously, the dynamic equations are extremely complex, and thus difficult to verify, but a number of checks have been made. The equations of motion of a simple two-link example in Fu et al. [96] were computed and agreed with the results given. For the Puma 560 this is prohibitive, but some simple checks can still be performed. The gravity term is readily extracted and is simple enough to verify manually. The manipulator inertia matrix is positive definite and its symmetry can be verified symbolically. A colleague [213] independently implemented the dynamic equations in MAPLE, using the Lagrangian approach. MAPLE was then used to compute the difference between the two sets of torque expressions, which after simplification was found to be zero.

    2.2.3 Forward dynamics

Equation (2.11) may be used to compute the so-called inverse dynamics, that is, actuator torque as a function of manipulator state, and is useful for on-line control. For simulation the direct, integral or forward dynamic formulation is required, giving joint motion in terms of input torques.

Walker and Orin [265] describe several methods for computing the forward dynamics, and all make use of an existing inverse dynamics solution. Using the RNE algorithm for inverse dynamics, the computational complexity of the forward dynamics using 'Method 1' is O(n^3) for an n-axis manipulator. Their other methods are increasingly more sophisticated and reduce the computational cost, though still O(n^3). Featherstone [89] has described the 'articulated-body method' for O(n) computation of forward dynamics; however for n < 9 it is more expensive than the approach of Walker and Orin. Another O(n) approach for forward dynamics has been described by Lathrop [160].
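The 'Method 1' idea, computing the bias torque with zero acceleration and then the columns of M with unit accelerations, works with any inverse-dynamics routine. A sketch with an invented 2-DOF inverse-dynamics function standing in for the RNE:

```python
import numpy as np

def forward_dynamics(idyn, q, qd, Q):
    """Walker-Orin style forward dynamics from an inverse-dynamics routine:
    build M(q) column by column, then solve M qdd = Q - bias."""
    n = len(q)
    bias = idyn(q, qd, np.zeros(n))        # C + F + G terms (qdd = 0)
    M = np.empty((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        M[:, i] = idyn(q, qd, e) - bias    # i-th column of the inertia matrix
    return np.linalg.solve(M, Q - bias)

# Invented 2-DOF inverse dynamics with a configuration-dependent inertia
def idyn(q, qd, qdd):
    M = np.array([[2.0 + np.cos(q[1]), 0.5], [0.5, 1.0]])
    bias = np.array([0.3*qd[0] + 4.0*np.cos(q[0]), 0.1*qd[1]])
    return M @ qdd + bias

q, qd = np.array([0.4, 1.0]), np.array([0.2, -0.1])
qdd_true = np.array([1.5, -0.7])
Q = idyn(q, qd, qdd_true)
print(forward_dynamics(idyn, q, qd, Q))  # recovers qdd_true
```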

    2.2.4 Rigid-body inertial parameters

Accurate model-based dynamic control of a manipulator requires knowledge of the rigid-body inertial parameters. Each link has ten independent inertial parameters:

    link mass, m_i;

    three first moments, which may be expressed as the COM location, s_i, with respect to some datum on the link, or as a moment S_i = m_i s_i;

    six second moments, which represent the inertia of the link about a given axis, typically through the COM. The second moments may be expressed in matrix or tensor form as

    J = \begin{bmatrix} J_{xx} & J_{xy} & J_{xz} \\ J_{xy} & J_{yy} & J_{yz} \\ J_{xz} & J_{yz} & J_{zz} \end{bmatrix}    (2.33)


    Parameter   Value
    m1          13.0
    m2          17.4
    m3           4.80
    m4           0.82
    m5           0.35
    m6           0.09

    Table 2.3: Link mass data (kg).

where the diagonal elements are the moments of inertia, and the off-diagonals are the products of inertia. Only six of these nine elements are unique: three moments and three products of inertia.

For any point in a rigid body there is one set of axes, known as the principal axes of inertia, for which the off-diagonal terms, or products, are zero. These axes are given by the eigenvectors of the inertia matrix (2.33), and the eigenvalues are the principal moments of inertia. Frequently the products of inertia of the robot links are zero due to symmetry.
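The principal moments and axes follow directly from an eigendecomposition of the inertia tensor (2.33); a small NumPy example with invented inertia values:

```python
import numpy as np

# Symmetric inertia tensor with a small product of inertia (invented values, kg m^2)
J = np.array([[0.13, 0.01, 0.0],
              [0.01, 0.52, 0.0],
              [0.0,  0.0,  0.54]])

# Eigenvalues are the principal moments; eigenvectors are the principal axes
moments, axes = np.linalg.eigh(J)
print(moments)               # principal moments of inertia, ascending
print(axes.T @ J @ axes)     # diagonal in the principal-axis frame
```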

A 6-axis manipulator rigid-body dynamic model thus entails 60 inertial parameters. There may be additional parameters per joint due to friction and motor armature inertia. Clearly, establishing numeric values for this number of parameters is a difficult task. Many parameters cannot be measured without dismantling the robot and performing careful experiments, though this approach was used by Armstrong et al. [20]. Most parameters could be derived from CAD models of the robots, but this information is often considered proprietary and not made available to researchers. The robot used in this work, the Puma 560, was designed in the late 1970s and probably predates widespread CAD usage. There is also a considerable literature regarding estimation of inertial parameters from online measurement of manipulator state and joint torques [130].

Tarn and Bejczy [245,247], Armstrong [22] and Leahy [161,165,257] have all reported successful model-based control of the Puma 560, yet there is significant difference in the parameter sets used. This may in fact indicate that the rigid-body effects do not dominate the dynamics of this robot, or that “some feedforward is better than no feedforward”. This issue will be revisited in Section 2.4. Comparisons of the published model data are complicated by the different coordinate frames, kinematic conventions and measurement units used in the original reports. The first step is to convert all reported data to a common set of units and coordinate frames, and these data are reported and compared in [60,61]. Some data sets are to be preferred to others due to the methodologies used to obtain them. The remainder of this section comprises an abbreviated report of that comparison work and tabulates the preferred


    Parameter   Value
    sx1          -
    sy1          -
    sz1          -
    sx2         -363.8
    sy2            6
    sz2           77.4
    sx3            0
    sy3          -14
    sz3           70
    sx4            0
    sy4          -19
    sz4            0
    sx5            0
    sy5            0
    sz5            0
    sx6            0
    sy6            0
    sz6           32

    Table 2.4: Link COM position with respect to the link frame (mm).

inertial parameter values.

Link mass data for joints 2-6, given in Table 2.3, are based on the results of Armstrong et al. [20], who actually disassembled the robot and weighed the links. The often cited data of Tarn et al. [246] is based on estimated mass from models, dimensional data and assumed mass distribution and densities, which is likely to be less accurate. A similar approach appears to have been taken by Paul et al. [204]. Armstrong however does not give a value for m1, so Tarn's value is presented in the table. It can be shown however that the parameter m1 does not appear in the equations of motion: it is not a base parameter [148,153].

Link center of gravity data given in Table 2.4 is again based on Armstrong et al., who measured the COM of the disassembled links on a knife-edge. Tarn et al.'s data is again an estimate based on models of the arm structure.

It is difficult to meaningfully compare these data sets, and contrast them with those for the robot used in this work. The approach proposed here is to compare the gravity loading terms for joints 2 and 3, those links for which gravity load is significant. A small number of gravity load coefficients encapsulate a larger number of mass and COM parameters. The gravity loadings are readily generated from the symbolic torque


    Parameter   Armstrong   Tarn        RCCL
    g1          -0.896      -0.977      -0.928 (CP30/g)
    g2           0.116       0.235       0.0254 (CP21/g)
    g3           0          -0.00980
    g4           0           0
    g5          -2.88e-3     0.34e-3    -2.88e-3 (CP50/g)
    g6           0           0
    g7          -0.104      -0.112       0.104 (CP22/g)
    g8           3.80        5.32       -3.79 (CP20/g)

    Table 2.5: Comparison of gravity coefficients (N.m) from several sources.

equations established earlier, and are given by

    \frac{\tau_{g3}}{g} = \left[ (m_6 + m_5 + m_4) D_4 + s_{z3} m_3 + m_4 s_{y4} \right] S_{23} + \left[ (m_3 + m_4 + m_5 + m_6) A_3 + m_3 s_{x3} \right] C_{23} + \left( s_{z4} m_4 + s_{y5} m_5 + s_{y6} m_6 C_6 \right) C_{23} S_4 + S_{23} S_5 s_{y6} m_6 S_6 + \left( s_{z6} m_6 + s_{z5} m_5 \right) \left( C_{23} S_5 C_4 + S_{23} C_5 \right) + S_6 s_{y6} m_6 C_{23} C_4 C_5

    \frac{\tau_{g2}}{g} = \left[ m_2 s_{x2} + (m_2 + m_3 + m_4 + m_5 + m_6) A_2 \right] C_2 + s_{y2} m_2 S_2 + \frac{\tau_{g3}}{g}    (2.34)

where A_i and D_i are kinematic constants, C_i = \cos\theta_i and S_i = \sin\theta_i. These may be written more succinctly in terms of a number of coefficients which are functions of link mass, center of gravity and some kinematic length parameters:

    \frac{\tau_{g3}}{g} = g_1 S_{23} + g_2 C_{23} + g_3 C_{23} S_4 + g_4 S_{23} S_5 S_6 + g_5 \left( S_5 C_4 C_{23} + S_{23} C_5 \right) + g_6 C_5 C_4 C_{23} S_6    (2.35)

    \frac{\tau_{g2}}{g} = g_7 S_2 + g_8 C_2 + \frac{\tau_{g3}}{g}    (2.36)

These coefficients are evaluated and compared in Table 2.5, along with those used by the RCCL robot control package [115,175]. There is close agreement between the magnitudes of the coefficients from Armstrong and those used in RCCL. Different kinematic conventions used by RCCL, or the sign of the gear ratios, may explain the difference in sign for the joint 2 coefficients g7 and g8. The RCCL values were

    Figure 2.4: Measured and estimated gravity load on joint 2, τ_g2(θ2), for θ3 = π/2. Torque measurements (shown dotted) are derived from measured motor current and corrected to eliminate the effect of Coulomb friction. Also shown is the estimated gravity load based on the maximum likelihood fit (dotdash), and the parameter values of Armstrong (solid) and Tarn (dashed).

determined using an experimental procedure similar to that described by Lloyd [175] for a Puma 2607.

Figure 2.4 shows the joint 2 torque due to gravity, versus changing shoulder joint angle. The shoulder joint was moved forward and backward over the angular range at very low speed to eliminate any torque component due to viscous friction. Joint torque is derived from measured motor current using motor torque constants from Table 2.14. The Coulomb friction effect is very pronounced, and introduces significant hysteresis in the torque versus angle plot. The torque in Figure 2.4 has been corrected for Coulomb friction using the identified friction parameters from Table 2.12, but some hysteresis remains at q2 = π. It is speculated that this is due to position-dependent Coulomb friction effects outside the range of joint angles over which the friction estimation experiments were conducted.

A maximum likelihood fit to the experimental data is also shown in the figure. It can be seen that the estimated torque using Armstrong's data is slightly lower than

7John Lloyd, private communication.


    Parameter         β (N.m)   φ (rad)
    Armstrong         46.1        2.6e-3
    Tarn              61.8       19.5e-3
    Max. Likelihood   52.9      -10.8e-3

    Table 2.6: Comparison of shoulder gravity load models in cosine form.

    Parameter   Value
    Jxx1         -
    Jyy1         0.35†
    Jzz1         -
    Jxx2         0.130
    Jyy2         0.524
    Jzz2         0.539
    Jxx3         0.066
    Jyy3         0.0125
    Jzz3         0.086
    Jxx4         1.8e-3
    Jyy4         1.8e-3
    Jzz4         1.3e-3
    Jxx5         0.30e-3
    Jyy5         0.30e-3
    Jzz5         0.40e-3
    Jxx6         0.15e-3
    Jyy6         0.15e-3
    Jzz6         0.04e-3

    Table 2.7: Link inertia about the COM (kg m²). †This value, due to Armstrong, is in fact the inertia about the link frame, J_{yy1} + m_1 (s_{x1}^2 + s_{z1}^2), not about the COM.

that measured, while that based on Tarn's data is somewhat higher. The gravity load may be written in the form

    \tau_{g2} = \beta \cos(\theta_2 + \phi)    (2.37)

where β is the magnitude and φ the phase. The coefficients for the various forms are compared in Table 2.6, and in terms of magnitude the maximum likelihood fit is bracketed by the models of Tarn and Armstrong, as is also evident from Figure 2.4. Despite the previous objections to the methodology of Tarn et al., their data gives a fit for the gravity load of joint 2 that is as good as that of Armstrong.
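Since β cos(θ2 + φ) = (β cos φ) cos θ2 − (β sin φ) sin θ2, fitting the cosine form (2.37) is a linear least-squares problem in two parameters. A sketch on synthetic data, using invented noise and the maximum-likelihood values from Table 2.6 as ground truth:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.linspace(-np.pi, np.pi, 200)
beta_true, phi_true = 52.9, -10.8e-3
# Synthetic 'measured' torques with additive noise (invented noise level)
tau = beta_true*np.cos(theta + phi_true) + rng.normal(0, 0.5, theta.size)

# Linear least squares on the regressors [cos(theta), -sin(theta)]
A = np.column_stack([np.cos(theta), -np.sin(theta)])
(a, b), *_ = np.linalg.lstsq(A, tau, rcond=None)
beta, phi = np.hypot(a, b), np.arctan2(b, a)   # back to magnitude and phase
print(beta, phi)
```

The recovered magnitude and phase are close to the true values, illustrating how β and φ in Table 2.6 can be identified from torque measurements.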

Link inertia about the center of gra