
Integration of vision and force for robotic servoing

Integratie van visie- en krachtcontrole bij robots

Johan Baeten

8 November 2001


© Katholieke Universiteit Leuven, Faculteit Toegepaste Wetenschappen, Arenbergkasteel, B-3001 Heverlee (Leuven), Belgium

All rights reserved. No part of this publication may be reproduced and/or made public in any form, by print, photoprint, microfilm, electronic or any other means without prior written permission from the publisher.

D/2001/7515/33
ISBN 90-5682-323-X
UDC 681.3∗I29


Preface

As is common practice and only most appropriate at the end of a major work, I'd like to express my gratitude to everyone I encountered and to many colleagues who stood by me during my research years at the department of mechanical engineering.

First of all, I sincerely thank my promotor prof. dr. ir. Joris De Schutter for the offered opportunity with an excellent research topic and for his support and guidance. I'd further like to thank my promotor prof. dr. ir. Luc Van Gool, the assessors prof. dr. ir. Hendrik Van Brussel and prof. dr. ir. Jan Swevers, and prof. Bruno Siciliano from Italy for their efforts on my behalf.

I highly appreciated the enlightening discussions, ranging from robotics to common life, with prof. dr. ir. Herman Bruyninckx, dr. ir. Marnix Nuttin, dr. ir. Stefan Dutre, dr. ir. Qi Wang, ir. Erwin Aertbelien, ir. Walter Verdonck, ir. Tine Lefebvre, ir. Klaas Gadeyne, ir. Gudrun Degersem and many others. I certainly enjoyed the company of ir. Erwin Aertbelien, ir. Wim Symens, ir. Vincent Lampaert and ir. Bram Demeulenaere during the conferences abroad.

Also thanks to the indispensable technical, secretarial, informatics and electronics staff at PMA, and to the master thesis students, Dirk Miseur, Dieter Penninckx, Peter Schillebeeckx, Swen Vandenberk, Jan Verbiest and Walter Verdonck, who molded this research together with me to its final state.

I'm furthermore grateful for the suggestions on language issues by Jos Steverlink at the final scripting stage.

Finally, I'm forever indebted to my wife, who supported me all the way in spite of my absence in some difficult times.


Abstract

Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance, feasibility and flexibility in an unknown working environment. Of all sensing devices, vision and force sensors are among the most complementary ones. A combined vision/force setup has to merge their advantages while overcoming their deficiencies.

This work shows how integrated vision/force control improves the task quality, in the sense of increased accuracy and execution velocity, and widens the range of feasible tasks.

Both the 3D visual servoing alignment task as well as the numerous combined visual servoing/force control tasks (in 2D and 3D) demonstrate the fitness of the task frame formalism and the hybrid control structure as a basis to easily model, implement and execute robotic tasks in an unknown workspace. To this end, the high level task description divides the control space into vision, force, tracking and velocity controlled directions, possibly augmented with feedforward control.

Several examples illustrate traded, hybrid and (all possible forms of) shared vision/force control with both sensors mounted on the end effector. This combined mounting and control is classified into four meaningful camera/tool configurations, being parallel or non-parallel endpoint closed-loop and fixed or variable endpoint open-loop (EOL). The EOL configuration with variable task/camera frame relation is fully explored, since it is the most adequate one for 'simple' image processing and the most challenging in the sense of control. For the variable EOL configuration, special control issues arise, due to the time shift between the moment the contour is measured and the moment these data are used. Furthermore, keeping the image feature (e.g. a contour) in the camera field of view, while maintaining a force controlled contact, imposes additional requirements on the controller. This double control problem is adequately solved using the redundancy for rotation in the plane, which exists for rotationally symmetric tools, in order to independently control the task and camera orientations. Two variable EOL tasks, the 'planar contour following of continuous curves' and the 'planar contour following at corners', are fully examined. They demonstrate that adding vision based feedforward to the tracking direction reduces tracking errors, thereby enabling a faster and more accurate execution of the task.


Table of contents

Abstract

Table of contents

Symbols

1 Introduction
1.1 Approach
1.2 Main contributions
1.3 Chapter by chapter overview

2 Literature survey
2.1 Introduction
2.2 Feature detection
2.3 Image modelling
2.4 Visual servoing
2.5 Camera configuration
2.6 Hybrid position/force control
2.7 Sensor integration
2.8 Relation to this work

3 Framework
3.1 Hybrid control structure
3.2 Theoretical 2D contour following
3.3 Dynamic control aspects
3.3.1 Vision control loop
3.3.2 Force control loop
3.4 Force sensing
3.5 Tracking error identification


3.6 Relative area contour detection
3.7 ISEF edge detection
3.8 Contour modelling and control data extraction
3.9 Camera model and calibration
3.10 Conclusion

4 Classification
4.1 Introduction
4.2 Adopted restrictions
4.3 Tool/camera configurations
4.4 Shared control types
4.5 Combined vision/force task examples
4.6 Conclusion

5 Visual servoing: a 3D alignment task
5.1 Strategy
5.2 Used algorithms
5.3 Experimental results
5.4 Conclusion

6 Planar contour following of continuous curves
6.1 Introduction
6.2 EOL setup
6.3 Task specification
6.4 Detailed control approach
6.4.1 Double contact control
6.4.2 Matching the vision data to the task frame
6.4.3 Calculating the feedforward signal
6.5 Experiments
6.6 Conclusion

7 Planar contour following at corners
7.1 Introduction
7.2 Path measurement and corner detection
7.3 Augmented control structure
7.4 Experimental results
7.5 Conclusion


8 Additional experiments
8.1 Introduction
8.2 Traded control in EOL
8.3 Shared control in non-parallel ECL
8.4 3D path in fixed EOL
8.5 Conclusion

9 General conclusion
9.1 Main contributions
9.2 Limitations and future work

Bibliography

A Derivations
A.1 Frames and homogeneous transformations
A.2 Screw transformations
A.3 Denavit-Hartenberg for KUKA361
A.4 Point symmetry with relative area method
A.5 Relative area algorithms
A.6 Arc following simulation

B Contour fitting
B.1 The tangent model
B.2 Full second order function
B.3 Interpolating polynomial
B.4 Third order non-interpolating polynomial
B.5 Interpolating cubic splines
B.6 Parameterized representation
B.7 Conclusion
B.8 Graphical user interface

C Experimental validation of curvature computation

D Comrade task descriptions
D.1 3D rectangle alignment
D.2 Planar contour following
D.3 Planar contour following at corners
D.4 Adaptations to COMRADE


E Image processing software on DSP
E.1 Introduction
E.2 System definition file
E.3 Main program file (example)
E.4 Main header file
E.5 IO from DSP to transputer
E.6 IO from transputer to DSP
E.7 Image to host
E.8 Procedures to evaluate image

F Technical drawings
F.1 Sinusoidal contour
F.2 ‘Constant-curved’ contour
F.3 Camera mounting - ‘ahead’
F.4 Camera mounting - ‘lateral’


Symbols

Variables:
Cx, Cy : pixel coordinates of center of image [pix]

F : 3 × 1 force vector [N]
Fi : force in i-direction, i = x, y, z [N]
bH : homogeneous frame representation of frame b
baH : homogeneous transformation from frame b in a
I : intensity (function) or grey-level value []

J : Jacobian matrix
K : control gain [sec^-1]

M : (3 × 1) moment or torque (vector) [Nmm]

P : 3 × 1 position vector [mm]

R : (3 × 3) rotation matrix
Ri : rotation matrix for a rotation around direction i
S : (6 × 6) screw transformation matrix
SXSra : shifted x-signed relative area parameter [pix]

Tra : total relative area parameter [pix]

Tvs, Tf : control cycle period for vision and force [sec]

Ti : translational component [mm]

V : 6 × 1 velocity vector or twist [mm/sec, rad/sec]

W : 6 × 1 force vector or wrench [N, Nmm]

XSra : x-signed relative area parameter [pix]

XY Sra : xy-signed relative area parameter [pix]

Y Sra : y-signed relative area parameter [pix]

f : focal length [mm]

k : discrete time step []
bk : stiffness of object b [N/mm, Nmm/rad]
bkinv : inverse stiffness or compliance of b [mm/N, rad/Nmm]


q : joint angle [rad]

r : radius [mm]

s : arc length [mm]

s : continuous domain Laplace variable [sec^-1]

sx : scale factor for resampling of scanline []

t : time [sec]

u, v : coordinates relative to left-top of image [pix]

v : (3 × 1) velocity (vector) [mm/sec]

x, y, z : distances (along respective directions) [mm]

xp, yp : distances expressed in pixels or pixel coordinates relative to center of image [pix]

xs, ys : image sizes: width and height [pix]

z : Z-transform variable []

α, β : angles [rad]

∆θi : orientation tracking error about axis i [rad]

∆i : tracking error in direction i [mm]

θ : angle [rad]

κ : curvature [mm^-1]

κ1 : radial lens distortion coefficient [mm^-2]

µp : pixel dimension [mm/pix]

τ : time constant [sec]

ω : (3 × 1) angular velocity (vector) [rad/sec]

Frame and object indices (as preceding subscripts or superscripts):
C : contour
COC : corrected offset contour
OC : offset contour
abs : absolute frame or world frame
ap : arbitrary point
cam : camera frame
cp : calibration point
ee : end effector frame
fs : force sensor frame
obj : object frame
t : task frame or tool frame

Signal indices (as superscripts):
a : actual signal


c : commanded signal
d : desired signal
m : measured signal
ff : feedforward signal
ss : steady state signal

Control related indices (as superscripts):
f : force direction parameter
tr : tracking direction parameter
v : velocity direction parameter
vs : vision direction parameter

Direction or type related indices (as subscripts):
i : substitute for either x, y or z
p : parameter expressed in pixels
s : size
ra : relative area parameter
x : component along or about x-direction
y : component along or about y-direction
z : component along or about z-direction

Figure 1: Convention for sub- and superscripts

Abbreviations:
CSD : control space dimension
DH : Denavit-Hartenberg
DOF : degree of freedom
DSP : digital signal processor
ECL : endpoint closed-loop


EOL : endpoint open-loop
FWK : forward kinematics
IO : input/output
ISEF : infinite symmetric exponential filter
TCP : tool center point
TF : task frame
VSP : video signal processor


Chapter 1

Introduction

Recent research in industrial robotics aims at the involvement of additional sensors to improve robustness, flexibility and performance of common robot applications. Many different sensors have been developed over the past years to fit the requirements of different but very specific tasks.

Sensor based control in robotics can contribute to the ever growing industrial demand to improve speed and precision, especially in an uncalibrated working environment. But besides the kinematic capabilities of common industrial robots, the sensing abilities are still very underdeveloped. Force/torque sensors mounted to a robot's wrist, for example, are still an exception and limited to the fields of scientific research. Compared to the number of annual robot sales, the number of sensor equipped robots is still negligible.

Nevertheless, good sensing abilities are essential to gain a higher flexibility and autonomy of robots. Sensor based control can deal with and compensate for uncertainties in positions of workpiece and/or tool, e.g. due to tool wear or compliance. Accurate fixturing of the workpiece also becomes superfluous. Furthermore, an evaluation of interacting forces protects the tool, the workpiece and the robot itself from damage.

Of all sensors, vision and force are among the most complementary ones. Visual sensors are powerful means for a robot to know its task environment. Force sensors are essential in controlling the tool/workpiece contact. The goal of a mixed visual servoing/force control setup is to combine their advantages, while overcoming their shortcomings. Both the task quality, in the sense of increased velocity and accuracy, and the range of feasible tasks have to increase significantly with integrated vision/force control.

1.1 Approach

The presented, real-time approach aims at integrating visual servoing and force tracking based on the task frame formalism in a hybrid control structure. By dividing the control space into separate directions with clear physical meaning, the task frame formalism provides the means to easily model, implement and execute robotic servoing tasks in an uncalibrated workspace.

Distinct in our approach w.r.t. other hybrid vision/force research is the combined mounting of vision and force sensors. Four meaningful force-tool/camera configurations are looked into. They are on the one hand the endpoint closed-loop configurations with either a parallel or a non-parallel camera mounting and on the other hand the endpoint open-loop configurations with either a fixed or a variable camera frame to task frame relation. Several tasks illustrate these configurations. Two concrete (variable EOL) applications are examined in detail: the 'planar contour following task' and the 'control approach at corners'. They show how the performance of a force controlled task improves by combining force control (tracking) and visual servoing. While maintaining the force controlled contact, the controller keeps the camera, also mounted on the robot end effector, over the contour at all times. Then, from the on-line vision-based local model of the contour, appropriate feedforward control is calculated and added to the feedback control in order to reduce tracking errors. The latter approach can be applied to all actions that scan surfaces along planar paths for one-off, uncalibrated tasks with a rotationally symmetric tool: cleaning, polishing, or even deburring.
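To make the idea of vision based feedforward on the tracking direction concrete, the following minimal sketch shows how a curvature based feedforward term could be added to a force based orientation tracking controller for planar contour following. It is an illustration only, not the thesis implementation; the function and variable names ('kappa_ahead' for the curvature estimated from the on-line vision model, 'v_tangent' for the commanded tangential velocity, 'k_tr' for the tracking gain) are assumptions.

def tracking_angular_velocity(delta_theta, v_tangent, kappa_ahead, k_tr=2.0):
    # Sketch: angular velocity command for the tracking direction [rad/sec].
    feedback = k_tr * delta_theta          # proportional action on the measured tracking error [rad]
    feedforward = kappa_ahead * v_tangent  # rotation rate needed to follow the local curvature
    return feedback + feedforward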

1.2 Main contributions

The main innovations and contributions of this thesis are:

• A new approach to visual servoing is explored by using the task frame formalism and relative area algorithms: using the task frame formalism, the goal of the task is translated into control actions in the task frame. Each task frame direction is hereby linked to one specific image feature. All image features are computed using relative area algorithms.

• A new and unique framework for combined vision/force control is implemented. This framework is based on the task frame formalism in a hybrid control setting and uses both sensors mounted on the end effector.

* The task frame formalism offers the means to easily model, implement and execute combined vision/force robotic servoing tasks in an uncalibrated workspace.

* Vision and force sensing are combined in a hybrid control scheme by dividing the control space in vision, force, tracking and velocity subspaces, possibly with feedforward. The hybrid control scheme forms the heart of our approach and consists of a (sensor based) outer control loop around a (joint) velocity controlled robot.

• A new form of shared control, in which vision based feedforward is added to force based tracking control, is presented together with examples of all possible shared control types.

• A new classification of (combined mounted) camera/tool configurations into parallel or non-parallel endpoint closed-loop and fixed or variable endpoint open-loop (EOL) is given. Numerous task examples illustrate these new configurations.

• Improved planar contour following for both continuous contours and contours with corners is realized using a variable endpoint open-loop setup and the corresponding control strategy.

* Adding vision based feedforward to the tracking direction improves the quality, in the sense of increased accuracy or execution velocity, of the force controlled planar contour following task at continuous curves or at corners.

* A twice applied least squares solution is implemented for a robust, on-line computation of the curvature based feedforward.

* For rotationally symmetric tools, there exists a redundancy for the rotation in the plane. This degree of freedom is used to position the camera while maintaining the force controlled contact. This results in a new variable relation between task frame and end effector frame orientations.

* Special control issues arise, due to the time shift between the moment the contour is measured and the moment these data are used. A strategy to match the (vision) measurement data with the contact at hand based on Cartesian position is proposed.

1.3 Chapter by chapter overview

Figure 1.1 gives a structural overview of the following chapters. First, chapter 2 briefly surveys the wide range of research domains which are related to our work. It places our approach in a global frame. Chapter 3 explains in more detail the different parts of the broad underlying structure that make up the framework of our approach. The heart of the framework is the hybrid control scheme. It forms the basis for the implementations in the next chapters. The separate elements of the hybrid control loop, including the sensor related issues, are discussed.

Chapter 4 builds on the presented framework by giving a comparative classification of combined vision/force tasks using different tool/camera configurations, on the one hand, and by exploring the possible types of shared control on the other hand. Numerous task examples (in 3D space) using combined visual servoing and force control illustrate the presented concepts. The remaining chapters investigate in more detail the key elements of these concepts.

First, to illustrate the visual servoing capabilities, chapter 5 exemplifies a strategy to visually align the robot with end effector mounted camera relative to an arbitrarily placed object.

Then, chapters 6 to 8 study combined vision/force tasks. Especially the endpoint open-loop configuration with variable task/camera frame relation, which is the most challenging in the sense of control, is fully explored in the planar contour following task of chapter 6. Chapter 7 continues by extending the control approach to adequately round a corner. To end with, chapter 8 presents the results of three additional experiments. They validate once more the potential of the used approach.

Finally, chapter 9 concludes this work, followed by theoretical derivations, an evaluation of contour models, experimental validation of the curvature computation, practical implementation issues involving high level task descriptions and programming, vision processing code examples and technical drawings with calibration results in appendices A to F respectively.

Figure 1.1: Structural coherence of contents


Chapter 2

Literature survey

2.1 Introduction

This work spans and combines a wide range of research topics. Image processing, robot vision, image modelling, servo control, visual servoing, hybrid position/force control and sensor fusion are the most important ones. The following sections briefly review the relevant topics for each of these research domains and relate them to our work. Only stereo vision is left aside because of its fundamentally different nature.

Figure 2.1: Overview of treated topics in this section


Figure 2.1 gives an overview of the logical coherence of the treated topics. Section 2.2 starts with the criteria for and a review of the low level image processing, and the most suitable techniques are chosen. Section 2.3 gives a brief overview of the image modelling techniques, followed by an outline of the basic approaches in visual servoing in section 2.4 and the camera configuration in section 2.5. Then section 2.6 briefly describes the standard issues in hybrid position/force control, the second pillar of this work next to visual servoing. Section 2.7 continues with the fusion approaches for visual and force control. Finally, section 2.8 summarizes our choices and approach.

2.2 Feature detection

Image processing typically consists of three steps: (1) the preprocessing step, e.g. filtering and image enhancement, (2) the feature extraction, e.g. edge localization, and (3) the interpretation, e.g. modelling, contour fitting or object matching. The low level image processing that implements the first two steps (often in one operation) is called feature detection. Typical image features are edges (one dimensional) and corners (two dimensional).

This section states the criteria for good feature detection, reviews the main feature detection techniques and reveals the choices made in this work.

Quality criteria for feature detectors: Canny gives three criteria for feature (in particular edge) detection [9]. They are:

1. Good detection. There should be a minimum number of false negatives and false positives.

2. Good localization. The feature location must be reported as close as possible to the correct position.

3. Only one response to a single feature or edge.

Being part of a real time system using real image sequences, one final and at least as important criterion for the feature detection is:

4. Speed.


A summarized list of the desired key characteristics of the detection algorithm is: robustness (∼ criteria 1 and 3), accuracy (criterion 2) and real-time feasibility (criterion 4).

Review of edge detection: There has been an abundance of work on different approaches to the detection of one dimensional features in images. The wide interest is due to the large number of vision applications which use edges and lines as primitives to achieve higher level goals. Some of the earliest methods of enhancing edges in images used small convolution masks to approximate the first derivative of the image brightness function, thus enhancing edges. These filters give very little control over smoothing and edge localization.

Canny described what has since become one of the most widely used edge finding algorithms [8]. The first step taken is the definition of the criteria which an edge detector must satisfy. These criteria are then developed quantitatively into a total error cost function. Variational calculus is applied to this cost function to find an “optimal” linear operator for convolution with the image. The optimal filter is shown to be a very close approximation to the first derivative of a Gaussian function.

Non-maximum suppression in a direction perpendicular to the edge is applied, to retain maxima in the image gradient. Finally, weak edges are removed using thresholding with hysteresis. The Gaussian convolution can be performed quickly because it is separable and a close approximation to it can be implemented recursively. However, the hysteresis stage slows the overall algorithm down considerably.

Shen and Castan propose an optimal linear operator for step edge detection [70]. They use a similar analytical approach to that of Canny, resulting in an efficient algorithm which has an exact recursive implementation. The proposed algorithm combines an Infinite Symmetric Exponential smoothing Filter (ISEF), which efficiently suppresses noise, with a differential operator. It is shown that ISEF has a better performance in precision of edge localization, insensitivity to noise and computational complexity compared with Gaussian and Canny filters.
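As an illustration of the recursive character of the ISEF, the sketch below implements a one dimensional symmetric exponential smoother as a causal plus an anti-causal first order pass, and localizes a step edge at the zero crossing of the difference between the smoothed and the original signal (a Shen-Castan style approximation of the band-limited Laplacian). This is a minimal, generic sketch, not the DSP implementation used later in this work; the smoothing factor b is an assumed tuning parameter.

def isef_smooth(signal, b=0.85):
    # 1D infinite symmetric exponential filter: causal + anti-causal recursion,
    # normalized so that a constant signal is left unchanged.
    n = len(signal)
    causal = [0.0] * n
    anti = [0.0] * n
    causal[0] = (1.0 - b) * signal[0]
    for i in range(1, n):
        causal[i] = (1.0 - b) * signal[i] + b * causal[i - 1]
    anti[n - 1] = (1.0 - b) * signal[n - 1]
    for i in range(n - 2, -1, -1):
        anti[i] = (1.0 - b) * signal[i] + b * anti[i + 1]
    return [(causal[i] + anti[i] - (1.0 - b) * signal[i]) / (1.0 + b) for i in range(n)]

def step_edges(signal, b=0.85):
    # Edge positions at the zero crossings of (smoothed - original).
    s = isef_smooth(signal, b)
    d = [s[i] - signal[i] for i in range(len(signal))]
    return [i for i in range(1, len(d)) if d[i - 1] * d[i] < 0]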

Venkatesh, Owens et al. describe the approach of using “local energy” (in the frequency domain) to find features [83]. The local energy is found from quadrature pairs of image functions, such as the image function and its Hilbert transform. Fourier transforms of the image were initially used to find the required functions, at large computational cost.

Smith presents a new approach which involves a significant departure from feature extraction and noise reduction methods previously developed [73]. The brightness of each pixel within a mask is compared with the brightness of that mask’s nucleus. The area of the mask which has the same (or similar) brightness as the nucleus is called the “USAN”, an acronym for “Univalue Segment Assimilating Nucleus”. From the size, centroid and second moments of the USAN, two dimensional features and edges can be detected. Edges and corners correspond to the (locally) Smallest USAN value. This gives rise to the acronym SUSAN (Smallest Univalue Segment Assimilating Nucleus). This approach to feature detection has many differences with respect to the well known methods, the most obvious being that no image derivatives are used and that no noise reduction is needed.

Choices: In this work, two different low level image processing techniques are investigated and used: a basic pixel-weighting or relative area method and the ISEF edge detection algorithm of [70].

Similar to the USAN method [73], the pixel-weighting method, which directly determines centroids, distances and angles (described in chapter 3, section 3.6 and appendix A), has the advantage of being very robust. The integrating effect of the principle ensures strong noise rejection. This method is also fast, because each pixel is evaluated only once, and it is well suited for image-based visual servoing (in which feature characteristics are directly used as control inputs, see section 2.4). The method has, however, limited accuracy, uses binary thresholding and is only applicable to local basic images.
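The following sketch conveys the spirit of such a pixel-weighting measurement; it is an illustration with simplified formulas, not the relative area parameters the thesis defines in section 3.6 and appendix A. For a small window containing a straight, roughly horizontal contour with the dark object side below it, the total fraction of dark pixels yields the signed offset of the contour with respect to the window centre, and the left/right imbalance of the dark area yields its angle.

import math

def straight_contour_estimate(window, threshold=128):
    # 'window' is a small grey-level image (list of rows); dark pixels (< threshold)
    # are assumed to lie below a straight contour crossing the window.
    rows, cols = len(window), len(window[0])
    dark_per_column = [sum(1 for y in range(rows) if window[y][x] < threshold)
                       for x in range(cols)]
    half = cols // 2
    left_mean = sum(dark_per_column[:half]) / float(half)
    right_mean = sum(dark_per_column[half:]) / float(cols - half)
    dark_fraction = sum(dark_per_column) / float(rows * cols)
    offset = rows * (0.5 - dark_fraction)                    # contour offset below (+) the window centre [pix]
    angle = math.atan2(left_mean - right_mean, cols / 2.0)   # contour angle w.r.t. the image x-axis [rad]
    return offset, angle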

To overcome the limited accuracy of the pixel-weighting method, ISEF edge detection is used. This optimal linear step edge detector is still quite robust, has excellent accuracy and is fast. The computational burden is especially low when the edge detector is used on small sub-images with only a limited number of line convolutions. The ISEF edge detector thus fulfills all the quality criteria expectations (section 2.2).


2.3 Image modelling

The next step, after feature detection, is the image/feature interpretation. This step models the features in the image, in order to generate (in our case) the control actions to be taken. Typically, an image feature will correspond to the projection of a physical feature of some object on the image plane. The modelling step then extracts the image feature parameters by using either implicit or explicit models as a representation of the physical object or target.

Implicit modelling assumes that the model is incorporated in the detection algorithm itself. For example, the proposed pixel-weighting method (section 3.6) uses a straight contour as implicit model. The measurements of the distance to the contour and the angle of the contour are directly incorporated in the method. Implicit modelling is typically part of an image-based approach (see section 2.4).

Most techniques, however, use explicit models with feature parameters such as (relative) distance, (relative) area, centroid and higher order moments. For example, Chaumette, Rives and Espiau [11, 31] use a set of (four) points or lines for relative positioning and visual servoing; Sato and Aggarwal [68] estimate the relative position and orientation of a camera to a circle having unknown radius; Jorg, Hirzinger et al. [44] also use a circle in tracking an engine block; and Ellis et al. [30] consider the fitting of ellipses to edge-based pixel data in the presence of uncertainty. More complex object models, made possible by the increased computational power, are e.g. used in robust workpiece tracking and gripping by Tonko et al. [76], and in real-time tracking of 3D structures by Drummond and Cipolla [29] or by Wunsch and Hirzinger [92].

Typical contour models are (a set of) lines [19, 20], tangents [6, 51], a second order curve [51], a polar description [13] and splines [51] or snakes, e.g. [42]. Chapter 3 investigates different contour models for vision based contour following, going from full second order curves (circle, ellipse, hyperbola ...), over splines, to a list of points and tangents. Circle models are found to be appropriate if the contour is known (in advance) to be circular; otherwise a list of points and tangents, which is used in our approach, gives better results.

The usefulness of a particular model not only depends on the robustness, accuracy and speed in fitting the model onto the edge data but also on the robustness, accuracy and speed at which the control signals are derived from the model, or thus from the image feature parameters. Robustness, e.g. in object recognition and classification or even visual servoing, can be achieved by the use of invariant image feature parameters in the representation model [48, 79, 81].

The counterpart of the feature-based methods, discussed until now, is the correlation-based method. If the target has a specific pattern that changes little over time (i.e. temporal consistency), then tracking can be based on correlating the appearance of the target in a series of images. The Sum of Squared Differences (SSD) [62, 63, 72] is such a correlation-based method¹ which can be used to determine the relative motion between object and camera, to get depth or structure from motion in mono-vision or even to initialize a set of good trackable features. On the other hand, if the target does not show a specific pattern, as is the case e.g. in an image of a straight contour, a correlation-based method will result in multiple inconclusive correspondences between the original and shifted images. Hence, it is unfit for the contour following tasks at hand. A technique related to the correlation-based methods, but much more sophisticated, is optical flow [38].
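A minimal sketch of SSD-based matching is given below; it is illustrative only (the function names and the exhaustive search strategy are assumptions, not taken from the cited work). The displacement of a patterned target between two frames is found as the offset that minimizes the sum of squared grey-level differences with a reference template.

def ssd(patch_a, patch_b):
    # Sum of squared grey-level differences between two equally sized patches.
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def best_match(template, image, search_range=8):
    # Exhaustive SSD search of 'template' over a small window in 'image';
    # returns the (dx, dy) offset with the lowest score.
    th, tw = len(template), len(template[0])
    best_offset, best_score = None, None
    for dy in range(search_range + 1):
        for dx in range(search_range + 1):
            if dy + th > len(image) or dx + tw > len(image[0]):
                continue
            candidate = [row[dx:dx + tw] for row in image[dy:dy + th]]
            score = ssd(template, candidate)
            if best_score is None or score < best_score:
                best_offset, best_score = (dx, dy), score
    return best_offset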

2.4 Visual servoing

Sanderson and Weiss [67, 89] introduced a taxonomy of visual servo systems, into which all subsequent visual servo systems can be categorized [16, 33]. Their scheme distinguishes dynamic look-and-move from direct visual servo systems on the one hand and image-based from position-based ones on the other.

Dynamic look-and-move versus direct visual servo

If the vision system provides set-point inputs to the joint-level controller, which internally stabilizes the robot, it is referred to as a dynamic look-and-move system. In contrast, direct visual servo² eliminates the robot controller entirely, replacing it with a visual servo controller that directly computes joint inputs, thus using only vision to stabilize the mechanism. Most implemented systems adopt the dynamic look-and-move approach for several reasons [41]. First, the relatively low sampling rates available from vision³ make direct control of a robot end effector with complex, nonlinear dynamics an extremely challenging problem. Second, many robots already have an interface for accepting Cartesian velocity or incremental position commands. This simplifies the construction of the visual servo system, and also makes the methods more portable. Thirdly, look-and-move separates the kinematic singularities of the mechanism from the visual controller, allowing the robot to be considered as an ideal Cartesian motion device. This work utilizes the look-and-move model only.

¹ For the correlation methods too, either implicit models, e.g. in feature initialization, or explicit models, e.g. in pattern recognition, are used.

² Sanderson and Weiss originally used the term “visual servo” for this type of system, but since then this term has come to be accepted as a generic description for any type of visual control of a robotic system.

Position-based visual servoing versus image-based visual servoing

In position-based visual servoing (PBVS) (e.g. [54]), the image modelling step reconstructs (with known camera model) the (3D) workspace of the robot. The feedback then reduces the estimated pose errors in Cartesian space. If a geometric object model lies at the basis of the Cartesian pose estimation, the approach is referred to as pose-based. In image-based visual servoing (IBVS) (e.g. [13, 32, 35, 45]), control values are computed from the image features directly. The image-based approach may eliminate the necessity for image interpretation and avoids errors caused by sensor modelling and camera calibration. It does however present a significant challenge to controller design since the system is non-linear and highly coupled.

In this context, the image Jacobian was first introduced by Weiss et al. [89], who referred to it as the feature sensitivity matrix. The image Jacobian is a linear transformation that maps end effector velocity to image feature velocities. Examples can be found in [31, 35, 41] or in [2, 39]. In their work, Asada et al. [2, 39] propose an adaptive visual servoing scheme with on-line estimation of the image Jacobian. McMurray et al. [65] even incorporate the unknown robot kinematics in the dynamic on-line estimation of the Jacobian.
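For reference, the familiar image Jacobian of a single point feature is quoted below from the standard visual servoing literature (it is not derived in this thesis, and its signs depend on the chosen camera frame conventions). Here $(x_p, y_p)$ are the image coordinates of the point expressed in the same metric units as the focal length $f$, $Z$ is its depth, and $(v, \omega)$ is the camera twist expressed in the camera frame:

\begin{equation}
\begin{bmatrix} \dot{x}_p \\ \dot{y}_p \end{bmatrix} =
\begin{bmatrix}
 -\dfrac{f}{Z} & 0 & \dfrac{x_p}{Z} & \dfrac{x_p\,y_p}{f} & -\dfrac{f^2 + x_p^2}{f} & y_p \\[2mm]
 0 & -\dfrac{f}{Z} & \dfrac{y_p}{Z} & \dfrac{f^2 + y_p^2}{f} & -\dfrac{x_p\,y_p}{f} & -x_p
\end{bmatrix}
\begin{bmatrix} v \\ \omega \end{bmatrix}
\end{equation}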

Endpoint open-loop versus endpoint closed-loop

In addition to these considerations, a distinction is made between systems which only observe the target object, as is mostly the case in this work, and those which observe both target object and robot end effector [41, 16]. The former are referred to as endpoint open-loop (EOL) systems, and the latter as endpoint closed-loop (ECL) systems. The primary difference is that EOL systems must rely on an explicit hand-eye calibration when translating a task specification into a visual servoing algorithm, especially in the position-based visual servoing case. Hence, the positioning accuracy of EOL systems depends directly on the accuracy of the hand-eye calibration. Conversely, hand-eye calibration errors do not influence the accuracy of ECL systems [1, 90].

³ In spite of reports on 1 kHz visual servoing [55, 43], most visual servoing researchers still prefer to use the low cost, standard rate vision systems.

2.5 Camera configuration

Visual servo systems typically use one of two camera configurations: end effector mounted, or workspace related⁴. For the first, also called an eye-in-hand configuration, there exists a known, often fixed, relationship between the camera pose (position and orientation) and the end effector pose. In the workspace related configuration, the camera pose is related to the base coordinate system of the robot. Here, the camera may be either fixed in the workspace or mounted on another robot or pan/tilt head. The latter is called an active camera⁵ system.

For either choice of camera configuration, some form of camera calibration must be performed. In this work, Tsai's calibration technique [49, 78, 77] is adopted.

Chu and Chung [12] propose a selection method for the optimal camera position in an active camera system using visibility and manipulability constraints. Also in this context, Nelson and Khosla [57, 59] introduced the vision resolvability concept: the ability to provide good feature sensitivity, a concept closely related to the image Jacobian.

In this work, we choose to mount the camera on the end effector. This results in local images and avoids occlusion. Moreover, it gives a controllable camera position, to the effect that the image feature of interest can be placed on the optical axis (center of the image) and tracked by visual servoing. Hence, the feature measurement is less sensitive to calibration errors or distortions in the camera/lens system. Furthermore, placing the camera close to the target ensures a good resolution in the image feature measurement.

⁴ Note that a mounted camera does not necessarily imply an EOL system. Equally, a fixed camera configuration is not by definition an ECL system.

⁵ This subject is closely related to active sensing research.


2.6 Hybrid position/force control

The two basic approaches to force controlled manipulators [10, 71], surveyed by Yoshikawa in [96], are hybrid position/force control and impedance control. The impedance control approach proposed by Hogan [37] aims at controlling position and force by translating a task into a desired impedance, a relation between motion and force. The hybrid control approach was originally proposed by Raibert and Craig [66]. It separates the control space into position and force controlled directions, resulting in two perpendicular subspaces: the (position or) velocity controlled subspace and the force regulated subspace, also called twist and wrench spaces respectively⁶.

⁶ The orthogonality of twist and wrench spaces can be questioned. For the tasks at hand, however, this concept stands.

In case of a point contact between tool and workpiece, a hybrid position/force controlled task can easily be defined in the task frame formalism [7, 23, 25, 52]. Here, desired actions are specified separately for each direction of an orthogonal frame attached to the tool: the “Task Frame”. Each direction (x, y, z) is considered once as an axial direction (translation; linear force) and once as a polar direction (rotation; torque).

This work is based on the hybrid position/force control approach and on the task frame formalism as adopted, among others [36], by De Schutter and Van Brussel [7, 24, 25].
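As an illustration of how such a task frame specification reads in practice, the sketch below describes a planar contour following task with one entry per task frame direction: velocity along the contour, force normal to it, and a tracking rotation that keeps the frame tangent to the contour. The dictionary layout and the numerical values are illustrative assumptions, not the COMRADE task description syntax used in appendix D.

# Illustrative task frame specification; field names and values are assumptions.
planar_contour_following = {
    "x":  ("velocity", 20.0),   # move along the contour at 20 mm/sec
    "y":  ("force",    30.0),   # maintain a contact force of 30 N normal to the contour
    "z":  ("velocity",  0.0),   # stay in the contact plane
    "ax": ("velocity",  0.0),   # no rotation about the x-axis
    "ay": ("velocity",  0.0),   # no rotation about the y-axis
    "az": ("tracking", None),   # re-orient the task frame to stay tangent to the contour
}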

2.7 Sensor integration

The last step, of utmost importance to this work, is the sensor integration, which combines visual servoing and force control. Only recently, researchers have started to merge the fundamentals of both visual servoing and force controlled robotics, in order to benefit from the complementary characteristics of vision and force sensors. A combined vision/force setup, made possible by the increased computational power and low cost vision systems, expands the range of feasible robotic tasks significantly. The two sensing systems, however, produce fundamentally different quantities, force and position. This may present an inherent problem when attempting to integrate information from the two sensors. Sensor integration techniques require a common representation among the various sensory data being integrated. Force and vision sensors do not provide this common data representation. Furthermore, as a consequence of their different nature, the two sensors are often only useful during different stages of the task being executed. Hence, force and vision information is seldom fused as redundant data [57, 95]. We do however present tasks in which vision and force sensing is really fused. This is for example the case when vision based feedforward is added to force related feedback.

Nelson et al. [58] presented three basic strategies which combine force and vision within the feedback loop of a manipulator: traded control, hybrid control and shared control. In traded control, the controller switches between force control and visual servoing for a given direction. Hybrid control, an extension to the previously mentioned hybrid position/force control, allows visual servoing only in the twist space. The highest level of integration, however, involves shared control: it implies the use of visual servoing and force control along the same direction simultaneously. Our approach includes hybrid, traded as well as shared control.
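The difference between the three strategies can be summarized by how the velocity set-point of one task frame direction is composed. The sketch below is a schematic illustration only (names, phases and the composition rules are assumptions), not the controller developed in chapter 3.

def traded(phase, v_vision, v_force):
    # Traded control: the direction is handed over from one sensor to the other,
    # e.g. vision during the approach, force once contact is established.
    return v_vision if phase == "approach" else v_force

def hybrid(direction_type, v_vision, v_force):
    # Hybrid control: every direction is permanently assigned to exactly one sensor.
    return v_force if direction_type == "force" else v_vision

def shared(v_force_feedback, v_vision_feedforward):
    # Shared control: both sensors act on the same direction at the same time,
    # e.g. vision based feedforward added to force based feedback.
    return v_force_feedback + v_vision_feedforward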

Another approach, which is not adopted in this work but somehow related to vision based feedforward on a force controlled direction, is given by Morel et al. [53]. They propose an impedance based combination of visual and force control. In their scheme, the vision control loop generates the reference trajectory for a position based impedance controller with force feedback. Thus, the visual control is built around the force control loop, in contrast to the previously presented basic strategies [58] where vision and force control act at the same hierarchical level.

Some researchers focus on hybrid vision/force control in a fully uncalibrated workspace. E.g. [40, 64, 95] use fixed cameras which are not calibrated w.r.t. the robot. Their approaches require on-line identification of the contact situation, separation of the workspace in force and vision controlled directions, on-line vision Jacobian identification and adaptive control, each of which is tackled successfully. However, defining a path in the image plane of an uncalibrated camera has no known physical meaning. We will therefore only assume the object space to be uncalibrated. Hence, only position, orientation and shape of the workpiece or object are unknown⁷. Any other information about the task is translated into a high level task description which defines the complete task, divided in sub-tasks, using the task frame formalism. The task frame formalism thus provides the basis to combine vision and force sensing in an easy but powerful and flexible way with clear physical meaning. The high level task description divides the six degree of freedom (DOF) Cartesian control space into orthogonal, separate or mixed, vision and force (among others) controlled directions, depending on the type of task or expected contact at hand. This, in contrast to [40, 64, 93, 94, 95], facilitates the decoupling of vision and force control subspaces, if needed. Hosoda et al. [40] also suggest the need for a common coordinate frame for multiple external sensor-based controllers. In our opinion, the task frame lends itself very well to this purpose for numerous tasks, especially when multiple sensors are mounted on the end effector. Furthermore, since the high level task description determines the use of each sensor, it counters the need for vision and force resolvability as introduced by Nelson [57].

⁷ This is especially true for one-off tasks in which accurate positioning or calibration of the workpiece is costly or impossible, causing large pose errors.

For the combined vision/force contour following, as presented in this work, the camera needs to be calibrated with respect to the robot end effector. After all, with the camera looking ahead and not seeing the actual contact point (i.e. endpoint open-loop), calibration is essential to match the vision information to the contact situation at hand. Thanks to the calibration, the vision algorithms will equally benefit from the combined vision/force setup. If the relative position of camera and force sensor is known and if the (calibrated) tool is in contact with the object, the depth from the image plane to the object plane is known. Hence, the 3D position of the image feature can be calculated easily with mono vision. This is yet another type of sensor integration.
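A minimal sketch of this mono-vision depth argument, using the pinhole model and the symbols of the symbol list (focal length f, pixel dimension µp, image centre (Cx, Cy)) while ignoring lens distortion, is given below; the function name and argument order are illustrative assumptions.

def pixel_to_camera_frame(u, v, Z, f, mu_p, Cx, Cy):
    # Back-project an image point (u, v) [pix] to a 3D point in the camera frame,
    # given its depth Z [mm], which follows from the calibrated tool/object contact.
    x_p = (u - Cx) * mu_p   # metric image coordinates w.r.t. the image centre [mm]
    y_p = (v - Cy) * mu_p
    return (x_p * Z / f, y_p * Z / f, Z)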

Other reports on combined vision/force applications or approaches are given by Zhou, Nelson and Vikramaditya [60, 97] in micromanipulation and microassembly, by von Collani et al. [85, 86, 87] in a neuro-fuzzy solution for integrated visual and force control, by Song et al. [74] in a shared control structure, including vision and force sensing, for an arm/hand teleoperated system, by Hirzinger et al. [44] in a flexible robot-assembly using a multi-sensory approach, by Malis, Morel and Chaumette [50] in vision/force integration using the task function approach and by Namiki et al. [56] in high speed grasping.


2.8 Relation to this work

This work presents two approaches. They can be categorized as follows. Chapter 5 gives a 3D relative positioning task

• by image based visual servoing

• using pixel-weighting or relative area related methods,

• with a coarsely known camera setup.

Chapters 6 to 8 (and papers [3, 4, 5, 6]) present combined vision/force contour following tasks

• with calibrated camera

• using the ISEF edge detection

• with on-line contour modelling by tangent lines

• for position based visual servoing

• in a hybrid position/force control scheme

• including hybrid, traded as well as shared control.

All the presented methods are:

• feature based,

• dynamic look-and-move,

• (mostly) endpoint open-loop,

• eye-in-hand and

• based on the task frame formalism.

Unlike most reported hybrid position/force control with force sensor and camera [40, 56, 61, 57, 58, 59, 64, 93, 94, 95], our work uses an eye-in-hand camera, rather than a fixed one. Few researchers reported combined vision/force control with both sensors mounted on the manipulator. Nevertheless, they mostly use completely different control approaches, being neuro-fuzzy [85] or impedance based [53]. Only recently, Jorg, Hirzinger et al. [44] and Malis, Morel and Chaumette [50] have published multiple sensor control approaches similar to ours. The former work uses multiple sensors in a flexible robot-assembly. The chosen task, however, lends itself only to traded control. The latter uses, in contrast to the task frame formalism, a task function approach, which they introduced in earlier publications to define the goal in a visual servoing task and now extend to include force control.

The end effector mounted camera in an endpoint open-loop control enforces the need for camera calibration (pose and intrinsic camera parameters). The workpiece and environment, on the other hand, may be fully uncalibrated.

The task frame formalism is used as a concept to combine vision and force control in an easy but powerful and flexible way with clear physical meaning. In addition, vision based feedforward is added. The use of the task frame formalism results by definition in orthogonal, separate or mixed, force and vision control spaces. This avoids the extra effort of decoupling these spaces. It also distinguishes our work from visual servoing such as [31, 41]: neither a vision Jacobian identification nor optical flow calculations are necessary.

Many researchers report on vision based feedforward control. In contrast to this work, however, they mostly use off-line generated models (e.g. [22]) or partially known paths and workpieces (e.g. [47]).


Chapter 3

Framework

The previous chapter gave an overview of combined visual servoing and force control in robotics together with some basic underlying topics. This chapter will explain in more detail the different parts of the broad underlying structure that make up the framework of our approach as a basis for the next chapters.

The first section discusses the hybrid control structure used in ourapproach. It divides a robotic task into separately controlled directionsusing the task frame formalism. The overall control architecture con-sists mainly of three parts: the controller, the system and the sensingmodalities. The former two are briefly discussed in the last part of thefirst section. The sensors are the subject of sections 3.4 to 3.9.

After the introduction of the hybrid control scheme in section 3.1,section 3.2 gives the theoretical foundation for the fitness of the chosencontrol structure. Together with section 3.3, it surveys the static anddynamic characteristics of vision and force control loops.

The subsequent sections treat the remaining approach issues in-volved in the used sensing techniques. First, section 3.4 briefly treatsthe force sensor and its characteristics. Sections 3.5 explains the track-ing error identification based on either measured forces or velocities.Sections 3.6 and 3.7 describe the adopted low level image processingtechniques, followed by a comparative discussion of the properties ofthe chosen contour model in section 3.8. Finally, section 3.9 describesthe projective camera model and explains the used camera calibrationprocedure. Section 3.10 concludes the chapter.


Figure 3.1: Hybrid control scheme

3.1 Hybrid control structure

The used hybrid control structure combines sensor based position control with velocity set-points, implemented as an outer control loop around the joint controlled robot [24]. The basis for a common representation in this hybrid control is the task frame formalism [7, 25, 52]. In this formalism, desired actions are specified separately for each direction of an orthogonal frame related to the task: the Task Frame (TF). Each direction (x, y, z) is considered once as an axial direction and once as a polar direction. Force, position and vision sensors can each command separate TF directions. In total, we distinguish five types of directions: velocity, force, vision, tracking and feedforward.

Velocity direction: If the robot is commanded to move at a desired velocity in a given direction of the task frame, we call this direction a velocity direction. Normally, there are no (physical) motion constraints1 in a velocity direction. The desired velocity is specified in the task description. The 'set-point' direction dv of figure 3.1 is a velocity direction.

1 imposed by the environment onto the task frame


Force direction: If the desired action in a given TF direction consists of establishing and maintaining a specified contact force with the environment or object, we call this direction a force controlled direction or, in short, a force direction. Control loop df in figure 3.1 controls the contact forces in a force direction. Normally, in a force direction, the TF interacts with the environment. This does, however, not always need to be so. For example, the first stage of a guarded approach in a force controlled direction can be a free space motion.

Vision direction: The TF direction used to position the task frame relative to the selected image features by visual servoing is a vision controlled direction or, in short, a vision direction. Control loop dvs in figure 3.1 is vision controlled. Normally, there are no motion constraints1 in a vision controlled direction. The vision measurements are in essence pose errors2 (of the object w.r.t. the task frame). Hence, a vision direction is in fact a position controlled direction using a vision sensor. A vision direction shows similarities to a force direction, since both are sensor controlled, although with quite different sensor characteristics.

Tracking direction: If the controller uses a given TF direction to automatically adjust the pose of the moving task frame w.r.t. the environment, we call this direction a tracking direction. For a tracking direction, no desired velocity is specified. Implicitly, a tracking error equal to zero is asked. Control loop dtr of figure 3.1 tries to minimize the (absolute value of the) tracking error in a tracking direction. Since the basis for the tracking control is a positional error, a tracking direction shows similarities to a vision direction. However, the identification of the tracking error in a tracking direction is fundamentally different from the tracking error measurement in a vision direction. Section 3.5 discusses the identification of the tracking error.

Feedforward direction: Any direction of the task frame, whether it is a velocity, a vision, a force or a tracking direction, to which a feedforward signal is added, is (in addition) called a feedforward direction, indicated by d+ff in figure 3.1. The feedforward signal is superimposed on the existing velocity signals and is either model based or sensor based [21, 22] (in this case vision based).

The objective of a feedforward action is to reduce errors in a tracking, force or vision direction. Section 3.2 gives a theoretical foundation for the needed feedforward signal in planar contour following.

2 pose: position and orientation

Figure 3.2: 3 DOF planar compliant motion (with the control type for a given direction indicated between brackets)

Note that position control, without the use of external sensors, is not possible with the given hybrid control scheme. If necessary, position control of the robot end effector is performed by another (separate) controller, which is not discussed. The integration of position control with the described velocity based controller, such that the joint controller gets a mix of desired TF velocities and positions as input, forms a great challenge for future controller design3.

Example: In the case of 3 DOF (Degrees Of Freedom) planar (compliant) motion, as shown in figure 3.2, the task is specified by the desired tangential velocity (direction type dv) and the desired normal contact force (direction type df). Furthermore, the plane normal is used as a polar tracking direction (dtr) to keep the task frame tangent to the contour. All other TF directions are in fact velocity directions (dv) with specified velocities set equal to zero. To the tracking direction feedforward may be added (d+ff). The better the feedforward velocity anticipates the motion of the task frame, the smaller the tracking error. Hence, the faster the task can be executed before the errors become so large that contact is lost or excessive contact forces occur [24, 25], or the better the contact forces are maintained.

3 Only recently, Lange and Hirzinger [46] reported on such a mixed position/velocity integrated control structure.


                Axial: ${}_tv_i^c =$                                    Polar: ${}_t\omega_i^c =$
  Velocity      ${}_tv_i^d$                                             ${}_t\omega_i^d$
  Force         $K_i^f\,k_i^{inv}\,({}_tF_i^m - {}_tF_i^d)$             $K_{\theta i}^f\,k_{\theta i}^{inv}\,({}_tM_i^m - {}_tM_i^d)$
  Vision        $K_i^{vs}\,({}_ti^d - {}_ti^m)$                         $K_{\theta i}^{vs}\,({}_t\theta_i^d - {}_t\theta_i^m)$
  Tracking      $K_i^{tr}\,(\Delta i)$                                  $K_{\theta i}^{tr}\,(\Delta\theta_i)$

With the following notation: v [mm/sec] and ω [rad/sec] are axial and polar velocities; F [N] and M [Nmm] are force and moment (or torque); i must be replaced by x, y or z [mm] and indicates a position or, if used as subscript, a frame axis; θ [rad] is an angle; ∆i and ∆θ are axial and polar tracking errors; K [1/sec] is the proportional control gain; kinv [mm/N] or [1/Nmm] is the tool compliance; preceding subscript t denotes a TF signal; superscripts c, d and m indicate commanded, desired and measured signals respectively; superscripts f, vs and tr distinguish parameters for force, vision and tracking directions respectively.

Table 3.1: Proportional control laws and notation

Figure 3.3 gives a typical setup. The (6DOF) Cartesian pose of any part of the setup is defined by orthogonal frames attached to the considered part. Aside from the task frame, the figure shows the camera frame, the force sensor frame, the end effector frame and the absolute world frame. The task frame is denoted by (preceding) subscript t, the other frames by subscripts cam, fs, ee and abs respectively.

Control scheme: The hybrid control scheme of figure 3.1 forms the basis of our approach. The hybrid controller implements proportional control laws for each controlled direction (vision, force or tracking). The outputs of the controller are commanded velocities. For a velocity direction the commanded velocities are equal to the specified (desired) velocities. Table 3.1 gives an overview of the control actions for velocity, force, vision and tracking directions, in either axial or polar cases. To each velocity signal feedforward can be added. Sections 3.2 and 3.3 discuss the control loop characteristics. They demonstrate the fitness of the chosen proportional control laws with feedforward.

Together, all the commanded velocities, grouped in a 6×1 Cartesian TF velocity vector or twist, (must) define unambiguously the instantaneous motion of the task frame in 6 DOF Cartesian space. This velocity vector is the input for the joint controlled robot.
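As a purely illustrative sketch of how the control laws of table 3.1 could be assembled into such a commanded twist, the following Python fragment implements one cycle of the hybrid controller for the planar task of figure 3.2 (force controlled tx, velocity controlled ty, tracking about tz). All names, gains and numerical values are assumptions, not taken from the actual implementation.

```python
# Hypothetical one-cycle sketch of the hybrid controller of figure 3.1 for the
# planar contour following task of figure 3.2; gains and values are illustrative.

K_f, k_inv = 3.0, 0.02          # force direction gain [1/s] and tool compliance [mm/N]
K_tr       = 2.0                # tracking direction gain [1/s]

def hybrid_control_cycle(F_x_meas, F_x_des, v_y_des, delta_theta, omega_ff=0.0):
    """Return the commanded task frame twist [vx, vy, vz, wx, wy, wz] for one cycle."""
    v_x_c = K_f * k_inv * (F_x_meas - F_x_des)   # force law of table 3.1 (axial)
    v_y_c = v_y_des                              # velocity direction: desired value passed through
    w_z_c = K_tr * delta_theta + omega_ff        # tracking law (polar) plus optional feedforward
    return [v_x_c, v_y_c, 0.0, 0.0, 0.0, w_z_c]  # remaining directions: zero velocity

# example cycle: 2 N force error, 10 mm/s tangential speed, 0.05 rad tracking error
print(hybrid_control_cycle(F_x_meas=-32.0, F_x_des=-30.0, v_y_des=10.0, delta_theta=0.05))
```

The resulting twist is what would be passed as velocity set-point to the joint controlled robot.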


Figure 3.3: Typical setup with definition of task, end effector, camera, force sensor and absolute frames

Robot with joint controller: For the sake of completeness, figure 3.4 shows the inner servo control loop enclosed by the "robot with joint controller"-block of figure 3.1. The TF velocity vector is first recalculated4 to absolute end effector velocities and then transformed into the desired joint velocities, using the inverse robot Jacobian for the current robot position. The joint velocities determine the desired joint positions and are also used as feedforward velocities in the joint control loops for each axis of the robot. The final output of the robot is the absolute world Cartesian pose of the task frame (based on the physical forward kinematic structure of the robot end effector and the relation of the task frame to the robot end effector).

Figure 3.4: Actual robot system with joint controller which has Cartesian TF velocity as input and Cartesian TF position as output

4 The velocity screw transformation Sv is formulated in appendix A.

The remaining parts of the control block diagram of figure 3.1, being the force sensor, the tracking error identification and the vision system, are discussed in more detail in sections 3.4 to 3.9.

3.2 Theoretical 2D contour following

This section describes the control approach and steady state errors involved in following a 2D planar contour. Three situations are examined: (1) contour following of a straight line, which lies at a fixed angle w.r.t. the forward direction of the non-rotating task frame, (2) contour following of a straight line with a rotating task frame and (3) contour following of an arc shaped contour with a rotating task frame.

The given analysis is similar to the one given by De Schutter [27, 28], which applies to compliant motion with force control in a direction normal to the path.

Notations: The following notations are used. ${}^{abs}_tx$ is the absolute x-position of the task frame [mm]; ${}^{abs}_t\theta$ is the absolute orientation of the task frame. ${}^C_tx$ and ${}^C_t\theta$ are the relative position and orientation of the contour w.r.t. the task frame. For reasons of simplicity, the preceding superscript C is sometimes omitted. Additional superscripts c, d and m indicate commanded, desired and measured signals respectively; superscript ff denotes a feedforward signal; superscript ss indicates a steady state signal. Kx and Kθ [1/sec] are the proportional gains for the tx- and tθ-control loops respectively. In each case the TF moves at a constant velocity ${}_tv_y^c$ in the ty-direction5.

Figure 3.5: Simplified (one-dimensional) position control loop

Non-rotating task frame / straight contour: Following a straight contour with a non-rotating task frame aims at minimizing the difference in the tx-direction between the origin of the task frame and the contour while moving with a fixed velocity in the ty-direction of the task frame. Figure 3.5 gives the simplified proportional position control loop for the tx-direction.

The commanded tx-velocity ${}_tv_x^c$, according to a proportional control law, is equal to $K_x({}_tx^d - {}_tx^m)$. If the robot moves immediately at the commanded velocity, the relation between the output of the robot, being a position, and the input, being a velocity, is a simple integrator. Hence, the simplified transfer function of the robot is R(s) = 1/s, with s the Laplace variable.

Following a straight line, at a fixed angle ${}^C_t\theta$ with the forward ty-direction of the non-rotating task frame, as illustrated in figure 3.6, causes a (constant) steady state tracking error, defined by

$$\Delta x^{ss} \equiv \lim_{t\to\infty}\left({}_tx^d(t) - {}_tx^m(t)\right). \qquad (3.1)$$

According to the final value theorem of Laplace, this steady state tracking error equals6

$$\Delta x^{ss} = \lim_{s\to 0}\, s\left(\frac{s}{s+K_x}\cdot\frac{-{}_tv_y\tan({}^C_t\theta)}{s^2}\right) = \frac{-{}_tv_y\tan({}^C_t\theta)}{K_x}, \qquad (3.2)$$

with $s/(s+K_x)$ the closed loop transfer function from input to tx-direction error and $-{}_tv_y\tan({}^C_t\theta)/s^2$ the input signal. Equation 3.2 implies that in steady state the task frame moves parallel to the desired contour7.

5 Hence, there is no (external sensor) control for the position of the TF in the ty-direction.

Figure 3.6: 2D path (left) and time response (right) for the proportional tx-direction control of a non-rotating task frame following a straight contour at a fixed angle.

As is well known from control theory, the steady state tracking error will vanish for the given setup if the controller itself, aside from the robot being an integrator, also contains an integrator. Adding an extra integrator, however, may endanger the stability of the control loop. A better solution to eliminate the tracking error, not affecting the stability, is the use of feedforward control. The additional (tangent based) feedforward component

$${}_tv_x^{ff} = {}_tv_y\tan({}^C_t\theta), \qquad (3.3)$$

as shown in figure 3.7, keeps the center of the task frame on the desired path, once it has reached this desired path. Any initial error in the task frame position is eliminated by the feedback control. Hence, the tracking error becomes zero with (tangent based) feedforward. Equation 3.3 implies that the angle ${}^C_t\theta$ of the contour is measurable. With vision, but also by force tracking, this is the case.

6 This equation corresponds to the steady state force error ∆F derived in [27] by substituting ∆x by ∆F and Kx by Kf·kinv, which are the force control gain and compliance.

7 If we assume this parallel path to be valid in the first place, so ${}_tv_x^{ss} = -{}_tv_y\tan({}^C_t\theta)$, and apply the control law to the steady state equilibrium, giving ${}_tv_x^{ss} = K_x\,\Delta x^{ss}$, then the result of equation 3.2 directly follows from these latter two equations.

Figure 3.7: One dimensional position control loop with feedforward
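A minimal time domain simulation of this one-dimensional loop, given below, illustrates equations 3.2 and 3.3 numerically: the robot is modelled as a pure integrator and the contour as a ramp in the desired tx-position. The gain, speed and angle values are illustrative, and the sign of the error simply follows the convention chosen for the simulated ramp.

```python
import math

Kx, vy, Ctheta = 1.0, 10.0, math.radians(10)   # gain [1/s], ty-speed [mm/s], contour angle
dt, T = 0.001, 20.0                            # integration step and duration [s]

def simulate(use_feedforward):
    x, xd = 0.0, 0.0                 # task frame and contour tx-positions [mm]
    for _ in range(int(T / dt)):
        xd += vy * math.tan(Ctheta) * dt       # straight contour at a fixed angle
        v_c = Kx * (xd - x)                    # proportional control law
        if use_feedforward:
            v_c += vy * math.tan(Ctheta)       # tangent based feedforward (eq. 3.3)
        x += v_c * dt                          # robot behaves as an integrator
    return xd - x                              # tracking error once transients have died out

print(simulate(False))   # magnitude vy*tan(Ctheta)/Kx ≈ 1.76 mm, cf. eq. 3.2
print(simulate(True))    # ≈ 0 mm: the feedforward removes the steady state error
```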

Rotating task frame / straight contour: A last way to eliminate the tracking error in following a straight line is the use of a rotating instead of a non-rotating task frame. This fundamentally changes the setup. Here, the controller regulates both the position (in the tx-direction) and the orientation (about the tz-direction) of the task frame, while moving with a fixed velocity in the ty-direction of the task frame. Hence, the tx-position control loop of figure 3.5 is augmented with a tθ-orientation control, given in figure 3.8.

Figure 3.8: One dimensional control loop for the TF orientation

Controlling the orientation of the task frame changes the forward direction, i.e. the ty-direction, in time, hereby decreasing the angle ${}^C_t\theta$. When ${}^C_t\theta$ becomes zero, the task frame no longer rotates and the steady state tracking error in the tx-direction will also become zero. This follows from equation 3.2 with ${}^C_t\theta = 0$.

Rotating task frame / circular contour: As previously mentioned, the 2D control of a rotating task frame contains two control loops: the tx-position control loop of figure 3.5 with proportional control constant Kx and the tθ-orientation control loop of figure 3.8 with proportional control constant Kθ. The ty-velocity is fixed.

The control objective is to move along the contour with constant ty-velocity while keeping the ty-direction tangent to the contour and to minimize the error in the tx-direction.

Figure 3.9: 2D path (left) and time response (right) for the proportional control of the orientation of the task frame following an arc shaped contour with fixed (desired) tangent velocity.

If the path to be followed is arc shaped (with radius r) or, in other words, if the desired orientation of the task frame increases linearly with time, as shown in figure 3.9, then a steady state orientation error, being ∆θss, exists, equal to

$$\Delta\theta^{ss} \equiv \lim_{t\to\infty}\left({}^C_t\theta^d(t) - {}^C_t\theta^m(t)\right) = \lim_{s\to 0}\, s\left(\frac{s}{s+K_\theta}\cdot\frac{-{}_tv_y}{r\,s^2}\right) = \frac{-{}_tv_y}{r\,K_\theta}, \qquad (3.4)$$

with $s/(s+K_\theta)$ the closed loop transfer function from input to orientation error and $-{}_tv_y/(r\,s^2)$ the input signal.

The orientation control loop is independent of the position control loop, since moving the task frame in the tx-direction does not change the angle ${}^C_t\theta$. In contrast, the position measurement (and control) depends on the orientation of the task frame, and thus on the orientation control loop.

Once the orientation control loop reaches an equilibrium, ${}^C_t\theta$ no longer changes and is equal to ∆θss. Hence, in steady state, the position control loop sees the contour at a constant angle ∆θss. This is equivalent to the position control of a non-rotating task frame with a straight contour. The position tracking error6 thus follows from equations 3.2 and 3.4:

$$\Delta x^{ss} = \frac{-{}_tv_y\tan(\Delta\theta^{ss})}{K_x} \approx \frac{-{}_tv_y\,\Delta\theta^{ss}}{K_x} = \frac{{}_tv_y^2}{r\,K_\theta\,K_x}. \qquad (3.5)$$

Figure 3.10: 2D path of a rotating task frame, following an arc shaped contour with radius 100 mm, without feedforward control, with Kx = 1 [1/sec], Kθ = 1 [1/sec] and tvy = 10 mm/sec

The simulation results, shown in figure 3.10, confirm this equation. The actual path of the task frame in steady state follows from three simultaneous, fixed velocities, being ${}_tv_y$, ${}_tv_x^{ss}$ and ${}_t\omega^{ss}$:

$${}_tv_x^{ss} = K_x\,\Delta x^{ss} \approx \frac{{}_tv_y^2}{r\,K_\theta} \quad\text{and}\quad {}_t\omega^{ss} = K_\theta\,\Delta\theta^{ss} = \frac{-{}_tv_y}{r}. \qquad (3.6)$$

Equation 3.6 is consistent with the results of equations 3.5 and 3.4. Appendix A, section A.6, describes the simulation scheme for the arc shaped contour following control with a rotating task frame and gives a detailed analysis of the actual task frame motion in steady state. The simulation results, of which some are given in figure 3.11, correspond to the theoretical equations. The transient behaviour of the orientation tracking error is of first order. The position tracking error shows a second order behaviour.

Figure 3.11: Transient behaviour of position and orientation tracking errors in following an arc with radius 100 mm, without feedforward control, with Kx = 1 [1/sec], Kθ = variable [1/sec] and tvy = 10 mm/sec

If we add an additional integrator to the control loop of figure 3.8, the steady state orientation tracking error ∆θss will become zero, provided the stability of the control loop is maintained. However, a better solution to eliminate any steady state tracking error, not affecting the stability, is again the use of feedforward. Adding curvature based feedforward to the orientation control loop, as shown in figure 3.12, eliminates the orientation tracking error. The feedforward component

$${}_t\omega^{ff} = \frac{-{}_tv_y}{r} = \kappa\,{}_tv_y \qquad (3.7)$$

keeps the task frame tangent to the circular path, once this path is reached. This too is confirmed by simulations. To implement this feedforward, the curvature κ must be measurable, which is the case with a vision sensor (or the curvature must be known from a model).
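The planar simulation sketched below (an illustrative reconstruction, not the simulation scheme of appendix A) reproduces this behaviour: without feedforward the steady state errors of equations 3.4 and 3.5 appear in magnitude, and adding the curvature based feedforward drives both errors to zero. Geometry, gains and sign conventions are assumptions of the sketch.

```python
import math

Kx, Ktheta = 1.0, 1.0          # proportional gains [1/s]
vy, r      = 10.0, 100.0       # tangential speed [mm/s] and contour radius [mm]
dt, T      = 0.001, 20.0

def simulate(curvature_feedforward):
    x, y, theta = 0.0, 0.0, 0.0          # absolute task frame pose, starting on the contour
    cx, cy = 0.0, r                      # center of the circular contour
    dist_err = theta_err = 0.0
    for _ in range(int(T / dt)):
        dx, dy = x - cx, y - cy
        dist_err = math.hypot(dx, dy) - r             # radial (tx) error, positive outside the arc
        tangent = math.atan2(dx, -dy)                 # direction of the local contour tangent
        theta_err = (tangent - theta + math.pi) % (2 * math.pi) - math.pi
        v_x = Kx * dist_err                           # position law (tx points towards the center)
        w_z = Ktheta * theta_err + (vy / r if curvature_feedforward else 0.0)  # eq. 3.7 in magnitude
        x += (vy * math.cos(theta) - v_x * math.sin(theta)) * dt
        y += (vy * math.sin(theta) + v_x * math.cos(theta)) * dt
        theta += w_z * dt
    return dist_err, theta_err

print(simulate(False))  # magnitudes ≈ vy**2/(r*Ktheta*Kx) = 1 mm and vy/(r*Ktheta) = 0.1 rad
print(simulate(True))   # both errors ≈ 0
```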


Figure 3.12: Orientation control loop with feedforward

According to equation 3.5, a zero orientation tracking error also implies that the position tracking error ∆xss becomes zero in steady state.

Conclusion: This section describes the steady state errors involved in planar contour following. Since the control loops (for position and orientation) contain one integrator, there are constant tracking errors for the non-rotating task frame following a straight contour and for the rotating task frame following an arc shaped contour.

The use of feedforward can eliminate the steady state tracking errors without affecting the control stability. For the position control loop the feedforward component is tangent based. For the orientation control loop the feedforward component is curvature based. Both orientation and position tracking errors become zero in steady state when following a contour with constant curvature by adding curvature based feedforward.

Although the static characteristics are derived only for the case of 2D planar contour following (3DOF), the conclusions are, in view of the one-dimensional control loop analysis, equally valid for 3D or 6DOF cases8.

8 This conclusion does not apply to the sensors, e.g. torsion (in 3D) is not measurable by a (mono-)vision system.

3.3 Dynamic control aspects

The following investigates the dynamic characteristics of the vision and force control loops. The theoretical control problem is simplified to the design of one-dimensional control loops.

In contrast to most robot controllers, our robot accepts velocity set-points instead of position set-points. This makes a trajectory generation, as is well known in robotics, superfluous. In our approach, a perceived error generates a velocity command which moves the robot towards the target. This automatically results in a target following or contour following behaviour without the complexities of an explicit trajectory generator [15, 17, 18].

In addition, using velocity set-points as input lowers the order of the control loop: after all, when using velocity set-points as input, the robot itself forms a natural integrator, hereby eliminating steady state errors in the loop that encloses the robot. As shown in the previous sections, there is no need for an additional integrating action in the controller, if feedforward is used. In a position controlled robot, on the other hand, the robot itself is no longer an integrator. To eliminate steady state errors, the outer control loop needs to possess an integrator. The redundant levels of control add to system complexity and may have impact on the closed-loop performance [14, 17].

Vision and force control are clearly different. The bandwidth of the vision control loop is limited by the video frame rate of 25 Hz9. In theory, the bandwidth of an analogue force sensor is much larger. In practice, the force control loop runs at a frequency of only 100 Hz. Unfortunately, the available computational power does not allow higher sample frequencies. Moreover, the force measurements are (very) noisy. This enforces the need for a filter, which is implemented digitally at the same sample frequency of 100 Hz. The maximum possible cut-off frequency of the force measurement filter is hence 50 Hz. As section 3.4 will show, the noise level involved in the experimental force measurements compels the use of cut-off frequencies of 10 Hz or lower. For both vision and force control loops, the previous arguments mean that the dynamics of the inner joint control loop of the robot are negligible. Hence, as was the case in section 3.2, for the dynamic analysis too, the complete robot may again mainly be modelled as an integrator.

3.3.1 Vision control loop

Figure 3.13 shows the simplified one-dimensional digital vision control loop. The sample frequency is equal to the video rate, being 25 Hz. This means that every 40 ms, which is the sampling period Tvs, the input for the robot is updated. The proportional control gain is Kvs.

9 With separate use of odd and even video frames, the frame rate becomes 50 Hz.


Figure 3.13: One-dimensional vision control loop at 25 Hz (Tvs = 40 ms)

The robot (with joint controller) is modelled as an integrator (R(s) = 1/s). Using a zero order hold function on the input of the robot, the discrete equivalent of the robot (R(z)) equals

$$R(z) = T^{vs}/(z-1), \qquad (3.8)$$

with z the Z-transform variable. The vision sensor VS(z) is modelled by a (pure) delay. The total delay in the computation of the image features consists of two components: the image grabbing delay on the one hand and the image processing delay on the other. Grabbing a full size image takes up one time period Tvs. The processing delay varies from a few ms to maximum one time period Tvs. In order to be able to work with a fixed delay, the maximum total delay of 2·Tvs is taken (see also figure E.1 of appendix E), hence

$$VS(z) = z^{-2}. \qquad (3.9)$$

Figure 3.14 gives the root locus for the vision control loop. For a gain Kvs = 1 [1/sec], the Gain Margin (GM) is 15.45. The closed loop response is critically damped for Kvs = 3.7. For a gain Kvs = 5 (GM ≈ 3), the closed loop step response shows 4% overshoot10. Taking the simplification of the model into account, we can conclude that the maximum (desirable) proportional control gain in a vision direction is 5/sec. Good values range from 2 to 5.

10 Defining the bandwidth of the control loop as the 3dB attenuation frequency, the closed loop bandwidth is 1 Hz for Kvs = 3.7 and 1.75 Hz for Kvs = 5.
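As an illustrative check of these numbers, the short simulation below implements the same discrete loop model (integrator plus a two sample measurement delay at 25 Hz) as a difference equation and reports the step response overshoot for a few gains. It is a sketch of the model of figure 3.13, not of the real vision system.

```python
Tvs = 0.04                      # sampling period [s] (25 Hz)

def step_overshoot(Kvs, n_steps=400):
    """Unit step in the desired position; return the resulting overshoot."""
    x = 0.0
    measured = [0.0, 0.0]       # two-sample measurement delay (grabbing + processing)
    history = []
    for _ in range(n_steps):
        v_c = Kvs * (1.0 - measured.pop(0))   # proportional law on the delayed error
        measured.append(x)
        x += Tvs * v_c                        # robot: discrete integrator (zero order hold)
        history.append(x)
    return max(history) - 1.0

for Kvs in (2.0, 3.7, 5.0):
    print(Kvs, round(step_overshoot(Kvs), 3))
# expected: no overshoot up to roughly Kvs = 3.7 and a few percent overshoot at Kvs = 5
```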


Figure 3.14: Root locus for the digital vision control loop of figure 3.13

3.3.2 Force control loop

Figure 3.15 shows the simplified one-dimensional digital force control loop. The force sample frequency is 100 Hz or, in other words, every Tf = 10 ms the input to the robot is updated. The proportional control gain is Kf. The robot R(z) is (again) modelled as an integrator, giving R(z) = Tf/(z − 1) as the zero order hold equivalent. The output of the robot is a position. The relation between this position and the measured force is the contact stiffness k [N/mm]. To make the control gain Kf independent of the contact stiffness, the force error is multiplied by the contact compliance kinv, which is the inverse of the stiffness (kinv = k⁻¹).

The feedback loop contains a unit delay (Tf) and a digital filter. The unit delay incorporates any delay in the control loop, including the force measurement delay and the force processing. It mainly reflects some of the dynamics of the inner servo control loop which are, in contrast to the vision control loop, not completely negligible in the force control case, because of the higher sample frequency of 100 Hz (compared to 25 Hz in the vision control case)11. The digital filter is a 2nd order Butterworth filter with cut-off frequency at 2.5 Hz. The need for this filter is explained in section 3.4, which discusses the force sensor.

Figure 3.15: One-dimensional force control loop at 100 Hz (Tf = 10 ms)

Figure 3.16 gives the root locus for the force control loop. For a gain Kf = 1 [1/sec], the Gain Margin (GM) is 16.9. The closed loop response approximates a critically damped behaviour (no overshoot) for Kf = 4 (note 12). For a gain Kf = 5, the closed loop step response shows 8% overshoot. In view of the simplified robot model, the maximum (desirable) proportional control gain is 4/sec. Good values range from 2 to 4.
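The following sketch rebuilds this loop model in discrete time (integrator, contact stiffness, unit delay and the 2.5 Hz Butterworth filter designed with scipy) and checks the step response overshoot for a few gains. Stiffness, step size and sign conventions are illustrative assumptions of the sketch, not values from the experimental setup.

```python
from scipy.signal import butter

fs, k, k_inv = 100.0, 50.0, 1.0 / 50.0      # sample rate [Hz], contact stiffness [N/mm], compliance [mm/N]
Tf = 1.0 / fs
b, a = butter(2, 2.5 / (fs / 2.0))          # 2nd order low-pass Butterworth, 2.5 Hz cut-off

def step_response(Kf, F_des=10.0, n=600):
    """Simulate a step in the desired contact force and return the overshoot."""
    x = 0.0                                  # robot position [mm]
    u1 = u2 = y1 = y2 = 0.0                  # filter state (past inputs and outputs)
    d = 0.0                                  # unit delay in the feedback path
    peak = 0.0
    for _ in range(n):
        F = k * x                            # measured contact force [N]
        y = b[0]*F + b[1]*u1 + b[2]*u2 - a[1]*y1 - a[2]*y2   # Butterworth difference equation
        u2, u1, y2, y1 = u1, F, y1, y
        v_c = Kf * k_inv * (F_des - d)       # proportional force law on the delayed, filtered force
        d = y                                # unit delay: next cycle uses this sample's filter output
        x += Tf * v_c                        # robot modelled as an integrator
        peak = max(peak, F)
    return peak / F_des - 1.0

for Kf in (2.0, 4.0, 5.0):
    print(Kf, round(step_response(Kf), 3))
# expected: hardly any overshoot around Kf = 4 and roughly 8 % at Kf = 5
```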

If the cut-off frequency of the digital filter increases (for example to 5 Hz or 10 Hz), the (theoretical) gain margin of the control loop will also increase and higher control gains are possible. But the noise level will also grow. This makes the practical control restless, especially if the control gains are set too high. Even more, the dynamics of the filter will (theoretically) no longer outshine the robot dynamics, such that these would need to be modelled more accurately. The overall stability of the control loop will not change that much and the (newly) found control gains will come close to the ones given previously.

11 The introduction of a 'full' unit delay in the control loop may be a slight overestimation of the real delay and dynamics, but it will surely give a safe controller design.

12 This corresponds to a closed loop bandwidth equal to 1.3 Hz for Kf = 4.

Figure 3.16: Root locus for the digital force control loop of figure 3.15

3.4 Force sensing

The used force sensor is a strain-gauge force sensor, which senses the acting forces by measuring the deformation of the sensor. The sensor output is a wrench: a 6×1 vector containing 3 forces and 3 moments. The measured forces, from environment onto the robot, in the force sensor frame are recalculated to their equivalents in the task frame by the force screw transformation ${}^{fs}_tS_f$, as explained in section A.2.

A force sensor calibration determines the scale factors (between actual measured signals and corresponding forces), the mass of the tool and the center point of this mass. The latter two parameters are used to compensate gravity forces.
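Purely as an illustration of how the calibrated tool mass and center of mass can be used, the sketch below subtracts the tool weight from a measured wrench expressed in the sensor frame. The mass, center of mass, orientation and sign conventions are hypothetical assumptions; the actual compensation and the screw transformation to the task frame are described in appendix A.

```python
import numpy as np

m = 0.8                                  # tool mass [kg] (assumed calibration result)
c = np.array([0.0, 0.0, 0.05])           # tool center of mass in the sensor frame [m]
g_world = np.array([0.0, 0.0, -9.81])    # gravity in the world frame [m/s^2]

def compensate_gravity(wrench_meas, R_ws):
    """Subtract the tool weight from a wrench [Fx, Fy, Fz, Mx, My, Mz] measured in the
    sensor frame; R_ws rotates sensor frame vectors into the world frame."""
    F_meas, M_meas = wrench_meas[:3], wrench_meas[3:]
    F_grav = R_ws.T @ (m * g_world)      # tool weight expressed in the sensor frame
    M_grav = np.cross(c, F_grav)         # moment of that weight about the sensor origin
    return np.hstack((F_meas - F_grav, M_meas - M_grav))

# example with the sensor axes aligned with the world axes
wrench = np.array([0.3, -0.1, -7.5, 0.02, -0.01, 0.0])
print(compensate_gravity(wrench, np.eye(3)))
```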


Figure 3.17: Block diagram of force sensor

For reasons of completeness, figure 3.17 gives the block diagram of the force sensor13. If strain gauges are used, the force sensor compliance ${}^{fs}k^{inv}$ is only a fraction of the contact compliance kinv previously introduced in the force control loop of figure 3.15. The sensor compliance ${}^{fs}k^{inv}$ needs to be very small (stiff sensor) in order to minimize the loading effects of the sensor onto the system, as is well known from measurement theory.

Figure 3.18 shows the noise level involved in the force measurements. The top figure gives the unfiltered measurement data. The peak to peak noise level is about 3.5 N. Peaks on the unfiltered forces up to 10 N have been measured (with the robot actuators on, but standing still).

Due to the high noise level a low-pass filter with cut-off frequency at maximum 10 Hz is necessary. Frequently, the cut-off frequency is even decreased to 5 Hz or 2.5 Hz. The digitally implemented low pass filter is a 2nd order Butterworth filter. Figure 3.18 (middle & bottom) shows the remaining noise level in the force measurement using these filters. Figure 3.19 shows the corresponding Bode plots (and equations) of three Butterworth filters with cut-off frequencies at 10 Hz, 5 Hz and 2.5 Hz for a sample rate of 100 Hz.

Without a filter, the robot behaves restlessly and is not adequately controllable. On the other hand, the cut-off frequency is so low that the proportional control gains highly depend on the filter (see the previous section).

13 Figure 3.17 is simplified in one aspect. In the transmission of the measured signal to the controller unit, the digital data is made analogue, connected to a standard analogue input of the controller board and then resampled. This makes it possible to use different (force) sensors in the same 'way', with the disadvantage, however, that the signal quality degrades.


Figure 3.18: Noise level of force measurement: (top) unfiltered, (middle & bottom) filtered with cut-off frequency set to 10 Hz and 2.5 Hz

Figure 3.19: Bode plots of 2nd order digital Butterworth filters with cut-off frequency at (a) 10 Hz, (b) 5 Hz or (c) 2.5 Hz at a sampling rate of 100 Hz


3.5 Tracking error identification

This section discusses the identification of the tracking error in detail. The tracking error identification is either based on measured forces or based on (commanded) velocities. In the former case, we call the corresponding control action tracking on forces, in the latter case, tracking on velocities.

Figure 3.20: Tracking error identification based on forces (left) or based on velocities (right)

Take the situation of figure 3.20. In figure 3.20-left, the orientation error of the task frame is measured based on forces. If the tx-direction is not tangent to the contour, then contact forces occur in the tx- as well as in the ty-direction. They are ${}_tF_x$ and ${}_tF_y$ [N] respectively. The orientation error ∆θ shown in figure 3.20-left then follows from

$$\Delta\theta = \arctan\left(\frac{-{}_tF_x}{{}_tF_y}\right). \qquad (3.10)$$

In figure 3.20-right, the orientation (tracking) error is measured by examining the task frame velocities. Moving the task frame in the tx-direction, with a velocity ${}_tv_x$, causes a change in contact force due to the orientation error ∆θ. To maintain the contact force in the ty-direction, the task frame has to move towards the contour, let's say with a velocity ${}_tv_y$14. The orientation error ∆θ then follows from

$$\Delta\theta = \arctan\left(\frac{{}_tv_y}{{}_tv_x}\right). \qquad (3.11)$$

14 The velocity ${}_tv_y$ in the force controlled direction equals $K_y^f\,k_y^{inv}\,({}_tF_y^m - {}_tF_y^d)$ and thus depends, in essence, on the measured forces.


If the tx-direction lies tangent to the contour, the orientation error is equal to zero.

Identification based on velocities is badly conditioned when moving slowly (tvx ≈ 0 in equation 3.11) and is disturbed by the deformation of the tool. On the other hand, friction forces, which are difficult to model, disturb the identification of the tracking error based on forces. Both identification methods are noise sensitive. In the velocity based identification, however, we use the commanded tx-velocity instead of the measured one, hereby making the identification less noise sensitive. Hence, if the tangent velocity is not too small, the velocity based identification of the tracking error will give the best results.
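A small sketch of both identification schemes is given below; the helper names and the conditioning threshold are illustrative, and atan2 is used instead of a plain division so the expressions stay defined for small denominators.

```python
import math

def tracking_error_from_forces(F_x, F_y):
    """Orientation error from the measured contact forces (eq. 3.10)."""
    return math.atan2(-F_x, F_y)

def tracking_error_from_velocities(v_x, v_y, v_min=1.0):
    """Orientation error from the commanded tx-velocity and the velocity in the force
    controlled ty-direction (eq. 3.11); below v_min the problem is badly conditioned
    and no update is returned."""
    if abs(v_x) < v_min:
        return None
    return math.atan2(v_y, v_x)

print(tracking_error_from_forces(F_x=-1.0, F_y=20.0))      # ≈ 0.05 rad
print(tracking_error_from_velocities(v_x=20.0, v_y=1.0))   # ≈ 0.05 rad
```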

3.6 Relative area contour detection

The following describes the used low level image processing techniques: the relative area contour detection (section 3.6) and the ISEF edge detection (section 3.7).

Figure 3.21: Block diagram of relative area contour detection (left) and definition of contour feature parameters (right)

The relative area (or pixel-weighting) method is a direct method. It measures the image features of interest directly from the thresholded image without any intermediate steps. These image features are the distance to the contour from the center of the image Cxp [pix] and the angle of the contour Cθ [rad], as shown in figure 3.21-right. Figure 3.21-left shows the single block diagram of this method, emphasizing its direct character15.

15 This is in contrast to the ISEF edge detection method, described in section 3.7, for which the found edge positions are only an intermediate step in the measurement of the contour.


Figure 3.22: The position Cxp of the straight contour is proportional to the area abcd (left); the orientation of the contour Cθ is related to the area ABC (right).

Figure 3.22 illustrates the principle. Under the assumption of a straight contour which enters the image through the bottom side and leaves the image through the top side, Cxp and Cθ follow from the weighting of 'positive' and 'negative' object or environment pixels16. The object is assumed to be dark, the environment bright17.

Let I(u, v) be the brightness (also intensity or grey-level value) of pixel (u, v), with I = 0 for a black pixel and I = 255 for a white one. Define the functions Obj(u, v) and Env(u, v), which indicate whether pixel (u, v) belongs to the object (dark) or to the environment (bright) respectively, as

$$Obj(u,v) = \begin{cases} 1 & \text{if } I(u,v) < \text{threshold}, \\ 0 & \text{if } I(u,v) \geq \text{threshold}, \end{cases} \qquad (3.12)$$

and

$$Env(u,v) = 1 - Obj(u,v) = \begin{cases} 0 & \text{if } I(u,v) < \text{threshold}, \\ 1 & \text{if } I(u,v) \geq \text{threshold}. \end{cases} \qquad (3.13)$$

The threshold value is chosen in such a way that it clearly separates object and environment, e.g. equal to 120 (medium grey).

16 'Pixel' originated as an acronym for PICture'S ELement.

17 Note that such a condition can always be achieved by the use of back-lighting.


Then, the distance Cxp, for an xs by ys image, is given by:

$${}^Cx_p = \frac{1}{2\,y_s}\left(\sum_{u,v=1,1}^{x_s,y_s} Obj(u,v) \;-\; \sum_{u,v=1,1}^{x_s,y_s} Env(u,v)\right). \qquad (3.14)$$

Equation 3.14 counts the number of pixels in the current image belonging to the object minus those belonging to the environment, which corresponds to the area of the parallelogram abcd in figure 3.22-left. The area of abcd equals 2Cxp·ys. This explains equation 3.14.

The angle Cθ is related to the area of ABC in figure 3.22-right. Cθ is in fact equal to half the top angle of the triangle ABC. The area of ABC follows from the difference in the number of object pixels in the top half of the image and those in the bottom half of the image. This gives

$${}^C\theta = -\arctan\left(\frac{4}{y_s^2}\sum_{u=1}^{x_s}\left(\sum_{v=1}^{y_s/2} Obj(u,v) \;-\; \sum_{v=y_s/2+1}^{y_s} Obj(u,v)\right)\right). \qquad (3.15)$$
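As an illustration, the following numpy sketch evaluates equations 3.12 to 3.15 on a synthetic thresholded image (dark object on the left, straight contour from top to bottom). The image size, threshold and test contour are arbitrary choices for the example.

```python
import numpy as np

def relative_area_features(image, threshold=120):
    """Return (Cxp [pix], Ctheta [rad]) for a grey value image of shape (ys, xs)."""
    ys, xs = image.shape
    obj = (image < threshold).astype(int)              # eq. 3.12; Env = 1 - Obj
    env = 1 - obj
    Cxp = (obj.sum() - env.sum()) / (2.0 * ys)         # eq. 3.14
    top = obj[: ys // 2, :].sum()                      # object pixels in the top half
    bottom = obj[ys // 2:, :].sum()                    # ... and in the bottom half
    Ctheta = -np.arctan(4.0 * (top - bottom) / ys**2)  # eq. 3.15
    return Cxp, Ctheta

# synthetic test: straight contour tilted 5 deg, about 10 pix right of the image center
ys, xs = 64, 64
v, u = np.mgrid[0:ys, 0:xs]
contour_x = xs / 2 + 10 + np.tan(np.radians(5)) * (v - ys / 2)
image = np.where(u < contour_x, 0, 255)                # dark object left of the contour
print(relative_area_features(image))                   # roughly (10.5, 0.087 rad): offset and 5 deg tilt
```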

The main advantage of the relative area method is its robustness. No previous filtering of the image is needed because the summation over all object pixels will average out the effect of faulty pixels. It is also very easy to implement and quite fast.

There are, however, important restrictions. First, this method needs a threshold value to separate the pixels belonging to the object from those belonging to the environment. This is a major drawback when used under changing lighting conditions.

Second, the accuracy of the position and particularly the angle measurement is limited due to the limited size of the image and the inevitable pixel quantization. Figure 3.23 gives an example. The left part of figure 3.23 shows the (simulated) measured angle versus the actual angle of the computer generated contour. It further shows the resulting error for angles going from 0° to 25°. The image size is 64 by 64 pixels. The right part of figure 3.23 gives the theoretical minimum (positive) measurable angle according to equation 3.15 as a function of the image height ys. The smaller the image, the less accurate the orientation measurement will be.

Third, as previously mentioned, equations 3.14 and 3.15 are based on the assumption that the contour enters and leaves the image through the top and bottom sides18. This can be checked by looking at the four corners of the image. If the assumption is not true, the problem is solved by taking a sub-image for which it is true.

Figure 3.23: Examples of the accuracy of the relative area contour measurement: (left) measured contour angle and error versus angle of computer generated image; (right) absolute value of the minimum measurable angle as a function of the image height ys

Fourth, equations 3.14 and 3.15 are valid for vertical contours with the object lying to the left in the image. Equivalent equations for horizontal contours and/or the object to the right can be deduced.

Finally, the calculated parameters refer to a line. They are therefore only valid for a straight contour, resulting in a less accurate measurement (and hence positioning) with curved contours. An image of a straight contour, however, is point symmetric w.r.t. any frame centered at the contour. This property can be checked with the following relation (see section A.4 for the full deduction of this equation): an image with a vertical contour is point symmetric if

$$\sum_{v=1}^{y_s}\left(\sum_{u=1}^{\mathrm{floor}(x_s/2 + {}^Cx_p)} Obj(u,v) - \sum_{u=\mathrm{floor}(x_s/2 + {}^Cx_p)+1}^{x_s} Obj(u,v)\right) + \left|\sum_{u=1}^{x_s}\left(\sum_{v=1}^{y_s/2} Obj(u,v) - \sum_{v=y_s/2+1}^{y_s} Obj(u,v)\right)\right| \simeq \frac{(x_s + 2\,{}^Cx_p)\,y_s}{2}. \qquad (3.16)$$

18 Although equations 3.14 and 3.15 may not be valid for contours with large angles, the incorrect measurement will not affect the correctness of the positioning. The generated signal (with correct sign) will still result in a movement of the robot towards the desired image, containing a vertical contour, for which the equations are valid again.

This equation, possibly repeatedly applied to different sub-images, will validate or reject the calculated values Cxp and Cθ. In the latter case a different method or a smaller image, for which a straight line approximation is valid, has to be considered. The resolution of Cθ, however, will further deteriorate when decreasing the image size because of the mentioned pixel quantization (see figure 3.23).

All of this implies that on most images the proposed relative area method can only be applied locally, on small sub-images. This is the case in chapter 5, which presents a successful experiment using this relative area method. The aim of this experiment is the alignment, at a given distance, of an uncalibrated eye-in-hand camera with an arbitrarily placed rectangle for the full 6 degrees of freedom.

3.7 ISEF edge detection

This section briefly discusses the used ISEF edge detector. As shown in figure 3.24, the ISEF edge detection is the first step in obtaining the necessary control data. The input for the ISEF edge detection is the raw image, the output is a set of contour edge points [Xp, Yp], which in turn are the basis for the contour modelling, described in the next section, and the control data extraction. The number of contour points is chosen to be sufficient (e.g. 7 to 11 points) for a good subsequent fit. Since the image features are computed in several steps, we call this approach an indirect method, in contrast to the (direct) relative area method of the previous section.

Figure 3.24: Block diagram of control data computation: the first step is the ISEF edge detection

The ISEF edge detector, proposed by Shen and Castan [70], consists of an Infinite Symmetric Exponential (smoothing) Filter (ISEF, see figure 3.25), which efficiently suppresses noise, followed by a differential element to detect the changes. It computes the first and second derivatives of the image brightness (or intensity) function (I). The location of a step edge corresponds to an extremum in the intensity gradient or to a zero crossing of the second derivative. The recursive realization of the ISEF edge detector results in a line convolution operator which works on a row or column of grey values (I(j)) according to the following equations:

$$\begin{aligned}
y_{rd}(j) &= (1-b_{isef})\,I(j) + b_{isef}\,y_{rd}(j+1), & j &= l-1 \ldots 1\\
y_{ld}(j) &= (1-b_{isef})\,I(j) + b_{isef}\,y_{ld}(j-1), & j &= 2 \ldots l\\
D_1(j) &= y_{rd}(j+1) - y_{ld}(j-1), & j &= 3 \ldots l-2\\
D_2(j) &= y_{rd}(j+1) + y_{ld}(j-1) - 2\,I(j), & j &= 3 \ldots l-2
\end{aligned} \qquad (3.17)$$

Figure 3.25: Infinite symmetric exponential filter (ISEF)

y_ld and y_rd are the left and right convolutions respectively; D1 is the first derivative and D2 is the second derivative; j is the pixel index on a row (or column); l is the length of the pixel row (or column); I(j) is the brightness of pixel j and b_isef is the exponential constant which shapes the exponential filter, shown in figure 3.25. Changing b_isef allows a trade-off between noise suppression and edge location accuracy. Because the algorithm only uses line convolutions on a limited number of pixel rows, the real-time processing restrictions are easily met.


Figure 3.26: Edge localization by the zero crossing of the second derivative: (left) brightness function, first and second derivatives, (right) linear interpolation to pinpoint the exact zero crossing (with sub-pixel accuracy)

Figure 3.27: Noise level of ISEF edge detector

Figure 3.26-left gives an example of the edge detection. The (single) edge is pinpointed by the zero crossing of the second derivative at the extremum of the first derivative of the image brightness function. If, due to the pixel quantization, the second derivative has no exact zero value, then the zero crossing follows from a linear interpolation as shown in figure 3.26-right. This results in sub-pixel accuracy for the edge localization.

Figure 3.27 shows the measured noise level for the implemented ISEF edge detector: a fixed image (with still camera) is grabbed and processed 500 times in a row; for each image, the ISEF detector measures the shown x-position of the edge on a fixed image row; the noise level is in the order of ±0.15 pixels.
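A compact sketch of these steps is given below: the recursive convolutions of equation 3.17 on one pixel row, followed by the sub-pixel localization of the zero crossing of the second derivative. The filter constant, the initialization of the recursions and the synthetic test row are assumptions of the example.

```python
import numpy as np

def isef_edge(row, b=0.9):
    """Return the sub-pixel position of the strongest step edge on one pixel row."""
    I = np.asarray(row, dtype=float)
    l = len(I)
    y_rd, y_ld = I.copy(), I.copy()             # recursions initialized with the row itself
    for j in range(l - 2, -1, -1):              # right-to-left smoothing
        y_rd[j] = (1 - b) * I[j] + b * y_rd[j + 1]
    for j in range(1, l):                       # left-to-right smoothing
        y_ld[j] = (1 - b) * I[j] + b * y_ld[j - 1]
    j = np.arange(2, l - 2)
    D1 = y_rd[j + 1] - y_ld[j - 1]              # first derivative (eq. 3.17)
    D2 = y_rd[j + 1] + y_ld[j - 1] - 2 * I[j]   # second derivative (eq. 3.17)
    k = int(np.argmax(np.abs(D1)))              # extremum of the intensity gradient
    # linear interpolation of the zero crossing of D2 next to that extremum
    if k + 1 < len(D2) and np.sign(D2[k]) != np.sign(D2[k + 1]):
        return float(j[k]) + D2[k] / (D2[k] - D2[k + 1])
    if k > 0 and np.sign(D2[k - 1]) != np.sign(D2[k]):
        return float(j[k - 1]) + D2[k - 1] / (D2[k - 1] - D2[k])
    return float(j[k])

row = np.hstack((np.full(30, 200.0), np.full(34, 40.0)))   # synthetic step edge between pixels 29 and 30
print(isef_edge(row))                                      # ≈ 29.5
```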

In contrast to the relative area algorithm, the ISEF edge detector is very sensitive to interlacing, which occurs when the camera moves very fast (in a direction normal to the contour). As figure 3.28 exemplifies, such (undesirable) interlacing effects inevitably have an impact on the accuracy of the contour measurement. Therefore, only odd image lines are used (or scanned). This avoids interlacing effects and hence improves the image processing.

Figure 3.28: Interlacing effect due to fast (sidewards) moving camera

3.8 Contour modelling and control data extraction

The previous section discusses the detection of edge points on a contour. This section investigates the contour modelling and control data extraction steps, shown in figure 3.29. The model is represented by the parameters of a function, which is fitted through the set of edge points. The sought control data, such as contour pose and curvature, are derived from the analytical representation of the contour.


Figure 3.29: Block diagram of contour modelling and control data extraction steps

Appendix B presents several contour models and evaluates them. The evaluation criteria (similar to those of section 2.2) are:

1. the accuracy and robustness of the fitted function w.r.t. the real contour,

2. the suitability of the model as a basis for the control data computation and

3. the computational burden in obtaining the model and control data.

The following describes the tangent model and summarizes the conclusions of appendix B by comparing the tangent model to two other contour models, being a parameterized third order polynomial and parameterized cubic splines. The latter two models are chosen for this comparison because they give the best positional fits with limited model complexity.

Figure 3.30: Tangent to contour


Figure 3.31: Tangent to contour in nine points (left); corresponding least squares solution for the curvature computation (right)

The tangent model: The tangent contour model consists of (a set of) lines of the form

$$x_p = a\,y_p + b. \qquad (3.18)$$

The model parameters a and b follow from a least squares fit of a line through the set of n contour points [Xp, Yp] with Xp = [xp(1) . . . xp(n)]′ and Yp = [yp(1) . . . yp(n)]′. Normally, the contour lies 'top-down' in the image. Hence it is logical to represent the contour by x as a function of y.

If the contour points lie close together, the fitted line will approximate the tangent to the contour at the center of the data set. Figure 3.30 gives an example.

The main advantage of this model is its simplicity. The contour pose19 (position and orientation) for one single point on the contour is directly given by the model. A line, however, does not give any information about the contour curvature, at least not by itself. If, on the other hand, the tangents to the contour are known for successive contour points, the curvature κ can be computed as the change in orientation Cθ as a function of the arc length s:

$$\kappa = \frac{d\,{}^C\theta}{ds}. \qquad (3.19)$$

In practice, an approximation for κ follows from the least squares solution of a set of m first order equations

$${}^C\theta(i) = \kappa\,s(i) + cte, \qquad i = 1 \ldots m. \qquad (3.20)$$

19 similar to the output of the relative area method shown in figure 3.21


The m (e.g. 9) pairs (Cθ(i), s(i)) lie symmetrically around the position of interest. Figure 3.31 gives an example. An experimental validation of the curvature computation according to equation 3.20 is given in appendix C.
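A small numpy sketch of this double least squares procedure is given below; the circular test data (radius 209 pixels, as in the comparison that follows) and the 9-point window are illustrative choices.

```python
import numpy as np

def tangent_fit(xp, yp):
    """Fit xp = a*yp + b (eq. 3.18) and return (a, b)."""
    A = np.column_stack((yp, np.ones_like(yp)))
    (a, b), *_ = np.linalg.lstsq(A, xp, rcond=None)
    return a, b

def curvature_fit(thetas, s):
    """Fit theta(i) = kappa*s(i) + cte (eq. 3.20) and return kappa."""
    A = np.column_stack((s, np.ones_like(s)))
    (kappa, _), *_ = np.linalg.lstsq(A, thetas, rcond=None)
    return kappa

# synthetic circular contour with radius 209 pix, sampled around its rightmost point
r = 209.0
phi = np.linspace(-0.2, 0.2, 41)
x_all, y_all = r * np.cos(phi), r * np.sin(phi)

# tangent orientation from 9-point line fits at successive contour points,
# then curvature from the change of that orientation with arc length
thetas, s = [], []
for i in range(4, len(phi) - 4):
    a, _ = tangent_fit(x_all[i-4:i+5], y_all[i-4:i+5])
    thetas.append(np.arctan(a))
    s.append(r * phi[i])                      # arc length along the circle
print(curvature_fit(np.array(thetas), np.array(s)))
# ≈ -1/209: curvature magnitude 4.78e-3 [1/pix] (sign set by the traversal direction)
```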

Comparison: Figure 3.32 compares the accuracy and suitability of three contour models. The input image, given at the top of the figure, is a computer generated circle with a radius of 209 pixels. A circular test contour is chosen since it has a known constant curvature, in this case equal to (1/209 =) 4.78·10⁻³ [pix⁻¹]. The contour is approximated by (1) tangent fits, (2) a parameterized 3rd order polynomial fit and (3) parameterized (smoothed) cubic splines. A full description of these contour models, among others, is given in appendix B.

All three models are fairly robust and give a good positional approximation of the contour. The contour plots (top figure) are hardly distinguishable. For the computed contour tangent (middle figure), both the tangent model and the splines give good results. The polynomial model gives a less accurate tangent profile. But the most important difference emerges in the computed curvature profile given in figure 3.32-bottom.

Both the polynomial fit and the splines show unacceptable deviations in the computed curvature from the real one. Only the tangent model gives satisfactory results for the curvature, in spite of the fact that the curvature is related to the second derivative of the position and thus difficult to compute accurately. The proposed (double) least squares solution clearly suppresses the explosion of noise which is inevitably related to a derivative operation.

The computational effort for the three models, leaving the parameterized character aside, is of the same order of magnitude: the tangent model uses two least squares fits, each in two unknowns with about 9 equations; a non-parameterized 3rd order polynomial model uses one least squares fit in four unknowns with up to 8 (or more) equations and straightforward formulas for tangent and curvature computation; and the cubic splines are based on a set of 17 segments, each using only straightforward formulas both for the three unknowns as well as for the tangent and curvature computation.

For a good fit on an arbitrary contour, however, both the polynomial and the splines need to be parameterized (see appendix B), hereby doubling the computational effort, since both x and y are now expressed as a function of the parameter w. Even more, parameterization introduces a new problem: the determination of the parameter value for the contour position at which the curvature needs to be computed. For example, in the determination of the curvature at the contour position y = 0, we first need to compute the value of the parameter w for which y = 0 by searching the roots of the function y = η(w). This again increases the computational effort. All of this implies that also for the computational effort, the tangent model is preferred over the other proposed models.

Figure 3.32: Computer generated circle with 209 [pix] radius (top); Computed tangent (middle) and curvature (bottom) versus arc length for the arc shaped contour of the top image based on three different models: tangent fits, a parameterized 3rd order polynomial fit in 8 contour points and parameterized (smoothed) cubic splines with 17 contour knots

Conclusion : Of all the investigated contour models, going from asimple tangent model over full second order circular or elliptic modelsto polynomials and splines possibly parameterized20, the tangent modelgives the most accurate approximation of the contour position, orien-tation and curvature at a limited computational effort, thus permittinga ‘real time’ implementation. Table 3.2 summarizes the characteristicsof the three treated contour models.

                         Tangent           Third order          Parameterized
                         model             parameterized        smoothed
                                           polynomial           cubic splines

Accuracy
  Position               Very good         Good                 Very good
  Tangent                Very good         Poor                 Good
  Curvature              Good              Bad (unacceptable deviations)

Order of points          Important for     Not important        Very important
                         curvature                              - critical

Pose computation         Least squares     Twice least          Simple equations
                         solution          squares solution     for each segment

Curvature computation    Least squares     Simple equations, but extra
                         solution          parameter search

Table 3.2: Comparison of the characteristics for three contour models.

²⁰ See appendix B for a full discussion of these models.


Figure 3.33: Used camera model with definition of camera frame, camera parameters and image plane coordinates

3.9 Camera model and calibration

Finally, the camera calibration, explained in this section, bridges the remaining gaps between image and real world coordinates on the one hand and between camera frame and task frame signals on the other.

Used camera model : The used camera model is the perspective view, pin-hole model shown in figure 3.33. The following equations describe the mapping of a point to its corresponding camera frame position by the pin-hole model:

$$ \begin{cases} {}^{ap}_{cam}x = -\,{}^{ap}_{cam}z\;{}^{ap}x_p\,\mu_p/f \\ {}^{ap}_{cam}y = -\,{}^{ap}_{cam}z\;{}^{ap}y_p\,\mu_p/f \end{cases} \qquad (3.21) $$

with f the focal length [mm], µp the effective pixel dimension [mm/pix] of the square pixels, $({}^{ap}x_p,\,{}^{ap}y_p)$ the image plane coordinates of an arbitrary point ap [pix] and ${}^{ap}_{cam}P = ({}^{ap}_{cam}x,\,{}^{ap}_{cam}y,\,{}^{ap}_{cam}z)'$ the camera frame coordinates of point ap [mm].
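As a small numeric illustration of equation 3.21 (a hedged sketch only; the focal length and pixel size used here are arbitrary example values, not the parameters of the actual camera):

    # Back-projection of equation 3.21: the pixel coordinates (xp, yp) of a
    # point ap, together with its camera frame z-coordinate, give its camera
    # frame x and y coordinates [mm].
    def pinhole_camera_xy(xp_pix, yp_pix, z_cam_mm, f_mm=6.0, mu_p=0.01):
        # f_mm [mm] and mu_p [mm/pix] are assumed example values
        x_cam_mm = -z_cam_mm * xp_pix * mu_p / f_mm
        y_cam_mm = -z_cam_mm * yp_pix * mu_p / f_mm
        return x_cam_mm, y_cam_mm

    # e.g. a feature 100 pixels to the right of the image centre; with the
    # camera frame convention used here (z pointing into the camera), a point
    # in front of the camera is assumed to have a negative z-coordinate.
    print(pinhole_camera_xy(100.0, 0.0, -400.0))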

The given model has one fixed intrinsic camera constant (µp) and nine parameters: three internal (also called intrinsic or interior) parameters and six external (also called extrinsic or exterior) parameters.

The internal parameters²¹ are the focal length f and the center coordinates of the image (Cx, Cy). The center of the image is expressed in the (u, v) coordinates: the image plane coordinates of a point in pixels relative to the top left corner of the image. (Cx, Cy) is the piercing point of the camera frame z-axis with the image plane, and thus the origin of the image coordinate frame (xp, yp) (see figure 3.33).

²¹ A complete model also includes the scaling factor sx and the lens distortion κ1 as internal parameters. See the paragraph on Tsai’s model and parameters.

The external parameters determine the position of the camera frame in a reference frame, which is in our case the end effector frame. They are (Tx, Ty, Tz), the translational components in mm, and (αx, αy, αz), the rotation angles in radians, for the transformation between reference and camera frames.

Calibration : Full calibration (for internal and external parameters) of the given camera model is essential in two cases:

• in position based visual servoing and

• in endpoint open-loop combined mounting of vision and force sensors.

In the former case, the calculation of the Cartesian position of an image feature uses by definition a known and calibrated camera model. In the latter case, calibration is essential to link the vision measurements to the contact (position) between force probe and object. In this work, Tsai’s calibration technique [49, 78, 77] is adopted, albeit with a substantial change in the setup.

Tsai’s model and parameters : Tsai’s camera model is a pin-hole model of 3D-2D perspective projection with first order radial lens distortion, as shown in figure 3.34-left. The reference frame is an absolute fixed coordinate frame. The model has two additional internal parameters:

κ1 : the radial lens distortion coefficient [mm⁻²] and

sx : a scale factor to account for any uncertainty in the frame grabber’s resampling of the horizontal scanline.

The radial lens distortion coefficient κ1 and the scale factor sx, however, are of more importance in 3D vision measurements with high accuracy than in visual servoing. They are not used in our approach (sx = 1 and κ1 is neglected).


Figure 3.34: Left) Tsai’s pin-hole camera model: a geometric model of 3D-2D perspective projection with radial lens distortion; Right) Tsai’s calibration setup: the corners of each square with exactly known world coordinates make up the calibration points

The first stage of Tsai’s calibration solves a linear set of equations, using a least squares method, in five unknowns: αx, αy, αz, Tx and Ty, being the 3D camera position up to a variable distance in the z-direction. The second phase determines the remaining six unknowns in a recursive way. This is necessary due to the remaining non-linear equations. The division of the calibration problem into a linear and a non-linear part is one of the main contributions of Tsai’s calibration technique, resulting in enhanced accuracy, speed and versatility.

Tsai’s setup : The calibration data for the model consist of the 3D (x, y, z) world coordinates of a feature point and the corresponding coordinates (u, v) of the feature point in the image. The quality of the calibration depends, among other things, on the accuracy with which the world coordinates of the calibration points are known. Tsai uses a calibration block with a pattern of exactly known squares and a fixed camera (see figure 3.34-right). The corners of each square, measured with sub-pixel accuracy, make up the calibration points. The positioning of the calibration block has to be performed very accurately. In a normal situation, this is a tedious and unpleasant operation. Hence, many researchers tend to avoid calibration.


Figure 3.35: View of the calibration setup (for one position) using only one calibration point with a fixed absolute position and an end effector mounted camera

Used setup : Our setup differs considerably from Tsai’s setup. In contrast to Tsai, not the absolute pose of a fixed camera but the relative pose of an end effector mounted camera w.r.t. the end effector needs to be calibrated. Furthermore, our calibration procedure utilizes only one calibration point, thus avoiding the accurate positioning, mainly in orientation, of a calibration pattern. Figure 3.35 gives an overview of the used setup, defining absolute, end effector and camera frames. These frames are indicated by the preceding subscripts abs, ee and cam respectively. The z-direction of the camera frame coincides with the optical axis but points into the camera, in contrast to Tsai’s setup.

Used procedure : The absolute position of the calibration point is directly measured by the robot system by controlling the robot end effector to a known position and placing the calibration point at that


position. This gives the position of the calibration point ${}^{cp}_{abs}P$ expressed in the absolute coordinate frame. The calibration point is in fact the center of a small (black) circle.

Next, the robot end effector is controlled to a set of known positions ${}^{ee}_{abs}H(i)$, with ${}^{ee}_{abs}H$ the homogeneous representation²² in absolute coordinates of the frame attached to the end effector and i the index of each position, going from 1 to e.g. 100. In order to get representative calibration parameters, the spread of the end effector positions covers the actually used workspace. For each of these positions the calibration point shifts relative to the end effector. The mounted camera then measures the centroid of the circle in the image plane, as the average position of the pixels belonging to the circle, in the horizontal as well as in the vertical direction.
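A minimal sketch of such a centroid measurement (illustration only; the threshold value and the dark-marker assumption are mine, not taken from the actual image processing of this work):

    import numpy as np

    def circle_centroid(image, threshold=128):
        """Sub-pixel image coordinates (u, v) of a dark circular marker,
        computed as the average position of the pixels belonging to it."""
        v_idx, u_idx = np.nonzero(image < threshold)   # rows = v, columns = u
        return u_idx.mean(), v_idx.mean()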

These centroid measurements result, for each robot position, in the searched image point with sub-pixel accuracy, given by the set [u(i), v(i)] (see figure 3.36). The relative position of the calibration point w.r.t. the end effector, ${}^{cp}_{ee}P(i)$, is computed for each robot position using the absolute measurements:

$$ {}^{cp}_{ee}P(i) = \left[{}^{ee}_{abs}H(i)\right]^{-1}\,{}^{cp}_{abs}P \qquad (3.22) $$

with P the homogeneous coordinates [x, y, z, 1]′. If we split ${}^{ee}_{abs}H$ into a 3×3 rotation matrix and a 3×1 translation vector according to

$$ {}^{ee}_{abs}H = \begin{pmatrix} {}^{ee}_{abs}R & {}^{ee}_{abs}T \\ 0\;\;0\;\;0 & 1 \end{pmatrix} \qquad (3.23) $$

then equation 3.22 becomes:

$$ {}^{cp}_{ee}P(i) = \left[{}^{ee}_{abs}R(i)\right]^{-1}\left({}^{cp}_{abs}P - {}^{ee}_{abs}T(i)\right). \qquad (3.24) $$

From equation 3.24 it can be seen that, if the orientation of the end effector (${}^{ee}_{abs}R$) does not change and if it is taken parallel to the base frame, the calculation of the relative position boils down to a simple subtraction, thus simplifying the method even more. Since both the end effector positions and the position of the calibration point are “measured by the robot”, the method takes advantage of the high relative positioning accuracy of the robot.
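A minimal sketch of this bookkeeping (assuming the end effector poses are available as 4×4 homogeneous NumPy matrices; the function and variable names are illustrative, not those of the actual implementation):

    import numpy as np

    def relative_calibration_points(H_ee_abs_list, P_cp_abs):
        """Equation 3.24: express the fixed absolute calibration point position
        in each end effector frame, given the measured end effector poses H(i)."""
        P_cp_ee = []
        for H in H_ee_abs_list:
            R, T = H[:3, :3], H[:3, 3]
            P_cp_ee.append(R.T @ (P_cp_abs - T))   # R^-1 = R^T for a rotation matrix
        return P_cp_ee

When the end effector orientation is kept constant and parallel to the base frame, R is the identity and the loop body indeed reduces to the simple subtraction mentioned above.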

The input for Tsai’s calibration thus consists of the 3D (x, y, z) relative coordinates of the calibration point in mm, being ${}^{cp}_{ee}P(i)$, and

²² See also Appendix A, section A.1.


Figure 3.36: Example of absolute end effector positions ${}^{ee}_{abs}P(i)$ (top); resulting relative calibration point positions in the end effector frame ${}^{cp}_{ee}P(i)$, which are used as input for the calibration (middle); and measured image plane coordinates [u(i), v(i)] of the calibration point for the different end effector positions (bottom)


the corresponding image coordinates [u(i), v(i)] in pixels. Figure 3.36 gives an example. The external parameters (αx, αy, αz, Tx, Ty, Tz) resulting from this calibration then determine the transformation from end effector to camera²³:

$$ H_{cam-Tsai} = \begin{pmatrix} R_z(\alpha_z)\,R_y(\alpha_y)\,R_x(\alpha_x) & \begin{matrix} T_x \\ T_y \\ T_z \end{matrix} \\ 0\;\;0\;\;0 & 1 \end{pmatrix} \qquad (3.25) $$

with Ri(α) the rotation matrix for a rotation of α radians about direction i (see also section A.1).

The searched relative pose of the camera frame expressed in the end effector frame, or in other words the transformation matrix from camera coordinates to coordinates in the end effector frame, is the inverse of equation 3.25:

$$ {}^{cam}_{ee}H = \left[H_{cam-Tsai}\right]^{-1} \begin{pmatrix} R_x(\pi) & \begin{matrix} 0 \\ 0 \\ 0 \end{matrix} \\ 0\;\;0\;\;0 & 1 \end{pmatrix} \qquad (3.26) $$

The extra rotation over 180 degrees about the x-axis (Rx(π)) gives a camera frame with the cam-z-axis pointing into the camera, the cam-x-axis pointing to the right of the image and the cam-y-axis pointing upwards in the image (see figures 3.33 and 3.35).
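Purely as an illustration of equations 3.25 and 3.26 (a sketch under the assumption that Ri(α) are the standard right-handed elementary rotation matrices of appendix A; the function names are mine):

    import numpy as np

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_y(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def camera_in_ee_frame(ax, ay, az, Tx, Ty, Tz):
        """Build H_cam-Tsai from the external parameters (eq. 3.25) and invert
        it, with the extra 180 degree rotation about x (eq. 3.26) that makes
        the camera z-axis point into the camera."""
        H = np.eye(4)
        H[:3, :3] = rot_z(az) @ rot_y(ay) @ rot_x(ax)
        H[:3, 3] = [Tx, Ty, Tz]
        H_flip = np.eye(4)
        H_flip[:3, :3] = rot_x(np.pi)
        return np.linalg.inv(H) @ H_flip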

3.10 Conclusion

This chapter describes the basic elements which make up the underlying structure for our approach. The key elements are the hybrid control scheme and the TF formalism. The analysis of the hybrid controller with feedforward proves its fitness for tracking tasks such as contour or path following. Control gains are deduced for vision and force control loops. The (characteristics of the) separate sensing modalities, including the image processing, the contour modelling and the camera calibration, are discussed and explained.

²³ According to Tsai’s setup, the cam-z-axis points out of the camera, as shown in figure 3.34.


Chapter 4

Classification

4.1 Introduction

This chapter shows how to use the task frame to model, implement and execute 3D robotic tasks which combine force control and visual servoing in an uncalibrated workspace.

Chapter 3 explained the basics of the task frame formalism. It divides the control space, by means of the high level task description, into orthogonal directions as a first step to integrated vision/force control. Possible directions are vision, force, tracking and velocity directions, indicated by dvs, df, dtr and dv respectively. The task description further specifies the set-points for force, velocity and vision directions. Feedforward, indicated with d+ff, may be added to any direction, as shown in the control scheme of figure 3.1.

According to Nelson [57], the levels of force/vision integrated control are threefold, being traded, hybrid and shared control. In traded control, a given direction is alternately controlled by vision or by force. Hybrid control involves the simultaneous control of separate directions by vision and force. In shared control, which is the highest level of vision and force fusion, both sensors control the same direction simultaneously¹. This chapter gives numerous examples of combined vision/force tasks, with the emphasis on hybrid and shared control.

¹ A fourth vision/force sensing fusion occurs when the vision algorithm reconstructs the 3D position of a feature point, using the depth to the object from the 3D position of the force probe contact. This last method of sensor fusion depends on the relative mounting of the calibrated camera and the used vision algorithms.


The task frame concept provides a common frame to coordinate multiple external sensor-based controllers, especially when these sensors are mounted on the robot end effector. Mounting the camera on the end effector results in a controllable camera position. The image feature of interest is (mostly) placed on the optical axis (center of the image) by visual servoing², which makes the feature measurement less sensitive to calibration errors or distortions in the camera/lens system. Moreover, incorporating the vision information in the position control loop of the robot makes the control robust against kinematic or structural errors [41].

The combined mounting of force and vision sensors and the use of a control scheme with an internal velocity controller in combination with the task frame formalism, as shown in figure 3.1, distinguish our approach from any other reported combined vision/force approach [40, 44, 57, 64, 93].

Overview : First, section 4.2 summarizes the restrictions we impose on the considered tasks in a combined vision/force setup. This results in four (meaningful) distinctive configurations, which are classified in section 4.3. This classification depends on both the relative camera mounting and the involved control issues. Section 4.4 continues with the exploration of integrated vision/force tasks by giving all possible types of shared control in one direction. Next, section 4.5 compares and discusses several full 3D tasks. Each task is explained by describing the control actions to be taken in the task frame. Finally, section 4.6 concludes the chapter.

4.2 Adopted restrictions

In robotic tasks, vision and force sensing are highly complementary. Handling soft objects, free space positioning, looking for a starting point, etc. are best handled by the vision system; making contact and tracking a surface in contact necessitate the use of a force sensor.

The force sensor gives full 3D information (6 force components) about the local contact with the object. Hence, the force sensor may control up to the full 6 DOF of the robot, depending on the actual contact situation between tool and environment.

² According to the taxonomy introduced by [67] the used visual servoing type is a dynamic look-and-move control.


The vision system, on the other hand, gives global information about the 3D environment projected onto the 2D image plane. Assuming the exact dimension or texture of the object or image feature is unknown³, a distance measurement from image to object plane (or depth measurement) by the vision system is not possible⁴. Hence, the (mono) vision system can only measure, and therefore control, a maximum of 3 feature characteristics independently, being the perceived feature position (in x and y) and orientation (about z).

In order to ‘fuse’ the sensor signals, they are transformed or linked to the task frame⁵. A general transformation is very easy for the force measurements, but less so for visual information. After all, a general (3D) transformation of the vision measurement to the task frame would require the object depth, and depth estimation from a single image is an ill conditioned operation. Therefore, the following discusses only those combined vision/force tasks in which the camera, if possible, is mounted in such a way that the vision controlled task frame directions lie either parallel (for a translation) or perpendicular (for a rotation) to the image plane. This simplifies the mapping (if needed!) from perceived image features (in the camera frame) to their equivalents in the task frame significantly. We refer to such a mounting as parallel mounting. Due to possible collision of the camera with the object, positioning the camera in such a way that the image plane lies parallel to the object may not always be feasible, necessitating a non-parallel mounting.

Furthermore, the tool or TF z-axis is by convention chosen identical (up to the sense) to the end effector z-axis. Hence, due to the physical connection between end effector and tool, the center of the camera frame cannot lie on the tool z-axis⁶.

³ This is the case for all considered tasks except for the visual alignment task of chapter 5.

⁴ If it were possible at all, such a depth measurement would be ill conditioned.

⁵ Note that a transformation of the vision measurements to the TF is mandatory in position based visual servoing. In image based visual servoing, on the other hand, the (control of an) image feature is directly linked to a given TF direction. This is the case in the 3D alignment task of chapter 5.

⁶ Only in the special case of a force sensor with a hole in it and the camera looking through this hole is the given assumption not valid. This special case is, however, not considered.


In summary, the adopted assumptions or constraints for the presented examples are:

1. Only tasks using combined mounting of vision and force sensors are considered.

2. Camera and TF z-axis can never coincide (geometric mounting constraint).

3. The camera may never collide with the object (geometric object constraint).

4. The distance between camera and object along the optical axis (z-axis) is not measurable by the vision system (observability constraint).

5. The optical axis points to the image feature of interest (resulting in minimal distortion) and, if possible, the image plane is taken parallel to the task frame (optimal camera positioning).

4.3 Tool/camera configurations

Two force tool/camera configurations are possible: either the camera sees the contact between tool and object or it does not see it.

ECL: If the camera observes both object and tool, the setup is referred to as Endpoint Closed-Loop (ECL). The visual servoing consists of controlling the relative position of the force probe w.r.t. the object as seen by the camera. In this setup, the optical axis of the camera points towards the center of the task frame or to the tool center point (TCP). Both measurement and control, by vision as well as by force, are collocated at the same point, i.e. the TF origin. An ECL setup may further be divided into two cases depending on the object constraint and the optimal camera positioning.

Parallel vs. Non-parallel ECL: If feasible, the image plane is taken parallel to the TF. Otherwise, a non-parallel setup is chosen. The constraining factor in the above choice is of course the fact that object and camera may never collide. Figure 4.6 gives examples of ECL configurations with parallel (4.6-top) and non-parallel mounting (4.6-bottom).


An ECL visual servoing action is possible without calibration, although the exact relation between the controlled velocity and the change in measured feature error is, with an uncalibrated camera, unknown and can therefore not be taken into account in the control law. If, on the other hand, camera pose and force tool are calibrated, the exact 3D positions and errors of the contact point or image feature are known, making a more accurate control design possible.

EOL: If, on the other hand, the camera does not see the tool, the setup is referred to as Endpoint Open-Loop (EOL). In contrast to the ECL case, an EOL setup comes down to a non-collocated control. Not only the actual TF pose, but also the pose of the camera frame has to be controlled.

Again two subcases are possible: either the relation of the camera frame to the task frame is fixed or it is variable. Both cases use a parallel mounting. The difference between the two configurations depends on the existence (and use) of a redundancy for the TF orientation. This redundancy is explained first. Then the fixed and variable EOL configurations are discussed.

Redundancy for TF orientation : Since task and end effector frames are not necessarily rigidly fixed to each other, the TF orientation possesses a redundant level of control if a rotationally symmetric tool is used. For example, reorienting the task frame can be done (i) by rotating the robot end effector while keeping the task frame fixed to the end effector or (ii) by redefining the relation, in particular the angle about the z-axis, between task and end effector frames. Hence, an additional requirement may be imposed on the TF/end effector orientation control.

Figure 4.1 gives three examples applied to a contour following task. The task objective consists of keeping the task frame tangent to the contour. However, the way this is realized differs from case to case. In the first case (left), the task frame is kept tangent to the contour (while moving along the contour) by rotating both task frame and end effector. The task frame is fixed to the end effector. In the second case (middle) the task frame is kept tangent to the contour by redefining the relation between task frame and end effector. The orientation of the end effector does not change, but the task frame can rotate with respect to the end effector. In the third case (right), the angle between


task and end effector frames is again variable, but in addition the end effector orientation may change. The variable orientation of the end effector, which is independent of the TF orientation, can now be used to position the end effector mounted camera⁷.

Figure 4.1: Examples for two time instants (t1 and t2) of a TF fixed to the end effector (left), a rotating TF with fixed end effector orientation (middle) and both variable task and end effector orientations (right)
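As a hedged sketch of this redundancy, reduced to the single scalar angle about the z-axis (the names and the decomposition are illustrative only): the absolute task frame angle is the sum of the end effector angle and the TF/end-effector offset, so the same desired tangent orientation can be realised in the two ways described above:

    # theta_tf_abs = theta_ee_abs + theta_tf_ee   (scalar angles about the z-axis)

    def reorient_fixed_tf(theta_tf_des, theta_tf_ee):
        """Case (i): the TF stays fixed to the end effector; the end effector rotates."""
        theta_ee_cmd = theta_tf_des - theta_tf_ee
        return theta_ee_cmd, theta_tf_ee

    def reorient_variable_tf(theta_tf_des, theta_ee_abs):
        """Case (ii): the end effector keeps its orientation; the TF/end-effector
        angle is redefined instead."""
        theta_tf_ee_cmd = theta_tf_des - theta_ee_abs
        return theta_ee_abs, theta_tf_ee_cmd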

Fixed EOL: Since the camera is rigidly mounted on the end effector, the camera frame is always fixed w.r.t. the end effector (frame). Hence, the fixed case of figure 4.1 also corresponds to a fixed camera-task relation. Here, some TF directions are used to control the pose of the camera frame, in such a way that the image errors⁸ diminish (to zero). However, controlling the camera pose will influence the TF or tool pose as well, since both are fixed w.r.t. each other.

With image based (fixed EOL) visual servoing, there is not even a need for the mentioned general transformation of the vision measurements to their equivalents in the task frame. Nor is a calibration of the camera setup necessary. However, as is the case in the ECL configuration, a calibrated camera is advantageous for the control design.

Variable EOL: In the variable camera frame/task frame case, the pose of the camera may change without actually changing the TF pose.

⁷ This is yet another extension to the task frame formalism, similar to those presented by Qi Wang [88].

⁸ i.e. the difference between desired and measured image features.


This corresponds to the variable case of figure 4.1, using the redundancy which exists for rotationally symmetric tools. If the tool axis is identical to the last revolute joint axis, rotating the end effector changes the position of the eccentrically mounted camera, while the tool center itself stands still. During this motion, the TF pose stays fixed w.r.t. any absolute frame, but shows a varying rotation about the tz-axis w.r.t. the end effector.

An example of the variable camera-task relation case in the EOL configuration is the planar combined vision/force contour following task described in chapter 6. In this task, the camera looks ahead, while moving along the contour, to measure (and log) in advance the path which the tool has to follow later on. Hence, in contrast to the fixed case, there is a (possibly variable) time shift between the moment the vision data are acquired and the moment these data are used (in the TF). To solve this time shift, the visually measured path is linked to the actual contact point based on Cartesian positions. For a planar task, this linking method is straightforward, provided the camera setup is calibrated, as explained in chapter 6. For a 3D task, on the other hand, the exact depth of the feature at the moment the vision data are logged is unknown. But the depth to the object follows from the contact between force probe and object when the force probe passes through the previously logged position. At this time instant, the exact 3D (visual) error may be reconstructed in order to compute the appropriate control or feedforward from it. This, in contrast to any other configuration, necessitates the calibration of the camera pose w.r.t. the end effector.

Advantages and disadvantages: The main advantage of an ECL configuration is the trivial fact that the tool always lies in the camera field of view. The main disadvantage may be the occurrence of occlusion of the object by the tool, making a straightforward vision measurement impossible. Furthermore, compensation for the vision processing delay is not possible. A disadvantage specific to a non-parallel ECL configuration is the inaccurate (or unknown) mapping of vision measurements to the task frame.

The main advantage of an EOL configuration is the absence of occlusion of the object by the tool. Hence, simple image processing algorithms can be used. Furthermore, by looking ahead, image capture and control delays can be compensated. The main disadvantages are the needed


Figure 4.2: Classification of combined vision/force configurations

link, if image data are to be used in the task frame, and the extra control involved in the variable camera frame/task frame relation. Both disadvantages are, however, adequately solved in chapter 6.

As mentioned, calibration of the camera setup is essential in a variable EOL configuration in order to link the vision information to the contact at hand⁹. However, for all configurations, a calibrated camera improves the control design.

Summary: This section gives four meaningful tool/camera configurations. They are, on the one hand, the endpoint closed-loop configurations with either a parallel or a non-parallel camera mounting and, on the other hand, the endpoint open-loop configurations with either a fixed or a variable camera frame to task frame relation. Figure 4.2 gives an overview.

4.4 Shared control types

The highest level of sensor integration is shared control. There are six (meaningful) basic forms of shared control: three for an axial direction (a) and three (counterparts) for a polar direction (p). Figure 4.3 shows one example for each possible form of shared control:

[a1,p1:] Vision and force control on the same direction (df+vs): The objective of the approach task (a1) of figure 4.3 consists of establishing

⁹ Due to the calibration, the vision algorithms too will benefit from the combined vision/force setup. If the relative position of camera and force sensor is known and if the (calibrated) tool is in contact with the object, the depth from the image to the object plane is known. Hence, the 3D position of the image feature can be calculated easily (with mono vision).


Figure 4.3: Task examples of the six possible shared control forms: 3 axial (left) and 3 polar (right)

a point/plane contact. The vision sensor measures the distance between tool and plane, and the approach velocity is increased proportionally to this distance, hence accelerating the task. In case (p1) the task objective is to make a line/plane contact. The desired torque is zero. Hence, the torque control on the ty-direction will align the tool with the plane only upon contact. The vision system, on the other hand, measures the angle between tool and plane in order to align the tool with the plane (possibly even before contact occurs).

[a2,p2:] Force control with vision based feedforward (df+ff): Examples (a2) and (p2) illustrate tasks using shared control in order to maintain a point/plane and a line/plane contact respectively, while moving along the plane. In task (a2) the task frame specification does not


allow a rotation around the ty-axis. Hence, the tz-direction is fixed. In task (p2), on the contrary, the desired zero torque around the tz-direction keeps the tz-direction normal to the plane. In case (a2) the feedforward component is tangent based, in case (p2) it is curvature based¹⁰.

[a3,p3:] Tracking control with vision based feedforward (dtr+ff): In example (a3) the tool is in contact with an edge. While moving along the edge, the tracking action (dtr) keeps the contact point centered (i.e. in the middle of the tool, resulting in zero torque). With vision based feedforward, depending on the needed (measured) change in tool center position, tracking errors will decrease. Task example (p3) illustrates a line/plane contact which has to be maintained while moving along a one-dimensionally curved plane. The tracking control keeps the forward direction (dv) tangent to the plane. Here, curvature based feedforward is used to improve the tracking action.

Any of the given tasks can be executed without using vision. However, either the task quality (with regard to tracking errors or acting contact forces) or the execution time needed will be worse in the force-only case than in the shared control case.

All the examples of figure 4.3 use an ECL configuration with parallel camera mounting. Counterparts for non-parallel mounting or EOL systems exist.

Notice that no examples are given with tracking and visual servoing on the same direction (dtr+vs). This combination, although possible, is inferior to the case of tracking with vision based feedforward (cases a3 and p3 in figure 4.3). If shared tracking/visual servoing is feasible, tracking with vision based feedforward will be feasible too and will result in a better control.

Visual servoing with feedforward (dvs+ff) is also possible. This, however, involves no shared control.

Any task can further be classified according to the number of shared and/or vision controlled directions. Assuming the object dimension is unknown, the (mono or uncalibrated) vision system can only control a maximum of three directions independently. Hence, a given task can

¹⁰ In example (p2) an additional shared control on the tz-direction of type (a2) is possible.


only possess a maximum of three shared controlled directions. Possible combinations (l, k), with l the number of shared controlled directions and k the number of vision(-only) controlled directions, fulfilling the condition l + k ≤ 3, are: (1,0), (1,1), (1,2), (2,0), (2,1) and (3,0). This again gives six different combinations or classes of tasks containing at least one direction with shared control. The next section gives some examples.

A further distinction in the task may follow from the way the remaining directions are divided into force, tracking and velocity directions. Such a subdivision, however, would lead us too far.

4.5 Combined vision/force task examples

This section gives several combined vision/force task examples which fulfill the assumptions and constraints given in section 4.2.

First, some traded control examples are illustrated. Traded control is mostly used in one or more (distinctive) subtasks, which together make up the global task. Note that the actual location of the task frame may change from subtask to subtask and may as well coincide with the camera frame.

Then, a fairly extensive list of task examples using hybrid and/or shared control is given. This list is not intended to be exhaustive. It merely illustrates the vast set of tasks in which meaningful shared and hybrid control may be used. All figures show the chosen control types for any direction (vision vs, force f or velocity v, possibly with feedforward +ff). If nothing is indicated, the concerned direction is velocity controlled with the desired velocity set equal to zero.

For some tasks, the high level task description program is also given. The syntax and units used in this specification automatically indicate the control type.

Traded control example : Consider the global task, shown in figure 4.4, which consists of the following subtasks: 1.1) go to a fixed position, 1.2) search a starting point on the edge by vision, 1.3) align with the contour tangent using vision, 1.4) move to a safe approach position (otherwise the force probe would hit the object on the top plane instead of the table for the given setup), 1.5) make a force controlled contact with the table (establishing the depth), 1.6) lift the robot tool to a safe height above the table, 1.7) make contact with the edge of


Figure 4.4: Paths travelled according to the different subtasks in order to automatically search a starting point, follow the contour and go home

the object, 1.8) position the camera over the contour while maintaining the force control, 1.9) move along the contour with a given tangential velocity, while maintaining a constant normal contact force, adjusting the TF directions by tracking and feedforward and keeping the contour in the camera field-of-view until a given distance is travelled¹¹, 1.10) retract the robot tool from the contour surface and 1.11) go to the home position.

SUBTASK 1.3: {
  task frame: fixed
  x: velocity 0 mm/sec
  y: feature distance in camera frame 0 mm
  z: velocity 0 mm/sec
  θx: velocity 0 rad/sec
  θy: velocity 0 rad/sec
  θz: feature angle in camera frame 0 rad
  until θz feature angle > -0.1 rad and < 0.1 rad
    and y feature distance < 0.5 mm and > -0.5 mm }

SUBTASK 1.7: {
  task frame: fixed
  x: velocity 0 mm/sec
  y: force 20 N
  z: velocity 0 mm/sec
  θx: velocity 0 rad/sec
  θy: velocity 0 rad/sec
  θz: velocity 0 rad/sec
  until y force > 15 N }

SUBTASK 1.8: {
  task frame: variable
  x: velocity 0 mm/sec
  y: force 30 N
  z: velocity 0 mm/sec
  θx: velocity 0 rad/sec
  θy: velocity 0 rad/sec
  θz: velocity 0 rad/sec
  until y feature distance in camera frame < 0.1 mm and > -0.1 mm }

¹¹ Subtask 1.9, the contour following part, is the subject of chapter 6 and corresponds to Task 2b in figure 4.5-bottom.
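Purely as an illustration of the structure of such a task specification (this is not the actual implementation of the system; the type and field names are invented), each subtask fixes one control mode and set-point per task frame direction plus a termination condition:

    from dataclasses import dataclass

    @dataclass
    class DirectionSpec:
        mode: str            # e.g. 'velocity', 'force', 'feature distance', 'feature angle'
        setpoint: float
        unit: str
        feedforward: bool = False

    # Hypothetical in-code equivalent of SUBTASK 1.7 (illustration only)
    subtask_1_7 = {
        "task_frame": "fixed",
        "x":   DirectionSpec("velocity", 0.0, "mm/sec"),
        "y":   DirectionSpec("force", 20.0, "N"),
        "z":   DirectionSpec("velocity", 0.0, "mm/sec"),
        "thx": DirectionSpec("velocity", 0.0, "rad/sec"),
        "thy": DirectionSpec("velocity", 0.0, "rad/sec"),
        "thz": DirectionSpec("velocity", 0.0, "rad/sec"),
        "until": "y force > 15 N",
    }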


Subtask 1.9 corresponds to the combined vision/force contour following task discussed in chapter 6. Subtasks 1.3, 1.7 and 1.8, for which experimental results are given in chapter 8, are programmed as shown above. Subtasks 1.3 and 1.7 are an example of traded control: in subtask 1.3 the ty-direction is vision controlled, in subtask 1.7 the ty-direction is force controlled.

Hybrid and shared control examples : This paragraph gives several task examples using hybrid and/or shared control. For reasons of convenience the tasks are numbered. This numbering is merely for referencing purposes and does not imply a classification or degree of difficulty. In each figure the Control Space Dimension (CSD) is given. The CSD is the number of independent states to be controlled in the task. If a variable camera frame is used, the orientation control of this frame introduces one extra control state. Hence the maximum control space dimension is 7. Some tasks only differ in the tool/object contact situation. At the end of this section, table 4.1 compares and summarizes all the characteristics of the presented hybrid or shared controlled tasks.

Figure 4.5 shows planar contour following in a non-parallel ECL and in a variable EOL configuration.

Figure 4.5: Planar contour following tasks in a non-parallel ECL configuration (top) and in a variable EOL configuration (bottom)


Figure 4.6: Pipe (top) and path on surface (bottom) following tasks in ECL configurations

Figure 4.7: Seam tracking tasks in a parallel ECL configuration


Figure 4.8: Examples of fixed EOL tasks: path on surface (top) and blade polishing (bottom)

Figure 4.6 gives examples of ECL configurations for which the CSD is equal to 6, however without shared control. The pipe following task of figure 4.6 (top) allows a parallel mounting. The path on surface task of figure 4.6 (bottom) necessitates a non-parallel mounting.

Figure 4.7 gives four examples of seam tracking tasks, each of which has one shared controlled direction and one velocity direction. Depending on the tool, the other directions are divided into force and tracking directions. Remember that a tracking direction uses measurements from two other directions to automatically adjust the TF pose.

Figures 4.8 and 4.9 give examples of EOL configurations with control space dimensions ranging from 5 to 7. The path on surface following task of figure 4.8 (top), with CSD = 5, can be compared to a truck with semi-trailer. The camera is the ‘truck’, the tool the ‘trailer’. The optical axis of the camera always stays on the path. The relation


between task and camera frame is fixed, hereby setting the direction in which the trailer goes.

The blade polishing task of figure 4.8 (bottom) also has a fixed task to camera relation. Here the dimension of the control space is equal to 6. The task consists of following, at a safe distance, the edge of a slightly curved surface with a polishing tool while maintaining a constant contact force. The tool must be kept normal to the surface. The task program is:

TASK 5b - 3D blade polishing – edge following: {
  with task frame: fixed
  x: velocity 10 mm/sec
  y: feature distance in camera frame 0 mm
  z: force -15 N
  θx: force 0 Nmm
  θy: force 0 Nmm
  θz: feature angle in camera frame 0 rad
  until relative distance > 250 mm }

In this example, the tz-, tθx- and tθy-directions are force controlled; tx is a velocity direction, and ty and tθz are vision directions. The desired contact force in the tz-direction is −15 N. Chapter 8 gives experimental results for this type of task.

The examples of figure 4.9 show a variable EOL seam tracking task and a variable EOL path on surface following task. Both tasks have a control space dimension of 7. They can be seen as the 3D extension of the contour following tasks of figure 4.5 (bottom). A task description for the path on surface task of figure 4.9 (top) is:

TASK 6a - 3D path on surface following: {
  with task frame: variable
  x: velocity 20 mm/sec
  y: feature distance 0 mm
  z: force -10 N
  θx: force 0 Nmm
  θy: force 0 Nmm
  θz: feature angle 0 rad with feedforward
  until relative distance > 250 mm }

Finally, figure 4.10 shows a block on plane positioning task with three vision controlled directions and figure 4.11 gives a block in corner positioning task with the maximum of three shared controlled directions.


Figure 4.9: Examples of variable EOL tasks: path on surface (top) and seam tracking (bottom)

Figure 4.10: Block on plane positioning task

To conclude this section, the comparative table 4.1 gives an overview of the presented tasks. Experimental results for some of these


Figure 4.11: Block in corner positioning task

tasks are presented in chapters 6 and 8. Chapter 6 fully explores the planar contour following task (Task 2b in figure 4.5-bottom), which uses a variable EOL configuration. Chapter 8 discusses the experimental results for Task 1 (subtasks 1.3, 1.7 and 1.8) as an example of traded control. It continues with the results of a shared control task of type 1a (according to figure 4.3) and concludes with the blade polishing experiment, Task 5b (figure 4.8-bottom).

4.6 Conclusion

This chapter shows how the task frame formalism provides the means to easily model, implement and execute 3D robotic servoing tasks using both force control and visual servoing in an uncalibrated workspace.

A framework is established to classify and compare combined vision/force tasks. On the one hand, four meaningful camera/tool configurations are suggested, of which the endpoint open-loop configuration with variable task/camera frame relation is the most challenging from a control point of view. On the other hand, the possible types of shared control, which is the highest level of vision/force integration, are discussed and illustrated.

Numerous tasks exemplify the four camera/tool configurations as well as traded, hybrid and shared vision/force control. Together with


Table 4.1: Overview of the characteristics of tasks 1 to 8, presented in this section in figures 4.4 to 4.11

the presented high level task descriptions, they emphasize the potential of the used approach to widen the range of feasible tasks.


Chapter 5

Visual servoing: A 3D alignment task

From a historical point of view, the visual servoing related aspects of this work are the main new contributions to the already implemented [80] force controlled robot structure¹. To illustrate the visual servoing capabilities (apart from any force controlled task), the following exemplifies a strategy to visually position the robot (with end effector mounted camera) relative to an arbitrarily placed object. Such an alignment task may be part of a more global task consisting of first locating the object, then aligning with the object and finally grasping the object. The (initial) 3D pose of the object w.r.t. the robot is of course unknown. This chapter also investigates the suitability of three kinds of reference objects or markers²: a circle, an ellipse and a rectangle.

The 3D relative positioning task is solved using image based visual servoing, meaning that the control signals are computed directly from the image features. The approach is based, among other things, on the relative area method explained in section 3.6. It uses a coarsely known camera setup. In particular, a rough estimation of the focal length and the orientation of the eye-in-hand camera w.r.t. the end effector are sufficient.

¹ At the Department of Mechanical Engineering of the Katholieke Universiteit Leuven.

² The reference marker may be placed on the object or may be the object itself. In the following, the reference ‘object’ used for the alignment task is referred to as alignment object.


5.1 Strategy

A 3D positioning task involves the control of the robot for the full 6 DOF. In a visual alignment task this boils down to controlling the position and orientation of the camera relative to a given object (using vision information only). For this task, the task frame is centered at the camera and is in fact identical to the camera frame³. For reasons of simplicity, the preceding subscripts t or cam, indicating task or camera frame, which are in this task identical, and the preceding superscript obj, indicating a parameter of the object, are (mostly) omitted in the following.

Each task direction is controlled continuously using specific vision information or image features. The alignment object, which still may be chosen freely, needs to possess - and preferably emphasize - the desired vision information. The (desired) end pose of the robot (i.e. the relative pose of the robot w.r.t. the object at the end of the task) has to follow unambiguously from the pose of the alignment object. Possible alignment objects are a circle [44, 68], an ellipse [30], a rectangle, a set of points [11, 31] or any combination of these. The following investigates the use of a circle, an ellipse and a rectangle. The latter is found to be the best choice.

3D alignment using a circle : A circle alignment task may be described as follows:

Position the camera in such a way that (i) the center of the circle lies on the optical axis (using 2 DOF), that (ii) the image plane lies parallel to the object plane (using again 2 DOF) and that (iii) the distance between the object plane and the image plane amounts to a given value (using 1 DOF).

The task description specifies 5 of the 6 degrees of freedom. The 6th DOF, being the rotation around the optical axis, cannot be defined since a circle is rotationally symmetric. For the distance control, either the real length of the object [mm] (e.g. the diameter) or the desired perceived object length in the image plane [pix] must be known.

Figure 5.1 shows the four steps in realizing the given task. The shown velocities indicate the control actions to be taken. The four steps are:

³ For a typical setup of the camera and the definition of the camera frame, see figure 3.3.


Figure 5.1: Four steps of the (eye-in-hand) circle alignment task

1. Center the perceived image by translating in the x- and y-direction⁴.

2. Align the longest axis of the circle, which is perceived as an ellipse if image plane and object plane are not parallel, with the x-direction by rotating around the z-axis. Determine the distance to the center of the circle, measured along the optical axis as shown in figure 5.2, by measuring the width of the circle (i.e. the longest axis of the perceived ellipse, which now lies horizontally).

3. Maximize the measured object size along the y-axis by rotating around the x-axis. Simultaneously translate along the y-direction, so that the actually followed camera path is arc shaped with the center of this path at the object. See figure 5.2. The perceived image will slowly evolve from an ellipse to a circle. The image plane is now parallel to the object plane.

4. Adjust the distance between the image plane and the object plane by translating over the z-axis.

⁴ Note that the center of the perceived ellipse is not the center of the real ellipse.


Figure 5.2: Determination of v_y^ff [mm/s] in order to follow an arc-shaped path around the center of the object, given the object width xw [mm], the measured width in pixels wp, the pixel size µp [mm/pix], the focal length f [mm] and the commanded polar velocity ω_x^c [rad/s]
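A plausible reconstruction of this quantity, as a hedged sketch only (the exact expression is defined by figure 5.2, which is not reproduced here; the pin-hole relation of equation 3.21 and the function name are assumptions): the distance to the object follows from its known width and its perceived width, and the translational feedforward is the arc velocity at that radius.

    def arc_feedforward_vy(x_w_mm, w_p_pix, mu_p_mm_per_pix, f_mm, omega_x_c_rad_s):
        """Translational feedforward v_y^ff [mm/s] that, combined with the
        commanded rotation omega_x^c, keeps the camera on an arc centred at
        the object (distance from the pin-hole model is an assumption)."""
        distance_mm = f_mm * x_w_mm / (w_p_pix * mu_p_mm_per_pix)
        return omega_x_c_rad_s * distance_mm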

With each new step, the control actions for the previous steps are maintained. The execution of a new step starts if the end condition of the previous step is fulfilled. A typical end condition is ‘perceived error smaller than a given threshold’.

The main advantage of using a circle as alignment object is the fact that, provided the real diameter of the circle is known, the distance to the object plane is derived from the longest perceived axis, independently of the camera pose.

However, using a circle also causes several problems. As indicated in figure 5.3, the sign of the action to be taken in step 3 does not unambiguously follow from one image. Both camera positions a and


Figure 5.3: Ambiguity in the determination of the correct sense of step 3 in figure 5.1: the two drawn camera positions give almost identical images when using a circle as alignment object

b give (almost) the same perceived image. The correct sense of the control action to be taken follows only from a sequence of images (in motion) by checking whether the measured object size along the y-axis, defined as the object height, increases. If the perceived object height increases, the current sense of the motion is correct. If the perceived object height decreases, the current motion sense has to be inverted. Due to measurement noise, the controller must not react too quickly to a change in measured object height. A filter or non-linear elements with hysteresis can partly stabilize the measurement signal. They, however, also introduce phase lag in the measurement and may affect the accuracy of the positioning or the stability of the control. The biggest problems in determining the correct sense of the motion (in step 3) occur when the actual camera position is close to the desired position. Here, image and object planes are almost parallel and a rotation around the x-axis will influence the measured object height only slightly. The positioning task using a circle as alignment object is thus ill conditioned.


Moreover, experiments show that step 2 also causes problems. When image and object planes lie parallel - which is the goal of the task - the determination of the longest axis of the perceived circle is very difficult and not clearly defined. Hence, the action which is based on this determination has to be performed very slowly and accurately.

The verification of a valid global end condition is also a very sensitive operation. Since several motions occur simultaneously, and since these motions, based on the changed visual information, are not independent of each other, small oscillations and/or drift in the robot motion emerge. Then, the end conditions for the separately controlled motions will never all be fulfilled simultaneously. Hence the global end condition will not be reached.

3D alignment using an ellipse : Besides the practical problems, a circle offers, as mentioned above, the advantage that the real diameter of the circle is always related to the longest axis of the perceived ellipse, independently of the orientation of the image plane w.r.t. the object plane. A circle does, however, not give any information for the rotation around the optical axis. The use of an ellipse as the alignment object solves this problem. Aligning the longest axis of the ellipse with the x-axis of the task frame defines the global desired end position up to 180◦. Figure 5.4 shows the four steps in the 3D alignment with an ellipse.

The use of an ellipse, however, creates a new problem in step 3 of figure 5.4, which tries to make the image plane parallel to the object plane. Here, not only the perceived height of the ellipse (being the shortest axis) but also the perceived width (being the longest axis) needs to be maximized. Hence, the problems of step 3 in the alignment with a circle arise twice here. Moreover, in contrast to a circle, the accuracy of the width and height measurements of the perceived ellipse strongly depends on the accuracy with which step 2 is performed and maintained. A small rotation around the optical axis (z-axis) has a profound influence on the width and height measurements.

Hence, the practical implementation remains very difficult and very noise sensitive, and the resulting positioning is not satisfactory. The achievable accuracy with a stable control is very limited. Increasing the proportional control gains quickly results in restless and oscillatory behaviour. Also the simultaneous fulfillment of all the end conditions remains very difficult.


Figure 5.4: Sequence of images in the four steps of the 3D alignment task using an ellipse as alignment object

3D alignment using a rectangle : The use of a rectangle as the alignment object solves most of the previously mentioned problems. In the desired end position of the camera, as shown in figure 5.5, the longest side of the rectangle lies horizontal and the shortest side lies vertical. This defines the global end position up to 180◦. The rectangle alignment task is described as follows:

Position the camera in such a way that (i) the optical axis passes through the center of the rectangle (using 2 DOF), that (ii) the longest side of the rectangle lies horizontal, i.e. parallel to the x-axis (using 1 DOF, defined up to 180◦), that (iii) the image plane is parallel to the object plane (again using 2 DOF) and that (iv) the distance between the object plane and the image plane amounts to a given value (setting the last DOF).

Figure 5.5 shows the four steps in realizing the above alignment task. They are:

1. Center the perceived image by translating in the x- and y-direction.


Figure 5.5: Sequence of images in the four steps of the 3D alignment task using a rectangle as alignment object

2. Align the longest axis of the rectangle with the x-direction by rotating around the z-axis (i.e. the optical axis).

3. Position the image plane parallel to the object plane by rotating around the x-axis and the y-axis respectively, in order to make the perceived opposite sides of the rectangle parallel. Simultaneously translate along the y-direction and the x-direction respectively, so that the actually followed camera path is arc shaped with the center of this path at the object. The perceived image will slowly evolve from a trapezoid-like shape to a rectangle. The image plane is now parallel to the object plane.

4. Adjust the distance between the image plane and the object plane by a translation along the z-axis.

With each new step, the control actions for the previous steps are maintained.

The ambiguity of figure 5.3, showing two different camera positions resulting in the same perceived image, no longer exists. When the image plane is not parallel with the object plane, the rectangle is perceived as a (distorted) trapezoid. Due to the perspective projection, the perceived opposite sides of the rectangle do not lie parallel. As shown in figure 5.6, for camera position a, the largest side of the trapezoid lies in the top half of the image. For camera position b, the largest side of the trapezoid lies in the bottom half of the image. Hence the sense of the action for step 3 is unambiguously defined. This is true for both the rotation around the x-axis (for which the figure is drawn) and the rotation around the y-axis.

Figure 5.6: Perceived images for the positions a and b of figure 5.3 for the definition of the x-alignment angle error ∆α: the sense of ω^c_x follows unambiguously from the angles α1 and α2.

Figure 5.7: Definition of the y-axis alignment angle error ∆β


5.2 Used algorithms

This section briefly reviews the algorithms used in the processing of the vision information.

The x- and y-position control use the image features XSra and YSra respectively, which are the x-signed and y-signed relative area parameters. XSra is equal to the number of object pixels with positive x-coordinates minus the number of object pixels with negative x-coordinates (w.r.t. the camera frame). YSra is equal to the number of object pixels with positive y-coordinates minus the number of object pixels with negative y-coordinates. A detailed formulation of these parameters is given in section A.5. The velocities v^c_x and v^c_y, shown in step 1 of figures 5.1, 5.4 and 5.5, command the camera frame to a position where the image feature parameters XSra and YSra are equal to zero.

The z-orientation control is based on the image feature XYSra, which gives the difference between the number of object pixels lying in the first and third quadrants and the number lying in the second and fourth quadrants of the image, as defined in section A.5. The velocity ω^c_z, shown in step 2 of figures 5.1, 5.4 and 5.5, commands the camera frame to an orientation where the image feature XYSra equals zero. If XYSra = 0, then the object lies centered and horizontally in the image.
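
As an illustration, the following C sketch shows how such relative area features could be computed from a thresholded sub-image. The image layout, the threshold argument and the function names are assumptions for illustration only, not the actual DSP implementation.

#include <stddef.h>

/* Relative area features of a binary image, following the definitions
 * above: object pixels are counted with the sign of their x-coordinate
 * (XSra), y-coordinate (YSra) or quadrant (XYSra), all w.r.t. the image
 * center, i.e. the optical axis. */
typedef struct { long xs_ra; long ys_ra; long xys_ra; } AreaFeatures;

AreaFeatures relative_area_features(const unsigned char *img,
                                    int width, int height,
                                    unsigned char threshold)
{
    AreaFeatures f = {0, 0, 0};
    int cx = width / 2, cy = height / 2;

    for (int r = 0; r < height; ++r) {
        for (int c = 0; c < width; ++c) {
            if (img[(size_t)r * width + c] <= threshold)
                continue;                     /* not an object pixel      */
            int x = c - cx;                   /* signed image coordinates */
            int y = cy - r;                   /* image y-axis pointing up */
            if (x > 0) f.xs_ra++; else if (x < 0) f.xs_ra--;
            if (y > 0) f.ys_ra++; else if (y < 0) f.ys_ra--;
            if (x * y > 0) f.xys_ra++;        /* 1st or 3rd quadrant      */
            else if (x * y < 0) f.xys_ra--;   /* 2nd or 4th quadrant      */
        }
    }
    return f;
}

The x- and y-position control and the z-orientation control then simply command velocities proportional to these feature values until they vanish.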

The rotations about the x- and y-axes of step 3 need to maximize the height in the case of a circle, and the height and the width of the alignment object in the case of an ellipse. In contrast to the measurement of the image feature parameters XSra, YSra and XYSra, the measurement of width or height [pix] (of circle or ellipse) is not based on an averaging principle and therefore has no sub-pixel accuracy.

If a rectangle is used as alignment object, then step 3 is based on figure 5.6. The rotation ω^c_x about the x-axis follows from the difference in the measured angles α1 and α2, defined in figure 5.6:

\omega^{c}_{x} = K_{\theta x}\,\Delta\alpha = K_{\theta x}(\alpha_1 - \alpha_2).    (5.1)

Equally, the rotation ω^c_y about the y-axis follows from the difference in the measured angles β1 and β2, defined as the angles of the bottom and top edges as shown in figure 5.7:

\omega^{c}_{y} = K_{\theta y}\,\Delta\beta = K_{\theta y}(\beta_1 - \beta_2).    (5.2)

(The control gains actually used are easily derived from the figures that present the results.)

The angles α1, α2, β1 and β2 and the width w_p of the rectangle are measured using the relative area contour detection described in section 3.6. To this end, small windows lying centered at the left and right edges on the one hand, and at the bottom and top edges on the other hand, are used.

The distance along the z-axis between the image plane and the object plane, obj_z, follows from the known real width of the alignment object x_w [mm] and the perspective view model of the camera given by equation 3.21 in section 3.9. (In contrast to all previous features, this feature is not truly image based.) The commanded velocity v^c_z along the z-axis, shown in step 4 of figure 5.5, equals

v^{c}_{z} = -K_{z}(z_d - z_m)    (5.3)

with

z_m = {}^{obj}_{cam}z^{m} = -\frac{x_w\, f}{w_p\, \mu_p}.    (5.4)
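
A compact C sketch of these control laws (equations 5.1 to 5.4) is given below. The gain values, the structure of the parameter set and the function names are illustrative assumptions; the measured angles, the perceived width and the camera parameters are taken as inputs.

#include <math.h>

/* Illustrative proportional control laws for steps 3 and 4 of the
 * rectangle alignment task (equations 5.1 - 5.4). */
typedef struct {
    double k_theta_x, k_theta_y;  /* orientation gains [1/sec]        */
    double k_z;                   /* distance gain [1/sec]            */
    double f;                     /* focal length [mm]                */
    double mu_p;                  /* pixel dimension [mm/pix]         */
    double x_w;                   /* real width of the rectangle [mm] */
    double z_d;                   /* desired camera distance [mm]     */
} AlignGains;

/* Rotations that make the perceived opposite sides parallel (5.1, 5.2). */
double omega_x_cmd(const AlignGains *g, double alpha1, double alpha2)
{ return g->k_theta_x * (alpha1 - alpha2); }

double omega_y_cmd(const AlignGains *g, double beta1, double beta2)
{ return g->k_theta_y * (beta1 - beta2); }

/* Distance adjustment along the optical axis (5.3, 5.4):
 * w_p is the perceived width of the rectangle in pixels. */
double v_z_cmd(const AlignGains *g, double w_p)
{
    double z_m = -(g->x_w * g->f) / (w_p * g->mu_p);  /* measured distance */
    return -g->k_z * (g->z_d - z_m);
}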

5.3 Experimental results

Experimental setup : The experimental setup consists of a KUKA 361 robot with an end effector mounted Sony CCD camera (type XC77bbce). The focal length of the camera lens is about 6 mm. The robot controller is implemented on a transputer board (type T801) equipped with analogue input and output cards to control the robot actuators and to measure the actual robot position. The grabbed image is a 512 × 512 matrix of grey intensity values ranging from 0 to 255 (8 bit). The image is processed on a digital signal processor (DSP) (type Texas Instruments C40). The communication between the DSP and the transputer is carried out over a transputer link.

The algorithms, described in section 5.2, are implemented in C and executed on the DSP on a sub-image of 256 × 256 pixels at a frequency of 12.5 Hz. Decreasing the used image size enables a frequency of 25 Hz, which is the non-interlaced video frequency. A frequency of 12.5 Hz, however, proves to be sufficient for a smooth and stable control. Due to the larger field of view, the lower sample frequency is preferred. At the start of each experiment the object has to lie, at least partly, in the camera's field of view.

Figure 5.8: Measured features and commanded velocities versus time while testing the fitness of a circle as alignment object

Global remarks : Figures 5.8 to 5.11 give some experimental results for the 3D alignment tasks using either a circle, an ellipse or a rectangle as alignment object. All results are shown as a function of time. The time instant at which the end condition for a given step is fulfilled and a new control action is taken is indicated with the step number. All end conditions are specified as a maximum allowable remaining feature error. For example, step 1 'ends' when the feature parameters XSra and YSra, which correspond to the distances obj_x and obj_y between the optical axis and the center of the object (along the x- and y-directions respectively), are smaller than a given threshold. Time periods in the figures are denoted by letters.

For a given TF direction, all figures show the measured image feature together with the commanded velocity, in order to better understand the physical motion. The goal of each control action is to minimize the corresponding feature value. Typically in image based visual servoing (in contrast to position based visual servoing), these feature errors are expressed in pixels or are dimensionless (in our case by dividing by the image size x_s y_s, as indicated by the scale factors sc and sc1 in the figures).

Circle alignment task : Figure 5.8 shows the (test) results using a circle as alignment object. At time instant (1), the distance errors along the x- and y-directions are smaller than the given threshold and the control for step 2 of figure 5.1 starts. An accurate measurement of the orientation error of the circle about the z-axis by the parameter XYSra is difficult and slightly noisy. To assure stability, a sufficiently slow control is necessary (see part (a) in figure 5.8).

At time instant (2), the end condition for step 2 is fulfilled and the control of step 3 is tested. Step 3 needs to maximize the perceived height of the circle by rotating around the x-axis. Defining the parameter ∆hp [pix] as the difference between the measured width and the measured height of the circle, the control of step 3 boils down to minimizing ∆hp. The figure shows how the parameter ∆hp changes when moving according to an arc shaped path around the center of the circle. The experiment shows that ∆hp never becomes zero but it does reach a minimum value (see part (b) in figure 5.8). The measurement of ∆hp, however, is very noisy. Hence, it will be extremely difficult to determine whether ∆hp is increasing or decreasing during a given motion, which is needed to check the correct sense of the control action, let alone to determine accurately whether the minimum is reached.

The jump in XYra, shown in part (c) in figure 5.8, indicates that the feedforward control, as explained in figure 5.2, is not implemented with complete accuracy. The reason for this is of course the fact that the distance to the object plane has to be estimated and is not accurately known. The y-direction control, however, eliminates the resulting errors.

Figure 5.9: Measured features and commanded velocities versus time while testing the fitness of an ellipse as alignment object

Ellipse alignment task : Figure 5.9 shows the (testing) results using an ellipse as alignment object. The y-orientation control is not executed. At time instant (1), the optical axis passes through the center of the ellipse. This starts the z-orientation control, which is very stable and more accurate, since the longest axis of an ellipse, in contrast to a circle, is clearly measurable. The z-orientation control fulfills its end condition at time instant (2).

Now the fitness of a change in ∆hp as control signal for the x-orientation control is tested. In part (a) in figure 5.9 the camera is again commanded to move on an arc shaped path around the center of the ellipse. This time, the figure also gives the supposed control direction dir (= 1 or = -1) by showing the signal ∆hp·dir. This measurement behaves in an oscillatory manner and is unfit to be used as the basis for a control action. The determination of the correct sense of the needed action goes wrong due to the noisy measurements and the ill-conditioned alignment, as previously explained.

As part (b) in figure 5.9 shows, the x-orientation control still influences the y-position control, because the distance to the object plane is not accurately estimated.

Figure 5.10: Measured features and commanded velocities versus time while testing the fitness of a rectangle as alignment object

Rectangle alignment task : The results for two experiments using a rectangle as alignment object are shown in figures 5.10 and 5.11.

The first experiment tests the suitability of the ∆α and ∆β measurements, according to the demand for parallel opposite sides explained in figures 5.6 and 5.7, as a basis for the x- and y-orientation control. At time instant (1) in figure 5.10, the first step is accomplished. The optical axis of the camera now passes through the center of the perceived 'rectangle'. The rectangle lies almost horizontal and the (weak) end condition of the z-orientation control is (almost immediately) fulfilled. During period (a), the task frame rotates around the (imaginary) horizontal axis of the rectangle at a fixed velocity, in order to measure the change in ∆α. During period (b), the task frame rotates around the (imaginary) vertical axis of the rectangle at a fixed velocity, in order to measure the change in ∆β. The orientation measurements clearly contain the sought orientation information but are still fairly noisy. This necessitates filtering or averaging out of the measurements.

Figure 5.11 gives the final results of the rectangle alignment task. At time instant (1), step 1 of figure 5.5 is accomplished. The remaining x- and y-feature errors are smaller than the given threshold. This activates the z-orientation control, which reaches its goal at time instant (2), according to step 2 of figure 5.5. The x- and y-orientation controls, activated separately in parts (a) and (b) respectively and both together in part (c), also live up to the requirements. During this experiment, the orientation measurements are averaged out over 10 samples. At time instant (3) the end conditions for the first three steps (of figure 5.5) are simultaneously fulfilled. Now the distance adjustment can start.

To test the accuracy of the alignment task, all control actions are now turned off and the robot is commanded to translate along the z-axis of the camera frame until the desired distance to the object is reached. Finally, after about 35 seconds, the alignment task reaches its global goal.

During the last step, the robot moved over a distance of 78 mm along the z-axis. The center of the object shifted by 0.75 pix. For the given setup this corresponds to an orientation error between the optical axis and the object normal of 0.3°. In view of the limited accuracy at which the camera frame is defined w.r.t. the end effector frame, this result is more than satisfactory.

Appendix section D.1 gives the full high level task description.


Figure 5.11: Measured features and commanded velocities versus time for the 3D alignment of the eye-in-hand camera with a rectangle as alignment object


5.4 Conclusion

This chapter presents a new strategy to visually align an eye-in-hand camera with a given object. For this 3D alignment task, i.e. the relative positioning of the camera frame w.r.t. the given object, a rectangle is to be preferred as alignment object over a circle or an ellipse. The image features, and likewise the corresponding control actions, are easier to extract and more robust in the case of a rectangle as alignment object than in the case of a circle or an ellipse.

The 3D alignment task using a rectangle is successfully and accurately executed. Thanks to the averaging effect of the relative area image processing methods, the implemented image based visual servoing control is quite robust and always stable, provided that the control constants stay limited.

The 3D alignment task illustrates the visual servoing capabilities of the robot using image based control of the task frame. The extension to a complete 3D following system is, from the control point of view, straightforward, given a sufficiently slow and smooth object motion.

Some aspects are still open for investigation: (1) In the alignment tasks, the goal position of the image plane is chosen parallel to the object plane. From a physical point of view, this seems a logical choice, but a non-parallel alignment task may perhaps give better results. (2) Can the alignment task be executed more quickly if all the control actions are started simultaneously, with guaranteed (maintained) stability? (3) How robust is a 3D visual following control based on the rectangle alignment task with a slowly moving rectangle?


Chapter 6

Planar contour following of continuous curves

6.1 Introduction

In planar force controlled contour following, the robot holds a tool while following the contour of a workpiece. When the pose and shape of the workpiece are unknown, the force sensor is used to modify, or even generate, the tool trajectory on-line, which is referred to as force tracking, explained in section 3.5. Due to the limited bandwidth of the sensor-based feedback control loop (loop dtr in figure 3.1), the execution speed of the task is restricted in order to prevent loss of contact or excessive contact forces.

This chapter shows how the performance of a contour following task improves w.r.t. a purely force feedback controlled task by combining force control (tracking) and visual servoing. While maintaining the force controlled contact, the controller has to keep the camera, also mounted on the robot end effector, over the contour at all times. Then, from the on-line vision-based local model of the contour, appropriate feedforward control is calculated and added to the feedback control in order to reduce tracking errors.

The presented task exploits the variable EOL configuration, which is the most adequate for easy image processing and the most challenging from the control point of view.

The approach presented in this chapter can be applied to all actions that scan surfaces along planar paths with a rotationally symmetric tool: cleaning, polishing, or even deburring. It is especially useful for one-off tasks in which accurate positioning or calibration of the workpiece is costly or impossible.

Overview : Section 6.2 reviews the motivation for the chosen EOL setup. Section 6.3 gives the full planar contour following task description. Section 6.4 describes the details of the control approach. It deals with the problem of keeping the contour in the camera field of view in addition to maintaining the force controlled contact. It further explains how the vision data are matched to the actual position, and how the feedforward signal is calculated. Section 6.5 describes the experimental setup and presents the experimental results. Finally, section 6.6 concludes this chapter.

6.2 EOL setup

The combined vision/force contour following task, presented in this chapter, uses a variable EOL configuration. It corresponds to Task 2b of figure 4.5-bottom described in section 4.3.

Figure 6.1: Global overview of vision/force contour following setup


Figure 6.1 gives the complete setup. The task frame is connected to the tool (tip). The camera is mounted on the end effector ahead of the force sensor with the optical axis normal to the plane. As previously mentioned, mounting the camera in this way results in local images, normally free from occlusion, and in a controllable camera position. The image feature of interest is placed on the optical axis (center of the image) by visual servoing, which makes the feature measurement less sensitive to calibration errors or distortions in the camera/lens system.

6.3 Task specification

The task consists of following a planar contour with force feedback and vision based feedforward. The objectives are (the respective direction types according to the control scheme of figure 3.1 are indicated between brackets):

1. move along the workpiece contour, i.e. along the tx-axis, with a given (tangential) velocity (direction type dv),

2. maintain a constant normal (i.e. along the ty-axis) contact force between tool and workpiece (direction type df),

3. keep the contour in the camera field of view by rotating the end effector (direction type dvs),

4. keep the task frame tangent to the contour, i.e. keep the tx-axis tangent and the ty-axis normal to the contour, by rotating around the tz-axis (direction type dtr) and

5. generate the correct feedforward to be added to the rotation of the task frame in the plane in order to reduce tracking errors (direction type d+ff).

Figure 6.2 visualizes these desired actions. They are specified in the high level task description of our experimental control environment COMRADE [80]. The syntax and units used in this specification clearly indicate the control type. The following is an example of the program for the above task, expressed in the task frame indicated in figure 6.1:


Figure 6.2: Control actions for the contour following task with indicated control types according to the hybrid control scheme of figure 3.1

TASK EXAMPLE: Planar contour following {
  with task frame: variable - visual servoing
  x:  velocity 25 mm/sec
  y:  force 30 N
  z:  velocity 0 mm/sec
  θx: velocity 0 rad/sec
  θy: velocity 0 rad/sec
  θz: track on velocities with feedforward
  until relative distance > 550 mm
}

The tracking action (objective 4 in the above list) has to reduce or eliminate the tracking error. This tracking error, as explained in section 3.5, always depends in some way on a force measurement. The vision feedforward (action 5 in the above list) tries to avoid the occurrence of tracking errors in the first place. As shown in section 3.2 by equation 3.7, the feedforward on a rotation is curvature based. Both (feedforward and tracking) actions control the same direction simultaneously. Hence, this gives the highest level of sensor integration: fusing force and vision information through shared control (type p3: dtr+ff).


6.4 Detailed control approach

This section describes the details of the control approach, explains the matching of the visually measured contour with the contact point at hand and implements the feedforward control.

6.4.1 Double contact control

Keeping the contour in the camera field of view while maintaining a force controlled contact corresponds to keeping a "virtual double contact". The first contact, point A in figure 6.3, is between workpiece and force probe. This contact point must move with the specified (tangential) velocity. The second (virtual) contact, point B in figure 6.3, coincides with the intersection of the optical axis and the contour plane.

Figure 6.3: Top view of contour for two time instants

To maintain this double contact, the following control actions are taken (for a description of the notation used, see table 3.1 or the list of symbols):

(1 - dv) Velocity control: Move the task frame (point A) with the desired tangential velocity:

{}^{t}v^{c}_{x} = {}^{t}v^{d}_{x}.    (6.1)

(2 - df) Force control: Compensate the normal force error by applying a velocity in the ty-direction:

{}^{t}v^{c}_{y} = K^{f}_{y}\; {}^{t}k^{inv}_{y}\, \left[\, {}^{t}F^{m}_{y} - {}^{t}F^{d}_{y} \,\right].    (6.2)


K^f_y is the y-direction force control gain, tk^inv_y is the y-direction compliance of the tool, and tF^m_y and tF^d_y are the measured and desired forces (from object to robot) in the y-direction of the task frame.

(3 - dvs) Visual servoing: Rotate the end effector around the ee z-direction by

{}^{ee}\omega^{c}_{z} = \omega_{1} + \omega_{2},    (6.3)

with

\omega_{1} = K^{vs}_{\theta z}\; {}^{C}_{cam}x^{m} / r_{AB}.    (6.4)

Component ω1 controls point B towards the contour. r_AB [mm] is the fixed distance between points A and B. Component ω2 moves B tangent to the contour in B with velocity v_B (direction known, magnitude unknown) while compensating for the velocity of point A. This tangent based component is in fact a feedforward signal on the position control of point B, as explained in section 3.2. Its value follows from:

\vec{\omega}_{2} \times \vec{r}_{AB} + \vec{v}_{A} = \vec{v}_{B}.    (6.5)

According to the notations of figure 6.4 and neglecting the velocity tv^c_y, ω2 is solved as:

\omega_{2} \cong -\,\frac{ {}^{t}v^{c}_{x}\, \sin({}^{C}_{cam}\theta^{m} + \gamma) }{ r_{AB}\, \cos({}^{C}_{cam}\theta^{m}) },    (6.6)

with γ the angle between v_A and the cam y-axis.

(From a practical point of view, the task frame is related to the end effector and not to some absolute world frame. Hence, moving or rotating the end effector frame will also move or rotate the task frame. To make the orientation of the task frame, tabsθz, independent of the rotation of the end effector by eeω^c_z (the visual servoing action), the relation between task and end effector frame is redefined by -eeω^c_z to compensate.)

Figure 6.4: Double contact with definition of variables

(4 - dtr) Force tracking: Compensate the tracking angle error Δθz ≅ tv^c_y / tv^c_x by rotating the task frame w.r.t. the end effector frame. This action does not change the actual position of the end effector. In theory it could take place in one control time step, since there is no mass displacement involved. From a practical point of view, however, the noise, which affects the identification of the tracking angle Δθz, enforces the use of a low pass filter (moreover, a sudden orientation change in the forward direction is not desirable), resulting in:

{}^{t}\omega^{c}_{z} = K^{tr}_{\theta z}\, \Delta\theta_{z},    (6.7)

with K^tr_θz the proportional control gain for the tracking direction [sec^{-1}].
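
To make the interplay of these four actions concrete, the following C sketch evaluates them for one control cycle. The variable names, the gain structure and the small-angle handling are assumptions chosen to mirror equations 6.1 to 6.7; this is an illustrative sketch, not the COMRADE implementation.

#include <math.h>

/* One control cycle of the double contact controller (eqs. 6.1 - 6.7). */
typedef struct {
    double v_x_d;        /* desired tangential velocity [mm/sec]       */
    double F_y_d;        /* desired normal contact force [N]           */
    double K_f_y;        /* force control gain [1/sec]                 */
    double k_inv_y;      /* tool compliance in y [mm/N]                */
    double K_vs;         /* visual servoing gain [1/sec]               */
    double K_tr;         /* tracking gain [1/sec]                      */
    double r_AB;         /* distance between points A and B [mm]       */
} DcGains;

typedef struct {
    double v_x_c, v_y_c; /* task frame translational commands          */
    double omega_ee_z;   /* end effector rotation (camera over contour)*/
    double omega_t_z;    /* task frame rotation (force tracking)       */
} DcCommands;

DcCommands double_contact_cycle(const DcGains *g,
                                double F_y_m,   /* measured normal force [N]           */
                                double x_cam,   /* contour offset in camera frame [mm] */
                                double th_cam,  /* contour angle in camera frame [rad] */
                                double gamma)   /* angle between v_A and cam y-axis    */
{
    DcCommands c;

    /* (1) velocity control, eq. 6.1 */
    c.v_x_c = g->v_x_d;

    /* (2) force control, eq. 6.2 */
    c.v_y_c = g->K_f_y * g->k_inv_y * (F_y_m - g->F_y_d);

    /* (3) visual servoing, eqs. 6.3 - 6.6 */
    double omega1 = g->K_vs * x_cam / g->r_AB;
    double omega2 = -c.v_x_c * sin(th_cam + gamma)
                    / (g->r_AB * cos(th_cam));
    c.omega_ee_z = omega1 + omega2;

    /* (4) force tracking, eq. 6.7, with the tracking angle error
       approximated by the velocity ratio; assumes a nonzero
       tangential velocity command */
    double dtheta_z = c.v_y_c / c.v_x_c;
    c.omega_t_z = g->K_tr * dtheta_z;

    return c;
}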

6.4.2 Matching the vision data to the task frame

The second step, after controlling both camera and force probe positions, determines the link between the actual position of the task frame and the data of the contour, as collected by the vision system. This subsection describes how the contour data are measured, corrected and logged, and how the logged contour data are then matched to the actual tool position. (The contour measurement is unaffected by the camera/lens distortion, since the visual servoing control (equations 6.3 to 6.6) keeps the optical axis of the camera close to the contour at all times.)

Figure 6.5: Vision processing flow

Contour measurement : The image processing algorithm determines (the absolute position of) the desired tool path in several steps. Figure 6.5 gives an overview of these steps. The following explains these steps one by one. First, the Infinite Symmetric Exponential Filter (ISEF), proposed by [70] and reviewed in section 3.7, extracts local contour points (x_p, y_p) in the image space [pixels]. Once a starting point on the contour is found, a narrowed search window is applied (which tracks the contour) to make the scanning of the subsequent contour points more robust and faster. In total n (e.g. n = 9) contour points are extracted lying symmetrically around the center (horizontal line) of the image, giving the data set

[Xp Yp]   with   Xp = [x_p^1 ... x_p^n]' and Yp = [y_p^1 ... y_p^n]'.    (6.8)

The Total Least Squares (TLS) solution of

\begin{bmatrix} Y_p & \begin{matrix} 1 \\ \vdots \\ 1 \end{matrix} \end{bmatrix} \begin{pmatrix} \tan(-{}^{C}\theta) \\ {}^{C}x_p \end{pmatrix} = X_p    (6.9)

then determines the position Cxp [pixels] and orientation Cθ [rad] of the contour in the image. Figure 6.6 gives an example of these contour parameters. (From a practical point of view, the available TLS algorithm is used to solve equation 6.9. Note however that an LS solution, which does not assume errors on the column of 1's (or on the elements of Yp) in equation 6.9, may give a slightly better and more justified result.)

Figure 6.6: Position and orientation of the contour in the image

Next, the position Ccamx [mm] of the contour w.r.t. the center of the camera frame is calculated according to the perspective view pin-hole model, given in section 3.9,

{}^{C}_{cam}x = -\,{}^{C}x_p\, \mu_p\, {}^{C}_{cam}z / f    (6.10)

with Ccamz the distance [mm] between camera frame and object plane (which is negative), f the focal length [mm] of the lens and µp the pixel dimension [mm/pix]. The orientation does not change when expressed in the camera frame, so Ccamθ = Cθ. The contour parameters Ccamx and Ccamθ are used in the visual servoing actions of equations 6.4 and 6.6.
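
The sketch below shows, in C, how a straight line could be fitted to the extracted contour points and converted to the contour parameters of equations 6.9 and 6.10. For simplicity it uses an ordinary least squares fit of x = a·y + b rather than the TLS routine mentioned above; all names are illustrative assumptions.

#include <math.h>

/* Contour position [mm] and orientation [rad] in the camera frame,
   from n extracted contour points (x_p[i], y_p[i]) in pixels. */
typedef struct { double x_cam; double theta_cam; } ContourParams;

ContourParams fit_contour(const double *x_p, const double *y_p, int n,
                          double z_cam,   /* camera-object distance [mm], negative */
                          double f,       /* focal length [mm]                     */
                          double mu_p)    /* pixel dimension [mm/pix]              */
{
    /* Least squares fit of x = a*y + b (cf. eq. 6.9, where a = tan(-theta)
       and b is the contour position in pixels). */
    double sy = 0.0, sx = 0.0, syy = 0.0, syx = 0.0;
    for (int i = 0; i < n; ++i) {
        sy  += y_p[i];
        sx  += x_p[i];
        syy += y_p[i] * y_p[i];
        syx += y_p[i] * x_p[i];
    }
    double a = (n * syx - sy * sx) / (n * syy - sy * sy);
    double b = (sx - a * sy) / n;

    ContourParams c;
    c.theta_cam = -atan(a);              /* orientation, unchanged in camera frame */
    c.x_cam = -b * mu_p * z_cam / f;     /* eq. 6.10: pixel offset to millimetres  */
    return c;
}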

The contour is now represented by the frame CcamH, with origin at the contour, x-axis normal and y-axis tangent to the contour:

{}^{C}_{cam}H = \begin{bmatrix} \cos({}^{C}_{cam}\theta) & -\sin({}^{C}_{cam}\theta) & 0 & {}^{C}_{cam}x \\ \sin({}^{C}_{cam}\theta) & \cos({}^{C}_{cam}\theta) & 0 & 0 \\ 0 & 0 & 1 & {}^{C}_{cam}z \\ 0 & 0 & 0 & 1 \end{bmatrix}.    (6.11)

This position is offset, normal to the contour, by the tool radius r_t (12.5 mm), to give OCcamH: the position of (one point on) the Offset Contour (still expressed in the camera frame),

{}^{OC}_{cam}H = \begin{bmatrix} 1 & 0 & 0 & \cos({}^{C}_{cam}\theta)\, r_t \\ 0 & 1 & 0 & \sin({}^{C}_{cam}\theta)\, r_t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} {}^{C}_{cam}H.    (6.12)

Then, the relative frame position is transformed to absolute coordinates. This is based on the transformation from camera to end effector frame, which is determined by calibration, and the forward kinematics of the robot (see appendix section A.3), which gives the end effector pose in absolute coordinates as a function of the robot joint coordinates q:

{}^{OC}_{abs}H = {}^{ee}_{abs}FWK(q)\; {}^{cam}_{ee}H\; {}^{OC}_{cam}H.    (6.13)

Finally, this single measurement is corrected to account for the deformation of the tool and the robot under the current contact forces. This correction is based on a linear spring model with stiffness camk. It shifts the contour position to the Corrected Offset Contour COCabsH according to

{}^{COC}_{abs}H = \begin{bmatrix} 1 & 0 & 0 & \sin({}^{t}_{abs}\theta_z)\, {}^{t}F_y / {}^{cam}k \\ 0 & 1 & 0 & -\cos({}^{t}_{abs}\theta_z)\, {}^{t}F_y / {}^{cam}k \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} {}^{OC}_{abs}H    (6.14)

with tF_y the normal force and tabsθz the (orientation of the) tangent in the (current) contact point.

From each image, one measurement of the contour (represented by the frame COCabsH) is taken and logged. Together, these measurements represent the desired path to be followed by the tool center.

Note that the order in which the above calculations are performed may change. For example, the tool offset computation (equation 6.12) may also be executed as the last step just before the data logging.

Tool position measurement : The measured tool pose tabsH^m follows straightforwardly from the robot pose and the relative pose of the tool w.r.t. the end effector, teeH:

{}^{t}_{abs}H^{m} = {}^{ee}_{abs}FWK(q)\; {}^{t}_{ee}H.    (6.15)

The measured tool pose, however, does not take into account the tool (and robot wrist) deformation under the current contact force tF^m. Due to the tool compliance tk^inv, the actual tool pose, tabsH^a, differs from the measured pose. The measured pose thus has to be corrected in the sense of the acting contact force, that is normal to the contour (in the ty-direction):

{}^{t}_{abs}H^{a} = \begin{bmatrix} 1 & 0 & 0 & \sin({}^{t}_{abs}\theta_z)\, {}^{t}F_y\, {}^{t}k^{inv} \\ 0 & 1 & 0 & -\cos({}^{t}_{abs}\theta_z)\, {}^{t}F_y\, {}^{t}k^{inv} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} {}^{t}_{abs}H^{m}.    (6.16)

Figure 6.7: Matching vision data to the tool frame

Equation 6.16 is similar to equation 6.14. They use the same contact force tF_y for the pose corrections (be it for a different point on the contour). Both equations use, however, different stiffness or compliance values: the tool and camera-setup compliances are not equal! (Both compliances are determined experimentally.)
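
The compliance corrections of equations 6.14 and 6.16 both amount to shifting a pose along the contour normal over a distance Fy/k. A planar C sketch of this shift is given below; the same routine can serve the camera measurement and the tool pose, each with its own stiffness. The planar pose representation and the names are illustrative assumptions.

#include <math.h>

typedef struct { double x, y, theta_z; } PlanarPose;   /* absolute coordinates */

/* Shift a pose normal to the contour to compensate a deformation under
 * the normal contact force F_y (cf. eqs. 6.14 and 6.16). theta_z is the
 * tangent orientation in the contact point, k the stiffness [N/mm]
 * (i.e. the inverse of the compliance). */
PlanarPose compliance_correction(PlanarPose p, double F_y, double k)
{
    PlanarPose out = p;
    out.x +=  sin(p.theta_z) * F_y / k;
    out.y += -cos(p.theta_z) * F_y / k;
    return out;
}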

Matching : The matching is based on the corresponding absolute positions of, on the one hand, the tool pose and, on the other hand, the visually measured path. The path ahead is logged in a look-up table, see figure 6.5, containing one entry of the absolute corrected contour pose (represented by COCabsx, COCabsy and COCabsθz, corresponding to the result of equation 6.14) for each image. The table is extended with the arc length s, which will be used in the curvature calculation in the next subsection.

In order to eliminate the effects of the vision delay time T^vs, we need to compute the control parameters for the next time instant. Hence, not the current tool pose, but the predicted tool pose for the next time instant (k + 1), tabsH^a((k + 1)T^vs), is used as an (interpolating) pointer into the look-up table:

{}^{t}_{abs}H^{a}((k+1)T^{vs}) = \begin{bmatrix} 1 & 0 & 0 & \cos({}^{t}_{abs}\theta_z)\, {}^{t}v^{c}_{x}\, T^{vs} \\ 0 & 1 & 0 & \sin({}^{t}_{abs}\theta_z)\, {}^{t}v^{c}_{x}\, T^{vs} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} {}^{t}_{abs}H^{a}(kT^{vs}).    (6.17)

Figure 6.7 summarizes all the matching steps. The logged position which lies closest to the predicted tool position is indicated as the matched position h.
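
A minimal C sketch of this position based matching is shown below: the current tool position is propagated over one vision period (equation 6.17, restricted to the plane) and the closest logged contour entry is returned. The table layout and the names are assumptions for illustration.

#include <math.h>
#include <stddef.h>

/* Logged corrected offset contour pose plus arc length. */
typedef struct { double x, y, theta_z, s; } ContourEntry;

/* Predict the tool position one vision period T_vs ahead (eq. 6.17,
 * planar form) and return the index of the closest logged entry. */
size_t match_contour(const ContourEntry *table, size_t n,
                     double tool_x, double tool_y, double tool_theta_z,
                     double v_x_c, double T_vs)
{
    double px = tool_x + cos(tool_theta_z) * v_x_c * T_vs;
    double py = tool_y + sin(tool_theta_z) * v_x_c * T_vs;

    size_t best = 0;
    double best_d2 = 1e300;
    for (size_t i = 0; i < n; ++i) {
        double dx = table[i].x - px;
        double dy = table[i].y - py;
        double d2 = dx * dx + dy * dy;
        if (d2 < best_d2) { best_d2 = d2; best = i; }
    }
    return best;
}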


The advantage of the position based matching used here is its simplicity. The position based matching will however fail if the contour itself moves. In this case, an arc length based matching method may give a solution. This option is not further investigated.

6.4.3 Calculating the feedforward signal

The implemented feedforward (loop d+ff in figure 3.1) has to avoid the occurrence of a tracking angle error Δθz, using the following feedforward control signal:

{}^{t}\omega^{ff}_{z} = -\kappa\, {}^{t}v^{c}_{x}    (6.18)

with κ the curvature of the contour in the matched position and tv^c_x the tangent velocity of the tool, as explained in section 3.2. (Note the added minus sign in equation 6.18 w.r.t. equation 3.7: the correct sense of the feedforward signal depends on the setup.)

An obvious way to calculate κ is the use of a fitted contour model. As previously explained in section 3.8, however, this poses some problems: for a start, the calculation of the curvature from the fitted contour is very noise sensitive due to the second derivative. Furthermore, the fitted contour may differ from the real contour, especially for simple models, or may be computationally expensive for more complicated models. Finally, not the curvature in one point is needed, but the mean curvature over the arc length travelled during one time interval (T^vs).

In order to avoid all of the mentioned problems, the curvature is calculated as the mean change in orientation of the contour over the travelled arc length ds (see also section 3.8):

\kappa = \frac{d\, {}^{COC}_{abs}\theta_z}{ds}.    (6.19)

κ results from the Total Least Squares [82] solution of a set of m first order equations (see equation 3.20) lying symmetrically around the matched position (i.e. position h in figure 6.7). Note that there is no phase lag involved in the computation of κ.

For the feedforward control too, the compliance of the tool needs to be taken into account. After all, due to the tool compliance, the z-axes of end effector and tool frames do not coincide. At an outer curve (e.g. curves 1 and 3 in figure 6.10), the end effector makes a sharper turn than the tool and thus needs to rotate faster. Figure 6.8 exemplifies this. The desired angular feedforward velocity eeω^ff_z is therefore

{}^{ee}\omega^{ff}_{z} = -\kappa_{ee}\, {}^{t}v^{c}_{x}    (6.20)

with

\kappa_{ee} = \frac{\kappa}{1 - \kappa\, {}^{t}F_y / {}^{t}k},    (6.21)

tF_y the normal contact force and tk the tool stiffness. The distance between end effector path and tool path amounts to tF_y/tk. This explains equation 6.21.

Figure 6.8: Shift between end effector path and tool path due to the tool compliance
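
The following C sketch estimates the curvature as the slope of the logged orientation versus arc length around the matched entry (equation 6.19, here with an ordinary least squares fit instead of the TLS solution used in the thesis) and turns it into the end effector feedforward of equations 6.20 and 6.21. The names and the window handling are illustrative assumptions.

#include <stddef.h>

typedef struct { double x, y, theta_z, s; } ContourEntry;

/* Mean curvature around matched index h: least squares slope of
 * theta_z versus arc length s over at most 2*half+1 entries (cf. eq. 6.19). */
double curvature_at(const ContourEntry *table, size_t n, size_t h, size_t half)
{
    size_t lo = (h > half) ? h - half : 0;
    size_t hi = (h + half < n - 1) ? h + half : n - 1;
    double ss = 0, st = 0, sss = 0, sst = 0;
    double m = (double)(hi - lo + 1);
    for (size_t i = lo; i <= hi; ++i) {
        ss  += table[i].s;
        st  += table[i].theta_z;
        sss += table[i].s * table[i].s;
        sst += table[i].s * table[i].theta_z;
    }
    return (m * sst - ss * st) / (m * sss - ss * ss);
}

/* End effector feedforward (eqs. 6.18, 6.20, 6.21): the contour curvature
 * is rescaled for the compliant tool before being applied. */
double feedforward_omega(double kappa, double v_x_c, double F_y, double k_tool)
{
    double kappa_ee = kappa / (1.0 - kappa * F_y / k_tool);
    return -kappa_ee * v_x_c;
}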

The feedforward velocity calculated according to equations 6.18 to 6.21 is added to the feedback control actions described in section 6.4.1.

6.5 Experiments

First, the experimental setup is described briefly. Then the results are presented.

Experimental setup : Figure 6.1 gives an overview of the setup. It consists of a KUKA 361 robot, with a SCHUNK force sensor together with an eye-in-hand SONY CCD XC77 camera with a 6.15 mm lens. The CCD camera consists of 756 by 581 square pixels with 0.011 mm pixel width, of which however only a sub-image of 128 by 128 pixels is used. The dimensions of the camera mounting are described in appendix section F.3.

Figure 6.9: Experimental setup

Instead of the commercial controller, our own software environment COMRADE [80, 26] is used. The high level task description for the complete task is given in appendix section D.2. The robot controller and force acquisition are running at 100 Hz on a T801 transputer board. The image processing, the calculation of the camera position control signals, the matching and logging, and the feedforward calculation are implemented on a TI-C40 DSP unit, with frame grabber and transputer link. The image processing and calculations are limited by the non-interlaced frame rate of 25 Hz. Since the robot controller and image acquisition rates are different, the robot controller uses the most recent DSP calculations 4 times in a row.


Figure 6.10: Paths travelled by camera and task frame

The force probe is about 600 mm long with compliance k^inv_y = 0.09 mm/N. The distance r_AB is 55 mm. The camera is placed about 200 mm above the workpiece, resulting in a camera resolution of about 3 pix/mm. The testing object has a sinusoidal form and is described in appendix section F.1.


Figure 6.11: Measured normal contact forces tF^m_y without and with the use of vision based feedforward on the tracking direction tθz (top); actual and ideal feedforward signals for the given contour (bottom)

Results : Figure 6.10-top shows the uncorrected paths travelled by camera and task frame. The plotted task and end effector frame directions illustrate the variable relation between them during task execution. Figure 6.10-bottom shows the logged path (as measured by the vision system, offset by the tool radius and corrected for the camera-setup compliance) and the corrected tool path. The maximum error between these two is about 1 mm. This validates the matching method.

Figure 6.11-top compares the measured contact forces in the ty-direction for two experiments with the tangential velocity set to 25 mm/sec: when feedforward is used, the desired normal contact force of 30 N is maintained very well; without feedforward, both contact loss and contact forces of about 60 N occur.

The tangential velocity is limited to 25 mm/sec when only feedback is used, but can be increased to 75 mm/sec without loss of contact for the given contour when applying vision based feedforward.

Figure 6.11-bottom shows the good correspondence between the actually calculated and the ideal feedforward signals for the given contour. The method used for the curvature calculation, however, levels out peaks in the curvature profile. This gives a less noisy feedforward signal but causes small remaining contact force errors at positions of high curvature.

6.6 Conclusion

This chapter presents a combined force tracking/visual servoing task in a variable EOL configuration. It shows how the quality of force controlled planar contour following improves significantly by adding vision based feedforward on the force tracking direction. This reduces tracking errors, resulting in a faster and more accurate execution of the task. The feedforward signal is calculated from the on-line generated local data of the contour. To get a good match between the tool path and the visually measured contour data, the compliances of both the tool and the camera-setup are modelled and incorporated in the measurements.

Vision based feedforward is fused with feedback control in a tracking direction. As the tracking control itself is based on force measurements, this is an example of shared control.

Keeping the contour in the camera field of view while maintaining a force controlled contact, however, imposes additional requirements on the controller. This double control problem is solved using the redundancy for rotation in the plane, which exists for rotationally symmetric tools. The key to this solution is redefining the relation of the task frame with respect to the end effector in order to keep the task frame tangent to the contour, while rotating the end effector to position the camera above the contour. Hence, the orientations of the task and end effector frames are controlled independently, hereby fully exploiting the variable EOL configuration.

The experimental results validate the approach used.


Chapter 7

Planar contour following at corners

7.1 Introduction

The solution presented in the previous chapter is not directly applicable to contours which contain corners or places with extremely high curvature. This chapter shows how the combined vision/force approach of the previous chapter can be adapted to also improve contour following tasks at corners. (Only sharp, not rounded, corners are considered.)

The camera watches out for corners. If a corner is detected, a finite state control is activated to successfully take the corner. The corner detection and the finite state controller are the main new contributions w.r.t. the approach presented in chapter 6.

Setup : The global setup of figure 6.1 remains. Only in this case, the contour contains a corner in the path ahead. As in the previous chapter, the camera is mounted on the end effector ahead of the force sensor with the optical axis normal to the object plane. The approach is still position based and endpoint open-loop. It uses a calibrated camera-setup.

Overview : Section 7.2 describes the corner detection. Section 7.3 treats the new control approach issues. The basic control scheme, shown in figure 3.1, is augmented to a three layer control structure with a finite state machine. Section 7.4 presents the experimental results. Finally, section 7.5 concludes this chapter.

7.2 Path measurement and corner detection

Section 6.4.2 and figure 6.5 already described the contour measurement. Figure 7.1 briefly reviews the results. It shows the measured contour, the actual contour (or corrected contour), the offset contour and the corrected offset contour. The latter is the desired tool (center) path. It further shows the measured tool center and the actual tool center. The latter must coincide with the corrected offset contour.

Figure 7.1: Measured, actual, offset and corrected offset contours as measured or computed by the vision system on the one hand, and measured tool and actual tool center paths on the other hand

The corner detection algorithm is based on the first step of the contour measurement, which extracts local contour points. If there is a sudden jump in the coordinates of the extracted contour points [Xp Yp], or if there are empty scan lines due to the absence of an edge, the scan window is shifted and rotated, as shown in figures 7.2 and 7.3 (the rotation is 45°; the shift is large enough to avoid that the new scan window contains the expected corner). The image is scanned again. If the second contour measurement is correct (no sudden jump nor empty scan lines), as shown in figure 7.3, its orientation is compared to the orientation of the previously logged contour. A corner is identified if the difference in orientation exceeds a given threshold. The position of the corner then easily follows from the intersection of two lines.
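
A possible C sketch of this detection test is given below: it flags a corner candidate when consecutive contour points jump by more than a threshold or when a scan line returns no edge. The threshold value and the encoding of a missing edge are assumptions for illustration.

#include <math.h>

/* Returns 1 if the extracted contour points suggest a corner in the image:
 * either a scan line without an edge, or a sudden jump between consecutive
 * contour points. */
int corner_candidate(const double *x_p, const int *edge_found, int n,
                     double max_jump /* [pixels] */)
{
    for (int i = 0; i < n; ++i) {
        if (!edge_found[i])
            return 1;                              /* empty scan line     */
        if (i > 0 && edge_found[i - 1] &&
            fabs(x_p[i] - x_p[i - 1]) > max_jump)
            return 1;                              /* jump in the contour */
    }
    return 0;
}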


The exact location of the corner, however, is updated afterwards, when the optical axis of the vision system is again positioned over the contour (as it was before the corner occurred), in which case the contour measurement is more accurate.

Figure 7.2: Example of empty scan lines due to the presence of an edge and close-up of rotated and shifted scan window

Figure 7.3: Two consecutive sets of scan lines for corner detection

The corner position is offset by the tool radius. This offset corner is the starting point of the tool path around the corner, as shown in figure 7.6. We can calculate the offset corner as the intersection of the corrected offset contour, just before the corner occurred, with the corrected contour, after the corner occurred. These two lines are given by

\begin{cases} x = {}^{COC}_{abs}\theta_z(1)\, y + {}^{COC}_{abs}x(1) \\ x = {}^{CC}_{abs}\theta_z(2)\, y + {}^{CC}_{abs}x(2) \end{cases}    (7.1)

with superscript CC indicating the corrected or actual contour. All parameters are expressed in absolute coordinates. For reasons of simplicity, the preceding sub- and superscripts are left out in the following. In order to get a stable, error-free computation of the intersection of the two lines of equation 7.1, the equation is rewritten as

\begin{cases} x(1) + d\,\cos(\theta_z(1) + \pi/2) = x(2) + l\,\cos(\theta_z(2) + \pi/2) \\ y(1) + d\,\sin(\theta_z(1) + \pi/2) = y(2) + l\,\sin(\theta_z(2) + \pi/2), \end{cases}    (7.2)

with (distance) parameters d and l. Only the solution for parameter d is needed:

d = \frac{[x(1) - x(2)]\cos(\theta_z(2)) + [y(1) - y(2)]\sin(\theta_z(2))}{\sin(\theta_z(1) - \theta_z(2))}.    (7.3)

(Since the angles θz(1) and θz(2) are never equal, the denominator of equation 7.3 never becomes zero.) The resulting intersection of the two lines, which is the sought position of the offset corner, is then

\begin{cases} x_{corner} = x(1) - d\,\sin(\theta_z(1)) \\ y_{corner} = y(1) + d\,\cos(\theta_z(1)) \end{cases}    (7.4)

The difference between the orientations θz(2) and θz(1) gives the angle of the corner to be rounded.
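
A direct C transcription of equations 7.3 and 7.4 is given below; line 1 is the corrected offset contour before the corner and line 2 the corrected contour after the corner, each given by a point and a tangent orientation. The names are illustrative.

#include <math.h>

typedef struct { double x, y; } Point2D;

/* Intersection of two contour lines, each given by a point (x, y) and a
 * tangent orientation theta_z (eqs. 7.2 - 7.4). Assumes theta1 != theta2,
 * which holds at a corner. */
Point2D offset_corner(double x1, double y1, double theta1,
                      double x2, double y2, double theta2)
{
    double d = ((x1 - x2) * cos(theta2) + (y1 - y2) * sin(theta2))
               / sin(theta1 - theta2);              /* eq. 7.3 */
    Point2D c;
    c.x = x1 - d * sin(theta1);                     /* eq. 7.4 */
    c.y = y1 + d * cos(theta1);
    return c;
}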


7.3 Augmented control structure

Figure 7.4 illustrates the augmented control structure. The external Cartesian space control, using vision and force sensors around the low level servo controlled robot, corresponds to the control scheme of figure 3.1. The Cartesian control loop keeps the contour in the camera field of view, while maintaining a constant normal contact force as described in full detail in the previous chapter.

Figure 7.4: Augmented three layer control structure

Figure 7.5: Finite state scheme for planner

On top of the Cartesian space control, the sequencer and/or planner performs the overall control. It determines set-points, monitors transition conditions and adapts the control parameters for the different control states. The finite state machine, shown in figure 7.5, represents these different control states.


When a corner is detected, the tool slows down. At this point the end effector will have to turn very fast (without changing the direction in which the tool is moving) to keep the camera over the contour: an action that is limited by the maximum allowed velocity and/or acceleration of the robot. Here the robot dynamics play an important role. During this transition movement, the contour measurement is not very accurate and hence preferably not used. At sharp corners, moreover, the camera briefly loses the contour.

Once the tool (center) reaches the corner, the corner is taken at constant velocity while adapting the tangent direction (of the tool) by feedforward to follow the desired arc-shaped path. The desired angular feedforward velocity eeω^ff is

{}^{ee}\omega^{ff} = \frac{-\,{}^{ee}v_{x}}{r_t - {}^{t}F_y / {}^{t}k}    (7.5)

with eev_x the tangent velocity of the end effector (or tool), r_t the tool radius, tF_y the normal contact force and tk the tool stiffness. Due to the compliance of the tool, the z-axes of end effector and tool (or task) frames do not coincide. At the corner the end effector makes a sharper turn than the tool (as previously illustrated in figure 6.8). The radius of this turn is smaller than the tool radius by a distance tF_y/tk. This explains equation 7.5.

Figure 7.6: Division of desired path in finite control states

After the corner is taken, the (tangent) velocity will gradually build up and the controller will return to the normal operation state.

Figure 7.6 gives an example which matches the desired path to the different control states. Some control specifications are given.
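
As an illustration of the planner layer, a C sketch of such a finite state machine is shown below. The state names, the transition flags and the set-point handling are assumptions chosen to mirror the description above, not the actual implementation.

/* Finite state sketch for the corner handling planner (cf. figure 7.5). */
typedef enum {
    NORMAL_OPERATION,    /* contour following at nominal speed            */
    APPROACH_CORNER,     /* corner detected: slow down, reorient camera   */
    ROUND_CORNER,        /* arc-shaped tool path with feedforward eq. 7.5 */
    ACCELERATE           /* corner taken: build the velocity up again     */
} CornerState;

typedef struct {
    double v_x_d;        /* commanded tangential velocity [mm/sec] */
    double omega_ff;     /* angular feedforward [rad/sec]          */
} Setpoints;

CornerState planner_step(CornerState s, Setpoints *sp,
                         int corner_detected,      /* from the vision system     */
                         int corner_reached,       /* tool center at the corner  */
                         int corner_taken,         /* arc around corner finished */
                         double v_nominal, double v_corner,
                         double omega_ff_corner)   /* from eq. 7.5               */
{
    switch (s) {
    case NORMAL_OPERATION:
        sp->v_x_d = v_nominal;  sp->omega_ff = 0.0;
        return corner_detected ? APPROACH_CORNER : NORMAL_OPERATION;
    case APPROACH_CORNER:
        sp->v_x_d = v_corner;   sp->omega_ff = 0.0;
        return corner_reached ? ROUND_CORNER : APPROACH_CORNER;
    case ROUND_CORNER:
        sp->v_x_d = v_corner;   sp->omega_ff = omega_ff_corner;
        return corner_taken ? ACCELERATE : ROUND_CORNER;
    case ACCELERATE:
        sp->v_x_d = v_nominal;  sp->omega_ff = 0.0;
        return NORMAL_OPERATION;
    }
    return s;
}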


7.4 Experimental results

The experimental setup corresponds to the one described in section 6.5 and figure 6.9. Only a few changes are made. Of course, the contour path ahead now contains a corner.

Furthermore, a non-interlaced sub-image of 64 by 128 pixels in 256 grey levels is grabbed, of which in normal operation only nine centered lines are used. The force probe is about 220 mm long with stiffness tk = 13.2 N/mm. The distance between the tool center point and the optical axis is 55 mm. The camera is placed about 100 mm above the workpiece, resulting in a camera resolution of about 5.6 pix/mm. The camera-setup has a stiffness camk = 21.2 N/mm.

Figure 7.7: Normal force versus arc length at different fixed velocities without using the vision system

The results of two experiments are presented. As a basis of comparison, the first experiment measures the contact forces while following the unknown contour with a corner of 90° (the angle of the corner is of course also unknown), without using any vision information. Figure 7.7 gives the results. The desired contact force is 30 N. In this experiment, contact is lost or excessive contact forces occur if the velocity is too high. To assure a fairly well maintained force profile, the maximum allowable tangent velocity is only 10 mm/sec. However, experience shows that decreasing the tangent velocity further will not result in a better force profile: if the tangent velocity is too low, the tracking error identification (on velocities) will fail (see also section 3.5) and the robot is unable to round the corner. The task literally gets stuck at the corner.

Figure 7.8: Measured normal force (top) and tangent velocity (bottom) versus arc length for the contour following task using vision based feedforward at the corner

The second experiment implements the described finite state controller and corner detection in a combined vision/force contour following task of the same contour. The high level task description program for this experiment is given in section D.3.

The top of figure 7.8 gives the measured normal force versus arc length for the combined vision/force control of the second experiment. The contour is successfully tracked without major changes in (normal) contact force. Only at the corner itself the contact force is slightly smaller than desired, probably due to imperfect corner measurement and/or non-modelled non-linear deformations of the tool and robot wrist.

The bottom of figure 7.8 gives the measured tangent velocity versus arc length. At the corner the robot slows down to round the corner in the best conditions. After the corner the robot accelerates. For the second experiment, the normal operation velocity is set to 50 mm/sec, reducing the overall execution time w.r.t. the first experiment.

Figure 7.9 shows the measured contour together with the camera and tool paths. The corner is detected correctly. The corner information is used to adapt the execution speed and to set the tool path at the corner. The correct feedforward control at the corner results in the arc shaped paths (for both end effector and actual tool) shown in the figure.

The blow-up at the bottom of figure 7.9 clearly indicates the need to correct the measured contour data according to equation 6.14, which takes the compliance of the camera-setup into account. Although the test object used consists of straight edges, the (uncorrected) measured contour just after the corner is not a straight edge. This is caused by the changing contact situation. As the tool rounds the corner, the direction of the contact force changes over 90° and so does the deformation of the camera-setup. Once this deformation due to the contact force at hand is compensated, the resulting corrected contour is again a straight edge.

Figure 7.9 also shows the measured and the actual tool paths. The measured tool path lies more inwardly than the actual tool path due to the compliance of the tool. To get a good match between the tool path and the desired path (calculated by the vision system), the tool path too needs to be corrected. This correction is similar to the correction of the camera measurement. However, as previously mentioned, the compliance (or stiffness) of the tool (including the wrist) is different from the compliance of the camera-setup.

The maximum allowable approach velocity is 50 mm/sec. For higher velocities, it is possible that the camera misses the contour. After all, only a few (center) lines of the image are scanned and processed, to limit the needed processing time. If velocities higher than 50 mm/sec are desired, the vision system must be instructed to look ahead by scanning the top lines of the grabbed image. If the system does not detect an edge or contour in the top part of the image, a corner is nearby and the tangent velocity has to be reduced to 50 mm/sec. Then the image processing won't miss this corner when it passes the center of the image.

127

Page 142: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

7 Planar contour following at corners

Figure 7.9: Top: Camera path, measured contour and tool path for thesecond experiment; Bottom: Blow-up of the same measurements after thecorner

128

Page 143: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

7.5 Conclusion

7.5 Conclusion

This chapter presents a combined vision/force control approach at cor-ners, being a special case of the planar contour following task.

The vision system is used to watch out for corners. Incorporatingthe camera/tool deformation in the edge measurements enables an ac-curate contour and corner localization. A simple and robust algorithmis implemented to detect corners in the path ahead. Once a corner isdetected, the finite state controller is activated to take the corner inthe best conditions resulting in a faster and more accurately executedtask.

The experimental results validate the used approach.

129

Page 144: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

130

Page 145: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Chapter 8

Additional experiments

8.1 Introduction

This chapter presents experimental results for three combined vi-sion/force tasks. In addition to the results given in the previous chap-ters, they once more validate our approach.

The first experiment is an example of traded control, accordingto subtasks 1.3, 1.7 and 1.8 (described in section 4.5). Subtask 1.3consists of visual aligning the camera with the contour in position andorientation. Subtask 1.7 is a force controlled approach task with fixedtask/camera relation. In subtask 1.8, the force controlled contact ismaintained (with the tool standing still), while the camera positionsover the contour using a variable EOL configuration.

The second experiment is an example of shared control of type 1a(see figure 4.3) being a guarded approach.

The third and last experiment uses a fixed EOL configuration. Itfully corresponds to the blade polishing task (Task 5b) presented infigure 4.8-bottom. However, the test object is a car door.

The next sections describe the experimental setup and present theresults for the three experiments1. The (corresponding) high level taskdescriptions were already given in chapter 4, section 4.5.

1Preliminary results for Task 5a, the path on surface following task, simulatinga truck with semi-trailer, are given by Schillebeeckx and Vandenberk in [69].

131

Page 146: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

8 Additional experiments

8.2 Traded control in EOL

The first experiment uses the experimental setup of the contour follow-ing task of chapter 6. This is an EOL configuration.

Figure 8.1 shows some of the results for subtasks 1.3, 1.7 and 1.8,described in section 4.5. Vision and force control are stable and thesubtasks are all executed successfully. Note that subtask 1.7 is basedon a fixed EOL configuration. Subtask 1.8, on the other hand, exploitsthe additional rotational DOF of a variable EOL configuration.

Figure 8.1: Decay of angular error camθmz in subtask 1.3 (top) and contact

force tFmy and decay of vision error camxm in subtasks 1.7 and 1.8 (bottom)

8.3 Shared control in non-parallel ECL

Figure 8.2 gives the setup for the second experiment. In contrast tothe other two, this one is a non-parallel ECL configuration, which isused for a shared control task of type (df+vs) in an axial direction,according to the first example in figure 4.3.

132

Page 147: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

8.3 Shared control in non-parallel ECL

The task goal is to make contact with an object, avoiding excessivecontact forces. Since the (exact) position of the object is unknown, avery slow approach velocity is mandatory. In a guarded approach, thisapproach velocity depends on the difference in measured and desiredforces: tv

f = Kfkinv(tFm − tF

d). (With Kf = 4 sec−1, kinv = 0.075mm/N and tF

d = −30 N, this gives tvf = 9 mm/sec).

With the vision system, we can speed up the approach simply byaugmenting the guarded approach velocity with vision control, pro-portional to the visually measured distance to the object: tv

vs =Kvs obj

t xm. The total commanded velocity is the sum of the visionand force dependent parts: tv

c = tvf + tv

vs.Figure 8.3 shows the results of the shared control approach exper-

iment. At the start of the task, the object is not visible. The visiondependent velocity is set to a maximum (about 60 mm/sec in this ex-periment but up to 100 mm/sec is feasible). Once the object comes inthe camera field of view, the vision dependent part of the commandedvelocity rapidly decays and the approach slows down. When the dis-tance between the force probe and the object, as seen by the camera,

Figure 8.2: ECL configuration with non-parallel mounting used for sharedcontrol in a guarded approach task

133

Page 148: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

8 Additional experiments

Figure 8.3: Normal force, absolute task frame position and commandedtask frame velocity versus time for the shared control approach experiment

becomes zero, the vision control stops. Due to the chosen setup, thereis no contact yet at this point in time. The force dependent controlhowever remains and stably builds up the contact force to -30 N. Ev-idently, the approach is performed much faster when vision control isadded than without vision control, while preserving stability.

In contrast to all other experiments, this experiment utilizes a newcamera (SONY XC55bb - NTSC) with square pixels (7.4 by 7.4 µm).The frame rate is 60 Hz. This corresponds to an image frequencyequal to 30 Hz. Hence the image sampling period becomes T vs = 33.3ms (instead of 40 ms for the previous PAL camera). Since the robotcontrol and data logging frequency is 100 Hz, the (same) vision control

134

Page 149: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

8.4 3D path in fixed EOL

signals are used three or four times in a row. This can be seen from thestaircase-like decay of the vision based part in the controlled velocity.

8.4 3D path in fixed EOL

The third and last experiment uses a setup with lateral mounted cameraas shown in figure 8.4. The technical drawing for this mounting is givenin appendix section F.4.

This experiment simulates a ‘car door polishing’ task, which fullycorresponds to blade polishing task, being Task 5b in section 4.5.

The task directives are 1) to make contact with the car door andposition the tool normal to the car door surface 2) to approach thedoor edge using vision 3) to align the camera with the edge and 4)to follow the edge. During steps 2 to 4, the force controlled contactof step 1 has to be maintained. Figures 8.5 to 8.7 show the results

Figure 8.4: Fixed EOL configuration with a lateral mounted camera forthe ’car door polishing’ experiment

135

Page 150: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

8 Additional experiments

Figure 8.5: 3D view (top) and top view (bottom) of task and camera framepaths for the ‘car door polishing’ experiment

136

Page 151: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

8.4 3D path in fixed EOL

Figure 8.6: Camera frame errors for the ‘car door polishing’ experiment

Figure 8.7: Contact forces for the ‘car door polishing’ experiment

137

Page 152: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

8 Additional experiments

for steps 2 to 4. Figure 8.5 gives a 3D view and a top view of thepaths followed by task and camera frames. As can be seen from thefigure, the tz-direction is kept normal to the door surface. This isprogrammed by demanding zero torque around tx- and ty-directions.Figure 8.6 gives the vision measurements, which are used to controlthe position in the ty-direction and the orientation about the tz-axis.At the start of the approach phase, the edge is not yet visible. Oncethe edge becomes visible, the tool turns to align the camera with theedge. Then, the edge is followed with a constant velocity of 10 mm/sec.Figure 8.7 shows the measured contact forces during the experiment.These behave oscillatory due to the very stiff contact between tool andcar door. Nevertheless the experiment is executed successfully!

This experiment illustrates that if the image feature of interest isclearly measurable by the vision system and if the tool/object contactincorporates enough compliance (such that an adequate force controlis possible), then an integrated vision/force control is feasible.

8.5 Conclusion

This chapter presents additional experimental results for three exper-iments. All experiments are successfully executed. They show thefitness of the presented approach to implement combined vision/forcetasks in an uncalibrated workspace. They further illustrate the differ-ent camera/tool configurations.

If the image feature of interest is clearly measurable by the visionsystem and if the tool/object contact situation incorporates enoughcompliance to enable an adequate force control, then an integratedvision/force control approach, as presented in this work, is always fea-sible. Hence, a wider range of tasks can be executed with improvedperformance in the sense of increased velocity, accuracy or feasibility.

138

Page 153: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Chapter 9

General conclusion

This work presents a framework to integrate visual servoing and forcecontrol in robotic tasks. Numerous tasks illustrate the fitness of thetask frame formalism and the presented hybrid control scheme as a ba-sis for vision and force integration. The task frame formalism providesthe means to easily model, implement and execute robotic servoingtasks based on combined force control and visual servoing in an uncal-ibrated workspace. The high level task description divides the hybridcontrol space in (separate or mixed) vision and force (among trackingand velocity) subspaces. The key characteristic of the hybrid controlleris the fusion of vision and force control (possibly with feedforward) onthe basis of velocity set-points, which form the input to the joint con-trolled robot.

In contrast to what one might expect, the needed effort to accom-plish a combined vision/force task (as presented in our approach) isless than the sum of efforts needed for both visual servoing and forcecontrol on its own1. After all, avoiding collision or depth estimation ina pure visual servoing task or detecting a change in the contact situa-tion, e.g. at a corner, in a force controlled task are annoying operationswhich are hard to deal with. Using both sensors together remediesthese shortcomings. As shown, the vision system is used to look for astarting point, to align with an edge, to measure the path ahead and towatch out for corners in order to speed up the task or to improve theforce controlled contact. The force sensor, on the other hand, is used

1Except for the the variable EOL configuration

139

Page 154: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

9 General conclusion

to establish the depth to the object plane, to make and maintain con-tact with the object and to avoid collisions. Thanks to the calibratedcombined mounting of vision and force sensor, the mono vision basedcomputations evidently benefit from the known depth to the objectwhen in contact.

The golden rule of thumb in the above examples is easy: ‘Assignspecific control directions to that sensor which is best suited to controlthem’. This rule implies that a (mono) vision system can only controla maximum of 3 task directions properly. The number of force con-trolled directions on the other hand depends on the tool/object contactsituation (which if not known in advance needs to be identified on-line).

This thesis gives numerous examples of traded, hybrid and sharedvision/force controlled tasks. Together with the presented high leveltask descriptions and the experimental results, they underline the po-tential of the used approach.

As can be seen from the comparative table 4.1 (in chapter 4), thekey characteristics of any of the presented tasks are included in thosewhich where chosen to be (fully) experimentally tested. We can there-fore assume that any task presented in chapter 4 can effectively beexecuted.

Also from an economical point of view, a combined setup is ben-eficial. After all, the relatively cheap vision system can improve theperformance of the force controlled task, hereby lowering the still ex-isting threshold to use (relatively expensive) force sensors in industrialpractice.

9.1 Main contributions

This thesis contributes to both visual servoing and combined visualservoing/force control approaches. The emphasize lies however on thelatter. The main contributions are first grouped into four categoriesand then discussed in more detail. They are:

A. A new approach to visual servoing by using the task frame for-malism and relative area algorithms.

B. The unique framework for combined vision/force control with

140

Page 155: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

9.1 Main contributions

(1) both sensors mounted on the end effector in (2) a hybrid vi-sion/force control approach based on the task frame formalism.

C. Classifications of (1) shared control types on the one hand and of(2) camera/tool configurations on the other hand. Both classifi-cations apply to combined mounted vision/force control.

D. Improved planar contour following (1) for both continuous con-tours and contours with corners in (2) a variable endpoint open-loop configuration.

A. Visual servoing: Chapter 5 presents a full 3D, image based visualservoing task. The presented (eye-in-hand) visual alignment task isaccurately executed incorporating following innovations:

• The task frame formalism is extended to incorporate visual ser-voing.

• Using the task frame formalism, the task goal is translatedinto control actions in the task frame. Each task frame directionis hereby linked to one specific image feature. By gradually in-creasing the task control space, the task is successfully executed,even if the alignment object is only partly visible at the start ofthe task.

• The image features are all calculated using (partly new) relativearea algorithms resulting in real-time, robust control.

• A rectangular alignment object is proven to be superior to a circleor an ellipse as alignment object, in the sense that it gives bettermeasurable image features to determine the goal position.

B.1 Combined vision/force mounting: This thesis is one of thefirst, if not the first, to use an integrated vision/force approach withboth vision and force sensors mounted on the (same) end effector.

• The poses of vision and force sensors are identified by calibration.Hence, the vision measurements benefit from the known depth tothe object when in contact.

141

Page 156: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

9 General conclusion

• Mounting the camera on the end effector results in a controllablecamera position, close to the image feature and hence in a highresolution. Furthermore, aligning the image feature of interestwith the optical axis, makes the feature measurement insensitiveto the camera/lense distortion.

• With both sensors mounted on the same end effector, the posi-tion measurements benefit from the high relative accuracy of therobotic system.

• Incorporating the camera/tool deformation (when in contact) inthe measurements enables an accurate contour measurement andcorner localization. This improves the match between actual toolpath and visually measured contour data.

B.2 Integrated hybrid control in the task frame formalism:

• Vision and force sensing are fused in a hybrid control schemewhich forms the heart of our approach. It consists of a (sen-sor based) outer control loop around a (joint) velocity controlledrobot.

• The task frame formalism is the means to easily model, imple-ment and execute combined vision force robotic servoing tasks inan uncalibrated workspace.

• The high level task description divides the hybrid control spacein (separate or mixed) vision, force, tracking and velocitysubspaces. These control subspaces are by definition orthogonal.Any control action may be augmented with feedforward control.

• The hybrid controller implements traded, hybrid as well as shared(vision/force) control, dependent on the high level task descrip-tion.

C.1 Shared control types: Section 4.4 gives examples of all pos-sible forms of shared control (for both polar and axial directions) incombining vision and force control in one direction or by adding visionbased feedforward to either a force direction or a force based trackingdirection.

C.2 New classification of camera/tool configurations: A frame-work is established to classify combined vision/force tasks, with both

142

Page 157: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

9.1 Main contributions

sensors mounted on the end effector, into four camera/tool configura-tions:

1. Parallel endpoint closed-loop

2. Non-parallel endpoint closed-loop

3. Fixed endpoint open-loop

4. Variable endpoint open-loop

• Examples are given for all configurations but mostly the EOLconfiguration with variable task/camera frame relation is fullyexplored, since it is the most adequate one for ‘simple’ imageprocessing and the most challenging in the sense of control.

• All configurations are compatible with the presented hybrid con-trol scheme.

D.1 Improved planar contour following: Chapters 6 and 7 fullyexamine the planar contour following task for both continuous curvesand contours with corners.

• They show how the quality of force controlled planar contour fol-lowing improves significantly by adding vision based feedforwardon the force tracking direction.

• They are at once an excellent example of shared control.

• An adequate approach is presented to compute the curvaturebased feedforward on-line and in real-time. The contour is mod-elled by tangent lines and the curvature is computed as the changein orientation versus arc length. A twice applied least squares so-lution (once for the tangent, once for the curvature) avoids noisycontrol signals.

• A simple and robust corner detection algorithm is proposed toadequately detect and measure corners in the path ahead. Thecorner is rounded, maintaining the contact force, using (again)feedforward. To this end, the control structure is augmentedwith a finite state machine.

D.2 Variable EOL control: To realize the improved planar contourfollowing control an variable EOL configuration is suggested.

143

Page 158: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

9 General conclusion

• An EOL configuration avoids occlusion of the object or contourby the tool and offers the opportunity to detect in advance thepath ahead, which, regarded the low vision bandwidth, can onlybe advantageous.

• For rotationally symmetric tools, a redundancy exists for the ro-tation in the plane. This degree of freedom is used to position thecamera over the contour while maintaining the force controlledcontact. This results in a variable relation between task and endeffector orientations.

• Special control issues arise, due to the time shift between themoment the contour is measured and the moment these data areused. A strategy to match the (vision) measurement data withthe contact at hand based on Cartesian position is proposed.

• Using the variable camera/task frame relation, our approach as-sures a tangent motion for both the task frame (i.e. at the contactpoint) and the camera frame (i.e. at the intersection of the opticalaxis with the object plane) along the contour.

9.2 Limitations and future work

In view of the limited equipment due to the older type of robot withemerging backlash, limited bandwidth and restricted processing power,the still excellent, achieved results emphasize once more the potential ofan outer sensor based control loop around an internal velocity scheme.Better results may be achieved if the sample frequency increases, if theimage processing improves (increased computational power and morecomplex algorithms) or if the controller takes the robot dynamics intoaccount. For the latter, however, the present sample frequency is toolow.

Introducing the proposed approach in industrial practice faces yetanother problem. Although it should be commonly known, as onceagain underlined in this work, that an outer sensor based control loopis best built around a velocity instead of a position controlled robot,most robot controllers are in fact position based. Furthermore, robotmanufactures offer only limited sensor access to their controllers. Thiscomplicates the migration of the proposed method to industrial appli-cations.

144

Page 159: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

9.2 Limitations and future work

The presented approach mainly focuses on the lowest level of con-trol. How a given task is best tackled, or with other words, how thecontrol directions are divided over vision and force control by the taskdescription, is left over to the responsibility of the programmer or issubject to other research such as programming by human demonstra-tion, active sensing, human-machine interactive programming or tasklevel programming (in which the high level task description is automati-cally generated and autonomously executed). These latter methods canutilize the presented control approach as a basis to integrate vision andforce in their respective fields.

This work uses mono vision only. Future work may look into thespecifics of combined stereo vision and force control.

Finally, this work is not about vision or vision processing by itself.However essential, the vision processing is kept simple and robust, inorder to make real-time control possible. With increasing processingpower, more complex, real time algorithms will certainly enable theprocessing of a more diverse range of working scenes. This opens newresearch possibilities, e.g. by using real-time high-performance snake-algorithms in tracking the image features. The underlying controlstructure and ideas, however, will remain intact.

145

Page 160: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

146

Page 161: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

[1] P. K. Allen, A. Timcenko, B. Yoshimi, and P. Michelman. Auto-mated tracking and grasping of a moving object with a robotichand-eye sytem. IEEE Trans. on Robotics and Automation,9(2):152–165, April 1993.

[2] M. Asada, T. Tanake, and K. Hosoda. Visual tracking of unknownmoving object by adaptive binocular visual servoing. In Int. Conf.on Multisensor Fusion and Integration for Intel. Syst., pages 249–254, Taipai, Taiwan, August 1999.

[3] J. Baeten, H. Bruyninckx, and J. De Schutter. Combining eye-in-hand visual servoing and force control in robotic tasks using thetask frame. In Int. Conf. on Multisensor Fusion and Integrationfor Intel. Syst., pages 141–146, Taipai, Taiwan, August 1999.

[4] J. Baeten and J. De Schutter. Improving force controlled planarcontour following using on-line eye-in-hand vision based feedfor-ward. In Proc. of Int. Conf. on Advanced Intelligent Mechatronics,pages 902–907, Atlanta, GA, September 1999.

[5] J. Baeten and J. De Schutter. Combined vision/force control atcorners in planar robotic contour following. In Proc. of Int. Conf.on Advanced Intelligent Mechatronics, pages 810–815, Como, Italy,July 2001.

[6] J. Baeten, W. Verdonck, H. Bruyninckx, and J. De Schutter. Com-bining force control and visual servoing for planar contour follow-ing. Machine Intelligence and Robotic Control, 2(2):69–75, July2000.

147

Page 162: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

[7] H. Bruyninckx and J. De Schutter. Specification of force-controlledactions in the ”task frame formalism”– A synthesis. IEEE Trans.on Robotics and Automation, 12(4):581– 589, August 1996.

[8] J. Canny. Finding edges and lines in images. Master’s thesis, MIT,Cambridge, USA, 1983.

[9] J. Canny. A computational approach to edge detection. IEEETrans. on Pattern Analysis and Machine Intelligence, 8(6):679–698, November 1986.

[10] C. Canudas De Wit et al. Theory of Robot Control. Springer,London, 1996. pp. 150-170.

[11] F. Chaumette, P. Rives, and B. Espiau. Positioning of a robotwith respect to an object, tracking it and estimating its velocity byvisual servoing. In IEEE Int. Conf. on Robotics and Automation,pages 2248–2253, Sacramento, CA, 1991.

[12] G. W. Chu and M. J. Chung. Selection of an optimal cameraposition using visibility and manipulability measures for an activecamera system. In Int. Conf. on Intelligent Robots and Systems,pages 429–434, Takamatsu, Japan, 2000.

[13] C. Collewet and F. Chaumette. A contour approach for image-based control on objects with complex shape. In Int. Conf. onIntelligent Robots and Systems, pages 751–756, Takamatsu, Japan,2000.

[14] P. I. Corke and M. Good. Dynamic effects in high-performancevisual servoing. In IEEE Int. Conf. on Robotics and Automation,pages 1838–1843, Nice, France, 1992.

[15] P. I. Corke. Dynamics of visual control. In Workshop M-5, VisualServoing: Achievements, Applications and Open Problems, IEEEInternational Conference on Robotics and Automation, 1994. SanDiego, California.

[16] P. I. Corke. Visual control of robot manipulators – A review. InWorkshop M-5, Visual Servoing: Achievements, Applications andOpen Problems, IEEE International Conference on Robotics andAutomation, pages 1–31, 1994. San Diego, California.

148

Page 163: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

[17] P. I. Corke. Dynamic issues in robot visual-servo systems, pages488–498. G. Giralt and G. Hirzinger, editors, Robotics Research.The Seventh International Symposium. Springer-Verlag, 1996.

[18] P. I. Corke. Visual Control of Robots: High-Performance visualservoing. Mechatronics. Research Studies Press (John Wiley),1996.

[19] E. Coste-Maniere, P. Couvignou, and P. K. Khosla. Contour fol-lowing based on visual servoing. In Int. Conf. on Intelligent Robotsand Systems, pages 716–722, Yokohama, Japan, July 1993.

[20] E. Coste-Maniere, P. Couvignou, and P. K. Khosla. Visual ser-voing in the task-function framework: a contour following task.Journal of Intelligent and Robotic Systems, 12:1–21, 1995.

[21] S. Demey, H. Bruyninckx, and J. De Schutter. Model-based planarcontour following in the presence of pose and model errors. Int.J. Robotics Research, 16(6):840–858, 1996.

[22] S. Demey, S. Dutre, W. Persoons, P. Van De Poel, W. Witvrouw,J. De Schutter, and H. Van Brussel. Model based and sensor basedprogramming of compliant motion tasks. In Proc. Int. Symposiumon Industrial Robots, pages 393–400, 1993. Tokyo.

[23] J. De Schutter, H. Bruyninckx, S. Demey, and W. Witvrouw. Forcecontrolled robots tutorial. In Technology Transfer Workshop onIndustrial Vision and Autonomous Robots, June 1994. Leuven,Belgium.

[24] J. De Schutter and H. Van Brussel. Compliant robot motion. II. Acontrol approach based on external control loops. Int. J. RoboticsResearch, 7(4):18–33, 1988.

[25] J. De Schutter and H. Van Brussel. Compliant robot motion. I. Aformalism for specifying compliant motion tasks. Int. J. RoboticsResearch, 7(4):3–17, 1988.

[26] J. De Schutter, P. Van de Poel, W. Witvrouw, H. Bruyninckx,S. Demey, and S. Dutre. An environment for experimental com-pliant motion. In Tutorial on Force and Contact Control in RoboticSystems: A historical Perspective and Current Technologies, IEEEConf. on Robotics and Automation, pages 112–126, 1993. Atlanta.

149

Page 164: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

[27] J. De Schutter. Compliant robot motion: Task formulation andcontrol, 1986. Ph.D. dissertation, Departement of Mechanical En-gineering, Katholieke Universiteit Leuven, Belgium.

[28] J. De Schutter. Improved force control laws for advanced trackingapplications. In IEEE Int. Conf. on Robotics and Automation,pages 1497–1502, Philadelphia, PA, 1988.

[29] T. Drummond and R. Cipolla. Real-time tracking of complexstructures with on-line camera calibration. In Britisch MachineVision Conference, 1999.

[30] T. Ellis, A. Abbood, and B. Brillaullt. Ellipse detection an match-ing with uncertainty. Image and Vision Computing, 10(5):271–276,June 1992.

[31] B. Espiau, C. Francois, and P. Rives. A new approach to visualservoing in robotics. IEEE Trans. on Robotics and Automation,8(3):313–326, June 1992.

[32] J. T. Feddema and O. R. Mitchell. Vision-guided servoing withfeature-based trajectory generation. IEEE Trans. on Robotics andAutomation, 5(5):691–700, october 1989.

[33] G. Hager and S. Hutchinson. Visual Servoing: Achievements, Ap-plications and Open Problems. Workshop M-5 of IEEE Int. Conf.on Robotics and Automation, San Diego, CA, 1994.

[34] R. S. Hartenberg and J. Denavit. A kinematic notation for lowerpair mechanisms based on matrices. Journal of Applied Mechanics,77:215–221, june 1955.

[35] K. Hashimoto, T. Kimoto, T. Ebine, and H. Kimura. Manipu-lator control with image-based visual servo. In IEEE Int. Conf.on Robotics and Automation, pages 2267–2271, Sacramento, CA,1991.

[36] G. Hirzinger. Direct digital control using a force-torque sensor.In Proc. IFAC Symp. on Real Time Digital Control Applications,1983.

[37] N. Hogan. Impedance control: An approach to manipulation: Parti-theory; part ii-implementation; part iii-applications. ASME J.Dynamic Systems, Measurements and Control, 107(1):1–24, 1985.

150

Page 165: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

[38] B. Horn and B. Schunck. Determining optical flow. ArtificialIntelligence, 17:185–203, 1981.

[39] K. Hosoda and M. Asada. Versatile visual servoing without knowl-edge of true jacobian. In Int. Conf. on Intelligent Robots andSystems, pages 186–193, Munchen, Germany, 1994.

[40] K. Hosoda, K. Igarashi, and M. Asada. Adaptive hybrid controlfor visual and force servoing in an unknown environment. IEEERob. Automation Mag., 5(4):39–43, December 1998.

[41] S. Hutchinson, G. Hager, and P. Corke. A tutorial on visual servocontrol. IEEE Trans. on Robotics and Automation, 12(5):651–670,October 1996.

[42] M. Isard and A. Blake. Contour tracking by stochastic propagationof conditional density. In Proceedings of European Conference onComputer Vision, pages 343–356, 1996.

[43] I. Ishii, Y. Nakabo, and M. Ishikawa. Target tracking algorithmfor 1ms visual feedback system using massively parallel processing.In Proceedings of IEEE International Conference on Robotics andAutomation, pages 2309–2314, 1996. Minneapolis, Minnesota.

[44] S. Jorg, J. Langwald, J. Stelter, G. Hirzinger, and C. Natale. Flex-ible robot-assembly using a multi-sensory approach. In IEEE Int.Conf. on Robotics and Automation, pages 3687–3694, San Fran-cisco, CA, 2000.

[45] H. W. Kim, J. S. Cho, and I. S. Kweon. A novel image-basedcontrol-law for the visual servoing system under large pose error.In Int. Conf. on Intelligent Robots and Systems, pages 263–268,Takamatsu, Japan, 2000.

[46] F. Lange and G. Hirzinger. A universal sensor control architectureconsidering robot dynamics. In Int. Conf. on Multisensor Fusionand Integration for Intel. Syst., pages 277–282, Germany, 2001.

[47] F. Lange, P. Wunsch, and G. Hirzinger. Predictive vision basedcontrol of high speed industrial robot paths. In IEEE Int. Conf.on Robotics and Automation, pages 2646–2651, Leuven, Belgium,1998.

151

Page 166: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

[48] S.-W. Lee, B.-J. You, and G. D. Hager. Model-based 3-d ob-ject tracking using projective invariance. In IEEE Int. Conf. onRobotics and Automation, pages 1598–1594, Detroit, MI, 1999.

[49] R. Lenz and R. Tsai. Techniques for calibration of the scale factorand image center for high accuracy 3d machine vision metrology.In IEEE Int. Conf. on Robotics and Automation, pages 68–75,Raleigh, NC, 1987.

[50] E. Malis, G. Morel, and F. Chaumette. Robot control using dis-parate multiple sensors. Int. J. Robotics Research, 20(5):364–377,may 2001.

[51] E. Marchand. Visp: A software environment for eye-in-hand visualservoing. In IEEE Int. Conf. on Robotics and Automation, pages3224–3229, Detroit, MI, 1999.

[52] M. Mason. Compliance and force control for computer controlledmanipulators. IEEE Trans. on Systems, Man, and Cybernetics,11:418–432, 1981.

[53] G. Morel, E. Malis, and S. Boudet. Impedance based combinationof visual and force control. In IEEE Int. Conf. on Robotics andAutomation, pages 1743–1748, Leuven, Belgium, 1998.

[54] K. Nagahama, K. Hashimoto, T. Noritsugu, and M. Takaiwa. Vi-sual servoing based on object motion estimation. In Int. Conf. onIntelligent Robots and Systems, pages 245–250, Takamatsu, Japan,2000.

[55] Y. Nakabo and M. Ishikawa. Visual impedance using a 1 ms visualfeedback system. In IEEE Int. Conf. on Robotics and Automation,pages 2333–2338, Leuven, Belgium, 1998.

[56] A. Namiki, Y. Nakabo, I. Ishii, and M. Ishikawa. High speedgrasping using visual and force feedback. In IEEE Int. Conf. onRobotics and Automation, pages 3195–3200, Detroit, MI, 1999.

[57] B. J. Nelson and P. K. Khosla. Force and vision resolvability forassimilating disparate sensory feedback. IEEE Trans. on Roboticsand Automation, 12(5):714–731, October 1996.

152

Page 167: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

[58] B. J. Nelson, J. Morrow, and P. K. Khosla. Robotic manipulationusing high bandwidth force and vision feedback. Mathl. ComputerModelling, 24(5/6):11–29, 1996.

[59] B. J. Nelson, N. P. Papanikolopoulos, and P. K. Khosla. Roboticvisual servoing and robotic assembly tasks. IEEE Rob. AutomationMag., pages 23–31, June 1996.

[60] B. J. Nelson, Y. Zhou, and B. Vikramaditya. Integrating forceand vision feedback for microassembly. In Proc. of SPIE: Mi-crorobotics and Microsystems Fabrication, pages 30–41, October1997. Pittsburgh, PA.

[61] B. J. Nelson. Assimilating disparate sensory feedback withinvirtual environments for telerobotic systems. Robotics and Au-tonomous Systems, 36:1–10, 2001.

[62] N. P. Papanikolopoulos, P. K. Khosla, and T. Kanade. Visualtracking of a moving target by a camera mounted on a robot: Acombination of control and vision. IEEE Trans. on Robotics andAutomation, 9(1):14–35, February 1993.

[63] N. P. Papanikolopoulos and P. K. Khosla. Feature based roboticvisual tracking of 3-d translational motion. In Proc. of 30th IEEEconf. on Decision and Control, December 1991. Brighton, UnitedKingdom.

[64] A. Pichler and M. Jagersand. Uncalibrated hybrid force-visionmanipulation. In Int. Conf. on Intelligent Robots and Systems,pages 1866–1871, Takamatsu, Japan, 2000.

[65] J. A. Piepmeier, G. V. McMurray, and H. Lipkin. A dynamicjacobian estimation method for uncalibrated visual servoing. InProc. of Int. Conf. on Advanced Intelligent Mechatronics, pages944–949, Atlanta, GA, 1999.

[66] M. Raibert and J. J. Craig. Hybrid position/force control ofmanipulators. Trans. ASME J. Dyn. Systems Meas. Control,102:126–133, 1981.

[67] A. C. Sanderson and L. E. Weiss. Adaptive Visual Servo Control ofRobots, pages 107–116. A. Pugh, editor, Robot Vision, Int. Trendsin Manufactoring Technology. Springer-Verlag, 1983.

153

Page 168: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

[68] M. Sato and K. Aggerwal. Estimation of position and orientationfrom image sequence of a circle. In IEEE Int. Conf. on Roboticsand Automation, pages 2252–2257, Albuquerque, NM, 1997.

[69] P. Schillebeeckx and S. Vandenberk. Contourvolgen van een robotdoor het combineren van kracht- en visiecontrole, 1998.

[70] J. Shen and S. Castan. An optimal linear operator for step edgedetection. Graphical Models and Image Processing, 54(2):112–133,March 1992.

[71] B. Siciliano and L. Villani. Robot Force Control. The KluwerInt. series in engineering and computer science. Kluwer AcademicPublishers, january 2000. ISBN 0-7923-7733-8.

[72] C. E. Smith, S. A. Brandt, and N. P. Papanikolopoulos. Eye-in-hand robotic tasks in uncalibrated environments. IEEE Trans. onRobotics and Automation, 13(6):903–914, December 1997.

[73] S. Smith and J. Brady. Susan - a new approach to low level imageprocessing. Int. J. Computer Vision, 23(1):45–78, May 1997.

[74] Y. Song, W. Tianmiao, W. Jun, Y. Fenglei, and Z. Qixian. Sharedcontrol in intelligent arm/hand teleoperated system. In IEEE Int.Conf. on Robotics and Automation, pages 2489–2494, Detroit, MI,1999.

[75] H. Spath. One Dimensial Spline Interpolation Algorithms, pages151–153. A.K.Peters Ltd., Wellesley, Massachusetts, 1995.

[76] M. Tonko, K. Schafer, F. Haimes, and H.-H. Nagel. Towards visu-ally servoed manipulation of car engine parts. In IEEE Int. Conf.on Robotics and Automation, pages 3166–3171, Albuquerque, NM,1997.

[77] R. Tsai and R. Lenz. A new technique for fully autonomous andefficient 3d robotics hand-eye calibration. IEEE Trans. on Roboticsand Automation, pages 345–358, June 1989.

[78] R. Tsai. A versatile camera calibration technique for high-accuracy3d machine vision metrology using off-the-shelf tv cameras andlenses. IEEE J. Rob. Automation, 3(4):323–344, 1987.

154

Page 169: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

[79] T. Tuytelaars, L. Van Gool, L. D‘haeneand, and R. Koch. Match-ing of affinely invariant regions for visual servoing. In IEEE Int.Conf. on Robotics and Automation, pages 1601–1606, Detroit, MI,1999.

[80] P. Van De Poel, W. Witvrouw, H. Bruyninckx, and J. De Schutter.An environment for developing and optimising compliant robotmotion tasks. In 6th Int. Conference on Advanced Robotics, pages713–718, 1993. Tokyo.

[81] L. Van Gool and M. Proesmans. Machine Vision. IMPRO Course2.2. KULeuven, 1995.

[82] S. Van Huffel and J. Vandewalle. The total least squares problem:computational aspects and analysis. SIAM Philadelphia (Pa.),1991.

[83] S. Venkatesha and R. Owens. An energy feature detection scheme.In Proceedings of IEEE Int. Conf. on Image Processing, pages 553–557, September 1989. Singapore.

[84] J. Verbiest and W. Verdonck. Gebruik van visie voor het ver-beteren van krachtgecontroleerd contourvolgen met een robot,1999. Masters thesis 99EP4, Katholieke Universiteit Leuven, Dept.Werktuigkunde, Afd. PMA.

[85] Y. von Collani, M. Ferch, J. Zhang, and A. Knoll. A general learn-ing approach to multisensor based control using statistic indices.In IEEE Int. Conf. on Robotics and Automation, pages 3221–3226,San Francisco, CA, 2000.

[86] Y. von Collani, C. Scheering, J. Zhang, and A. Knoll. A neuro-fuzzy solution for integrated visaul and force control. In IEEE Int.Conf. on Robotics and Automation, pages 2965–2970, Detroit, MI,1999.

[87] Y. von Collani, J. Zhang, and A. Knoll. A neuro-fuzzy solutionfor fine-motion control based on vision and force sensors. In IEEEInt. Conf. on Robotics and Automation, pages 2965–2970, Leuven,Belgium, May 1998.

[88] Q. Wang. Extension to the task frame formalism. In PhD: Pro-gramming of compliant robot motion by human demonstration,

155

Page 170: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Bibliography

pages 107–130. Katholieke Universiteit Leuven, Department ofMechanical Engineering, November 1999.

[89] L. E. Weiss, A. C. Sanderson, and P. C. Neuman. Dynamic sensor-based control of robots with visual feedback. IEEE J. Rob. Au-tomation, 3(5):404–417, october 1987.

[90] S. Wijesoma, D. Wolfe, and R. R.J. Eye-to-hand coordination forvision-guided robot control applications. Int. J. Robotics Research,12(1):65–78, February 1993.

[91] W. Witvrouw. Development of experiments and environment forsensor controlled robot tasks. PhD thesis, Katholieke UniversiteitLeuven, Department of Mechanical Engineering, Belgium, 1996.

[92] P. Wunsch and G. Hirzinger. Real-time visual tracking of 3-dobjects with dynamic handling of occlusion. In IEEE Int. Conf.on Robotics and Automation, Albuquerque, NM, 1997.

[93] D. Xiao et al. Sensor-based hybrid position/force control of a robotmanipulator in an uncalibrated environment. IEEE Transactionon Control Systems Technology, 8(4):635–645, 2000.

[94] D. Xiao, B. K. Gosh, N. Xi, T. J. Tarn, and Z. Yu. Real-timeplanning and control for robot manipulator in unknown workspace.In IFAC, pages 423–428, 1999.

[95] D. Xiao, B. K. Gosh, N. Xi, and T. J. Tarn. Intelligent roboticsmanipulation with hybrid position/force control in an uncalibratedworkspace. In IEEE Int. Conf. on Robotics and Automation, pages1671–1676, Leuven, Belgium, 1998.

[96] T. Yoshikawa. Force control of robot manipulators. In IEEE Int.Conf. on Robotics and Automation, pages 1220–1225, San Fran-cisco, CA, 2000.

[97] Y. Zhou, B. J. Nelson, and B. Vikramaditya. Fusing force andvision feedback for micromanipulation. In IEEE Int. Conf. onRobotics and Automation, pages 1220–1225, Leuven, Belgium,1998.

156

Page 171: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

Appendix A

Derivations

A.1 Frames and homogeneous transformations

The position and orientation of (a right-handed) frame a expressed inframe b is:

abH =

[abR

abT

000 1

](A.1)

with abR a 3× 3 rotation matrix, indicating the orientation of frame a

w.r.t. frame b and abT a 3× 1 translation vector indicating the position

of the origin of frame a in frame b.The position of a point in frame b, being the 3 × 1 vector bP con-

taining x, y and z coordinates, given the position of this point in framea being aP follows from

bP = abR aP + a

bT ⇔ bP = abH aP (A.2)

with P the homogeneous coordinates of the point P or

P =

xyz

⇔ P =

xyz1

. (A.3)

abH thus represents both the homogeneous transformation from framea to frame b and the pose of frame a in frame b.

Any arbitrary rotation R can be represented by three angles[αx, αy, αz]′ indicating the successive rotations about z, y and x-directions:

R = Rz(αz) Ry(αy) Rx(αx) (A.4)

157

Page 172: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

A Derivations

with Ri(α) the rotation matrix for a rotation of α radians about direc-tion i or in particular:

Rz(αz) =

cos(αz) − sin(αz) 0sin(αz) cos(αz) 0

0 0 1

,

Ry(αy) =

cos(αy) 0 + sin(αy)0 1 0

− sin(αy) 0 cos(αy)

,

Rx(αx) =

1 0 00 cos(αx) − sin(αx)0 sin(αx) cos(αx)

.

(A.5)

The computation of the rotation matrix from the rotation angles usingequations A.4 and A.5 then comes downs to (with c = cos and s = sin):

R =

c(αz)c(αy) c(αz)s(αy)s(αx)− s(αz)c(αx) c(αz)s(αy)c(αx) + s(αz)s(αx)s(αz)c(αy) s(αz)s(αy)s(αx) + c(αz)c(αx) s(αz)s(αy)c(αx)− c(αz)s(αx)−s(αy) c(αy)s(αx) c(αy)c(αx)

(A.6)

A.2 Screw transformations

Let aV be a 6×1 generalized velocity vector (also called twist or velocityscrew), containing 3 axial and 3 polar velocities, expressed in frame a:

aV = [ avx, avy, avz, aωx, aωy, aωz]′. (A.7)

Let abS

v be the velocity screw transformation for the recalculation of avelocity screw in frame a to its equivalent1 velocity screw in frame b:

bV = abS

vaV. (A.8)

Using the relative pose of frame a w.r.t. frame b, abH, as introduced in

section A.1, the velocity screw transformation then corresponds to

abS

v =[

abR

abR

abT×

Zeros abR

](A.9)

1for an identical rigid body motion

158

Page 173: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

A.3 Denavit-Hartengberg for KUKA361

with Zeros a 3× 3 matrix with all elements equal to zero and with

abT× =

0 −abT z

abT y

abT z 0 a

bT x−a

bT yabT x 0

. (A.10)

Equations A.8 and A.9 can also be subdivided intobvx

bvy

bvz

= abR

avx

avy

avz

+ abT ×

aωx

aωy

aωz

(A.11)

and bωx

bωy

bωz

= abR

aωx

aωy

aωz

. (A.12)

Let aW be a 6 × 1 generalized force vector (also called wrench orforce screw), containing 3 forces and 3 moments, expressed in frame a:

aW = [ aFx, aFy, aFz, aMx, aMy, aMz]′. (A.13)

Let abS

f be the force screw transformation for the recalculation of awrench in frame a to its equivalent2 wrench in frame b:

bW = abS

faW. (A.14)

The force screw transformation then corresponds to

abS

f =[

abR Zeros

abR

abT× a

bR

](A.15)

with Zeros and abT× as defined above.

A.3 A Denavit-Hartengberg representation ofthe KUKA361 robot

The Denavit-Hartenberg (DH) representation [34] is an easy formalismto describe the kinematic structure of a serial manipulator. Each link

2for an identical force equilibrium

159

Page 174: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

A Derivations

i 1 2 3 4 5a 5b 6αi −π/2 0 π/2 π/6 −π/3 π/6 0ai 0 480 0 0 0 0 0di 1020 0 0 645 0 0 120θi q1 q2 − π/2 q3 + π/2 q4 + π/2 q5 −q5 q6 − π/2

Table A.1: DH parameters for KUKA361 manipulator with distances inmm and angles in rad

of the manipulator is associated with a frame, going from the absoluteworld frame (frame 0) to the end effector frame (frame 6 for a 6 degreeof freedom (DOF) manipulator). The transition of frame (i − 1) toframe (i) corresponds to either a translation DOF or a rotation DOFin the connection of link (i − 1) and link (i). The KUKA361 robothas 6 rotation DOF. Using the DH representation the pose of the endeffector (frame) is calculated as a function of the robot joint valuesq = [q1 . . . q6]′. This is called the forward kinematic transformationFWK(q).

Formalism : The DH formalism describes the transition fromframe (i− 1) to frame (i) with four parameters:

αi the angle between the zi−1 and the zi-axes about the xi-axis;

ai the distance between the zi−1 and the zi-axes along the xi-axis;

di the distance from the origin of frame i− 1 to the xi-axis alongthe zi−1-axis;

θi the angle between the xi−1 and the xi-axes about the zi−1-axis,which normally includes the variable joint value qi.

Frame (i) thus follows from frame (i− 1) by the following four steps:

1. Rotate frame (i− 1) over the angle θi around the zi−1-axis.

2. Translate the frame along the zi−1-axis by the distance di.

3. Translate the frame along the xi-axis by the distance ai.

4. Rotate the frame over the angle αi around the xi-axis.

160

Page 175: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

A.3 Denavit-Hartengberg for KUKA361

These four steps correspond to the homogeneous transformation ii−1H,

being

ii−1H =

cos θi − sin θi cos αi sin θi sinαi ai cos θi

sin θi cos θi cos αi − cos θi sinαi ai sin θi

0 sinαi cos αi di

0 0 0 1

. (A.16)

A recursive application of this transformation results inee

absFWK(q) = 1absH(q1)21H(q2) . . . ee

5H(q6). (A.17)

Figure A.1: DH frames of KUKA361 manipulator

161

Page 176: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

A Derivations

Figure A.2: Wrist of KUKA361

KUKA361 : Figure A.1 gives an overview of the DH frames forthe KUKA361 manipulator. The figure is drawn with all joint values[q1 . . . q6]′ equal to zero. Table A.1 summarizes the DH parameters forthe KUKA3613. Because joint 5 consists of two moving parts, as shownin figure A.2, one additional frame is needed, hereby splitting frame 5in a and b. The rotations around the axes 5a and 5b in figure A.2,however, are always equal with opposite signs.

A.4 Point symmetry with relative area method

Assume that the object is dark and the environment is bright. Assumefurther that the object lies in the left part of the image. Let I(u, v) bethe intensity or grey-level value of pixel (u, v), with I = 0 for a blackpixel and I = 255 for a white one. Define the functions Obj(u, v) andEnv(u, v), which indicate whether pixel (u, v) belongs to the object orto the environment respectively, as

Obj(u, v) =

{1 if I(u, v) < threshold,

0 if I(u, v) ≥ threshold,(A.18)

and

Env(u, v) =

{0 if I(u, v) < threshold,

1 if I(u, v) ≥ threshold.(A.19)

Define the total relative area parameter Tra for an xs by ys image as

3Practical note: to get an identical robot position for a given set of joint valueswith this representation as with COMRADE, the joint values (q4, q5, q6) have to beinverted.

162

Page 177: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

A.4 Point symmetry with relative area method

Tra =

xs,ys∑u,v=1,1

Obj(u, v)−xs,ys∑

u,v=1,1

Env(u, v)

(A.20)

Then, the distance Cxp, defined in figure 3.21, is given by (equivalentto equation 3.14):

Cxp =1

2ysTra (A.21)

Define the y-signed relative area parameter Y Sra as

Y Sra =xs∑

u=1

ys/2∑v=1

Obj(u, v)−ys∑

v= ys2

+1

Obj(u, v)

. (A.22)

Then, the angle Cθ, defined in figure 3.21, is (equivalent to equation3.15)

Cθ = − arctan[

4y2

s

Y Sra

]. (A.23)

Finally, define the shifted x-signed relative area parameter SXSra as

SXSra = −ys∑

v=1

floor(xs2

+Cxp)∑u=1

Obj(u, v)−xs∑

floor(xs2

+Cxp)+1

Obj(u, v)

.

(A.24)

The straightness of a vertical contour in the image can now bechecked by verifying the point symmetry of the image w.r.t. a framecentered at the contour. This is a necessary but not sufficient condition.

Consider the two situations shown in figure A.3. The absolute valueof parameter SXSra is equal to the area of ACDEF . The absolutevalue of Y Sra is equal to the area of ABC. The sum of both areas isequal to the area DEGH. For a contour with positive angle (figureA.3-left), both parameters SXSra and Y Sra are negative, thus

SXSra + Y Sra = −(xs + 2 Cxp) ys

2. (A.25)

163

Page 178: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

A Derivations

Figure A.3: Illustration for point symmetry calculation for images of acontour with positive angle (left) and negative angle (right)

For a contour with negative angle (figure A.3-right), SXSra is negativeand Y Sra is positive, thus

SXSra − Y Sra = −(xs + 2 Cxp) ys

2. (A.26)

Both equations A.25 and A.26 together give the following theorem:

If an image with a vertical contour is point symmetric, then

$$ SXS_{ra} - |YS_{ra}| = -\frac{(x_s + 2\,{}^{C}x_p)\,y_s}{2}. \qquad (A.27) $$

This equation is equivalent to equation 3.16. Due to rounding-off errors and the spatial quantization of the image, equation A.27 is hardly ever fulfilled exactly. Hence, the image is regarded as point symmetric if

$$ \left|\, SXS_{ra} - |YS_{ra}| + \frac{(x_s + 2\,{}^{C}x_p)\,y_s}{2} \,\right| < \text{threshold}_1. \qquad (A.28) $$

A good value for threshold1 is 2 ys.
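To make the procedure concrete, a minimal C sketch of this point symmetry test is given below. It is an illustration only, not the thesis implementation: it assumes an 8-bit grey-level image stored row-major, a dark object on a bright environment, and the function name is hypothetical.

#include <math.h>
#include <stdlib.h>

/* Point symmetry test for a vertical contour with the relative area
   method (equations A.18 to A.28). Sketch only: img is an 8-bit
   grey-level image of size xs*ys, stored row-major; a pixel belongs
   to the (dark) object when its intensity lies below threshold.      */
int is_point_symmetric(const unsigned char *img, int xs, int ys, int threshold)
{
    long Tra = 0, YSra = 0, SXSra = 0;
    int u, v;

    for (v = 0; v < ys; v++)
        for (u = 0; u < xs; u++) {
            int obj = img[v * xs + u] < threshold;   /* Obj(u,v), eq. A.18 */
            Tra  += obj ? 1 : -1;                    /* eq. A.20           */
            YSra += (v < ys / 2) ? obj : -obj;       /* eq. A.22           */
        }

    double Cxp = Tra / (2.0 * ys);                   /* eq. A.21           */
    int split  = (int)floor(xs / 2.0 + Cxp);         /* column split, A.24 */

    for (v = 0; v < ys; v++)
        for (u = 0; u < xs; u++) {
            int obj = img[v * xs + u] < threshold;
            SXSra += (u < split) ? -obj : obj;       /* eq. A.24           */
        }

    /* point symmetry check of equation A.28 with threshold1 = 2*ys        */
    double lhs = SXSra - labs(YSra) + (xs + 2.0 * Cxp) * ys / 2.0;
    return fabs(lhs) < 2.0 * ys;
}

A positive result only indicates possible straightness; as stated above, the test is necessary but not sufficient.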

A.5 Some relative area algorithms

This section briefly describes some relative area algorithms used to process the image that have not been discussed previously.


Figure A.4: Relative area parameters: XSra, Y Sra and XY Sra

Assume that the object (e.g. a circle, ellipsoid or rectangle) is dark and that the environment is bright. Let I(u, v) be the (image) intensity function as defined in the previous section. Let (xs, ys) be the image size. Define the function Obj(u, v), which indicates whether the pixel (u, v) belongs to the object, by equation A.18.

Define the image features XSra and YSra, which are the x-signed and y-signed relative area parameters shown in figure A.4, as

$$ XS_{ra} = -\sum_{v=1}^{y_s}\left[\;\sum_{u=1}^{x_s/2} Obj(u,v) \;-\; \sum_{u=x_s/2+1}^{x_s} Obj(u,v)\right] \qquad (A.29) $$

and

$$ YS_{ra} = \sum_{u=1}^{x_s}\left[\;\sum_{v=1}^{y_s/2} Obj(u,v) \;-\; \sum_{v=y_s/2+1}^{y_s} Obj(u,v)\right]. \qquad (A.30) $$

Then the distances objx and objy, which are the coordinates [pix] of the center of the object, as shown in figure A.4, are proportional to XSra and YSra respectively. If the image feature XSra is zero, then objx will also be zero, and likewise for YSra and objy.

Define the image feature XYSra, which gives the difference in object pixels lying on the one hand in the first and third quadrants and on the other hand in the second and fourth quadrants of the image, as

$$ XYS_{ra} = -\sum_{u=1}^{x_s/2}\sum_{v=1}^{y_s/2} Obj(u,v) \;-\; \sum_{u=x_s/2+1}^{x_s}\sum_{v=y_s/2+1}^{y_s} Obj(u,v) \;+\; \sum_{u=1}^{x_s/2}\sum_{v=y_s/2+1}^{y_s} Obj(u,v) \;+\; \sum_{u=x_s/2+1}^{x_s}\sum_{v=1}^{y_s/2} Obj(u,v). \qquad (A.31) $$

Then the angle objθz, as shown in figure A.4, is proportional to the image feature parameter XYSra. If the image feature XYSra is zero, then objθz will also be zero.
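As an illustration, the three features can be accumulated in a single pass over a grey-level image, as in the following C sketch. The function and variable names are illustrative and not taken from the DSP code, and the proportionality constants towards objx, objy and objθz are omitted.

/* Relative area image features of equations A.29-A.31. Sketch only:
   img is an 8-bit grey-level image of size xs*ys, stored row-major;
   a pixel belongs to the (dark) object when its intensity < threshold. */
void relative_area_features(const unsigned char *img, int xs, int ys,
                            int threshold,
                            long *XSra, long *YSra, long *XYSra)
{
    *XSra = *YSra = *XYSra = 0;

    for (int v = 0; v < ys; v++)
        for (int u = 0; u < xs; u++) {
            if (img[v * xs + u] >= threshold) continue;  /* Obj(u,v) = 0   */
            int left = (u < xs / 2);
            int top  = (v < ys / 2);

            *XSra  += left ? -1 : 1;             /* eq. A.29               */
            *YSra  += top  ?  1 : -1;            /* eq. A.30               */
            *XYSra += (left == top) ? -1 : 1;    /* eq. A.31 (quadrants)   */
        }
}

XSra and YSra vanish when the object is centered, and XYSra vanishes when the object is aligned with the image axes, in line with the statements above.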

A.6 2D arc following simulation

Consider a task consisting of a rotating task frame following an arc-shaped contour. The objectives for this task are:

• move with constant ty-velocity;

• keep the task frame tangent to the contour;

• minimize the position error between the task frame origin and the contour along the tx-direction.

This section describes the SIMULINK/MATLAB® control scheme, given in figure A.5, including the system, the sensor and the controller. This model simulates the dynamic behaviour of the task frame for the arc following task.

The system: The system to be controlled is the 2D pose of the task frame. The input consists of the commanded relative velocities (${}^t v^c_x$, ${}^t v^c_y$ and ${}^t\omega^c_z$) of the task frame. The output of the system is the 2D absolute pose (${}^t_{abs}x(t)$, ${}^t_{abs}y(t)$, ${}^t_{abs}\theta_z(t)$) of the task frame at any time instant t. The computation of the absolute pose from the relative velocities is equal to the multiplication of the input velocity vector with a rotation matrix, followed by an integration from velocity to position:

$$ s \begin{bmatrix} {}^t_{abs}x \\ {}^t_{abs}y \\ {}^t_{abs}\theta_z \end{bmatrix} = \begin{bmatrix} \cos({}^t_{abs}\theta) & -\sin({}^t_{abs}\theta) & 0 \\ \sin({}^t_{abs}\theta) & \cos({}^t_{abs}\theta) & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} {}^t v^c_x \\ {}^t v^c_y \\ {}^t \omega^c_z \end{bmatrix}. \qquad (A.32) $$


Figure A.5: Simulation scheme for computing the dynamic behaviour of the tracking errors in following an arc with a rotating task frame

Figure A.6 illustrates the setup.

Figure A.6: Task frame pose w.r.t. the absolute frame

The sensor: The sensor 'measures' the position and orientation errors (∆x and ∆θ) of the contour relative to the task frame. For the arc-shaped contour of figure A.7, these errors are

$$ \begin{cases} \Delta x = L\cos\gamma - \sqrt{r^2 - L^2\sin^2\gamma},\\[2pt] \Delta\theta = -\arcsin\!\left(\dfrac{L\sin\gamma}{r}\right), \end{cases} \qquad (A.33) $$

with

$$ \begin{cases} L = \sqrt{{}^t_{abs}y^2 + (r - {}^t_{abs}x)^2},\\[2pt] \gamma = \dfrac{\pi}{2} + {}^t_{abs}\theta - \arctan\!\left(\dfrac{r - {}^t_{abs}x}{{}^t_{abs}y}\right). \end{cases} \qquad (A.34) $$

γ is the angle between the tx-axis (line AD) and the line AC, connecting the center of the arc with the origin of the task frame, as shown in figure A.7; r is the radius of the arc; L is the distance between the center of the arc (point C) and the origin of the task frame (point A).

Figure A.7: Illustration for the computation of the position and tracking errors, ∆x and ∆θ, of the arc relative to an arbitrary task frame pose (left) and computation of γ (right)


The sensor includes a time delay of 40 ms and a zero order hold circuit, as shown in the detail of the sensor in figure A.5, to simulate the vision sensor, which is a discrete system that has to operate at the video frame rate of 25 Hz.

The controller: The equations for the proportional control of the (tx) position and orientation of the task frame, possibly with feedforward, are

$$ \begin{cases} {}^t\omega^c_z = K_\theta\,\Delta\theta + {}^t\omega^{ff}_z,\\ {}^t v^c_x = K_x\,\Delta x. \end{cases} \qquad (A.35) $$

Parameters: The simulation allows for variation in several parameters such as: the control constants ($K_x$ and $K_\theta$), the begin pose of the task frame (${}^t_{abs}x(0)$, ${}^t_{abs}y(0)$, ${}^t_{abs}\theta_z(0)$), the radius of the arc ($r$), the forward velocity (${}^t v^c_y$) and the use of feedforward control.

Some results: The actual path of the task frame in steady state follows from three simultaneous, fixed velocities, being ${}^t v_y$, ${}^t v^{ss}_x$ and ${}^t\omega^{ss}$:

$$ {}^t v^{ss}_x = K_x\,\Delta x_{ss} \approx \frac{{}^t v_y^2}{r\,K_\theta} \qquad \text{and} \qquad {}^t\omega^{ss} = K_\theta\,\Delta\theta_{ss} = \frac{-\,{}^t v_y}{r}. \qquad (A.36) $$

This results in a task frame motion which consists of an instantaneous rotation (${}^t\omega^{ss}$) and a translation (${}^t v^{ss}_x$). The center of the rotation lies at a distance r (which is also the radius of the arc) along the tx-axis. At the same time, the task frame translates in the tx-direction. This implies that the center of the instantaneous rotation itself moves on a circular path with radius $r_o$ around the center of the arc. $r_o$ equals $-\,{}^t v^{ss}_x/{}^t\omega^{ss} = {}^t v_y/K_\theta$. See figure A.8. Other simulation results are given in figures 3.10 and 3.11 in section 3.2.


Figure A.8: The center of the instantaneous rotation of the task frame itself follows a circular path around the center of the arc; the settings are Kx = 1 [1/sec], Kθ = 1 [1/sec], tvy = 20 [mm/sec] and no feedforward


Appendix B

Contour fitting

This appendix presents several contour fitting models and evaluates them. Important evaluation criteria are the accuracy and robustness of the fitted function w.r.t. the real contour, the computational burden in obtaining the model parameters and the suitability of the model as a basis for the control data computation. The input for the contour fitting is the set of n contour points, given by $[X_p, Y_p]$ with $X_p = [x^1_p \ldots x^n_p]'$ and $Y_p = [y^1_p \ldots y^n_p]'$. Normally, the contour lies 'top-down' in the image. Hence it is logical to represent the contour with x as a function of y. For reasons of convenience the index p is left out for the moment. The following contour fitting models are described and discussed in this appendix:

• a set of tangents,

• a full second order function (circle, ellipse, parabola or hyperbola),

• an interpolating polynomial,

• a third order non-interpolating polynomial ,

• interpolating cubic splines and

• the parameterized version of the latter two.

All these models, with some variants, are implemented in a Contour Fitting Graphical User Interface (CFGUI), under MATLAB®, enabling a user friendly qualitative comparison of the contour fitting models. Section B.8 briefly describes the functionality of this CFGUI.


B.1 The tangent model

The simplest contour model is a line:

x = ay + b. (B.1)

Figure B.1: Tangent to contour

The model parameters a and b follow from a least squares fit of the model through the set of contour points. If the contour points lie closely together, this fitted line will approximate the tangent to the contour at the center of the data set. Figure B.1 gives an example. The main advantage of this model is its simplicity. The contour pose (position Cxp and orientation Cθ) for one single point on the contour is directly given by the model. When the contour pose is given for consecutive points on the contour, the curvature κ can be computed as the change in orientation Cθ as a function of the arc length s:

$$ \kappa = \frac{d\,{}^{C}\theta}{ds}. \qquad (B.2) $$

In practice, an approximation for κ follows from the least squares solution of a set of m first order equations

$$ {}^{C}\theta(i) = \kappa \cdot s(i) + cte, \qquad i = 1 \ldots m. \qquad (B.3) $$

The m (e.g. 9) pairs (Cθ(i), s(i)) lie symmetrically around the position of interest. Figure B.2 gives an example.

Figure B.2: Tangent to contour in nine points (left); corresponding least squares solution for the curvature computation (right)

The least squares method has an averaging effect on the computed curvature. This is an important advantage. The curvature is, after all, proportional to the second derivative of the position. The curvature computation is thus a very noise sensitive operation. Using a least squares solution avoids the buildup of noise. The computed value is the average curvature over a finite line segment. This also implies a levelling out of the curvature profile at contour segments with extreme or sudden changes in curvature. The optimal arc length over which the average curvature is computed (see figure B.2) thus follows from a trade-off between noise reduction and signal deformation.
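A compact C sketch of this twice-applied least squares procedure is shown below. The function names are illustrative only; the actual DSP implementation relies on the Meschach library mentioned in appendix E.

/* Least squares fit of the line x = a*y + b (equation B.1) through n
   contour points.                                                       */
static void fit_line(const double *x, const double *y, int n,
                     double *a, double *b)
{
    double Sx = 0, Sy = 0, Syy = 0, Sxy = 0;
    for (int i = 0; i < n; i++) {
        Sx += x[i]; Sy += y[i]; Syy += y[i] * y[i]; Sxy += x[i] * y[i];
    }
    double det = n * Syy - Sy * Sy;
    *a = (n * Sxy - Sy * Sx) / det;     /* tangent slope */
    *b = (Syy * Sx - Sy * Sxy) / det;   /* offset        */
}

/* Average curvature from m pairs (theta[i], s[i]) of contour orientation
   versus arc length: the slope kappa of the model of equation B.3.      */
static double fit_curvature(const double *theta, const double *s, int m)
{
    double kappa, cte;
    fit_line(theta, s, m, &kappa, &cte);   /* theta = kappa*s + cte */
    return kappa;
}

The same line fit is thus reused twice: once on the (x, y) contour points to obtain the tangent, and once on the (Cθ, s) pairs to obtain the average curvature over the chosen window.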

B.2 Full second order function

Often used as contour models (e.g. [30, 44, 68]) are circles and ellipses. They are described by the following equation (in the variables x, y):

$$ ax^2 + bxy + cy^2 + dx + ey + 1 = 0. \qquad (B.4) $$

Depending on the values of the parameter set (a, b, c, d, e), this equation may also represent a parabola or hyperbola. If the data set of contour points is known to lie on a circle or ellipse, this equation will result in a good (least squares) fit of the model and likewise of the control data (see figure B.3-left). If, on the other hand, the contour contains a deflection point (or is not circular), the resulting fit may be worthless. Fitting the contour points by a full second order function lacks robustness. Figure B.3-right gives an example: a good (total least squares) hyperbolic fit deteriorates completely when only one point is changed by a pixel. Hence, with arbitrary contours a full second order contour model is no option.

Figure B.3: Least squares fits of a full second order function through selected contour points: (left) good elliptic fit; (right) changing only one point (away from the center of the image) by a pixel causes a good hyperbolic fit to deteriorate into a useless one
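For completeness, the least squares estimate of the parameter set (a, b, c, d, e) in equation B.4 reduces to a linear problem, sketched below with a plain normal equation solve. This is an illustration under that assumption; the text above also mentions a total least squares variant, which is not shown here.

#include <math.h>

/* Least squares fit of the full second order contour model of
   equation B.4: a*x^2 + b*x*y + c*y^2 + d*x + e*y + 1 = 0.
   Sketch only: solves the normal equations of the overdetermined
   system M*p = -1 with plain Gaussian elimination.                 */
static int fit_conic(const double *x, const double *y, int n, double p[5])
{
    double A[5][5] = {{0}}, rhs[5] = {0};

    for (int k = 0; k < n; k++) {
        double row[5] = { x[k]*x[k], x[k]*y[k], y[k]*y[k], x[k], y[k] };
        for (int i = 0; i < 5; i++) {
            rhs[i] -= row[i];                   /* M' * (-1) */
            for (int j = 0; j < 5; j++)
                A[i][j] += row[i] * row[j];     /* M' * M    */
        }
    }
    for (int i = 0; i < 5; i++) {               /* elimination with pivoting */
        int piv = i;
        for (int r = i + 1; r < 5; r++)
            if (fabs(A[r][i]) > fabs(A[piv][i])) piv = r;
        if (fabs(A[piv][i]) < 1e-12) return -1; /* singular / degenerate     */
        for (int j = 0; j < 5; j++) { double t = A[i][j]; A[i][j] = A[piv][j]; A[piv][j] = t; }
        { double t = rhs[i]; rhs[i] = rhs[piv]; rhs[piv] = t; }
        for (int r = i + 1; r < 5; r++) {
            double f = A[r][i] / A[i][i];
            for (int j = i; j < 5; j++) A[r][j] -= f * A[i][j];
            rhs[r] -= f * rhs[i];
        }
    }
    for (int i = 4; i >= 0; i--) {              /* back substitution         */
        for (int j = i + 1; j < 5; j++) rhs[i] -= A[i][j] * p[j];
        p[i] = rhs[i] / A[i][i];
    }
    return 0;
}

As figure B.3 shows, even a numerically correct solution of this system can be useless for arbitrary contours, which is why the model is rejected.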

B.3 Interpolating polynomial

If the contour points are correct, the analytical curve must pass through each point. Interpolating polynomials fulfill this requirement. They are described by:

$$ x = a_{n-1}y^{n-1} + a_{n-2}y^{n-2} + \ldots + a_1 y + a_0 \qquad (B.5) $$

with n the number of points. Interpolating polynomials with n > 3, however, oscillate. They neither preserve the sign, nor the monotonicity, nor the convexity of the contour data set. They are therefore also rejected.

B.4 Third order non-interpolating polynomial

Figure B.4: Good (left), inaccurate (middle) and wrong (right) 3rd order polynomial fit

Non-interpolating polynomials remedy most of the previously mentioned disadvantages of interpolating polynomials. They use more points, which are fitted closely but not passed through exactly. The order of the polynomial is commonly limited to three:

$$ x = ay^3 + by^2 + cy + d. \qquad (B.6) $$

The parameters (a, b, c, d) again follow from a least squares fit. Advantageous to this method is the smoothing effect of the least squares fit, which reduces oscillations and suppresses noise. Moreover, the order of the contour points is not important. The control data are easily derived: e.g. the curvature of the contour at y = 0 is κ = −2b/(1 + c²)^{3/2}. The accuracy of the polynomial fit, however, is not satisfactory, as illustrated in figure B.4. With increasing curvature, the accuracy of the polynomial fit decreases. Figure B.7 gives an example of the computed curvature profile with a polynomial fit through a set of points lying on an arc-shaped contour.

B.5 Interpolating cubic splines

Interpolating¹ splines (or p-splines) are a set of n−1 functions $s_k$ which are locally defined over the range $[y_k, y_{k+1}]$, for k = 1, . . . , n−1. They are continuous in the contour points, which are called the knots, and satisfy the interpolation condition

$$ \begin{cases} s_k(y_k) = x_k,\\ s_k(y_{k+1}) = x_{k+1}, \end{cases} \qquad k = 1, \ldots, n-1. \qquad (B.7) $$

¹ Non-interpolating splines (e.g. B-splines) are not considered since the knots are assumed to be correct contour points and thus need to be interpolated.

For C2-continuity, the segments need to be cubic (3rd order). Higher order continuity is often inadequate due to the previously mentioned undesirable characteristics of higher order polynomials. We therefore restrict this section to the discussion of cubic splines. Numerous cubic spline representations exist, differing in the way the model parameters are determined and in the extra conditions imposed on the spline. Each cubic spline segment k corresponds to

$$ x_k = a_k(y - y_k)^3 + b_k(y - y_k)^2 + c_k(y - y_k) + d_k. \qquad (B.8) $$

For natural cubic splines the segments are continuous in their first and second derivatives at the knots, or

$$ s'_{k-1}(y_k) = s'_k(y_k), \quad s''_{k-1}(y_k) = s''_k(y_k), \qquad k = 2, \ldots, n-1. \qquad (B.9) $$

Choosing the derivatives in the first and the last knot, $s'_1(y_1)$ and $s'_{n-1}(y_n)$, determines all other parameters.

Hermite cubic splines interpolate the knots (xk, yk) with a given slope (x′k, yk). They are C1-continuous in the knots. If the slopes (or tangents) in the knots are unknown, they need to be approximated or determined heuristically. H. Spath [75] describes several methods to determine these slopes, one of which aims at an optimally smoothed spline. To this end, the slopes dk in the knots are determined by a first order approximation of the derivative: dk = (xk+1 − xk)/(yk+1 − yk), and adjusted to minimize oscillations based on the following conditions:

$$ \begin{array}{ll} x'_k = 0 & \text{if } d_{k-1}\, d_k < 0,\\ x'_k = x'_{k+1} = 0 & \text{if } d_k = 0,\\ \operatorname{sign}(x'_k) = \operatorname{sign}(x'_{k+1}) = \operatorname{sign}(d_k). & \end{array} \qquad (B.10) $$

This results in a smooth contour provided there is monotonicity in one of the contour coordinates.


Figure B.5: Natural cubic splines

Figure B.6: Smoothed Hermite cubic splines

Figure B.7: Computed curvature versus arc length according to three fitting models for a set of contour points lying on a circle with radius 150 pixels; the exact curvature is 0.00667 [1/pix]

Figures B.5 and B.6 give some examples. In general, a cubic spline fit gives a close approximation of the real contour (given that the contour points are monotonic), with a slightly better fit for Hermite cubic splines than for natural cubic splines. This automatically results in adequate pose computation. The curvature profile of the spline contour, however, shows unacceptable oscillations, as illustrated in figure B.7. The computed curvature differs significantly from the known curvature of the circular (testing) contour. This is partly due to the presence of a second derivative in the calculation of the curvature κ:

$$ \kappa = \frac{\dfrac{d^2x}{dy^2}}{\left[1 + \left(\dfrac{dx}{dy}\right)^2\right]^{3/2}} \qquad (B.11) $$

and partly inherent to the very nature of interpolating splines. For reasons of comparison, figure B.7 also shows the computed curvature versus arc length resulting from a 3rd order polynomial fit and from the tangent model. Clearly the tangent model gives the best results.

B.6 Parameterized representation

Until now x is expressed as a function of y. For interpolating methods, y then needs to be monotonically increasing (or decreasing), which is not necessarily the case for an arbitrary contour.

The parameterized representation is introduced to relieve the necessity of monotonicity in x or y. In a parameterized representation both x and y are expressed as a function of a monotonically increasing parameter w:

$$ x = \xi(w), \qquad y = \eta(w). \qquad (B.12) $$

The functions ξ and η are determined according to the previously mentioned methods, being either a non-interpolating 3rd order polynomial or cubic splines. The simplest choice for w is w = 0, . . . , n−1. If, on the other hand, w is chosen as

$$ w_1 = 0, \qquad w_{k+1} = w_k + \sqrt{(\Delta x_k)^2 + (\Delta y_k)^2}, \qquad k = 1, \ldots, n-1, \qquad (B.13) $$

then w is a linear approximation of the arc length along the contour and the resulting fit is arc length parameterized.
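A small C sketch of this parameter choice (equation B.13) is given below; the function name is illustrative only.

#include <math.h>

/* Chord length parameterization of equation B.13: w[0] = 0 and
   w[k] = w[k-1] + sqrt((dx)^2 + (dy)^2), so that w is a linear
   approximation of the arc length along the contour.             */
static void chord_length_parameter(const double *x, const double *y,
                                   int n, double *w)
{
    w[0] = 0.0;
    for (int k = 1; k < n; k++) {
        double dx = x[k] - x[k - 1];
        double dy = y[k] - y[k - 1];
        w[k] = w[k - 1] + sqrt(dx * dx + dy * dy);
    }
}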

In general, a parameterized fit approximates the real contour better than a similar non-parameterized fit. This is especially the case for a parameterized 3rd order polynomial. Figure B.8 again gives some examples, this time with a non-monotonically increasing contour. The curvature profile of the parameterized models, analytically computed as

$$ \kappa = \frac{\dfrac{d^2\xi}{dw^2}\dfrac{d\eta}{dw} - \dfrac{d^2\eta}{dw^2}\dfrac{d\xi}{dw}}{\left[\left(\dfrac{d\xi}{dw}\right)^2 + \left(\dfrac{d\eta}{dw}\right)^2\right]^{3/2}}, \qquad (B.14) $$

is shown in figure B.9. The conclusions are similar to those for the non-parameterized cases. The curvature profiles of the spline models still show unacceptable oscillatory behaviour. The parameterized 3rd order polynomial gives a fairly good (computed) curvature profile. This figure, however, gives the best result ever achieved with a parameterized 3rd order polynomial (and is only that good for the given set of contour points). If the curvature increases (as is the case in figure 3.32 in section 3.8), the resulting computed curvature profile is not that good.

Figure B.8: Examples of parameterized fits for a non-monotonically increasing contour; (left) parameterized 3rd order polynomial; (right) parameterized Hermite cubic splines

Figure B.9: Computed curvature versus arc length according to four fitting models for a set of contour points lying on a circle with radius 150 pixels; the exact curvature is 0.00667 [1/pix]

B.7 Conclusion

The most important criterion in the comparison of the different proposed contour fitting models is clearly the accuracy of the contour curvature computation. Only the tangent model gives satisfying computed curvature values. Clearly, the twice applied least squares solution in the tangent model case, once for the tangent and once for the curvature, gives better results than any other approach. Most of the proposed models have a good positional accuracy. However, as several examples illustrate, this is no guarantee whatsoever for a good curvature computation.

Section 3.8 already compared the computational implications of the three best contour models: the tangent model, the parameterized 3rd order polynomial and the parameterized Hermite splines. Again, the tangent model is the most efficient one.

The tangent model only uses local contour information. This proves to be important for a good curvature computation, even in the case of a constant curvature profile. Other contour models too, e.g. a parameterized second order polynomial, can be applied locally. Unfortunately, however, they still imply a bigger computational effort than the tangent model and do not give better computed control data.

As is well known, an alternative to the least squares implementation is the use of a KALMAN filter for the estimation of the model and/or control parameters. Both methods will give the same results. A KALMAN filter then provides the option to take the measurement noise into account in the parameter estimation. Given the satisfying results of the tangent model, however, this option is not pertinent and hence not further investigated in our approach.

B.8 Graphical user interface

All the presented models are implemented in MATLAB®. The Contour Fitting Graphical User Interface (CFGUI) gives a user friendly and interactive environment to test and compare the contour fitting models. Figure B.10 shows the layout of the CFGUI. Its functionality includes, among others,

• file management for images and contour data;

• support of PGM, GRY and JPG (8-bit grayscale) images with size up to 512 by 512 pixels;

• on line image grabbing (from DSP);

• interactive buildup of up to 200 contour points, graphically as well as numerically (add, delete, move, sort);

• interactive as well as automatic contour extraction by ISEF edge detection under four directions;

• up to 13 different contour fitting models;

• tangent and curvature computation of the fitted contour;


• postscript print out;

• automatic path generation for the KUKA361 robot from the fitted contour.

Figure B.10: Main window of the Contour Fitting Graphical User Interface


Appendix C

Experimental validation of curvature computation

This appendix investigates experimental results for the curvature computation on a constant curved contour.

A correctly computed contour curvature κ is essential for the success of the curvature based feedforward control (according to equation 6.18). In order to compare the computed curvature with the correct theoretical curvature, a piecewise constant curved test object is used. The dimensions of the test object are given in section F.2. See also figure C.3.

Figure C.1 shows measured and actual curvature versus arc length with a forward velocity of 40 mm/s. If the curvature is constant, the computed value corresponds well with the actual one. At the step transitions, the computed curvature shows a more smoothed profile. In view of the used computation method, this is only logical. After all, the computed curvature is the mean curvature over a finite contour segment. With an increasing number of contour points used in the curvature computation or with an increasing tangent velocity, the length of this finite contour segment also increases. The longer the contour segment, the more a transient in the curvature profile is smoothed. On the other hand, the noise sensitivity decreases if the contour segment over which the mean curvature is calculated increases. Figures C.1 and C.2 give the experimental verification.

Figure C.1: Measured and actual curvature of a constant curved object with velocity 40 mm/s using 15, 11, 9 or 5 contour points in the curvature computation respectively

Figure C.2: Measured curvature of a constant curved object with velocities set to 45, 70 and 110 mm/s

Figure C.1 shows the computed curvature on the basis of 15, 11, 9 and 5 contour points. Fewer points give a noisier signal, which however follows the transitions a little better. For five points, the computed curvature becomes too noisy and hence useless.

Figure C.2 shows the computed curvature for an increased velocity of 45, 70 and 110 mm/s. For these velocities, only the bigger arcs are suited to be followed. At the highest velocity, the actual step transition is smoothed in such a way that the resulting force profile¹ with feedforward control is just barely acceptable.

with feedforward control is just barely acceptable.

¹ Not shown here.


Figure C.3: Corrected offset contour of the constant curved object as measured by the vision system with a velocity of 25 mm/s

Figure C.4: Measured absolute orientation with a velocity of 25 mm/s

For the sake of completeness, figure C.3 shows the corrected offset contour of the constant curved object as measured by the vision system and figure C.4 gives the measured contour orientation versus arc length. This figure clearly indicates that the curvature is no more than the slope (tangent) of the shown orientation profile. A numerical differentiation of this profile would however result in a far too noisy curvature signal.

Other experimental results, e.g. for a sinusoidal contour, are given by Verbiest and Verdonck in [84].

The shown experimental results validate the used method for the curvature computation. At constant curved segments the computed curvature equals the actual curvature. At (step) transitions in the actual curvature, the measured curvature shows a much more smoothed profile. Here, a trade-off between accuracy and noise sensitivity has to be made by choosing the optimal number of contour points (or the optimal length of the contour segment) to be used in the least squares curvature computation. The experimental results give an optimum which lies between 9 and 11 points.

The accuracy of the curvature computation can improve if more contour points, lying more closely together, are measured and used. This involves the detection of more than one contour point per image², hereby however introducing new problems in ordering the detected contour points from image to image and pushing the computation time to its limits.

² With unchanged video frame rate.


Appendix D

Comrade task descriptions

D.1 3D rectangle alignment

As an example this section gives the high level task description or program used in the 3D rectangle alignment task, for which the experimental results are given in section 5.3, figure 5.11. The following substitutions are made to the textual programming file:

force <--> feature
N     <--> pix
Nmm   <--> pix

/* PROGRAM align with rectangle */
/********************************/

move to (in joint space) {            /* Go to starting pose */
j1: 0 deg
j2: 0 deg
j3: 90 deg
j4: -25 deg
j5: -100 deg
j6: -90 deg
with output to result.m
frequency 10 Hz contains
task frame feature [xt yt zt axt ayt azt]
task frame position [xt yt zt axt ayt azt]
desired task frame velocity [xt yt zt axt ayt azt] }

move compliantly {                    /* Move downwards */
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: velocity 0 mm/sec
zt: velocity -30 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until relative distance > 100 mm }

move compliantly {                    /* Step 1:              */
with task frame: base                 /* Align optical axis   */
with task frame directions            /* with object center   */
xt: feature 0 pix
yt: feature 0 pix
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec               /* scale for xt and yt  */
until relative time > 5 sec and       /* -> 50/(xs.ys)        */
xt feature < 1 pix and xt feature > -1 pix and
yt feature < 1 pix and yt feature > -1 pix }

move compliantly {                    /* Step 2:               */
with task frame: base                 /* Put object horizontal */
with task frame directions
xt: feature 0 pix
yt: feature 0 pix
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec               /* scale for azt         */
azt: feature 0 pix                    /* -> 2.pi/(xs.ys)       */
until azt feature < 0.1 pix and azt feature > -0.1 pix }

move compliantly {                    /* Step 3a:              */
with task frame: base                 /* Rotate about x        */
with task frame directions            /* with feedforward on y */
xt: feature 0 pix
yt: velocity 1 mm/sec * fct           /* fct is a factor       */
zt: velocity 0 mm/sec                 /* depending on the      */
axt: velocity 0.1 rad/sec * fct       /* image features        */
ayt: velocity 0 rad/sec
azt: feature 0 pix
until axt feature < 0.01 rad and axt feature > -0.01 rad }

move compliantly {                    /* Step 3b               */
with task frame: base                 /* Rotate about y        */
with task frame directions            /* with feedforward on x */
xt: velocity 1 mm/sec * fct
yt: feature 0 pix
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0.1 rad/sec * fct
azt: feature 0 pix
until ayt feature < 0.01 rad and ayt feature > -0.01 rad }

move compliantly {                    /* Steps 3a and 3b       */
with task frame: base                 /* together              */
with task frame directions
xt: velocity 1 mm/sec * fct
yt: velocity 1 mm/sec * fct
zt: velocity 0 mm/sec
axt: velocity 0.1 rad/sec * fct
ayt: velocity 0.1 rad/sec * fct
azt: feature 0 pix
until axt feature < 0.001 rad and axt feature > -0.001 rad
and ayt feature < 0.001 rad and ayt feature > -0.001 rad }

move compliantly {                    /* Move downwards */
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: velocity 0 mm/sec
zt: velocity -10 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until zt relative distance > 300 mm }

end_program

D.2 Planar contour following

As an example this section gives the high level task description used in the combined vision/force planar contour following task presented in chapter 6, for which the experimental results are given in section 6.5, figures 6.10 and 6.11.

/* PROGRAM Follow Contour */
/**************************/

move to (in joint space) {
j1 : 60 deg
j2 : 0 deg
j3 : 90 deg
j4 : -44 deg
j5 : -100 deg
j6 : -135 deg
with output to vs_new.m
frequency 10 Hz contains
task frame position (xt yt zt azt)
actual task frame velocity (yt azt)
task frame force (xt yt zt)
vision sensor frame position (xt yt azt)
vision errors (xt azt)
vision velocity (azt)
vision velocity factor (xt) }

move to (in joint space) {
xt : 250 mm
yt : -700 mm
zt : 850 mm
axt : 0 deg
ayt : 180 deg
azt : -135 deg }

set_task_number(1) end
comment( Search_edge ) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: feature distance 0 mm
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until yt feature distance < 0.1 mm
and yt feature distance > -0.1 mm }

move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: feature distance 0 mm
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: feature angle 0 rad
until azt feature angle > -0.1 rad
and azt feature angle < 0.1 rad
and yt feature distance < 0.5 mm
and yt feature distance > -0.5 mm }

move compliantly {
with task frame: base
with task frame directions
xt: velocity 20 mm/sec
yt: velocity 10 mm/sec
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until relative time > 2.5 sec }

comment(downwards) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: velocity 0 mm/sec
zt: velocity 20 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until zt force < -5 N }

comment(up) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: velocity 0 mm/sec
zt: velocity -10 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until relative distance > 13 mm }

comment(against_edge) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: force 20 N
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until yt force > 15 N }

set_task_number(2) end
comment(orientate) end


set_kp_force(3 2 3 5 5 1) end
set_ktrack (4 1 4 4 4 2) end
move compliantly {
with task frame: variable_double_contact
with task frame directions
xt: velocity 0 mm/sec
yt: force 30 N
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until yt feature distance < 0.1 mm
and yt feature distance > -0.1 mm }

set_ktrack (4 4 4 4 4 5) end
comment(rand_volgen) end
add_text( ];B=[ ) end
comment(dubble_contact) end
set_task_number(9) end
move compliantly {
with task frame: variable_double_contact
with task frame directions
xt: vision controlled velocity 1 mm/sec
/* xt: velocity 25 mm/sec */
yt: force 30 N
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: track (on velocity)
until relative distance > 450 mm }

add_text( ];C=[ ) end
set_task_number(-1) end
comment(upwards) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: force 0 N
zt: velocity -10 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until relative distance > 40 mm }

comment(home) end
move to (in joint space) {
j1 : 60 deg
j2 : 0 deg
j3 : 90 deg
j4 : -44 deg
j5 : -100 deg
j6 : -135 deg }

end_program

D.3 Planar contour following at corners

As an example this section gives the high level task description used in the combined vision/force control at corners presented in chapter 7, for which the experimental results are given in section 7.4.

/* PROGRAM Corner */
/******************/

move to (in joint space) {
j1 : 0 deg
j2 : 0 deg
j3 : 90 deg
j4 : -44 deg
j5 : -100 deg
j6 : -135 deg
with output to corner.m
frequency 10 Hz contains
task frame position (xt yt zt azt)
task frame force (xt yt zt)
actual task frame velocity (xt yt azt)
vision sensor frame position (xt yt azt)
vision errors (xt azt)
vision velocity (xt azt) }

set_task_number(1) end
move to (in joint space) {
xt : 600 mm
yt : -400 mm
zt : 800 mm
axt : 0 deg
ayt : 180 deg
azt : -175 deg }

set_ktrack (4 2 1 4 4 5) end
comment( ktrack_4_2_1_4_4_5 ) end
set_task_number(2) end
comment( Search_edge ) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: feature distance 0 mm
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until yt feature distance < 0.1 mm
and yt feature distance > -0.1 mm
and relative time > 2 sec }

comment( Orientate ) end
set_kp_force(3 2 3 5 5 1) end
comment( kp_eq_3_2_3_5_5_1 ) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: feature distance 0 mm
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: feature angle 0 rad
until azt feature angle > -0.05 rad
and azt feature angle < 0.05 rad
and yt feature distance < 0.5 mm
and yt feature distance > -0.5 mm }

move compliantly {
with task frame: base
with task frame directions
xt: velocity 8 mm/sec
yt: velocity 10 mm/sec
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until relative time > 2.5 sec }

comment(downwards) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: velocity 0 mm/sec
zt: velocity 20 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until zt force < -5 N }

comment( Upwards ) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: velocity 0 mm/sec
zt: velocity -10 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until relative distance > 13 mm }

comment( Against_edge ) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: force 30 N
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until yt force > 28 N }

/* align camera again */
move compliantly {
with task frame: variable_double_contact
with task frame directions
xt: velocity 0 mm/sec
yt: force 30 N
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until yt feature distance < 0.1 mm
and yt feature distance > -0.1 mm }

set_kp_force(3 3 3 5 5 1) end
comment( kp_3_3_3_5_5_1 ) end
set_ktrack(4 1.5 1 4 4 2) end
comment(ktrack_4_1.5_1_4_4_2) end

/* ee-rotation ~ dx*K_track[1] */
/* no filter */
/* kp[5] = 1 -> complete ff */

set_task_number(3) end
add_text( ];B=[ ) end
comment( Follow ) end
move compliantly {
with task frame: variable_double_contact
with task frame directions
xt: vision controlled velocity 50 mm/sec
yt: force 30 N
zt: velocity 0 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: track (on velocity)
until relative distance > 300 mm }

add_text( ];C=[ ) end
comment( Upwards ) end
set_task_number(4) end
move compliantly {
with task frame: base
with task frame directions
xt: velocity 0 mm/sec
yt: force 0 N
zt: velocity -10 mm/sec
axt: velocity 0 rad/sec
ayt: velocity 0 rad/sec
azt: velocity 0 rad/sec
until relative distance > 40 mm }

comment( Go_home ) end
set_task_number(-1) end
move to (in joint space) {
j1 : 0 deg
j2 : 0 deg
j3 : 90 deg
j4 : -44 deg
j5 : -100 deg
j6 : -135 deg }

end_program

D.4 Adaptations to COMRADE

This section describes the major software changes or extensions made to COMRADE. The main features and structure of the original COMRADE program, which is an acronym for COmpliant Motion Research And Development Environment, are given in [91]. The most important adaptations made, essential to implement the visual servoing control, are:

• A whole new set of output variables: vision sensor frame position, vision error, vision velocity, vision velocity factor, tracking velocity, measured feature distances in task frame, measured feature distances in vision sensor frame;


• New units to define vision controlled directions in the task description: mm, deg or rad; new end condition parameters: feature distance, feature angle;

• New sensor types (and sensor files): hybrid vision straingauge to indicate the combined usage of force sensor and vision sensor, and vision to indicate the usage of the vision sensor only;

• Communication from DSP to COMRADE containing 18 parameters, being the measured errors for the 6 camera frame directions (3 axial, 3 polar), 6 vision based task frame feedforward velocities (ff_vel) and 6 feedforward velocity factors (ff_vel_factor); see the sketch after this list. If not used, the feedforward velocities are zero and the velocity factors are 1. Feedforward velocities are always added. In contrast, feedforward velocity factors are only used if the concerned direction is indicated as a vision controlled velocity in the task description;

• Communication from COMRADE to DSP containing 21 parameters, being the current task number (1), the task frame pose (6), the task frame forces (6), the camera frame pose (6), the relative distance (1) and the relative time (1);

• New task frame definitions to decouple the task frame orientation from the end effector orientation: variable_by_vision and variable_double_contact;

• Added extern commands to on-line change the proportional control gains and the tracking gains, to change the task number and to add comment or text in the output file: set_kp_force, set_ktrack, set_task_number(nr), add_text(text), comment(comment).
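To visualize the two communication interfaces listed above, a hypothetical C view of the exchanged parameters is sketched below. The struct and field names are invented for illustration only; the real code simply exchanges float arrays (e.g. com_val[21] in appendix E).

/* Hypothetical layout of the DSP <-> COMRADE communication, only to
   visualize the parameter counts listed above.                        */
typedef struct {                 /* DSP -> COMRADE : 18 parameters      */
    float camera_errors[6];      /* measured errors, 3 axial + 3 polar  */
    float ff_vel[6];             /* feedforward velocities (default 0)  */
    float ff_vel_factor[6];      /* feedforward velocity factors (1)    */
} dsp_to_comrade_t;

typedef struct {                 /* COMRADE -> DSP : 21 parameters      */
    float task_number;           /* current task number           (1)   */
    float task_frame_pose[6];    /* task frame pose               (6)   */
    float task_frame_forces[6];  /* task frame forces             (6)   */
    float camera_frame_pose[6];  /* camera frame pose             (6)   */
    float relative_distance;     /* relative distance             (1)   */
    float relative_time;         /* relative time                 (1)   */
} comrade_to_dsp_t;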


Appendix E

Image processing software on DSP

E.1 Introduction

This appendix describes the basic elements of the implemented image processing software on the DSP-C40 (digital signal processor).

The image processing software is mainly written in C. Some libraries are originally written in Assembler, in order to optimize the processing time. The image processing executables are built on top of the real-time multi-processor operating platform VIRTUOSO, based 1) on the image grabbing libraries for the DSP/VSP configuration provided by HEMA, 2) on the C-libraries and compiler for the DSP-C40 of Texas Instruments, 3) on the Meschach C-library to compute the least squares solutions and 4) on the own implemented main program and routines, e.g. to compute the relative area parameters or to detect the ISEF edge.

The DSP/VSP unit can fully simultaneously grab a new image and process a previously grabbed image. Three memory blocks are used to store 1) the newly grabbed image, 2) the currently processed image and 3) the currently saved image. Figure E.1 gives the basic time chart of a standard program.
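The buffer rotation behind this scheme can be sketched as follows; this is an illustrative fragment with invented names, not the actual DSP code, which swaps its picture pointers explicitly in the main loop listed below.

/* Illustrative triple buffering sketch: one buffer receives the image
   being grabbed, one holds the image being processed, one holds the
   image being saved to the host. After each video cycle the roles
   rotate.                                                             */
typedef struct {
    long *grab;        /* filled by the frame grabber (DMA)  */
    long *process;     /* evaluated by the image processing  */
    long *save;        /* transferred to the host, if needed */
} image_buffers_t;

static void rotate_buffers(image_buffers_t *b)
{
    long *finished_grab = b->grab;
    b->grab    = b->save;       /* oldest buffer is overwritten next   */
    b->save    = b->process;    /* processed image may now be saved    */
    b->process = finished_grab; /* newly grabbed image gets processed  */
}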

Figure E.1: Time chart of main program loop relating grabbing, processing and logging processes

Typically, without IO to the host, the processing of one image, consisting of unpacking the image, contour extraction for 9 to 11 points, least squares line fit, position match and least squares curvature fit, takes about 10 to 15 msec. The remaining time is used for IO with the host to display messages, log data and save images. The former process is avoided as much as possible. The latter are programmed as parallel processes with the lowest priority (as shown by the 'SYSDEF' file), in order not to disturb the real time control cycle. Saving one single image spans several cycles. Hence, not every image can be saved.

The following sections comment on the VIRTUOSO 'SYSDEF' file and the main parts of the C-code for a contour following task and control at a corner. Listing the complete code would be tedious. Therefore, only the key code of the main program (RUN_TASK.C), the main header file (RUN_TASK.H), the communication code (IOTT.C, IOFT.C and IMG2HOST1.C) and the ISEF image evaluation procedures (EVAL_IMG.C) are listed and clarified.


E.2 System definition file

/********************************************************************/
/* FILE: SYSDEF                                                     */
/********************************************************************/

#define ROOT DSP1   /* This is a typical system definition file     */
#define BYTE 1      /* for the VIRTUOSO real-time multi-processor   */
#define WORD 4      /* platform defining (parallel) processes, IO,  */
NODE DSP1 C40       /* memory blocks, semaphores and resources.     */

DRIVER ROOT 'HostLinkDma (0, 0, PRIO_DMA)'
DRIVER DSP1 'Timer0_Driver (tickunit)'
DRIVER DSP1 'RawLinkDma(1,PRIO_DMA)'

/* NOTE: Stack sizes are measured in words, not bytes.  */
/* If a 'printf' statement is inserted in the task,     */
/* the tasks' stack cannot be too low (at least 268).   */

/* taskname         node  prio  entry          stack  groups   */
/*---------------------------------------------------------------*/
#ifdef DEBUG
TASK TLDEBUG        ROOT  1     tldebug        400   [EXE SYS]
TASK POLLESC        ROOT  1     pollesc        100   [EXE]
#endif
TASK CONIDRV        ROOT  2     conidrv        128   [EXE]
TASK STDIODRV       ROOT  3     stdiodrv       128   [EXE]
TASK CONODRV        ROOT  2     conodrv        512   [EXE]
TASK RUN_TASK       DSP1  6     run_task       2048  [EXE]  /* Main program  */
TASK IO_TO_TP       DSP1  5     io_to_tp       512   [EXE]  /* IO: DSP -> TP */
TASK IO_FROM_TP     DSP1  4     io_from_tp     1024  [EXE]  /* IO: TP -> DSP */
TASK IMAGE_TO_HOST  DSP1  7     image_to_host  512   [EXE]  /* Image -> host */

/* queue       node  depth  width */
/*---------------------------------*/
#ifdef DEBUG
QUEUE DEBUGIN  ROOT  16     WORD
#endif
QUEUE CONIQ    ROOT  16     WORD
QUEUE CONOQ    ROOT  512    WORD
QUEUE STDIQ    ROOT  64     WORD
QUEUE STDOQ    ROOT  64     WORD

/* map         node  blocks  size */  /* -> Sizes expressed in bytes!      */
/*--------------------------------*/
MAP MAP10K1    ROOT  1       10K      /* Total memory is 1M of C40 words   */
MAP MAP10K2    ROOT  1       10K      /* large. 1 word = 4 bytes = 32 bit. */
MAP MAP10K3    ROOT  1       10K      /* Memory pointers always point to   */
MAP MAP10K4    ROOT  1       10K      /* words, never to bytes. When       */
MAP MAP64K1    ROOT  1       64K      /* grabbing an image, however, 4     */
MAP MAP64K2    ROOT  1       64K      /* pixels are packed into 1 word.    */
MAP MAP64K3    ROOT  1       64K      /* An image of 512x512 pixels takes  */
MAP MAP64K4    ROOT  1       64K      /* 64K memory places = 256 Kbytes.   */
MAP MAP64K5    ROOT  1       64K      /* To process the image, it is       */
MAP MAP64K6    ROOT  1       64K      /* unpacked, taking 256K words = the */
MAP MAP256K1   ROOT  1       256K     /* size of MAP256K1.                 */
MAP MAP256K2   ROOT  1       256K     /* MAP64K4 to 6 are currently not    */
MAP MAP1M1     ROOT  1       1024K    /* used.                             */

/* semaphore            node */
/*----------------------------*/
SEMA SEM0               ROOT  /* -> To protect global variables common   */
SEMA SEM1               ROOT  /* to RUN_TASK and IO_TO_TP processes, and */
SEMA SEM2               ROOT  /* likewise, for RUN_TASK and IO_FROM_TP.  */
SEMA SEM_REQ_PICT       ROOT  /* -> To protect image pointers common to  */
SEMA SEM_PICT_OK        ROOT  /* RUN_TASK and IMAGE_TO_HOST processes.   */
SEMA START_IMAGE_2_HOST ROOT

/* resource       node */
/*----------------------*/
RESOURCE HOSTRES   ROOT
RESOURCE STDIORES  ROOT
RESOURCE CONRES    ROOT
/********************************************************************/

E.3 Main program file (example)

/********************************************************************/
/* FILE  : RUN_TASK.C (main file)  PURPOSE: Follow edge with corner  */
/* Author: Johan Baeten                                              */
/********************************************************************/
/* uses subroutines:                                                 */
/*  int evaluate_img(SCAN_DATA *scan, POINT *corner, float *work_sp) */
/*  void least_squares(MAT *A, VEC *b, VEC *p)                       */
/*  int set_centered_var_sized1(long *picture1, long *picture2,      */
/*        int x_p_size, int y_p_size, DMA_AUX_REG *dma_job)          */
/* in files:                                                         */
/*  SET_INL.C, EVAL_IMG.C and LEASTSQ.C                              */
/********************************************************************/

#include "my_struc.h"
#include "run_task.h"
-----------------------------------------------------------------------

Here local procedures are declared (code skipped).


-----------------------------------------------------------------------
/* *** globals *** */
float xr_global;         /* Needed in iott.c                             */
float alfa_global_rad;   /* Needed in iott.c                             */
float ff_vel[6];         /* Needed in iott.c                             */
float ff_vel_factor[6];  /* should range from 0.0 to 2.0 or 3.0          */
float com_val[21];       /* Needed in ioft.c                             */
int nop;                 /* nop = number of points, needed in img2hst1.c */
long *pict_to_save1;     /* Needed in img2hst1.c                         */
long *pict_to_save2;     /* Needed in img2hst1.c                         */
int SAVE_PICTURES;       /* Needed in img2hst1.c                         */
int SAVE_DATA;           /* Needed in img2hst1.c                         */
float data_to_save[42];  /* max. 13 points*3 + xr, yr and alfa           */
int REQ_PICT = 0;        /* Needed in img2hst1.c and eval_img.c          */
int DX_SIZE, DY_SIZE;    /* Needed in img2hst1.c and eval_img.c          */

/********************************************************************/
/* MAIN PROGRAM                                                      */
/********************************************************************/
void run_task () {

---------------/* *** BEGIN INITIALIZATION *** */---------------------

The initialization part

• declares the local variables (code skipped);

• reads the externally defined parameters;

-----------------------------------------------------------------------
printf("\n Reading from external settings file ... ");
var_extern = fopen("settings.par","r");    /* The file SETTINGS.PAR  */
NEXT_PAR(LINK_WITH_TRANSPUTER,0,1);        /* contains several para- */
NEXT_PAR(PRINTING,0,1);                    /* meters which can be    */
NEXT_PAR(WRITING,0,1);                     /* changed without recom- */
NEXT_PAR(SAVE_PICTURES,0,1);               /* piling.                */
NEXT_PAR(SAVE_DATA,0,1);
NEXT_PAR(X_SIZE,64,256);
NEXT_PAR(Y_SIZE,64,256);
NEXT_PAR(NUMBER_OF_SCANLINES,5,13);
fclose(var_extern);
printf("done.\n");
-----------------------------------------------------------------------

• initializes global and local variables, structures and matrices (code skipped);

• allocates memory blocks;


-----------------------------------------------------------------------
MEM_ALLOC(MAP10K1,dma_job1);
MEM_ALLOC(MAP10K2,dma_job2);
MEM_ALLOC(MAP10K3,dma_job3);
MEM_ALLOC(MAP10K4,SOURCELINE);
MEM_ALLOC(MAP64K1,picture1);
MEM_ALLOC(MAP64K2,picture2);
MEM_ALLOC(MAP64K3,picture3);
MEM_ALLOC(MAP64K4,picture4);   /* -> Currently not used */
MEM_ALLOC(MAP64K5,picture5);   /* -> Currently not used */
MEM_ALLOC(MAP64K6,picture6);   /* -> Currently not used */
MEM_ALLOC(MAP256K1,unpact);
MEM_ALLOC(MAP256K2,data_log);  /* max number is 64 K elements */
MEM_ALLOC(MAP1M1,work_space);
-----------------------------------------------------------------------

• initiates communication with transputer;

-----------------------------------------------------------------------
if (LINK_WITH_TRANSPUTER == 1) {
  printf("Waiting for link with transputer ... ");
  KS_LinkinW(1,sizeof(int),&identifier);
  if (identifier == 1){
    KS_LinkoutW(1,sizeof(int), &identifier);
  }
  else{
    printf("Error: Initiating link.\n");
    errorsignal = -1;
    KS_LinkoutW(1,sizeof(int),&errorsignal);
    exit(1);
  }
  printf(" OK.\n\n");
  KS_Signal(SEM2);
}
-----------------------------------------------------------------------

• sets up the DMA pointer table for image grabbing (code skipped);

• prepares image pointers and DMA settings for the first run and waits for the transputer to signal the start of the first task.

-----------------------------------------------------------------------
run_count = 0;
dma_ptr->dma_aux_count = 0;             /* autoinit from prepared table */
dma_ptr->dma_aux_link = (unsigned long *) dma_job1;
dma_ptr->gcontrol = dma_job1->gcontrol;
picturea = *picture1;
pictureb = *picture4;
dma_job1or3 = dma_job1;
picture1or3 = picture1;
picture4or6 = picture4;
current_pict_to_save = 1;
state = SEARCH_EDGE;
while (com_val[0] < begin_task_nr) {
  KS_Signal(SEM0);                 /* Necessary to signal a send in     */
  KS_Sleep(10);                    /* order to get a change in com_val. */
}                                  /* Grab every picture,               */
xpp_grab (in, out, 0, 0xC400, 0x81);    /* stop after first.            */

----------------/* *** END INITIALIZATION *** */-----------------------

The main program is an endless loop at video rate (25 Hz), which only stops when the transputer signals the end of the last task. The main loop consists of the following steps:

• Check whether image saving is required;

• Check ending of COMRADE. If so, reset global variables, log data if WRITING was set to 1, and terminate (partly skipped);

-----------------------------------------------------------------------
printf("Starting endless control loop.\n");
printf("------------------------------\n");
printf("The distance and angular errors of the contour are \n");
printf("computed in the DSP and corrected by feedback.\n\n");

KS_Elapse (&etime);                  /* Time management */

while(1){                            /* *** ENDLESS LOOP *** */
  run_count += 1;
  while (dma_ptr->gcontrol_bit.aux_start == 3) KS_Sleep(1);
  /* Wait for dma finish, parallel processes can now do their job. */
  xpp_stop(in,out,0,0xC400,0);       /* Stop grabbing                */
  dma_ptr->dma_aux_count = 0;        /* Autoinit from prepared table */

  if ((com_val[0] == start_movie_task_nr)&&(image_dump_started == 0)){
    image_dump_started = 1;
    KS_Signal(START_IMAGE_2_HOST);
  }
  if ((com_val[0] == end_movie_task_nr) && (image_dump_started == 1)){
    KS_Abort(IMAGE_TO_HOST);         /* Stop image saving */
    REQ_PICT = 0;
  }

if (com_val[0] == stop_task_nr) { /* Normal ending of COMRADE */----------------! Code skipped !

203

Page 218: Integration of vision and force for robotic servoingAbstract Recent research aims at the involvement of additional sensors in robotic tasks in order to reach a higher level of performance,

E Image processing software on DSP

    KS_Wait(SEM1);
    printf("\n\n** Server terminated -- Goodbye. **\n");
    server_terminate();                 /* END PROGRAM */
  }
-----------------------------------------------------------------------

• Provide new image for IMAGE TO HOST task, if requested (code skipped);

• Switch image pointers and DMA control (code skipped);

• Start grabbing new image;

• Signal IO TO TP task to send control parameters to transputer (routine in IOTT.C);

• Unpack the previously grabbed image;

-----------------------------------------------------------------------
  xpp_grab (in, out, 0, 0xC400, 0x81);               /* Grab 1 picture */
  if (LINK_WITH_TRANSPUTER == 1) {KS_Signal(SEM0);}  /* Send control   */
  KS_Elapse(&vtime);
  word_to_4floats(unpact, picturea, DX_SIZE*DY_SIZE/4);    /* Unpack   */

-----------------------------------------------------------------------

• Evaluate the image (routines in EVAL_IMG.C);

• Handle different image condition cases:

SCAN OK: Solve least squares line fit;
    Recalculate to absolute parameters with offset and without deformation;
    Log data;
    If distance tool-camera is bridged,
    - match current undeformed position to logged data;
    - compute contour curvature at match;

INNER CORNER: Not implemented;

OUTER CORNER: Shift and rotate scan window;
    Evaluate image again;
    If SCAN OK,
    - solve least squares line fit;


    - calculate corner;
    - if corner angle large enough,
      -- go to APPROACH CORNER state;
    Else reset;

BLACK: Reset parameters to search edge to the right;

WHITE: Reset parameters to search edge to the left;

-----------------------------------------------------------------------
  image_condition = evaluate_img(&scan_data, work_space);    /* ISEF */
  edge_valid = 0;
  switch(image_condition){

  case SCAN_OK:                         /* All lines scanned OK */
    for (i=0; i < scan_data.nr_of_scanlines; i++){
      A->me[i][0] = 2*Y[i];             /* Compose matrix A and B */
      A->me[i][1] = 1;                  /* out of X and Y         */
      B->ve[i] = X[i];
    }
    /* Least square fit as an approximation of the contour. */
    least_squares(A, B, coef);
    /* Calculation of the control parameters in the image plane. */
    a = coef->ve[0];
    b = coef->ve[1];                    /* Equation: x = ay + b */
----------------! Code skipped !----------------
    /* Correct and log contour data.          */
    /* Variables tabelxxx are cyclic buffers. */
    tabelalfa[einde] = com_val[18] + alfa_global_rad;
    /* Compute offset */
    xt = X[nop/2]*F + r/sqrt(1+a*a);
    yt = Y[nop/2]*F - r/sqrt(1+a*a)*a;
    /* Recalculate to absolute values. */
    tabelx[einde] = com_val[13] + xt*cos(com_val[18])
                                - yt*sin(com_val[18]);
    tabely[einde] = com_val[14] + xt*sin(com_val[18])
                                + yt*cos(com_val[18]);
    /* Correct deformations due to compliance.             */
    /* Tool stiffness = 13.2 N/mm, camera stiffness = 21.2 */
    tabelx[einde] += sin(com_val[6]) * com_val[8]/21.2;
    tabely[einde] -= cos(com_val[6]) * com_val[8]/21.2;
    /* Compute arc length */
    tabels[einde] = sqrt((tabelx[einde]-tabelx[voorlaatste])*
                         (tabelx[einde]-tabelx[voorlaatste])+
                         (tabely[einde]-tabely[voorlaatste])*
                         (tabely[einde]-tabely[voorlaatste]));


    hulp1 = einde;
    voorlaatste = einde;
    einde = (einde+1) % TABELLENGTH;
    /* Undo deform on current position. */
    xTCP = com_val[1] + sin(com_val[6])*com_val[8]/13.2;
    yTCP = com_val[2] - cos(com_val[6])*com_val[8]/13.2;
    /* Search current position in table. */
    pointer = begin - 5;
    if (pointer < 0) pointer += TABELLENGTH;
    min = 50000.0;
    positie = 0;
    for (teller = 0; teller < 25; teller++, pointer++){
      if (pointer >= TABELLENGTH) pointer -= TABELLENGTH;
      afst = (tabelx[pointer]-xTCP)*(tabelx[pointer]-xTCP)+
             (tabely[pointer]-yTCP)*(tabely[pointer]-yTCP);
      if (afst < min){
        min = afst;                     /* square of minimal distance */
        positie = teller-5;             /* w.r.t. start               */
      }
    }
    begin = (begin + positie + TABELLENGTH) % TABELLENGTH;
    /* begin points to matching position.      */
    /* Compute curvature at matching position. */
    pointer = (begin - NR_UITM_DIV2 + 1 + TABELLENGTH);
    for (i=0; i < nr_of_apoints; i++, pointer++){
      if (pointer >= TABELLENGTH) pointer -= TABELLENGTH;
      BB->ve[i] = tabelalfa[pointer];
      AA->me[i][1] = 1;
      if (i == 0){
        AA->me[i][0] = tabels[pointer];
      }else{
        AA->me[i][0] = AA->me[i-1][0]+tabels[pointer];
      }
    }
    least_squares(AA, BB, ccoef);
    kappa = -ccoef->ve[0];
----------------! Code skipped !----------------
    break;

  case INNER_CORNER:                    /* inner corner expected */
----------------! Code skipped !----------------
    break;

  case OUTER_CORNER:                    /* outer corner expected */


    if ((com_val[0] > 2) && (state != APPROACH_CORNER)){
      shift = 5 + (scan_data.nr_of_scanlines+1) * scan_data.delta/2;
      scan_data.direction = 3;
      scan_data.center.x = scan_data.image.dx/2 - shift;
      scan_data.pathlength = 70;
      second_image_condition = evaluate_img(&scan_data, work_space);
----------------! Code skipped !----------------
      if (second_image_condition == SCAN_OK){
        if (fabs(sin(last_abs_edge.alfa-new_abs_edge.alfa)) > 0.5){
          state = APPROACH_CORNER;
        }
      }else{
        scan_data.direction = 0;        /* reset scan window */
        scan_data.center.x = scan_data.image.dx/2;
        scan_data.pathlength = 100;
      }
    }
    break;

  case BLACK:
  case WHITE:
  default:
----------------! Code skipped !----------------

  } /* End switch */
-----------------------------------------------------------------------

• Wait for reception of the robot state parameters and release of the global variables;

• Calculate control parameters (distance, angle and curvature) depending on the current state:

SEARCH EDGE: Search edge to the left or to the right depending on BLACK or WHITE image;

CURVED EDGE, STRAIGHT EDGE: Give curvature-based feedforward;

APPROACH CORNER: Slow down;
    If corner reached,
    - go to AT CORNER state;

AT CORNER: Implement feedforward control;
    If angular distance travelled equals corner angle,
    - go to PAST CORNER state;


PAST CORNER: Accelerate;
    If normal velocity is reached,
    - return to CURVED EDGE or STRAIGHT EDGE state.

• Handle timing calculations (code skipped).

-----------------------------------------------------------------------
  if (LINK_WITH_TRANSPUTER == 1) {KS_Wait(SEM1);}
  /* Calculate control variables that are sent to the transputer. */
  switch (state){

  case SEARCH_EDGE:
    if (com_val[0] == 3){
      state = CURVED_EDGE;
      scan_data.pathlength = 50;
    }
    xr_global = xr_pix * F;
    alfa_global_rad = alfa_pix;
    break;

  case CURVED_EDGE:
    ff_vel_factor[0] = 1;       /* actual velocity = nominal velocity */
    ff_vel[0] = 0;
    if (com_val[0] == 9.0){
      ff_vel[5] = kappa*nominal_vel;
    }else{
      ff_vel[5] = 0.0;
    }
  case STRAIGHT_EDGE:
    xr_global = xr_pix * F;
    alfa_global_rad = alfa_pix;
    break;

  case APPROACH_CORNER:
    alfa_global_rad = alfa_pix;
    adapt_vel_factor(ff_vel_factor, exp, min_vel, nominal_vel);
    if (image_condition==WHITE || image_condition==OUTER_CORNER){
      xr_global = -50 * F;      /* turn left to search edge  */
    }else{
      if (image_condition==BLACK || image_condition==INNER_CORNER){
        xr_global = 50 * F;     /* turn right to search edge */
      }else{
        xr_global = xr_pix * F;
      }
    }
    if (distance_tc < 3.0){
      state = AT_CORNER;
      ff_vel_factor[0] = 0.0;
      ff_vel[0] = min_vel;
      ff_vel[5] = -1.0*ff_vel[0]/(R_TOOL-30.0/13.2);  /* 13.2 N/mm = tool stiffness */


      printf("- At corner\n");
      start_tf_angle = com_val[6];
    }
    break;

  case AT_CORNER:
    alfa_global_rad = alfa_pix;
    xr_global = xr_pix * F;
    if (fabs(start_tf_angle-com_val[6]) > 0.91*fabs(angle_of_corner)){
      state = PAST_CORNER;
      printf("Past corner\n");
      ff_vel[0] = 0.0;
      ff_vel[5] = 0.0;
      ff_vel_factor[0] = min_vel/(nominal_vel);
    }
    break;

  case PAST_CORNER:
    if (*ff_vel_factor < 0.9){
      adapt_vel_factor(ff_vel_factor, 1.05, min_vel, nominal_vel);
    }else{
      *ff_vel_factor = 1.0;
      state = STRAIGHT_EDGE;
    }
    alfa_global_rad = alfa_pix;
    xr_global = xr_pix * F;
    break;

  default:                      /* Should never occur */
    printf("State ? ");
    break;

  }
  /* Handle timing calculations */
----------------! Code skipped !----------------

} /* *** End while (endless loop) *** */
} /* *** END OF RUN_TASK *** */

/* Subroutines */
----------------! Code skipped !----------------
-----------------------------------------------------------------------

E.4 Main header file

/*******************************************************************/
/* FILE : RUN_TASK.H                                               */
/* Header file for RUN_TASK.C                                      */


/* Date: 01/12/98                                     Johan Baeten */
/*******************************************************************/

#ifndef __run_task
#define __run_task

#include <iface.h>
#include "node1.h"
#include <_stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <matrix.h>
#include <matrix2.h>
#include <dma_set.h>    /* Also includes <comport> <dma> and <xpp> */
#include <asmrout.h>    /* High performance assembler functions    */

/* ***Definition of constants*** */
#define PI      3.14159265359
#define INF     1e32
#define MU      0.011          /* Width of one pixel (mm)     */
#define FOCUS   6.157148       /* Focal length of the camera  */
#define HEIGTH  -100.6
#define F       0.18           /* -(HEIGTH*MU)/FOCUS          */
#define Ts      0.04           /* Process time [sec]          */
#define NUMBER_OF_COLUMNS 2    /* Number of parameters of fit */
#define R_TOOL  13.0           /* Radius of tool (wheel)      */

#define NO_STATE        -1
#define CURVED_EDGE      0
#define STRAIGHT_EDGE    1
#define APPROACH_CORNER  2
#define AT_CORNER        3
#define PAST_CORNER      4
#define SEARCH_EDGE      5
#define SCAN_OK          1
#define INNER_CORNER    -1
#define OUTER_CORNER    -2
#define BLACK           -3
#define WHITE           -4

#define PRESS_ANY_KEY printf("Press any key to continue.\n");\
                      while(!server_pollkey()) {KS_Sleep (15);}

#ifndef MEM_ALLOC
#define MEM_ALLOC(map,pointer)\
        KS_Alloc((map),(void**) &(pointer));\
        if (pointer == NULL)\
        {printf("KS_Alloc problem with pointer!\n"); exit (1);}

#endif

/* macros for acquiring variables declared in an external file */


#define NEXT_PAR(parm,minl,maxl)\
  if (fgets(line,128,var_extern) == NULL) {\
    printf("\nError in settings.par."); exit(1); }\
  fscanf(var_extern,"%s",spar);\
  if (strcmp(#parm,spar) == 0){\
    fscanf(var_extern," = %d",&(parm));\
    if (parm < minl || parm > maxl) {printf("\nWARNING: Parameter...
                " #parm " = %d --> out of range.",parm);}}\
  else {printf("\nError in settings.par: Parameter " #parm " ...
                is missing?\n"); exit(1); }

#define NEXT_PARF(parmf,minl,maxl)\
  if (fgets(line,128,var_extern) == NULL) {\
    printf("\nError in settings.par."); exit(1); }\
  fscanf(var_extern,"%s",spar);\
  if (strcmp(#parmf,spar) == 0){fscanf(var_extern,"= %f", &(parmf));\
    if (parmf < minl || parmf > maxl) {printf("\nWARNING: ...
                Parameter "#parmf" = %.2f --> out of range.",parmf);}}\
  else {printf("\nError in settings.par: Parameter "#parmf" is ...
                missing?\n"); exit(1); }
/* end macros */

/* ***List of used subroutines*** */
int  set_centered_var_sized1(long *picture1, long *picture2,
                             int x_p_size, int y_p_size, DMA_AUX_REG *dma_job);
int  evaluate_img(SCAN_DATA *scan, float *work_sp);
void least_squares(MAT *A, VEC *b, VEC *p);

#endif
/*******************************************************************/
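The constant F defined above is the scale factor from pixels to millimetres in the object plane, derived from the pixel width MU, HEIGTH (presumably the camera height in millimetres) and the focal length FOCUS. A minimal, stand-alone check of the defined value 0.18; this small test program is ours and is not part of the DSP sources:

#include <stdio.h>

#define MU      0.011      /* Width of one pixel (mm)    */
#define FOCUS   6.157148   /* Focal length of the camera */
#define HEIGTH  -100.6     /* Camera height (assumption: mm) */

int main(void)
{
    /* Pinhole scaling: mm per pixel = -(height * pixel width) / focal length. */
    double F_check = -(HEIGTH * MU) / FOCUS;
    printf("F = %.4f mm/pixel\n", F_check);   /* prints F = 0.1797, rounded to 0.18 */
    return 0;
}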

E.5 IO from DSP to transputer

/*******************************************************************/
/* FILE: IOTT.C    19-11-98                           Johan Baeten */
/* IO to Transputer - Comrade version 44v5                         */
/*******************************************************************/
/* Sending 18 variables :                                          */
/*     1-6   Errors in camera frame                                */
/*     7-12  Feedforward velocities                                */
/*    13-18  Velocity Factors                                      */
/*******************************************************************/

#include <iface.h>
#include "node1.h"
#include <comport.h>   /* hema's version, compatible with TI compiler */

extern float alfa_global_rad;


extern float xr_global;
extern float ff_vel[6];
extern float ff_vel_factor[6];

void io_to_tp() {
  long i, ident, teller;
  float flt0, flt5, fltv0, fltv5, fltvf0, fltvf1, fltvf5;
  long result_zero, result_one;
  int result[18];
  float flt_zero = 0.0;
  float flt_one  = 1.0;

  result_one  = toieee1(&flt_one);    /* Conversion from c40 float to */
  result_zero = toieee1(&flt_zero);   /* ieee float representation.   */
  for (i=0; i<12; i++) { result[i]    = result_zero; }
  for (i=0; i<6;  i++) { result[i+12] = result_one;  }

  while (1) {
    KS_Wait(SEM0);                  /* Wait for updated global variables. */
    flt0 = xr_global;               /* x distance */
    result[0] = toieee1(&flt0);
    flt5 = alfa_global_rad;         /* z angle */
    result[5] = toieee1(&flt5);
    fltv0 = ff_vel[0];              /* x ff_vel */
    result[6] = toieee1(&fltv0);
    fltv5 = ff_vel[5];              /* z angle ff_vel */
    result[11] = toieee1(&fltv5);
    fltvf0 = ff_vel_factor[0];
    result[12] = toieee1(&fltvf0);
    fltvf1 = ff_vel_factor[1];      /* y vel_factor */
    result[13] = toieee1(&fltvf1);
    fltvf5 = ff_vel_factor[5];      /* z_angle vel_factor */
    result[17] = toieee1(&fltvf5);
    KS_Signal(SEM1);                /* Signal release of global variables. */
    /* Sending 18 values */
    KS_LinkoutW (1, 18*sizeof(long), &result);

  } /* end while */
} /*** end io_to_tp() ***/
/*******************************************************************/

E.6 IO from transputer to DSP

/*******************************************************************/
/* FILE: IOFT.C    19-11-98                           Johan Baeten */
/* IO from Transputer - Comrade version 44v5                       */
/*******************************************************************/
/* Receiving 21 variables :                                        */
/*     1     Task Number                                           */


/*     2-7   Task Frame Position                                   */
/*     8-13  Task Frame Forces                                     */
/*    14-19  Camera Frame Position                                 */
/*    20     Relative Distance                                     */
/*    21     Relative Time                                         */
/*******************************************************************/

#include <iface.h>
#include "node1.h"
#include <comport.h>
#include <asmrout.h>

extern float com_val[21];

void io_from_tp() {
  long i;
  long com_start[21];

  KS_Wait(SEM2);                    /* Wait for proper initialization. */
  while (1) {
    /** receiving 21 values **/
    KS_LinkinW (1, sizeof(long)*21, com_start);
    for (i=0; i<21; i++){
      com_val[i] = frieee1(com_start+i);
    }
  }
} /*** end io_from_tp() ***/
/*******************************************************************/
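The indices into com_val[] used throughout RUN_TASK.C (com_val[0], com_val[6], com_val[8], com_val[13], com_val[14], com_val[18], ...) are simply the 0-based version of the 1-based list in the header comment above. As a purely illustrative aid they could be given symbolic names; the enum below is ours and does not appear in the original sources:

/* Hypothetical symbolic names for the 21 received values (0-based). */
enum com_val_index {
    CV_TASK_NR      =  0,   /*  1     Task Number           */
    CV_TF_POS       =  1,   /*  2-7   Task Frame Position   */
    CV_TF_FORCE     =  7,   /*  8-13  Task Frame Forces     */
    CV_CAM_POS      = 13,   /* 14-19  Camera Frame Position */
    CV_REL_DISTANCE = 19,   /* 20     Relative Distance     */
    CV_REL_TIME     = 20    /* 21     Relative Time         */
};

/* With these names, com_val[CV_TASK_NR] is the COMRADE task number tested in  */
/* the main loop, com_val[CV_CAM_POS] and com_val[CV_CAM_POS+1] are the camera */
/* frame x and y positions used to log absolute contour data, and              */
/* com_val[CV_CAM_POS+5] is the camera frame orientation angle.                */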

E.7 Image to host

/*******************************************************************/
/* FILE: IMG2HOST1.C    21-6-2000                     Johan Baeten */
/* Save 1 image with contour data                                  */
/*******************************************************************/
#include <iface.h>
#include "node1.h"
#include <_stdio.h>
#include <stdlib.h>
#include <string.h>

extern int SAVE_PICTURES;
extern int SAVE_DATA;
extern int REQ_PICT;
extern long *pict_to_save1;
extern long *pict_to_save2;
extern int DX_SIZE;
extern int DY_SIZE;


extern float data_to_save[42];
extern int nop;                  /* Number of points or scan lines */

void requestnewimage(){
  REQ_PICT = 1;
  KS_Wait(SEM_PICT_OK);

}

void image_to_host(){
  int pict_x_size, pict_y_size;
  int run = 1, i;
  float *pointer;
  char name[14], gnm[14];
  FILE *fig;
  FILE *geg;

  pict_x_size = DX_SIZE;
  pict_y_size = DY_SIZE;
  sprintf(name,"data/x%d.pgm",run);
  sprintf(gnm,"data/x%d.dat",run);
  run = run + 1;

  KS_Wait(START_IMAGE_2_HOST);
  while(1){
    KS_Sleep(3);
    requestnewimage();
    if (SAVE_DATA == 1){
      pointer = data_to_save;
      geg = fopen(gnm,"w");
      for (i = 0; i < nop+1; i++){
        fprintf(geg,"%4.3f\t%4.3f\t%4.3f\n",*pointer, ...
                *(pointer+1),*(pointer+2));
        pointer += 3;
      }
      fclose(geg);
      sprintf(gnm,"data/x%d.dat",run);
    }
    if (SAVE_PICTURES == 1){
      fig = fopen(name,"w");
      fprintf(fig,"P5\n%d %d\n 255\r",pict_x_size,pict_y_size);
      fwrite(pict_to_save1, pict_x_size*pict_y_size/4, 1, fig);
      fclose(fig);
      sprintf(name,"data/x%d.pgm",run);
    }
    run = run + 1;
  } /*** end of while ***/
} /*** end image_to_host() ***/
/*******************************************************************/


E.8 Procedures to evaluate image

Finally, the file EVAL_IMG.C groups the most important code used to evaluate the image.

/*******************************************************************/
/* FILE : EVAL_IMG.C                                               */
/*                                                                 */
/* The PURPOSE: POINT EXTRACTION                                   */
/*                                                                 */
/* Date: 30/06/2000                                   Johan Baeten */
/*******************************************************************/

/*******************************************************************/
/* with subroutines:                                               */
/* int   edge_present (float *sourceline, int length);             */
/* float iseffil (float *sourceline, int length, float *work_sp);  */
/* void  isef_source (SCAN_DATA *scan);                            */
/* int   add_point(POINT point, SCAN_DATA *scan);                  */
/* int   calc_startpixel(SCAN_DATA *scan);                         */
/* int   global_scan(SCAN_DATA *scan, float *work_sp)              */
/* int   evaluate_img(SCAN_DATA *scan, POINT *corner,float *work_sp)*/
/*******************************************************************/

#include <stdlib.h>
#include <_stdio.h>
#include <math.h>
#include "my_struc.h"

#define THRESBLACK    130
#define THRESWHITE    170
#define THRESHOLD     128      /* grey value */
#define ISEFCONSTANT  0.5

/* ************* Definition of CALC_STARTPIXEL ***************** */
/*                                                               */
/* This function calculates the starting point (scan->start.x,  */
/* scan->start.y) for isef_edge detection given scan->direction,*/
/* scan->pathlength, scan->delta and scan->nr_of_scanlines.     */
/* The (maximum) window sizes = (image.dx, image.dy) must be met.*/
/* Taking the positive y-direction downwards as displayed,      */
/* then dir = 1 <-> -45, and dir = 3 <-> 45 giving in the image: */
/*                                                               */
/*        [               ]                                      */
/*        [       / 3     ]                                      */
/*        [      /        ]                                      */
/*        [     /         ]                                      */
/*        [    + -----> 0 ]                                      */
/*        [    | \        ]                                      */


/*        [    |  \       ]                                      */
/*        [    |   \      ]                                      */
/*        [    2    1     ]   The returning value indicates      */
/* whether (scan->start.x, scan->start.y) lies in the window:    */
/*   calc_startpixel : 0 --> error, out of the window            */
/*   calc_startpixel : 1 --> ok                                  */
/* The parameters scan->delta, scan->nr_of_scanlines and         */
/* scan->pathlength must be positive !!!                         */
/*******************************************************************/

int calc_startpixel(SCAN_DATA *scan){
  int sp_ok = 0;
  int xstart, ystart, dp, pt;
  int xmax, ymax, xmin, ymin;

  dp = scan->delta * (scan->nr_of_scanlines-1)/2;

  switch (scan->direction) {
  case 0:
    ystart = scan->center.y + dp;      /*!!! different from MATLAB !!!*/
    ymin   = ystart - scan->delta * scan->nr_of_scanlines;
    xstart = scan->center.x - (scan->pathlength/2);
    xmax   = xstart + scan->pathlength;
    xmin   = xstart;
    ymax   = ystart;
    break;
  case 2:
    xstart = scan->center.x - dp;
    xmax   = xstart + scan->delta * scan->nr_of_scanlines;
    ystart = scan->center.y - (scan->pathlength/2);
    ymax   = ystart + scan->pathlength;
    xmin   = xstart;
    ymin   = ystart;
    break;
  case 1:
    pt = scan->pathlength/2.8;
    ymin = scan->center.y - dp - pt;
    ymax = scan->center.y + dp + pt;
    xmin = scan->center.x - dp - pt;
    xmax = scan->center.x + dp + pt;
    ystart = scan->center.y + dp - pt;
    xstart = xmin;
    break;
  case 3:
    pt = scan->pathlength/2.8;
    ymin = scan->center.y - dp - pt;
    ymax = scan->center.y + dp + pt;
    xmin = scan->center.x - dp - pt;


    xmax = scan->center.x + dp + pt;
    ystart = ymax;
    xstart = scan->center.x + dp - pt;
    break;
  default:
    printf("Bug in calc_sp. This should never happen.\n");
    break;
  }
  if ((ymin < 1) || (xmin < 1) || (ymax > scan->image.dy) ||
      (xmax > scan->image.dx)){
    scan->start.x = 0;
    scan->start.y = 0;
    sp_ok = 0;
  }else{
    scan->start.x = xstart;
    scan->start.y = ystart;
    sp_ok = 1;
  }
  return sp_ok;
} /* End of calc_startpixel() */
/*------------------------------------------------------------------*/

/* ************* Definition of ISEFFIL **************** */
/*                                                      */
/* This function detects an edge using the isef-filter. */
/* The result of this function is the x-pixel coordinate*/
/* of the edge with sub-pixel precision. ( B = 0.5)     */
/********************************************************/
/* The x_pix value is relative to i = 0, the first pixel*/
/* in scan->sourceline !!!! The center lies in the      */
/* middle of this pixel.                                */
/********************************************************/
float iseffil (float *sourceline, int length, float *work_sp){
  int i;
  int x_p;
  float *yld;
  float *yrd;
  float *s_line1, *s_line2;
  float max = 0.0;
  float d1, d21, d22, d23;
  float x_pix;

  yld = work_sp;
  yrd = work_sp + 2*length - 1;
  s_line1 = sourceline + 1;
  s_line2 = sourceline + length - 2;

  *yld++ = 0;


  *yrd-- = 0;

  /* Calculation of left and right convolution */
  for (i = 1; i < length; i++, yld++, s_line1++, yrd--, s_line2--) {
    /* calculation of the left convolution according to the formula  */
    /* yld(i) = (1 - B)*X(i) + B*yld(i - 1)                           */
    *yld = (1.0 - ISEFCONSTANT)*(*s_line1) + ISEFCONSTANT*(*(yld-1));
    /* calculation of the right convolution according to the formula */
    /* yrd(i) = (1 - B)*X(i) + B*yrd(i+1)                             */
    *yrd = (1.0 - ISEFCONSTANT)*(*s_line2) + ISEFCONSTANT*(*(yrd+1));
  } /* endfor */

  /* Calculation of maximum gradient according to the formula */
  /* D1(i) = yrd(i + 1) - yld(i - 1)                           */
  yld = work_sp + 3;
  yrd = work_sp + length + 5;
  for (i = 4; i < length-4; i++){
    d1 = *yrd++ - *yld++;
    if (fabs(d1) > max){
      max = fabs(d1);
      x_p = i;
    }
  }
  x_pix = x_p;

  /* Calculation of the edge with sub-pixel precision according to */
  /* D2(i) = yrd(i+1) + yld(i-1) - 2*X(i)                           */
  yld = work_sp + x_p - 2;
  yrd = work_sp + length + x_p;
  s_line1 = sourceline + x_p - 1;

  d21 = (*yrd++) + (*yld++) - 2*(*s_line1++);   /* just before        */
  d22 = (*yrd++) + (*yld++) - 2*(*s_line1++);   /* 2nd deriv. at max  */
  if (d21*d22 < 0){
    x_pix = x_pix - fabs(d22)/(fabs(d21) + fabs(d22));
  } else {
    d23 = (*yrd++) + (*yld++) - 2*(*s_line1++); /* just after         */
    x_pix = x_pix + fabs(d22)/(fabs(d22) + fabs(d23));
  } /* endif */
  return x_pix;
} /* End of iseffil() */
/*------------------------------------------------------------------*/
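As an illustration of how iseffil() is used, the following stand-alone test harness (ours, not part of the DSP sources) feeds it a synthetic scan line containing one grey-value step. As in the main program, the work buffer must hold at least 2*length floats:

/* Hypothetical test harness; compile together with the iseffil() code above. */
#include <stdio.h>

float iseffil (float *sourceline, int length, float *work_sp);

int main(void)
{
    float line[32];
    float work[64];                     /* >= 2*length floats */
    int i;

    for (i = 0; i < 32; i++)            /* dark half, then bright half */
        line[i] = (i < 16) ? 100.0f : 200.0f;

    /* The strongest gradient lies at the 100 -> 200 transition, so the */
    /* returned sub-pixel coordinate should lie near pixel 15 - 16.     */
    printf("edge at x = %.2f\n", iseffil(line, 32, work));
    return 0;
}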

/* **************** Definition of EDGE_PRESENT ***************** */
/*                                                               */
/* This function checks whether scan->sourceline contains an edge.*/
/* The function returns:                                         */
/*    1: edge present;                                           */


/*   -1: no edge but black line;                                 */
/*   -2: no edge but white line.                                 */
/*******************************************************************/
int edge_present (float *sourceline, int length){
  int max = 0, min = 255;
  int edge, i;

  for (i=0; i < (length-1); i++){
    if (*(sourceline + i) > max){
      max = *(sourceline + i);
    }
    if (*(sourceline + i) < min){
      min = *(sourceline + i);
    }
  }
  if (max < THRESBLACK)
    edge = -1;
  else if (min > THRESWHITE)
    edge = -2;
  else
    edge = 1;
  return edge;
} /* End of edge_present() */
/*------------------------------------------------------------------*/

/* **************** Definition of ADD_POINT **********************/
/*                                                                */
/* This function recalculates the image coordinates to the       */
/* camera frame. It then puts the coordinates in the arrays      */
/* scan->xs and scan->ys.                                         */
/* It returns 1 if ok, 0 if error (point out of the image).      */
/******************************************************************/
int add_point(POINT point, SCAN_DATA *scan){
  int nr, ok;
  nr = scan->nr_of_points;
  ok = 1;

  if ((point.y >= 0) && (point.y <= scan->image.dy) && (point.x >= 0)
                     && (point.x <= scan->image.dx)) {
    /* coordinate transformation from image to camera-frame */
    scan->xs[nr] = point.x - scan->image.dx/2;
    scan->ys[nr] = -point.y + scan->image.dy/2;
  } else {
    printf("ERROR: Unauthorized x- or y-coordinate value!\n");
    scan->xs[nr] = point.x - scan->image.dx/2;
    scan->ys[nr] = -point.y + scan->image.dy/2;
    scan->condition_nr[nr] = -3;
    ok = 0;
  }


  return ok;
} /* End of add_point() */
/*------------------------------------------------------------------*/

/* **************** Definition of ISEF_SOURCE *********************/
/*                                                                 */
/* This function generates an array of pixels starting from       */
/* scan->start with length scan->pathlength in scan->direction and*/
/* puts it in scan->sourceline.                                    */
/* It further adjusts the starting position for the next scanline.*/
/*******************************************************************/
void isef_source (SCAN_DATA *scan){
  int i, pt;
  float *startaddress;
  float *sourceline;

  startaddress = scan->image.start + scan->start.x +
                 scan->start.y * scan->image.dx;
  sourceline = scan->sourceline;
  switch (scan->direction) {
  case 0:
    for (i=0; i < scan->pathlength; i++)
      sourceline[i] = startaddress[i];
    scan->start.y -= scan->delta;          /* for next cycle */
    break;
  case 1:
    pt = scan->pathlength/1.4;
    for (i=0; i < pt; i++)
      sourceline[i] = startaddress[i + i*scan->image.dx];
    scan->start.x += scan->delta;
    scan->start.y -= scan->delta;
    break;
  case 2:
    for (i=0; i < scan->pathlength; i++)
      sourceline[i] = startaddress[i*scan->image.dx];
    scan->start.x += scan->delta;
    break;
  case 3:
    pt = scan->pathlength/1.4;
    for (i=0; i < pt; i++)
      sourceline[i] = startaddress[i - i*scan->image.dx];
    scan->start.y -= scan->delta;
    scan->start.x -= scan->delta;
    break;
  default:
    break;
  }
} /* End of isef_source() */


/*------------------------------------------------------------------*/

/* ************** Definition of GLOBAL_SCAN ******************** */
/*                                                               */
/* The positions of valid edge-points on scan lines are saved in */
/* the (scan->xs, scan->ys) arrays. If no valid edge is found,   */
/* the condition_nr will indicate the type of error (1 = ok):    */
/* -->  0: global error : scan window out of image;              */
/* --> -1: black line;                                           */
/* --> -2: white line;                                           */
/* --> -3: edge point lies out of image (added by add_point);    */
/* --> -4: edge found but very close to scan window;             */
/* --> -5: current edge point way off last one (by smooth_scan); */
/* The function itself returns                                   */
/*      1 if all lines scanned ok;                               */
/*      0 if one of the line scans failed.                       */
/*******************************************************************/
int global_scan(SCAN_DATA *scan, float *work_sp){
  int i, ok, color, nr, xstart, ystart;
  float xr;
  POINT edge;
  int len;

  scan->nr_of_points = 0;
  ok = 1;
  len = scan->pathlength;
  if (scan->direction == 1 || scan->direction == 3) len /= 1.4;
  if (calc_startpixel(scan) == 0){
    /* start point lies out of the image window */
    scan->condition_nr[0] = 0;
    printf("Error : Scan window lies out of the image.");
    ok = 0;
  }else{                                 /* OK, we can start */
    for(i = 0; i < scan->nr_of_scanlines; i++){
      xstart = scan->start.x;
      ystart = scan->start.y;
      isef_source(scan);
      scan->condition_nr[i] = edge_present(scan->sourceline, len);
      if (scan->condition_nr[i] == 1){
        xr = iseffil(scan->sourceline, len, work_sp);
        if (xr < 2 || xr > (len-3)){
          ok = 0;
          scan->condition_nr[i] = -4;
        }else{
          switch (scan->direction) {
          case 0:
            edge.x = xstart + xr + 0.5;
            edge.y = ystart + 0.5;


            break;
          case 2:
            edge.y = ystart + xr + 0.5;
            edge.x = xstart + 0.5;
            break;
          case 1:
            edge.x = xstart + xr + 0.5;
            edge.y = ystart + xr + 0.5;
            break;
          case 3:
            edge.x = xstart + xr + 0.5;
            edge.y = ystart - xr + 0.5;
            break;
          default:
            printf("Bug: Faulty direction: Should never happen!");
            break;
          }
          if (add_point(edge, scan) == 0) ok = 0;
        }
      }else{
        ok = 0;
      }
      scan->nr_of_points += 1;
    }
  }
  return ok;
} /* End of global_scan() */
/*------------------------------------------------------------------*/

/* ************ Definition of SMOOTH_SCANDATA ****************** */
/*                                                               */
/* This function evaluates the scanned edge points:              */
/* If a point lies too far from the previous one,                */
/* this point is marked: condition = -5.                         */
/*******************************************************************/
int smooth_scandata(SCAN_DATA *scan){
  int i, ok = 1, comp_nr = 0;
  float dist, max_dist = 36.0;     /* max_dist is squared !!! */
                                   /* Hence, 6 pixels.        */
  while ((scan->condition_nr[comp_nr] != 1) &&
         (comp_nr < (scan->nr_of_scanlines-1))){
    comp_nr += 1;
  }
  for(i = comp_nr+1; i < scan->nr_of_scanlines; i++){
    if (scan->condition_nr[i] == 1){
      dist = (scan->xs[i]-scan->xs[comp_nr])*(scan->xs[i]-scan->xs[comp_nr])+
             (scan->ys[i]-scan->ys[comp_nr])*(scan->ys[i]-scan->ys[comp_nr]);


      if (dist > max_dist){
        scan->condition_nr[i] = -5;
        ok = 0;
      }else{
        comp_nr = i;
      }
    }
  }
  return ok;
} /* end of smooth_scandata() */
/*------------------------------------------------------------------*/

/* ************* Definition of EVALUATE_IMG ******************** */
/*                                                               */
/* Evaluates the images, scans for edge points and corners.      */
/* The function itself returns                                   */
/*      1 if all lines scanned ok                                */
/*      0 no corner -- error (window or point out of image)      */
/*     -1 if inner corner expected                               */
/*     -2 if outer corner expected                               */
/*     -3 if image is completely black                           */
/*     -4 if image is completely white                           */
/*******************************************************************/
int evaluate_img(SCAN_DATA *scan, float *work_sp){
  int i, ok, t, sc_nr, scan_ok, smooth_ok;
  int lastl, last_correct_line_nr;

  for (i=0; i < scan->nr_of_scanlines; i++){
    scan->xs[i] = 0.0;
    scan->ys[i] = 0.0;
    scan->condition_nr[i] = 1;
  }
  ok = 1;                                /* is return value */
  scan_ok   = global_scan(scan, work_sp);
  smooth_ok = smooth_scandata(scan);
  if ((scan_ok == 0) || (smooth_ok == 0)){
    /* some lines scanned false ...             */
    /* or some points lie way off previous ones */
    t = 0;
    for (i=0; i < scan->nr_of_scanlines; i++){
      sc_nr = scan->condition_nr[i];
      if (sc_nr == 0 || sc_nr == -3 || sc_nr == -4) ok = 0;
      /* Global error ? */
      if (sc_nr == scan->condition_nr[0]) t = t+1;
      /* Mark: scan->condition_nr[0] can never be = -5 */
    }
    if (ok == 1){
      if (t == scan->nr_of_scanlines){


        ok = scan->condition_nr[0] - 2;
        /* image is completely black or white */
      }else{    /* So no global error, neither white nor black image */
        ok = 0;
        /* Corner ? */
        lastl = scan->nr_of_scanlines - 1;
        if (scan->condition_nr[lastl] < 0 &&
            scan->condition_nr[0] == 1 ){
          /* So, first line with edge, last line without or */
          /* point way off previous ones                    */
          if (scan->condition_nr[lastl] == -5){
            last_correct_line_nr = 0;
            while (scan->condition_nr[last_correct_line_nr] != -5)
              last_correct_line_nr += 1;
            if (scan->xs[lastl] > scan->xs[last_correct_line_nr]){
              ok = -1;
            }else{
              ok = -2;
            }
          }else{
            ok = scan->condition_nr[lastl];
            /* (colour of last line, should be -1 or -2) */
          }
        }
      }
    }
  }
  return ok;
} /* end of evaluate_img() */
/*******************************************************************/
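To tie the routines together: the main program of section E.3 fills in a SCAN_DATA structure (declared in my_struc.h, not reproduced here) and calls evaluate_img() once per video frame. The fragment below illustrates a typical configuration with horizontal scan lines centred in the image, as set in the reset branch of RUN_TASK.C; the values for delta and nr_of_scanlines are example values of our own, and the assignment of the unpacked image buffer is an assumption:

/* Illustrative fragment only; SCAN_DATA and POINT come from my_struc.h. */
SCAN_DATA scan_data;
int image_condition;

scan_data.image.start     = unpact;      /* unpacked (float) image -- assumption */
scan_data.image.dx        = DX_SIZE;
scan_data.image.dy        = DY_SIZE;
scan_data.direction       = 0;           /* horizontal scan lines        */
scan_data.center.x        = scan_data.image.dx/2;
scan_data.center.y        = scan_data.image.dy/2;
scan_data.pathlength      = 100;         /* pixels per scan line         */
scan_data.delta           = 10;          /* line spacing (example value) */
scan_data.nr_of_scanlines = 10;          /* (example value)              */

image_condition = evaluate_img(&scan_data, work_space);
/* SCAN_OK       : edge points available in scan_data.xs / scan_data.ys */
/* OUTER_CORNER  : an outer corner is expected inside the scan window   */
/* BLACK, WHITE  : no edge found; the contour has to be searched for    */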


Appendix F

Technical drawings

F.1 Sinusoidal contour

The first test object is a 30 mm thick PVC plate. One of its sides has a sine-like profile as shown in figure F.1. The trajectory of the milling tool, with a radius of 12.5 mm, is a perfect sine. This means that the 'peaks' (curves 1 and 3) of the test object are curved more sharply than the 'valley' (curve 2). The sine has an amplitude of 75 mm and a period of 300 mm. The colour of the test object is dark grey, making it clearly distinguishable from a bright surrounding.

Figure F.1: Test object with sinusoidal contour, scale 1:4
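That the peaks are sharper than the valley follows directly from the geometry: a sine y = A sin(2*pi*x/lambda) with A = 75 mm and lambda = 300 mm has a radius of curvature of 1/(A*(2*pi/lambda)^2), about 30.4 mm, at its extrema; offsetting the tool-centre path by the 12.5 mm tool radius then leaves roughly 18 mm at the peaks and 43 mm in the valley. A short numerical check (our own sketch, not part of the thesis software):

#include <stdio.h>

int main(void)
{
    double pi     = 3.14159265358979;
    double A      = 75.0;                /* sine amplitude [mm]      */
    double lambda = 300.0;               /* sine period    [mm]      */
    double r_tool = 12.5;                /* milling tool radius [mm] */
    double w      = 2.0*pi/lambda;       /* spatial frequency        */

    double R_path = 1.0/(A*w*w);         /* tool path radius at an extremum */
    printf("tool path   : R = %5.1f mm\n", R_path);           /* ~30.4 mm */
    printf("peaks (1,3) : R = %5.1f mm\n", R_path - r_tool);   /* ~17.9 mm */
    printf("valley (2)  : R = %5.1f mm\n", R_path + r_tool);   /* ~42.9 mm */
    return 0;
}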


F.2 ‘Constant-curved’ contour

The second test object has a piecewise constantly curved contour. It thus consists of straight lines and arc-shaped segments. Using a constantly curved contour allows a verification of the correctness of the feedforward calculation. To give a good contrast with the environment, the top layer of the object is painted black. Figure F.2 shows the exact dimensions of this object.

Figure F.2: Piecewise constant curved test object, scale 1:6
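On such a contour the feedforward of RUN_TASK.C is easy to predict, which is what makes the object suitable for verification: on an arc of radius R tracked at tangential velocity v, the angular feedforward ff_vel[5] = kappa*nominal_vel should settle at v/R, and it should drop to zero on the straight segments. A small numerical illustration with assumed values (the real radii and velocity follow from figure F.2 and the experiments, not from this sketch):

#include <stdio.h>

int main(void)
{
    double v   = 20.0;                   /* assumed tangential velocity [mm/s] */
    double R[] = { 50.0, 100.0 };        /* assumed arc radii [mm]             */
    int i;

    for (i = 0; i < 2; i++) {
        double kappa = 1.0/R[i];         /* curvature [1/mm] */
        printf("R = %5.1f mm  ->  ff_vel[5] = %.3f rad/s\n", R[i], kappa*v);
    }
    printf("straight segment  ->  ff_vel[5] = 0\n");
    return 0;
}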

F.3 Camera mounting - ‘ahead’

Figure F.3 gives the most important distances of the 'ahead' mounted camera. The vertical distance (of 120 mm) is adjustable. The exact location of the focal point, being the origin of the camera frame w.r.t. the end effector, has to follow from calibration. A calibration result for the camera mounting ahead, used in the experiments of chapter 7, is:

**************************************
f = 6.146
c_x = 255.9 ; c_y = 256.0

vision_sensor_frame :
R = [ -0.01923   0.99979  -0.00634
       0.99981   0.01928   0.00173
       0.00185  -0.00630  -0.99997 ]


T = [  0.51178e+002
      -0.01877e+002
       2.18613e+002 ]

**************************************
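Assuming the usual convention that R and T express the orientation and origin of the camera (vision sensor) frame w.r.t. the end-effector frame, a point measured in camera coordinates maps to end-effector coordinates as p_ee = R p_cam + T (in mm). The sketch below applies this mapping with the numbers printed above; the convention itself is our assumption and is not stated in the calibration output:

#include <stdio.h>

int main(void)
{
    /* Calibration result for the 'ahead' camera mounting (see above). */
    double R[3][3] = { { -0.01923,  0.99979, -0.00634 },
                       {  0.99981,  0.01928,  0.00173 },
                       {  0.00185, -0.00630, -0.99997 } };
    double T[3]    = {  51.178,    -1.877,   218.613 };      /* [mm] */

    double p_cam[3] = { 0.0, 0.0, 0.0 };      /* e.g. the camera frame origin */
    double p_ee[3];
    int i, j;

    for (i = 0; i < 3; i++) {                 /* p_ee = R * p_cam + T */
        p_ee[i] = T[i];
        for (j = 0; j < 3; j++)
            p_ee[i] += R[i][j] * p_cam[j];
    }
    printf("p_ee = [%8.3f %8.3f %8.3f] mm\n", p_ee[0], p_ee[1], p_ee[2]);
    return 0;
}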

Figure F.3: Camera mounting ‘ahead’


Figure F.4: Technical drawing of lateral camera mounting

F.4 Camera mounting - ‘lateral’

Figures F.4 and F.5 give a 3D view and the distances of the laterally mounted camera. The exact location of the origin of the camera frame follows from calibration.

A calibration result for the lateral camera mounting, used in the


third experiment of chapter 8, is:

**************************************
f = 6.192
c_x = 255.4 ; c_y = 256.9

vision_sensor_frame :
R = [ -4.558e-002   9.989e-001   6.706e-003
       9.987e-001   4.571e-002  -2.079e-002
      -2.108e-002   5.750e-003  -9.998e-001 ]

T = [  6.500e-002
      -1.467e+002
       4.560e+001 ]

**************************************

Figure F.5: 3D view of lateral camera mounting
