
Object Re-grasping with Dexterous Robotic Hands

Ana Carolina Grilo da Silva Januário

Thesis to obtain the Master of Science Degree in

Electrical and Computer Engineering

Supervisor(s): Prof. Alexandre José Malheiro Bernardino

Examination Committee

Chairperson: Prof. João Fernando Cardoso Silva Sequeira
Supervisor: Prof. Alexandre José Malheiro Bernardino

Member of the Committee: Prof. Rodrigo Martins de Matos Ventura

December 2015


Resumo

Esta tese estuda a manipulação em mãos robóticas destras, em particular, a habilidade de alterar o tipo de agarre de um objecto enquanto este está seguro na mesma mão. Esta habilidade é bastante importante visto que, para o desempenho de tarefas com o objecto, é importante alterar a forma como o objecto é agarrado pela mão. Por exemplo, quando agarramos uma caneta com o intuito de escrever, normalmente temos que manipulá-la, alterando para a pose correcta de agarre para escrever. É um problema desafiante visto que mover os dedos pode alterar algumas propriedades de estabilidade.

Tratamos este problema através da abordagem de reposicionamento dos dedos no objecto. Dado um agarre inicial, movem-se os dedos sequencialmente, de um ponto na superfície do objecto para outro, alcançando a posição final desejada. É um problema desafiante na medida em que, ao remover um dedo de um agarre estável, podemos alterar alguma propriedade de estabilidade fazendo o objecto cair. Desse modo, desenvolvemos um algoritmo de planeamento que escolhe a sequência de reposicionamento dos dedos mantendo a estabilidade durante todos os passos de reagarre.

A implementação deste algoritmo é feita utilizando as ferramentas ROS-Gazebo-MoveIt. Este conjunto engloba um ambiente de simulação robótica, detecção de (auto-)colisão e planeamento de movimentos. Assim, os algoritmos desenvolvidos podem ser adaptados a outros robôs.

São apresentados resultados em simulação para a mão do robô iCub. Assumimos que um método apropriado de alcance ao objecto coloca a mão robótica perto do objecto com uma pose conveniente para o agarre. Assim, fazemos o planeamento de movimentos, evitando colisões, para que os dedos completem o primeiro agarre. De seguida, adquirimos e processamos a informação relativa aos contactos através de sensores tácteis para então computar as propriedades de estabilidade do agarre (teste de fecho de força). Depois, começamos por testar que contactos podem ser removidos sem perder estabilidade. Finalmente, procedemos ao reposicionamento dos dedos, mantendo um fecho de força, até que o agarre final seja alcançado. No final, avaliamos a nossa abordagem em termos de propriedades de estabilidade ao longo da sequência de reagarre, e da robustez relativa a pequenas variações na forma dos objectos.

Palavras-chave: reagarre, reposicionamento de dedos, fecho de força, mão destra, ROS, Gazebo, detecção de colisões


Abstract

This thesis addresses in-hand manipulation for dexterous robotic hands, in particular the ability to change the grasp type on an object while holding it. This is an important skill, since for the execution

of many tasks it is important to change the way the object is grasped in the hand. For instance, when

we grasp a pen to write, typically we have to manipulate it in the hand to place it in the right pose for

writing. This is a very challenging problem because moving the fingers on the object may change its

stability properties.

We address this problem via a finger gaiting approach. Given an initial grasp, the fingers are moved in sequence, from one location to another on the surface of the object, to achieve the desired final position.

This is a challenging problem because removing one finger from the surface of the object may change

its stability properties and make the object fall. Therefore, we develop a planning algorithm that chooses

the best sequence of finger gaits to maintain stability during the re-grasping steps.

We have implemented the algorithm using the pipeline ROS-Gazebo-MoveIt. This pipeline already

includes a generic robot simulation environment, (self-)collision detection, and motion planning. Thus,

the developed algorithms could be adapted to other robots.

Results are presented in simulation for the iCub robot hand. We assume that an appropriate reaching

method has put the robot hand close to an object in a convenient pose to grasp. Then, we do the motion

planning, with collision avoidance, for the fingers to accomplish the first grasp. Next, we acquire and

process the contact information from the tactile sensors so we can compute the stability properties of

the grasp (force closure test). Then, we start testing which contacts can be removed without losing stability. Finally, we proceed to reposition the fingers, keeping a force closure grasp, until the final grasp is reached. We evaluate our approach in terms of the stability properties along the re-grasping sequence and of the robustness to small variations in object shape.

Keywords: re-grasp, finger gaiting, force closure, dexterous hand, ROS, Gazebo, collision

avoidance.


Contents

Resumo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1 Introduction 1

1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

1.2 Topic Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.3 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.4 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.5 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

1.6 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Background 5

2.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.1.1 Grasp Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.1.2 Force Closure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.1.3 Collision Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.1.4 Motion Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

2.2 State of The Art . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2.2.1 Grasping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2.2.2 Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

3 Outline of the Proposed Algorithm 31

3.1 Finger Gaiting Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

4 Implementation 35

4.1 ROS Development Platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

4.1.1 Brief Explanation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

4.1.2 Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4.2 Gazebo Simulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38


4.2.1 Brief Explanation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

4.2.2 The best simulator for the iCub humanoid robot . . . . . . . . . . . . . . . . . . . . 38

4.2.3 Gazebo and ROS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

4.3 Hand Model for Gazebo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

4.4 Contact Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

4.5 Force Closure Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

4.6 Motion Planning and MoveIt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

4.6.1 Reaching the Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

5 Results 49

5.1 Problem Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

5.2 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

5.3 Reaching the First Grasp Pose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

5.4 In-hand Manipulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

5.5 Force Closure Computation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

5.6 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

6 Conclusions 55

6.1 Achievements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

6.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

Bibliography 57


List of Tables

2.1 Components transmitted and not transmitted through the contact in each contact model. . 7

2.2 Table with the most common contact models. Adapted from [4]. . . . . . . . . . . . . . . . 8

4.1 Most important features for a simulator. Table from [38]. . . . . . . . . . . . . . . . . . . . 39

4.2 Most important criteria for choosing a simulator. Table from [38]. . . . . . . . . . . . . . . 39

4.3 Set of time samples from the topic from which we acquire contact information. Notice the inconsistency of the information over time, even without any motion of the fingers during the acquisition. . 45


List of Figures

1.1 Hand model on Gazebo. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2.1 Two examples of friction cone. The example on the left is a friction cone with four itera-

tions. The example on the right has six iterations. Image source [4]. . . . . . . . . . . . . 9

2.2 Example of a grasp with two fingers of a box. Image source [4]. . . . . . . . . . . . . . . . 9

2.3 A set of five wrenches represented in the wrench space (abscissa- force; ordinate-torque).

Image source [1]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

2.4 a)AABB , b)OBB. Image source [10] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.5 Example of kIOS applied to a yellow object. The blue curves are the 3 spheres that originate the bounding volume (in red). Image source [12]. . . . . . . . . . . . . . . . . . . . 13

2.6 a) Two objects in collision. b) Polytope and points resulting from the Minkowski Difference. 14

2.7 a) Two objects not in collision. b) Polytope and points resulting from the Minkowski Difference. 15

2.8 Example of EPA with 2 iterations: a)First edge closest to origin selected, b)First edge

closest to origin expanded. Second edge closest to origin selected, c)Second edge clos-

est to origin expanded. A Edge of A − B polytope is the closest to the origin. Expansion

terminated. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.9 a)Overlapping objects seen from xx axis, b)Non-overlapping objects seen from yy axis,

c)Non-overlapping objects seen from zz axis, d)Set of pairs of objects: First contains pairs

which overlapping on xx axis; Second contains pairs which overlapping on xx and yy axis;

Third contains pairs which overlapping on xx, yy and zz axis. . . . . . . . . . . . . . . . . 17

2.10 a)Overlapping objects seen from xx axis, b)Overlapping objects seen from yy axis, c)Overlapping

objects seen from zz axis, d)Set of pairs of objects: First contains pairs which overlapping

on xx axis; Second contains pairs which overlapping on xx and yy axis; Third contains

pairs which overlapping on xx, yy and zz axis. . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.11 Example of two SV and their collision. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.12 a)BVH result of an articulated object. b) BVH based on AABB(see 2.1.3.1). c) BVH based

on OBB(see 2.1.3.1). Dashed borders represent BV boundary. Continuous lines mean

original object boundaries. Adapted image from [19] . . . . . . . . . . . . . . . . . . . . . 20

2.13 a)BVH of object A ; b)BVH of object B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.14 Resulting BVTT of objects in figure 2.13. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.15 Example of KPIECE discretization with three levels. Image from [22] . . . . . . . . . . . . 23


2.16 Example of RRT method application without final state specification. The image repre-

sents the evolution of the tree during the method from the left to the right. Image from

[23]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

2.17 Example of PRM method application in an environment with some objects (in gray). The graph is composed of gray edges and orange vertices, and the solution is the set of orange edges and vertices. Image from [25] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

2.18 8 different kinds of grasps already implemented. Image from [27]. . . . . . . . . . . . . . . 26

2.19 Example of an object reorientation by finger gaiting. Image from [33]. . . . . . . . . . . . . 28

2.20 Example of a sphere manipulated with finger substitution. Image from [31]. . . . . . . . . 28

2.21 Example picking and dropping an object. Image from [34]. . . . . . . . . . . . . . . . . . . 29

3.1 Flowchart illustrating the proposed algorithm (see text). . . . . . . . . . . . . . . . . . . . 33

4.1 Simple example of topics and nodes interaction (publications and subscriptions). . . . . . 36

4.2 Example of a ROS tf tree. Each edge represents a transform from the parent node to the

child node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

4.3 ROS structure of this algorithm (see text). . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

4.4 Gazebo and ROS architecture scheme (see text). Image from http://gazebosim.org/tutorials/?tut=ros_control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

4.5 Minimum and maximum angles of the model hand joints (see text). . . . . . . . . . . . . . 41

4.6 Index Finger touching an object (see text). . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

4.7 Metric Results using different levels of discretization. . . . . . . . . . . . . . . . . . . . . . 45

5.1 Objects used in this algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

5.2 The Version of Parallel extension grasp used in this work. . . . . . . . . . . . . . . . . . . 50

5.3 Index Ring to Middle grasp (IR2M). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

5.4 a) The hand and the object in the simulation environment. b) First grasp pose. . . . . . . 50

5.5 4 possible re-grasp sequences tested with the algorithm. . . . . . . . . . . . . . . . . . . 51

5.6 4 possible re-grasp sequences tested with the algorithm. . . . . . . . . . . . . . . . . . . 52

5.7 Metric Results using mean calculation, in blue, and using all contact points, in orange. . . 53


Chapter 1

Introduction

This chapter presents the motivation for this work. Next, it gives an overview of the topic and of the required state of the art, the objectives to achieve and, finally, the thesis outline.

1.1 Motivation

Robotics is an expanding area of knowledge, especially when it comes to grasping with dexterous hands, because hands are such a complex part of the human body. When imagining our future, we would like robots to do some things that we, humans, do not appreciate so much, like housework. Most objects we interact with on a daily basis were built for humans, so anthropomorphic hands are the natural manipulation devices for such objects. This kind of hand has many degrees of freedom, which gives it more ability than a simple gripper. This complexity also makes the control of these devices harder.

For robot automation to become a reality, the robot must be able to pick an object and change the pose of the fingers so as to grasp it in a way that allows using the object in some task. The process associated with that change between grasps is re-grasping. In some manipulation cases, re-grasping with rolling and sliding is considered. Rolling is the phenomenon in which the contact is not removed, but the contact surface changes by rolling the finger on the object surface: the contact area of the finger changes, as does the zone of the object where the contact is applied. The sliding phenomenon happens when the friction is low and the fingers can slip over the object surface. There is another technique, named finger gaiting, that consists in lifting the finger, removing the current contact, and repositioning the finger in a new pose on the object surface. We adopt the finger gaiting technique mainly because with finger gaiting we can reach the same poses reached by finger rolling and finger sliding. We will not consider the rolling and sliding phenomena.

The main problem of this work is to provide the necessary information so that in-hand manipulation becomes possible to accomplish, always ensuring that, during the finger gaiting, the object does not fall. We need to move the fingers from one position to another while avoiding undesired collision situations. So we do motion planning, with collision avoidance, and only touch the object at the desired pose for the final grasp.


The model that will be used is based on the iCub robot hand (http://www.icub.org/). However, the model is adapted to have one controller for each joint, which makes it different from the iCub robotic hand, since in the real hand some joints are coupled and share the same controller. The hand model has nineteen joints: each finger has one joint for abduction (with the exception of the middle finger), one proximal, one intermediate and one distal joint. Figure 1.1 shows the hand model used in this work.

Figure 1.1: Hand model on Gazebo.

1.2 Topic Overview

Just as people grab objects, anthropomorphic robot hands should grab objects in the same way. In an almost automatic way, we pick an object with enough stability so that the object does not drop if some external force is applied. This stability is verified using a test named the Force Closure Test [1]. This technique is essential in this work since we need grasps to be stable, so that the object is not thrown in some direction by some uncompensated force.

A grasp is defined as a set of contact poses between the fingers and the object surface, the corresponding applied forces and the material properties, like friction coefficients. The problem to solve is, given a certain grasp, to identify which fingers are unnecessary for a stable grasp and to move those fingers to other positions to reach some other, previously selected, grasp.

1.3 Objectives

The main objective of this work is to develop a technique for in-hand object re-grasping through finger gaiting. The basis of this technique is to remove the current contact and then reposition the finger on a different part of the object. First, we need to identify the expendable fingers of a grasp, so that it becomes possible to move these fingers to their final positions without destabilising the grasp.

We propose to combine techniques already developed in similar areas to accomplish the re-grasping method, such as some present in ROS (Robot Operating System), like the MoveIt package, which contains motion planning libraries such as OMPL (Open Motion Planning Library) [2] and a collision detection library, FCL (Flexible Collision Library) [3]. These libraries will be explained in chapter 2. Another technique to be studied is the force closure test, also studied in [1] and [4].

More specifically, the manipulation method we present consists in holding the object and moving some fingers to other poses to reach the final grasp. That final grasp has the purpose of allowing the robot to perform a task while holding the object with that configuration.

1.4 Assumptions

We have to take into account the object characteristics required for this problem. Objects need to be:

• Rigid, so they can be simulated using standard tools;

• Light, so the robotic hand is strong enough to compensate any external forces such as gravity and inertia;

• Of high friction coefficient, so they do not slip during the manipulation.

We assume that the robotic hand controller is able to compensate external disturbances and keep

the object static in the hand. The control for rejecting disturbances will not be developed in this work.

1.5 Contributions

With this work, we aim to contribute a safe framework for finger gaiting with an anthropomorphic robotic hand in 3 dimensions. As previously mentioned, we will use a hand model similar to the iCub robotic hand, enhancing the re-grasping applications of the robot.

We will use the collision detection features of MoveIt to avoid collisions while removing the current contact and to generate collisions when creating the desired contact. The contact model to be used and implemented is the soft contact model, since it is a more realistic model to represent contacts.

To develop the method, we will use the tools available in ROS, the Robot Operating System, and as the simulation tool we will use the Gazebo simulator.

1.6 Thesis Outline

This thesis is composed of six chapters, organized as follows. In chapter 2 we present the background knowledge we will use to obtain the solution to this problem and make a brief overview of the state of the art in grasping and manipulation.

In chapter 3 we introduce the proposed algorithm to achieve the mentioned goals.

In chapter 4 we introduce the software used, such as ROS, the Gazebo simulator and the hand model, and how to compute contacts between the fingers and the object. We also describe how to use the force closure detection and how to do motion planning for each finger. Finally, we present the structure of the proposed algorithm using the previously explained techniques.

In chapter 5 we present the results of applying the methods, by describing the inputs, outputs and sequence of tasks required to show the algorithm's result.

Finally, chapter 6 concludes our work, discusses some applications and points out some proposals for future work.


Chapter 2

Background

2.1 Concepts

In this section we describe the main concepts required for this work. First, we give some notions of existing grasp models. Grasp stability analysis through force closure is introduced in the second subsection. In the third subsection, we describe the notions needed to perform collision detection with the MoveIt package, and some motion planning methods are explained.

2.1.1 Grasp Model

By definition, a grasp is a set of contact poses between the fingers and the object surface, the set of applied forces, and material properties, like the friction coefficient. Such forces consist of linear and angular components.

All human grasps belong to one of the following types:

• Pad - the object is held by the forces between the pads of the fingers and the thumb.

• Palm - the force opposing the fingers comes from the palm.

• Side - the object is held between the thumb and the side of the index finger.

A grasp has some desirable properties:

• The ability to resist any external force. We want that, given a certain set of forces applied on the object, the contact forces generate opposing wrenches and compensate each other (force closure, section 2.1.2).

• The ability to dexterously manipulate and re-grasp the object. We must be able to change the position of some fingers in order to re-grasp the object and perform some task.

2.1.1.1 Human Grasp Choice

Grasps are divided into two categories suggested by Napier (1956) [5]: power and precision. The power category is distinguished by large areas of contact between the surface of the fingers and the object. It is chosen when we want stability and security, but it provides little or no ability to impart motions with the fingers. When sensitivity and dexterity matter, we choose a precision grasp. In precision grasps, the object is held with the tips of the fingers and thumb.

Looking at the tree in diagram 2.1, we can see that the first choice is between power and precision. At the next level, the choice is based on the object geometry and on the task: both task requirements and geometric information are important to select a grasp.

If we choose a precision grasp, the question to ask concerns the shape of the object: is the object prismatic or circular? After answering this question, we move to the next level and pick the type of grasp according to the task to be done.

If we choose a power grasp, the first question is different from the one for precision. Here, we ask whether the object needs to be supported or clamped to sustain forces from all directions. If it needs to be supported, we pick a non-prehensile hook grasp or palmar support. If it needs to be clamped, then it might need a prehensile grasp in which the fingers or the palm confine the object.

[Diagram 2.1 is a tree with the grasp taxonomy. The first split is between Power and Precision grasps. Power grasps divide into Prehensile grasps — Circular (Disk, Sphere), Lateral Pinch, and Prismatic (Light Tool, Heavy Wrap, Large Diameter, Small Diameter, Adducted Thumb, Medium Wrap) — and Non-Prehensile grasps (Hook, Platform Push). Precision grasps divide into Compact, Circular grasps (Disk, Sphere, Tripod) and Long, Prismatic grasps with opposed thumb and finger(s) (Thumb-4 Finger, Thumb-3 Finger, Thumb-2 Finger, Thumb-Index Finger).]

Diagram 2.1: Tree with the taxonomy of manufacturing grasps, adapted from [6].

Napier suggests that objects should be grasped depending first on the task and then on their shape, because with one simple object we can perform multiple tasks. During the course of a single task with a single object, the hand can adopt different grips to adjust to changing force/torque conditions.

For this work we will adopt precision grasps because we need enough degrees of freedom for manipulation tasks and single-handed operation.

2.1.1.2 Analyzing The Contact Model

A contact model [7] describes characteristics like the forces and torques that can be applied to an object and the relative motions of the bodies in contact that remain possible. It depends on the friction coefficients and other material properties of both parts (finger and object).


There are three well-known contact models [8]. Table 2.1 lists the components (contact forces, torques, moments and velocities) that are transmitted through the contact in each of the three models. This table was made based on the information presented in [8].

When the point-contact-without-friction (PwoF) model is used, the friction between the finger and the

object is nearly zero and the contact surface is very small.

The hard finger (HF) model, also known as point-contact-with-friction, is used when there is significant contact friction but the contact patch is small, so that no appreciable friction moment exists.

The third model is the soft finger (SF). It considers that the surface friction and the contact patch are

large enough to generate significant friction forces. A friction moment about the contact normal is also

transmitted. It is the most complete model that allows forces to be applied in a cone (see Friction Cone)

about the surface normal.

In table 2.2, we have some characteristics about the different contact models. The notions of friction

cone and wrenches will be explained in the next subsections.

Table 2.1: Components transmitted and not transmitted through the contact in each contact model.

Component | Point-Contact-Without-Friction | Hard Finger | Soft Finger
Normal Contact Force | Transmitted | Transmitted | Transmitted
Contact Frictional Force | Assumed to be Negligible | Transmitted | Transmitted
Normal Contact Moment | Assumed to be Negligible | Not Transmitted | Transmitted
Contact Frictional Moment | Assumed to be Negligible | Not Transmitted | Transmitted
Normal Translational Velocity | Transmitted | Transmitted | Transmitted
Tangential Translational Velocity | Not Transmitted | Transmitted | Transmitted
Normal Angular Velocity | Not Transmitted | Not Transmitted | Transmitted
Tangential Angular Velocity | Not Transmitted | Not Transmitted | Transmitted

Relative to the contact model, we will adopt an extended version of the soft finger contact model. This version considers not just one contact point but all points belonging to the contact patch, each point being represented with the soft finger contact model.

Now we will introduce the definition of the friction cone mentioned in table 2.2.

2.1.1.3 Friction Cone

With the soft finger contact model, we can apply forces within a cone. This cone is named the friction cone [4], [1] and is built aligned with the normal to the contact surface.

The model chosen, soft finger, is a more realistic model and allows forces to be applied within the friction cone and torques about the contact normal. The friction model to be used is Coulomb's friction model [4]. The formula for Coulomb's friction model with the soft finger contact model is:

FC_{c_i} = \left\{ f \in \mathbb{R}^4 : \sqrt{f_1^2 + f_2^2} \le \mu_f f_3, \; f_3 \ge 0, \; |f_4| \le \gamma f_3 \right\}    (2.1)

where γ (γ ≥ 0) is the torsional friction coefficient, f_4 is the torque magnitude along the contact normal direction, f_1, f_2 and f_3 represent the force along x, y and z respectively, and μ_f is the static coefficient of friction.


Table 2.2: Table with the most common contact models. Adapted from [4].

Frictionless point contact
  Features: allows forces to be applied along the contact normal.
  Wrench basis: \begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}^T
  Friction cone: f_1 \ge 0

Point Contact with Friction (Hard Finger)
  Features: allows forces to be applied in a cone centered on the contact normal.
  Wrench basis: \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}
  Friction cone: \sqrt{f_1^2 + f_2^2} \le \mu_f f_3, \; f_3 \ge 0

Soft Finger
  Features: allows forces to be applied in a cone centered on the contact normal; allows torques about the contact normal.
  Wrench basis: \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
  Friction cone: \sqrt{f_1^2 + f_2^2} \le \mu_f f_3, \; f_3 \ge 0, \; |f_4| \le \gamma f_3

In table 2.2 we can see the main differences between the most common contact models; to know more about contact models, consult [4].

Looking at table 2.2, in the row corresponding to the soft finger contact model, we have a wrench basis (matrix) whose first three rows, when multiplied by the vector of forces and torques, relate to the forces along the three axes x, y and z, and whose last three rows correspond to the three components of the torque. Since this model allows torque only about the contact normal, the z coordinate, it has only one non-null row among the last three, corresponding to the torque about the z axis.

To analyze a grasp from the set of wrenches that represents it, we sample the friction cone over its outer limits and obtain equation (2.2), with the normal force being of unit magnitude, f_3 = 1. Hereupon, equation (2.2) provides the base framework for the grasp formulation.

w^k_{c_i} =
\begin{bmatrix}
\mu_f \sin(\theta_k) & \mu_f \sin(\theta_k) \\
\mu_f \cos(\theta_k) & \mu_f \cos(\theta_k) \\
1 & 1 \\
0 & 0 \\
0 & 0 \\
\gamma & -\gamma
\end{bmatrix}, \quad \theta_k \in [0, 2\pi]    (2.2)

In figure 2.1 there are two examples of friction cone approximations, obtained by sampling the base of the cone with a discretization level of four in the example on the left and of eight in the example on the right. This discretization of the friction cone accelerates the process, because it approximates the cone by a pyramid: instead of computing the whole perimeter of the cone's base, only a polygonal base is computed.
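As an illustration of this sampling (not part of the thesis implementation), a minimal Python/NumPy sketch is given below; the function name and the number of samples K are assumptions of the example, and the normal force is taken as unitary (f_3 = 1) as in equation (2.2):

    import numpy as np

    def sampled_friction_cone(mu_f, gamma, K=8):
        """Sample the soft-finger friction cone of eq. (2.2) with f3 = 1.

        Returns a (2*K, 6) array; each row is a wrench
        [fx, fy, fz, tx, ty, tz] on the boundary of the discretized cone."""
        wrenches = []
        for k in range(K):
            theta = 2.0 * np.pi * k / K
            fx = mu_f * np.sin(theta)
            fy = mu_f * np.cos(theta)
            # one edge with +gamma and one with -gamma torsional moment
            wrenches.append([fx, fy, 1.0, 0.0, 0.0, +gamma])
            wrenches.append([fx, fy, 1.0, 0.0, 0.0, -gamma])
        return np.array(wrenches)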

Figure 2.2 shows an example of a grasp of a box with two fingers. The z coordinate of each finger frame (represented with a cylinder) is the direction in which the friction cone will be computed.


Figure 2.1: Two examples of friction cone. The example on the left is a friction cone with four iterations.The example on the right has six iterations. Image source [4].

Figure 2.2: Example of a grasp with two fingers of a box. Image source [4].

2.1.1.4 Grasp Representation

Having already introduced the notion of grasp and some of its varieties, as well as some concepts about contact models and the friction cone, we now show how a grasp is mathematically represented.

All contacts are defined as sets of wrenches (forces and torques), positions and normals. We just need to transform all coordinates into a common reference frame so we can treat the data correctly.

The set of all transformed wrenches defines a grasp and is designated the grasp map.

The grasp map is represented by the matrix G [1]:

G_i =
\begin{bmatrix}
R_{c_i} & 0 \\
[p_{c_i}]_\times R_{c_i} & R_{c_i}
\end{bmatrix}
\omega_{c_i}, \quad i \in \{1, \dots, n\}.    (2.3)

where R_{c_i} is the rotation matrix from one reference frame to the other and [p_{c_i}]_\times is the skew-symmetric matrix of p_{c_i}, which is defined by:

[p]_\times =
\begin{bmatrix}
0 & -p_3 & p_2 \\
p_3 & 0 & -p_1 \\
-p_2 & p_1 & 0
\end{bmatrix},    (2.4)


and n is the total number of contacts.

One important note is that the choice of the reference frame does not influence the final result, which allows us to use any frame to transform the coordinates. Nevertheless, we will use the palm of the hand as the global reference frame.

The resulting grasp map is:

G = \begin{bmatrix} G_1 & \cdots & G_n \end{bmatrix}.    (2.5)

With the grasp map defined, we have the tools to compute the force closure test and validate the stability of a grasp.
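For illustration only, the following Python/NumPy sketch builds the transform of equation (2.3) and stacks the partial maps as in equation (2.5); the helper names are assumptions of the example, and the map is applied here directly to 6-dimensional contact wrenches, leaving the wrench-basis selection of the contact model aside:

    import numpy as np

    def skew(p):
        """Skew-symmetric matrix [p]_x of eq. (2.4)."""
        return np.array([[0.0, -p[2], p[1]],
                         [p[2], 0.0, -p[0]],
                         [-p[1], p[0], 0.0]])

    def partial_grasp_map(R_ci, p_ci):
        """6x6 transform of eq. (2.3) mapping a contact wrench to the palm frame."""
        R_ci = np.asarray(R_ci, dtype=float)
        G_i = np.zeros((6, 6))
        G_i[:3, :3] = R_ci
        G_i[3:, :3] = skew(p_ci) @ R_ci
        G_i[3:, 3:] = R_ci
        return G_i

    def grasp_map(rotations, positions):
        """Stack the partial maps horizontally as in eq. (2.5)."""
        return np.hstack([partial_grasp_map(R, p)
                          for R, p in zip(rotations, positions)])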

2.1.2 Force Closure

Force closure is a binary test to evaluate grasp stability. A grasp is stable, and thus a force closure grasp, if it can resist any applied wrench. Mathematically speaking, a grasp is force closure if, for any external wrench w_e applied to the object, there is a combination of contact forces f_c such that:

G f_c = -w_e,    (2.6)

where f_c is the vector stacking the forces of each contact, each lying in its friction cone:

f_c =
\begin{bmatrix}
f_{c_1} \\
f_{c_2} \\
\vdots \\
f_{c_n}
\end{bmatrix}, \quad f_{c_i} \in FC_{c_i}    (2.7)

and n is the number of contacts that compose the grasp.

We can easily evaluate if a grasp is a force closure grasp by analysing the convex hull of the grasp

map, ConvexHull(G). The convex hull of G is the minimum convex region spanned by G on the wrench

space.

Figure 2.3 represents the wrench space (force, torque). Only 2 dimensions are shown for illustration; if all force components were included together with the torque, it would be a 4D space. With the wrench vectors corresponding to each contact placed in this space, the minimum convex region that contains all wrenches representing the grasp (in green) is the convex hull of the set of wrenches. In this example, we can say that the grasp is force closure since the origin is inside the convex hull.

Since each contact between one finger and the object comprises many contact points, the convex hull of all those points generates a patch that is the contact surface. We will use all points of the contact surface to obtain the force closure result.
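A minimal sketch of this test is shown below, assuming the contact wrenches have already been mapped to the common frame and sampled over their friction cones; it uses SciPy's Qhull wrapper and requires enough non-degenerate wrenches to build a full-dimensional hull (illustration only, not the thesis implementation):

    import numpy as np
    from scipy.spatial import ConvexHull

    def is_force_closure(wrenches, eps=1e-9):
        """Check whether the origin of the wrench space lies strictly inside
        the convex hull of the given (n, d) array of contact wrenches."""
        hull = ConvexHull(np.asarray(wrenches, dtype=float))
        # Each facet satisfies normal . x + offset <= 0 for interior points,
        # so the origin is strictly inside iff every offset is negative.
        offsets = hull.equations[:, -1]
        return bool(np.all(offsets < -eps))

The same check works for the two-dimensional illustration of figure 2.3 and for higher-dimensional wrench spaces, as long as the hull is full-dimensional.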


Figure 2.3: A set of five wrenches represented in the wrench space (abscissa- force; ordinate-torque).Image source [1].

2.1.3 Collision Detection

Collision detection is one of the most essential parts of this project. When a finger moves towards the object, it needs to avoid collisions during the motion and, at the same time, we need to promote a collision between the finger and the object to create the contact that grasps the object. For this purpose there is a library called FCL (Flexible Collision Library) [3], available in ROS through the MoveIt package.

2.1.3.1 FCL - Flexible Collision Library

This library implements many algorithms for collision detection and distance computation. To do that, the objects must be represented with simple shapes to accelerate the whole process. In this section we briefly explain the object representation methods, then the techniques to detect collisions and compute distances before any collision happens, and finally hierarchical collision detection.

Object Representation A generic object may have lots of detail and be non-convex, which can slow down its analysis. To deal with this problem, FCL creates simplified representations of objects based on triangle meshes or point clouds. We will not study point clouds, just triangle meshes. To accelerate collision detection and distance computations, each object is approximated by a finite set of convex components named bounding volumes (BV) [9]. By definition, a BV is the smallest volume of a simple geometry, like a sphere, box, cone or cylinder, which contains the object inside. The larger the number of components, the better the approximation to the original shape of the object; at the maximum, we can have one BV for each triangle of the mesh.


There are many types of bounding volumes. Most of FCL's bounding volumes are boxes whose faces, edges and vertices delimit the object; there are also bounding spheres, whose surface delimits the object boundaries, but they are less used. Next, we describe some of the methods used in FCL to represent objects.

AABB - Axis-aligned Bounding Box

With AABB [10], the BV of an object is created by moving its edges, aligned with the world axes, towards the object until they reach the object boundaries, as we can see in figure 2.4a).

Figure 2.4: a)AABB , b)OBB. Image source [10]

One disadvantage appears when we are dealing with rotating objects: at each rotation the bounding volume has to be recomputed, since this representation does not allow rotation. It is better suited to fixed objects or objects with linear motion. One advantage is that collision detection is faster because the representation is simpler.
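For illustration, a minimal sketch of the AABB overlap test follows (boxes described by their minimum and maximum corners; the function name is an assumption of the example):

    import numpy as np

    def aabb_overlap(min_a, max_a, min_b, max_b):
        """Two axis-aligned boxes overlap iff their intervals overlap on every axis."""
        min_a, max_a = np.asarray(min_a), np.asarray(max_a)
        min_b, max_b = np.asarray(min_b), np.asarray(max_b)
        return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))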

OBB - Oriented Bounding Box

When representing objects with OBB [10], the BV of an object is created in a similar way to AABB. After finding the AABB, the box rotates and adjusts its faces, edges and vertices to the object boundaries until it reaches the pose in which the BV volume is smallest, as in figure 2.4b), where we can see that the empty space inside the bounding box is smaller than in 2.4a) due to the applied rotation.

This representation is more recommended for articulated bodies, since it follows the rotation of the object, but as a consequence it is slower than AABB.

kDOP - Discrete Oriented Polytope

The kDOP method [11] is a generalization of AABB in which k is the number of directions along which each pair of planes approaches the object.

kIOS - Intersection Of k Spheres

The kIOS method [12] results from intersecting k spherical surfaces that delimit the object boundaries. The distribution of the spheres is related to the dimensions of the object: the largest dimension is delimited by the first sphere, the second largest dimension is delimited by the second sphere, in case it exists, and so on. The value of k is related to the object dimensions: if the dimensions are similar (the ratio between them smaller than 1.5), then k = 1; if only two dimensions are similar, then k = 3; otherwise, k = 5, as we can see in algorithm 1.

In figure 2.5, since we have a 2-dimensional case, there are only two possible values for k, depending on the similarity between those dimensions. Since the two dimensions of the object are different, k takes the value 3.

Figure 2.5: Example of kIOS applied to a yellow object. The blue curves are the 3 spheres that originate the bounding volume (in red). Image source [12].

Algorithm 1 kIOS algorithm

ratio ← COMPUTE_RATIO(object.dimensions)
if ratio < 1.5 then
    n_sphere ← 1
else
    ratio ← COMPUTE_RATIO(object.smaller_dimensions)
    if ratio < 1.5 then
        n_sphere ← 3
    else
        n_sphere ← 5
    end if
end if
bv ← COMPUTE_IOS(n_sphere, object.dimensions)

Collision Detection in Simple Objects Collision detection is made by comparing the BVs of both objects. To do so, FCL has many techniques, such as those applied to static objects (discrete collision detection), in which the motion of the objects is not taken into account, and others which consider the motion over time (continuous collision detection).

The queries in FCL are: discrete collision detection, penetration depth, collision detection between multiple objects, continuous collision detection and distance computation.


Discrete Collision Detection and Depth Penetration

It consists in detecting collisions between two BVs. We call it discrete because it only detects collisions at one instant of time and not during a full motion. To compute this detection, FCL is based on GJK (Gilbert-Johnson-Keerthi) [13] and EPA (Expanding Polytope Algorithm) [14], both based on the Minkowski Difference [13].

Minkowski Difference

The Minkowski Difference creates a new object by subtracting every point of one object from every point of the other. The resulting object is the convex hull of the set of such resulting points. As we can see in figure 2.6, if the two objects are intersecting, the result of applying the Minkowski Difference contains the origin; if the two objects do not collide, as in figure 2.7, the distance between them is the distance from the resulting object to the origin.

Figure 2.6: a) Two objects in collision. b) Polytope and points resulting from the Minkowski Difference.

GJK - Gilbert-Johnson-Keerthi

GJK is a method based on the Minkowski Difference but, instead of using all points of the Minkowski Difference, it uses the pair of points (a, b), with a ∈ ObjectA and b ∈ ObjectB, closest to each other, which gives the desired result by computing the distance between the origin and the result of the Minkowski Difference, since ‖a − b‖ = d(A, B).

Figure 2.7: a) Two objects not in collision. b) Polytope and points resulting from the Minkowski Difference.

Each iteration constructs a simplex, within the polytope resulting from the Minkowski Difference, that lies nearer the origin than the one built in the previous iteration. In algorithm 2, W is the set of vertices of the simplex of the current iteration and is initialized as empty, and v_k is the point in the simplex nearest the origin, initialized as an arbitrary point of the polytope resulting from the Minkowski Difference. GJK then generates a sequence of simplices using w = s_(A−B)(−v), where s_(A−B) is the support function [13]. The support function of a set maps a vector v to a specific point of that set, called the "support point": the most extreme point of the object in the direction v.
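The support function idea can be sketched as follows for objects represented by the vertices of their convex hulls; this is only an illustration of the concept, not FCL's implementation:

    import numpy as np

    def support(points, v):
        """Support point of a convex point set: the vertex most extreme along v."""
        points = np.asarray(points, dtype=float)
        v = np.asarray(v, dtype=float)
        return points[np.argmax(points @ v)]

    def support_minkowski_difference(points_a, points_b, v):
        """Support point of A - B along v, without building A - B explicitly:
        s_{A-B}(v) = s_A(v) - s_B(-v)."""
        v = np.asarray(v, dtype=float)
        return support(points_a, v) - support(points_b, -v)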

Algorithm 2 GJK algorithm

v ← arbitrary point in C = A − B
W ← ∅
µ ← 0
close ← False
while close == False and v ≠ 0 do
    w ← s_(A−B)(−v)
    δ ← (v · w) / ‖v‖
    µ ← max{µ, δ}
    close ← (‖v‖ − µ ≤ ε)
    if not(close) then
        v ← υ(conv(W ∪ {w}))
        W ← smallest X ⊆ W ∪ {w} such that v ∈ conv(X)
    end if
end while
return ‖v‖

EPA - Expanding Polytope Algorithm

The main purpose of the EPA algorithm [14] is to compute the penetration depth when there is a collision between two objects. To do that, EPA uses the simplex containing the origin resulting from the GJK algorithm.

Geometrically speaking, EPA expands that simplex to obtain the edge of the Minkowski Difference closest to the origin. Since the penetration depth is given by the point of the Minkowski Difference polytope boundary closest to the origin, EPA expands the initial simplex by adding vertices that lie on that boundary, by calling the support function also used in GJK. In each iteration, EPA selects the polytope facet closest to the origin and subdivides it, adding vertices using the support function. Some examples can be found in [14].


Figure 2.8 shows an example of an EPA application, with the initial simplex in purple and the polytope from the Minkowski Difference in green. Algorithm 3 explains how the expansion is done, in which s_polytope(p) is the support function that returns a point on the polytope boundary. The function CalculateDepth(polytope) takes the expanded polytope, finds the closest point on the polytope boundary and returns the distance to the origin and the orientation, which give us the penetration depth and the penetration vector.

Figure 2.8: Example of EPA with 2 iterations: a) First edge closest to the origin selected, b) First edge closest to the origin expanded; second edge closest to the origin selected, c) Second edge closest to the origin expanded. An edge of the A − B polytope is the closest to the origin; expansion terminated.

Algorithm 3 EPA algorithm

polytope ← GJK simplex
closest ← polytope.closest_edge
while closest ∉ (A − B) polytope do
    s_polytope(p)
    polytope.add_vertice(p)
    closest ← polytope.closest_edge
end while
[length_depth, vector_depth] ← CalculateDepth(polytope)
return [length_depth, vector_depth]

Collision Detection Between Multiple Objects

In an environment with many objects, a previous screening that eliminates all pairs of objects that are not in collision becomes mandatory. For that, FCL has the Sweep and Prune algorithm [15].

SaP - Sweep and Prune

This algorithm verifies the existence of collision along one axis at a time. First, SaP tests each pair of objects for overlap along the xx axis and, if they overlap, it puts them into a first list of possibly colliding objects. In the next step, it verifies whether the pairs in the first list also overlap along the yy axis; if so, they are placed in a second list. Finally, SaP tests the remaining pairs in the second list: if they also overlap along the zz axis, it puts them into a final list of pairs of objects that actually collide.
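A minimal sketch of the screening along a single axis is given below, using each object's AABB interval on that axis; the full Sweep and Prune repeats this per axis and keeps only the pairs that survive all three tests (names are assumptions of the example):

    def overlapping_pairs_on_axis(intervals):
        """intervals: list of (object_id, lo, hi) along one axis.
        Sweep the endpoints in order of lower bound and report the pairs
        whose intervals overlap on this axis."""
        events = sorted(intervals, key=lambda item: item[1])  # sort by lower bound
        pairs = []
        active = []  # (object_id, hi) of intervals that may still overlap
        for obj_id, lo, hi in events:
            active = [(i, h) for (i, h) in active if h >= lo]  # prune ended intervals
            pairs.extend((i, obj_id) for (i, _) in active)
            active.append((obj_id, hi))
        return pairs

The pairs returned for the xx axis would then be tested again on the yy and zz axes.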


In figure 2.9 we can see an example of objects that seem to be in collision, but only when viewed along the xx axis. That pair of objects goes into the first list but does not pass to the second list. The example shown in figure 2.10 shows a pair of objects colliding not only along the xx axis but along the yy and zz axes too, so that pair passes through all three lists and is classified as a pair of objects in collision.

Figure 2.9: a)Overlapping objects seen from xx axis, b)Non-overlapping objects seen from yy axis,c)Non-overlapping objects seen from zz axis, d)Set of pairs of objects: First contains pairs which over-lapping on xx axis; Second contains pairs which overlapping on xx and yy axis; Third contains pairswhich overlapping on xx, yy and zz axis.

Figure 2.10: a)Overlapping objects seen from xx axis, b)Overlapping objects seen from yy axis,c)Overlapping objects seen from zz axis, d)Set of pairs of objects: First contains pairs which over-lapping on xx axis; Second contains pairs which overlapping on xx and yy axis; Third contains pairswhich overlapping on xx, yy and zz axis.

Continuous Collision Detection And Distance Computation

With FCL it is also possible to obtain the distance between two objects which are not in collision, using Conservative Advancement [16]. We can also get the time instant at which a collision will happen if the motion of the objects remains unchanged.

CCD - Continuous Collision Detection

Some continuous collision detection algorithms are based on linear motion interpolation between two different object configurations. Those interpolations are reflected in a representation of motion named the swept volume [17].

SV - Swept Volume

The idea of the swept volume is, given the motion of a certain object, to draw that object as a single volume covering its whole movement (see algorithm 4). Figure 2.11 shows an example of two objects in motion and the result of their being swept through their motion; we can also see the moment of collision between them.


Algorithm 4 SV algorithm

SV ← ∅
height ← BV_height
width ← BV_width
depth ← getTrajectory(Obj_motion)
depth ← depth + BV_depth
SV ← getVolume(height, width, depth)
return SV

Figure 2.11: Example of two SV and their collision.
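As a simple illustration of the idea for a translating AABB, the sketch below computes an axis-aligned box that conservatively encloses the volume swept between the start and end poses (assumption-level example, not FCL's swept-volume computation):

    import numpy as np

    def swept_aabb(min_start, max_start, min_end, max_end):
        """Conservative axis-aligned bound of the volume swept by a linearly
        translating AABB: the box enclosing both the start and end boxes."""
        swept_min = np.minimum(np.asarray(min_start), np.asarray(min_end))
        swept_max = np.maximum(np.asarray(max_start), np.asarray(max_end))
        return swept_min, swept_max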

CA - Conservative Advancement

With conservative advancement [16] techniques we are able to estimate the instant of time when a collision is going to happen. Algorithm 5, adapted from [16], shows the pseudocode of the CA algorithm. The leaf nodes refer to bounding volume hierarchies, trees whose nodes are bounding volumes of object parts, which will be explained in the topic Hierarchical Collision Detection. The method computes an upper limit of the time of contact, that is, the instant of time when the collision is going to happen. This computation is made by repeatedly advancing the object through its motion with a ∆t time step.
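The core of the conservative time step can be sketched as follows: if d is the current separation distance and v_max is an upper bound on the relative speed of the closest points, no contact can occur before ∆t = d / v_max, so the motion is advanced repeatedly by such safe steps (illustrative placeholders only; this is not the algorithm of [16]):

    def conservative_advancement(distance_fn, max_rel_speed, t_end=1.0, d_threshold=1e-4):
        """Advance time until the objects are closer than d_threshold or t_end is reached.
        distance_fn(t): separation distance at time t along the given motions.
        max_rel_speed: upper bound on the relative speed of the closest points."""
        t = 0.0
        while t < t_end:
            d = distance_fn(t)
            if d < d_threshold:
                return t               # estimated (upper-bounded) time of contact
            t += d / max_rel_speed     # safe step: no contact can occur earlier
        return None                    # no contact within the motion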

Separation Distance between pairs of objects

With the EPA algorithm we get the penetration depth. Similarly, in distance computation we intend to find how far apart two objects are from each other. Although many methods exist to find that distance, such as [18], FCL relies on EPA and on the Minkowski Difference polytope.

In figure 2.7, the two objects do not collide and so the (A − B) polytope does not contain the origin. For two objects that collide, the penetration depth is the distance from the origin to the closest point of the polytope boundary; for two objects that do not collide, the separation distance is the distance from the origin to the closest point of the boundary.

Hierarchical Collision Detection With hierarchical collision detection, the object is represented through a tree in which each node represents one part of the object; with such a representation, we are able to know more precisely in which part of the object the collision happened. Such a hierarchy is called a bounding volume hierarchy, BVH.


Algorithm 5 Conservative Advancement algorithm

if number of CA iterations > R_max and d_cur < D_threshold then
    w = 1
end if
if n1, n2 are leaf nodes then
    d = Distance(n1, n2)
    if d < d_cur then
        d_cur = d
    end if
    ∆t = CalculateCAStep(d, n1, n2)
    if ∆t < ∆t_cur then
        ∆t_cur = ∆t
    end if
    return ∆t_cur, d_cur
else if n1 is not a leaf node then
    A ← n1.leftchild; B ← n1.rightchild; C = D = n2
    d1 = Distance(A, C); d2 = Distance(B, D)
else
    A = B = n1; C ← n2.leftchild; D ← n2.rightchild
    d1 = Distance(A, C); d2 = Distance(B, D)
end if
if d2 < d1 then
    if d2 < w·d_cur then
        return C2A(B, D, d_cur, ∆t_cur, w)
    else
        ∆t = CalculateCAStep(d2, B, D)
        if ∆t < ∆t_cur then
            ∆t_cur = ∆t
        end if
    end if
    if d1 < w·d_cur then
        return C2A(A, C, d_cur, ∆t_cur, w)
    else
        ∆t = CalculateCAStep(d1, A, C)
        if ∆t < ∆t_cur then
            ∆t_cur = ∆t
        end if
    end if
else
    if d1 < w·d_cur then
        return C2A(A, C, d_cur, ∆t_cur, w)
    else
        ∆t = CalculateCAStep(d1, A, C)
        if ∆t < ∆t_cur then
            ∆t_cur = ∆t
        end if
    end if
    if d2 < w·d_cur then
        return C2A(B, D, d_cur, ∆t_cur, w)
    else
        ∆t = CalculateCAStep(d2, B, D)
        if ∆t < ∆t_cur then
            ∆t_cur = ∆t
        end if
    end if
end if


BVH - Bounding Volume Hierarchy In the Conservative Advancement algorithm we saw that objects may be represented as trees called bounding volume hierarchies. The root node of such a tree is a BV containing the whole object. That node is divided into two child nodes, each of which is a BV containing the corresponding part of the object. The children of each node are divided in the same way, containing the BVs of the corresponding object parts. Algorithm 6, adapted from [11], shows the process used to detect collisions in such trees. Figure 2.12 has an example of a BVH of an articulated object built with AABBs and another one built with OBBs.

Figure 2.12: a) BVH of an articulated object. b) BVH based on AABB (see 2.1.3.1). c) BVH based on OBB (see 2.1.3.1). Dashed borders represent BV boundaries. Continuous lines are the original object boundaries. Adapted image from [19].
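A minimal sketch of such a construction for axis-aligned boxes is shown below: a set of 3D points is enclosed in an AABB, split along the box's longest axis, and the two halves are built recursively. The splitting rule and names are ours, chosen only for illustration.

import numpy as np

class AABBNode:
    def __init__(self, lo, hi, left=None, right=None, indices=None):
        self.lo, self.hi = lo, hi          # box corners
        self.left, self.right = left, right
        self.indices = indices             # primitive indices (leaves only)

def build_bvh(points, indices=None, leaf_size=2):
    """Recursively build a BVH of axis-aligned boxes over a set of 3D points."""
    if indices is None:
        indices = np.arange(len(points))
    pts = points[indices]
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    if len(indices) <= leaf_size:
        return AABBNode(lo, hi, indices=indices)
    axis = np.argmax(hi - lo)                      # split along the longest axis
    order = indices[np.argsort(points[indices, axis])]
    mid = len(order) // 2
    return AABBNode(lo, hi,
                    left=build_bvh(points, order[:mid], leaf_size),
                    right=build_bvh(points, order[mid:], leaf_size))

def boxes_overlap(n1, n2):
    """Axis-aligned overlap test between two BVH nodes."""
    return bool(np.all(n1.lo <= n2.hi) and np.all(n2.lo <= n1.hi))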

As time goes by, the objects can move. To take the motion into account, the position of each object is updated as the objects move. Those updates can be done in two different ways:
1) The first one is the usual way, in which a new tree is built at each iteration of the motion.
2) In the second one, the existing BVH is refitted by updating the position and orientation of the object.

After updating the trees, FCL builds a BVTT, bounding volume test tree [20], in a way similar to algorithm 6; it contains the collision information of each node of one tree tested against the nodes of the other tree, as we can see in figure 2.14, which is the BVTT of the two objects A and B presented in figure 2.13.
The BVTT process is used mostly because, if the objects do not collide, only the first node of the BVTT needs to be verified, which accelerates the whole collision detection process.

2.1.4 Motion Planning

To plan the motions of each finger, it is necessary to resort to motion planning algorithms. For this purpose we use the Open Motion Planning Library (OMPL) from MoveIt which, by default, uses the RRT (Rapidly-Exploring Random Trees) algorithm when the state space has no default projection and the KPIECE algorithm when it does, which is the case whenever one of the built-in state spaces is used.


Figure 2.13: a) BVH of object A; b) BVH of object B.

Figure 2.14: Resulting BVTT of objects in figure 2.13.

2.1.4.1 OMPL - Open Motion Planning Library

Using this library became necessary since, to manipulate objects, the fingers have to change their pose on the object. The OMPL algorithms RRT and KPIECE are both tree-based planners: they begin by rooting a tree at the starting configuration of the robot which, in this case, is one finger. The expansion method of each planner differs but, in the end, if a collision-free path exists, the algorithm returns that path. By path we mean a sequence of connected feasible states in the robot state space that reaches a final state, with all states collision free.

In OMPL there are two types of algorithms:
- Control-based planners - motion planners whose result is a control law that, given the initial state, the final state and the world state, allows the robot to reach the final state taking into account possible changes in the world state.
- Geometric planners - whose result is a predefined trajectory to be executed by an external controller.
Although OMPL has many algorithms, in the next sections we will only refer to the geometric planners KPIECE, RRT and PRM.


Algorithm 6 BVTT algorithm

1: a = bounding volume of A's tree
2: b = bounding volume of B's tree
3: a[i], b[i] are children of a and b, resp.
4: TRAVERSE(a, b):
5: if a or b is empty then
6:   return
7: end if
8: if b is leaf then
9:   if a is leaf then
10:     check primitives enclosed by a and b
11:     return
12:   else
13:     for all i do
14:       if a[i], b overlap then
15:         TRAVERSE(a[i], b)
16:       end if
17:     end for
18:   end if
19: else
20:   if a is leaf then
21:     for all i do
22:       if a, b[i] overlap then
23:         TRAVERSE(a, b[i])
24:       end if
25:     end for
26:   else
27:     for all i do
28:       for all j do
29:         if a[i], b[j] overlap then
30:           TRAVERSE(a[i], b[j])
31:         end if
32:       end for
33:     end for
34:   end if
35: end if

KPIECE - Kinodynamic Planning by Interior-Exterior Cell Exploration

KPIECE is a tree-based method which explores the state space by discretizing it into uniform cells of the same fixed size. During the exploration, the cells are classified as interior or exterior depending on their non-diagonal neighbour cells [21].

• An exterior cell has fewer than 2n neighbour cells.

• An interior cell has 2n neighbours.

The tree is initialized with a movement of zero duration which consists of the initial state of the robot. Exterior cells are preferred to interior cells and, when all neighbour cells are interior or exterior, the importance of each cell becomes the new exploration criterion. The definition of importance can be found in equation 2.8, where p is the cell being evaluated, I is the iteration number when p was created, score is initialized with value 1, S stands for the number of times p was selected for expansion, N is the number of neighbour cells instantiated at the same level of discretization and C is a positive measure of


coverage for cell p. The cell coverage is simply the sum of the durations of the motions in the whole tree up to the present iteration. The maximum level of discretization is predefined at the beginning and essentially increases as shown in figure 2.15.

Importance(p) = \frac{\log(I) \cdot score}{S \cdot N \cdot C} \quad (2.8)
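Numerically, equation 2.8 is a direct ratio of the counters kept per cell; a minimal transcription (variable names are ours, and the counters are assumed to be strictly positive) is:

import math

def cell_importance(iteration_created, score, times_selected,
                    num_neighbors, coverage):
    """Importance of a KPIECE cell as in equation 2.8.
    iteration_created: iteration I at which the cell was created
    score: cell score (initialised to 1)
    times_selected: S, how often the cell was selected for expansion
    num_neighbors: N, neighbours at the same discretization level
    coverage: C, positive coverage measure of the cell"""
    return (math.log(iteration_created) * score) / (times_selected *
                                                    num_neighbors * coverage)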

Each node is defined by the previous state and the movement needed to go to the next cell. The algorithm finishes when a stopping condition is satisfied, such as the final state being reached or all cells having been explored without a solution being found.

Figure 2.15: Example of KPIECE discretization with three levels. Image from [22]

RRT - Rapidly-Exploring Random Trees

In this algorithm, a tree in which all nodes correspond to collision free states is built, where the first node is the initial configuration of the robot. The origin of the node created at each iteration is shown in algorithm 7, adapted from [23], where xinit is the initial state, Kiterations is the maximum number of iterations, ∆t is the duration of each movement, Tree is the tree of all motions, xrand is a new sample of the state space, xnear is the closest node of the tree to xrand, and xnew is the state reached from xnear with a movement u applied during ∆t time. The function RANDOM STATE() returns a random state, which is added to the tree after finding its nearest neighbour through the NEAREST NEIGHBOR(xrand, Tree) function. The NEW STATE(xnear, u, ∆t) function creates the state that goes from xnear towards xrand with movement u for ∆t time.

All movements and states are free of obstacles. The simplest movement is a linear trajectory which connects two states. This algorithm allows searching in non-convex high dimensional spaces and can also deal easily with obstacles, non-holonomic robots and kinematic constraints.


Algorithm 7 RRT algorithm

GENERATE RRT(xinit, Kiterations, ∆t)
1: Tree.init()
2: for i = 1 to Kiterations do
3:   xrand ← RANDOM STATE()
4:   xnear ← NEAREST NEIGHBOR(xrand, Tree)
5:   xnew ← NEW STATE(xnear, u, ∆t)
6:   Tree.ADD VERTEX(xnew)
7:   Tree.ADD EDGE(xnear, xnew, u)
8: end for
9: RETURN Tree


If no final state is defined, the algorithm connects the initial state to states sampled from the whole state space by generating random trajectories, limited by the number of iterations. Figure 2.16 shows an example of a solution for a call of the RRT algorithm without a final state.

Figure 2.16: Example of the RRT method applied without final state specification. The image represents the evolution of the tree during the method, from left to right. Image from [23].
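A compact 2D version of algorithm 7, for a point robot with purely geometric (straight-line) motions, is sketched below; the collision test is_free is a stub supplied by the caller, since it depends on the environment. This is only an illustration of the expansion rule, not the OMPL implementation.

import random, math

def rrt(x_init, is_free, k_iterations=500, step=0.1, bounds=(0.0, 1.0)):
    """Grow an RRT rooted at x_init; returns the tree as {child: parent}.
    is_free(x) is a user-supplied collision test for a 2D state x."""
    tree = {x_init: None}
    for _ in range(k_iterations):
        x_rand = (random.uniform(*bounds), random.uniform(*bounds))
        x_near = min(tree, key=lambda v: math.dist(v, x_rand))   # nearest neighbour
        d = math.dist(x_near, x_rand)
        if d == 0.0:
            continue
        # Move from x_near towards x_rand by at most `step` (the motion u).
        t = min(step / d, 1.0)
        x_new = (x_near[0] + t * (x_rand[0] - x_near[0]),
                 x_near[1] + t * (x_rand[1] - x_near[1]))
        if is_free(x_new):
            tree[x_new] = x_near            # add vertex and edge
    return tree

# Example: free space is the unit square minus a central disc of radius 0.2.
free = lambda p: math.dist(p, (0.5, 0.5)) > 0.2
print(len(rrt((0.1, 0.1), free)))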

PRM - Probabilistic Roadmap Method

PRM is quite different from the above algorithms since it uses random sampling to build a roadmap of the state space. This roadmap is a set of states in which each state is connected to every state it can reach without an object in between. The map created has loops; the Dijkstra algorithm [24] is then called to select the shortest path found in it. The map building phase is the first phase and the second phase is named the query phase (where the Dijkstra algorithm is called).

• Dijkstra Algorithm - This algorithm receives a graph as input and returns the shortest path in the graph. At first, it creates a list S holding the shortest paths calculated during the process. The list is initialized empty and, at each iteration, the edge with the lowest cost is added to the list and the corresponding vertex is marked as covered. Finally, the shortest path in that list is selected and returned (a minimal sketch is given after this item).
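The sketch below is a standard priority-queue formulation of Dijkstra's algorithm over a roadmap stored as an adjacency dictionary; the roadmap and its costs are made up for the example.

import heapq

def dijkstra(graph, source, target):
    """Shortest path in a graph given as {node: {neighbour: edge_cost}}.
    Returns (cost, path) or (float('inf'), []) if target is unreachable."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)                       # u is now "covered"
        if u == target:
            break
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if target not in dist:
        return float('inf'), []
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

# Example roadmap with four states.
roadmap = {'a': {'b': 1.0, 'c': 4.0}, 'b': {'c': 1.0, 'd': 5.0}, 'c': {'d': 1.0}}
print(dijkstra(roadmap, 'a', 'd'))   # (3.0, ['a', 'b', 'c', 'd'])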

Figure 2.17 is an example of applying the PRM method in a certain environment. The orange path is returned as the shortest one and it is connected to the initial and final states by connecting the initial state to the beginning of the graph and the final state to the end of the graph.


Figure 2.17: Example of the PRM method applied in an environment with some objects (in gray). The graph is composed of gray edges and orange vertices and the solution is the set of orange edges and vertices. Image from [25].

In short, PRM is more expensive since it builds an entire roadmap before using any query information, and only after the map is built does it find the shortest path. If the final state is relatively near the initial state, it is not necessary to explore the whole map.
Both RRT and KPIECE are tree-based algorithms, but the biggest difference is that KPIECE discretizes the state space into uniform cells and explores it until it reaches the final state, while RRT connects random samples to a tree and may take more time since the pose of each sample is random. The fastest algorithm is KPIECE because the way it explores the state space is more balanced and uniform and it only increases the discretization level when needed, for example, when there are obstacles with smaller dimensions than the cell dimension.

2.2 State of The Art

In the state of the art we present some grasping tools already developed, as well as some manipulation techniques in the last section of this chapter.

2.2.1 Grasping

There is already some work developed in robotic grasping. There are grasps performed just with grippers, as with the PR2 robot in [26], where the pre-processing to find and execute a grasp is based on hand sensors: one on the hand and one on each finger of the gripper. That pre-processing finds the adequate grasp after identifying the shape of the object.

In [27] some grasps performed with an anthropomorphic hand using synergies are defined. An anthropomorphic hand is a hand very similar to the human hand: same joints, same form, etc. Synergy


is the coordinated effect of two or more systems performing a task. In this case, those systems are the fingers of the robot's hand and the task is single-handed object manipulation.

In figure 2.18 we can see eight different grasps that fit in diagram 2.1; for example, the tripod fits the precision circular tripod grasp in the diagram, while the palmar pinch fits the precision-prismatic thumb and index fingers grasp.
Furthermore, considering the eight separate grasps of figure 2.18, suppose we want the robot to write something; the grasp selected at first may not be the writing tripod. Let us admit that the writing tripod is the only one suitable for writing. If the pen or pencil lies on some surface, the robot needs to pick it up, and the robotic hand cannot pick the pen with the writing tripod at first, so it needs to pick the pen with, for example, the palmar pinch and then change some finger positions to achieve the writing tripod grasp.

Figure 2.18: 8 different kinds of grasps already implemented. Image from [27].

Such grasps are composed of finger postures. Each posture p is given by:

p = [\theta_1, \ldots, \theta_n] \in \mathbb{R}^n, \quad (2.9)

where n is the number of joints in the robotic hand. With synergies, each posture is:

p = \sum_{i=1}^{b} a_i e_i \quad (2.10)

in which {e1, e2, ..., eb} are the b hand synergies and {a1, a2, ..., ab} are the weights of each synergy ei, so that the posture is a linear combination of the synergies.
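Numerically, equation 2.10 is just a weighted sum of synergy vectors; a small sketch, with an invented two-synergy basis for a hypothetical 4-joint hand, is:

import numpy as np

def posture_from_synergies(weights, synergies):
    """Compose a hand posture p (joint angles, equation 2.10) from synergy
    vectors e_i and their weights a_i."""
    weights = np.asarray(weights)              # shape (b,)
    synergies = np.asarray(synergies)          # shape (b, n)
    return weights @ synergies                 # p in R^n

# Illustrative basis for a 4-joint hand (values are made up).
e = [[0.2, 0.4, 0.4, 0.1],     # e1: whole-hand closing
     [0.5, 0.0, -0.2, 0.0]]    # e2: thumb-opposition-like motion
print(posture_from_synergies([1.0, 0.5], e))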

We can also identify the necessary fingers, or joints, needed to perform each grasp and with that we


can reach any grasp by verifying the current joint values and comparing them to the expected values of

the desired grasp.

2.2.2 Manipulation

Currently, there is some work developed in dexterous manipulation considering: object reorientation [28], fine manipulation [29], finger gaiting [30], [31] and finger rolling with finger gaiting [32]. Finger gaiting is defined as any periodic sequence of finger relocations used to achieve a grasp different from an initial one. It is also known as re-grasping.

In object reorientation, rolling and slipping movements are mostly used. In [28], large scale objects were used and rolling fingers were allowed. The manipulator was a three-fingered gripper (hemispherical fingertips).

In [29] the authors implement finger gaits for turning a sphere with a three-fingered robotic hand by teaching the sequence of motions required. An RRT tree is built in which two vertices are adjacent only if there exists a finger switch between the two grasps of the two vertices. This property allows re-grasp planning to be formulated as a graph search.
Similarly, [30] shows a way of orienting a sphere with a 3-fingered manipulator by finger gaiting, also using RRT.

The closest paper to our approach is [33], which deals with motion planning for a multi-fingered hand manipulating an object, capable of computing object and finger trajectories as well as finger relocations on the object. It connects configurations of the subspace where the object is grasped into a PRM graph and its paths are computed using the RRT method. Its main characteristics are the use of a four-fingered gripper, as in figure 2.19, and the use of hard finger contact points.

Like the previous methods, [31] also uses RRT techniques to develop a finger gait planner to accomplish object manipulation. A formulation based on a hybrid automaton, named finger substitution, is used. Such a formulation is based, initially, on a three-fingered hand, as in figure 2.20, with two fingers holding an object while the third does not move. When the grasp needs to be adjusted, mainly because one or both fingers reached their limits for grasping the object in a force closure way, the third finger creates a contact with the object so that one of the other fingers can be removed from the grasp, creating a new one. With the new grasp, the manipulation continues until the goal pose (position + orientation) is achieved or some other constraint makes the motion stop. In this approach there are neither rolling nor sliding motions, and the contact model used is the hard finger contact model. Both the object and the fingers are rigid bodies, and their geometries are known.

Some situations may happen as in [32], where it is assumed that all fingers are needed to perform the initial force closure grasp, so gaiting cannot be applied unless rolling and/or sliding are used first to move the finger contacts on the object surface, so that at least one finger can be dispensed from the new force closure grasp. In that paper, rolling and finger substitution are used, with hard finger contacts for three-finger grasps and soft finger contacts for two-finger grasps. The robotic hand is a three-fingered gripper similar to [30].


Figure 2.19: Example of an object reorientation by finger gaiting. Image from [33].

Figure 2.20: Example of a sphere manipulated with finger substitution. Image from [31].

There is previous work on dexterous manipulation in which not only the fingers but also the palm is used for manipulation, to accomplish the task of picking up an object and letting it roll from the palm to the fingertips so that the object can be deposited on a surface ([34]). This approach uses an anthropomorphic hand model that provides stable grasps and allows non-prehensile manipulations, making it possible to explore object motion with fewer actuated degrees of freedom. We can see an application example in figure 2.21, where the object is grasped, some manipulation is done to drop the object statically on the palm and, with a continuous rolling movement, the object is moved closer to the fingertips.

The article [35] proposes switching from one grasp configuration to another with a 4-fingered hand using the hard finger contact model and, similarly to the previous methods, it is based on a switching graph that allows re-grasp planning. It takes non-convex polygons into account but does not consider curved objects. They also consider rolling motion of the fingers. One of the proposed directions for future work is to extend the approach to the three-dimensional case.


Figure 2.21: Example of picking up and dropping an object. Image from [34].


This last approach is very similar to ours, since we intend to explore the re-grasping ability of robotic hands. In our case we want to use a 5-fingered anthropomorphic hand.
Similarly to what happens in finger gaiting, we intend to move some fingers to different contact positions on the object to reach a new grasp. Since we have a 5-fingered dexterous hand, we barely have grasps where all five fingers are required for force closure, so we will not need to resort to rolling or sliding to make a grasp force closure with fewer fingers.

The manipulation that we intend is basically to grasp an object and achieve some other grasp, so we can perform some task with the grasped object. We will not consider finger rolling or sliding in the finger repositionings. However, while the object is being used to do some task, some rolling might happen, but we will not address those cases explicitly.
Our approach has a bit of each of the approaches referred to above, since we want to re-grasp objects by finger relocation but with a 5-fingered dexterous hand. We also want to give the robot the ability to hold the object so it can be used for its own purpose.


Chapter 3

Outline of the Proposed Algorithm

In the literature there exist some algorithms that generate grasps that allow the robot to pick up objects. With this work, we intend to give robots the ability to change the position of the fingers while the object is held. The purpose is to reach some other grasp in order to use the object.

3.1 Finger Gaiting Algorithm

First of all we should define the required inputs of this algorithm. We need a robotic hand, a set of at least two force closure grasps, one as the initial pose and another as the final pose, and an object on which to perform those grasp poses. The object should be compatible with the selected grasps so that the re-grasp task can be accomplished.
Given an initial force closure grasp, we start with the contact acquisition so we can identify which fingers are removable from the current grasp.
This identification is described in algorithm 8.

Algorithm 8 Identify Removable Fingers

1: fingers names ← "names of fingers from topic positions normals"
2: for i = 1 to number fingers do
3:   Vpositions ← "positions of i-th finger from topic positions normals"
4:   Vnormal ← "normal of i-th finger from topic positions normals"
5: end for
6: if ForceClosure(Vpositions, Vnormal) then
7:   for finger i = 1 to number fingers do
8:     Vnormal aux ← Vnormal "without normal from i-th finger"
9:     Vpositions aux ← Vpositions "without positions from i-th finger"
10:     if ForceClosure(Vpositions aux, Vnormal aux) then
11:       Vremovable ← finger i
12:     end if
13:   end for
14: end if
15: publish Vremovable in removable contacts

It tests, finger by finger, which ones can be removed from the grasp without losing stability. If one of the fingers is not touching the object, that finger is automatically removable.
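A compact leave-one-out version of algorithm 8 could look like the sketch below, where force_closure stands for the force closure service described in chapter 4 (its implementation is not reproduced here) and the data layout is ours.

def removable_fingers(contacts, force_closure):
    """contacts: dict finger_name -> (positions, normals) of its contact points.
    force_closure: predicate taking the pooled positions and normals of a grasp.
    Returns the fingers that can be lifted while keeping force closure."""
    def pooled(excluded=None):
        positions, normals = [], []
        for name, (p, n) in contacts.items():
            if name != excluded:
                positions.extend(p)
                normals.extend(n)
        return positions, normals

    if not force_closure(*pooled()):
        return []                                  # current grasp is not stable
    removable = []
    for finger in contacts:
        if not contacts[finger][0]:                # no contact: trivially removable
            removable.append(finger)
        elif force_closure(*pooled(excluded=finger)):
            removable.append(finger)
    return removable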


Having identified those fingers, we start the re-grasp task. This re-grasp task is based on finger gaiting. The gait process is done as shown in algorithm 9.
At each contact change, we update the force closure result so that we keep a force closure grasp all the way. At each positive force closure result, the set of removable fingers may change, since it is updated at each finger gait.

Algorithm 9 Finger Gaiting

1: while current pose ≠ final pose do
2:   removable fingers ← "names of fingers from topic removable contacts"
3:   if removable fingers ≠ ∅ then
4:     if ∃ removable fingers(i) ∈ fingers to reposit(j) then
5:       if finger done(j) = 0 then
6:         Lift finger(removable fingers(i))
7:         Reposition finger(removable fingers(i))
8:         Land finger in obj(removable fingers(i))
9:         finger done(j) ← 1
10:       end if
11:     end if
12:   end if
13: end while

In figure 3.1 we have a flowchart to illustrate the sequence of actions done by the algorithm.

At each update of the force closure test and of the removable fingers set, the fingers that could not be relocated in previous iterations but now can be are selected to be relocated, until they reach the final pose.
The algorithm gives priority to repositioning the fingers needed for the final grasp pose. After all the fingers of the final grasp are in their corresponding final poses, the fingers not required are lifted. The final pose is reached and the algorithm ends.

As outputs of this algorithm we have the finger gaits that lead each of the fingers from the initial grasp to the final grasp. Each finger is relocated one at a time, so that we have a force closure grasp at every iteration of the algorithm. The final result is the robotic hand holding the object with the configuration of the final grasp.
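Stripped of the ROS plumbing, the loop of algorithm 9 together with the priorities just described can be sketched as follows; every helper (removable_now, lift, reposition, land, at_final_pose) is a placeholder for the motion primitives and tests described in chapter 4, and termination and error handling are omitted.

def finger_gaiting(removable_now, fingers_to_reposition,
                   lift, reposition, land, at_final_pose):
    """Relocate one finger at a time until the final grasp is reached.
    removable_now(): fingers currently removable (from the force closure test)
    fingers_to_reposition: fingers whose contacts differ in the final grasp
    lift / reposition / land: motion primitives acting on one finger
    at_final_pose(): True once the final grasp configuration is reached."""
    done = {f: False for f in fingers_to_reposition}
    while not at_final_pose():
        for finger in removable_now():
            # Priority: fingers that are needed, in a new place, for the final grasp.
            if finger in done and not done[finger]:
                lift(finger)
                reposition(finger)
                land(finger)
                done[finger] = True
                break              # re-check force closure before the next gait
    return done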


Figure 3.1: Flowchart illustrating the proposed algorithm (see text).


Chapter 4

Implementation

4.1 ROS Development Platform

4.1.1 Brief Explanation

ROS, Robot Operating System [36], is an open-source flexible framework, a kind of middleware for writing robot software. It has many tools, libraries and conventions that simplify the task of creating robot applications across many robotic platforms by abstracting the hardware and device drivers. It is very robust since it has cross-language interoperability.

The ROS file system is organized in packages and manifests. Packages are the folders which contain libraries, messages, services and nodes. The manifest is a description file of the package that defines its dependencies and holds information about the package.

ROS is known for its architecture and communication tools. The fundamental concepts of ROS are nodes, messages, topics and services.
Nodes are processes that perform computation, also known as software modules. Packages can have one or more nodes that communicate with each other by sending or receiving messages through topics. Messages are typed data structures that are mostly composed of a header and primitive data type components (single values, arrays, arrays of other messages). A topic is where messages are published, and nodes can subscribe to topics to receive the information published by other nodes. In figure 4.1 there is an example of communication between some nodes and topics by publishing and subscribing to topic information.

To communicate between nodes, ROS also has a service-client model defined by a pair of message structures: one for the request (client) and one for the reply (service). The service must always be active, waiting for the client to send a request message; the service then replies to that request. Each client has only one service associated, unlike topics, which allow multiple subscriptions.

ROS also has a transform package named tf, used to acquire transform data from one reference frame to another. The figure in appendix 6.2 illustrates a tree which is dynamically constructed by ROS to hold all the transforms.


Figure 4.1: Simple example of topics and nodes interaction (publications and subscriptions).

Figure 4.2: Example of a ROS tf tree. Each edge represents a transform from the parent node to the child node.

4.1.2 Software Architecture

From the communication tools available in ROS we used topic communication and service clients.
One of the service clients is used to make requests to the force closure code, which was written in the Python language. The server is always active and, when needed, the client sends a request to obtain the result of the force closure test provided by the server. The server itself is a node that contains the code to compute the result, while the client can be a node or be embedded in any node, since each client is associated with a single service.
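On the client side, a service call takes only a few lines of rospy; in the sketch below the standard std_srvs/Trigger type (available in recent ROS 1 distributions) is used as a stand-in for the actual force closure request/response pair, whose definition is not reproduced here, and the service name is ours.

import rospy
from std_srvs.srv import Trigger

def ask_force_closure():
    """Call a (stand-in) force closure service and return its boolean result."""
    rospy.wait_for_service("force_closure")            # block until the server is up
    try:
        client = rospy.ServiceProxy("force_closure", Trigger)
        response = client()                            # Trigger has an empty request
        return response.success
    except rospy.ServiceException as e:
        rospy.logwarn("service call failed: %s", e)
        return False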

The other service client was used to acquire the updated values from the joints when the hand is

holding the object with the first grasp.

Communication by topics is used to acquire information about the contact links from Gazebo topics (see 4.2), to publish and subscribe the transformed coordinates of such contacts, to publish the angle values of each joint, to publish joint values to the finger joint controllers, and to publish and subscribe the fingers that are available for removal, given a certain grasp.

We use the tf package to acquire and apply transforms to the contacts' positions and normal coordinates when they are acquired from Gazebo's topics. In appendix 6.2 there is the tf tree of the model used, where


we can see all the chains from the right hand to each fingertip, the model of the box used and the base used to keep the hand fixed.

The software was developed based on the diagram of figure 4.3, where ellipses stand for nodes and rectangles represent packages. Diamonds stand for the Force Closure and Joint Angles services. The arrows represent communications: continuous one-way arrows go from the publisher node to the subscriber node of a topic, two-way arrows stand for service calls and answers, and dashed one-way arrows mean that a package is consulted to help in some computation.

Figure 4.3: ROS structure of this algorithm (see text).

In figure 4.3 we can see the main components of the implemented software architecture. The red part is the MoveIt package from ROS, used to do the collision avoidance. The blue part contains the ROS components belonging to this algorithm. The orange part contains the code to reach the first grasp pose, and it uses the MoveIt package.
The major parts were developed under ROS, although we need to acquire the contact information from Gazebo. The green part represents the Gazebo tools: the simulator and the topic with the information about contacts.


4.2 Gazebo Simulator

4.2.1 Brief Explanation

The Gazebo simulator, documented in [37], is designed to simulate a 3D dynamic multi-robot environment, capable of recreating complex worlds with many physics parameters and light conditions that can be adjusted by the user. The Gazebo server reads URDF or SDF file formats to generate the world and populates it according to what is in the files. These file extensions are ".world" to load world parameters and ".sdf" to introduce robot models into the built world. The robots are dynamic structures composed of rigid bodies connected by joints. Angular and linear forces can be applied to generate movement and interaction with the simulated world. All simulated objects have mass, velocity, friction and other attributes that allow them to behave realistically when pushed, pulled, knocked over, or carried. Simulated bodies can have sensors, cameras or other interfaces associated to facilitate data flow. They can have zero or more joints connecting bodies so as to form dynamic and kinematic relationships. There are three kinds of joints: fixed joints, just to connect two bodies; planar joints, so that translational motion can happen; and rotational joints, so that a body can rotate as happens in articulated bodies. Joints can also act like motors, where a force can be applied to generate motion through the friction between the two bodies.

The physics engine used by Gazebo simulates the dynamics and kinematics associated with articulated rigid bodies and has features such as collision detection, mass and rotational functions.
Besides simulation, Gazebo also provides visualization tools so that users can understand the environment better and interact with it through mouse and keyboard.

To make data from the simulation environment easier to access, Gazebo allows building plugins. These plugins provide access to the respective libraries without using the communication system. There are five types of plugins: world, model, sensor, system and visual plugins, corresponding to the five different Gazebo components.
A world plugin is attached to a specific world. It can be used to control the physics engine and other world properties like ambient lighting. A model plugin is attached to a specific model, and through it the user can control the robot's joints and state. A sensor plugin is attached to one sensor and allows the user to acquire information from the sensor; it also controls sensor properties. A system plugin is specified on the command line and is the first to load during Gazebo start up, giving the user control over the start up process.
In this work we will not use Gazebo plugins; we resort solely to the interaction between ROS and the Gazebo simulator. The relationship between these two structures will be explained next.

4.2.2 The best simulator for the iCub humanoid robot

[38] analyses which is the best simulator for the iCub humanoid robot. Dynamics simulation is discussed there, since it is crucial for humanoid robotics. Gazebo is shown to be the best simulator since it supports multiple physics engines and can be easily extended with new features, thanks to its modular and plugin-based structure, among other features. The authors produced a report with data from users of different simulators.


Table 4.1: Most important features for a simulator. Table from [38].

# | Feature | Rating
1 | Stability of Simulation | Very Important, Crucial
2 | Speed | Important
3 | Precision of Simulation | Important
4 | Accuracy of Contact resolution | Important
5 | Same interface between real and simulated system | Important
6 | Computational load (CPU) | Not So Important
7 | Computational load (memory) | Not So Important
8 | Visual Rendering | Not So Important

Table 4.2: Most important criteria for choosing a simulator. Table from [38].

# | Most Important Criteria | Users (%)
1 | Simulation very close to reality | 32%
2 | Open-source | 24%
3 | Same code for simulated and real robot | 19%
4 | Light and Fast | 11%
5 | Customization | 6%
6 | No interpenetration between bodies | 3%

The results are presented in tables 4.1 and 4.2, respectively about the most important features of a simulator and the main criteria to take into account when choosing one. There are some other tables and data used to select the best simulator dynamics. In the end, the chosen simulator was Gazebo, since it is the most used in humanoid robot simulation, it is open-source, it has an active community and it supports multiple physics engines, which makes it the ideal candidate to simulate dynamics with contacts. More information about this study can be found in the article [38].

4.2.3 Gazebo and ROS

Since ROS is the most used robotic platform, there is already a package developed to use ROS and the Gazebo simulator together. Thus, ROS users can simulate their experiments and applications in a simulated environment which also provides visualization tools. The interaction between these structures is enabled by installing gazebo_ros_pkgs. With this package, when both applications are running, some ROS topics are created so that the user can access Gazebo information such as link states, model states and some Gazebo parameters. It also creates some ROS services to send control commands to the Gazebo model and to extract information from Gazebo. In figure 4.4, we can see a scheme with all the interactions between ROS, Gazebo and the gazebo_ros_control plugin that loads the hardware interfaces of the robot.

Despite Gazebo being the best choice of simulator due to its dynamics system, to work with Gazebo and ROS Indigo simultaneously the adequate Gazebo version is 2.2 (released in November 2013), while the latest Gazebo version is currently 6.1.0, released in August 2015; in version 2.2 some features are not available (for example, the dynamics are not stable enough). To do the manipulation of objects, a task with a certain level of complexity, without a good dynamic simulation system, the best and safest option is to simulate static objects, to make sure the results obtained are as reliable as possible.


Figure 4.4: Gazebo and ROS architecture scheme (see text). Image from http://gazebosim.org/tutorials/?tut=ros_control.


4.3 Hand Model for Gazebo

To create a model in Gazebo we need to fulfill some requirements, both for links and for joints.
A link contains the physical properties of the respective body of the model and it can have several collision and visual elements. The required element is the inertia, which determines the dynamic properties. To visualize the body we need to add a visual element to the link. If we are going to use collision information we also need to add a collision element to the link, and a sensor element in case we need sensor information. We can represent links with previously built meshes or with primitive shapes existing in the Gazebo database.
Joints connect links and have parent, child and origin as required elements. Parent and child refer to the parent link and the child link to connect. The origin element gives the position and orientation of the child link in the parent link reference frame.
Finally, we can add model plugin elements, in case we need some plugin to improve the control of the model.

Figure 1.1 shows the model of the hand used in this work. We used the meshes available to visualize


the iCub in the RViz visualizer and some boxes to support the wrist of the hand model. We chose this way to represent and support the hand because the process of loading the whole robot would be too expensive, so we load just the essential parts of the robot (wrist, hand and fingers).

Figure 4.5: Minimum and maximum angles of the model hand joints (see text).

Initially we chose to work with a known model of a robotic hand, the hand of the iCub. When we loaded the iCub model in Gazebo, we noticed that the robot had no hand, so we had to extract the hand from an existing model used for visualization in RViz. By adjusting some parameters and adding other parameters required by Gazebo, we got the hand presented in figure 1.1.

In figure 4.5 we can see the palm and each link of each finger, which turns from the position in figure 4.5a) until it reaches the limit of 90°, according to the real model. The maximum reachable angles of each joint are presented: the distal links at their maximum in sub-figure 4.5b); the intermediate and distal links in 4.5c); the proximal links, with the exception of the thumb, in 4.5d); the thumb proximal reachable angle in 4.5e); and the abduction reachability of the thumb in 4.5f) and 4.5g).

Now, we can send controls to the joints so they can bend and start to create some contacts.

4.4 Contact Computation

When sending commands to a certain joint, an effort is applied to the joint and it bends. With a set of efforts applied to each joint of each finger we can create contacts. Figure 4.6 illustrates a contact: the object is the white box. In 4.6a) the contact is centered in the blue sphere and the normal of the contact surface is shown in green. The indications in blue and green are shown when the option View - Contacts is selected in the Gazebo window menu. Sub-figure 4.6b) is a zoom of 4.6a) and 4.6c) shows the other side of the object, where we can see the normal directions of the contact surface.

The contact model acquired from Gazebo is based on the soft finger contact model, since it returns a contact structure composed of: all the positions (x, y and z coordinates), all the normals of the applied forces (x, y, z values, normalized so that the vector norm is 1), the applied force (decomposed in x, y and z coordinates), the torque about the normal of the contact and the names of the bodies in contact. The contact information received from Gazebo is not stable. During simulation, when there is a contact, the finger that is touching the object trembles due to the singularities associated with the contact and, as a consequence, the contact is not constant.


Figure 4.6: Index Finger touching an object (see text).

When all the fingers are in contact with the object, the contact information is received as shown in table 4.3.

The frequency of trembling is not the same for all contacts, so one sample of contact information may have, for example, 2 fingers in contact, the next sample may have no contacts, the next sample 3 contacts (one already detected in the first sample and 2 new ones), and so on. For a better understanding, table 4.3 shows a set of samples from the topic. The topic does not give us synchronized information within a single sample, but it has an advantage: the sampling frequency is high, so it does not take long to acquire all contacts.

This table was built starting when there was no contact yet; at sample 15 the first contact of the grasp is acquired. From that sample until the end of the table, we can see that the contacts appear, but almost no sample has all contacts detected. The values of the joints were not modified while this acquisition was made.

We need to post-process that information so we can get all the contacts of the simulated grasp. We acquire the information from all samples and ignore the contacts that are not relevant to the grasp, such as the contact between the box of the base that supports the hand and the ground.

This acquisition is done using temporal filtering, since the samples are not constant through time as desired. This filtering allows us to acquire consistent information about all the contacts existing in the simulation. Such adaptation to the simulator data is not that simple, since the interval between acquired samples is not always the same. If we updated the contact information based on just one sample at a time, we would not have all the contact information existing in the environment and would barely ever have a force closure grasp situation.
This limitation brings another problem: knowing when a contact no longer exists. Just one acquisition is not enough to be sure that a contact was removed. To take care of this we create some thresholds to do that detection and update the real contact information.
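Schematically, the temporal filter can be reduced to a per-contact timestamp with a time-out: a contact is kept while it keeps reappearing in the incoming samples and dropped once it has not been seen for a given number of samples. A minimal sketch (the threshold value and data layout are illustrative, not the exact ones used):

class ContactFilter:
    """Keep a stable set of contacts out of intermittent per-sample detections."""

    def __init__(self, dropout_samples=30):
        self.dropout_samples = dropout_samples   # samples without detection before removal
        self.last_seen = {}                      # contact name -> sample index
        self.data = {}                           # contact name -> (position, normal)
        self.sample = 0

    def update(self, detections):
        """detections: dict contact_name -> (position, normal) seen in this sample."""
        self.sample += 1
        for name, info in detections.items():
            self.last_seen[name] = self.sample
            self.data[name] = info
        # Drop contacts that have not reappeared for a while.
        for name in list(self.last_seen):
            if self.sample - self.last_seen[name] > self.dropout_samples:
                del self.last_seen[name], self.data[name]
        return dict(self.data)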

To save the contact information we copy the position and normal data into vectors, one vector of positions and one vector of normals for each link of each finger.

After we collect all the contact information, since the positions and normals received are in the reference frame of the corresponding link, we transform all positions and normals into a common reference frame.
We are always verifying whether any contact is no longer a contact in the simulation. We publish all the information about the contacts in a topic named positions normals (see figure 4.3). In this topic we have


all the information in just one sample. The choice of the reference frame is not relevant, since the effect is the same as long as the transforms are applied correctly.

With the position and normal data of the contacts already published in the topic, we are now able to calculate whether all the contacts create a force closure grasp. We do not save the forces and torques from Gazebo because the force closure test just needs the positions and normals, and assumes that any contact can apply unlimited force to resist any unlimited external disturbance.
Contacts with any part of each finger are detectable, since we have contact sensors on the surface of all links of the hand, as we have in human hands. The palm is not considered for the grasps.

sample | #contacts | bodies in collision
0 | 0 | -
1 | 1 | box from base vs ground
2 | 0 | -
3 | 0 | -
4 | 0 | -
5 | 1 | box from base vs ground
6 | 0 | -
7 | 0 | -
8 | 0 | -
9 | 0 | -
10 | 0 | -
11 | 0 | -
12 | 1 | box from base vs ground
13 | 1 | box from base vs ground
14 | 0 | -
15 | 2 | object vs right index fingertip, box from base vs ground
16 | 1 | box from base vs ground
17 | 4 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip
18 | 1 | object vs right index fingertip
19 | 2 | object vs right middle fingertip, object vs right ring fingertip
20 | 3 | object vs right ring fingertip, object vs right middle fingertip, box from base vs ground
21 | 2 | object vs right ring fingertip, object vs right little fingertip
22 | 4 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip
23 | 0 | -
24 | 3 | object vs right index fingertip, object vs right ring fingertip, object vs right middle fingertip
25 | 2 | object vs right ring fingertip, object vs right little fingertip
26 | 5 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip
27 | 1 | object vs right little fingertip
28 | 5 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip
29 | 1 | object vs right little fingertip
30 | 5 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip
31 | 1 | object vs right little fingertip
32 | 4 | object vs right index fingertip, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip
33 | 3 | box from base vs ground, object vs right ring fingertip, object vs right thumb fingertip
34 | 5 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip
35 | 1 | object vs right little fingertip
36 | 5 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip
37 | 1 | object vs right little fingertip
38 | 3 | box from base vs ground, object vs right ring fingertip, object vs right thumb fingertip
39 | 1 | object vs right little fingertip
40 | 6 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip, object vs right little fingertip
41 | 0 | -
42 | 5 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip
43 | 2 | box from base vs ground, object vs right ring fingertip
44 | 6 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip, object vs right little fingertip
45 | 0 | -
46 | 5 | object vs right index fingertip, box from base vs ground, object vs right ring fingertip, object vs right middle fingertip, object vs right thumb fingertip

Table 4.3: Set of time samples from the topic from which we acquire contact information. Notice the inconsistency of the information over time, even though the fingers did not move during the acquisition.

4.5 Force Closure Computation

With the points of contact established between the fingers and the object, we verify whether the force closure condition is satisfied. To do so, we test the set of contacts received with the equations in 2.1.2. In this test we consider each point of contact as a distinct contact and compute the friction cone for each of them.
To compute the friction cone, we approximate its base by a polygon with x edges. The value of x is named the level of discretization of the friction cone.
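Concretely, each contact normal is expanded into x unit vectors evenly spread over the friction cone of half-angle arctan(µ); a small sketch of this linearisation (µ and x are parameters of the test, and the construction is a standard one, not necessarily the exact code used) is:

import numpy as np

def friction_cone_edges(normal, mu, x=5):
    """Approximate the friction cone at a contact by x edge directions.
    normal: contact normal (3-vector), mu: friction coefficient."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    # Build two tangent directions orthogonal to the normal.
    helper = np.array([1.0, 0.0, 0.0]) if abs(n[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(n, helper); t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    # Edges of the linearised cone: n + mu * (cos θ t1 + sin θ t2), normalised.
    edges = []
    for k in range(x):
        theta = 2.0 * np.pi * k / x
        e = n + mu * (np.cos(theta) * t1 + np.sin(theta) * t2)
        edges.append(e / np.linalg.norm(e))
    return np.array(edges)

print(friction_cone_edges([0.0, 0.0, 1.0], mu=0.5, x=5).round(3))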

Figure 4.7: Metric Results using different levels of discretization.

With different discretization levels of the friction cone, the result also changes. We performed some tests with the discretization level varying from 3 to 30. In figure 4.7 we have the resulting values from one of those tests. To acquire these results, we computed the force closure test with one set of contacts corresponding to a single grasp pose. Looking at the graphic of figure 4.7, we can see that the metric is stable for discretization levels between 4 and 7 and then increases and starts oscillating. The purpose is to have a value as stable as possible, so we choose a level of discretization between 5 and 6. The time taken to compute the force closure result increases linearly, so the best value is the smallest possible. Since we intend to use a stable value in a safe zone, and as small as possible, we use the value 5 for the discretization level of the friction cone to calculate the force closure metric.

Instead of having one friction cone associated to each finger, we have as many friction cones as


contact points on each finger. This approach gives us better force closure results. This choice relies on the fact that the simulation does not give us constant data to process, so we decided to consider all the contact points of each finger. This decision gives better results because, with just one point per contact, we could get a false negative force closure result due to the position and normal direction of that single point; this way we are sure that we are computing a more accurate force closure result. This is our variant of the soft finger contact.

We adopted the work done in [1] and, by adjusting some parameters, like the discretization level of the friction cone, we achieved better force closure test results.
In terms of the implementation in ROS, since this method is only needed sporadically, we put the code into a service node; whenever we need to know the result of the force closure test, we fill the request, create a client and call the service through that client. The software architecture is shown in figure 4.3.

4.6 Motion Planning and MoveIt

4.6.1 Reaching the Object

To position each fingertip on the object we define a trajectory for each finger that results in a coordinated hand closing motion without colliding with external objects or with other fingers. We use a well known ROS package, MoveIt, which includes the OMPL and FCL libraries previously introduced in chapter 2. MoveIt is capable of choosing the best algorithms and methods to do motion planning with collision avoidance according to the available data about the environment. The information about the environment is kept in a planning scene that contains the robot and the objects.

We need to deal with the fact that this package requires the initial and final states to be in the collision free area. One way to deal with this issue is to attach the object to the fingers, allowing collisions between them, but then another issue appears: the fingers attached to the object become able to pass inside the object, which is not feasible in the real world. Our solution, since we need collisions in the final state, is based on planning the trajectory to the closest collision free point to the object and then increasing the joint angles by a certain percentage, so that we get a collision with the object exactly in the area where we want it to happen. This last part of the trajectory is not planned with the MoveIt algorithms.

To position each fingertip on the object we need to do the motion planning of each finger.
Since the finger gaiting motion is based on removing the contact and moving the finger to the final position, our approach starts by lifting the finger from the object surface. To align the finger with the final pose we use the joint limits of each finger, so that the motion fits any object. If the finger cannot pass at its own joint limits, then it cannot pass at all.

To create the final contact between the finger and the object, we implemented an algorithm. The algorithm is an optimization that finds the closest collision free pose to the object, based on a relation with a geometric series:


\sum_{n=1}^{20} \left(\frac{1}{2}\right)^{n}, \quad (4.1)

and the resulting angle at n iterations is given by:

\alpha_n = \alpha_{n-1} \pm \alpha_{max} \times \left(\frac{1}{2}\right)^{n}, \quad \alpha_0 = 0. \quad (4.2)

At each iteration of this algorithm, depending on the existence of a collision, the finger either keeps moving closer to the object or, in case the previous iteration detected a collision, moves away from it.
At the end, if there is still a collision, we subtract 0.1% of the maximum angle until the pose is collision free and only then do we do the motion planning. This percentage is low because we do not want the finger to end up too far from the object.
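Schematically, the search over the approach angle behaves like a bisection driven by the collision test; a minimal sketch following equations 4.1 and 4.2, with a hypothetical in_collision predicate standing for the FCL query, is:

def approach_angle(alpha_max, in_collision, iterations=20, back_off=0.001):
    """Find the joint angle that leaves the fingertip as close as possible to the
    object without colliding (equations 4.1 and 4.2).
    in_collision(angle): collision test for the finger at the given angle."""
    alpha = 0.0
    for n in range(1, iterations + 1):
        step = alpha_max * (0.5 ** n)
        # Move closer while collision free, back away after a collision.
        alpha += step if not in_collision(alpha) else -step
    # Final safety margin: retreat in 0.1% steps of the maximum angle.
    while in_collision(alpha):
        alpha -= back_off * alpha_max
    return alpha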

For collision detection, we use the FCL, previously introduced in chapter 2.

With this solution we guarantee a collision free path and a final state with the desired collision be-

tween the object and the finger.


Chapter 5

Results

5.1 Problem Description

In this work a method was developed to perform re-grasp actions using finger gaiting, by identifying which fingers are expendable in a certain grasp so that the finger gaiting can be made. To do that, we need to know which fingers we can reposition on the object.
To have that information we need to know whether it is safe to remove some fingers, which will then be moved to another position. We use force closure tests to make sure that the grasps are always force closure grasps during the finger gaiting.
In this chapter we establish the baseline for the solution to the re-grasping problem with finger gaiting. We also show some of the choices made for some adjustable parameters.

5.2 Experimental Setup

To approach the re-grasping task using finger gaiting, we need a hand model, one object and at least two different grasps to perform the in-hand manipulation.
The hand model chosen is the model of the iCub robot, shown in figure 1.1. The objects used are illustrated in figure 5.1.
The re-grasp task consists of changing the contacts between some fingers and the object, to reach a different grasp. We used the grasps presented in figures 5.2 and 5.3.

Figure 5.1: Objects used in this algorithm.


Figure 5.2: The version of the parallel extension grasp used in this work. Figure 5.3: Index Ring to Middle grasp (IR2M).

5.3 Reaching the First Grasp Pose

The first grasp is created by moving the fingers against the object as explained in section 4.6.1. We implemented a motion planning method to achieve our version of the parallel extension grasp. With this grasp pose we can show the robustness of the removable-finger identification: a force closure grasp could be achieved with just the middle finger and the thumb, but we use all fingers to show that some fingers can be removed.

Initially, the hand model and the object are placed as in figure 5.4a). Next, we apply the motion planning method and the fingers are positioned on the object surface as shown in figure 5.4b). All fingers are touching the object.

Figure 5.4: a) The hand and the object in the simulation environment. b) First grasp pose.

5.4 In-hand Manipulation

In figure 3.1 we showed the flowchart of this algorithm. Figures 5.5 and 5.6 show 4 different re-grasp sequences produced by the algorithm that all lead to the same result, the former using the box from figure 5.1.

During the test phase we mostly used the first sequence of the grasp phase, the upper sequence in figure 5.5. It starts with the grasp pose from figure 5.4b), repositions the index and ring fingers following one of the sequences, and reaches the intermediate grasp when all fingers are in contact with the object. From there, only the little finger and the thumb need to be removed, in any order, to reach the final pose.


Figure 5.5: 4 possible re-grasp sequences tested with the algorithm.

The algorithm can perform any of these sequences. The main advantage of being able to perform any re-grasp sequence appears when, for example, the index finger needs to be removed but is not expendable; in that case we can remove the ring finger instead (and the opposite may also happen). With this, we have a better chance of achieving the final grasp pose. The algorithm gives preference to repositioning the fingers needed for the final grasp pose; once they are repositioned, it removes the fingers not required for the final pose.

We show the robustness of the algorithm by using another object with a different shape: the cylinder from figure 5.1. Running the algorithm with a cylinder is challenging because its surface is not planar like that of a box, which requires a more accurate re-grasp to ensure force closure in every grasp phase. Figure 5.6 illustrates the result of applying the approach to this object; any of the 4 possible sequences shown in figure 5.6 can be performed.

Notice that the movements of the index and ring fingers are the most suitable for the re-grasp task described, since those fingers are flexed to their maximum and can therefore pass behind any object. If the fingers still collide with the object, the hand should be moved slightly away from the object so the fingers can be repositioned.

5.5 Force Closure Computation

In addition to the unsynchronized information, we also have noisy contact normals and positions. This issue increases the chance of feeding inconsistent data to the force closure test, corrupting its results.


Figure 5.6: 4 possible re-grasp sequences tested with the algorithm.

To deal with this problem, we have two options: compute the mean of all contact normals and positions of each link and compute the force closure with those mean values, or use all contact positions, and respective normals, to compute the force closure result. Both options are sketched below.
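As a minimal sketch of the two options (in Python, with a hypothetical force_closure_metric(positions, normals) function standing in for the actual test), the difference lies only in how the per-link contact data are aggregated before the test:

import numpy as np

def metric_with_mean(contacts_per_link, force_closure_metric):
    # Option 1: average the contact positions and normals of each link.
    # contacts_per_link maps a link name to a list of (position, normal) arrays.
    positions = [np.mean([p for p, _ in cs], axis=0)
                 for cs in contacts_per_link.values() if cs]
    normals = [np.mean([n for _, n in cs], axis=0)
               for cs in contacts_per_link.values() if cs]
    return force_closure_metric(positions, normals)

def metric_all_points(contacts_per_link, force_closure_metric):
    # Option 2: feed every contact point and its normal directly to the test.
    positions = [p for cs in contacts_per_link.values() for p, _ in cs]
    normals = [n for cs in contacts_per_link.values() for _, n in cs]
    return force_closure_metric(positions, normals)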

We tested both options, based on the sequence a, b1, c1, d, e1, f from figure 5.5. For each grasp phase we acquire the set of contact normals and positions and compute the force closure test with and without the mean calculation.

Figure 5.7 shows the force closure results with and without the mean of the contact normals and positions. This test was made using the box from figure 5.1 as the object to grasp. For a better understanding of the x axis:

• The first letter corresponds to the grasp phase in the sequence shown in figure 5.5:

– a - initial grasp pose

– b1 - changing the index finger

– c1 - index finger in the final position, ring finger lifted

– d - index and ring fingers in the final positions

– e1 - little finger lifted

– f - both little and thumb fingers lifted; final grasp pose


• The second letter indicates the finger being tested:

– i - index finger

– l - little finger

– m - middle finger

– r - ring finger

– t - thumb finger

If the metric value is negative, the force closure result is False. If the metric value is positive, the force closure result is True.

Figure 5.7: Metric Results using mean calculation, in blue, and using all contact points, in orange.

We notice that, without computing the mean, we get more accurate and credible results; when the method runs with the mean-based metric, the result is sometimes false. This happens because the mean is computed from the non-constant data coming from the simulator. Without the mean calculation the contact points are still not constant, but since all points are used, the metric returns a more consistent binary result, so we use all points without averaging.

5.6 Limitations

We accomplished the proposed work, although there are some issues left to improve. We start by describing some limitations of this method.

With the present implementation of the contact acquisition, the algorithm does not wait for the initial grasp pose to be completed. This issue might bring some trouble, namely that some fingers that need to be repositioned may not be available for repositioning because they are not expendable from the grasp.


With another pair of grasps, this method is not able to verify whether the sequence is feasible, directly or indirectly. It could be feasible, but the order of the gaiting might be crucial, or it might be necessary to resort to another grasp pose that connects the two sequences. The order of the gait can be decided using the metric value from the force closure test result: for example, when two grasps can be reached at a given step of a re-grasp sequence, we might choose the safer one, as sketched below.
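A minimal sketch of this heuristic (in Python, with a hypothetical metric(grasp) helper returning the force closure metric of a candidate grasp) would be:

def choose_next_grasp(candidate_grasps, metric):
    # Among the candidate grasps reachable at this step of the re-grasp sequence,
    # pick the one with the highest force closure metric, i.e. the safest one.
    # metric(grasp) is a hypothetical helper, not part of the current implementation.
    feasible = [g for g in candidate_grasps if metric(g) > 0]  # positive metric = force closure
    if not feasible:
        return None  # no safe gait step available at this point
    return max(feasible, key=metric)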


Chapter 6

Conclusions

6.1 Achievements

This work proposed a methodology to perform in-hand re-grasping actions with finger gaiting. Given an initial grasp pose, the approach aimed at changing the contacts between the fingers and the object to reach a new grasp pose. The re-grasp task was always performed with a force closure grasp so that the object did not fall.

In this approach we performed finger gaiting by moving one finger at a time to its final pose. This motion of the fingers required a motion planning algorithm with collision avoidance, so as not to create undesired collisions that could jeopardize the re-grasping process.

Moving one finger at a time required the identification of the fingers that could be removed from the

current set of contacts with the object. We identified the removable fingers by testing the stability of that

grasp without considering the finger under analysis. If the result of the force closure test remained true,

then the finger was removable from the grasp and we could move that finger freely to any other point of

contact on the object surface.

While performing the gaiting process, we removed and repositioned one finger at a time. As the contacts changed, the force closure test returned the result corresponding to the updated contact information.

We considered that any part of the finger can be in contact with the object; in other words, we considered that all links of each finger have sensors on their surface, allowing the acquisition of contact information from any part of the finger.

We created a new, unusual grasp pose that allowed us to re-grasp a static object in simulation: IR2M - Index Ring to Middle.

This approach differs from those already developed since it was implemented on the ROS robot platform, contributing to the ROS community and allowing adaptation to other robots. It is able to deal with sensors on all finger links and it also works with sensors only on the fingertips.

We showed our way of doing motion planning for finger gaiting with collision avoidance while, at the same time, creating the desired contact situation with the object. We showed how to deal with Gazebo contact information using temporal filtering.


We also showed that, with non-constant data acquisition, the best way to compute force closure tests is to use all contact points between the object and the fingers.

We also contributed to the community by introducing a model of the iCub hand for use in the Gazebo simulator.

We showed the robustness of this algorithm, not only with respect to variations of the object shape, but also in performing different finger gaiting sequences that reach the same final grasp pose.

6.2 Future Work

Future work can improve this research by enlarging the repertoire of pre-defined grasp classes that can be used with this method. Along the same lines, grasp skills could be introduced, not only for the iCub but also for general dexterous hands.

Developing a planner to ensure that the final grasp pose is reachable from the initial one would be a challenging extension. It could plan the whole re-grasp sequence and, if needed, use sub-grasps to get to the final grasp pose.

To overcome another limitation, we suggest making the algorithm wait until the first grasp is reached and only then perform the finger gaiting.

Validating the compatibility with other robotic hands could also enhance this work.

We also suggest optimizing the object surface area that each finger can reach.

Another way to continue this research is to couple the finger joints as in the real iCub hand model, favoring the use on the real robot. Notice that the current iCub hand model only has tactile sensors on the fingertips and on the palm.

We also suggest adding rolling and sliding techniques to enhance the re-grasping capability.



Appendix A - TF Frame Trees


[Figure: TF frame tree produced by the ROS view_frames tool, recorded at time 67.845. The tree contains the frames world, root_link, base_link, r_hand, box_model, base_box, base_box1, base_box2, base_box3, r_indexbase, r_index0 to r_index3, r_littlebase, r_little0 to r_little3, r_middlebase, r_middle0 to r_middle3, r_ringbase, r_ring0 to r_ring3 and r_thumb0 to r_thumb4, all broadcast by /robot_state_publisher at average rates of 50.223 Hz or 10.227 Hz, with buffer lengths of 4.480 s and 4.400 s respectively.]
