
    Vision-Based Grasp Planning System for Dexterous Hands

    Jiting Li, Wenkui Su, Yuru Zhang, Weidong Guo

    Robotics Institute, Beihang University, Beijing, China, 100083

E-mail: [email protected], [email protected]

    Abstract

This paper introduces a new approach to grasp planning for robotic dexterous hands. A master-slave control strategy is adopted to integrate human intelligence into the planning system, in which the human hand directly and interactively controls the robot hand. The human-machine interface is a computer vision system with two CCD cameras. The fingertips of the human hand are marked with markers of different geometries, so that they can be identified by the computer vision system. When the human fingers move, the vision system first captures the images of the markers and performs feature-based identification. The images of the two cameras are then matched, and the image centers of the markers are used to calculate the positions of the fingertips. The human fingertip positions are then mapped onto those of the dexterous hand in its palm frame. The human operator observes the motions of the robot fingers and decides the next step of motion. Through this procedure the human hand can guide the robot fingers to the target positions for grasping and manipulation. To verify the validity of the proposed grasp planning approach, two tasks are demonstrated in a virtual environment: one is pressing a button with a single finger and the other is moving the thumb and index fingers. The tasks are simulated in real time and performed successfully.

Key words: computer vision, grasp planning, multi-fingered dexterous hand, master-slave control

    1 Introduction

As a key issue for the dexterous hand, grasp planning has been broadly investigated [1-6]. In recent years, using master-slave and telemanipulation techniques to plan grasps for multifingered hands has attracted growing interest [2-5]. The main idea is that the human hand directly joins the grasp loop, so that human experience and intelligence can be integrated to ease the grasp decisions. In a typical master-slave system the human hand executes the master grasp, the human-machine interface measures the human hand motion, and, after the motion is mapped from the human hand onto the robot hand, the robot hand executes the slave grasp. The complexity of grasp planning is thus greatly reduced.

In a master-slave system, the human-machine interface and the motion mapping are two important issues that greatly influence the performance of grasping, such as its accuracy and speed. At present, datagloves are often used as the interface to measure human hand motion [1-5]. A dataglove is convenient for measuring joint angles, but it cannot provide the expected precision when the fingertip positions are needed accurately. For this reason, computer vision combined with an artificial neural network technique is used to calibrate the dataglove in the telemanipulation system for the DLR Hand [2]. The choice of interface depends on the motion parameters to be measured and on the space in which the motion is mapped. Similarly, the motion mapping can be done in joint space or in Cartesian space, depending on the motion parameters to be mapped.

Our goal is to set up a master-slave grasp system in which the human hand can precisely control the fingertip positions of the dexterous hand in real time, by means of a suitable human-machine interface and a feasible master-slave motion-mapping algorithm. As stated above, a dataglove cannot satisfy the precision requirement for the positions. Therefore stereo computer vision is adopted in our system to measure the positions of the human fingertips. To meet the real-time requirement of master-slave grasping, the master environment and the fingertip markers are designed to be as simple as possible, which greatly decreases the image processing time.

The other key issue solved in this paper is the master-slave motion mapping, which is required to make the master-slave manipulation simple and intuitive. To this end, we first establish the corresponding relation between the master and slave hand palms. Then a linear incremental mapping of the fingertip positions is made in Cartesian space in the palm frames.

    2 System structure

The master-slave grasp system consists of a human hand, a virtual robot hand and a human-machine interface, as shown in figure 1. As the master hand, the human hand is marked with markers of different geometries on its fingertips. The slave hand is a dexterous hand, named the BH4 Hand, which was developed by our group at the Robotics Institute of Beihang University, China. Its virtual model is shown in figure 2. The human-machine interface is a computer vision system with two CCD cameras that acquires and identifies the movement of the human fingertips. The calculated human fingertip positions are then mapped onto those of the dexterous hand in its palm frame, and the dexterous fingers move accordingly. The human operator observes the motions of the robot fingers and decides the next step of motion. Through this procedure the human hand can guide the robot fingers to the target position for grasping.

Figure 2. Virtual model of the BH4 Hand

    3 Identification of the human hand motion

The working procedure of the computer vision system is shown in figure 3. As is well known, the cameras must first be calibrated. The main concerns of this section are marking and identifying the human fingertips. The identification is feature based and is divided into two steps: position identification and shape identification.

Figure 3. Identification for computer vision (calibrating the cameras; marking the human fingertips; identifying features: position identifying, shape identifying and feature matching; calculating the 3D coordinates of the markers; active feedback)

    3.1 Marking the human fingertips

The markers are used to calculate the fingertip positions, so they should not be too big. They should also be simple, regular, and have markedly different features for different fingers, to make the image processing and identification easy and efficient. We choose several special geometries as the markers, as shown in figure 4.

Figure 4. Markers on the fingertips

    3.2 Identification for marker positions

The task of position identification is to determine the positions of the markers in the whole image. As shown in figure 4, the markers are isolated black areas surrounded by white in the image. The identified positions therefore satisfy the following conditions:

1) The centers are black.

2) Their eight-neighborhood areas are black.

3) They are surrounded by a larger white area.

The result is shown in figure 5. The identified markers are framed with squares.
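The paper does not give the implementation of this test; as an illustration only, the following C++ sketch applies the three conditions to a binarized image. The BinaryImage type, the ring radius and the white-area ratio used for condition 3 are assumptions made for the sketch, not values from the paper.

#include <cstdint>
#include <cstdlib>
#include <vector>

// Binarized camera image: 0 = black, 255 = white, row-major pixel storage.
struct BinaryImage {
    int width = 0, height = 0;
    std::vector<std::uint8_t> pixels;                // size = width * height
    std::uint8_t at(int x, int y) const { return pixels[y * width + x]; }
};

struct Candidate { int x, y; };                      // marker-centre candidate (pixel coordinates)

// Returns pixels satisfying the three conditions of this section:
// (1) the centre is black, (2) its eight-neighbourhood is black, and
// (3) it is surrounded by a larger white area.  The ring radius and the
// "mostly white" threshold for condition 3 are illustrative, not from the paper.
std::vector<Candidate> findMarkerCandidates(const BinaryImage& img,
                                            int ringRadius = 10,
                                            double whiteRatio = 0.8)
{
    std::vector<Candidate> candidates;
    for (int y = ringRadius; y < img.height - ringRadius; ++y) {
        for (int x = ringRadius; x < img.width - ringRadius; ++x) {
            // Conditions 1 and 2: the pixel and its eight neighbours are black.
            bool blackCore = true;
            for (int dy = -1; dy <= 1 && blackCore; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (img.at(x + dx, y + dy) != 0) { blackCore = false; break; }
            if (!blackCore) continue;

            // Condition 3: the square ring at distance ringRadius is mostly white.
            int whitePixels = 0, ringPixels = 0;
            for (int dy = -ringRadius; dy <= ringRadius; ++dy)
                for (int dx = -ringRadius; dx <= ringRadius; ++dx) {
                    if (std::abs(dx) != ringRadius && std::abs(dy) != ringRadius)
                        continue;                    // keep only the ring boundary
                    ++ringPixels;
                    if (img.at(x + dx, y + dy) == 255) ++whitePixels;
                }
            if (whitePixels >= whiteRatio * ringPixels)
                candidates.push_back({ x, y });
        }
    }
    return candidates;
}

In practice, neighbouring candidates belonging to the same black blob would still need to be merged into one marker region before its image centre is computed; that clustering step is omitted here.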


Figure 1. Vision-based master-slave grasp planning system for dexterous hands (master: motion of the human hand; human-machine interface: computer vision and identification of the human hand, followed by motion mapping; slave: motion of the robot hand; vision of the human operator closes the loop)


Figure 5. Result of position identification

    3.3 Shape identification for markers

After position identification, the goal of this step is to match the images of the left and right cameras and to distinguish the different fingers. The edge numbers of the marker geometries are chosen as the feature to be matched. If two images have the same edge number, they are considered to be images of the same finger, and their image centers are then used to calculate the position of the corresponding fingertip. The result of shape identification is shown in figure 6.

Figure 6. Result of shape identification
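A minimal C++ sketch of this matching rule is given below, continuing the sketch of Section 3.2. The DetectedMarker structure, and the assumption that each edge count occurs at most once per image (one marker geometry per fingertip), are illustrative; the paper does not describe how the edge count of a detected marker is computed.

#include <vector>

// One identified marker in a single camera image.  How its edge count is
// obtained (e.g. a polygonal approximation of the marker contour) is not
// described in the paper and is assumed to be available.
struct DetectedMarker {
    int    edgeCount;    // number of edges of the marker geometry (matched feature)
    double u, v;         // image centre of the marker, in pixels
};

// A matched left/right detection of the same fingertip.
struct StereoPair { DetectedMarker left, right; };

// Two detections are considered images of the same finger when their edge
// numbers agree.  Each edge count is assumed to appear at most once per image,
// since every fingertip carries a marker of different geometry.
std::vector<StereoPair> matchByEdgeCount(const std::vector<DetectedMarker>& leftImage,
                                         const std::vector<DetectedMarker>& rightImage)
{
    std::vector<StereoPair> pairs;
    for (const DetectedMarker& l : leftImage)
        for (const DetectedMarker& r : rightImage)
            if (l.edgeCount == r.edgeCount) {
                pairs.push_back({ l, r });
                break;                               // at most one match per left marker
            }
    return pairs;
}

The image centres of each matched pair are then used to calculate the position of the corresponding fingertip, as outlined in the sketch following figure 8.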

4 Motion mapping between master and slave hands

Assume the palm is fixed and the motion mapping is done in Cartesian space. We define

\Delta \mathbf{r}_{iR} = k_i \, \Delta \mathbf{r}_{iH},

where \Delta \mathbf{r}_{iH} and \Delta \mathbf{r}_{iR} are the increments of the fingertip positions of the human and robot hands, respectively, in their palm frames:

\Delta \mathbf{r}_{iH} = [\Delta x_{iH} \; \Delta y_{iH} \; \Delta z_{iH}]^{T}, \qquad \Delta \mathbf{r}_{iR} = [\Delta x_{iR} \; \Delta y_{iR} \; \Delta z_{iR}]^{T}.

The subscript i denotes each finger, and the subscripts H and R denote the human and robot hands. The factor k_i is the mapping factor for the master/slave motion; it is defined as the length ratio of each robot finger to the corresponding human finger.
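As a numerical example of this definition, if a robot finger is 1.2 times as long as the corresponding human finger, then k_i = 1.2 and a 10 mm increment of the human fingertip maps to a 12 mm increment of the robot fingertip. A minimal C++ sketch of the mapping, with illustrative type and function names that are not from the paper, is:

// Fingertip position or increment in a palm frame, in millimetres.
struct Vec3 { double x, y, z; };

// Linear incremental mapping of this section: delta_rR = k * delta_rH, where
// delta_rH is the measured increment of a human fingertip in the human palm
// frame and k is the mapping factor of that finger (robot/human length ratio).
Vec3 mapIncrement(const Vec3& delta_rH, double k)
{
    return { k * delta_rH.x, k * delta_rH.y, k * delta_rH.z };
}

// The robot fingertip target in the robot palm frame is advanced by the
// mapped increment in every control cycle.
Vec3 advanceRobotFingertip(const Vec3& rR, const Vec3& delta_rH, double k)
{
    const Vec3 d = mapIncrement(delta_rH, k);
    return { rR.x + d.x, rR.y + d.y, rR.z + d.z };
}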

    5 Experiments for master-slave grasping

    The experimental system is shown in figure 7.

Figure 7. Experimental system

Figure 8. Diagram of the system (start; search the markers globally; open the left/right eye images; calculate the image centers of the markers; if both images are identified, calculate the coordinates of the fingertips in the world frame; if the fingertips have been initialized, calculate the fingertip increments; transmit to the virtual fingers; end)
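The per-frame cycle of figure 8 can be summarized in a short sketch. The code below reuses the types and functions from the earlier sketches, assumes a rectified stereo rig (common focal length f, principal point (cx, cy), baseline b along x) as one possible way to recover 3D coordinates from the matched image centres, and declares hypothetical stand-ins (grabLeft, grabRight, detectMarkers, transmitToVirtualFingers) for the camera, identification and virtual-hand interfaces; none of these details are specified in the paper.

#include <cstddef>
#include <vector>

// Reuses Vec3 (Section 4 sketch), BinaryImage (Section 3.2 sketch), and
// DetectedMarker / StereoPair / matchByEdgeCount (Section 3.3 sketch).

// Rectified stereo model (assumption): both cameras share focal length f
// (pixels) and principal point (cx, cy); the baseline b (mm) is along x.
struct StereoRig { double f, cx, cy, b; };

// Stand-in interfaces (hypothetical): image capture and the link to the
// virtual BH4 hand are outside the scope of this sketch.
BinaryImage grabLeft();
BinaryImage grabRight();
std::vector<DetectedMarker> detectMarkers(const BinaryImage& image);
void transmitToVirtualFingers(const std::vector<Vec3>& increments);

// Triangulate one matched pair: disparity d = uL - uR, depth Z = f*b/d,
// then back-project the left image centre.
Vec3 triangulate(const StereoRig& rig, const StereoPair& p)
{
    const double d = p.left.u - p.right.u;           // disparity (assumed > 0)
    const double Z = rig.f * rig.b / d;
    const double X = (p.left.u - rig.cx) * Z / rig.f;
    const double Y = (p.left.v - rig.cy) * Z / rig.f;
    return { X, Y, Z };
}

// One cycle of the figure 8 procedure.
void processFrame(const StereoRig& rig, std::size_t expectedFingers,
                  std::vector<Vec3>& previousFingertips, bool& initialized)
{
    const BinaryImage left  = grabLeft();             // open left/right eye images
    const BinaryImage right = grabRight();

    const auto leftMarkers  = detectMarkers(left);    // Sections 3.1-3.2
    const auto rightMarkers = detectMarkers(right);
    const auto pairs = matchByEdgeCount(leftMarkers, rightMarkers);   // Section 3.3

    if (pairs.size() != expectedFingers)
        return;                                       // not every fingertip identified: skip frame

    std::vector<Vec3> fingertips;                     // coordinates in the world frame
    for (const StereoPair& p : pairs)
        fingertips.push_back(triangulate(rig, p));

    if (!initialized) {                               // first frame: store the initial positions
        previousFingertips = fingertips;
        initialized = true;
        return;
    }

    std::vector<Vec3> increments;                     // fingertip increments
    for (std::size_t i = 0; i < fingertips.size(); ++i)
        increments.push_back({ fingertips[i].x - previousFingertips[i].x,
                               fingertips[i].y - previousFingertips[i].y,
                               fingertips[i].z - previousFingertips[i].z });

    transmitToVirtualFingers(increments);             // drive the virtual fingers (Section 4 mapping)
    previousFingertips = fingertips;
}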


The virtual prototype of the BH4 dexterous hand is modeled in an OpenGL graphics environment, and the software is written in VC++. The processing time for a single image is less than 20 ms on the Windows 2000 operating system with a Pentium IV 2.0 GHz CPU. The average position measuring error of the computer vision system is 1 mm, and the maximal error is less than 2 mm. The procedure is illustrated in figure 8. We choose two operations that the human hand often performs in daily life to test the presented method: one is pressing a button with a single finger, and the other is pinching with the thumb and index fingers. As the results show, the tasks are simulated in real time, and the first task is performed successfully when the human fingers move slowly. In the second task the two virtual fingers can each be controlled by the corresponding finger of the operator, but sometimes the fingertip positions of the thumb fall outside its workspace. Therefore some important problems remain to be solved: the image processing algorithm should be improved to increase the identification correctness and the grasp speed, and the mapping method also needs improvement.

Figure 9. Pressing the button

Figure 10. Moving the thumb and index fingers

    6 Conclusion

In the presented grasp planning system, the positions of the robot fingertips are determined by those of the human fingertips, which are measured by the computer vision system. The computer vision system is shown to be capable of measuring the motion of the human fingers in real time and with higher precision than a dataglove. The integration of human intelligence and experience not only reduces the complexity of grasp planning, but also makes the system capable of adapting to unstructured and unknown environments. It also provides the possibility for the robot hand to grasp arbitrarily shaped objects and to release the human operator from dangerous, heavy and tedious work. However, some issues remain to be improved, especially finding a more reliable computer vision algorithm to increase the rate of correct identification and to reach the normal speed of human hand movement. In addition, the vision system is expected to be combined with other sensors to make the grasp more efficient.

Acknowledgement: This project is supported by the National Natural Science Foundation of China (59985001) and the Doctoral Grant of the Education Ministry of China (2000000605).

    References

[1] Sing Bing Kang and Katsushi Ikeuchi, "Toward Automatic Robot Instruction from Perception - Mapping Human Grasps to Manipulator Grasps," IEEE Trans. on Robotics and Automation, 13(1), pp. 81-95, 1997.

[2] M. Fischer, P. van der Smagt, and G. Hirzinger, "Learning Techniques in a Dataglove Based Telemanipulation System for the DLR Hand," Proc. 1998 IEEE Intl. Conf. on Robotics and Automation, pp. 1603-1608, Leuven, Belgium, 1998.

[3] Haruhisa Kawasaki, Kanji Nakayama, Tetsuya Mouri, and Satoshi Ito, "Virtual Teaching Based on Hand Manipulability for Multi-Fingered Robots," Proc. 2001 IEEE Intl. Conf. on Robotics and Automation, pp. 1388-1393, Korea, 2001.

[4] Bruno M. Jau, "Dexterous Telemanipulation with Four Fingered Hand System," Proc. 1995 IEEE Intl. Conf. on Robotics and Automation, pp. 338-343, 1995.

[5] Michael L. Turner, et al., "Development and Testing of a Telemanipulation System with Arm and Hand Motion," Proc. of ASME IMECE DSC-Symposium on Haptic Interfaces, pp. 1-8, 2000.

[6] Danica Kragic, Andrew T. Miller and Peter K. Allen, "Real-Time Tracking Meets Online Grasp Planning," Proc. 2001 IEEE Intl. Conf. on Robotics and Automation, pp. 2460-2465, Korea, 2001.