
Proceedings of the 1999 IEEE International Conference on Robotics & Automation

Detroit, Michigan ● May 1999

Share Control in Intelligent Arm/Hand Teleoperated System

You Song, Wang Tianmiao, Wei Jun, Yang Fenglei, Zhang Qixian

Robotics Institute of Beijing University of Aeronautics and Astronautics, Beijing, China, 100083

Tel: (8610) 82317748 FAX: (8610) 62371315


Email: syou@public.buaa.edu.cn

Abstract: This system is mainly composed of an industrial robot, a dexterous hand (BH-3), a graphic simulation and planning module, a 6-DOF teleoperated mechanical arm (BH-TMA), a 5-fingered 11-DOF data glove (BHG-3), etc. The system incorporates vision, force, torque, optical fiber, angle, and fingertip tactile sensors. In order to implement complex operations in the integrated system, we propose a task-oriented hierarchical shared control model. Moreover, we also present our viewpoints on shared control in teleoperated systems. Finally, experimental and simulation results are given to show that the shared control architecture is efficient, practical, and successful.

1. Introduction

Shared control in teleoperated systems is a very important issue at the frontier of space robotics. Much research has been devoted to space robots, autonomous agents, intelligent control, dexterous manipulation, etc. Many experiments [1,2,3] show that it is impossible for a space robot to autonomously perform space manipulation tasks in a complex environment; therefore an astronaut or an operator on the ground needs to remotely monitor and operate the executive system. At the same time, under the influence of communication time delay and micro-gravity manipulation, erroneous judgments by the astronaut or operator cannot be avoided, since the cause-effect relation will be disrupted. The teleoperated shared control technique is a very effective method to resolve the above problems by coordinating high-level human monitoring and low-level autonomous control.

With the development of robot applications and research, a teleoperated robot system necessarily depends on various sensors and external instruments [4,5] to obtain the relevant environmental information, such as vision, force, distance, tactile, and so on. Furthermore, as the quantity and variety of sensors increase, each sensor type has its own characteristics and functions. It is therefore not feasible to find one general model for different sensors that is independent of the physical sensors. So sensor integration, fusion, and sharing techniques are becoming increasingly important for improving performance and robustness in such systems.

In this arm/hand system, because of the disequilibrium and uncertainty of time, space, and position, a single control model for the various different tasks is impossible. According to past references [8], shared control with multi-sensor integration and data fusion is still a difficult task.

Traditional shared control methods often adopt the Bayes decision approach [6] and the Dempster-Shafer theory of evidence model [7], but these two methods have their respective shortcomings. The Bayes decision approach cannot strictly distinguish between the uncertain and the unknown. Dempster-Shafer evidence theory can make up for this shortcoming, but it lacks tightness in its axiomatic mathematical definition. Here, we propose the task-oriented multi-agent shared architecture put forward for this system.

2. Architecture

This teleoperated system is a platform for research and application. It comprises 4 main modules:
1) Graphic simulation for task and trajectory planning using BH-TMA and BHG-3;
2) Local autonomous control for tracking, grabbing and manipulating workpieces in the workspace based on multiple sensors, such as global and local vision, wrist force/torque, optical fiber, and fingertip force;
3) Telemanipulation from global simulation to local planning and coordinated robotic arm/hand control via remote data communication;
4) Remote control of robotic arm/hand manipulation by BH-TMA and BHG-3.
The system physical diagram is shown in Fig. 1.

There are 15 DOF in the autonomous control sub-system (6 for the arm and 9 for the hand) and 17 DOF in the teleoperated sub-system (6 for the teleoperated mechanical arm and 11 for the data glove). With so many degrees of freedom, an effective approach is to decompose the search space into lower-dimensional subsets that can be explored using heuristic search techniques. Even then, well-chosen sensing and reconstruction strategies are essential to reduce the geometric complexity of the planning problem. A sketch of this decomposition idea follows.
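As an illustration, a greedy best-first search over one joint subset can stand in for the heuristic search. The planner below is a hedged sketch under assumed step sizes and tolerances, not the system's actual planner, and it ignores obstacles for brevity.

```python
import heapq

def plan_subspace(start, goal, step=0.1, tol=0.15):
    """Greedy best-first search over one lower-dimensional joint subset.
    Configurations are tuples of joint angles; heuristic is L1 distance."""
    h = lambda q: sum(abs(a - b) for a, b in zip(q, goal))
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, q = heapq.heappop(frontier)
        if h(q) < tol:                         # close enough to the goal
            path = []
            while q is not None:
                path.append(q)
                q = came_from[q]
            return path[::-1]
        for i in range(len(q)):                # expand single-joint moves
            for d in (-step, step):
                nq = tuple(round(v + (d if j == i else 0.0), 3)
                           for j, v in enumerate(q))
                if nq not in came_from:
                    came_from[nq] = q
                    heapq.heappush(frontier, (h(nq), nq))
    return None

# Decomposition: plan the 6-DOF arm and the 9-DOF hand subsets separately
# instead of searching the joint 15-DOF space.
arm_path = plan_subspace((0.0,) * 6, (0.5,) * 6)
hand_path = plan_subspace((0.0,) * 9, (0.3,) * 9)
```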


Fig. 1 System physical diagram

3. Implementing Techniques

The hand/arm teleoperated system can control, make decisions and execute based on multi-sensor fusion information. It can adapt to environmental change, track and locate objects, modify the planning module, remotely manipulate workpieces, receive simulation data, and harmoniously perform dexterous assembly tasks.

In this system, the shared control is mainly composed of three modules: autonomous control, teleoperation and simulation. In this article, our shared control has three different contexts:
● Sensor data sharing
● Multi-agent-based sharing
● Man/machine interactive sharing

Sensor data sharing is the basic sharing mode, and is the foundation of low-level local autonomous control. Multi-agent-based sharing is a behavior-based and task-oriented sharing mode; it is important for the hand/arm to perform dexterous and precise tasks. Man/machine interactive sharing is a system-level sharing mode that coordinates high-level planning and low-level autonomous control, and it is the guarantee for remotely fulfilling various manipulations in a safe condition.

3.1 Autonomous control

The autonomous control module fuses the environmental data obtained from the sensors, then compares and filters them with an optimized model. In terms of the planning result of the high-level simulation system, it determines action and task sequences for path, trajectory and grasping optimization, and in turn controls the low-level controller and mechanical platform to perform the respective task. The autonomous control architecture implementing these functions is shown in Fig. 2.

Fig. 2 Autonomous control architecture (the diagram's blocks include motion and task planning, module and protocol shared control, and a priori image treatment)

3.2 Sensor data sharing

The integrated system consists of many kinds of sensors. Two CCD cameras provide the location parameters to control the motion of the arm/hand and calibrate the environment between the robot and the worktable in the workspace. The 6-D wrist force/torque sensor mounted on the end-effector of the PUMA560 and the 3-D tactile sensors on the fingertips of the BH-3 dexterous hand are used to measure force and perform compliant control. Nine angular potentiometers in the finger joints provide the grasp space information. Three optical fiber sensors are used to avoid obstacles and collisions. To handle so many sensor data, we propose the low-level sensor data sharing architecture shown in Fig. 3. A minimal sketch of such shared sensor data access follows.



Fig. 3 Sensor data sharing architecture

3.3 Multi-agent-based sharing

From the figure above we can see that there are many sensors and control hardware components in this system. How to share so many external information resources well and make full use of them is the key to implementing manipulation tasks well. Task-oriented sensor fusion and sharing is the chief strategy. The fusion module in the manipulating process can be expressed by the following function:

$$
P = \begin{cases}
P(S_1), & S_1 \supset (\text{condition } s_1) \\
P(S_2), & S_2 \supset (\text{condition } s_2) \\
P(S_3), & S_3 \supset (\text{condition } s_3) \\
\;\vdots
\end{cases}
$$

where P is the logic control parameter based on multi-sensor data, T is the task decision threshold used to shift between different sub-task operations, and S represents the sensor status that assists fusion and decision. In this model, we can illustrate simply as follows: when S1 is within its relevant operating range in the work process, the fusion data P are mainly acquired from it, and the operation enters the corresponding task section. When S1 is outside its relevant operating range, the fusion data are mainly acquired from S2, S3 or others, and the control system shifts into a different task section.

The priority level of S is defined by the task, and S will be integrated to take action under different sensor conditions. A modular program with level control performs sensor data sharing, data communication, and low-level robot cooperation. A minimal sketch of this switching logic follows.
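As a hedged illustration of this priority-and-range switching: the sensor names, validity tests and ranges below are assumptions for illustration, not the system's real interfaces.

```python
SENSOR_PRIORITY = ["vision", "force_torque", "fiber"]   # task-defined priority

OPERATING_RANGE = {          # illustrative validity tests per sensor
    "vision":       lambda px: 0 <= px[0] < 640 and 0 <= px[1] < 480,
    "force_torque": lambda f: abs(f) < 20.0,             # N, below safety limit
    "fiber":        lambda d: 0.005 < d < 0.10,          # m, fingertip-to-part gap
}

def fuse(readings):
    """Select the fusion source: the highest-priority sensor whose reading
    lies in its operating range supplies P and selects the task section."""
    for name in SENSOR_PRIORITY:
        value = readings.get(name)
        if value is not None and OPERATING_RANGE[name](value):
            return name, value        # (task section, control parameter P)
    return "halt", None               # no sensor valid: stop safely

# Example: vision lost the target, so force/torque drives the task section
print(fuse({"vision": None, "force_torque": 3.2, "fiber": 0.25}))
```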

In the teleoperated system, we adopt a multi-level sensor integration and data fusion module. The functions of the different sensors are:

Distance sensors: Using the three optical fiber sensors, we obtain the distance from the fingertips of the dexterous hand to the workpiece. This is necessary for performing the operating tasks accurately; it is also the key to avoiding unexpected collisions between the hand and the operated object.

Force/torque sensor: The 6-D wrist force/torque sensor provides force and torque values to perform precise axle-hole assembly and workpiece access, and ensures experimental safety through compliant control.

Fingertip force sensors: With the fingertip force sensors, we obtain the contact force between finger and object during operation, and can compute the resultant force of the 3 fingers. If the value exceeds the threshold in some direction, the sub-system immediately sends a command to stop the movement in that direction, and the manipulation does not resume until the force in every direction is acceptable. By fusing the finger force data, the force acting on the fingers is limited to a proper range.

Angular sensors: During dexterous hand movement, the sub-system reads every joint angle in every action cycle (25 ms) to judge whether the joint angle is in the normal range (a sketch of this cycle follows the list). Once an angle exceeds the normal limit, the operation is stopped at once.

Visual processing: The vision sensors provide the information used to calibrate the system and accurately locate the workpiece position for real-time visual tracking and search.
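A minimal sketch of such a 25 ms monitoring cycle is given below; the limits, accessor functions and stop hook are illustrative assumptions, not the system's documented interface.

```python
import time

JOINT_LIMITS = [(-0.35, 1.57)] * 9   # rad, one (min, max) per hand joint (assumed)
FORCE_LIMIT = 5.0                    # N, per-direction fingertip threshold (assumed)
CYCLE = 0.025                        # s, the 25 ms action cycle from the paper

def safe(angles, forces):
    """True if every joint angle is in range and no force axis exceeds the limit."""
    angles_ok = all(lo <= a <= hi for a, (lo, hi) in zip(angles, JOINT_LIMITS))
    forces_ok = all(abs(f) <= FORCE_LIMIT for finger in forces for f in finger)
    return angles_ok and forces_ok

def monitor(read_joint_angles, read_fingertip_forces, stop_motion):
    """Poll the sensors every cycle; halt immediately on any violation."""
    while True:
        if not safe(read_joint_angles(), read_fingertip_forces()):
            stop_motion()
            break
        time.sleep(CYCLE)

# Example wiring with dummy accessors (replace with real driver calls):
# monitor(lambda: [0.1] * 9, lambda: [(0.0, 0.0, 0.1)] * 3, lambda: None)
```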

3.4 Man/machine interactive sharing

We have also developed a real-time control program using the 6-DOF mechanical arm and the 5-finger 11-DOF data glove made by us. It is a very useful platform for researching space robotics, teleoperated shared control technology and so on.

Data glove: Using the BHG-3 data glove, we can control the dexterous hand to perform the teleoperated tasks in coordination.
Teleoperation: Using the 6-D BH-TMA, we can control the PUMA560 robot arm to perform the teleoperated tasks in coordination. A sketch of the glove-to-hand joint mapping is given below.
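As a hedged sketch of the glove-to-hand side of this coordination: the per-joint linear calibration and the 11-to-9 DOF assignment below are assumptions for illustration, not the documented BHG-3 interface.

```python
# Illustrative calibration: command = scale * (glove_angle - offset), clamped.
CALIBRATION = [                 # one (offset_rad, scale, lo, hi) per hand joint
    (0.0, 1.0, -0.35, 1.57),
] * 9

def glove_to_hand(glove_angles):
    """Map measured glove joint angles onto the 9 actuated BH-3 joints.
    Which glove DOF drive which hand joints is assumed here."""
    driven = glove_angles[:9]   # assumption: first 9 glove DOF map one-to-one
    cmds = []
    for a, (off, k, lo, hi) in zip(driven, CALIBRATION):
        cmds.append(min(hi, max(lo, k * (a - off))))   # clamp to joint range
    return cmds

print(glove_to_hand([0.2] * 11))   # 11 glove readings -> 9 hand commands
```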

3.5 5-Finger 11-Joint data glove

Fig. 4 Data glove full view

The BHG-3 data glove consists of mechanical parts, electrical parts, an A/D data-collecting board and simulation software. It suits different types of adult hands, and detects small movements of the 11 DOF distributed on the five fingers. The mechanical part has 193 parts, and the net weight is 300 g. It can detect tiny movements from -20° to 90°, with a resolution of up to 0.6°. Through the graphic simulation software, it performs man/machine interactive control in real time.

BHG-3 adopts a mechanical connecting-rod mechanism to detect joint movement, including a 2-DOF spatial 6-rod structure and a 1-DOF planar 4-rod structure. When worn, BHG-3 is fixed to the hand by a leather belt. The theoretical analysis of detection for the 2-DOF mechanism is shown in Fig. 5.


Fig. 5 Detecting theoretical analysis

where α is the joint angle and β is the potentiometer angle. If there is a subtle change Δα in the joint angle, the potentiometer detects a subtle change Δβ. We can compute the non-concentric circular movement to obtain the finger joint position. Based on the law of sines, we work out Δα:

$$\frac{\sin\beta}{\sin\gamma} = \frac{r}{d} \tag{1}$$

$$\sin\gamma = \frac{d\sin\beta}{r} \tag{2}$$

Considering that Δγ is a subtle change:

$$\sin(\gamma + \Delta\gamma) \approx \sin\gamma + \Delta\gamma\cos\gamma \tag{3}$$

Solving:

$$\Delta\gamma = \frac{d\sin(\beta + \Delta\beta) - r\sin\gamma}{r\cos\gamma} \tag{4}$$

From (2) and (4), Δα is the following:

$$\Delta\alpha = \Delta\beta + \Delta\gamma = \Delta\beta + \frac{d\sin(\beta + \Delta\beta) - d\sin\beta}{\sqrt{r^2 - d^2\sin^2\beta}} \tag{5}$$

Because the function between Δα and Δβ is stable, it is very convenient and quick to compute the joint change. Meanwhile, the data glove structure is simple and well suited to teleoperation by the operator. A numerical sketch of this computation is given below.
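Equations (2), (4) and (5) translate directly into a short computation. The linkage lengths r and d below are illustrative values, not the real BHG-3 dimensions.

```python
import math

def delta_alpha(beta, d_beta, r=0.03, d=0.02):
    """Joint-angle change per eq. (5) from the measured potentiometer
    change d_beta; r and d are linkage lengths (m), angles in radians."""
    sin_gamma = d * math.sin(beta) / r                   # eq. (2)
    cos_gamma = math.sqrt(1.0 - sin_gamma ** 2)
    d_gamma = (d * math.sin(beta + d_beta)
               - r * sin_gamma) / (r * cos_gamma)        # eq. (4)
    return d_beta + d_gamma                              # eq. (5)

# Example: a 0.6 degree potentiometer step at beta = 30 degrees
print(math.degrees(delta_alpha(math.radians(30), math.radians(0.6))))
```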

3.6 Visual servo calibration

We adopt vision sensors in the hand/arm teleoperated system to resolve uncertainty, calibrate the workpiece position and posture, and identify and locate objects in the external environment. Meanwhile, they improve the system's autonomous ability and support dexterous and accurate manipulation.

The system has two CCD cameras: one is arranged above the workspace for global sensing, and another is on the back of the dexterous hand for local sensing. The images collected from the two CCDs are transmitted to a high-level PC, and then pre-processed, computed and analyzed by the visual algorithms.

In a visual servo system, to locate accurately, we must build the image relation between the 2-D plane and 3-D space to define the target's 3-D position. The usual method is to calibrate first, obtaining the intrinsic (focal distance, proportional factor, distortion coefficient, etc.) and extrinsic (direction and position in the world coordinate frame, etc.) parameters of the vision sensor, and then compute the precise target.

Considering that these complex calibrating and locating algorithms are limited to special environments, we propose a new calibrating and locating method directed at our hand/arm integrated teleoperated system. We use a self-designed 3-D two-layer template with markers to solve the vision sensor calibration simply and correct image distortion. From two images grabbed while the robot moves, we model two projection lines through the target point, and work out their point of intersection to locate the real position of the target point, that is, the target point coordinates. Because the projection line through the target point intersects the template, we can get its coordinates in accordance with the projection relation.

There are two stages in the whole locating process: calibration and location. The calibrating principle diagram is given in Fig. 6.


Fig. 6 Calibrating principle

where A is the image plane; B is the first position of the template; C is the moved position of the template; OXYZ represents the image coordinate axes; Ouvw represents the template coordinate axes; d represents the sliding distance of the template; P is the target point; P′ is its image point; and P1 and P2 are the intersection points between the projection line and the template. Suppose that the marker point (X_i, Y_i) in the template has the different image coordinates (x_{1i}, y_{1i}) and (x_{2i}, y_{2i}), i = 1, 2, ..., m (m is the quantity of marker points).

$$
X_i = f_{1x}(x_{1i}, y_{1i}), \qquad Y_i = f_{1y}(x_{1i}, y_{1i})
$$
$$
X_i = f_{2x}(x_{2i}, y_{2i}), \qquad Y_i = f_{2y}(x_{2i}, y_{2i}) \tag{6}
$$

In terms of (6), we can calibrate the visual system mappings (f_x, f_y). According to the projection relation preserved between the images and the template, we can get the intersection point between the projection line and the virtual template (i.e., define the projection line through the target point). In turn we can define the two projection lines in the two images, and finally find their intersection point to complete a location. A minimal sketch of this two-line triangulation step is given below.
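The final step, intersecting the two projection lines, is standard geometry. The sketch below uses the closest-point midpoint between two 3-D lines as the located target, under the assumption that the rays come from the template-based calibration above.

```python
import numpy as np

def locate(p1, d1, p2, d2):
    """Each projection line is a point p and direction d. Returns the
    midpoint of the shortest segment between the lines (the target)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)
    denom = np.dot(n, n)
    if denom < 1e-12:                      # lines (nearly) parallel
        raise ValueError("projection lines are parallel")
    t1 = np.dot(np.cross(p2 - p1, d2), n) / denom
    t2 = np.dot(np.cross(p2 - p1, d1), n) / denom
    q1, q2 = p1 + t1 * d1, p2 + t2 * d2    # closest points on each line
    return (q1 + q2) / 2.0                 # target point estimate

# Example with two illustrative rays that meet at (1, 1, 0)
print(locate(np.array([0., 0., 0.]), np.array([1., 1., 0.]),
             np.array([1., 0., 0.]), np.array([0., 1., 0.])))
```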

4. Experiments

4.1 Autonomous operation

A) Switch button
In the limited operating environment, the system can autonomously use local vision to track and locate a button on the worktable. At the same time, the arm and hand approach the button in a human-like posture to push it. Once the force threshold monitored in real time by the force sensor is exceeded, the system stops immediately, as shown in Fig. 7.

Fig. 7 Push button

B) Twist the bulb and the valve
Guided by local vision, the system can accurately locate the bulb and valve on the operating table, and control the dexterous hand to take hold of them. After grasping the bulb or the valve, the robot and the BH-3 hand can harmoniously perform a series of manipulations, such as twisting, loosening, twisting again, loosening again, etc., as shown in Fig. 8.

Fig. 8 Twist bulb

4.2 Plug-in hole for assembly

To perform the plug-in-hole task, a degenerated grasp method is employed to ensure grasping reliability, concentricity and strength. To ensure plug-in precision, vision tracking is used for locating the workpiece on the operation table. To ensure the system's reliability and safety, the 6-D wrist force sensor is used to detect the tactile force threshold in real time during hole insertion.

Fig. 9 Plug in the hole

For example, in Fig. 9 the system controls the arm and hand to move above the workpiece position, and immediately uses local vision to accurately locate the workpiece on the operation table. After that, the BH-3 hand firmly grasps the workpiece with the degenerated method, pulls the workpiece up from the hole in the operation table, and then carries it to the next work area. The system autonomously switches the local vision to search for the hole, and accurately locates the hole position. Guided by the 6-D wrist force sensor, the system inserts the workpiece slowly. Once the threshold of the inserting force is exceeded, it stops the operation autonomously to protect the arm and hand.

4.3 Grab the cup and pour a cup of tea

The pouring water experiment is shown in Fig. 10.


Fig. 10 Pour a cup of tea

We designed a set of human-like tasks to demonstrate dexterous manipulation. The system grabs a cup that is full of tea from the worktable and, guided by vision, moves to another position and slowly pours the water accurately into an empty cup. Moreover, the system can also use the optical-fiber distance sensors to protect against collision in case of any damage during movement.

4.4 Remote control with teleoperated mechanicalarm and data glove

We have also developed an approach to control the PUMA560 arm using BH-TMA and to control the BH-3 dexterous hand using BHG-3 over a long distance, to perform the relevant obstacle-avoidance and cup-grabbing operations. It is useful for research on space robotics and teleoperated shared control techniques. Our research work is shown in Fig. 11 and Fig. 12.

Fig. 11 Control arm/hand to grab cup by BH-TMA and BHG-3

Fig. 12 Real obstacle-avoidance operation


5. Conclusion

This paper proposes a multi-sensor integrated sharing model for hand/arm teleoperation. We performed local autonomy, teleoperated control and simulation planning experiments. Based on our experimental research, multi-agent-based sharing and man/machine interactive sharing are our main contributions. Our experimental verifications show that the methods used in the operating tasks, including grabbing the cup and pouring a cup of tea, plugging into a hole for assembly, twisting the bulb (valve) and teleoperated control, are highly efficient and simple.

In our next research, we will continuously improve the hardware environment to increase the integrated system's real-time performance and accuracy, and will add a set of VR equipment for teleoperated dexterity. We will further develop web-based teleoperation theory, large-delay control, graphic simulation, intelligent local autonomy, force compliant control and so on.

References

[1] G. Hirzinger. "Space Robot Activities - A Survey." DLR Institute for Robotics and System Dynamics, Scientific Report 1987-1992.
[2] K. Machida, Y. Toda, Y. Murase, and S. Komada. "Precise Space Telerobotic System Using 3-Finger Multisensory Hand." Proc. of the IEEE Int. Conf. on Robotics and Automation, 1995.
[3] M. Nohmi, D. N. Nenchev, and M. Uchiyama. "Momentum Control of a Tethered Space Robot Through Tether Tension Control." Proc. of the IEEE Int. Conf. on Robotics and Automation, 1998, pp. 920-925.
[4] R. Kelly, P. Shirkey, and M. W. Spong. "Fixed-Camera Visual Servo Control for Planar Robots." Proc. of the IEEE Int. Conf. on Robotics and Automation, 1996, pp. 2643-2649.
[5] U. Andersen. "Design of a Data Glove Input System." M.Sc. Thesis, University of Salford, 1994.
[6] H. F. Durrant-Whyte. "Sensor Models and Multisensor Integration." Int. J. Robot. Res., 1988, 7(6): 97-113.
[7] P. L. Bogler. "Shafer-Dempster Reasoning with Applications to Multisensor Target Identification Systems." IEEE Trans. Syst. Man Cybern., 1987, SMC-17(6): 968-977.
[8] F. C. A. Groen, et al. "A Multisensor Robot Assembly Station." Robotics, 1986, 2: 205-214.