
The 30th Annual Conference of the IEEE Industrial Electronics Society, November 2-6, 2004, Busan, Korea

New Visual Feedback Control Design for Guidance of a Mobile Robot Using a Vanishing Point

Shigeru Uchikado, Sun Lili and Midori Nagayoshi

Information & Sciences Department, Tokyo Denki University, Ishizaka, Hatoyama-machi, Hiki-gun, Saitama, 350-0394, Japan. E-mail: [email protected], 03smi [email protected], [email protected]

Abstract - We consider the problem of navigating a mobile robot with a camera in an indoor environment. A new visual control method using the vanishing point of the parallel lines at both sides of the corridor is introduced, which we call the vanishing point visual control method. This method gives a lot of useful information for the design. Therefore we need no a priori information except the widths of the corridor and the robot, and we can easily design the visual feedback system for guidance and obstacle avoidance by using this method. The idea is based on perspective geometry such as perspective projection, epipolar geometry, the vanishing point, and projective transformation.

I. INTRODUCTION

Two typical methods [1][2] using one camera have been proposed. One uses fluorescent tubes as natural landmarks, and also needs a map of the lights provided in advance. The other uses a wall-following scheme and a special omni-directional sensor for moving along a corridor. The problem considered in this paper is to design guidance in a corridor and at a corner, and obstacle avoidance. To achieve these, we propose a new method called the vanishing point control method. The method is based on important concepts of projective geometry, and a visual control system for the robot can be designed easily using it. First of all, we review perspective geometry: perspective projection, epipolar geometry, and the vanishing point. By using a projective transformation between the image planes of the onboard camera and an imaginary camera, the epipoles of these image planes indicate the robot's translation direction. Hence we can easily turn the robot's translation direction to an arbitrary direction. The design consists of three methods: methods to guide the robot in the corridor and at a corner, and a method to avoid an obstacle. Here a new visual control method using the vanishing point of the parallel lines at both sides of the corridor is introduced, and we call it the vanishing point visual control method. This method gives a lot of useful information for the design. We need no a priori information except the widths of the corridor and the robot, and we can easily design the visual feedback system for guidance and obstacle avoidance by using this method.

II. PROJECTIVE GEOMETRY

A. Coordinate Systems

Here we consider three coordinate systems: the world, the robot and the camera coordinate systems, with the subscripts "W", "R" and "C", respectively. These also denote their current positions. A Cartesian "xy" coordinate system on the normalized image plane, called the image coordinate system, is taken in such a way that the x- and y-axes are parallel to the X_C- and Y_C-axes, respectively.

Fig. 1 Four Coordinate Systems

Fig. 2 Perspective Projection of the Pinhole Camera Model

where R_CR is the 3×3 rotation matrix and T_CR is the



3×1 translation vector, and (R_CR, T_CR) are the offset values between the robot and the camera coordinate systems.

B. Pinhole Camera Model

We use the pinhole camera shown in Fig. 2 as the simplest ideal model of camera function.

C. Perspective Projection

Suppose that the points P_C and x_C = (x, y)^T are in the camera and normalized image coordinate systems, respectively. Let a camera C' be obtained by rotating and translating the camera C by R and T. The relation between P_C and P'_C can be described as

    P'_C = R P_C + T    (1)

Fig. 3 The Camera C' Rotated and Translated by R and T

where (R, T) are called the extrinsic parameters and Π, Π' are the normalized image planes of the cameras C, C', respectively. Then the normalized image coordinates (x, y) are related to the camera coordinates (X_C, Y_C, Z_C) by

    x = f X_C / Z_C,  y = f Y_C / Z_C    (2)

In homogeneous coordinates, the above equations (1) and (2) with f = 1 become

    λ x̃_C = (I | 0) P̃_C    (3)
    λ' x̃'_C = (I | 0) P̃'_C = (R | T) P̃_C    (4)

where P̃_C, x̃_C, x̃'_C are homogeneous coordinates and λ, λ' ∈ R.
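As a quick illustration of eqs. (1) and (2), the rigid motion and the normalized projection can be sketched as follows; the point, rotation and translation values are purely illustrative and not from the paper:

```python
import math

def rigid_transform(R, T, P):
    """P' = R P + T, eq. (1); R is a 3x3 list of rows, T and P are 3-vectors."""
    return [sum(R[i][j] * P[j] for j in range(3)) + T[i] for i in range(3)]

def project(P, f=1.0):
    """Normalized image coordinates, eq. (2): x = f X_C/Z_C, y = f Y_C/Z_C."""
    X, Y, Z = P
    return (f * X / Z, f * Y / Z)

# A point one metre in front of camera C, slightly off-axis (illustrative).
P_C = [0.2, 0.1, 1.0]

# Camera C': C rotated 10 degrees about the Y axis and translated by T.
a = math.radians(10.0)
R = [[math.cos(a), 0.0, math.sin(a)],
     [0.0,         1.0, 0.0],
     [-math.sin(a), 0.0, math.cos(a)]]
T = [0.05, 0.0, 0.0]

P_C2 = rigid_transform(R, T, P_C)   # the same point seen from C', eq. (1)
x_C, x_C2 = project(P_C), project(P_C2)
print(x_C, x_C2)
```

With f = 1 this is exactly the homogeneous form of eqs. (3) and (4).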

Now the following 3×3 upper triangular matrix A, containing the intrinsic parameters of the camera, is introduced [4][5]. Then we can transform the normalized image coordinates into pixel coordinates by

    m̃_C = A x̃_C,  m̃'_C = A x̃'_C    (5)

where m̃_C = (u, v, 1)^T and m̃'_C = (u', v', 1)^T are the homogeneous coordinates of the points (u, v) and (u', v') in the pixel image coordinate system corresponding to the normalized image system, respectively. These yield the following equations:

    λ m̃_C = A (I | 0) P̃_C    (6)
    λ' m̃'_C = A (R | T) P̃_C    (7)
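The pixel mapping of eq. (5) can be sketched as below; the focal lengths and principal point in A are hypothetical values, not parameters from the paper:

```python
def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Upper triangular intrinsic matrix A (illustrative values).
A = [[800.0,   0.0, 320.0],    # f_x, skew, u_0
     [  0.0, 780.0, 240.0],    # 0,   f_y,  v_0
     [  0.0,   0.0,   1.0]]

def to_pixels(x, y):
    """m~_C = A x~_C, eq. (5): normalized point (x, y) -> pixel point (u, v)."""
    u, v, w = matvec(A, [x, y, 1.0])
    return (u / w, v / w)

print(to_pixels(0.2, 0.1))   # -> (480.0, 318.0)
```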

D. Epipolar Geometry

The epipolar constraint says that the three points, that is, the centers of the camera C and the imaginary camera C' and the point P_C in the camera coordinate system, are all coplanar [4][5]. Then the constraint can be expressed simply as

    x̃'_C^T (T × R x̃_C) = 0    (8)

By defining [T]× as the matrix such that [T]× y = T × y for any vector y, we can rewrite the above equation as the linear equation

    x̃'_C^T E x̃_C = 0    (9)


Fig. 4 The Epipolar Plane

where E = [T]× R is called the essential matrix [4][5]. Moreover, using equations (5), it follows that

    m̃'_C^T F m̃_C = 0    (10)

where F = A^{-T} [T]× R A^{-1} is called the fundamental matrix [4][5]. Furthermore, from equations (6) and (7), we have

    λ' m̃'_C = λ A R A^{-1} m̃_C + η A T,  η ∈ R    (11)
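The epipolar constraint of eqs. (8) and (9) can be checked numerically; a minimal sketch with an illustrative motion (R, T), using the homogeneous coordinates of P_C and P'_C directly as x̃_C and x̃'_C:

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    """T x y, i.e. the action of the matrix [T]x used in eq. (9)."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Illustrative motion between the cameras C and C'.
a_rot = math.radians(10.0)
R = [[math.cos(a_rot), 0.0, math.sin(a_rot)],
     [0.0,             1.0, 0.0],
     [-math.sin(a_rot), 0.0, math.cos(a_rot)]]
T = [0.05, 0.0, 0.01]

def epipolar_residual(P_C):
    """x~'_C^T E x~_C with E = [T]x R; zero for a true correspondence."""
    x = P_C                                             # x~_C ~ P_C
    xp = [matvec(R, P_C)[i] + T[i] for i in range(3)]   # x~'_C ~ R P_C + T
    return dot(xp, cross(T, matvec(R, x)))              # x'^T [T]x R x

print(abs(epipolar_residual([0.2, 0.1, 1.0])))          # ~ 0: constraint holds
```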



Now define the points ẽ, ẽ' as

    ẽ' = A T    (13)
    ẽ = A R^T T    (14)

Then these points satisfy the following equations:

    F ẽ = 0    (15)
    F^T ẽ' = 0    (16)
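Equations (15) and (16) can be verified numerically. A minimal sketch, taking A = I for simplicity so that F reduces to E = [T]×R and the points become ẽ' = T and ẽ = R^T T; the motion values are illustrative:

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

a_rot = math.radians(10.0)
R = [[math.cos(a_rot), 0.0, math.sin(a_rot)],
     [0.0,             1.0, 0.0],
     [-math.sin(a_rot), 0.0, math.cos(a_rot)]]
T = [0.05, 0.0, 0.01]

e = matvec(transpose(R), T)   # eq. (14) with A = I
e_prime = T                   # eq. (13) with A = I

def F_times(v):
    """F v = [T]x (R v) when A = I."""
    return cross(T, matvec(R, v))

def Ft_times(v):
    """F^T v = -R^T (T x v), since [T]x^T = -[T]x."""
    w = cross(T, v)
    return [-x for x in matvec(transpose(R), w)]

print(F_times(e), Ft_times(e_prime))   # both ~ [0, 0, 0], eqs. (15)-(16)
```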

Hence ẽ and ẽ' are called the epipoles.

E. Vanishing Point

Now consider two parallel lines in the world, written as

    X_j(μ_j) = X_0j + μ_j v,  μ_j ∈ R,  v = (v_d^T, 0)^T ∈ R^{4×1},  j = 1, 2    (17)

where the common image point to which the projections of both lines converge as μ_j → ∞, determined by the direction v_d, is called the vanishing point [4][5].

F. Projective Transformation

First of all, since the rank of F is 2, we have the following Singular Value Decomposition of the fundamental matrix [7]:

    F = U D V^T    (18)

where U, V are orthogonal matrices with U U^T = V V^T = I (the unit matrix) and

    D = diag(r, s, 0),  r ≥ s > 0
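The vanishing point of subsection E can be illustrated numerically: the images of two parallel world lines converge, as μ_j grows, to the common point (v_x/v_z, v_y/v_z). The line offsets and direction below are illustrative:

```python
def project(P):
    X, Y, Z = P
    return (X / Z, Y / Z)

v_dir = (0.3, -0.1, 1.0)                           # shared direction of both lines
offsets = [(-1.0, -0.5, 1.0), (1.0, -0.5, 1.0)]    # e.g. the two corridor edges

def image_point(X0, mu):
    """Projection of X(mu) = X0 + mu * v_dir, as in eq. (17)."""
    return project(tuple(X0[i] + mu * v_dir[i] for i in range(3)))

vp = (v_dir[0] / v_dir[2], v_dir[1] / v_dir[2])    # the vanishing point
for mu in (1.0, 10.0, 1000.0):
    print([image_point(X0, mu) for X0 in offsets]) # both lines approach vp
print(vp)   # -> (0.3, -0.1)
```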


The center of the transformed plane is the translation direction of the robot [7]. So we can easily set the car to an arbitrary translation direction by turning the car and the camera C.

Fig. 6 The Transformed Plane

IV. STATEMENT OF THE PROBLEM

Our research deals with a mobile robot used in indoor environments such as offices and hospitals. Hence the space the robot moves in has various restrictions, as shown in figure 7: there are corridors, obstacles such as flowerpots and people, and corners.

So suppose the following. The car's weight can be ignored, and the intrinsic matrix A has already been identified. The offset values (R_CR, T_CR) are known, and ||T_CR|| is small. Furthermore, both widths of the car and the corridor, 2d and d_C, are known, and both sides of the corridor are straight parallel lines that have been identified on the image plane.

Fig. 7 Indoor Figure of the Building Considered in this Paper

Given the navigation course shown by the dashed line in figure 7, the aim of our research is to design safe guidance of the robot from the start to the destination using only the current image taken with the camera C. In order to achieve this, we have to design the following.

Guidance: how to guide the car in a corridor and at a corner.

Control: how to avoid an obstacle.

Fig. 8 The Vanishing Point Control Method

V. DESIGN METHOD OF THE SAFE GUIDANCE

A. Method to Guide the Robot in the Corridor

Now we propose a vanishing point visual control method (VPVCM) for guiding the robot in a corridor. The VPVCM uses the vanishing point of the parallel lines at both sides of the corridor, as shown in figure 8. That is, if the vanishing point v_p on the normalized image plane coincides with the epipole ẽ, then the robot goes straight toward the point v_p along the dashed line in figure 8, which is parallel to the lines at both sides of the corridor. This method is similar to the wall-following scheme [2], which needs a special omnidirectional sensor. In this case the robot is the controlled system and is controlled as follows.

    Reference output: r = θ(v_p)
    Output error:     e = r - θ(P)
    Controller:

    Δv_R = k_1 e,  Δv_L = -k_1 e
    v_R = v_RC + Δv_R,  v_L = v_LC + Δv_L

where v_R, v_L are the inputs of the controlled system, θ(P) is its output, P is a point on the image plane denoting the robot's translation direction, k_1 > 0 is an arbitrary constant control gain, and v_RC, v_LC are the same constant nominal speeds. The control system for the robot is shown in figure 9.
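One step of the VPVCM controller above can be sketched as follows, assuming the correction enters the two wheels with opposite signs, as a differential drive requires; the gain and nominal speed are illustrative:

```python
def vpvcm_step(theta_vp, theta_P, v_nominal=0.5, k1=0.8):
    """One control step: steer the bearing theta(P) toward theta(v_p).

    theta_vp : bearing of the vanishing point (reference output r)
    theta_P  : bearing of the current translation direction (output)
    Returns the wheel speeds (v_R, v_L).
    """
    e = theta_vp - theta_P        # output error e = r - theta(P)
    dv = k1 * e                   # differential correction
    return (v_nominal + dv, v_nominal - dv)

# Vanishing point offset by 0.1 rad from the current translation direction:
print(vpvcm_step(0.1, 0.0))      # the speeds differ, turning the robot
```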



Fig. 9 The Control System for the Robot

That is, the robot can be easily guided and translated in the corridor while keeping a constant distance d from the wall.

(1) Identification of the Vanishing Point of the Corridor

Many methods [8][9] to identify the parallel lines of the corridor have been proposed, and consequently the vanishing point can be easily calculated from these identified lines.

(2) Determination of a Translation Lane in the Image Plane

It is difficult to determine the translation lane when one camera is used. However, it can be easily done using the property of the vanishing point. Now suppose that d and d_C are known. Then the translation lane is determined so that the following relationship between the parallel lines and the lane is satisfied on the transformed plane, as shown in figure 10:

    d / d_C = |y_2 - y_1| / |y_3 - y_1|

where p_1 = (x_1, y_1), p_2 = (x_2, y_2), p_3 = (x_3, y_3) and x_1 = x_2 = x_3.
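Solving the relation above for the lane ordinate gives a one-line rule; the corridor width and image ordinates below are illustrative:

```python
def lane_ordinate(d, d_C, y1, y3):
    """y2 of the lane point p2 from d / d_C = |y2 - y1| / |y3 - y1|,
    measured from the wall point p1 toward p3 (with x1 = x2 = x3)."""
    return y1 + (d / d_C) * (y3 - y1)

# A corridor of width d_C = 2.0 m imaged between y1 = -0.4 and y3 = 0.6;
# a lane d = 0.5 m from the wall lands at:
print(lane_ordinate(0.5, 2.0, -0.4, 0.6))   # ~ -0.15
```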

Fig. 10 The Figure Explaining the Determination of the Lane

B. Method to Avoid an Obstacle

Three ways for the robot not to bump into an obstacle in front of it are considered: keeping going, stopping before bumping, and going on by avoiding the obstacle. The methods proposed here are illustrated in the following figures.

(1) Not Changing Lanes

(2) Changing Lanes

(3) Stopping There

Fig. 11 The Idea of the Three Methods

where d_1 is

    d_1 = 2 d d_r / d_C

If there is a space d_1 on the left, then the robot moves without changing lanes. If there is a space on the right but not on the left, then the robot changes lanes onto the right side. Furthermore, if d_0 ≤ d, then the robot stops there. However, there remains the problem of how to change lanes.
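The three-way decision of Fig. 11 can be sketched as a few lines of logic. The expression for d_1 below follows the reconstruction above, and the gap values are illustrative, so treat this as an assumption-laden sketch rather than the paper's exact rule:

```python
def required_width(d, d_r, d_C):
    """d_1 = 2 d d_r / d_C (reconstructed from the text above)."""
    return 2.0 * d * d_r / d_C

def avoid_action(left_gap, right_gap, d1):
    """Keep lane, change to the right lane, or stop (Fig. 11)."""
    if left_gap >= d1:
        return "keep lane"
    if right_gap >= d1:
        return "change to right lane"
    return "stop"

d1 = required_width(d=0.3, d_r=0.8, d_C=2.0)
print(d1, avoid_action(0.5, 0.1, d1))
```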

Step 1: Translation to P_5. The reference output is set to θ(P_5), and the point P_5 is set so that the following is satisfied.



where Δp = |p_4 - P_4| and is considered only in the case d(Δp)/dt > 0. This means that the control point P_5 is set in proportion to the relative speed of the obstacle, which is calculated from the rate at which the obstacle grows larger in the image. This step ends when a space for the robot's translation lane has formed on the right side.

Step 2: VPVCM on the right side. The robot is controlled again, on the right side, by using the VPVCM as proposed in section V-A.

Step 3: VPVCM on the left side. After the robot passes the obstacle and a constant time passes, the robot is again controlled by Step 1 so that it can be guided on the left side. A control point like P_5 can be arbitrarily set in this case.

C. Method to Guide the Robot at a Corner

Figure 12 is an image from the camera on the robot as it moves near a corner. It is noticed from the left line of the corridor in figure 12 that the corridor curves to the left at the corner. So the problem is how to control the robot to the left at the corner. All significant information in figure 12 is lost during the turn, and we cannot use it for guidance. So we consider a simple guidance of the robot that uses only the point P_6 and the distance d_2. Namely, if the point P_6 vanishes from the image plane and then a constant time passes, the robot begins to turn to the left by the following open-loop control law until the next vanishing point appears:

    Δv_R = k_3 (1/d_2),  Δv_L = 0

where k_3 > 0 is an arbitrary design parameter, and the robot turns at a speed inversely proportional to the distance d_2.
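The open-loop corner law above can be sketched as follows; k_3 and the nominal speed are illustrative values:

```python
def corner_turn_inputs(d2, v_nominal=0.5, k3=0.2):
    """Wheel speeds while turning left at the corner:
    Delta v_R = k3 / d2, Delta v_L = 0, so the turn rate is
    inversely proportional to the distance d2."""
    return (v_nominal + k3 / d2, v_nominal)   # (v_R, v_L)

print(corner_turn_inputs(2.0))   # -> (0.6, 0.5)
```

A larger d_2 thus produces a gentler turn, matching the text.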

VI. CONCLUSIONS

In this paper we proposed a new visual control method for navigation of a mobile robot in an indoor environment. The new visual control method uses the vanishing point of the parallel lines at both sides of the corridor, and we call it the vanishing point visual control method. The vanishing point visual control method gives a lot of useful information for the design.

Fig. 12 An Image from the Camera on the Robot near a Corner (the vanishing point is marked)

By using the method, we can easily design guidance in a corridor and at a corner, and obstacle avoidance. We also show the visual open-loop control system for guidance at the corner. Finally, a numerical simulation is performed with a simple example in order to investigate the effect of the proposed method, and good results are obtained.

VII. REFERENCES

[1] Fabien Launay, Akihisa Ohya and Shin'ichi Yuta, "Vision-Based Navigation of Mobile Robot using Fluorescent Tubes", IEEE/RSJ International Conference on Intelligent Robots and Systems, Kagawa, Japan, 2000.
[2] A. K. Das, R. Fierro, V. Kumar, B. Southall, J. Spletzer and C. J. Taylor, "Real-Time Vision-Based Control of a Nonholonomic Mobile Robot", The 2001 IEEE International Conference on Robotics and Automation, Seoul, Korea, 2001.
[3] Kyoung Sig Roh, Wang Heon Lee and In So Kweon, "Obstacle Detection and Self-Localization without Camera Calibration Using Projective Invariants", IEEE/RSJ International Conference on Intelligent Robots and Systems, 1997.
[4] Kenichi Kanatani, "Group-Theoretical Methods in Image Understanding", Springer-Verlag (Berlin), 1990.
[5] Z. Zhang and O. Faugeras, "3D Dynamic Scene Analysis", Springer Series in Information Sciences, Vol. 27, Springer-Verlag (Berlin), 1992.
[6] Fumio Miyazaki, Yasuhiro Masutani and Atsushi Nishikawa, "Introduction to Robotics, Chapter 6", Kyoritsu Press, 2000.
[7] Richard I. Hartley, "Kruppa's Equations Derived from the Fundamental Matrix", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 2, pp. 133-135, 1997.
[8] P. V. C. Hough, "Method and means for recognizing complex patterns", U.S. Patent 3,069,654, 1962.
[9] R. O. Duda and P. E. Hart, "Use of the Hough Transformation to Detect Lines and Curves in Pictures", Comm. ACM, Vol. 15, No. 1, pp. 11-15, 1972.
