
A Vision-based Controller for Path Tracking of Autonomous Underwater Vehicles

Carlos Berger 1,2, Mario A. Jordán 1,2,⋆, Jorge L. Bustamante 1,2 and Emanuel Trabes 1

1 Argentinean Institute of Oceanography (IADO-CONICET). Florida 8000, Complejo CCT, Edificio E1, B8000FWB, Bahía Blanca, ARGENTINA.
2 Dto. Ingeniería Eléctrica y de Computadoras - Univ. Nac. del Sur (DIEC-UNS).

Abstract. This paper addresses the design of a vision-based controller for autonomous underwater vehicles (AUVs). The control system performs the path following of a line on the seafloor and automatically tunes the cruise kinematics for the advance. The navigation process rests upon a sensor which estimates the relative vehicle position directly from the perspective of the camera, without the need of camera models for estimating physical variables. For the state estimation, real-time image processing is employed. The control strategy is developed in the state space of pure vision coordinates. The strategy aims to reach favorable configurations of the AUV rapidly and afterwards to pursue a perfect alignment with the line. Experiments in a real subaquatic environment are presented.

Key words: Autonomous underwater vehicle, Control and guidance systems, Path following, Vision sensor, Vision-based control.

1 Introduction

The interest in subaquatic vehicle applications is continuously increasing, driven above all by the advanced instrumentation available for underwater surveys and by highly capable processing hardware. Applications embrace not only the classical ones of the off-shore oil industry but also rescue operations and wide-ranging oceanographic and scientific missions (Narimani et al., 2009).

One of the most relevant technical applications concerns pipeline tracking on the sea bottom for inspection and post-lay pipe surveys (Wadoo and Kachroo, 2010; Inzartev, 2009; Wang and Hussein, 2007; Foresti, 2001). This framework provides the main motivation of this paper in the context of control systems with camera sensors.

Vision-based guidance systems working with blurred imaging come into consideration principally in scenarios where the altitude above the bottom is relatively small. Moreover, in such complex scenarios an intelligent sensor to identify the pipe on the sea floor with its particularities, together with a guidance system to follow it automatically, may be a useful complement to other navigation sensors (Jordán et al., 2010; Sattar et al., 2008; Huster et al., 2002).

⋆ Corresponding Author: Mario A. Jordán. E-mail: [email protected]. Address: CCT-CONICET, Florida 8000, B8000FWB, Bahía Blanca, ARGENTINA

In this paper, the design of a controller for path following with desired kinematics, based on pure vision characteristics provided by a camera pointing to a line, is described. We stress that no model of the AUV dynamics is employed in the controller design. Closing the paper, real experiments in an underwater scenario are illustrated.

2 Preliminary

Let us consider an AUV navigating at constant altitude h over a patterned line laid on a planar floor. Imagine a line stretch captured by the on-board camera as in Fig. 1. The well-known pin-hole camera model is considered (Hartley and Zisserman, 2004). Here, a line stretch is ideally described by its tangent line at a point, with parameters xL (in pixels) and α (in radians). Actually, the tangent line is an approximation of the secant line that best fits the orientation of the physical line in the frame. Moreover, the camera is placed at a distance d from the vehicle mass center, and its orientation angle with respect to the vertical is β. When the AUV is misaligned with the line, the misalignment can be characterized by a lateral shift Yt of the mass center and an orientation angle ψ with respect to the tangent line, both defined in a line-fixed coordinate system (Xt, Yt, Zt) with Xt oriented along the line. These vision and physical parameters are related by (see Berger, 2014a)

$$Y_t = s_1 \sin\psi + s_2\, x_L \cos\psi, \qquad \tan\psi = s_3 \tan\alpha + s_4\, x_L, \tag{1}$$

with $s_1 = d + h\tan\beta$, $s_2 = -\dfrac{h}{f\cos\beta}$, $s_3 = -\cos\beta$, $s_4 = \dfrac{\sin\beta}{f}$, and $f$ being the focal distance.
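To fix ideas, here is a minimal numerical sketch of the map (1) in Python; the values of d, h, β and f are merely illustrative placeholders, not those of the experimental vehicle:

```python
import math

def vision_to_physical(x_L, alpha, d=0.1, h=1.0, beta=0.3, f=800.0):
    """Map the vision coordinates (x_L [px], alpha [rad]) to the physical
    coordinates (Y_t [m], psi [rad]) via relation (1). The camera and
    operation parameters d, h, beta, f are illustrative placeholders."""
    s1 = d + h * math.tan(beta)
    s2 = -h / (f * math.cos(beta))
    s3 = -math.cos(beta)
    s4 = math.sin(beta) / f
    psi = math.atan(s3 * math.tan(alpha) + s4 * x_L)     # second relation of (1)
    Y_t = s1 * math.sin(psi) + s2 * x_L * math.cos(psi)  # first relation of (1)
    return Y_t, psi
```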

Fig. 1 - Camera frame with the physical patterned line and its approximated tangent line

Clearly, the equilibrium points xL = α = 0 and Yt = ψ = 0 are reached at the same time in their respective state spaces. So, the control objective will be to force trajectories to the equilibrium xL = α = 0 in the vision state space only, and to establish kinematics references automatically.

Let us consider a planar motion at constant altitude. Assume the vehicle has been constructed to rotate about its mass center. Let Ω_ZM be the angular rate about the vertical axis Z and X_M the advance rate. It is shown in Berger (2014a) that the camera kinematics obeys

$$\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = \begin{bmatrix} -Kd + f\left(1+\dfrac{x^2}{f^2}\right)\sin\beta + y\cos\beta & \dfrac{x K \sin\beta}{f} \\[8pt] -\dfrac{x K h}{f} & -K\cos\beta + \dfrac{y K \sin\beta}{f} \end{bmatrix} \begin{bmatrix} \Omega_{ZM} \\ X_M \end{bmatrix} \tag{2}$$

where

$$K = \frac{f}{Z} = \frac{f\cos\beta - y\sin\beta}{h}.$$

3 Vision-based sensor

The core of the guidance system and control feedback is the sensor. It carries out the image processing in order to estimate the camera position and its kinematics with respect to the seafloor. It also enables the construction of kinematics references in order to achieve an adequate vehicle advance in the path tracking. Finally, a monitoring of the quality of the estimations constitutes another output of the sensor. Taking the hardware capacities into account (e.g., frame grabber and mini PC platform), the frames are processed at a rate of c · fps, where fps is the frames-per-second property of the camera and 0 < c < 1 defines a subsampling rate. The different modules of the sensor are depicted in the following figures. They are executed successively every 1/(c · fps) seconds.

The first module is summarized in Fig. 2. It contains the estimation of the relative vehicle position with respect to the line in the vision space through the parameters xL and α. The first step is to improve the quality of the raw frame in order to enhance the contrast between patterns and environment and to obtain an image segmentation of these regions. Then, one extracts image features, ending with the determination of the pattern centroids of the line. The reasonable ambiguity of the estimation is delimited by defining a certainty zone around it. A sketch of this processing chain is given below.
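As an illustration of module I, here is a minimal segmentation/centroid sketch using OpenCV; the histogram equalization, Otsu threshold and kernel size are assumptions for illustration, not the paper's exact filter chain:

```python
import cv2

def estimate_centroids(frame_bgr, zone_mask=None):
    """Sketch of sensor module I: segment the bright patterns and return
    their centroids; the line parameters (x_L, alpha) then follow from a
    line fit through these points."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # gray-scale conversion
    gray = cv2.equalizeHist(gray)                        # contrast enhancement
    gray = cv2.GaussianBlur(gray, (5, 5), 0)             # smoothing
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    if zone_mask is not None:                            # certainty-zone restriction
        binary = cv2.bitwise_and(binary, zone_mask)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids
```

A straight line fitted through the returned centroids (e.g., with cv2.fitLine) then yields the parameters xL and α.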

In Fig. 3 the second module is presented. It performs the estimation of the camera motion first and of the vehicle motion afterwards. The key idea is to calculate a velocity field over the identified centroids of the previously segmented set of patterns in two successive frames and then to employ these velocities to solve (2) in order to estimate X_M and Ω_ZM, or equivalently, in the vehicle-fixed coordinate system, u and r, respectively.

The first step in this module consists in obtaining as many centroids as possible from both frames. Eventual holes originating from lost patterns may then be recognized in order to enhance the sets. Here, their coherence with logically calculated positions makes it possible to replace holes by simulated centroids.

The rate field of the centroids is worked out by overlapping the directrix lines of the centroid sets. This is done by rotating one directrix line around the crosspoint until it is parallel to the other. In this form, homologous centroids can be associated by a distance representation with respect to the crosspoint (the ad-hoc origin) in order to construct pairs. Finally, the rate field results from the ratio between the distances of each centroid pair (x, y) and the period 1/(c · fps). In order to reduce the influence of numeric errors in the estimation of the vehicle rate variables X_M and Ω_ZM, (2) is solved for the estimated pairs in the least-squares sense, as sketched below. Additionally, the centroid positions suffer from quantification errors and false perspective appreciation, which are common in 2D-image approaches. So the evolution of the estimates may be irregular, and an ad-hoc filtering is needed.
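A minimal sketch of this least-squares step, assuming the row-wise arrangement of the matrix reconstructed in (2) above and hypothetical centroid pairs:

```python
import numpy as np

def estimate_rates(pairs, dt, f, h, beta, d):
    """Stack relation (2) for every homologous centroid pair and solve
    for (Omega_ZM, X_M) in the least-squares sense.
    pairs: iterable of ((x, y), (x_next, y_next)) in pixels; dt = 1/(c*fps)."""
    A, b = [], []
    for (x, y), (xn, yn) in pairs:
        K = (f * np.cos(beta) - y * np.sin(beta)) / h
        a11 = -K * d + f * (1 + x**2 / f**2) * np.sin(beta) + y * np.cos(beta)
        a12 = x * K * np.sin(beta) / f
        a21 = -x * K * h / f
        a22 = -K * np.cos(beta) + y * K * np.sin(beta) / f
        A.append([a11, a12]); b.append((xn - x) / dt)  # horizontal pixel rate
        A.append([a21, a22]); b.append((yn - y) / dt)  # vertical pixel rate
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol[0], sol[1]  # estimates of Omega_ZM and X_M
```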

(Block diagram: RGB image → filtering (gray-scale conversion, brightness correction, contrast adjustment, smoothing, threshold) → feature extraction (contour detection, counting of objects, centroid determination, certainty-zone restriction) → position estimation (line fitting, statistic correction, certainty-zone adaptation, noise filtering) → vision parameters of the line.)

Fig. 2 - Sensor module I: position estimation

(Block diagram: inputs (actual and previous sets of centroids, estimation of line parameters) → pair association (crosspoint determination, completion of centroid holes, alignment of the centroid sets, distance representation, association of homologous pairs) → camera kinematics (velocity field of the centroid set, 2-DOF system solution) → filtering (maximum-difference filter, static correction) → vehicle kinematics (vehicle rate).)

Fig. 3 - Sensor module II: kinematics estimation

To this end, a so-called maximum-difference filter is used to reduce high fluctuations due to elevated noise levels (changes of contrast, caustic waves, etc.). Finally, an average in time with a forgetting factor is necessary to smooth the rate estimations, which are affected by erratic centroid shifts. This step ends the estimation of the vehicle motion. A sketch of both filters follows.
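A minimal sketch of these two filtering steps; the step bound and the forgetting factor are illustrative tuning values, not the paper's:

```python
def max_difference_filter(prev, new, max_step=0.05):
    """Maximum-difference filter: clip the change of an estimate
    to +/- max_step per sample, suppressing noise-driven jumps."""
    return prev + max(-max_step, min(max_step, new - prev))

def forgetting_average(prev_avg, new, lam=0.9):
    """Time average with forgetting factor lam (0 < lam < 1):
    older samples are progressively discounted."""
    return lam * prev_avg + (1.0 - lam) * new
```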

In Fig. 4, the third sensor module is described. Herein, the real-time monitoring of the sensor performance is produced in order to alert the controller when the sensor is momentarily unreliable. In this case, the controller has to take extra actions for the continuity of the navigation. In customary operations, pattern estimations may fail, for instance due to sporadic bad image quality (in cloudy waters or in the presence of caustic waves) or simply when the patterns run over the frame border (due to strong force perturbations).


(Block diagram: image with pattern centroids → preprocessing (selection of a ROI) → statistics (histogram, spectral density, spectrum maximum, standard deviation) → supervision (classification, quality flag generation).)

Fig. 4 - Sensor module III: sensor performance supervision

The strategy in the real-time supervision task consists in analyzing the statistical properties of a selected ROI within the certainty zone. Its histogram parameters (spectral density and maxima, standard deviation) are then compared with those of the seafloor, which have been collected in a calibration phase. It is expected that the line histogram looks approximately like a bimodal function, while the floor (which is generally dark) has a rather one-peaked histogram. A sketch of such a check is given below.
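A minimal sketch of such a histogram-based quality flag; the bin count, the crude bimodality test and the numeric thresholds are assumptions for illustration:

```python
import numpy as np

def quality_flag(roi_gray, floor_hist, peak_ratio=0.3, dist_thresh=0.2):
    """Return True when the ROI histogram departs from the calibrated
    (dark, one-peaked) floor histogram and shows a second significant
    mode, i.e., the line patterns are judged to be visible.
    floor_hist: 64-bin histogram of the bare seafloor, normalized to 1."""
    hist, _ = np.histogram(roi_gray, bins=64, range=(0, 256))
    hist = hist / hist.sum()
    # crude bimodality test: a second local maximum comparable to the first
    peaks = sorted((i for i in range(1, 63)
                    if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]),
                   key=lambda i: hist[i], reverse=True)
    bimodal = len(peaks) >= 2 and hist[peaks[1]] >= peak_ratio * hist[peaks[0]]
    # total-variation distance to the calibrated floor histogram
    far_from_floor = 0.5 * np.abs(hist - floor_hist).sum() > dist_thresh
    return bimodal and far_from_floor
```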

4 Controller Design

The line tracking is enabled by the guidance and control system, illustrated in Fig. 5. The sensor acts as the navigation system. No information other than the estimated position (xL, α) and vehicle rate (u, r) is employed to generate velocity references (uref, rref) and control actions (here termed the force τu and the moment τr, which cause push/braking and turn, respectively).

Whereas an ideal controller would keep the line vertically in the middle of the screen at all times (i.e., at the desired position xL = α = 0), a good controller will attempt to achieve this goal asymptotically after a sporadic perturbation or, more practically, by forcing the track errors to be small along a curved path. It is also stipulated that the vehicle achieves a nominal cruise rate unom when it is well aligned with the line. The actual reference velocity (generally lower than unom) will depend on the misalignment with respect to the line.

(Block diagram: bottom image → video acquisition and pre-processing → sensor (position estimation (xL, α), velocity estimation (u, r), measurement correction, supervision with sensor performance alarm) → guidance system/reference generator (unom, set-points xL = 0, α = 0, references (uref, rref)) → image-based controller (control actions (τu, τr)) → thrusters → AUV dynamics.)

Fig. 5 - Vision-based control system

The control strategy is constructed in the vision state space (see Fig. 6), where position trajectories evolve from an initial condition to the equilibrium point xL = α = 0 of perfect alignment.

Basically, the space is delimited by the visibility of the line segment in the frame. So, when the segment lies on the contour of the frame (i.e., on one of the four sides or one of the four corners), the line is described by a point on one of the four curves αLIMi. On the other hand, it is practical for the control to restrict the angle α to an interval −π/2 < αMIN ≤ α ≤ αMAX < π/2 in order to avoid ambiguity in the orientation sense with respect to the current course of navigation.

Fig. 6 - Vision state space for the control strategy

When the four quadrants of the physical space (Yt, ψ) are mapped into the vision space by means of (1), four regions, identified as S1, S2, S3 and S4, originate. They are delimited by two contour curves, namely αy0(xL) (i.e., the map for Yt = 0) and αψ0(xL) (i.e., the map for ψ = 0), which represent the maps of the orthogonal axes of the physical plane (Yt, ψ). The AUV positions depicted in Fig. 7 correspond to these particular cases.

(Sketch: AUV poses for the four sign combinations of Yt and ψ, one per region S1 to S4 of the physical plane.)

Fig. 7 - AUV positions in the four quadrants of the physical space

Clearly, regions S2 and S4 are smaller in size than S1 and S3 because the sight horizon is shorter when the AUV camera is located on the AUV nozzle. Other restrictions on the vision space arise from the control strategy. The cause for further subdivisions of the plane rests on the fact that the controller design does not employ any model of the AUV dynamics. So, when the AUV stays either close to the vision limits or close to the equilibrium point, it is reasonable to diminish the energy of the control efforts in order to cause a lower motion quantity. In this sense, it appears appropriate first to define narrower limits on xL and α, namely ±xLCRIT and ±αCRIT. Additionally, in a neighborhood of the equilibrium point, it seems cautious to diminish the controller gain when the vehicle stays approximately aligned with the line. So, in this case, a kind of corridor of width 2δR is defined for small lateral displacements.


5 Guidance and control system

5.1 Guidance

The guidance law is worked out on the basis of the following idea. The more directly the vehicle aims at the line, the more cautious the guidance system is in pushing it across the line. So, when the vehicle comes closer to the line, the energy required to decelerate it is relatively small. Conversely, under closer alignment, the guidance system speeds up the course. With this argument, we propose the reference law for the advance velocity

$$u_{ref} = \frac{u_{nom}}{1 + K_c\,\psi^2}, \tag{3}$$

with Kc a positive real-valued constant (cf. Fig. 5). Another particularity of the control strategy concerns the case when the vehicle is far away from the line. The first attempt of the controller is to direct the vehicle with a constant angle ψ in order to diminish the lateral error xL as quickly as possible. Only when the vehicle stays close to the line does the controller force the vehicle to turn appropriately. The reference rate rref is constructed as follows.

First, we employ the well-known relation between the lateral rates with respect to a line-fixed and a vehicle-fixed system, namely $\dot{Y}_t = u\sin\psi$. Now we can establish a law for the desired turn as

$$\psi_d = \arcsin\left(\frac{\dot{Y}_{td} - k_y (Y_{td} - Y_t)}{u_{ref}}\right), \tag{4}$$

with $Y_{td} = \dot{Y}_{td} = 0$ and $k_y < 0$, which penalizes side errors. So, the following reference for the guidance is proposed

$$r_{ref} = \dot{\psi}_d - k_\psi (\psi_d - \psi), \tag{5}$$

with $k_\psi < 0$ penalizing the orientation errors. Taking into account that

$$\dot{\psi} = r_{ref} = \dot{\psi}_d - k_\psi(\psi_d - \psi),$$

$$\dot{Y}_t = u_{ref}\sin\psi_d = u_{ref}\left(\frac{\dot{Y}_{td} - k_y(Y_{td} - Y_t)}{u_{ref}}\right)$$

and defining $\tilde{\psi} = \psi_d - \psi$ and $\tilde{Y}_t = Y_{td} - Y_t$, one arrives at the error dynamics

$$-\dot{\tilde{\psi}} + k_\psi\tilde{\psi} = 0, \qquad -\dot{\tilde{Y}}_t + k_y\tilde{Y}_t = 0.$$

Clearly, the equilibrium point $\tilde{\psi} = \tilde{Y}_t = 0$ is exponentially stable. However, the practicability of (3) and (5) is reduced since Yt and ψ are unknown. Certainly, their estimation by means of (1) is viable, but it is not recommended when the operation and camera parameters (f, β, h) are changeable or uncertain.

An alternative for designing the guidance system is to employ directly a metric in the vision state space which does not suffer from such uncertainties. To this end, consider the space in Fig. 8, where any vehicle position point h lies at distances dR and dT from the isolines αy0(xL) and αψ0(xL), respectively. Clearly, dR and dT are position variables equivalent to Yt and ψ. Much easier to compute than dR and dT are their projection distances onto the α axis, which are used in what follows and denoted likewise. Consequently, the equilibrium point ψ = Yt = 0 occurs uniquely at dR = dT = 0.

So, the guidance system is redesigned as follows. The kinematics references are replaced by

$$u_{ref} = \frac{u_{nom}}{1 + K_c\,d_T^2}, \tag{6}$$

$$d_{Td} = \arcsin\left(\frac{\dot{d}_{Rd} - k_y(d_{Rd} - d_R)}{u_{ref}}\right),$$

$$r_{ref} = \dot{d}_{Td} - k_\psi(d_{Td} - d_T),$$

with $d_{Rd} = \dot{d}_{Rd} = \dot{d}_{Td} = 0$.
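A minimal sketch of this redesigned reference generator, anticipating the saturation introduced below in laws 1-3; the numeric gain values are illustrative, while the sign conventions follow the text (Kc > 0, ky < 0, kψ < 0):

```python
import math

def guidance_references(d_R, d_T, u_nom=0.3, Kc=5.0, ky=-0.5, k_psi=-1.0,
                        alpha_crit=0.8):
    """Vision-space references (6): d_R, d_T play the roles of Y_t and psi.
    Since d_Rd = d_Rd_dot = d_Td_dot = 0, the expressions simplify."""
    u_ref = u_nom / (1.0 + Kc * d_T**2)             # advance reference
    arg = ky * d_R / u_ref                          # = -ky*(d_Rd - d_R) with d_Rd = 0
    d_Td = math.asin(max(-1.0, min(1.0, arg)))      # desired value of d_T
    d_Td = max(-alpha_crit, min(alpha_crit, d_Td))  # saturation (cf. laws 2 and 3)
    r_ref = -k_psi * (d_Td - d_T)                   # turn-rate reference
    return u_ref, r_ref
```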

(Sketch: the vision plane (xL, α), with the isolines αy0(xL) and αψ0(xL) and the distances dR and dT from a point h to them.)

Fig. 8 - Vision-based metrics for position

5.2 Control strategy

The challenge now is to restrict the guidance law in such a manner that it is effective not only across the length and breadth of the vision space, but also in a neighborhood of the equilibrium point and in the periphery close to the vision contours. So we take up again the subdivisions of the space in Fig. 6 according to the limits αMIN, αMAX, αCRIT, xLCRIT and, finally, the corridor of width 2δR.

In consequence, the control law is region-dependent, with a different mathematical expression in each region. This means that the vehicle can be propelled or braked when trespassing the contours between two regions, diminishing or accentuating the rate of navigation. In short, the bandwidth of the controller is variable. For instance, subtle refinements of the track are necessary in the neighborhood of the equilibrium point, while strong thrusts are required in peripheral vision zones when the line is about to disappear from the image.

Accordingly, we define the control strategy involving three laws for implementing the guidance.

Guidance law 1 - Risk of losing the line

On the condition |α| > αCRIT, the strategy consists in generating a rotation in order to approach the isoline αψ0(xL). Consequently, we choose the desired value dTd = 0, and so (6) becomes

$$u_{ref} = \frac{u_{nom}}{1 + K_c\,d_T^2}, \qquad r_{ref} = \dot{d}_{Td} + k_\psi d_T. \tag{7}$$


Guidance law 2 - Vehicle is almost aligned

If the trajectory reaches a point h ∈ S1, S3 with |xL| > xLCRIT and |dR| > δR, the control strategy has to compel a movement that allows the vehicle to approach αy0(xL) and, at the same time, to be repelled from the nearest limit αLIM.

To attain both goals, dD and dI are defined as the distances of h to the line xL = xLCRIT if xL > 0 (or to xL = −xLCRIT if xL < 0) and to the closest curve αLIM (i.e., αLIM1 in S1 and αLIM4 in S3), respectively.

Thus

$$d_D = \big||x_L| - x_{LCRIT}\big|,$$

$$d_I = \begin{cases} \big||x_L| + \frac{H}{2}\tan\alpha - \frac{W}{2}\big| & \text{if } h \in S_3 \\[4pt] \big||x_L| - \frac{H}{2}\tan\alpha - \frac{W}{2}\big| & \text{if } h \in S_1, \end{cases}$$

with W and H the number of pixels sideways and lengthways, respectively. According to these metrics, the new guidance law is designed as

$$d_{Td} = k_r\left(1 + \frac{d_I - d_D}{d_I + d_D}\right) d_T, \tag{8}$$

$$d_{Td} = \max(\min(d_{Td}, \alpha_{CRIT}), -\alpha_{CRIT}),$$

$$u_{ref} = \frac{u_{nom}}{1 + K_c\,d_T^2},$$

$$r_{ref} = \dot{d}_{Td} - k_\psi(d_{Td} - d_T),$$

with 0 < kr < 1. The heuristics behind (8) is the following. On one side, the quotient (dI − dD)/(dI + dD) takes a real value in [−1, 1]. The minimum quotient value corresponds to the case dI = 0, for h on one border αLIM. One then gets dTd = kr(1 − 1)dT = 0, and the guidance produces a rotation with enough power to move h to αψ0(xL). On the contrary, dD = 0 yields the maximum quotient value, originating dTd = 2kr dT. In this way, the variation of the reference rref is gradual, namely it grows in order to stop the rate until the sense of the rotation is changed. In particular, with kr < 0.5 one gets kr(1 + (dI − dD)/(dI + dD)) < 1 and |dTd| < |dT|, causing a continuous repulsion from the border αLIM toward the interior region with |xL| < xLCRIT. A sketch of this law follows.
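A minimal sketch of (8) under the definitions above; the boolean flag in_S3 and the numeric value of kr are illustrative assumptions:

```python
import math

def law2_desired_dT(d_T, x_L, alpha, x_L_crit, W, H, in_S3, kr=0.4):
    """Guidance law 2, eq. (8): blend the distance d_D to the line
    x_L = +/-x_L_crit with the distance d_I to the nearest border
    alpha_LIM; kr < 0.5 guarantees |d_Td| < |d_T| (continuous repulsion)."""
    d_D = abs(abs(x_L) - x_L_crit)
    sign = 1.0 if in_S3 else -1.0
    d_I = abs(abs(x_L) + sign * (H / 2.0) * math.tan(alpha) - W / 2.0)
    return kr * (1.0 + (d_I - d_D) / (d_I + d_D)) * d_T
```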

Guidance law 3 - Vehicle is misaligned

Whenever h lies in a region other than those covered by laws 1 and 2, expression (6) applies with a saturation on dTd in order to prevent the trajectories from moving into the zones defined by |α| > αCRIT. So

$$u_{ref} = \frac{u_{nom}}{1 + K_c\,d_T^2}, \tag{9}$$

$$d_{Td} = \arcsin\left(\frac{\dot{d}_{Rd} - k_y(d_{Rd} - d_R)}{u_{ref}}\right),$$

$$d_{Td} = \max(\min(d_{Td}, \alpha_{CRIT}), -\alpha_{CRIT}),$$

$$r_{ref} = \dot{d}_{Td} + k_\psi d_T.$$


Generally, all the laws achieve the repulsion of the vehicle from the critical zones toward the interior and, finally, the perfect alignment. However, this occurs only when the gains Kc, ky, kψ and kr have the signs indicated above. The tuning of these controller parameters pertains to a commissioning phase. The smaller the tracking errors ψ̃ = ψd − ψ and Ỹt = Ytd − Yt, the more exact are the approximations dT and dR to ψ and Yt, respectively.

5.3 Control law

The following of the kinematics references provided by the guidance system is achieved by the vision-based controller, which is connected in cascade in the direct path of the control system, as depicted in Fig. 5.

The controller determines the thrust laws for a push force and a rotation torque, denominated τu and τr, respectively. The equations are defined on the basis of the kinematics errors ũ and r̃, calculated upon the sensor rate estimations u and r. It holds that

$$\tau_u = K_{av}\tilde{u}, \qquad \tau_r = K_{rot}\tilde{r},$$

with $\tilde{u} = u_{ref} - u$, $\tilde{r} = r_{ref} - r$ and the controller gains $K_{av} > 0$ and $K_{rot} > 0$.
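The resulting thrust computation is a plain proportional law on the rate errors; a minimal sketch with illustrative gain values:

```python
def thrust_laws(u_ref, r_ref, u_est, r_est, K_av=20.0, K_rot=10.0):
    """Cascade controller of Sec. 5.3: tau_u = K_av*(u_ref - u) and
    tau_r = K_rot*(r_ref - r), using the sensor rate estimates."""
    tau_u = K_av * (u_ref - u_est)    # push/braking force
    tau_r = K_rot * (r_ref - r_est)   # turn torque
    return tau_u, tau_r
```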

6 Experimental test

With the purpose of evaluating the performance of the vision-based control approach in scenes as close as possible to the real world, a series of experiments was carried out in a water tank with an experimental AUV (see Berger, 2014a; Berger, 2014b). Here, we present a case study employing a line with luminous patterns distributed uniformly along its length. The water scenario is characterized by high turbidity and poor natural illumination. As a drawback of these conditions, the degree of blurring of the luminous points is very high. The submarine, with the camera on board, is directed so that the line lies approximately in the vision field. As illustrated in Fig. 9, selected frames of the navigation are organized vertically from bottom (test start) to top (test end). To the left, the path of the submarine is indicated with positions and orientations reconstructed from the video. The red triangle symbol illustrates samples of the AUV position. The line tracking is started from a favorable position (see the first frame at the figure bottom). The supervision module indicates during the whole process that the track is successful by showing the specific symbol (green circle) at the frame top.

(Plot: trajectory in the plane (Yt, Xt), alongside selected frames of the navigation.)

Fig. 9 - Case study of path tracking with side disturbances. Performance evaluation


In the middle of the navigation stretch, a strong force perturbation was externally exerted on the submarine from one side, moving it off the track. One sees in the vision field that the line begins to move rapidly toward a corner (second frame from the bottom), but the controller is able to recover the course immediately (third frame from the bottom). In the final path, the controller succeeds in increasingly improving the tracking precision (see the fourth frame from the bottom).

The good performance of the control system can also be judged by analyzing the time evolutions of the state variables Yt, ψ, u, r, the approximations dR and dT and, finally, the thrusts τu and τr. All these are depicted in Fig. 10.

(Plots over 0-40 s: relative position Yt and dR [m]; relative orientation ψ and dT [rad]; linear velocity uref and u [m/sec]; angular velocity rref and r [rad/sec]; linear force τ1 [N]; angular torque τ2 [Nm].)

Fig. 10 - Time evolution of the state variables, references and thrusts

The null initial kinematics conditions cause the vehicle to accelerate during 5 s until the cruise velocity is approximately reached. At the start, the push and the torque evolve slightly jerkily, though their mean values are significant enough to counteract the vehicle inertia. The reason is the way the law calculates the control actions, by amplifying the path errors directly without filtering. On the other side, one sees the evolution of the vehicle under the force perturbation and the rapid recovery of the course with the cruise velocities. Finally, one can compare the evolution of the vision-based metrics, through dR and dT, with the physical variables Yt and ψ. Clearly, the smaller they are, the more precise is the coincidence.

In Fig. 11, the trajectories of the AUV position in the two different metrics are depicted. One appreciates the great effect of the perturbation on the dynamics through the large separation of the trajectories with respect to the initial condition point. One notices how dissimilar the trajectories are in both spaces, despite the fact that their convergences occur at the same point.

7 Conclusions

In this work, a vision-based guidance and control system was presented for an AUV whose navigation system is constituted by a camera only. The designs of the sensor modules and of the controller were described. Also, a case study, selected from a series of experimental tests, was analyzed. In particular, the controller and the guidance system are based on a pure vision-based metric which, in theory, may present the advantage of not being influenced by parameter uncertainties such as camera focus and altitude, among others. This last hypothesis remains under verification in our future work. The all-round performance, evaluated through experiments, has been rated as good, robust and reliable in scenarios with cloudy waters and blurred images.

(Plots: trajectory in the plane (xL, α) [px, rad]; trajectory in the plane (ψ, Yt) [m, rad].)

Fig. 11 - Trajectories of the AUV position in the vision space (xL, α) and in the physical space (ψ, Yt)

References

1. Berger, C. E. (2014a). PhD Thesis: Navegación de Vehículos Autónomos Subacuáticos basados en Control por Visión. Universidad Nacional del Sur, Argentina.
2. Berger, C. E. (2014b). Experimental video sequences: https://www.dropbox.com/sh/84woaxb5bklkboo/qeW3SoEjwf
3. Foresti, G.L. (2001). Visual inspection of sea bottom structures by an autonomous underwater vehicle. IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 31 (5), pp. 691-705.
4. Hartley, R. I. and Zisserman, A. (2004). Multiple View Geometry in Computer Vision, Cambridge University Press, ISBN: 0521540518.
5. Inzartev, A.V. (Ed.) (2009). Underwater Vehicles, InTech, Vienna, Austria.
6. Jordán, M.A., Berger, C.E., Bustamante, J.L. and Hansen, S. (2010). Path Tracking in Underwater Vehicle Navigation - On-Line Evaluation of Guidance Path Errors via Vision Sensor. 49th IEEE Conf. on Decision and Control, Atlanta, USA.
7. Narimani, M., Nazem, S. and Loueipour, M. (2009). Robotics vision-based system for an underwater pipeline and cable tracker. OCEANS'09, Bremen, Germany, pp. 1-6.
8. Sattar, J., Dudek, G., Chiu, O., Rekleitis, I., Giguere, P., Mills, A., Plamondon, N., Prahacs, C., Girdhar, Y., Nahon, M. and Lobos, J.-P. (2008). Enabling autonomous capabilities in underwater robotics. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2008), Nice, France, pp. 3628-3634.
9. Wadoo, S. and Kachroo, P. (2010). Autonomous Underwater Vehicles - Modeling, Control Design and Simulation, CRC Press.
10. Wang, Y. and Hussein, I.I. (2007). Cooperative Vision-Based Multi-Vehicle Dynamic Coverage Control for Underwater Applications. IEEE Int. Conf. on Control Applications (CCA 2007), Singapore, pp. 82-87.
