Vision-based Autonomous Navigation and Landing of an Unmanned Aerial Vehicle using Natural Landmarks

Andrea Cesetti, Emanuele Frontoni, Adriano Mancini, Primo Zingaretti, Sauro Longhi
Dipartimento di Ingegneria Informatica, Gestionale e dell'Automazione
Università Politecnica delle Marche, Ancona, Italy
{cesetti, frontoni, mancini, zinga}@diiga.univpm.it, [email protected]

17th Mediterranean Conference on Control & Automation, Makedonia Palace, Thessaloniki, Greece, June 24-26, 2009

Abstract — This paper presents the design and implementation of a vision-based navigation and landing algorithm for an autonomous helicopter. The vision system allows target areas to be defined from a high resolution aerial or satellite image, in order to determine the waypoints of the navigation trajectory or the landing area. The helicopter is required to navigate from an initial position to a final position in a partially known environment using GPS and vision, to locate a landing target (a helipad of a known shape or a natural landmark) and to land on it. The vision system, using a feature-based image matching algorithm, finds the area and gives feedback to the control system for autonomous landing. Vision is used for accurate target detection, recognition and tracking. The helicopter updates its landing target parameters thanks to vision and uses an on-board behavior-based controller to follow a path to the landing site. Results show the appropriateness of the vision-based approach, which does not require any artificial landmark (e.g., a helipad) and is quite robust to occlusions, light variations and seasonal changes (e.g., brown or green leaves).

I. INTRODUCTION

Unmanned Aerial Vehicles (UAVs) constitute a research field that has been widely explored in the last decade [1]. A variety of military and civilian applications has been proposed, where human presence could be impossible, risky or expensive. Small-scale helicopters have received particular attention due to their unique flying capabilities: they can perform agile manoeuvres, including low-speed and low-altitude cruising and vertical take-off and landing, as well as hover in place. A wide range of studies on autonomous helicopters has been reported in the literature: modelling and identification, simulation, sensor integration, control design and fault diagnosis [2-3]. The use of computer vision as a secondary or primary method for autonomous helicopter control has been discussed frequently in recent years, since the classical combination of the Global Positioning System (GPS) and an Inertial Navigation System (INS) cannot sustain autonomous flight in every situation [4-5]. The degree of autonomy that a helicopter can achieve depends on factors such as the ability to solve unexpected critical situations, e.g., loss of the GPS signal, and the ability to interact with the environment, e.g., to use natural landmarks. A vision-based solution for autonomous waypoint navigation and safe landing on unstructured terrain represents a strong improvement for both these abilities.

Several techniques have been implemented, decoupling the problem of locating and tracking a well-known, high-contrast landmark, e.g., a classical helipad, that can be easily identified by standard image processing techniques [6-8], from the problem of detecting and avoiding natural obstacles, e.g., steep slopes and rocks, in a landing area [9-11]. The results achieved in safe landing techniques can be considered good. The dependence on fixed, artificial landmarks and on optimal visibility conditions, however, constitutes a strong limit for vision-based navigation in real-environment applications. Some approaches use vision methods based on moment descriptors; they impose no constraints on the design of the landing pad except that it should lie on a two-dimensional plane, but artificial landmarks composed of polygons are indispensable [7]. Moreover, the weakness of that strategy appears in situations in which natural (e.g., due to debris or leaves) or artificial (e.g., due to engine smoke) occlusions can make the computation of moment descriptors very difficult.

In this paper a robust feature-based visual approach is presented. Vision is used for precise target detection and recognition. The implemented hierarchical behaviour-based control architecture allows the high-level controller to update target parameters based on vision. The on-board low-level controller receives the references to follow a path to the landing site. The vision system also allows natural landmarks to be defined from high resolution aerial or satellite images, overcoming the limit due to the use of well-constructed artificial landmarks. Hence, waypoints for the navigation trajectory can be easily defined without the knowledge of accurate GPS coordinates. Moreover, landing areas can be chosen and reached independently of the presence of a predisposed helipad. The helicopter is required to navigate from an initial position to a final position in a partially known environment based on GPS and vision references, to locate a landing target and to land on it. The vision system, using a feature-based image matching algorithm, scans the area to find the landmark. When the target is detected, the vision system is able to track it. In the case of a landing task on unstructured areas, the ground is analyzed to verify whether the area is flat and proper for landing, since Digital Elevation Map (DEM) resolution cannot guarantee the possibility of identifying a flat and safe target area. The landing task is managed by the vision system through the localization of the centre of mass of the feature geometry. Appropriate feedback is given to the autonomous landing control system.



In particular, the vision part of this paper focuses on the problem of landmark detection and recognition, and three abilities have been verified: artificial landmark detection, natural landmark detection, and detection of the centre of mass of the feature geometry. Results show the appropriateness of the vision-based approach, which is also quite robust to occlusions, light variations and seasonal changes.

The organization of the paper is as follows. In section 2 the problem of locating, tracking and inspecting an assigned landing area is formulated. In section 3 the vision approach, a feature-based image matching algorithm, is presented. Section 4 describes the hierarchical behaviour-based control architecture. Section 5 presents the results achieved. Finally, in section 6 concluding remarks and directions for future improvements are discussed.

II. TEST-BED AND EXPERIMENTAL TASK

The experimental test-bed HELIBOT is a customized Bergen Twin Observer. It is a twin-cylinder, gasoline-powered radio-controlled model helicopter, approximately 1.5 meters in length and capable of lifting approximately 9 kg of payload. Onboard avionics include a PC/104-based computer stack running the QNX RTOS (800 MHz PIII CPU with 256 MB DRAM and 256 MB flash disk), a GPS-41EBF3V receiver with WAAS and EGNOS (available in Europe) corrections and a Microstrain 3DM-GX1 Attitude Heading Reference System (AHRS). The helicopter is also equipped with a downward-pointing Nikon camera. The ground station is a laptop that is used to send high-level control commands to the helicopter as well as to display telemetry from it. Data are sent using an advanced radio modem that transmits and receives on the 868 MHz band. Autonomous flight is achieved using a hierarchical behaviour-based control architecture, discussed further in section IV. Algorithm testing was largely performed using a helpful simulation framework developed by our research group. The framework is provided with a customizable HELIBOT model, allowing easy switching from simulated to real tasks.

    Figure 1. The Gruppo Aeromodellistico Rio Ete flying area used for tests

Waypoint navigation requires the high-level controller to switch between control references when the landmark enters the camera scanning range. Long-range navigation is normally planned and carried out using GPS and inertial data. When the target area is reached, the vision system becomes able to find the navigation target. Once the natural landmark is detected, the system tracks it and the state estimation algorithm sends navigation commands to the helicopter low-level controllers. A specific task (e.g., inspection, data collection) can be carried out before the navigation proceeds. In the case of a programmed landing, the vision-based controller pilots the helicopter over the landing pad and manages the landing. Fig. 1 shows the flying area in which experimental tests have been carried out. The vision algorithm is described in the next section; a minimal sketch of the resulting mode switching is given below.
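As an illustration only, the following Python sketch shows one way the high-level mode switching just described could be organized. The mode names and thresholds (e.g., DETECTION_RANGE_M) are hypothetical, not values from the paper.

```python
from enum import Enum, auto

DETECTION_RANGE_M = 30.0   # hypothetical camera scanning range

class Mode(Enum):
    GPS_WAYPOINT = auto()  # long-range navigation on GPS/inertial data
    VISION_TRACK = auto()  # landmark detected, vision-based references
    LANDING = auto()       # vision-guided descent over the target

def next_mode(mode, dist_to_target_m, landmark_visible, centered):
    """Advance the high-level mode given the current estimates."""
    if mode is Mode.GPS_WAYPOINT and landmark_visible \
            and dist_to_target_m < DETECTION_RANGE_M:
        return Mode.VISION_TRACK   # landmark in range: use vision references
    if mode is Mode.VISION_TRACK and not landmark_visible:
        return Mode.GPS_WAYPOINT   # lost the landmark: fall back to GPS
    if mode is Mode.VISION_TRACK and centered:
        return Mode.LANDING        # aligned over the target: descend
    return mode
```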

III. VISION-BASED APPROACH

The vision approach is based on the concept of natural landmark. The main difficulty in attaining fully autonomous robot navigation outdoors is the fast detection of reliable visual references, and their subsequent characterization as landmarks for immediate and unambiguous recognition. A visually-guided navigation system has been developed. This system allows a user to control the robot by choosing the navigation target in the images received from an on-board camera or from a high resolution aerial or satellite image. This form of navigation control is convenient for exploration purposes or when there is no previous map of the environment, situations in which systems like GPS, even if available, become useless and can be discarded. The user can decide the next target for the robot and change it as new views of the environment become available. This vision-based navigation approach was also used to locate the landing area and to allow autonomous landing by giving feedback to the UAV control system. The system should be able to detect natural landmarks in unstructured outdoor environments while the robot operates. These natural landmarks must be repeatable and provide reliable matching results. To overcome the inherent difficulties of matching different images of an object or scene captured outdoors during motion, the landmarks must be based on invariant image features. These features must be distinctive and invariant, or partially invariant, to translation, rotation, scale, 3D camera viewpoint and outdoor illumination conditions. Furthermore, robustness to image noise, occlusion (Fig. 2), background changes and clutter should be granted while maintaining near real-time feature extraction and matching performance.
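For orientation, here is a minimal sketch of the kind of feature matching pipeline this implies, using OpenCV's stock SIFT implementation rather than the authors' modified extractor; the ratio-test threshold of 0.7 is a conventional choice, not a value from the paper.

```python
import cv2

def match_landmark(reference_img, frame, ratio=0.7):
    """Match SIFT features of a reference landmark against a camera frame.

    Returns the good matches and the keypoints of both images. Note: this
    uses plain SIFT; the paper's approach adds sub-image splitting and an
    adaptive contrast threshold (see below).
    """
    sift = cv2.SIFT_create()
    kp_ref, des_ref = sift.detectAndCompute(reference_img, None)
    kp_frm, des_frm = sift.detectAndCompute(frame, None)
    if des_ref is None or des_frm is None:
        return [], kp_ref, kp_frm

    # Lowe's ratio test on the two nearest neighbours rejects
    # ambiguous correspondences between the two descriptor sets.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_ref, des_frm, k=2)
    good = []
    for pair in pairs:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good, kp_ref, kp_frm
```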

Figure 2. A typical occlusion caused by the UAV's engine smoke

To address all the discussed aspects, an improved version of a feature extractor is used. The Scale Invariant Feature Transform (SIFT), developed by Lowe [12,13], is invariant to image translation, scaling and rotation. SIFT features are also partially invariant to illumination changes and affine 3D projection. These features have been widely used in the robot localization field as well as in many other computer vision fields. SIFT features are distinctive and invariant features used to robustly describe and match digital image content between different views of a scene. While invariant to scale and



rotation, and robust to other image transforms, the SIFT feature description of an image is typically large and slow to compute. Our improvements, better described in [14,15], consist of two different steps:
1) divide the image into different sub-images;
2) adapt the SIFT parameters to each sub-image and compute the SIFT feature extraction, only if it is useful.
During the first step, the image is divided into several sub-images according to its resolution. The number of scales to be computed is also decided. The image is doubled at the first step of the SIFT algorithm. During the second step, the following threshold value (Tr) is computed to define the contrast threshold value of the SIFT algorithm:

$$ Tr = k \cdot \frac{\sum_{i,j=0}^{DimX,\,DimY} \left| I(x_i, y_j) - \bar{I}(x,y) \right|}{DimX \cdot DimY} $$

In the previous expression k is a scale factor, DimX and DimY are the x and y image resolutions, I(x,y) is the intensity of the gray-level image and \bar{I}(x,y) is the mean intensity value of the processed image. In Lowe's SIFT implementation the contrast threshold is statically defined, and low-contrast keypoints are rejected because they are sensitive to noise. To speed up the feature extraction, every sub-image that has a very low threshold value is not computed, because of the high probability of not finding features in an almost constant image region (i.e., the extrema detection algorithm would fail). Further details of the approach can be found in [14,15]. In the application of this approach to the problem of safe landing, the main motivation for using this simplified SIFT elaboration is the reduction of computational time. Even if flat surfaces suitable for landing are often characterized by uniform areas, natural landmarks are identified from a high altitude, where it is almost impossible to have really uniform surfaces and, consequently, uniform images. Experiments will show that useful features can be extracted even from a flat grass field.
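To make the two steps concrete, the sketch below computes the per-sub-image contrast threshold Tr from the formula above and skips nearly uniform sub-images. The grid size, the scale factor k and the cut-off min_tr are illustrative placeholders, and the call into the actual SIFT extractor is left abstract.

```python
import numpy as np

def contrast_threshold(sub_img, k=0.05):
    """Tr = k * mean absolute deviation from the sub-image mean intensity."""
    sub = sub_img.astype(np.float64)
    return k * np.abs(sub - sub.mean()).sum() / sub.size

def adaptive_sift(gray, grid=(4, 4), min_tr=1e-3, extract=None):
    """Split a grayscale image into sub-images; extract SIFT only where useful.

    `extract(sub_img, tr)` stands in for a SIFT run with contrast threshold
    `tr`; sub-images whose Tr falls below `min_tr` are skipped, since almost
    constant regions are unlikely to yield any extrema.
    """
    h, w = gray.shape
    rows, cols = grid
    features = []
    for i in range(rows):
        for j in range(cols):
            sub = gray[i * h // rows:(i + 1) * h // rows,
                       j * w // cols:(j + 1) * w // cols]
            tr = contrast_threshold(sub)
            if tr < min_tr:
                continue                  # uniform region: skip extraction
            if extract is not None:
                features.extend(extract(sub, tr))
    return features
```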

IV. CONTROL STRATEGY

Unmanned Aerial Vehicles, and in particular helicopters such as HELIBOT, are complex flying machines. The mathematical model of a reduced-scale helicopter has been deeply studied in recent years [2,16]; the various proposed approaches require a large set of well-known parameters (e.g., geometrical, aerodynamic, mechanical) that in the real case can only be estimated with weak accuracy and precision. Simulation environments are useful tools that support the design process of control laws and task management [17]. Given an accurate model of the system, simulations save time and money, due to the high costs of UAVs. In the case of task management influenced by the presence of a vision system, it is necessary to re-arrange the control architecture. Moreover, the presence of a vision sensor that aids autonomous navigation and landing reinforces the concept of hierarchical control, presented here. A hierarchical structure is a widely adopted approach in UAVs [18-20] because it allows de-coupling two main aspects:
- High-level control, represented by strategy and task management;
- Low-level control, which translates the assigned behavior into references for the actuator controllers.
Each level is supported by a set of sensors and/or actuators; in this case the vision sensor is used to track features of interest in the field of view. Information retrieved by the computer vision algorithms is fused with position data obtained by GPS (with EGNOS corrections) and inertial data from the AHRS; the advantage is robustness to loss or degradation of the GPS signal. In Fig. 3 a graphical representation of the hierarchical structure implemented in HELIBOT is shown.

Figure 3. The hierarchical structure allows problems to be de-coupled and solved at different levels by specialized modules

The top level is represented by the Mission/Task(s) assignment module, where the interaction between the UAV and the human operator is maximum. Task management is a part of the Flight Management System (FMS) [17] and generates the path to accomplish a specific task. The last level manages the control of the actuators. Due to the complexity of the mathematical model of a helicopter, the control approach is often model-free; in the case of HELIBOT, a triple nested PID scheme has been implemented, as shown in Fig. 4.

Figure 4. The nested PID controller scheme separates the problem of minimizing the position error, assuring moderate speed when the error is high and low speed when the error tends to be small
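The paper does not detail the loop structure beyond Fig. 4, so the following is only a plausible reading of a triple nested PID: an outer position loop produces a speed reference, a middle velocity loop produces an attitude reference, and an inner attitude loop drives the actuators. All gains and limits here are invented for illustration.

```python
class PID:
    """Textbook PID with a rectangular integral and an output clamp."""
    def __init__(self, kp, ki, kd, out_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.out_limit, min(self.out_limit, out))

# Hypothetical gains; real tuning would come from simulation/flight tests.
pos_pid = PID(kp=0.8, ki=0.0, kd=0.1, out_limit=3.0)   # -> speed ref [m/s]
vel_pid = PID(kp=0.5, ki=0.05, kd=0.0, out_limit=0.3)  # -> attitude ref [rad]
att_pid = PID(kp=2.0, ki=0.1, kd=0.3, out_limit=1.0)   # -> actuator command

def nested_step(pos_ref, pos, vel, att, dt):
    """One control tick: each loop feeds the reference of the next one."""
    vel_ref = pos_pid.step(pos_ref - pos, dt)
    att_ref = vel_pid.step(vel_ref - vel, dt)
    return att_pid.step(att_ref - att, dt)
```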

As mentioned in the previous section, where the vision-based approach was presented, common approaches to locating the landing area or exploring a zero-knowledge area can fail due to non-robust techniques (e.g., with respect to scale, rotation, change of illumination, point of view). The use of landmarks, natural or not, gives the Task Management module the capability to control the vehicle even in the presence of faults. The question is when and how to switch from the standard behaviour (e.g., move to waypoint), basically driven by standard sensors, to the vision-aided behaviour. The challenge is to ensure stability during the switching. Simulation stages are useful to predict the dynamics of the helicopter during the switch.



Switching control strategy by behaviour

Switching control is a common approach when one controller is insufficient to cover the dynamics of a plant/process. A set of controllers, each tuned on a particular operational condition, often guarantees better performance in terms of precision with respect to external references and robustness to parameter changes. In the examined case, the switch happens when there is a change of task. Standard controllers used for navigation are not suitable for fine tuning. The main motivation is that during navigation the accuracy and precision in terms of position, velocity and attitude cannot be maximal; a critical task such as landing basically requires the use of dedicated sensors such as a vision system and a range finder (e.g., laser or sonar).

In Fig. 5 the proposed switching architecture for attitude is shown. The structure of the tail controller and the engine power controller is similar, but is not reported here for the sake of brevity. The main difference from the previous scheme is the presence of a supervisor and a vision-based controller. In this case the supervisor acts by switching the controller, taking into account that a switching condition must be satisfied to avoid instability. References from the vision-based controller are imposed without the velocity loop because the speed in this phase is low.

Figure 5. Switching architecture for fine-tuning tasks such as landing or low-speed feature tracking

Switching Condition: a switch at time k can occur if and only if the speed (translational and angular) tends to zero.

The condition is a bit conservative, but it is reasonable in the context of landing or low-speed feature tracking. The advantage of low speed during the transition is the reduced magnitude of the bump between the old references and the new ones. The vision-based controller provides fine planar alignment between the helicopter and the feature, and the height over ground is also regulated. The vision-based controller acts as a quasi-proportional controller; as shown in Fig. 6, there is a dead zone to overcome chattering problems induced by small oscillations around the equilibrium. A saturator, both for maximum and minimum changes, avoids imposing dangerous references (e.g., high values of collective can cause the helicopter to overturn).

Figure 6. Transfer function between input from the vision algorithms (difference in position along an axis between helicopter and target)
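As a rough illustration, the quasi-proportional law with dead zone and saturation described above, together with the low-speed switching condition, might look like the following sketch. The dead-zone width, saturation limit, gain and speed threshold are all made-up values, not taken from the paper.

```python
def vision_reference(err, gain=0.4, dead_zone=0.05, sat=0.2):
    """Quasi-proportional law: the dead zone suppresses chattering near
    equilibrium, and saturation keeps the imposed reference change safe."""
    if abs(err) <= dead_zone:
        return 0.0                       # inside the dead zone: hold
    out = gain * (err - dead_zone if err > 0 else err + dead_zone)
    return max(-sat, min(sat, out))      # clamp to avoid dangerous references

def can_switch(v_trans, v_ang, eps=0.1):
    """Supervisor check: allow the controller switch only when translational
    and angular speeds tend to zero, limiting the reference bump."""
    return abs(v_trans) < eps and abs(v_ang) < eps
```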

An integral (I) action with saturation can be added to increase the steady-state precision in terms of position error. A more complex scheme that makes use of fuzzy rules is under development. At the present state the switching supervisor is not automated, but is driven by human-in-the-loop feedback.

Simulation of vision sensor feedback in a virtual scenario

Simulation is needed to evaluate the behavior of the helicopter in the transition stages. All the simulations have been carried out on a complete MATLAB/Simulink model. A virtual scenario developed in VRML, as shown in Fig. 7, allows increasing the realism of the simulation.

Figure 7. a) The initial frame, when the vision system detects features of interest such as a helipad or other known structures; b) tracking is active and the helicopter performs fine tuning to reach the target; c) the helicopter attempts to reduce its height over ground; d) the simulated scenario in VRML

In this case, it is possible to simulate a vision sensor dynamically by exploiting the camera property of VRML. More complex scenarios with real textures can also be used to stress the system further. All the other sensors, such as GPS and AHRS, are integrated in the developed simulator, allowing a full test of the system. The vision algorithms, based on feature extractors such as SIFT/SURF and on matching, run synchronously with the simulation of the helicopter's dynamics.

In Figures 8 and 9 the results of a simulation in the developed multi-agent simulator are shown. The simulation shows a switch from the normal behavior (waypoint following) to landing in a safe area detected by the vision algorithm.



    Figure 8. Graphical representation of position along x,y,z axes

Figure 9. 3D graph of the trajectory followed by the helicopter during the mission; the switch introduces a singularity only in the change of orientation along the yaw axis, which does not imply a loss of stability

Switching is not critical because the relevant change involves the heading of the helicopter, as shown in Fig. 9; the low-speed transition, which guarantees stability, is maintained as required. The switch is triggered by the vision controller that detects the landing area; this high-level information is then used by the supervisor, which takes the decision to switch from standard to vision-based control.

V. RESULTS

For the vision part, images shot with a Nikon camera at a resolution of 320x240 pixels are used. These images can be processed at 5 frames per second. In the case of autonomous landing this is also the frequency of the feedback input to the control system. The vision system has been tested in different situations: the tracking of an artificial landmark for waypoint-based navigation, the landing using an artificial landmark while giving feedback to the control system, and the detection of a landing area using only natural landmarks.

Fig. 10 shows the matching between frames while the UAV is flying over an artificial landmark (the logo of our university). This test was performed to show the robustness of the vision system when the UAV is far from the landmark and is moving at high speed. The blue lines are the results of the feature matching, which are used to track the waypoint. The smoke produced by the engine of the helicopter, which causes several partial occlusions, is also visible (see Fig. 10); the system behaves well in these cases too, due to the high number of features to be tracked.

Figure 10. Matching between two frames using an artificial landmark for landmark localization and tracking

Figure 11. A sequence of the vision-based landing showing the matching between the artificial landmark and the actual UAV view. The SIFT-based approach detects the landmark and starts giving feedback to the control system while the UAV is going down over the landing area.

Another test showed the feasibility of the feature tracking system to guide an autonomous vision-based landing. Data collected from this test are also used for a simulation of the control system performance at the end of this section. The feedback is given to the control system by evaluating the centre of mass of the matched features with respect to the centre of the actual view. Fig. 11 shows a sequence of a landing using the same feature-based approach and the classical helipad artificial landmark; the reference image was grabbed at a different time of day and it also has a different resolution. Fig. 12 reports two examples of the evaluation of the centre of mass and of the distance from the target.

Figure 12. The red circle is centred in the centre of mass of the matched features and has a radius proportional to the average distance between the centre of mass and the farthest feature.
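A minimal sketch of that feedback computation might look as follows, reusing the keypoint and match structures from the earlier OpenCV snippet; the normalization by image size is our own choice for illustration.

```python
import numpy as np

def landing_feedback(matches, kp_frame, frame_shape):
    """Centre of mass of matched features relative to the image centre.

    Returns (dx, dy) normalized to [-1, 1] per axis, plus the average
    distance of the features from their centroid (a spread measure,
    comparable to the circle radius drawn in Fig. 12).
    """
    if not matches:
        return None
    pts = np.array([kp_frame[m.trainIdx].pt for m in matches])
    centroid = pts.mean(axis=0)
    h, w = frame_shape[:2]
    dx = (centroid[0] - w / 2.0) / (w / 2.0)
    dy = (centroid[1] - h / 2.0) / (h / 2.0)
    spread = np.linalg.norm(pts - centroid, axis=1).mean()
    return dx, dy, spread
```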



Even if there are many other methods to detect a helipad, this test allowed us to verify the speed and precision performance of our method which, differently from other approaches, copes very well with natural unstructured landmarks, as shown in the next test. This final test (Fig. 13 and 14) was performed to evaluate the feasibility of using a natural landmark for navigation and landing. A natural landmark (red square) is selected from the actual view of the UAV as a navigation waypoint or landing area, and is correctly tracked by the vision system during the flight over the area, from different points of view and from different altitudes. Different tests were performed and no false positive matching areas were detected during the flight.

    Figure 13. Landmark selected for navigation or landing

Figure 14. Tracking of the landing area with respect to the given one, using a natural landmark and dealing with smoke occlusions.

VI. CONCLUSIONS AND FUTURE WORK

In this paper the design and implementation of a vision-based landing and navigation algorithm for an autonomous helicopter has been presented. A hierarchical behaviour-based controller exploits GPS and vision input data alternately, allowing the helicopter to navigate from an initial position to a final position in a partially known environment, to locate a landing target and to land on it. The vision system allows a target area to be defined from a high resolution aerial or satellite image, and hence the waypoints of the navigation trajectory or the landing area to be defined. Results show the appropriateness of the vision-based approach, which does not require any artificial landmark and is quite robust to occlusions, light variations and seasonal changes.

Future work is oriented to improving some important aspects of the landing task concerning safety. Unknown and unstructured environments need to be analyzed before attempting a landing, to find a safe area and avoid possible obstacles. Furthermore, the inherent instability due to ground effect will be taken into account. The design of a dedicated module is planned to model this disturbance and improve the performance of the low-level attitude controllers. A laser scanner will be tested to evaluate the improvements due to its use in the very last part of the landing task.

    ACKNOWLEDGMENT

    The authors gratefully acknowledge Antonio Pinelli, BiagioAmbrosio and Alberto Cardinali for their essential support.

REFERENCES

[1] Office of the Under Secretary of Defense, "Unmanned aerial vehicles annual report," Defense Airborne Reconnaissance Office, Pentagon, Washington DC, Tech. Rep., July 1998.
[2] T. J. Koo and S. Sastry, "Output tracking control design of a helicopter model based on approximate linearization," Proceedings of the 37th IEEE Conference on Decision and Control, Dec. 1998.
[3] A. Monteriù, P. Asthana, K. Valavanis, and S. Longhi, "Model-based sensor fault detection and isolation system for unmanned ground vehicles: Theoretical aspects (part I and II)," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2007.
[4] O. Amidi, "An autonomous vision-guided helicopter," Ph.D. dissertation, Robotics Institute, Carnegie Mellon University, 1996.
[5] O. Amidi, T. Kanade, and K. Fujita, "A visual odometer for autonomous helicopter flight," Robotics and Autonomous Systems, vol. 28, pp. 185-193, 1999.
[6] O. Shakernia, R. Vidal, C. S. Sharp, Y. Ma, and S. S. Sastry, "Multiple view motion estimation and control for landing an unmanned aerial vehicle," IEEE Conference on Robotics and Automation, pp. 2793-2798, 2002.
[7] S. Saripalli, J. Montgomery, and G. Sukhatme, "Visually-guided landing of an unmanned aerial vehicle," IEEE Transactions on Robotics and Automation, vol. 19, no. 3, pp. 371-381, 2003.
[8] S. Saripalli and G. Sukhatme, "Landing a helicopter on a moving target," IEEE International Conference on Robotics and Automation, 2007.
[9] P. J. Garcia-Pardo, G. S. Sukhatme, and J. F. Montgomery, "Towards vision-based safe landing for an autonomous helicopter," Robotics and Autonomous Systems, 2000.
[10] A. Johnson, J. Montgomery, and L. Matthies, "Vision guided landing of an autonomous helicopter in hazardous terrain," Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005.
[11] T. Templeton, D. H. Shim, C. Geyer, and S. Sastry, "Autonomous vision-based landing and terrain mapping using an MPC-controlled unmanned rotorcraft," Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1349-1356, 2007.
[12] S. Se, D. Lowe, and J. Little, "Vision-based mobile robot localization and mapping using scale-invariant features," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2001, Seoul, Korea, May 2001, pp. 2051-2058.
[13] D. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision, vol. 60, no. 2, pp. 91-110, 2004.
[14] E. Frontoni and P. Zingaretti, "Adaptive and fast scale invariant feature extraction," Second International Conference on Computer Vision Theory and Applications, Workshop on Robot Vision, Barcelona, 2007.
[15] E. Frontoni and P. Zingaretti, "Feature extraction under variable lighting conditions," CISI 2006, Ancona, Italy, September 2006.
[16] A. R. S. Bramwell, G. Done, and D. Balmford, Bramwell's Helicopter Dynamics, Butterworth-Heinemann, second edition, 2001.
[17] A. Mancini, A. Cesetti, A. Iuale, E. Frontoni, P. Zingaretti, and S. Longhi, "A framework for simulation and testing of UAVs in cooperative scenarios," International Symposium on Unmanned Aerial Vehicles (UAV08), 2008.
[18] J. Montgomery, "Learning helicopter control through teaching by showing," Ph.D. thesis, School of Computer Science, USC, 1999.
[19] M. J. Mataric, "Behavior-based control: Examples from navigation, learning and group behavior," Journal of Experimental and Theoretical Artificial Intelligence, special issue on Software Architecture for Physical Agents, vol. 9, no. 2-3, pp. 67-83, 1997.
[20] D. H. Shim, H. J. Kim, and S. Sastry, "Hierarchical control system synthesis for rotorcraft-based unmanned aerial vehicles," AIAA Guidance, Navigation and Control Conference and Exhibit, 2000.
