Machine Vision of landmarks for UAVs


    1 Introduction

A helicopter is a well-suited air vehicle for a wide variety of operations, ranging from search and rescue (e.g., rescuing stranded individuals or launching life buoys to drowning people) to surveillance and inspection missions (e.g., landmine detection or inspection of towers and transmission lines for corrosion and other defects). All these applications demand dangerous flight patterns in close proximity to the ground or to other objects that can threaten the pilot's safety. Additional hazards derive from operations in dangerous or contaminated areas, e.g., inspection after chemical or nuclear accidents. An unmanned helicopter that operates autonomously or is piloted remotely eliminates these risks and increases the helicopter's effectiveness. Typical missions of autonomous helicopters require flying at low speeds to follow a path or hovering near an object. Positioning equipment such as Inertial Navigation Systems (INSs) or Global Positioning Systems (GPSs) is well-suited for long range, low precision helicopter flights but falls short for very precise, close proximity flights. Manoeuvring helicopters close to objects requires accurate positioning in relation to the objects. Visual sensing is a rich source of data for this relative feedback.

Unmanned Aerial Vehicles (UAVs) constitute a research field that has been extensively explored in the last decade [1]. In the literature a wide range of studies on autonomous helicopters has been reported: modelling and identification, simulation, sensor integration, control design and fault diagnosis [2-5].

The use of computer vision as a secondary or primary method for autonomous helicopter control has been discussed frequently in recent years, since the classical combination of GPS and INS cannot sustain autonomous flight in every situation [6, 7]. Several studies have demonstrated the effectiveness of approaches based on motion field estimation [8] and feature tracking [9] for visual odometry.

A global view of the main aspects related to the research field of computer vision for UAVs can be found in [10], along with the results of applying these techniques in several applications. In [11] Caballero et al. propose a work on visual odometry based on geometric homography. They use vision to compute a frame by frame odometry for Simultaneous Localization And Mapping (SLAM). A cooperative vision-based system for detecting and tracking objects of interest on or near the ground is presented in [12].

Vision based methods have been proposed even in the context of autonomous landing management: in [13] Merz et al. utilize inertial sensors combined with a single camera and a specially designed landing pad in order to be independent from GPS; in [14] Daquan and Hongyue estimate all the motion parameters of the aircraft by exploiting images of the airport runway lighting acquired by the airborne camera. The problem of how to land a helicopter in unknown terrain is tackled in [15]. A combination of feature tracking, motion estimation, and multi-frame planar parallax obtained by a stereo rig is utilized to estimate a digital elevation map of the terrain, allowing the determination of a safe landing area and the mapping of waypoints to a desired landing spot.

In general, we can say that the degree of autonomy that a helicopter can achieve depends on factors such as the ability to solve unexpected critical situations, e.g., loss of the GPS signal, and the ability to interact with the environment, e.g., using natural landmarks. A vision-based solution for autonomous waypoint navigation and safe


landing on unstructured terrain represents a strong improvement for both these abilities. Several techniques have been implemented, decoupling the problem of locating and tracking a highly contrasted, well known landmark, e.g., a classical helipad that can be easily identified by standard image processing techniques [16-18], from the problem of detecting and avoiding natural obstacles, e.g., steep slopes and rocks on a landing area [19-21]. The dependence on fixed, artificial landmarks and on optimal visibility conditions constitutes a strong limit for vision-based navigation in real-environment applications. In some works vision approaches based on moment descriptors are used; they impose no constraints on the design of the landing pad except that it should lie on a two dimensional plane, but artificial landmarks composed of polygons are indispensable [17]. Moreover, the weakness of that strategy appears in situations in which natural (e.g., due to debris or leaves) or artificial (e.g., due to engine smoke) occlusions can make the computation of moment descriptors very difficult.

In this paper a vision based approach for guidance and landing of a UAV is proposed. Differently from previous works in this field, this paper presents a vision-based system for guidance and safe autonomous landing of a helicopter based on the concept of reuse of local features extracted by the vision system. The vision based architecture, described in Section 3, is novel and, even if presented here in the case of helicopter navigation and landing, can be applied to any UAV, even fixed wing vehicles. The general idea is to guide the UAV using natural landmarks only, which can be selected from aerial images of the operating area. Such images can be already available beforehand or acquired during the flight. The features extracted from the natural landmarks can be chosen as navigation targets, allowing a vision-based feature tracker to compute the references for the flight control.

The main aim of such an approach is to provide a remote operator with an intuitive and direct control system, exploiting the benefits deriving from the use of visual information, which constitutes the friendliest interface for an operator to interact with the environment. Through the video stream the operator is able not only to supervise but also to directly lead the operations, assigning the desired navigation targets. Manoeuvres are then managed by the on board control system.

The main advantage of this approach consists in computing the displacement relative to the target rather than through a coordinate system fixed to the earth. This approach becomes particularly useful in all those situations in which the GPS signal is not available (e.g., when operating in urban-like environments or in case of fault) or the target coordinates are unknown. Moving the UAV using the visual feedback that the operator receives from the on-board camera can be really useful when the path cannot be planned in advance, as during search and rescue or investigation missions.

In the case of landing on unstructured areas, the ground is analyzed to verify whether the area is flat and proper for landing, since Digital Elevation Map (DEM) resolution cannot guarantee the possibility to identify a flat and safe target area. The landing task is managed by the vision system through the localization of the geometrical centre of mass of the matched features. Appropriate feedback is given to the autonomous landing control system.

In this paper studies have focused on the problem of natural landmark detection and recognition and safe landing area detection, which, as said before, are the main novelties of the paper; for this reason we focused the results on the vision based methods,


devoting less space to the validation of the control system. Results show the appropriateness of the vision based approach, which is also robust to occlusions, light variations and scene changes (i.e., reference images grabbed on different days and at different hours).

The organization of the paper is as follows. In Section 2 the problem of locating, tracking and inspecting an assigned landing area is formulated. In Section 3 the vision approach, a feature-based image-matching algorithm, is presented. Section 4 describes the hierarchical behaviour-based control architecture. Section 5 presents the results achieved. Finally, in Section 6 concluding remarks and directions for future improvements are discussed.

    2 Test-bed and Experimental Task

The experimental test-bed HELIBOT is a customized Bergen Twin Observer. It is a twin-cylinder, gasoline-powered radio-controlled model helicopter approximately 1.5 m in length and capable of lifting approximately 9 kg of payload. Onboard avionics include a PC/104-based computer stack running the QNX RTOS (800 MHz PIII CPU with 256 MB DRAM and a 256 MB flash disk), a GPS-41EBF3V receiver with WAAS and EGNOS (available in Europe) corrections and a Microstrain 3DM-GX1 Attitude Heading Reference System (AHRS). The helicopter is also equipped with a downward-pointing Nikon camera. The ground station mainly consists of a laptop, used to send high level control commands to the helicopter as well as to display telemetry from it. Data are sent using an advanced radio modem that transmits and receives on the 868 MHz band. Autonomous flight is achieved using the hierarchical behaviour-based control architecture presented in Section 4.

The testing of the algorithms was largely performed using a simulation framework developed by our research group. The framework is provided with a customizable HELIBOT model, allowing easy switching between simulated and real tasks.

As shown in Fig. 1, four operating modes are proposed for the high level control system. The approach to the operating area is managed with the aid of GPS information. The user gives the system the waypoint coordinates and the helicopter is autonomously guided to the goal. Then the operating mode is switched and the vision system enters the control loop. The user analyzes a high resolution image of the area and selects a target. The system extracts the appropriate features from the target image to define a natural landmark and starts to search for it in the image sequence coming from the onboard camera. Once features are detected and tracked, the system uses the image location of these features to generate image-based velocity references for the low level attitude and height controllers.
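The four-mode logic described above can be sketched as a tiny state machine; the mode names and switching conditions below are paraphrases of this description and of Fig. 1, not the exact labels used on HELIBOT.

```python
from enum import Enum, auto

class Mode(Enum):
    GPS_APPROACH = auto()      # waypoint navigation towards the operating area
    TARGET_SELECTION = auto()  # the operator picks a natural landmark in an image
    VISION_TRACKING = auto()   # features tracked, image-based velocity references
    LANDING = auto()           # vision-based landing on the selected area

def next_mode(mode, at_waypoint, target_chosen, features_tracked, land_requested):
    """Minimal high level mode supervisor (illustrative only)."""
    if mode is Mode.GPS_APPROACH and at_waypoint:
        return Mode.TARGET_SELECTION
    if mode is Mode.TARGET_SELECTION and target_chosen:
        return Mode.VISION_TRACKING
    if mode is Mode.VISION_TRACKING and features_tracked and land_requested:
        return Mode.LANDING
    return mode
```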

In the case of a programmed landing, the vision-based controller pilots the helicopter over the landing area and manages the landing. If the landing area is unknown, an appropriate analysis is performed to select the nearest flat area for a safe landing.

Figure 2 shows the flying area in which the experimental tests have been carried out. The vision algorithm is described in the next section.


    Fig. 1 Control system operating modes

    3 Vision Based Approach

The main difficulty in attaining fully autonomous robot navigation outdoors is the fast detection of reliable visual references, and their subsequent characterization as landmarks for immediate and unambiguous recognition. The vision approach presented in the following is based on the concept of natural landmark. This system allows a user to control the robot by choosing the navigation target in the images received from an on-board camera or from a high resolution aerial or satellite image. This form of navigation control is convenient for exploration purposes or when there is no previous map or knowledge of the environment, situations in which systems like GPS, even if available, become useless and can be discarded.

    Fig. 2 The Gruppo Aeromodellistico Rio Ete flying area


The user can decide the next target for the robot and change it as new views of the environment become available. This vision based navigation approach is also used to locate a landing area and to allow autonomous landing by giving feedback to the UAV control system.

Figure 3 depicts this concept, underlining the key idea that any high level task relies on the same features extracted at the lower layer. This ensures the time performance of the whole approach and the possibility of choosing different feature extractors, following research developments in computer vision, without depending on a single approach.

The vision based system also performs a scene analysis to obtain a first rough classification of the safe landing area. All elaborations are based on the same feature set extracted from the current image and elaborated for different purposes: navigation, waypoint identification, landing area detection, safe landing area classification. Processing is based on high level analysis of point features, which in this approach are extracted using SIFT [22], but which could be extracted using any feature extraction algorithm able to detect stable points and to provide a robust descriptor.
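As a concrete illustration of what such a feature extraction layer computes, the sketch below uses the OpenCV implementation of SIFT together with a standard ratio test on descriptor matches; the library choice and the 0.75 ratio are assumptions made here for illustration, not details taken from the paper.

```python
import cv2

def extract_and_match(img_ref, img_cur, ratio=0.75):
    """Detect SIFT keypoints in two grayscale images and return matched
    point pairs ((x, y) in the reference image, (x', y') in the current one)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_ref, None)
    kp2, des2 = sift.detectAndCompute(img_cur, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)

    pairs = []
    for m in candidates:
        if len(m) < 2:
            continue
        best, second = m
        if best.distance < ratio * second.distance:  # keep distinctive matches only
            pairs.append((kp1[best.queryIdx].pt, kp2[best.trainIdx].pt))
    return pairs
```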

    3.1 Feature Detection for Natural Landmark Tracking

The system should be able to detect natural landmarks in unstructured outdoor environments while the robot is operating. These natural landmarks must provide reliable matching results. To overcome the inherent difficulties of matching images of an object or a scene captured outdoors during motion, the description of landmarks must be based on invariant image features. These features must be distinctive and invariant, or partially invariant, to translation, rotation, scale, 3D camera viewpoint and outdoor illumination conditions. Furthermore, robustness to image noise, occlusion, background changes and clutter should be granted while maintaining near real-time feature extraction and matching performance.

To address all the aspects discussed above, an improved version of the Scale Invariant Feature Transform (SIFT), developed by Lowe [22, 23], is used. SIFT is invariant to image scale and rotation, and partially invariant to changes in illumination and 3D camera viewpoint.

Fig. 3 The vision based architecture: a feature extraction layer (SIFT, SURF, segmented color regions, etc.) feeds a semantic layer comprising target matching, localization, motion estimation, mapping, mission planning, landing area analysis, landing area classification and motion detection


Fig. 4 Typical occlusions caused by engine smoke; the camera is mounted on the bottom of the UAV and points toward the terrain

    3.2 Optical Flow for Safe Landing Area Classification

Surface reconstruction can be defined as the process of inferring a mesh of interconnected points representing a three-dimensional surface. The surface is often assumed to be rigid and fixed. Ideally, a computer vision system should be able to reconstruct objects or environments from a sequence of pictures.

In the UAV application presented in this paper we are interested only in depth estimation, to determine whether the landing area is safe for landing.

To reach this purpose we use two different approaches, based on typical helicopter movements. The first case assumes that images are taken at a constant height and with a pure translational motion; this restriction can be overcome since the control system can assure those flight conditions during landing area inspection. The second approach assumes that the helicopter is approaching the landing phase and is maintaining its position while decreasing the altitude to complete a pure vertical landing; also in this case we use the feature-based vision system to inspect the landing area, relying on the same set of local features. This motion is quite typical too and can be easily managed by the control system.

In the first case we use optical flow for depth perception, which allows safe autonomous landing. The system is based on feature-based surface reconstruction using an optical flow computation. This approach chooses image features that are stable under large motions. Thus, a sparse set of highly confident depth estimates is obtained. If the surface is considered stable enough, the area is used for the autonomous landing manoeuvre.

First, an image sequence of the target landing area is taken by the camera mounted on the UAV and the SIFT features are used to estimate the optical flow between two successive frames; then, structure (in particular depth) information of the target landing area relative to the UAV is recovered; finally, a simple classifier based on a binary threshold is used to decide whether the surface has variable depth and, consequently, whether it can be used as a landing area.

Experiments using both computer simulated images and real video images demonstrate the correctness and effectiveness of our method. In detail, the optical flow vector field is evaluated using a simplified approach: the optical flow is computed on the basis of SIFT features that have been matched in two consecutive frames captured


by the UAV during a translational flight over the area. The translational optical flow is then filtered on the basis of the flow vector orientation and, finally, the variance of the optical flow magnitude is computed to measure surface flatness. A binary threshold is used to decide whether the surface is suitable for a safe landing of the UAV. The threshold adopted is currently determined experimentally; a specific study is planned to relate it to some flight parameters in order to obtain a dynamic estimate.

Feature extraction and matching algorithms such as SIFT are useful to estimate the flatness of a surface, especially under the translational conditions introduced above. If $F_k = \{(x_i, y_i, x'_i, y'_i),\ i = 1, \ldots, N_k\}$ is the set of $N_k$ features at step $k$, formed by quadruples of matched features, we estimate the flatness of the surface by applying a simple assumption.

Assumption 1 If a vehicle is moving with pure translational motion, the flatness of the underlying surface can be assessed by comparing the variance of the distances between two or more sets of matched features; low variance corresponds to safe areas, while large variations are typical of unsafe regions.

We define $d_{ik} = \|(x_i, y_i) - (x'_i, y'_i)\|_j$ as the distance (norm $j$, with $j = 2$) between two matched features at step $k$; the flatness of the surface is calculated as:

$$\mathit{flatness}_k = \mathrm{std}\left(\{d_{nk}\}_{n=1}^{N_k}\right)$$

where std is the standard deviation operator applied to a set of data. The classifier is a binary threshold on the $\mathit{flatness}_k$ value, which operates in the following way:

$$\mathit{class}_k = \begin{cases} \text{safe} & \text{if } \mathit{flatness}_k \le \tau \\ \text{unsafe} & \text{otherwise} \end{cases}$$

where $\tau$ is the experimentally determined threshold mentioned above.
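A minimal NumPy sketch of this first classifier is given below. Matched features are assumed to be passed as (x, y, x', y') quadruples; the orientation tolerance and the threshold value are illustrative placeholders rather than the values tuned on HELIBOT.

```python
import numpy as np

def classify_translational(pairs, angle_tol=np.deg2rad(20.0), threshold=50.0):
    """Flatness test for a pure translational pass over the landing area.

    pairs: array of shape (N, 4) with rows (x, y, x_prime, y_prime).
    Returns True when the area is classified as safe (flat)."""
    pairs = np.asarray(pairs, dtype=float)
    flow = pairs[:, 2:4] - pairs[:, 0:2]            # sparse optical flow vectors
    angles = np.arctan2(flow[:, 1], flow[:, 0])

    # crude estimate of the dominant flow direction; discard inconsistent vectors
    dominant = np.median(angles)
    diff = np.angle(np.exp(1j * (angles - dominant)))  # wrap to (-pi, pi]
    keep = np.abs(diff) < angle_tol

    d = np.linalg.norm(flow[keep], axis=1)          # the d_nk, Euclidean norm
    flatness = np.std(d)                            # flatness_k = std of the d_nk
    return flatness < threshold
```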


In the second case, that of a vertical landing, the distances between matched features grow as the helicopter descends; for this reason we compute the variance of the vector of these variations as a measure of surface flatness. High variance leads to the classification of unsafe areas, while a very low variance gives us a feedback on the flatness of the landing area. From another point of view, this method evaluates a kind of radial movement of the features while the helicopter is landing vertically; it is the same case as a zoom aligned with the centre of the image that enlarges every object in the view. This heuristic evaluator is very quick, even if not exact, due to the possible presence of false feature matches.

An assumption also made in this heuristic surface classification method is that a rather large number of features is extracted almost uniformly from the current helicopter view; the assumption is not restrictive for any of the feature extraction methods proposed here because, usually, they are able to extract hundreds of features.

A short pseudocode description of the approach follows.

    3.2.1 Slope Detection Algorithm

1. Search for feature matchings in a pair of consecutive images.
2. Calculate the distances among the features in the first image (matrix D1).
3. Calculate the distances among the features in the second image (matrix D2).
4. Detect wrong matchings (if d1_{i,j} > d2_{i,j}).
5. Filter out inconsistent data.
6. Calculate the normalized variations in the distances among features.
7. Calculate the mean and the variance of the previous results.
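The steps above can be sketched with NumPy as follows; the distance matrices and the discard rule mirror the pseudocode, while the final safety threshold is a hypothetical value that would be tuned experimentally.

```python
import numpy as np

def slope_statistics(pts_prev, pts_curr):
    """Slope-detection heuristic for a vertical descent.

    pts_prev, pts_curr: arrays of shape (N, 2) with matched feature positions
    in two consecutive frames (same row index = same feature).
    Returns the mean and variance of the normalized growth of the
    inter-feature distances; a low variance indicates a flat (safe) area."""
    p1 = np.asarray(pts_prev, dtype=float)
    p2 = np.asarray(pts_curr, dtype=float)

    def pairwise(p):                                  # D1 and D2 distance matrices
        return np.linalg.norm(p[:, None, :] - p[None, :, :], axis=2)

    iu = np.triu_indices(len(p1), k=1)                # each feature pair once
    d1, d2 = pairwise(p1)[iu], pairwise(p2)[iu]

    # during the descent distances should grow (d2 >= d1); pairs violating
    # this are treated as wrong matches and filtered out
    ok = (d2 >= d1) & (d1 > 1e-6)
    growth = (d2[ok] - d1[ok]) / d1[ok]               # normalized variations

    return growth.mean(), growth.var()

# usage sketch: classify the area as safe when the variance stays small
# mean_g, var_g = slope_statistics(p_prev, p_curr)
# safe = var_g < 1e-10   # illustrative threshold, to be tuned experimentally
```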

The two methods lead to a correct and conservative classification of the area, choosing only flat surfaces as safe areas. Results, even if preliminary, show the feasibility of the proposed method. The time performance of the algorithm is good because it does not add time consuming evaluations to the feature extraction process.

    4 Control Strategy

Unmanned Aerial Vehicles, in particular helicopters such as HELIBOT, are complex flying machines. The mathematical model of a reduced scale helicopter has been deeply studied in recent years [26]; the various proposed approaches require a large set of well-known parameters (e.g., geometrical, aerodynamic, mechanical) that in the real case are estimated or measured [27]. Simulation environments are useful tools that support the design process of control laws and task management [27]. Given an accurate model of the system, simulations save time and money, due to the high costs of UAVs. The introduction of a vision sensor that aids autonomous navigation and landing requires the development of a hierarchical control architecture. The use of a hierarchical control structure is a widely adopted approach for UAVs [28-30], since it allows the two main aspects to be de-coupled:

– High-level control, represented by strategy and task management;
– Low-level control, which translates the assigned behaviour to the actuator controllers.

Each level is supported by a set of sensors and/or actuators. In Fig. 5 a graphical representation of the hierarchical structure implemented in HELIBOT is shown.


Fig. 5 The hierarchical structure allows problems to be de-coupled and solved at different levels by specialized modules

The vision sensor is used to track features of interest in the field of view. To improve robustness to loss or degradation of the GPS signal, the information retrieved by the computer vision algorithms is fused with position data obtained from the GPS (with EGNOS corrections) and inertial data from the AHRS.

The top level is represented by the Mission/Task(s) Assignment module, where the interaction between the UAV and the human operator is maximum. Task management is part of the Flight Management System (FMS) [27] and generates the path to accomplish a specific task. The last level manages the control of the actuators. The complexity of the mathematical model of the helicopter justifies the use of model-free controllers. A triple nested PID control scheme has been implemented for HELIBOT, as shown in Fig. 6.
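The triple-nested structure can be illustrated with the following sketch, where an outer position loop produces a velocity reference, a middle velocity loop produces an attitude reference and an inner attitude loop produces the actuator command; the gains, limits and discrete-time form are placeholders, not the values identified for HELIBOT.

```python
class PID:
    """Minimal discrete PID with output saturation."""
    def __init__(self, kp, ki, kd, out_lim, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_lim, self.dt = out_lim, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(-self.out_lim, min(self.out_lim, u))

# illustrative nesting: position error -> velocity ref -> attitude ref -> actuator
pos_pid = PID(kp=0.8, ki=0.0,  kd=0.1, out_lim=2.0, dt=0.02)   # m   -> m/s
vel_pid = PID(kp=0.5, ki=0.05, kd=0.0, out_lim=0.3, dt=0.02)   # m/s -> rad
att_pid = PID(kp=4.0, ki=0.2,  kd=0.5, out_lim=1.0, dt=0.02)   # rad -> command

def control_step(pos_ref, pos, vel, att):
    vel_ref = pos_pid.step(pos_ref - pos)
    att_ref = vel_pid.step(vel_ref - vel)
    return att_pid.step(att_ref - att)
```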

As stated in the previous section, where the vision-based approach has been presented, common approaches to locating the landing area or exploring a zero-knowledge area can fail due to non-robust techniques (e.g., with respect to scale, rotation, changes of light conditions, point of view).

Fig. 6 The nested PID controller scheme separates the problem of minimizing the position error, assuring moderate speed when the error is high and low speed when the error tends to be small


The use of landmarks, natural or artificial, gives the Task Management module the capability to control the vehicle even in the presence of faults, such as loss or degradation of the GPS signal. The question is when and how to switch from the standard behaviour, basically driven by standard sensors, to the vision aided behaviour. The challenge is to ensure stability during the switching. Simulation stages are useful to predict the dynamics of the helicopter during the switch.

    4.1 Switching Control Strategy by Behaviour

Switching control is a common approach when one controller is not sufficient to cover the dynamics of a plant/process. A set of controllers, each tuned for a particular operational condition, often guarantees better performance in terms of precision with respect to external references and robustness to parameter changes. In the considered case, the switch occurs when there is a change of task. Standard controllers used for navigation are not suitable for fine tuning, as required in the landing task. In fact, during navigation high accuracy and precision in terms of position, velocity and attitude are not required, while in a critical task such as landing a high accuracy on position is required and the use of dedicated sensors such as vision systems and range finders (e.g., laser or sonar) becomes necessary.

In Fig. 7 the proposed switching architecture for attitude is shown. The structure of the tail controller and of the engine power controller is similar, but is not reported here for the sake of brevity. The main difference from the control scheme of Fig. 6 is the introduction of a supervisor and of a vision based controller. The supervisor switches the controller, taking into account that a switching condition must be satisfied to avoid the loss of stability. References from the vision based controller are imposed without a velocity loop because the speed in this phase is low.

Switching Condition A switch at time k can occur if and only if the speed (translational and angular) tends to zero.
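In code, the supervisor's check could be as small as the sketch below; the numerical thresholds standing in for "tends to zero" are assumptions.

```python
def switching_allowed(v_trans, v_ang, v_max=0.2, w_max=0.05):
    """Allow the controller switch only when the translational speed (m/s)
    and the angular speed (rad/s) are both close to zero."""
    return abs(v_trans) < v_max and abs(v_ang) < w_max
```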

The condition is a bit conservative, but it is reasonable in the context of landing or low speed feature tracking. The advantage of low speed during the transition is the reduced magnitude of the bump between the old references and the new ones.

Fig. 7 Switching architecture for fine tuning tasks such as landing or low speed feature tracking


The vision based controller provides fine planar alignment between the helicopter and the features found in the landing area, and the height over ground is also regulated. The vision based controller acts as a quasi-proportional controller, as shown in Fig. 8, where a dead zone is introduced to overcome chattering problems induced by small oscillations around the equilibrium. Saturation levels, both for maximum and minimum changes, avoid imposing dangerous references (e.g., high values of collective can cause the helicopter to overturn).

An integral action with saturation can be added to increase the steady state precision in terms of position error. A more complex scheme that makes use of fuzzy rules is under development.
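A sketch of such a quasi-proportional law with dead zone and saturation is shown below; the gain, dead-zone width and saturation level are illustrative placeholders.

```python
def attitude_variation(dx, gain=0.01, dead_zone=5.0, u_max=0.05):
    """Map the offset dx (pixels) between helicopter and target, as provided
    by the vision algorithms, to a bounded variation of the attitude reference."""
    if abs(dx) <= dead_zone:                  # dead zone suppresses chattering
        return 0.0
    shifted = dx - dead_zone if dx > 0 else dx + dead_zone
    u = gain * shifted
    return max(-u_max, min(u_max, u))         # saturation avoids dangerous references
```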

    4.2 Ground Effect Estimation during Low Height Flights and Landing Flights

Ground effect is an aerodynamic effect that becomes significant when the helicopter flies at altitudes comparable with the main rotor span. The main effects related to this phenomenon are a significant increase of lift and a reduction of induced velocity. The problem is well known and has been deeply investigated, especially after the First World War, when the military forces understood the tactical importance of helicopters due to the hovering capability of these vehicles.

In recent years many approaches have been proposed to precisely model the unsteady aerodynamics of rotorcraft In Ground Effect (IGE). Several dependences (e.g., blade loading, blade aspect ratio, twist) are weak and not significant for control purposes. Well-known simple models relate the rotor span normalized altitude (height/rotor span) to an increase of thrust or, alternatively, a reduction of induced velocity. In Fig. 9 the behaviour of the $T/T_{\infty}$ ratio is reported, where $T_{\infty}$ is the thrust out of ground effect.

Taking into account the ground effect, the altitude controller is improved by introducing the dynamics of thrust when the helicopter is approaching the ground. Following the control scheme previously introduced (Fig. 6), two controllers must be revised: the collective controller and the engine gas controller. They are responsible for varying the helicopter altitude when the attitude is close to zero. The gas controller tends to maintain the same main rotor speed and the collective controller is varied to control the thrust.

Fig. 8 Transfer function between the input from the vision algorithms (difference x in position along an axis between helicopter and target) and the variation u of the attitude reference


Fig. 9 The considered ground effect on thrust: the $T/T_{\infty}$ ratio plotted against the normalized altitude $z/R$

In the case of IGE, a gain scheduling approach is used to tune the controller gain in real time. In this condition the main rotor speed is slightly reduced owing to the increase of lift. The gain is expressed by:

$$K = \bar{K}\,\alpha(k_G), \qquad 0 \le \alpha \le 1$$

where $\bar{K}$ is the nominal gain value in the Out of Ground Effect (OGE) case, $\alpha$ a weighting factor and $k_G$ is related to the variation of lift, calculated at each step by the following formula:

$$k_G = \frac{1}{0.9926 + 0.03794\left(\frac{2R}{z}\right)^{2}}$$

A graphical representation of the gain scheduling approach is shown in Fig. 10. Gain scheduling can be applied to both the collective and the gas controllers.

Fig. 10 Gain scheduling applied to the PID that controls the collective to contrast the ground effect
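Assuming, for illustration, that the weighting simply multiplies the nominal OGE gain by k_G (so that the gain shrinks as the extra lift grows near the ground), the scheduling can be sketched as follows; this is an illustrative reading of the scheduling law, not the exact implementation used on HELIBOT.

```python
def ground_effect_factor(z, rotor_radius):
    """Correction factor k_G from the relation quoted above
    (z: altitude, rotor_radius: main rotor radius R)."""
    z = max(z, 1e-3)                          # avoid division by zero at touchdown
    return 1.0 / (0.9926 + 0.03794 * (2.0 * rotor_radius / z) ** 2)

def scheduled_gain(k_nominal, z, rotor_radius, alpha=1.0):
    """Gain-scheduled collective gain: the nominal OGE gain weighted
    by alpha and by k_G (product form assumed here for illustration)."""
    return k_nominal * alpha * ground_effect_factor(z, rotor_radius)
```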


    5 Experimental Tests

This section is organized as follows: first, results about simulations of the control system using the visual feedback as input in a virtual scenario are presented. Then, simulation results of the ground effect during low height flights and landing flights are described. Finally, the visual feedback is evaluated in a real scenario, presenting results about navigation and landing and taking into account also the safe landing area detection.

    5.1 Simulation of Vision Sensor Feedback in a Virtual Scenario

As mentioned above, simulation is needed to evaluate the behaviour of the helicopter in transition stages. All the simulations have been carried out on a complete MATLAB/Simulink model. The virtual scenario shown in Fig. 11, developed in VRML, increases the realism of the simulation.

In this case, it is possible to simulate a vision sensor dynamically by exploiting the camera property of VRML. More complex scenarios with real textures can also be used to stress the system further. All other sensors, such as GPS and AHRS, are integrated in the developed simulator, allowing a full test of the system. The vision algorithms, based on feature extraction and matching, run synchronously with the simulation of the helicopter dynamics.

Fig. 11 (a) The initial frame, when the vision system detects features of interest such as a helipad or other known structures; (b) tracking is active and the helicopter performs fine tuning to reach the target; (c) the helicopter reduces its height over ground; (d) simulated scenario in VRML


Fig. 12 Graphical representation of position along the x, y, z axes (the switch instant is marked)

In Figs. 12 and 13 the results of a simulation in the developed multi-agent simulator are shown. The simulation shows a switch from normal behaviour (waypoint following) to landing in a safe area detected by the vision algorithm.

Switching is not critical because the relevant change involves the heading of the helicopter, as shown in Fig. 13; a low speed transition, which guarantees stability, is maintained as required.


Fig. 13 3D graph of the trajectory followed by the helicopter during the mission; the switch introduces a singularity only in the change of orientation along the yaw axis, which does not imply a loss of stability


The switch is triggered by the vision controller that detects the landing area; this high level information is then used by the supervisor, which takes the decision to switch from standard to vision-based control.

    5.2 Simulation of Ground Effect during Low Height Flights and Landing Flights

In Figs. 14, 15 and 16 results for an interesting scenario are presented. The scenario is characterized by the presence of a switch that simulates a trigger from the vision sensor system to start an emergency/urgent landing in a safe area. The helicopter then quickly reduces its altitude and receives the command to fly at very low altitude (0.25 m) to test the performance of the controller during IGE hovering.

The helicopter switches from standard navigation (shown with a red line in Fig. 16) to the landing task; in the last phases the rate of descent is low, according to the desired behaviour. Near-ground flights are suitable for special tasks such as mine detection, where the helicopter is equipped with a Synthetic Aperture Radar (SAR); in this case it is essential that the distance (and speed) between the antenna and the ground is minimal.

    Fig. 14 Trajectory of helicopter during entire mission


Fig. 15 Diagram of positions (horizontal and vertical); the bottom graph reports the trend of the corrective factor used to contrast the ground effect


5.3 Vision Results in a Real Scenario

For the vision part, images of 320 x 240 pixels, shot by a Nikon camera, are used. These images can be processed at five frames per second. In the case of autonomous landing this is also the frequency of the feedback input to the control system. The vision system has been tested in different situations: the tracking of a natural landmark for waypoint based navigation, the landing using a natural landmark while giving feedback to the control system, and the detection of a safe landing area.

Figures 17 and 18 show the matching between frames while the UAV is flying over natural landmarks. This test was performed to show the robustness of the vision system when the UAV is far from the landmark and is moving at high speed. Blue lines are the results of the feature matching used to track the waypoints. In Fig. 17 the smoke produced by the engine of the UAV is also visible, which causes several partial occlusions; the system behaves well also in these cases, due to the high number of features being tracked. In Fig. 19 an example showing the rotational invariance of SIFT matching is presented.

The next test shows the feasibility of using the feature tracking system to guide an autonomous vision based landing. Data collected from this test are also used for a simulation of the safety analysis of the landing area, at the end of this section. The altitude feedback is given to the control system by evaluating the centre of mass of the matched features with respect to the centre of the current view. Figure 20 shows a sequence of a landing using the same feature-based approach and a natural landmark.
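A sketch of that feedback computation is given below: the centre of mass of the matched feature locations is compared with the centre of the current view, and the pixel offset is what the landing controller receives; the variable names are hypothetical.

```python
import numpy as np

def target_offset(matched_pts, image_shape):
    """Pixel offset between the centre of mass of the matched features and
    the centre of the current view.

    matched_pts: array of shape (N, 2) with (x, y) feature positions.
    image_shape: (height, width) of the current frame."""
    centroid = np.mean(np.asarray(matched_pts, dtype=float), axis=0)
    centre = np.array([image_shape[1] / 2.0, image_shape[0] / 2.0])
    return centroid - centre        # fed back to the landing control system
```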


Fig. 16 Heading (yaw) of the helicopter during the mission; red lines mark the significant instants (switch and fine alignment)

The final tests were performed to evaluate the optical flow approach for the safe landing area classification system on the real UAV. Firstly, images taken during linear flights at constant height over the ground (first case described in Section 3) have been analyzed.

Fig. 17 Tracking of the landing area with respect to the given one, using a natural landmark and dealing with smoke occlusions


Fig. 18 Matching between two frames using a natural landmark for landmark localization and tracking

Two different cases of depth estimation and surface classification are reported in the following. In Fig. 21 two illustrative sequences of images taken flying over an unsafe area (Fig. 21a) and a safe area (Fig. 21b) are shown. Every pair of consecutive images was analyzed. A variance analysis of the geometrical characteristics of the blue lines obtained from SIFT feature matching was performed and a representative value computed. Figure 22 shows two different situations: the first matching (unsafe area) presents a variance parameter almost one hundred times greater than the second one (safe area). The variance parameters in Fig. 22a and b are 382.06 and 2.78, respectively, with an experimentally determined threshold of 50.

A second test (the case of vertical landing described in Section 3) was also performed in a real scenario. The radial movement of features, shown in Fig. 23, is the basis of the heuristic that evaluates the kind of surface and, according to an experimental threshold, classifies the safe landing area.

Fig. 19 An example showing the rotational invariance of SIFT matching


    Fig. 20 A sequence of target identification and tracking used for autonomous navigation and landing

Figure 23 shows the case of the helicopter approaching a safe (Fig. 23a) and an unsafe (Fig. 23b) landing area. In the first case the control system allows the helicopter to land. In the second case the safe landing area classification gives back to the supervisor the command to leave the area because it is unsafe.

Figure 24 compares the parameters of the safe and the unsafe area. Evaluating the variance of the distances among all the extracted features, frame by frame during the vertical landing, is a good and fast methodology. The heuristic adds no computational time with respect to the feature extraction process, which is much more expensive than all the post-processing proposed and presented here. This reinforces the general idea of re-using the same features extracted for landing area detection and tracking for other purposes, such as safe landing area classification.

Other preliminary results were obtained using SURF and other variants of these approaches, previously cited. All lead to very similar results and here only SIFT results are presented.


Fig. 21 Two illustrative sequences of images taken flying over an unsafe area (a) and a safe area (b), respectively

The proposed approach is totally general and applicable to any kind of local feature extractor. The real scenario results obtained with the use of HELIBOT are encouraging, and current work is on the extension of this approach to motion analysis of the landing area (e.g., avoiding streets or flat areas used by people).

    6 Conclusions and Future Works

In this paper the design and implementation of a vision-based landing and navigation algorithm for an autonomous helicopter has been presented. A hierarchical behaviour-based controller exploits alternately GPS and vision input data, allowing the helicopter to navigate from an initial position to a final position in a partially known environment, to locate a landing target and to land on it.

Fig. 22 SIFT feature matching analysis. Variance parameters are 382.06 (a) and 2.78 (b)


Fig. 23 Frame by frame matching of a safe area (a) and of an unsafe landing region (b). The evaluated parameters are reported in the next figure

Fig. 24 Mean (dashed lines) and variance (continuous lines) parameters used for the classification; the blue continuous line represents a flat area (mean 3.07e-06, variance 2.82e-12), while the red one is the variance of the slope, an unsafe area for landing (mean 4.33e-06, variance 7.89e-11)


The vision system allows the definition of a target area from a high resolution aerial or satellite image, and hence the definition of the waypoints of the navigation trajectory or of the landing area. Two algorithms for safe landing area detection are also proposed, based on optical flow analysis. Results show the appropriateness of the vision based approach, which does not require any artificial landmark and is robust to occlusions and light variations.

Future work is oriented to improving the robustness of the whole system and, specifically, some important aspects of the landing task concerning safety. The possibilities related to the optical flow techniques presented in the last part of the paper will be explored to obtain not only the simple evaluation of the landing area flatness, but also the capacity to actively search for a safe zone. The first objective will be the detection of safe and unsafe areas within the same image.

The applications of such a system are notable, ranging from exploration and rescue to target tracking and data acquisition.

Acknowledgements The authors gratefully acknowledge Antonio Pinelli, Biagio Ambrosio and Alberto Cardinali for their essential support.

    References

1. Valavanis, K.: Advances in Unmanned Aerial Vehicles: State of the Art and the Road to Autonomy. Intelligent Systems, Control and Automation: Science and Engineering, vol. 33 (2007)

2. Bejar, M., Ollero, A., Cuesta, F.: Modeling and control of autonomous helicopters. In: Advances in Control Theory and Applications. Lect. Notes Control Inf. Sci., vol. 353 (2007)

3. Lee, D., Jin Kim, H., Sastry, S.: Feedback linearization vs. adaptive sliding mode control for a quadrotor helicopter. Int. J. Control Autom. Syst. 7(3), 419-428 (2009)

4. Bernard, M., Kondak, K., Hommel, G.: Framework for development and test of embedded flight control software for autonomous small size helicopters. In: Embedded Systems: Modeling, Technology, and Applications, pp. 159-168 (2006)

5. Monteriù, A., Asthana, P., Valavanis, K., Longhi, S.: Model-based sensor fault detection and isolation system for unmanned ground vehicles: theoretical aspects (part I and II). In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (2007)

6. Conte, G., Doherty, P.: An integrated UAV navigation system based on aerial image matching. In: IEEE Aerospace Conference, pp. 1-10 (2008)

7. Luo, P., Pei, H.: An autonomous helicopter with vision based navigation. In: IEEE International Conference on Control and Automation (2007)

8. He, Z., Iyer, R.V., Chandler, P.R.: Vision-based UAV flight control and obstacle avoidance. In: American Control Conference (2006)

9. Mondragon, I.F., Campoy, P., Correa, J.F., Mejias, L.: Visual model feature tracking for UAV control. In: IEEE International Symposium on Intelligent Signal Processing, WISP (2007)

10. Campoy, P., Correa, J.F., Mondragón, I., Martínez, C., Olivares, M., Mejías, L., Artieda, J.: Computer vision onboard UAVs for civilian tasks. J. Intell. Robot. Syst. 54(1-3), 105-135 (2009)

11. Caballero, F., Merino, L., Ferruz, J., Ollero, A.: Vision-based odometry and SLAM for medium and high altitude flying UAVs. J. Intell. Robot. Syst. 54(1-3), 137-161 (2009)

12. Bethke, B., Valenti, M., How, J.: Cooperative vision based estimation and tracking using multiple UAVs. In: Advances in Cooperative Control and Optimization. Lect. Notes Control Inf. Sci., vol. 369, pp. 179-189 (2007)

13. Merz, T., Duranti, S., Conte, G.: Autonomous landing of an unmanned helicopter based on vision and inertial sensing. In: Experimental Robotics IX. Springer Tracts in Advanced Robotics, vol. 21, pp. 343-352 (2006)

14. Daquan, T., Hongyue, Z.: Vision based navigation algorithm for autonomic landing of UAV without heading & attitude sensors. In: Proceedings of the Third International IEEE Conference on Signal-Image Technologies and Internet-Based Systems, pp. 972-978 (2007)

15. Meingast, M., Geyer, C., Sastry, S.: Vision based terrain recovery for landing unmanned aerial vehicles. In: 43rd IEEE Conference on Decision and Control (CDC), vol. 2, pp. 1670-1675 (2004)

16. Shakernia, O., Vidal, R., Sharp, C.S., Ma, Y., Sastry, S.S.: Multiple view motion estimation and control for landing an unmanned aerial vehicle. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 2793-2798 (2002)

17. Saripalli, S., Montgomery, J., Sukhatme, G.: Visually-guided landing of an unmanned aerial vehicle. IEEE Trans. Robot. Autom. 19(3), 371-381 (2003)

18. Saripalli, S., Sukhatme, G.S.: Landing a helicopter on a moving target. In: IEEE International Conference on Robotics and Automation (ICRA), pp. 2030-2035 (2007)

19. Garcia-Padro, P.J., Sukhatme, G.S., Montgomery, J.F.: Towards vision-based safe landing for an autonomous helicopter. Robot. Auton. Syst. 38(1), 19-29 (2002)

20. Johnson, A., Montgomery, J., Matthies, L.: Vision guided landing of an autonomous helicopter in hazardous terrain. In: Proceedings of the IEEE International Conference on Robotics and Automation (2005)

21. Templeton, T., Shim, D.H., Geyer, C., Sastry, S.: Autonomous vision-based landing and terrain mapping using an MPC-controlled unmanned rotorcraft. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1349-1356 (2007)

22. Se, S., Lowe, D., Little, J.: Vision-based mobile robot localization and mapping using scale-invariant features. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 2051-2058 (2001)

23. Lowe, D.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision 60(2), 91-110 (2004)

24. Frontoni, E., Zingaretti, P.: Adaptive and fast scale invariant feature extraction. In: Second International Conference on Computer Vision Theory and Applications, Workshop on Robot Vision (2007)

25. Frontoni, E., Zingaretti, P.: Feature extraction under variable lighting conditions. In: CISI (2006)

26. Bramwell, A.R.S., Done, G., Balmford, D.: Bramwell's Helicopter Dynamics, 2nd edn. Butterworth-Heinemann (2001)

27. Mancini, A., Cesetti, A., Iuale, A., Frontoni, E., Zingaretti, P., Longhi, S.: A framework for simulation and testing of UAVs in cooperative scenarios. In: International Symposium on Unmanned Aerial Vehicles (UAV08) (2008)

28. Montgomery, J.: Learning helicopter control through teaching by showing. Ph.D. Thesis, School of Computer Science, USC (1999)

29. Mataric, M.J.: Behavior-based control: examples from navigation, learning and group behavior. J. Exp. Theor. Artif. Intell. (Special Issue on Software Architecture for Physical Agents) 9(2-3), 67-83 (1997)

30. Shim, D.H., Kim, H.J., Sastry, S.: Hierarchical control system synthesis for rotorcraft-based unmanned aerial vehicles. In: AIAA Guidance, Navigation and Control Conference and Exhibit (2000)