
Real-Time Indoor Autonomous Vehicle Test Environment

JONATHAN P. HOW, BRETT BETHKE, ADRIAN FRANK, DANIEL DALE, and JOHN VIAN

A TESTBED FOR THE RAPID PROTOTYPING OF UNMANNED VEHICLE TECHNOLOGIES

Unmanned aerial vehicles (UAVs) are becoming vital warfare and homeland-security platforms because they significantly reduce costs and the risk to human life while amplifying warfighter and first-responder capabilities. These vehicles have been used in Iraq and during Hurricane Katrina rescue efforts with some success, but there remains a formidable barrier to achieving the vision of multiple UAVs operating cooperatively. Numerous researchers are investigating systems that use multiple autonomous agents to cooperatively execute these missions [1]–[4]. However, little has been said to date about how to perform multiday autonomous system operations. Autonomous mission systems must balance vehicle capability, reliability, and robustness issues with task and mission goals when creating an effective strategy. In addition, these systems have the added responsibility of interacting with numerous human operators while managing both high-level mission goals and individual tasks.

To investigate and develop unmanned vehicle systems technologies for autonomous multiagent mission platforms, we are using an indoor multivehicle testbed called the Real-time indoor Autonomous Vehicle test ENvironment (RAVEN) to study long-duration multivehicle missions in a controlled environment. Normally, demonstrations of multivehicle coordination and control technologies require that multiple human operators simultaneously manage flight hardware, navigation, control, and vehicle tasking. However, RAVEN simplifies all of these issues to allow researchers to focus, if desired, on the algorithms associated with high-level tasks. Alternatively, RAVEN provides a facility for testing low-level control algorithms on both fixed- and rotary-wing aerial platforms. RAVEN is also

    Digital Object Identifier 10.1109/MCS.2007.914691

    1066-033X/08/$25.002008IEEE APRIL 2008 IEEE CONTROL SYSTEMS MAGAZINE 51


being used to analyze and implement techniques for embedding the fleet and vehicle health state (for instance, vehicle failures, refueling, and maintenance) into UAV mission planning. These characteristics facilitate the rapid prototyping of new vehicle configurations and algorithms without requiring a redesign of the vehicle hardware. This article describes the main components and architecture of RAVEN and presents recent flight test results illustrating the applications discussed above.

    BACKGROUND

Various research platforms have been developed to study UAV concepts [5]–[23]. The BErkeley AeRobot (BEAR) project features a fleet of commercially available rotary-wing and fixed-wing UAVs retrofitted with special electronics. These vehicles have been used in applications such as autonomous exploration in unknown urban environments and probabilistic pursuit-evasion games [20], [22]. In the Aerospace Controls Laboratory at the Massachusetts Institute of Technology (MIT), an outdoor testbed consisting of a fleet of eight fixed-wing autonomous UAVs provides a platform for evaluating coordination and control algorithms [12], [17]. Similarly, researchers in the Multiple AGent Intelligent Coordination and Control (MAGICC) Lab at Brigham Young University have built and flown small fixed-wing UAVs to perform multivehicle experiments outdoors [19]. These planes are launched by hand and track waypoints autonomously.

Likewise, the DragonFly project at Stanford University's Hybrid Systems Laboratory uses a heavily modified fixed-wing model aircraft [7], [21]. The objective of this platform is to provide an inexpensive capability for conducting UAV experiments, ranging from low-level flight control to high-level multiple-aircraft coordination. Similarly, to demonstrate new concepts in multiagent control on a real-world platform, the Hybrid Systems Laboratory developed the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC). STARMAC is a multivehicle testbed consisting of two quadrotor UAVs that autonomously track a given waypoint trajectory [10]. Quadrotors are used in STARMAC due to their convenient handling characteristics, low cost, and simplicity.

In addition, several indoor multivehicle testbeds have been constructed to study multiagent activities. For example, the HOvercraft Testbed for DEcentralized Control (HOTDEC) platform at the University of Illinois Urbana-Champaign is a ground-vehicle testbed used for multivehicle control and networking research [23]. At Vanderbilt University, researchers have built the Vanderbilt Embedded Computing Platform for Autonomous Vehicles, which supports two flight vehicles in an indoor environment [18]. Also, the UltraSwarm project at the University of Essex is designed to use indoor aerial vehicles to examine questions related to flocking and wireless cluster computing [11]. Likewise, researchers at Oklahoma State University use the COMET (COoperative MultivehiclE Testbed) ground-vehicle testbed to implement and test cooperative control techniques both indoors and outdoors [6].

Some of these testbeds have limitations that inhibit their utility for investigating health management questions related to multiday, multiagent mission operations. For example, most outdoor UAV test platforms can be flown safely only during daylight hours. In addition, many outdoor UAV test platforms must be flown during good weather conditions to avoid damage to vehicle hardware. These outdoor UAVs also typically require a large support team, which makes long-term testing logistically difficult and expensive.

In contrast, RAVEN is designed to examine a wide variety of multivehicle missions using both autonomous ground and air vehicles. The facility uses small electric helicopters and airplanes and can accommodate up to ten air vehicles at the same time. At the heart of the testbed is a global metrology motion-capture system, which yields accurate, high-bandwidth position and attitude data for all of the vehicles.

Although RAVEN's indoor global metrology system is expensive, there are significant advantages to operating indoors. Since the position markers are lightweight, the motion-capture system can sense vehicle position and attitude without adding substantial payload. Thus, RAVEN does not require significant modifications to off-the-shelf, low-cost, radio-controlled (R/C) vehicle hardware. This setup allows researchers to conduct experiments with large teams (ten units) of inexpensive flying vehicles (US$1000 each). In addition, one operator can set up and flight test multiple UAVs in under 20 min. RAVEN thus facilitates rapid prototyping of multivehicle mission-level algorithms at a fraction of the cost needed to support multiday, multivehicle external flight demonstrations.

The motion-capture metrology system can be used to obtain navigation and sensing data for many independent vehicles in both ideal and degraded form to simulate GPS obstructions. The motion-capture metrology data can then be augmented with onboard sensing to extend the scope of the flight experiments being performed.

RAVEN is designed to explore long-duration autonomous air operations using multiple UAVs with virtually no restrictions on flight operations.


FIGURE 1 (a) Multivehicle command and control architecture block diagram and (b), (c) RAVEN's integrated vehicle system block diagrams. This system architecture is designed to separate the mission- and task-related components of the architecture from RAVEN's vehicle infrastructure. Therefore, adjustments in the mission, task, and trajectory planning components of the system do not require changes to the testbed's core software infrastructure and can be made in real time.


The primary flight test range, which is located on the MIT campus, is approximately 10 m by 8 m by 4 m. The environmental conditions of the indoor facility can be carefully controlled for flight testing. Conditions can range from ideal to wind induced, for example, by blowers or fans. The controlled environment is available 24/7 since it is not dictated by weather, day/night cycles, or other external factors such as poor visibility. In addition, flight experiments using small-scale, electric-powered vehicles in an indoor range can be carefully monitored and set up in a safe way by establishing flight procedures and installing safety features such as nets.

    SYSTEM ARCHITECTURE AND COMPONENTS

One objective of RAVEN is to enable researchers to test a variety of algorithms and technologies applicable to multi-UAV mission problems in real time. Therefore, the architecture must facilitate adding and removing hardware and software components as needed. Another design objective is to provide a single operator with the capability to simultaneously command several autonomous vehicles. Consequently, the architecture must include components that provide the mission operator with sufficient situational awareness to verify or issue changes to a vehicle's plan in real time.

To meet these requirements, RAVEN's architecture has the hierarchical design shown in Figure 1(a). This architecture separates the mission and task components from the testbed's vehicle infrastructure. Thus, changes in the mission and task components of the system can be made in real time without changing the testbed's core software infrastructure. This approach builds on the system architecture used in the DARPA-sponsored Software Enabled Control capstone flight demonstration by the MIT team [24], [25].

As shown in Figure 1(b), the planning part of the system architecture has four major components: 1) a mission-planning level that sets the system goals and monitors system progress, 2) a task-assignment level that assigns tasks to each vehicle in support of the overall mission goals, 3) a trajectory-design level that directs each vehicle on how to best perform the tasks issued by the task-processing level, and 4) a control-processing level that carries out the activities set by higher levels in the system. In addition, health information about each system component is used to make informed decisions on the capabilities of each subsystem in the architecture. As a result, system components are designed to support and promote strategies that are in the system's best interests for each task.
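The four-level separation above can be sketched as a simple top-down planning pass. The fragment below is purely illustrative: the class and task names are placeholders invented for this sketch, not code from RAVEN, and the control-processing level is omitted.

```python
# Illustrative sketch of a four-level planning hierarchy: mission planning,
# task assignment, and trajectory design, with health information feeding
# the top level.  All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float
    y: float
    z: float
    heading: float = 0.0
    speed: float = 0.5

class MissionPlanner:
    """Level 1: sets system goals and monitors progress."""
    def plan(self, health):
        # Use per-vehicle health information: only healthy vehicles fly.
        return [vid for vid, ok in health.items() if ok]

class TaskAssigner:
    """Level 2: assigns a task to each vehicle in support of mission goals."""
    def assign(self, vehicle_ids):
        return {vid: f"search_region_{i}" for i, vid in enumerate(vehicle_ids)}

class TrajectoryDesigner:
    """Level 3: turns a task into a waypoint plan for one vehicle."""
    def design(self, task):
        return [Waypoint(0.0, 0.0, 1.0), Waypoint(1.0, 1.0, 1.0)]

def run_planning_cycle(health):
    """One top-down pass through the hierarchy."""
    vehicles = MissionPlanner().plan(health)
    tasks = TaskAssigner().assign(vehicles)
    return {vid: TrajectoryDesigner().design(t) for vid, t in tasks.items()}
```

Because each level only consumes the output of the level above, any one of them can be swapped out without touching the others, which is the interchangeability property the architecture is designed for.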

The architecture used in RAVEN allows researchers to rapidly interchange mission-system components for the purpose of testing various algorithms in a real-time environment. For

FIGURE 2 Scatter plot of (x, y) vehicle position. Part (a) shows a scatter plot of the measured (x, y) position (in meters) of a quadrotor sitting on the floor at position (0, 0); (b) and (c) show histograms of the percentage of time the vehicle spent at each measured (b) x- and (c) y-position. Note that the scale in these plots is meters × 10⁻⁴. With the rotors not turning, the maximum x-position error measured by the system in this test is 0.325 mm, while the maximum y-position error measured by the system in this test is 0.199 mm.


RAVEN enables rapid prototyping of single-vehicle flight control and multivehicle coordination algorithms.


example, to allow users to rapidly prototype mission, task, and other vehicle-planning algorithms with RAVEN's vehicle hardware, each vehicle must be able to accept command inputs, such as waypoints, from a high-level planning system. Although low-level commands such as "fly at a set speed" can be issued to any vehicle in the platform, a waypoint interface to the vehicles allows users to substitute mission-, task-, and path-planning components into the architecture without changing the vehicles' base capabilities. This interface allows users to add, remove, and test algorithms and other related components as needed. In fact, the waypoint interface to the vehicles allows users to develop and implement code on the platform in real time to test centralized and distributed planning algorithms using computers from other locations on campus. As a result, researchers can implement and test control, navigation, vehicle-tasking, health, and mission-management algorithms on the system [26]–[28]. Likewise, users can incorporate additional vehicles into the testbed architecture since vehicle controllers and base capabilities can be added, removed, and tested in the architecture without affecting the remaining system components.

    MAIN TESTBED HARDWARE

Figure 1(c) shows the system setup, with perception, planning, and control processing performed in linked ground computers. This system configuration can emulate distributed planning (using network models of the environment to govern vehicle-to-vehicle communication) as if the planning were being done on board. Note that the motion-capture system, which provides vehicle position and attitude data, is the only centralized component; however, each vehicle's position and attitude data are transmitted to that vehicle's ground-based computing resources. Therefore, each vehicle is unaware of the other UAVs during an operation unless a vehicle-to-vehicle communication link is established.

Currently, the control processing and command data for each vehicle are processed by a dedicated ground computer and sent over a USB connection to the trainer port interface on the vehicle's R/C transmitter. Each dedicated ground computer has two AMD 64-bit Opteron processors and 2 GB of memory and runs Gentoo Linux.

A Vicon MX motion-capture system provides position and attitude data for each vehicle in the testbed [29]. By attaching lightweight reflective markers to the vehicle's structure, the motion-capture system tracks and computes each vehicle's position and attitude at 100 Hz. The accuracy of the motion-capture system's position and attitude estimates is difficult to confirm during flight operations. To assess the static accuracy, Figure 2 shows a scatter plot of the measured (x, y) position (in meters) of a quadrotor sitting on the floor at position (0, 0). With the rotors not turning, the maximum x-position error measured by the system is 0.325 mm, while the maximum y-position error measured by the system is 0.199 mm. Tracking multiple reflectors in a unique orientation on each vehicle enables the motion-capture system to calculate the position of the center of mass and the attitude of each air/ground vehicle within range. For example, an 18-camera configuration can track five air vehicles and multiple ground vehicles in an 8-m-by-5-m-by-3-m flight volume.
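The step of recovering a rigid body's position and attitude from tracked marker locations can be illustrated with the standard SVD-based (Kabsch) rigid-body fit. This is a generic sketch of that technique, not Vicon's implementation, and assumes the marker layout in the body frame is known and non-planar.

```python
# Recover a rigid-body rotation R and translation t from marker positions,
# given the markers' known body-frame layout, via the Kabsch algorithm.
import numpy as np

def pose_from_markers(body_pts, world_pts):
    """body_pts: Nx3 marker positions in the body frame (origin at the c.m.).
    world_pts: Nx3 measured marker positions in the world frame.
    Returns (R, t) such that world_pts ≈ body_pts @ R.T + t."""
    cb = body_pts.mean(axis=0)               # marker centroids in each frame
    cw = world_pts.mean(axis=0)
    H = (body_pts - cb).T @ (world_pts - cw)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cw - R @ cb                           # world position of the body origin
    return R, t
```

With markers glued at known body-frame offsets, `t` is the center-of-mass position and `R` the attitude, which is exactly the per-vehicle output the testbed distributes at 100 Hz.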

Currently, RAVEN comprises various rotary-wing, fixed-wing, and ground-based R/C vehicle types, as shown in figures 3, 4, and 5. However, most testbed flight experiments are performed using the Draganflyer V Ti Pro quadrotor [30]. This quadrotor is small (approximately 0.7 m from blade tip to blade tip) and lightweight (under 500 g), with a payload capacity of about 100 g and the ability to fly for 13–17 min on one

FIGURE 3 (a) Multivehicle search and track experiment and (b) operator interface visualization. The sensing system records the ground vehicle locations in real time. When the vehicles are being tracked by the UAVs, the vehicles' locations are displayed to the operator. In (b), the cylinders below the flying vehicles show the vehicles' (x, y) locations in the flight space.


battery charge (using a 2000-mAh lithium-ion battery from Thunder Power [31]) while carrying a small camera. The four-propeller design simplifies the dynamics and control, and the vehicle's airframe is robust and easy to repair in the event of a crash. The rotor blades are designed to fracture or bend when they hit a solid object. The Draganflyer V Ti Pro quadrotor is thus durable and safe for indoor flight.

A separate landing and ground-maintenance system is used to support the quadrotor vehicle hardware in an around-the-clock environment. More specifically, the landing hardware and its associated real-time processing aid the vehicle's guidance and control logic during takeoff and landing. In addition, a maintenance module evaluates whether the vehicles are due for maintenance and monitors the recharging of the batteries prior to flight [32], [33].

Likewise, modified R/C trucks made by DuraTrax are used as ground vehicles in the platform [34]. The modifications consist of replacing the stock onboard motor controller and R/C receiver with a custom motor driver circuit, Robostix microcontroller [35], and RF communication module with 802.15.4 wireless connectivity. These modifications improve the vehicles' precision-driving capabilities while making it possible to autonomously command and control multiple ground vehicles by means of one ground computer in mission scenarios, such as airborne search and track missions, search and recovery operations, networking experiments, and air and ground cooperative mission scenarios.

Task Processing and Operator Interface Components

The control system for each vehicle in the testbed can process and implement tasks defined by a system component or user. For example, each vehicle has a vehicle manager module that is executed on the dedicated ground computer and designed to handle the task processing, trajectory generation, and control processing. This module

FIGURE 4 Airplane model in (a) autonomous hover and (b) the airplane axis setup for hover experiments. The lightweight reflective spheres glued to the vehicle's structure shown on the airplane in (a) and (b) are used by the motion-capture system to track and compute the airplane's position and attitude.

FIGURE 5 Fully autonomous flight test with (a) five UAVs and (b) a closeup of five UAVs in flight. In this flight, the autonomous tasking system commands all five vehicles to take off and hover 0.5 m above the ground for two minutes. In (b) the five vehicles are shown as they hover during the test. In (a) the five transmitters for the vehicles are shown. The transmitter for each vehicle is connected directly to a dedicated ground computer, which monitors and commands the vehicle during flight.


is designed to allow an external system or user to communicate with the vehicle using task-level commands, such as "fly to waypoint A," "hover/loiter over area B," "search region C," "classify/assess object D," "track/follow object E," and "take off/land at location F." These commands can be sent to the vehicle at any time during vehicle operations. Each agent's vehicle manager processes these tasks as they arrive and responds to the sender acknowledging the task request.
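The arrive-then-acknowledge pattern of the vehicle manager can be sketched as follows. The class, task names, and acknowledgment fields are hypothetical stand-ins chosen for this illustration, not RAVEN's actual message format.

```python
# Minimal sketch of a vehicle manager accepting asynchronous task-level
# commands and acknowledging each request to the sender.
from collections import deque

class VehicleManager:
    # Task vocabulary mirroring the command set described in the text.
    ACCEPTED_TASKS = {"fly_to", "hover", "search", "classify",
                      "track", "takeoff", "land"}

    def __init__(self, vehicle_id):
        self.vehicle_id = vehicle_id
        self.queue = deque()          # tasks pending trajectory generation

    def submit(self, task, **params):
        """Process one incoming task-level command and return an ack."""
        if task not in self.ACCEPTED_TASKS:
            return {"vehicle": self.vehicle_id, "task": task,
                    "ack": "rejected"}
        self.queue.append((task, params))
        return {"vehicle": self.vehicle_id, "task": task, "ack": "accepted"}
```

Because the acknowledgment is returned immediately while the task itself is queued, an external planner (or the automated task manager) can issue commands at any time without blocking on their execution.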

RAVEN is also designed with an automated task manager that manages the testbed's air and ground vehicles using task-level commands. As a result, multivehicle mission scenarios, such as search, persistent surveillance, and area denial, can be organized and implemented by the task manager without human intervention. Figure 5 shows a mission in which the automated task manager controls a group of five UAVs.

Although coordinated multivehicle flight tasks can also be managed autonomously by RAVEN, the system has an operator interface with vehicle tasking capability. The task manager system is designed to allow an operator to issue a command to any vehicle at any time. Currently, the operator interface includes a three-dimensional display of the objects in the testing area (as shown in Figure 3) and a command and control user interface, which displays vehicle health and state data, task information, and other mission-relevant data.

Each vehicle's trajectory is specified as a sequence of waypoints consisting of a location $x_i = (x, y, z)$, vehicle heading $\psi_i$, and speed $v_i$. Given these waypoints, several options are available for selecting the vehicle's reference inputs. Perhaps the simplest path is to follow a linear interpolation of the points defined by

$$x_{\text{ref}}(t) = \frac{x_{i+1} - x_i}{\lvert x_{i+1} - x_i \rvert}\, v_i\, t, \qquad (1)$$

where the choice of $v_i$ can be used to move between waypoints at varying speeds. This approach is used to automate the takeoff and landing procedure for the quadrotor vehicles. As a result, the quadrotor vehicles are fully autonomous from takeoff to landing during all flight operations.
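A minimal sketch of the reference generator in (1), assuming the reference is measured from the departing waypoint $x_i$ and clamped at the segment end once the waypoint is reached (both are reasonable but assumed details, not stated in the text):

```python
# Linear waypoint interpolation: the reference moves from waypoint x_i
# toward x_{i+1} along the segment direction at speed v_i.
import numpy as np

def reference_position(x_i, x_next, v_i, t):
    """Reference position at time t after leaving waypoint x_i, per eq. (1),
    clamped at x_{i+1}.  Assumes the two waypoints are distinct."""
    x_i, x_next = np.asarray(x_i, float), np.asarray(x_next, float)
    seg = x_next - x_i
    dist = np.linalg.norm(seg)
    s = min(v_i * t, dist)        # arc length traveled, capped at segment end
    return x_i + (seg / dist) * s
```

Feeding this reference to the position controller at the takeoff speed $v_i$ is all that is needed to automate a vertical takeoff or landing segment.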

    QUADROTOR CONTROL DESIGN MODEL

A model of the quadrotor vehicle dynamics is needed to design a hover controller. Figure 6 defines an inertial frame $(x_E, y_E, z_E)$, which is used to specify the vehicle's position, and the roll, pitch, and yaw Euler angles $\phi$, $\theta$, and $\psi$, which are used to specify the vehicle's orientation. The body frame is specified by the $x_B$-, $y_B$-, and $z_B$-axes; the body moments of inertia are $I_x$, $I_y$, and $I_z$; and $p$, $q$, and $r$ denote the body angular rates in the body frame. Additional parameters include the distance $L$ from the quadrotor's center of mass to its motors, the mass $m$ of the vehicle, the moment of inertia $J_R$ of a rotor blade, and a disturbance $\Omega_d$ generated by differences in rotor speed. The inputs to the system are $\delta_{\text{collective}}$, $\delta_{\text{roll}}$, $\delta_{\text{pitch}}$, and $\delta_{\text{yaw}}$, which are the collective, roll, pitch, and yaw input commands, respectively. Starting with the flat-Earth, body-axis six-degree-of-freedom (6DOF) equations [36], the kinematic and moment equations for the nonlinear quadrotor model can be written as

$$\dot{p} = qr\,\frac{I_y - I_z}{I_x} - \frac{J_R}{I_x}\,q\,\Omega_d + \frac{L}{I_x}\,\delta_{\text{roll}}, \qquad (2)$$

$$\dot{q} = pr\,\frac{I_z - I_x}{I_y} + \frac{J_R}{I_y}\,r\,\Omega_d + \frac{L}{I_y}\,\delta_{\text{pitch}}, \qquad (3)$$

$$\dot{r} = pq\,\frac{I_x - I_y}{I_z} + \frac{1}{I_z}\,\delta_{\text{yaw}}, \qquad (4)$$

$$\dot{\phi} = p + \tan\theta\,(q\sin\phi + r\cos\phi), \qquad (5)$$

$$\dot{\theta} = q\cos\phi - r\sin\phi, \qquad (6)$$

$$\dot{\psi} = (q\sin\phi + r\cos\phi)\sec\theta, \qquad (7)$$

where $(J_R/I_x)\,q\,\Omega_d$ and $(J_R/I_y)\,r\,\Omega_d$ represent the disturbances caused by changing rotor speeds (as discussed in [37]), although the gyroscopic effects due to these disturbances for the Draganflyer [30] model are small. Also, the cross-product moments of inertia are omitted since $I_{xz}$ is considerably smaller than $I_x$ and $I_z$ due to the shape of the quadrotor, while $I_{xy}$ and $I_{yz}$ are zero due to the vehicle symmetry. Since the force applied to the vehicle as a result of the collective command can be represented as a thrust vector along the negative $z_B$-body axis, the nonlinear navigation equations for the quadrotor in the reference frame defined in Figure 6 are given by

FIGURE 6 Quadrotor model axis definition. The inertial frame $(x_E, y_E, z_E)$ is used to specify the vehicle's position. The vehicle's body frame $(x_B, y_B, z_B)$ and the roll, pitch, and yaw Euler angles $\phi$, $\theta$, and $\psi$ are used to specify the vehicle's orientation.


$$\ddot{x}_E = (\sin\theta\cos\psi\cos\phi + \sin\phi\sin\psi)\,\frac{1}{m}\,u, \qquad (8)$$

$$\ddot{y}_E = (\sin\theta\sin\psi\cos\phi - \sin\phi\cos\psi)\,\frac{1}{m}\,u, \qquad (9)$$

$$\ddot{z}_E = -g + (\cos\theta\cos\phi)\,\frac{1}{m}\,u, \qquad (10)$$

where $u = \delta_{\text{collective}}$. After linearizing this model around the hover flight condition with $\phi \approx \theta \approx 0$, setting $\delta_{\text{collective}} = mg + \bar{\delta}_{\text{collective}}$, and dropping small terms in the $\ddot{x}_E$ and $\ddot{y}_E$ dynamics, this yields

$$\ddot{x}_E = g\,\theta, \qquad (11)$$

$$\ddot{y}_E = -g\,\phi, \qquad (12)$$

$$\ddot{z}_E = \frac{1}{m}\,\bar{\delta}_{\text{collective}}, \qquad (13)$$

$$\dot{\phi} = p, \qquad (14)$$

$$\dot{\theta} = q, \qquad (15)$$

$$\dot{\psi} = r, \qquad (16)$$

$$\dot{p} = \frac{L}{I_x}\,\delta_{\text{roll}}, \qquad (17)$$

$$\dot{q} = \frac{L}{I_y}\,\delta_{\text{pitch}}, \qquad (18)$$

$$\dot{r} = \frac{1}{I_z}\,\delta_{\text{yaw}}. \qquad (19)$$

The values $I_x$, $I_y$, $I_z$, $L$, $m$, and $J_R$ are measured or determined experimentally for the Draganflyer models used in RAVEN.

An integrator of the position and heading error is included in each loop so that the controller can remove steady-state position and heading errors in hover. Since the motion-capture system accurately and directly measures the system's state, four linear-quadratic-regulator (LQR) controllers using four combinations of vehicle states, namely $\theta$ and $x_E$, $\phi$ and $y_E$, $\psi$, and $z_E$, are used to stabilize and control the quadrotor. The regulators are designed to optimize the vehicle's capabilities in hover, while ensuring that the vehicle can respond quickly to position errors.
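To illustrate this style of LQR design, the sketch below stabilizes the linearized pitch/$x_E$ loop given by (11), (15), and (18), with the integrator state omitted for brevity. The numerical vehicle parameters and weights here are placeholders chosen for illustration, not the measured Draganflyer values, and the gain is computed by plain backward Riccati iteration on a discretized model rather than by a packaged solver.

```python
# LQR gain for the linearized pitch/x loop: states [x, xdot, theta, q],
# input delta_pitch, dynamics xddot = g*theta, thetadot = q,
# qdot = (L/Iy)*delta_pitch.  Parameter values are illustrative only.
import numpy as np

g, L_arm, Iy, dt = 9.81, 0.35, 0.03, 0.01   # dt matches the 100-Hz mocap rate

A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, g,   0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [0.0], [L_arm / Iy]])

# First-order (Euler) discretization at 100 Hz.
Ad = np.eye(4) + dt * A
Bd = dt * B

Q = np.diag([10.0, 1.0, 1.0, 0.1])   # penalize position error most heavily
R = np.array([[1.0]])

# Solve the discrete-time Riccati equation by backward iteration.
P = Q.copy()
for _ in range(5000):
    K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
    P = Q + Ad.T @ P @ (Ad - Bd @ K)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)   # 1x4 feedback gain
```

The resulting state feedback `u = -K @ x` trades position-error decay against pitch activity through the choice of `Q` and `R`, which is how the hover regulators are tuned to respond quickly to position errors without large attitude excursions.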

The vehicle's controllers are optimized for surveillance experiments. In these experiments, a camera is mounted on the quadrotor vehicles facing toward the ground. Thus, large changes in pitch, roll, and yaw affect the vehicle's ability to focus on a ground target. In addition, an anti-windup bumpless transfer scheme (similar to those described in [38]) with adjustable saturation bounds is used to prevent the integrators from winding up while the vehicle is on the ground before, during, and after takeoff and landing.
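As a concrete illustration, the fragment below implements one common anti-windup variant: conditional integration with an adjustable clamp on the integrator. This is a generic sketch in the spirit of the scheme described above, not necessarily the exact method of [38], and the gains and bounds are invented for the example.

```python
# PID loop with two anti-windup mechanisms: the integrator is frozen while
# the output is saturated, and it is clamped to an adjustable bound.
class AntiWindupPID:
    def __init__(self, kp, ki, kd, u_min, u_max, i_limit):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.u_min, self.u_max = u_min, u_max
        self.i_limit = i_limit        # adjustable saturation bound on integral
        self.integral = 0.0
        self.prev_err = None

    def reset(self):
        """Zero the integrator, e.g., while the vehicle sits on the ground."""
        self.integral = 0.0
        self.prev_err = None

    def update(self, err, dt):
        d = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        u_unsat = self.kp * err + self.ki * self.integral + self.kd * d
        u = min(max(u_unsat, self.u_min), self.u_max)
        if u == u_unsat:              # integrate only when not saturated
            self.integral += err * dt
            self.integral = min(max(self.integral, -self.i_limit), self.i_limit)
        return u
```

Calling `reset()` at touchdown and takeoff gives the bumpless-transfer behavior: the integrator cannot accumulate a large position error while the vehicle is held on the ground, so the first airborne command does not jump.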

    HOVERING AIRPLANE CONTROL DESIGN MODEL

In addition to quadrotors, a foam R/C aircraft [shown in Figure 4(a)] is used to explore the properties of an aircraft flying in a prop-hang (that is, nose-up) orientation for the purpose of landing vertically and performing other complex maneuvers, such as perching. To avoid Euler-angle singularities, the airplane's body-fixed reference frame is defined such that the positive x-axis points down from the airplane's undercarriage, the positive y-axis points out along the right wing, and the positive z-axis points out the tail of the aircraft, as shown in Figure 4(b). Using this reference frame, the vehicle's nominal attitude reference in hover corresponds to $\phi = \theta = \psi = 0$.

Assuming that the Earth is an inertial reference frame and the aircraft is a rigid body, the aircraft equations of motion [36], [39] can be written as (5)–(7) and (8)–(10) with $u = \delta_{\text{throttle}}$, and

$$\dot{p} = qr\,\frac{I_y - I_z}{I_x} + \frac{A_{\text{rudder}}\, L_{\text{rudder}}}{I_x}\,\delta_{\text{rudder}}, \qquad (20)$$

$$\dot{q} = pr\,\frac{I_z - I_x}{I_y} + \frac{A_{\text{elevator}}\, L_{\text{elevator}}}{I_y}\,\delta_{\text{elevator}}, \qquad (21)$$

$$\dot{r} = pq\,\frac{I_x - I_y}{I_z} + \frac{\dot{\Omega}_{\text{prop}}\, I_{\text{prop}}}{I_z} + C_{l,\text{prop}}\,\frac{\rho\,\Omega_{\text{prop}}^2\, d_{\text{prop}}^5}{2 I_z} + \frac{A_{\text{aileron}}\, L_{\text{aileron}}}{I_z}\,\delta_{\text{aileron}}, \qquad (22)$$

where $d_{\text{prop}}$ is the diameter of the propeller.

FIGURE 8 Single-vehicle waypoint tracking experiments. In this test, a quadrotor vehicle is commanded to fly from points (1.5, 0, 1), (1.5, 0.5, 1), (1.5, 0.5, 1), (1.5, 5.5, 1), and (1.5, 5.5, 1) and then back to (1.5, 0, 1) m. These results show that the vehicle can track a path at low speeds. In this test, the vehicle is commanded to fly and hover at waypoints in a rectangular flight pattern. These results show that the vehicle does not drift away from the path by more than 15 cm at any point during the flight.



Near hover, the influence of external forces and moments on the aircraft, other than gravity, is negligible, and the velocities and rotational velocities are small. Using small-angle approximations, the equations of motion (5)–(7), (8)–(10), and (20)–(22) can be considerably simplified. Since $I_{\text{prop}}/I_z \approx 0.04 \ll 1$, the torque due to a change in the rotational speed of the motor can also be disregarded. Next, define $\delta_{\text{throttle}} = mg + \bar{\delta}_{\text{throttle}}$ and

$$\bar{\delta}_{\text{aileron}} = C_{l,\text{prop}}\,\frac{\rho\,\Omega_{\text{prop},0}^2\, d_{\text{prop}}^5}{2 I_z} + \delta_{\text{aileron}}, \qquad (23)$$

    where prop,0 is the average rotational speed of the pro-

    peller to keep the airplane in hover [40]. Next, since the

    vehicles reflective markers are mounted on top surface of

    both wings and the fuselage, they are only visible to cam-

    eras facing the top of the vehicle when the airplane is in

    hover. Therefore, reference = (/2) to ensure that there

    are at least three or more cameras facing the reflective

    markers. Linearizing the equations of motion for the air-

    plane in hover yield

$$\ddot{x}_E = -g\theta, \qquad (24)$$

$$\ddot{y}_E = g\phi, \qquad (25)$$

$$\ddot{z}_E = \frac{1}{m}\,\bar{\delta}_{\text{throttle}}, \qquad (26)$$

$$\dot{\theta} = q, \qquad (27)$$

$$\dot{\phi} = p, \qquad (28)$$

$$\dot{\psi} = r, \qquad (29)$$

$$\dot{p} = \frac{A_{\text{rudder}} L_{\text{rudder}}}{I_x}\,\delta_{\text{rudder}}, \qquad (30)$$

$$\dot{q} = \frac{A_{\text{elevator}} L_{\text{elevator}}}{I_y}\,\delta_{\text{elevator}}, \qquad (31)$$

$$\dot{r} = \frac{A_{\text{aileron}} L_{\text{aileron}}}{I_z}\,\bar{\delta}_{\text{aileron}}. \qquad (32)$$

Here, $I_x$, $I_y$, and $I_z$ are the body moments of inertia, while $A_{\text{elevator}}$, $A_{\text{rudder}}$, and $A_{\text{aileron}}$ are the deflected areas of each control surface that is subject to propeller flow while the airplane is in hover. In addition, $L_{\text{elevator}}$, $L_{\text{rudder}}$, and $L_{\text{aileron}}$ are the lengths of the control-surface moment arms. These terms are measured or determined experimentally for this model.
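The linearized model decouples into independent channels, which is what makes simple loop-at-a-time control design possible. As a sketch, the pitch/x channel assembles (24), (27), and (31) into a four-state chain; the inertia and surface parameters below are illustrative placeholders, not the measured values.

```python
import numpy as np

# Decoupled pitch/x channel of the linearized hover dynamics:
# state [x_E, xdot_E, theta, q], input delta_elevator.
g = 9.81
Iy = 1.9e-3                 # kg m^2, placeholder
A_el, L_el = 4.0e-3, 0.25   # m^2 and m, placeholders

A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -g,   0.0],   # xddot_E = -g*theta, eq. (24)
              [0.0, 0.0,  0.0, 1.0],   # thetadot = q, eq. (27)
              [0.0, 0.0,  0.0, 0.0]])
B = np.array([[0.0], [0.0], [0.0], [A_el * L_el / Iy]])  # eq. (31)

# Controllability of the chain input -> q -> theta -> xdot -> x.
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
print(np.linalg.matrix_rank(C))  # rank 4: the channel is controllable
```

The lateral y channel has the same structure with $\phi$, $p$, and the rudder, and the altitude and yaw channels are double integrators, so each loop can be closed independently.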

Four control loops are applied to combinations of the vehicle states $x_E$, $y_E$, $\psi$, and $z_E$ to stabilize and control the airplane in hover. Each loop uses proportional-derivative (PD) and proportional-integral-derivative (PID) controllers to maintain the vehicle's position during the hover condition. In particular, the controllers use large gains on the state-derivative errors to prevent the vehicle from moving too quickly when trying to maintain its position.
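A minimal sketch of one such loop, assuming the derivative term acts on a velocity estimate from the motion-capture system; the gains, sample time, and integral limit are illustrative, not the flight values.

```python
class PID:
    """One hover loop: PID with a deliberately large derivative gain,
    mirroring the strategy of penalizing state-derivative error to keep
    the vehicle from moving quickly."""
    def __init__(self, kp, ki, kd, dt, i_limit=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i_limit = i_limit
        self.integral = 0.0

    def update(self, err, err_rate):
        # err_rate comes from motion-capture velocity estimates rather
        # than a noisy finite difference of position.
        self.integral += err * self.dt
        # Clamp the integral to limit windup near actuator saturation.
        self.integral = max(-self.i_limit, min(self.i_limit, self.integral))
        return self.kp * err + self.ki * self.integral + self.kd * err_rate

# x-position loop: note kd chosen large relative to kp (placeholder gains).
x_loop = PID(kp=0.4, ki=0.05, kd=1.2, dt=0.02)
cmd = x_loop.update(err=0.10, err_rate=-0.05)
```

The large `kd` means even a small drift velocity produces a correcting command, which keeps the vehicle from building up speed while holding position.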

Several issues arise in trying to control an airplane in hover. For example, propeller drag torque changes with motor speed, causing the vehicle to rotate about its $z_B$-axis shown in Figure 4(b). This rotation is mitigated by adding an aileron deflection proportional to the motor speed error around the equilibrium speed. The varying

FIGURE 9 Multivehicle coordinated flight experiment. In this test, two vehicles are commanded to fly at a constant speed in a circular pattern with changes in altitude. The vehicles are commanded by the system's task advisor to take off and fly a circular trajectory maintaining a constant speed of 0.25 m/s and 180 deg of angular separation. In particular, the vehicles fly in a circle (as projected in the x-y coordinate frame shown in (a) and (b)) while changing altitude (flying from an altitude of 0.5 m to 1.5 m) as they move around the flight pattern, as shown in (c). This test is repeated multiple times, and the vehicles fly similar flight paths in five consecutive tests, as shown in (b). Notice that the vehicle trajectories in the lower right corner of the plot appear noisier. This disruption is partially caused by the quadrotors flying through the rotor downwash from another vehicle. These results show that the tasking system can command and coordinate the actions of multiple vehicles. [Plots: (a) and (b) y-axis position (m) versus x-axis position (m); (c) three-dimensional view including z-axis position (m).]


speed of the propeller also affects the speed of airflow over the control surfaces, hence affecting control-surface actuation. This issue is most prominent in roll control. To overcome this effect, the ailerons are deflected additionally at low throttle settings. Unfortunately, as aileron deflection is increased, the ailerons block a portion of the propeller wash that would otherwise reach the elevator. To resolve this issue, a gain proportional to aileron deflection is added in the elevator control path to compensate for the reduced airflow over the elevator and improve the vehicle's pitch response when the ailerons are deflected.
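The three compensations just described can be collected into one feedforward stage on the surface commands. The following sketch is only an interpretation of the text; the gains, the low-throttle threshold, and the functional forms are assumed placeholders, not RAVEN's implementation.

```python
def compensate(ail_cmd, ele_cmd, omega, omega_eq, throttle):
    """Feedforward corrections for hover flight (sketch):
    1) aileron offset proportional to motor-speed error, canceling
       propeller drag torque about the body z-axis;
    2) extra aileron deflection at low throttle, making up for the
       slower propeller wash over the surfaces;
    3) elevator gain grows with aileron deflection, since deflected
       ailerons block part of the wash reaching the elevator."""
    K_TORQUE, K_LOW, K_BLOCK = 0.002, 0.3, 0.5   # placeholder gains
    LOW_THROTTLE = 0.4                            # placeholder threshold

    ail = ail_cmd + K_TORQUE * (omega - omega_eq)          # correction (1)
    if throttle < LOW_THROTTLE:                            # correction (2)
        ail *= 1.0 + K_LOW * (LOW_THROTTLE - throttle)
    ele = ele_cmd * (1.0 + K_BLOCK * abs(ail))             # correction (3)
    return ail, ele

ail, ele = compensate(ail_cmd=0.1, ele_cmd=0.1,
                      omega=100.0, omega_eq=100.0, throttle=0.5)
```

At the equilibrium motor speed and a moderate throttle, corrections (1) and (2) are inactive and only the elevator gain scaling applies.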

Finally, fast control-surface motions can cause the vehicle's airframe to deform during flight. This deformation causes the reflective markers used by the motion-capture system to shift in position, thereby giving an incorrect estimate of the airplane's momentary location. Consequently, the controller gains are sized to minimize rapid changes in control-surface position. Flight results for the airplane in hover are given in the next section.

RESULTS

An overview of RAVEN testbed operations is provided in "Testbed History."

FIGURE 10 Airplane hover experiment. In this flight test, the airplane is commanded to hover at (x, y, z) = (0, 0, 0.7) m for 5 min. Part (a) shows the x-y plot of vehicle position, while (b)-(d) show histograms with the percentage of time at location for the x, y, and z positions. These results demonstrate that the vehicle can hold its position reliably during flight. In this test, the vehicle remains inside the 50-cm box over 63% of the time during the 5-min hover test flight and stays within 0.8 m of the desired location throughout the entire test. [Plots: (a) y-axis position (m) versus x-axis position (m); (b)-(d) percentage of time at location versus x-, y-, and z-axis position (m).]

Testbed History

Various multivehicle tests and mission scenarios have been flown using RAVEN. In particular, since January 2006, more than 2500 vehicle experiments have been performed, including approximately 60 flight demonstrations during a 16-h period at the Boeing Technology Exposition at Hanscom Air Force Base near Lexington, Massachusetts, on May 3-4, 2006. Each test performed at the event involved two vehicles. One test involved two air vehicles flying a three-dimensional coordinated pattern (see Figure 9), while another involved an air vehicle following a ground vehicle. These demonstrations show that the platform can perform multiple UAV missions repeatedly, on demand, and with minimal setup.


Quadrotor

Typical results from a 10-min hover test are shown in Figure 7. In this test, a single quadrotor is commanded to hold its position at (x, y, z) = (0, 0, 0.7) m for a 10-min period. Figure 7 shows four plots, including a plot of the vehicle's x and y positions. The dashed red box in the picture is 10 cm from the center point. As shown in Figure 7, the vehicle maintains its position inside this 20-cm box during the entire flight. The remaining plots in the figure are histograms of the vehicle's x, y, and z positions during these tests. This test shows that a quadrotor can maintain both its altitude (staying between 0.65-0.75 m) and position (staying mostly within 0.05 m of center) in hover over the full charge cycle of a battery.
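The hover-performance numbers reported here reduce to simple statistics over the logged position samples. The following sketch computes them for a synthetic log standing in for real flight data; the noise level and sample count are assumptions for illustration only.

```python
import numpy as np

def hover_stats(xyz, target, box_half=0.10):
    """Summarize a hover log: fraction of samples whose x and y stay
    within +/- box_half of the target, plus the altitude range.
    xyz is an (N, 3) array of logged positions."""
    err = xyz - np.asarray(target, dtype=float)
    in_box = np.all(np.abs(err[:, :2]) <= box_half, axis=1)
    return in_box.mean(), xyz[:, 2].min(), xyz[:, 2].max()

# Synthetic 10-min log at ~10 Hz with 3-cm position scatter (assumed).
rng = np.random.default_rng(0)
log = np.array([0.0, 0.0, 0.7]) + 0.03 * rng.standard_normal((6000, 3))
frac, z_lo, z_hi = hover_stats(log, target=(0.0, 0.0, 0.7))
```

With 3-cm scatter, nearly all samples fall inside the 20-cm box, which is the kind of summary the hover histograms in Figure 7 convey.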

The results of a single-vehicle waypoint tracking experiment are shown in Figure 8. In this test, the vehicle is commanded to hold its altitude at 1 m while flying at a velocity of 0.05 m/s between each of the following waypoints: (1.5, 0, 1) m, (1.5, -0.5, 1) m, (-1.5, -0.5, 1) m, (-1.5, 5.5, 1) m, and (1.5, 5.5, 1) m, and then back to (1.5, 0, 1) m. In addition, the vehicle hovers at each waypoint for 10 s before flying to the next waypoint. This test demonstrates that the vehicle can follow a set trajectory around the indoor laboratory flight space. The plots in Figure 8 show that the vehicle follows the trajectory around a 3-m-by-6-m rectangular box as specified. The cross-track error is less than 15 cm from the specified trajectory at any given time during the flight.
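The 15-cm cross-track figure can be checked offline by measuring each logged position against the nearest leg of the rectangle. A sketch, using the waypoint rectangle from this test (x-y components only); the helper and the sample position are illustrative.

```python
import numpy as np

def cross_track_error(p, a, b):
    """Perpendicular distance from point p to the segment a-b (2-D)."""
    a, b, p = (np.asarray(v, dtype=float) for v in (a, b, p))
    # Project p onto the segment, clamping to the endpoints.
    t = np.clip(np.dot(p - a, b - a) / np.dot(b - a, b - a), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * (b - a)))

# Legs of the commanded rectangle (x-y components of the waypoints).
waypoints = [(1.5, 0.0), (1.5, -0.5), (-1.5, -0.5), (-1.5, 5.5),
             (1.5, 5.5), (1.5, 0.0)]
legs = list(zip(waypoints[:-1], waypoints[1:]))

# Error of a measured position is its distance to the nearest leg.
pos = (1.42, 2.0)
err = min(cross_track_error(pos, a, b) for a, b in legs)
```

For the sample position, the nearest leg is the x = 1.5 m side, giving an 8-cm error, inside the reported 15-cm bound.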

In addition to these single-vehicle experiments, several multivehicle experiments and test scenarios have been conducted, as shown in Figure 5. These tests include, but are not limited to, formation flight tests, coordinated vehicle tests involving three air vehicles, and multivehicle search and track scenarios. In particular, Figure 9 shows the results from a two-vehicle coordinated flight experiment. In this experiment, the vehicles are commanded by the system's task advisor to take off and fly a circular trajectory maintaining a constant speed of 0.25 m/s and 180 deg of angular separation. In particular, the vehicles fly in a circle (as projected in the x-y coordinate frame) while they change altitude (flying from an altitude of 0.5 m to 1.5 m) as they move around the flight pattern. Figure 9(a) shows the x-y projection of one of the five circle test flights completed as part of this experiment. Notice that the vehicle trajectories in the lower right corner of Figure 9(a) appear noisier. This disruption is partially caused by the quadrotors flying through the rotor downwash from another vehicle. Flight testing shows that the downwash from these quadrotor vehicles is substantial, making it difficult to fly one quadrotor below a second quadrotor without significant altitude separation. Figure 9(c) shows a three-dimensional view of the trajectory, making it clear that the vehicles are also changing altitude during the flight. Figure 9(b) shows the results of five consecutive two-vehicle circle test flights performed over a 20-min time span. These results demonstrate that experiments run on RAVEN are repeatable and that the platform can be used to perform multiple test flights over a short period of time.
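One way to realize the commanded pattern is to give both vehicles the same parameterized reference, offset in phase. This is a sketch of such a reference generator, not the task advisor's actual implementation: the radius and the once-per-lap altitude profile are assumptions, while the speed, altitude range, and 180-deg separation follow the experiment description.

```python
import numpy as np

def circle_reference(t, phase, radius=1.0, speed=0.25, z_lo=0.5, z_hi=1.5):
    """Reference position at time t for one vehicle on the shared
    circular pattern: constant tangential speed, altitude ramping
    between z_lo and z_hi once per lap."""
    omega = speed / radius            # angular rate for constant speed
    ang = omega * t + phase
    x, y = radius * np.cos(ang), radius * np.sin(ang)
    # Altitude varies smoothly between the two levels around the loop.
    z = z_lo + (z_hi - z_lo) * 0.5 * (1.0 - np.cos(ang))
    return x, y, z

# Two vehicles, 180 deg apart, as commanded by the task advisor.
ref_a = circle_reference(t=10.0, phase=0.0)
ref_b = circle_reference(t=10.0, phase=np.pi)
```

At every instant the two references sit on opposite sides of the circle, and their altitudes are complementary, so the vehicles never occupy the same point of the pattern.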


FIGURE 11 Hovering airplane waypoint tracking experiments. In these flight tests, the vehicle flies between the points (-1.5, 0, 1), (1.5, 0, 1), and (1.5, 5.5, 1) m. These results show that the hovering airplane can fly between waypoints. In both (a) and (b), the vehicle stays within one meter of the desired trajectory line during both hover test flights. [Plots: (a) and (b) y-axis position (m) versus x-axis position (m).]


Hovering Airplane

Just as with the quadrotor, hover tests are performed with the foam airplane shown in Figure 4. In Figure 10 the vehicle is commanded to hold its position at $(x_E, y_E, z_E) = (0, 0, 0.7)$ m for 5 min. Figure 10 shows four plots, including a plot of the vehicle's x-y location while it maintains its position and attitude. The dashed red box in Figure 10 is 0.5 m from the center point. As shown in Figure 10, the vehicle maintains its position inside this 1-m box for most of the 5-min test period. The remaining plots give histograms of the vehicle's x, y, and z positions. These plots confirm that the vehicle is within a 50-cm box around the target point more than 63% of the time and stays within 0.8 m of the desired location throughout the entire test. These plots also show that the vehicle maintains its altitude by staying between 0.65-0.75 m during the entire hover test. A video of the aircraft in the hover flight condition can be found online at [41].

Figure 11 shows two waypoint tracking tests for the airplane starting from hover and then moving at a horizontal rate of 0.3 m/s between a set of waypoints while remaining nose-up. In (a) the vehicle starts from (1.5, 5.5, 1) m, while in (b) the vehicle starts from (1.5, 0, 1) m. In both tests, the airplane flies along the desired flight path as it hovers between each waypoint, despite the fact that the vehicle has less control authority in its yaw axis, making it difficult to maintain the vehicle's heading during flight. The reduced control authority of the vehicle's yaw axis in hover is due to the fact that the propeller wash covers less than 10% of the ailerons in the hover flight condition, thus reducing the vehicle's ability to counteract disturbances in propeller torque while maintaining the vehicle's heading. As a result, external disturbances cause the vehicle to deviate from a straight flight path between waypoints. However, the vehicle stays within 0.5 m of the intended flight path throughout most of the tests. "Rapid Prototyping" discusses the short timelines for completing these aircraft flight results, which validates RAVEN's rapid prototyping capabilities.

CONCLUSIONS

This article describes an indoor testbed developed at MIT for studying long-duration multivehicle missions in a controlled environment. While other testbeds have been developed to investigate the command and control of multiple UAVs, RAVEN is designed to explore long-duration autonomous air operations using multiple UAVs with virtually no restrictions on flight operations. Using the motion-capture system capability, RAVEN enables the rapid prototyping of coordination and control algorithms for different types of UAVs, such as fixed- and rotary-wing vehicles. Furthermore, RAVEN enables one operator to manage up to ten UAVs simultaneously for multivehicle missions, which meets the goal of reducing the cost and logistic support needed to operate these systems.

Finally, several mission-management, task-allocation, and path-planning algorithms have been successfully implemented and tested on RAVEN [32], [33]. For example, the UAVs use software monitors that determine when they must return to base for recharging. These monitors, in conjunction with other system health-management information, enable the strategic- and tactical-level mission planning modules to make informed decisions about the best way to allocate resources given the impact of likely failures. Future work designed to develop and integrate alternative health-management and multi-UAV mission technologies into the platform for real-time testing is ongoing.
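As a sketch of the kind of software monitor described above, the following flags a vehicle for recharge when its battery voltage nears a floor or its expected endurance is used up. The thresholds and the voltage/endurance criteria are illustrative assumptions, not RAVEN's actual monitor logic.

```python
def must_return_to_base(battery_v, flight_time_s,
                        v_min=10.5, t_max=600.0, v_margin=0.3):
    """Return True when the vehicle should head back for recharging:
    either the pack voltage is within v_margin of its floor v_min, or
    the flight has reached the assumed endurance limit t_max seconds.
    All thresholds are placeholder values."""
    low_voltage = battery_v <= v_min + v_margin
    out_of_time = flight_time_s >= t_max
    return low_voltage or out_of_time

status = must_return_to_base(battery_v=10.7, flight_time_s=120.0)
```

Feeding such flags to the mission planner lets it swap in a charged vehicle before the low one becomes unavailable, which is the behavior the health-management experiments in [32], [33] exercise.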

    ACKNOWLEDGMENTS

    The authors would like to thank James McGrew and Spencer

    Ahrens for their assistance in the project. This research has

    been supported by the Boeing Company, Phantom Works,

    Seattle, and AFOSR grant FA9550-04-1-0458. Brett Bethke

    would like to thank the Hertz Foundation and the American

    Society for Engineering Education for their support.

REFERENCES
[1] P.R. Chandler, M. Pachter, D. Swaroop, J.M. Fowler, J.K. Howlett, S. Rasmussen, C. Schumacher, and K. Nygard, "Complexity in UAV cooperative control," in Proc. 2002 American Control Conf., Anchorage, AK, 2002, pp. 1831-1836.
[2] P. Gaudiano, B. Shargel, E. Bonabeau, and B. Clough, "Control of UAV SWARMS: What the bugs can teach us," in Proc. 2nd AIAA Unmanned Unlimited Systems, Technologies, and Operations Aerospace Conf., San Diego, CA, 2003, AIAA-2003-6624.
[3] R. Olfati-Saber, W.B. Dunbar, and R.M. Murray, "Cooperative control of multi-vehicle systems using cost graphs and optimization," in Proc. 2003 American Control Conf., Denver, CO, 2003, pp. 2217-2222.
[4] H. Parunak, S. Brueckner, and J. Odell, "Swarming coordination of multiple UAVs for collaborative sensing," in Proc. 2nd AIAA Unmanned Unlimited Systems, Technologies, and Operations Aerospace Conf., San Diego, CA, 2003, AIAA-2003-6525.
[5] O. Amidi, T. Kanade, and K. Fujita, "A visual odometer for autonomous helicopter flight," Robot. Autonom. Syst., vol. 28, pp. 185-193, 1999.
[6] D. Cruz, J. McClintock, B. Perteet, O. Orqueda, Y. Cao, and R. Fierro, "Decentralized cooperative control: A multivehicle platform for research in networked embedded systems," IEEE Contr. Syst. Mag., vol. 27, no. 3, pp. 58-78, 2007.
[7] J. Evans, G. Inalhan, J.S. Jang, R. Teo, and C. Tomlin, "DragonFly: A versatile UAV platform for the advancement of aircraft navigation and control," in Proc. 20th Digital Avionics Systems Conf., Daytona Beach, FL, 2001, pp. 1.C.3-1-1.C.3-12.
[8] R. Franz, M. Milam, and J. Hauser, "Applied receding horizon control of the Caltech ducted fan," in Proc. 2002 American Control Conf., Anchorage, AK, 2002, pp. 3735-3740.

Rapid Prototyping

The aircraft in Figure 4 was incorporated into RAVEN within three weeks of acquiring the vehicle, including one week for construction. In addition, three days after making its first human-controlled flight, the airplane made its first autonomous hover flight. This activity was performed from the end of September 2006 to the middle of October 2006, validating the platform's rapid prototyping capability for new vehicle hardware. A video of the aircraft in hover flight can be found online at [41].


[9] V. Gavrilets, I. Martinos, B. Mettler, and E. Feron, "Flight test and simulation results for an autonomous acrobatic helicopter," in Proc. 21st Digital Avionics Systems Conf., Irvine, CA, 2002, pp. 8.C.3-1-8.C.3-6.
[10] G. Hoffmann, D.G. Rajnarayan, S.L. Waslander, D. Dostal, J.S. Jang, and C. Tomlin, "The Stanford testbed of autonomous rotorcraft for multi agent control (STARMAC)," in Proc. 23rd Digital Avionics Systems Conf., Salt Lake City, UT, 2004, pp. 12.E.4-1-12.E.4-10.
[11] O. Holland, J. Woods, R. De Nardi, and A. Clark, "Beyond swarm intelligence: The UltraSwarm," in Proc. 2005 IEEE Swarm Intelligence Symp., Pasadena, CA, June 2005, pp. 217-224.
[12] J.P. How, E. King, and Y. Kuwata, "Flight demonstrations of cooperative control for UAV teams," in Proc. AIAA 3rd Unmanned Unlimited Technical Conf., Workshop and Exhibit, Chicago, IL, 2004, AIAA-2004-6490.
[13] E.N. Johnson and D.P. Schrage, "System integration and operation of a research unmanned aerial vehicle," AIAA J. Aerosp. Computing, Inform., Commun., vol. 1, pp. 5-18, 2004.
[14] Z. Jin, S. Waydo, E.B. Wildanger, M. Lammers, H. Scholze, P. Foley, D. Held, and R.M. Murray, "MVWT-II: The second generation Caltech multi-vehicle wireless testbed," in Proc. 2004 American Control Conf., Boston, MA, 2004, pp. 5321-5326.
[15] E. Kadioglu and N. Papanikolopoulos, "A method for transporting a team of miniature robots," in Proc. 2003 IEEE/RSJ Int. Conf. Intelligent Robots Systems, Las Vegas, NV, 2003, pp. 2297-2302.
[16] T. Kalmar-Nagy, R. D'Andrea, and P. Ganguly, "Near-optimal dynamic trajectory generation and control of an omnidirectional vehicle," Robot. Autonom. Syst., vol. 46, pp. 47-64, 2004.
[17] E. King, M. Alighanbari, Y. Kuwata, and J.P. How, "Coordination and control experiments on a multi-vehicle testbed," in Proc. 2004 American Control Conf., Boston, MA, 2004, pp. 5315-5320.
[18] T.J. Koo, "Vanderbilt Embedded Computing Platform for Autonomous Vehicles (VECPAV)," July 2006 [Online]. Available: http://www.vuse.vanderbilt.edu/~kootj/Projects/VECPAV/
[19] D.R. Nelson, D.B. Barber, T.W. McLain, and R.W. Beard, "Vector field path following for small unmanned air vehicles," in Proc. 2006 American Control Conf., Minneapolis, MN, 2006, pp. 5788-5794.
[20] D. Shim, H. Chung, H.J. Kim, and S. Sastry, "Autonomous exploration in unknown urban environments for unmanned aerial vehicles," in Proc. AIAA Guidance, Navigation, Control Conf. Exhibit, San Francisco, CA, 2005, AIAA-2005-6478.
[21] R. Teo, J.S. Jang, and C.J. Tomlin, "Automated multiple UAV flight: the Stanford DragonFly UAV program," in Proc. 43rd IEEE Conf. Decision Control, Paradise Island, Bahamas, 2004, pp. 4268-4273.
[22] R. Vidal, O. Shakernia, H.J. Kim, H. Shim, and S. Sastry, "Multi-agent probabilistic pursuit-evasion games with unmanned ground and aerial vehicles," IEEE Trans. Robot. Autom., vol. 18, no. 5, pp. 662-669, 2002.
[23] V. Vladimerou, A. Stubbs, J. Rubel, A. Fulford, J. Strick, and G. Dullerud, "A hovercraft testbed for decentralized and cooperative control," in Proc. 2004 American Control Conf., Boston, MA, 2004, pp. 5332-5337.
[24] M. Valenti, T. Schouwenaars, Y. Kuwata, E. Feron, J. How, and J. Paunicka, "Implementation of a manned vehicle-UAV mission system," in Proc. AIAA Guidance, Navigation, Control Conf. Exhibit, Providence, RI, 2004, AIAA-2004-5142.
[25] T. Schouwenaars, M. Valenti, E. Feron, J. How, and E. Roche, "Linear programming and language processing for human-unmanned aerial-vehicle team missions," J. Guid., Contr., Dyn., vol. 29, no. 2, pp. 303-313, Mar. 2006.
[26] K. Culligan, M. Valenti, Y. Kuwata, and J.P. How, "Three-dimensional flight experiments using on-line mixed-integer linear programming trajectory optimization," in Proc. 2007 American Control Conf., New York, NY, 2007, pp. 5322-5327.
[27] G. Tournier, M. Valenti, J.P. How, and E. Feron, "Estimation and control of a quadrotor vehicle using monocular vision and moire patterns," in Proc. AIAA Guidance, Navigation, Control Conf. Exhibit, Keystone, CO, Aug. 2006, AIAA-2006-6711.
[28] M. Valenti, B. Bethke, G. Fiore, J. How, and E. Feron, "Indoor multi-vehicle flight testbed for fault detection, isolation, and recovery," in Proc. AIAA Guidance, Navigation, Control Conf. Exhibit, Keystone, CO, Aug. 2006, AIAA-2006-6200.
[29] Vicon, "Vicon MX Systems," June 2006 [Online]. Available: http://www.vicon.com/products/viconmx.html
[30] Draganfly Innovations Inc., "Draganfly V Ti Pro Web site," June 2006 [Online]. Available: http://www.rctoys.com/draganflyer5tipro.php
[31] Thunder Power Batteries, "Thunder Power Batteries Web site," Sept. 2006 [Online]. Available: http://www.thunderpower-batteries.com/
[32] M. Valenti, B. Bethke, J. How, D.P. de Farias, and J. Vian, "Embedding health management into mission tasking for UAV teams," in Proc. 2007 American Control Conf., New York, NY, June 2007, pp. 5777-5783.
[33] M. Valenti, D. Dale, J. How, and J. Vian, "Mission health management for 24/7 persistent surveillance operations," in Proc. AIAA Guidance, Navigation, Control Conf. Exhibit, Myrtle Beach, SC, Aug. 2007, AIAA-2007-6508.
[34] Hobbico, Inc., "DuraTrax Mini Quake Web site," Dec. 2006 [Online]. Available: http://www.duratrax.com/cars/dtxd11.html
[35] Gumstix, Inc., "Gumstix.com Web site," Dec. 2006 [Online]. Available: http://www.gumstix.com
[36] B.L. Stevens and F.L. Lewis, Aircraft Control and Simulation, 2nd ed. Hoboken, NJ: Wiley, 2003.
[37] S. Bouabdallah, P. Murrieri, and R. Siegwart, "Design and control of an indoor micro quadrotor," in Proc. 2004 IEEE Int. Conf. Robotics Automation, New Orleans, LA, Apr. 2004, pp. 4393-4398.
[38] C. Edwards and I. Postlethwaite, "Anti-windup and bumpless transfer schemes," in Proc. UKACC Int. Conf. Control 1996 (Conf. Publ. No. 427), Exeter, England, Sept. 1996, pp. 394-399.
[39] J. How, "Lecture Notes: Aircraft Stability and Control (16.333): Lectures 3 and 4," Sept. 2004 [Online]. Available: http://ocw.mit.edu/OcwWeb/Aeronautics-and-Astronautics/16-333Fall-2004/LectureNotes/index.htm
[40] N. Knoebel, S. Osborne, D. Snyder, T. McLain, R. Beard, and A. Eldredge, "Preliminary modeling, control, and trajectory design for miniature autonomous tailsitters," in Proc. AIAA Guidance, Navigation, Control Conf. Exhibit, Keystone, CO, 2006, AIAA-2006-6713.
[41] Aerospace Controls Laboratory, MIT, "UAV SWARM Health Management Project Web site," June 2006 [Online]. Available: http://vertol.mit.edu

AUTHOR INFORMATION

Jonathan P. How ([email protected]) is a professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology (MIT), where he leads the Aerospace Controls Laboratory. He graduated from the University of Toronto with a B.Sc. in aerospace engineering and received the M.Sc. and Ph.D. degrees in aeronautics and astronautics from MIT in 1989 and 1993, respectively. He is a Senior Member of the IEEE. He can be contacted at Massachusetts Institute of Technology, Aerospace Controls Laboratory, 77 Massachusetts Ave., Rm 33-326, Cambridge, MA 02139 USA.

Brett Bethke is a research assistant in the Aerospace Controls Laboratory at MIT. He has S.B. degrees in physics and aerospace engineering and an S.M. in aeronautics and astronautics, and he is pursuing his Ph.D. degree in the Aeronautics and Astronautics Department at MIT. He currently has a fellowship from the Hertz Foundation and the American Society for Engineering Education.

Adrian Frank is a visiting student in the Aerospace Controls Laboratory at MIT. He is currently pursuing his M.Sc. degree in aeronautical engineering at the Royal Institute of Technology in Stockholm, Sweden.

Daniel Dale received an M.Eng. degree from the Electrical Engineering and Computer Science Department at MIT in 2007. He has S.B. degrees in physics and electrical engineering from MIT.

John Vian is a technical fellow with The Boeing Company, where he is responsible for leading the Phantom Works self-aware adaptive systems strategic technology area. He holds a B.Sc. in mechanical engineering from Purdue University and an M.Sc. in aeronautical engineering and a Ph.D. in electrical engineering, both from Wichita State University.
