

Intelligent Collaborative Control For UAVs

Dr. Lora G. Weiss * Applied Research Laboratory/The Pennsylvania State University

State College, PA 16801

Abstract

This paper describes an autonomous Intelligent Controller (IC) architecture for collaborative control of unmanned autonomous vehicles. These vehicles may operate independently or cooperate to carry out complex missions. The IC technology has been demonstrated in prototype designs of actual, autonomous, unmanned, intelligently controlled vehicles and integrates human interactions at multiple levels. Recently, this control architecture has been applied to intelligent collaboration among multiple unmanned air vehicles (UAVs) operating in adverse environments. These past efforts expanded a preliminary collaborative control capability and now allow a group of autonomous agents or UAVs to dynamically and automatically reconfigure in response to varying mission requirements and to continue a mission even if the size of the group is reduced. Enhancements to the current controller include support for operations in three dimensions and exploitation of higher-fidelity information provided by sensors.

I. Introduction

There are many variants of behavior-based architectures. One of the earliest was the subsumption architecture of Rodney Brooks [2]. The basic concept is that the control system is constructed around a collection of largely independent operational capabilities referred to as behaviors or behavior-generating elements [1,3,7]. Prototype designs of such systems have shown that the overall capability of a system can exceed that of more conventional architectures, sometimes to a surprising degree.

The Intelligent Controller (IC) described in this paper was initially based on the subsumption approach, but actual system needs presented additional requirements. This resulted in a modified approach to intelligent control architectures [6,7], one of which is summarized in this paper. The IC architecture was then extended for collaborative behaviors, which resulted in a unique approach to coordinated control of UAVs [10,11,13,14].

The Intelligent Control architecture presented in this paper was designed to address the issues of intelligent autonomy for single and multiple coordinated unmanned vehicles. An important component of the IC design is that it is robust with respect to reacting to unforeseen situations and to achieving multiple mission requirements when one or more of the autonomous units is partially or completely disabled, including the supervisor unit.

Key features of this IC architecture include:
• A robust ability to react to unforeseen situations (no scripted cases)
• Autonomous, on-the-fly dynamic planning and re-planning
• Situational awareness
• Autonomous threat response
• A common architecture for multi-vehicle collaboration
• Expandable to incorporate new capabilities as they are identified
• Human interaction at any desired level, but not required

To demonstrate this capability, a modular intelligent controller was developed for each vehicle in a collection of UAVs operating within a mission context. This initial controller demonstrated how a collection of UAVs can autonomously collaborate to achieve individual and multiple mission objectives. Several sample scenarios were evaluated. These scenarios demonstrated in-situ re-planning of search areas and coordinated formation flying, and were evaluated against specific metrics to assess the improvements attainable [9,10,11]. Metrics included the new notion of a Collaborative Gain Index [4,5].

* Senior Research Associate, Autonomous Control and Intelligent Systems Division, State College, PA, AIAA Member

Infotech@Aerospace, 26-29 September 2005, Arlington, Virginia. AIAA 2005-6900.

Copyright © 2005 by L. Weiss, Penn State University. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.

The capabilities addressed by the Intelligent Controller architecture include: Fully Autonomous Vehicle Control, Dynamic Planning and Re-planning, Autonomous Threat Response, and Multi-Vehicle/Multi-Agent Collaboration. The term “autonomous” refers to fully autonomous vehicle operations with human interactions as desired (versus remote piloting, where human operation is required). An important component of the IC design is that it is robust with respect to reacting to unforeseen situations and to achieving mission requirements when one or more of the autonomous units are partially or completely disabled, including the supervisor unit.

Multiple IC agents can be integrated for coordinated control or can be deployed as a swarm for collaborative control. Each IC retains the same architecture yet has a different local mission to execute. This replication of the architecture allows management of complexity and considerably simplifies the problem of designing multiple, interacting, intelligent agents for complex systems. Figure 1 depicts some of the issues associated with coordinated control of multiple UAVs.

Figure 1. Challenges Associated with Coordinated Control of Multiple UAVs: Collaborative Control (search areas/paths, planning/re-planning, attack, negotiation); Communications (what information to communicate, how often to communicate, covertness limitations, human-machine interactions); and System considerations (performance, efficiency, redundancy, robustness).

II. Intelligent Controller Architecture – Overview

The Intelligent Controller (IC) architecture developed at the Pennsylvania State University Applied Research Lab (ARL/PSU) [7] is composed of two main modules: Perception and Response. The Perception module is where sensor data is analyzed, information is integrated, and an interpretation of events is generated. The Response module is where the situation is assessed, plans are generated, and re-planning or plan execution occurs. Figure 2 illustrates the IC modules for a single controller. The responses from the Response module are in the form of commands and communications to vehicle subsystems or to external systems. These systems and subsystems may be other ICs, conventional control systems (effectors), or human collaborators.

An Input Interface module converts data streams external to the IC into forms that are usable by the Perception module. This data is accumulated in a buffer and released to Perception at discrete time intervals referred to as processing cycles. A processing cycle is determined by the amount of time required for an effector or conventional control system to complete a command and return data to the IC input. Any asynchronous input to the IC, such as the arrival of a message from a UAV partner or a human, is accumulated at the input buffer until released at the completion of the current processing cycle. The Input Interface module identifies incoming messages by type and holds them between cycles. Data comes in from a TCP/IP (or other standard interface) connection and is assessed for broken or collided messages. These network errors are corrected prior to forwarding the data for message classification. An Output Interface module then sends effector commands and messages to the correct recipients.
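To make the buffering behavior concrete, the following is a minimal sketch (in Python, which is not necessarily the IC's implementation language) of an input interface that accumulates asynchronous messages and releases them only at processing-cycle boundaries; the Message and InputInterface names and fields are illustrative assumptions, not taken from the actual system.

```python
# Illustrative sketch only: buffering asynchronous inputs until the end of a
# processing cycle, as described for the Input Interface module. Class and
# message-type names are hypothetical, not from the actual IC implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Message:
    msg_type: str      # e.g., "sensor", "partner", "human_order"
    payload: dict


@dataclass
class InputInterface:
    _buffer: List[Message] = field(default_factory=list)

    def on_arrival(self, msg: Message) -> None:
        """Asynchronous inputs accumulate here between processing cycles."""
        self._buffer.append(msg)

    def release_cycle(self) -> List[Message]:
        """Called once per processing cycle; hands buffered, typed messages
        to Perception and clears the buffer for the next cycle."""
        released, self._buffer = self._buffer, []
        return released


# Usage: messages arriving mid-cycle are held, then released together.
iface = InputInterface()
iface.on_arrival(Message("partner", {"text": "search area complete"}))
iface.on_arrival(Message("sensor", {"range_m": 850.0, "angle_deg": 12.0}))
for m in iface.release_cycle():
    print(m.msg_type, m.payload)
```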



Figure 2. High Level Decomposition for a Single IC: smart sensor inputs (Sensor 1 through Sensor N) and messages (satellite information, radar contacts, human inputs such as orders and advice) feed the Perception module (sensor data fusion, information integration, inferencing and interpretation, situational awareness, e.g., recognizing an incoming missile, ground target, or obstacle), whose output drives the Response module (operational assessment, planning and re-planning, plan execution, e.g., launch a long-range missile, monitor the situation, configure the team); outgoing commands and messages go to conventional control systems, human collaborators, and other autonomous controllers.

A. Perception Module

The role of the Perception module is to create an internal representation of the external world relevant to the IC, using sensor data streams as inputs. A key capability of the Perception module is to make correct inferences and recognize the existence of properties in the external world (e.g., obstacles) from incomplete and potentially erroneous input data. The Perception module contains data fusion algorithms and Continuous Inference Networks (CINets) [8] for information integration. CINets are used to infer properties or events, such as "obstacle", by appropriately combining multiple pieces of information in an automatic recognition process.

For example, sensors may provide measurement inputs to a CINet that give estimates of an object’s range, angle, and size. Based on these measurements, the Continuous Inference Network integrates the information with weighted fuzzy AND and OR nodes, with the result being a confidence factor that the object is an obstacle. In this example, the confidence is based on whether or not the object "looks" like an obstacle. For the object to "look" like an obstacle, it must be the right size and be at the right range and angle. When all of these indicators are integrated, the result is a confidence factor (a value in the continuous interval [0,1]) of the object being an obstacle.
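A minimal sketch of such an inference follows, assuming simple linear ramp memberships and min() as the fuzzy AND; the breakpoint values are invented for illustration and are not the CINet's actual node definitions.

```python
# Illustrative CINet-style obstacle inference: combine range, angle, and size
# memberships with a fuzzy AND into a single confidence factor in [0, 1].
# Membership breakpoints and the choice of min() as the AND are assumptions.
def ramp_down(x: float, a: float, b: float) -> float:
    """1.0 below a, 0.0 above b, linear in between (a 'close-enough' style node)."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)


def ramp_up(x: float, a: float, b: float) -> float:
    """0.0 below a, 1.0 above b (a 'large-enough' style node)."""
    return 1.0 - ramp_down(x, a, b)


def obstacle_confidence(range_m: float, angle_deg: float, size_m: float) -> float:
    close_enough = ramp_down(range_m, 500.0, 2000.0)    # right range
    in_my_path = ramp_down(abs(angle_deg), 5.0, 30.0)   # right angle
    large_enough = ramp_up(size_m, 1.0, 5.0)            # right size
    return min(close_enough, in_my_path, large_enough)  # fuzzy AND


print(obstacle_confidence(range_m=800.0, angle_deg=10.0, size_m=4.0))
```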

B. Response Module

The role of the Response module is to plan and execute, in real time, a course of action to carry out a specific mission, given the situational awareness derived by the Perception module. The Response module is decomposed into three layers: a Mission Manager, Behaviors, and Actions. The Mission Manager retains the big picture and specifies a mission plan, which is a list of relevant Behaviors to be executed. A Behavior is a module within Response that is responsible for carrying out a time-extended core operation such as search, avoid, or loiter. Each Behavior has its own plan to execute, which is a list of Actions to be conducted. Actions represent the bottom-level operations of the Response module and are responsible for generating the output communications of the intelligent controller. Control is cycled and interrupted appropriately using an Execution Engine within the Response module. The Execution Engine is an application-independent component of the Response module that calls the functions in the Response module in an appropriate order.
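The layering described above might be sketched as follows; the class names mirror the text (Mission Manager, Behavior, Action, Execution Engine), but the internals are assumptions rather than the actual design.

```python
# Illustrative decomposition of the Response module into Mission Manager,
# Behaviors, and Actions, driven by a simple execution-engine loop.
from typing import List


class Action:
    """Bottom-level operation; emits output commands/communications."""
    def __init__(self, command: str):
        self.command = command

    def execute(self) -> str:
        return self.command


class Behavior:
    """Time-extended core operation (e.g., search, avoid, loiter) with its
    own plan, expressed as a list of Actions."""
    def __init__(self, name: str, plan: List[Action]):
        self.name = name
        self.plan = plan

    def step(self) -> List[str]:
        return [a.execute() for a in self.plan]


class MissionManager:
    """Holds the big picture: a mission plan is a list of Behaviors to run."""
    def __init__(self, mission_plan: List[Behavior]):
        self.mission_plan = mission_plan


class ExecutionEngine:
    """Application-independent driver that calls Response functions in order."""
    def run_cycle(self, manager: MissionManager) -> List[str]:
        outputs: List[str] = []
        for behavior in manager.mission_plan:
            outputs.extend(behavior.step())
        return outputs


mm = MissionManager([Behavior("search", [Action("activate-sensor"), Action("navigate")])])
print(ExecutionEngine().run_cycle(mm))
```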

C. Multiple ICs

Multiple ICs can be integrated in a hierarchy or as a group for coordinated control. Each IC retains the same architecture yet has a different local mission to execute (such as UAVs searching different local areas). This replication of architecture allows management of complexity and considerably simplifies the problem of designing multiple, interacting, intelligent controllers for complex systems. Just as a collection of Behaviors within an IC needs a Mission Manager for coordination and arbitration, a collection of ICs within a system also requires a supervisor. This role is assumed by a Supervisory IC.

For multiple vehicles, a significant portion of a mission may be achieved by having the individual autonomous units carry out their own tasks while operating as a group. Given that the individual vehicles are intelligent and capable of inferring the behavior of other cooperating vehicles through sensing and observation, only a limited number of direct communications may be necessary to achieve significant performance enhancements.



The overall capability of the aggregate system is then an emergent property stemming from the collaboration among individual vehicles coupled with their own abilities to carry out autonomous operations over a range of variations in the missions. For a properly designed collaborative unmanned vehicle system, this emergent capability may be greater than that of a system composed of vehicles acting alone.

D. Collaborative Operations

The operating characteristics of a group of autonomous, coordinated controllers constitute an autonomous intelligent control system, and their design based on this architecture can be summarized as follows. The autonomous system is composed of one or more ICs, where one of the ICs may be a supervisor IC. Each IC’s objective is to carry out a local mission, but in coordination with the global mission defined for the collective system. The mission may be altered during execution by receipt of a new mission from a human or a Supervisor IC. Within the constraints of its current mission, each IC operates autonomously. There does not exist an "optimal control law" for the system of ICs, and the designers do not attempt to derive one.

Rather, the objective in the design of each IC is for it to be able to operate "optimally" as an individual, given its current mission and its perceived world as created by its Perception processing. This perceived world may include a Representational Class, say "Partner," where an instance of "Partner" represents the status of a partner or peer, including its operational plans and objectives to the extent known. This knowledge will generally be incomplete and changeable.
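A "Partner" representational class along these lines might look like the following sketch; the specific fields and the staleness test are hypothetical, chosen only to illustrate that knowledge of a partner is partial and time-varying.

```python
# Illustrative "Partner" representational class: one instance per known peer,
# holding the partner's status and plans to the extent known. Field names are
# hypothetical; the paper only specifies that such a class may exist.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Partner:
    vehicle_id: str
    last_contact_s: float                    # time since last message
    assigned_area: Optional[str] = None      # e.g., a named search region
    declared_plan: List[str] = field(default_factory=list)  # last reported plan
    available_to_assist: bool = False

    def is_stale(self, timeout_s: float = 300.0) -> bool:
        """Knowledge of a partner is incomplete and changeable; treat it as
        stale if no communication has arrived within the timeout."""
        return self.last_contact_s > timeout_s


peer = Partner("uav-2", last_contact_s=42.0, assigned_area="area-B",
               declared_plan=["search area-B", "rendezvous"])
print(peer.is_stale())
```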

An autonomous intelligent control system such as this would appear to constitute a close parallel to biological and social systems such as beehives, ant colonies, and football teams. The system is composed of a collection of individuals with certain specialized characteristics and with some ability to communicate with each other. Operating together, the resulting system can have emergent properties, strengths, and survivability that go beyond the sum of those of the individual units.

III. Intelligent Controller Architecture For Multiple UAVs

The prototype multi-UAV Intelligent Controller developed by ARL/PSU provides capabilities for autonomous operations by individual units as well as for collaboration between UAVs. These capabilities include Search, Transit, Attack, Avoid, Communicate, Rendezvous, Assist Search, Coordinate Attack, Delegate Mission, etc. Each of these operations is implemented as an independent Behavior or agent that operates autonomously within its scope, conducting real-time planning, analyzing the status of its execution, and adapting appropriately to the results of that analysis.

A. Perception Module for UAVs

Most of the interpretation, decision-making, and analysis performed by the IC is based on continuous logic as opposed to binary, rule-based logic [12]. As an example, it is necessary for a UAV IC to determine whether an instance of Obstacle is a “threat” so that it can decide whether evasive Action needs to be taken. It may also be required to determine whether that same instance is suitable for use as “cover”. The processing structure that determines this is referred to as a Continuous Inference Network (CINet) and is shown in Figure 3. CINets establish confidence factors (values between 0 and 1) of existence for the set of inferred properties defined for a particular class. Since physical variables are usually continuous, classifiers (the nodes in the CINet) need to be based on continuous logic to avoid loss of information, which precludes the use of binary (true-false) logic.

An example of this is the “close-enough” inferred property used in the UAV controller. At high vehicle speeds, “close enough” will constitute a threat at greater ranges than at low speeds; consequently, the range at which an Obstacle starts to become a threat is a function of vehicle speed (or more accurately, closure rate). By making the range parameter a function of closure rate, much smoother reactions to a continuously-changing object can be achieved. Figure 3 is an illustration of the CINet used for collision avoidance to identify an obstacle that constitutes a threat.


In Figure 3, the “threat” property for Obstacle may be defined by: an obstacle is a “threat” if it is “close-enough” (range) and is “large-enough” (radius) and “in-my-path” (angle). Figure 4 shows a CINet example of the “threat” property for Attackable Targets defined by the “threat” being “close-enough” (range) and “in-my-path” (angle).

Figure 3: Collision Avoidance Threat Recognition

The CINet implementation is a nonlinear continuous map from a set of physical variables to a confidence factor (CF) for “threat” that adapts to changing conditions via dynamic adaptation of the “a” and “b” transition points. This CINet was used in the UAV IC and demonstrated robust performance when coupled with a guidance law that took into account the threat confidence factor as well as the obstacle’s physical characteristics. The concept served as a starting point for more sophisticated collision avoidance threat recognition.
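A sketch of a single "close-enough" node with closure-rate-dependent "a" and "b" transition points is given below; the reaction-time and floor constants are assumptions used only to show the adaptation, not values from the fielded CINet.

```python
# Illustrative "close-enough" node with adaptive transition points: the ranges
# "a" and "b" at which an obstacle starts/finishes becoming a threat grow with
# closure rate. The scaling constants are assumptions for illustration only.
def close_enough(range_m: float, closure_rate_mps: float) -> float:
    reaction_time_s = 20.0                                     # assumed planning horizon
    a = max(200.0, closure_rate_mps * reaction_time_s)         # at or below a: fully a threat
    b = a + max(300.0, closure_rate_mps * reaction_time_s)     # at or above b: not yet a threat
    if range_m <= a:
        return 1.0
    if range_m >= b:
        return 0.0
    return (b - range_m) / (b - a)    # linear transition between a and b


# Faster closure -> the same range yields a higher threat confidence.
print(close_enough(range_m=1500.0, closure_rate_mps=30.0))
print(close_enough(range_m=1500.0, closure_rate_mps=80.0))
```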

Figure 5 shows another CINet example which is used for correlation of target detection reports from sensors to existing representational classes of tracks that are currently stored in the UAV IC. The CINet uses three inferred properties to produce a confidence factor that is used to determine whether or not the detection report should be fused with an existing track. The properties are Change in X, Change in Y, and Change in Size.
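A correlation test in this spirit might be sketched as follows, with assumed thresholds and a min() fusion; the actual CINet node definitions are not specified in the paper.

```python
# Illustrative track-correlation test in the spirit of Figure 5: changes in X,
# Y, and size between a detection report and a stored track are mapped to
# memberships and fused into a confidence that they are the same object.
# Thresholds and the min() fusion are assumptions.
def ramp_down(x: float, a: float, b: float) -> float:
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)


def correlation_confidence(dx_m: float, dy_m: float, dsize_m: float) -> float:
    small_dx = ramp_down(abs(dx_m), 50.0, 300.0)
    small_dy = ramp_down(abs(dy_m), 50.0, 300.0)
    small_dsize = ramp_down(abs(dsize_m), 1.0, 5.0)
    return min(small_dx, small_dy, small_dsize)


# Fuse the report into the existing track only above a confidence threshold.
print(correlation_confidence(dx_m=20.0, dy_m=35.0, dsize_m=0.5) > 0.5)
```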

Figure 4: Attackable Threat Recognition


Figure 5: Track Correlation

In general, continuous logic is used in ARL/PSU’s IC architecture for correlation of detection data with representational class instances, for computing gains of adaptive estimators, for recognition of confusion, and for situation assessment.

B. Response Module for UAVs

Figure 6 is an illustration of the Response module of the IC used for multiple-UAV control. Each UAV has the same Response module, and each knows whether or not it is operating as a supervisor. Figure 6 shows Behaviors and their Actions. The Mission Manager acts as the Behaviors’ supervisor, activating them as the external conditions and current orders warrant. The IC is able to receive new orders during mission execution and adjust the execution appropriately. Orders that the IC understands include “takeoff”, “land”, “transit-through-provided-waypoints”, “search-an-area”, “proceed to rendezvous point”, and “rescind-current-mission.” In general, orders can be downloaded to define the UAV mission at startup or transmitted during mission execution to modify the mission’s details on-the-fly.
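One way to picture the order handling is a simple dispatch from order strings to Behaviors, as in the sketch below; only the order names come from the text, while the mapping and fallback choice are assumptions.

```python
# Illustrative mapping from the order strings listed above to Behaviors that
# the Mission Manager would activate; the dispatch mechanism itself is an
# assumption, only the order names come from the text.
ORDER_TO_BEHAVIOR = {
    "takeoff": "TAKEOFF",
    "land": "LAND",
    "transit-through-provided-waypoints": "TRANSIT",
    "search-an-area": "SEARCH",
    "proceed to rendezvous point": "RENDEZVOUS",
    "rescind-current-mission": "MONITOR VEHICLE MISSION",  # assumed fallback behavior
}


def handle_order(order: str, parameters: dict) -> str:
    """Return the Behavior the Mission Manager should activate for an order,
    whether received at startup or mid-mission."""
    behavior = ORDER_TO_BEHAVIOR.get(order)
    if behavior is None:
        raise ValueError(f"unknown order: {order}")
    return f"activate {behavior} with {parameters}"


print(handle_order("search-an-area", {"area": "polygon-A"}))
```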

1. UAV Behaviors

As Figure 6 depicts, there are several Behaviors used within the UAV IC. Two of these Behaviors are described in detail: Search and Coordinate Attack. These two Behaviors are discussed because they exemplify single-UAV operations as well as coordinated UAV operations. The Search and Coordinate Attack Behaviors combine to provide an offensive capability for the vehicle.

The Search Behavior - Search is designed to cover a defined region in which targets are expected to exist and be candidates for attack. The Search Behavior’s role is to provide the target-acquisition sensors with opportunities to detect target-like objects at longer ranges, allowing the Perception module to represent and classify them, but not to react to their existence. The Search Behavior was designed with two modes, Area Search and Path Search. Area Search is in control when the designated region to search is a specified shape. This mode executes search planning and path generation algorithms. For example, the “Lawnmower Search” algorithm chooses waypoints based on a “mow the grass” pattern and continues until the entire search region is covered. The Path Search mode assumes control when the designated region to search has been defined by a collection of waypoints that were issued to the controller by an order.
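A minimal version of such a lawnmower waypoint generator, assuming a rectangular search region and a fixed sensor swath, is sketched below.

```python
# Illustrative "Lawnmower Search" waypoint generator for a rectangular area:
# parallel back-and-forth legs spaced by the sensor swath until the region is
# covered. The rectangle assumption and swath parameter are illustrative.
from typing import List, Tuple


def lawnmower_waypoints(x_min: float, x_max: float, y_min: float, y_max: float,
                        swath: float) -> List[Tuple[float, float]]:
    waypoints: List[Tuple[float, float]] = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        xs = (x_min, x_max) if left_to_right else (x_max, x_min)
        waypoints.append((xs[0], y))
        waypoints.append((xs[1], y))
        left_to_right = not left_to_right   # reverse direction each leg
        y += swath                          # step over by one sensor swath
    return waypoints


print(lawnmower_waypoints(0.0, 1000.0, 0.0, 400.0, swath=200.0))
```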


Figure 6: Functional Capabilities (Behavioral Decomposition) of Each UAV. The Response module’s Mission Manager supervises the Behaviors Takeoff, Land, Standby, Transit, Communicate, Delegate Mission, Assist Search, Rendezvous, Monitor Vehicle Mission, Avoid, Search, Attack, and Coordinate Attack, each decomposed into Actions such as Navigate, ActivateSensor, SendMessage, FireWeapon, Evade, Delegate, and MonitorMission.

The Coordinate Attack Behavior – The Coordinate Attack Behavior’s role is to look for objects in Perception that are classified as targets (at varying confidence factors), decide if conditions are correct for an attack on one or more of them, and if so, request control from the Mission Manager. The initial phase of the attack is to focus acquisition beams on the selected target (the idea being to increase the SNR and gather data to improve classification). Given the classification information, the UAV then determines which UAV is the best one to attack the target (i.e., should it attack or should a partner UAV attack). This is decided as a function of vehicle location, classification level, weapon load, and vehicle availability. If a partner is best suited for the attack, negotiations will be initiated to request the partner to attack. Otherwise, the first UAV will proceed with the attack on its own.
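The "who should attack" decision could be scored roughly as follows; the weights and the scoring form are assumptions intended only to illustrate combining location, classification level, weapon load, and availability.

```python
# Illustrative "who should attack" decision using the factors named above
# (range to target, classification level, weapon load, availability). The
# weights and scoring are assumptions, not the IC's actual decision function.
from dataclasses import dataclass
from typing import List


@dataclass
class AttackCandidate:
    vehicle_id: str
    range_to_target_km: float
    classification_confidence: float   # [0, 1] from Perception
    weapons_remaining: int
    available: bool


def best_attacker(candidates: List[AttackCandidate]) -> str:
    def score(c: AttackCandidate) -> float:
        if not c.available or c.weapons_remaining == 0:
            return float("-inf")
        # Closer, better-classified, better-armed vehicles score higher.
        return (2.0 * c.classification_confidence
                + 0.5 * c.weapons_remaining
                - 0.1 * c.range_to_target_km)
    return max(candidates, key=score).vehicle_id


me = AttackCandidate("uav-1", 40.0, 0.7, 2, True)
partner = AttackCandidate("uav-2", 12.0, 0.7, 2, True)
print(best_attacker([me, partner]))   # the closer partner wins -> negotiate
```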

2. Capabilities for Coordinated Operations

Capabilities that were integrated in the UAV Response Module to support coordinated operations included:

• Search Area Collaboration. When a UAV completes searching a designated area, it sends a message to its partner UAVs informing them that it is available to assist them. A UAV can assist in several ways. One form of assistance is to help another UAV complete its mission (e.g., re-partition a large search area for one UAV into two smaller search areas for two UAVs). Another form of assistance arises if a partner UAV does not respond (e.g., it may have been removed from the group before completing its mission); in that case, the assisting UAV, knowing the first UAV’s mission from the last communications that occurred between partners, can help complete that mission. As a further example, if a UAV is assisting an attack, it will complete the search of the area containing the target if it determines that the requesting UAV assigned to that area needs additional assistance sooner rather than later. After completing the search of that area, the assisting UAV will return to its original search area and complete its original search mission.

• Negotiations. Negotiation can occur between UAVs to support an attack in any situation. One Behavior, called Coordinate Attack, has three modes, (i) InitiateCoordinateAttack, (ii) RespondCoordinateAttack, and (iii) ConfirmCoordinateAttack, and was designed for coordinating attacks amongst many UAVs. A single UAV in the InitiateCoordinateAttack mode can initiate negotiations to request a partner to either attack a target or confirm the classification level of a suspected target. Partner vehicles use RespondCoordinateAttack and ConfirmCoordinateAttack to indicate they are available to assist in the coordinated attack. This Behavior forms the foundation for a generic communication between two or more vehicles when attacking a target, where one vehicle requests help and determines which partner is best equipped to assist in the attack (e.g., based on fuel, weapon load-out, range from target, etc.). This Behavior is then used with the actual Attack Behavior.

• Partitioning / Assigning Search Areas. A capability has been incorporated that allows a UAV designated as the supervisor UAV to receive an entire mission and then delegate individual sub-missions to the group of UAVs under its control, including itself. Based on the mission, it decides whether to partition one large search area into multiple smaller areas, assign a single search area to each UAV, or plan specific flight paths for each UAV for formation flying.

• Re-Partitioning Search Areas / Supporting a Partner in Completing a Search Area. Mission re-planning has been upgraded to occur after each UAV completes a given mission, in contrast to earlier versions in which re-planning occurred only at rendezvous points. Also incorporated is the ability to assist a partner UAV in searching a designated area (e.g., re-partitioning a large area into two smaller areas so that a second UAV can assist). This includes completing the search mission of UAVs that are out of communication.

• Communications. Various types of communications are supported within the IC, with new types added as needed. Most recently, communications in the form of negotiations between UAVs have been incorporated in the Response module so that UAVs can coordinate search plans and can collaborate on target attacks. A handshaking method of negotiation was implemented so that collaboration is achieved in a timely and efficient manner; a minimal sketch of such a handshake follows this list.
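Below is a minimal sketch of the three-step handshake named above (InitiateCoordinateAttack, RespondCoordinateAttack, ConfirmCoordinateAttack); the message fields and the selection rule are assumptions.

```python
# Illustrative three-step handshake for a coordinated attack, following the
# InitiateCoordinateAttack / RespondCoordinateAttack / ConfirmCoordinateAttack
# modes named above. Message fields and the selection rule are assumptions.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class NegotiationMessage:
    mode: str          # "initiate", "respond", or "confirm"
    sender: str
    target_id: str
    suitability: float = 0.0   # how well-placed the responder is to attack


def initiate(sender: str, target_id: str) -> NegotiationMessage:
    return NegotiationMessage("initiate", sender, target_id)


def respond(responder: str, request: NegotiationMessage, suitability: float) -> NegotiationMessage:
    return NegotiationMessage("respond", responder, request.target_id, suitability)


def confirm(initiator: str, responses: List[NegotiationMessage]) -> Optional[NegotiationMessage]:
    """Pick the most suitable responder and confirm the attack assignment;
    if no partner responded, the initiator attacks on its own (returns None)."""
    if not responses:
        return None
    best = max(responses, key=lambda r: r.suitability)
    return NegotiationMessage("confirm", initiator, best.target_id)


req = initiate("uav-1", "target-7")
replies = [respond("uav-2", req, 0.8), respond("uav-3", req, 0.4)]
print(confirm("uav-1", replies))
```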

IV. Conclusion ARL/PSU’s experience in unmanned vehicles lead to the development of a unique and robust architecture for the design of intelligent UAVs. This architecture provides a reliable approach to the design of a single unmanned vehicle or of a system of autonomous intelligent vehicles that collaborate with each other and with humans to carry out complex missions. The technology has been demonstrated in prototype designs of actual, autonomous, unmanned, intelligently controlled vehicles. Recently, this control architecture has been applied to intelligent collaboration among multiple unmanned air vehicles (UAVs) operating in adverse environments.

Acknowledgments

The author would like to acknowledge several researchers at Penn State’s Applied Research Laboratory, including S. Lewis, J. Miller, S. Neal, and C. Walker.

References

1. J.S. Albus, "Outline for a Theory of Intelligence," IEEE Transactions on Systems, Man and Cybernetics, Vol. 21, No. 3, pp. 473-509, 1991.
2. R.A. Brooks, "Intelligence Without Representation," Artificial Intelligence, Vol. 47, pp. 139-159, 1991.
3. R. Kumar and J.A. Stover, "A Behavior-Based Intelligent Control Architecture with Application to Coordination of Multiple Underwater Vehicles," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 30, pp. 767-784, 2000.
4. A.S. Lewis and L.G. Weiss, "Intelligent Autonomy and Performance Measuring for Multiple, Coordinated UAVs," PerMIS '04, NIST, Gaithersburg, MD, 24-26 Aug. 2004.
5. A.S. Lewis and L.G. Weiss, "Intelligent Autonomy and Performance Metrics for Multiple, Coordinated UAVs," invited paper, Journal of Integrated Computer-Aided Engineering, Vol. 12, pp. 251-262, 2005.
6. J.A. Stover and R.E. Gibson, "Modeling Confusion for Autonomous Systems," SPIE Science of Artificial Neural Networks, Vol. 1710, pp. 547-555, 1992.
7. J.A. Stover and R.E. Gibson, "Controller for Autonomous Device," US Patent No. 5,642,467, issued June 1997.
8. J.A. Stover, D.L. Hall, and R.E. Gibson, "A Fuzzy-Logic Architecture for Autonomous Multisensor Data Fusion," IEEE Transactions on Industrial Electronics, Vol. 43, pp. 403-410, 1996.
9. L.G. Weiss and A.S. Lewis, "Intelligent Autonomy for Multiple, Coordinated UAVs," American Helicopter Society 60th Annual Forum, Baltimore, MD, 7-10 June 2004.
10. L.G. Weiss and A.S. Lewis, "Intelligent Autonomy for Multiple Coordinated UAVs," Proc. of the American Helicopter Society 60th Annual Forum, Baltimore, MD, 7-10 June 2004.
11. L.G. Weiss and N.C. Nicholas, "An Intelligent Controller Architecture for Full Autonomy and HMI," 10th International Conference on Robotics and Remote Systems for Hazardous Environments, University of Florida, 28-31 Mar. 2004.
12. L.A. Zadeh, "Fuzzy Logic, Neural Networks, and Soft Computing," Communications of the ACM, Vol. 37, No. 3, pp. 77-84, 1994.
13. J.A. Miller, A.F. Niessner, Jr., A.M. DeLullo, P.D. Minear, and L.N. Long, "Intelligent Unmanned Air Vehicle Flight Systems," AIAA Paper No. 2005-7081, AIAA Infotech@Aerospace Conference, Washington, D.C., Sept. 2005.
14. L.N. Long, S.D. Hanford, A.F. Niessner, G.B. Gurney, and R.P. Hansen, "An Undergraduate Course in Unmanned Air Vehicles," AIAA Paper No. 2005-6982, AIAA Infotech@Aerospace Conference, Washington, D.C., Sept. 2005.