

Distributed control for unmanned vehicles

Jeffrey N. Callen
RedZone Robotics, Inc.

Remotely operated or computer-controlled vehicles have been developed for dealing with hazardous waste or other materials; for military, police, and fire-fighting operations; for undersea operations; and for automated highway driving. With improvements in sensors and computing capability, the trend is to put more and more control on board the vehicle.

All robotic vehicles must turn intent into action. This is the function of the onboard controller, which translates higher-level vehicle commands into vehicle motion or activation of the vehicle's equipment. As research into and use of robotic vehicles increases, so does the need for the efficient development and application of controller technology.

Toward that end, RedZone Robotics has developed a prototype Distributed Control System. The DCS provides cost-effective, flexible control and monitoring of a variety of robotic vehicle functions.

The DCS

RedZone's experience with centralized control systems indicated that they were expensive and had limited usability (see the sidebar, "A brief controller history"). Also, a feasibility study of single-board controller implementations identified the distributed approach as optimal for vehicle control. This led us to develop the DCS. The DCS consists of a network of small, intelligent nodes mounted throughout the vehicle. The nodes interface with and control a variety of device types. The nodes communicate via a controller area network (CAN) bus (basically a serial LAN), with a master unit to oversee the network operation. Remote control, autonomous navigation, or other higher-level functions can be layered on top of the DCS through an external gateway.
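To make the node-to-master exchange concrete, the sketch below shows one way a DCS-style message might be framed on the CAN bus. The frame layout, identifier encoding, and function names (dcs_frame, can_write, send_setpoint) are illustrative assumptions, not RedZone's actual implementation.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical framing for a DCS-style CAN message.  CAN 2.0A frames
 * carry an 11-bit identifier and up to 8 data bytes; here the identifier
 * is assumed to encode the destination node address and a message type. */
typedef struct {
    uint16_t id;        /* 11-bit CAN identifier: node address + message type */
    uint8_t  len;       /* number of valid payload bytes (0-8) */
    uint8_t  data[8];   /* payload, e.g. a variable value or a command */
} dcs_frame;

/* Assumed low-level driver call; a real system would use the CAN
 * controller's own transmit routine. */
extern int can_write(const dcs_frame *frame);

/* Example: the master sends a 16-bit setpoint to one node. */
static int send_setpoint(uint8_t node_addr, int16_t setpoint)
{
    dcs_frame f;
    f.id  = (uint16_t)((node_addr << 4) | 0x2);   /* 0x2: "command" type (assumed) */
    f.len = 2;
    memcpy(f.data, &setpoint, sizeof setpoint);
    return can_write(&f);
}
```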

WHY DISTRIBUTED CONTROL?
Such a distributed vehicle-control architecture has six main advantages:

Optimization
The local node cards are small by design and mounted next to the vehicle sensors and actuators. There is no large control enclosure with a multiple-slot card cage and power conditioning. The bulk of the controller is distributed throughout the vehicle, allowing maximum use of available space. Power consumption and dissipation are also distributed among the nodes, particularly those that drive high-current applications such as actuator motors.

Robustness
Distributing intelligence throughout the vehicle enhances the vehicle's fail-safe operation. Each node that controls a vehicle actuator can have specific operational instructions in its software to accommodate a variety of failure modes. If the vehicle loses communications or control from the directing authority or the outside world, the nodes can bring the vehicle to an orderly, safe stop on their own. Likewise, periodic diagnostics to monitor the health of the sensors or actuators can be run more often, at the local level. The nodes can report hard or impending failures and can implement alternate control schemes to effect fault-tolerant or "limp home" operation, depending on the nature of the failure.

Simplified maintenance
Every node has identical hardware, allowing for easy substitution of failed nodes. Software is configurable over the CAN to initialize replacement nodes to their specific application.


Standardization
The building-block approach allows many components, such as the master unit and the CAN hardware and protocols, to be off-the-shelf standard items. The node hardware is custom but follows industry-standard guidelines. Interface and control code for the node hardware is built into the controller and as such is transparent to users. They can treat the entire control system as a single device.

Economy
Multiple, identical nodes enable an economy of scale not present in centralized custom controllers. For example, reducing the lengths of multiconductor cabling throughout the vehicle reduces the wiring and installation costs significantly (see the sidebar, "Distributed control reduces wiring costs"). The distributed controller requires only that the power wiring and the CAN wiring extend throughout the vehicle. Very short cables connect the nodes to the actuators or sensors.

Comprehensiveness
The number of functions available on the vehicle is limited only by the number of nodes, which in turn is limited by the capabilities of the CAN and onboard power. A node can interface with multiple sensors and control an axis of motion at the same time.

HARDWARE DESIGN
Figure 1 shows a schematic of the DCS network. An external controller addresses the DCS through either the RS-232 serial or the VME parallel interface on the master unit. The external controller can be another computer that has navigation and mission responsibility, or an electronic link to a human operator controlling the vehicle via telepresence. Controlling software can reside on the master unit or, depending on the application, on a more powerful computer interfaced with the DCS.

The intelligent nodes typically receive analog sensor, digital switch or contact, encoder, and sensor or standalone serial device inputs.

A brief controller history

When RedZone started developing the Distributed Control System in late 1994, most control systems for remotely controlled vehicles and unmanned ground vehicles employed a centralized design that interfaced all vehicle-sensing and servo-control systems individually to a central computer. These computers consisted of multiple CPUs in standard equipment racks in a large central enclosure. They were designed to read sensors and close control loops on several actuators simultaneously and in real time. Cards in the same racks handled the I/O interface, meaning that each sensor and actuator on the vehicle was wired back to the central computer.

RedZone Robotics developed some controllers of this type (the University of Massachusetts at Amherst research testbed, http://vis-www.cs.umass.edu/~bdraper/ugv.html, and the internal Non-Line-of-Sight Leader/Follower, sponsored by the US Army, http://www.redzone.com). Other centralized-control implementations included

• Lockheed Martin Astronautics' four Surrogate Semiautonomous Vehicles, developed as part of the Unmanned Ground Vehicle Demo II program, sponsored by DARPA and the Office of the Secretary of Defense (http://www.cis.saic.com/previous/demoII/index.html);

• the National Institute of Standards and Technology's Mustang vehicle, built to demonstrate teleoperation for military applications; and

• Carnegie Mellon University's NavLab platforms, which demonstrated early autonomous vehicle operation (http://www.ri.cmu.edu/ri-home/projects.html).

Because projects such as these used a centralized-control approach, each new application required a complete redevelopment of the controller technology. A dedicated low-level technology to control motions, monitor health, and integrate the control system with position sensors was not available. This approach carried high development, installation, and maintenance costs, and resulted in a product not easily adaptable to other platforms or tasks.


[Figure 1 diagram: a 68040-based master unit on a VME bus, with an RS-232 or other serial interface, connected over the CAN bus to several 68HC16-based intelligent nodes handling digital I/O, analog sensors, encoders, and motors.]

Figure 1. A system block diagram. The DCS consists of a multidrop controller area network bus that serially links a master unit with several intelligent nodes. (Multidrop means that several nodes can be placed in parallel anywhere on the network cable.) The CAN bus allows nodes to be added to the network at any location, supports transmission speeds up to 1 Mbit per second, and is becoming an automotive industry standard for control systems in vehicles.


Nodes typically output motor-driver signals to control different actuators, relay drivers, and serial devices. One node can control multiple devices, depending on their location and the nature of their interfaces. An onboard power amplifier provides motor-drive capability for most vehicle-actuator systems. The nodes have identical hardware (see Figure 2).

The master unit (see Figure 3) oversees external communications, monitors system health, and passes commands from the external controller to the appropriate nodes. It also coordinates and controls higher-level vehicle functions that require the action of multiple nodes, and directs the configuration of the software in the nodes.

SOFTWARE CONFIGURATION
Although the intelligent nodes contain identical software, they can control different types of actuators and interface with different I/O devices. The user accomplishes this by writing generic software that uses database records that are configured at startup. The database records contain all the information needed to customize a particular input or output.

The nodes share the database records. High-level configuration files stored on the master hard drive define the system inputs and outputs. One file lists all the inputs in the system, and one lists all the outputs. Each device connected to the hardware is referenced by a variable name in the configuration files, and is identified by the node address and the I/O-channel number of the node where the device is connected.
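The article does not give the record layout, but a minimal sketch of what such an input/output record might look like in C follows; the field names, sizes, and types are assumptions for illustration only.

```c
#include <stdint.h>

/* Hypothetical channel types, following the device classes the article
 * mentions (analog sensors, digital I/O, encoders, serial devices,
 * motor drives). */
typedef enum {
    CH_ANALOG_IN,
    CH_DIGITAL_IN,
    CH_ENCODER,
    CH_SERIAL_DEVICE,
    CH_DIGITAL_OUT,
    CH_MOTOR_DRIVE
} channel_type;

/* One entry in the shared variable database: a named device tied to a
 * specific node and I/O channel, plus its current value. */
typedef struct {
    char         name[16];   /* variable name used in the configuration files */
    uint8_t      node_addr;  /* address of the node the device is wired to */
    uint8_t      channel;    /* I/O-channel number on that node */
    channel_type type;       /* kind of device on the channel */
    int32_t      value;      /* latest reading or commanded output */
} io_variable;
```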

Control- and monitor-configuration files assign actions to the variables. The control-configuration file has a standard format for describing a control loop that links an output with a feedback device. This format uses the variable names defined in the configuration files. The monitor-configuration file allows any input to be tied to any output through a variety of conditional tests.
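Continuing the sketch above, control and monitor entries might pair those variable names with loop parameters and conditional tests. Again, the specific fields (gains, comparison operators, actions) are assumptions rather than the DCS's documented format.

```c
#include <stdint.h>

/* Hypothetical control-loop record: an output driven toward a setpoint
 * using a named feedback device.  The gain fields are illustrative; the
 * article says only that motion-control parameters are stored per loop. */
typedef struct {
    char  output_var[16];    /* actuator output, by variable name */
    char  feedback_var[16];  /* feedback device, by variable name */
    float kp, ki, kd;        /* assumed servo gains */
    float max_rate;          /* assumed motion limit */
} control_loop;

/* Hypothetical monitor rule: a conditional test on one input that
 * triggers an action on an output when the constraint is violated. */
typedef enum { CMP_LT, CMP_GT, CMP_EQ } compare_op;

typedef struct {
    char       input_var[16];   /* input to watch */
    compare_op op;              /* conditional test */
    int32_t    threshold;       /* constraint value */
    char       output_var[16];  /* output to act on */
    int32_t    action_value;    /* value to command when triggered */
} monitor_rule;
```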

When the system is powered up, the master unit reads the configuration files from its hard drive. The master software looks at the configuration files and determines the nodes where the variable names will reside. For each variable, the software defines one node as the producer of that information. Any other node that uses that variable is defined as a consumer of that information. The master unit packages this information, along with the control and monitor rules from the configuration files. It then transmits these packets over the CAN to the proper nodes. The nodes read the information and place it in the appropriate databases.
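A minimal sketch of the bookkeeping this implies, with all names and limits assumed: for each variable the master records one producer node and a list of consumer nodes before packaging the records for transmission over the CAN.

```c
/* Hypothetical producer/consumer bookkeeping on the master at startup.
 * The node wired to a variable's I/O channel is recorded as its producer;
 * every node whose control or monitor rules name that variable is added
 * as a consumer, and the master then sends each of them the record over
 * the CAN bus (transmission omitted here). */

#define MAX_CONSUMERS 8

struct var_routing {
    unsigned char producer_node;               /* node that owns the channel */
    unsigned char consumers[MAX_CONSUMERS];    /* nodes that read the value */
    int           nconsumers;
};

static void add_consumer(struct var_routing *v, unsigned char node)
{
    if (node == v->producer_node)
        return;                                /* the producer already has it */
    for (int i = 0; i < v->nconsumers; i++)
        if (v->consumers[i] == node)
            return;                            /* already registered */
    if (v->nconsumers < MAX_CONSUMERS)
        v->consumers[v->nconsumers++] = node;
}
```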

Each node has three databases. The variable database contains the name, location, and value of each variable defined in the input- and output-configuration files. The control database contains the input and output variables and the motion-control parameters for each control loop. The monitor database contains the conditions and constraints to monitor, as well as the actions to take when constraints are violated. Once these databases are set up, the system is ready to operate.

SOFTWARE OPERATION
The master software runs three tasks simultaneously to process information to and from the controller and to monitor the system's operation. The main task executes all the Master User Interface commands, which are a set of standard high-level commands from outside the DCS, such as requests for I/O data or commands for motion. The heartbeat task is a background task that runs continuously on the master to detect communication failures. The message-processing task continuously reads and processes CAN messages from the network.
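The article does not describe the heartbeat mechanism beyond its purpose, so the following is only a plausible sketch: the master periodically polls each node and flags any that stop answering. The function names, polling period, and timeout policy are all assumptions.

```c
#include <stdint.h>

#define NUM_NODES     8      /* assumed network size */
#define MISSED_LIMIT  3      /* consecutive missed replies before a fault */

/* Assumed platform hooks: send a heartbeat query to one node, check for
 * its reply, sleep for one polling interval, and report a failure. */
extern void heartbeat_query(uint8_t node_addr);
extern int  heartbeat_reply_seen(uint8_t node_addr);
extern void sleep_ms(unsigned ms);
extern void report_comm_failure(uint8_t node_addr);

/* Background heartbeat task: detects nodes that have stopped talking. */
void heartbeat_task(void)
{
    uint8_t missed[NUM_NODES] = {0};

    for (;;) {
        for (uint8_t n = 0; n < NUM_NODES; n++)
            heartbeat_query(n);

        sleep_ms(100);                       /* assumed 10 Hz polling */

        for (uint8_t n = 0; n < NUM_NODES; n++) {
            if (heartbeat_reply_seen(n))
                missed[n] = 0;
            else if (++missed[n] >= MISSED_LIMIT)
                report_comm_failure(n);      /* e.g. command a safe stop */
        }
    }
}
```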

Each node's software runs five tasks concurrently at rates of 10 to 100 Hz. The command task continuously processes messages while performing overhead housekeeping. The variable-update task keeps the node-variable database up to date, and sends new values to other nodes over the CAN bus. The control task performs closed-loop servo control of an actuator by comparing the current position with the reference position and acting on the position error. The node I/O task keeps all variable values up to date by polling the I/O devices. The monitor task checks the conditions and constraints, and takes action if necessary. A scheduler determines the task priorities.
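As one concrete picture of the control task, the sketch below closes a loop on position error with a simple proportional-integral step. The article states only that the task compares the current and reference positions and acts on the error, so the PI form, gains, and I/O helpers are assumptions.

```c
/* Hypothetical periodic control step, called by the node's scheduler at
 * the loop rate (10-100 Hz per the article). */

typedef struct {
    float kp, ki;        /* assumed loop gains from the control database */
    float integral;      /* accumulated error */
    float dt;            /* loop period in seconds */
} servo_state;

/* Assumed hardware hooks on the node. */
extern float read_encoder_position(void);
extern void  write_motor_command(float effort);

void control_task_step(servo_state *s, float reference_position)
{
    float position = read_encoder_position();
    float error    = reference_position - position;

    s->integral += error * s->dt;

    /* Drive the actuator in proportion to the error (PI law assumed). */
    write_motor_command(s->kp * error + s->ki * s->integral);
}
```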

High-level commands pass to the master unit through the RS-232 or VME interface. These commands follow a standard format that allows reading node inputs, activating node outputs, or commanding motion from node-driven actuators. The master sends the command or query over the CAN bus to the appropriate node or nodes. The affected nodes reply or carry out the requested action as needed. Commands can be strung together into script files so that a whole sequence executes on a single command. When monitor constraints are violated, the system takes action that was preprogrammed by the user into the configuration files, the script files, or both. This action can include everything from notifying the user to sending actuators to certain positions.
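The exact command syntax is not given in the article, so the following dispatcher is only a sketch of the three command classes it names (read an input, activate an output, command motion); the enum, struct, and forwarding helpers are hypothetical.

```c
#include <stdint.h>

/* Hypothetical Master User Interface command classes. */
typedef enum {
    CMD_READ_INPUT,      /* request the value of a named input */
    CMD_SET_OUTPUT,      /* activate a node output */
    CMD_MOVE_ACTUATOR    /* command motion from a node-driven actuator */
} command_kind;

typedef struct {
    command_kind kind;
    char         var_name[16];  /* variable named in the configuration files */
    int32_t      argument;      /* output value or motion setpoint */
} miu_command;

/* Assumed helpers: resolve a variable name to its node and forward the
 * request over the CAN bus. */
extern uint8_t lookup_node(const char *var_name);
extern void    can_send_request(uint8_t node, const miu_command *cmd);

/* Master-side dispatch: route each command to the node that owns the
 * referenced variable.  A script file would simply be a list of these
 * commands executed in order. */
void dispatch_command(const miu_command *cmd)
{
    uint8_t node = lookup_node(cmd->var_name);
    can_send_request(node, cmd);
}
```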

High-level programming and standardized components add versatility

Both the master and the node software are written in C. However, this is transparent to the users. To configure the controller for any applicable system and to operate it, users do not need to write, edit, or compile any C code.


They program the system by editing the text of the high-level configuration and script files. A standard format for commands lets users access all the I/O in the system, once the configuration files are set up. They need only choose the variable names and actions to be carried out.

This gives users a great deal of flexibility in configuring and operating a system. They can easily add a new device to a node by wiring it to an open I/O channel and editing the text in the configuration files to define the device and what it does. Users can also easily add nodes to the system. The hardware distributes the CAN connections and power leads through junction boxes. Users can add a new node to any open connector on a junction box. They add the node and its devices to the software by editing the configuration files. The entire system can likewise be moved to another application with only installation costs and rewritten configuration files. No custom programming is needed. This feature benefits researchers who need the ability to change and expand a system without the time and cost of a complete custom design.

This type of controller also provides operational benefits. Because the nodes have identical hardware and software, users can easily substitute any node for any other node in the field. Jumpers in the wiring harness determine the node address, so the address resides in the installation and not in the node. Because the DCS configures the nodes over the network at startup, replacing a node is as simple as connecting the new one in place of the old and rebooting the system. Identical hardware reduces the spare parts count, a benefit for military and other users in remote locations where logistics is an issue.

Testing the DCS

We implemented the DCS on a US Army HMMWV (High-Mobility Multipurpose Wheeled Vehicle, a Humvee) to demonstrate full control of the vehicle. The DCS controlled the Humvee's throttle, brake, and steering through electric actuators. For the testing and demonstration, a laptop PC supplied driving commands to the DCS master unit through the RS-232 serial link. The master software interpreted certain keystrokes on the PC keyboard as commands to steer left or right, or move the throttle or brake by preset increments. A researcher in the front passenger seat operated the PC. A safety driver sat in the driver's seat but did not touch the vehicle controls unless there was an emergency.
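The article does not list the actual keys or increments used in testing, so the mapping below is purely illustrative of the scheme it describes: each keystroke nudges a steering, throttle, or brake setpoint by a preset amount and sends the result to the master over the serial link.

```c
/* Hypothetical laptop-side keystroke handler for the Humvee demo.
 * The keys, increments, and send routine are assumptions. */

extern void send_setpoints(float steer, float throttle, float brake);

static float steer_pos, throttle_pos, brake_pos;   /* current setpoints */

void handle_keystroke(char key)
{
    const float steer_step = 0.05f;     /* assumed preset increments */
    const float pedal_step = 0.10f;

    switch (key) {
    case 'a': steer_pos    -= steer_step; break;   /* steer left    */
    case 'd': steer_pos    += steer_step; break;   /* steer right   */
    case 'w': throttle_pos += pedal_step; break;   /* more throttle */
    case 's': throttle_pos -= pedal_step; break;   /* less throttle */
    case 'b': brake_pos    += pedal_step; break;   /* apply brake   */
    case 'n': brake_pos    -= pedal_step; break;   /* release brake */
    default:  return;
    }
    send_setpoints(steer_pos, throttle_pos, brake_pos);
}
```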

To date, the vehicle has been driven in this manner for approximately 17 hours and over 50 miles. Because of extensive bench testing before installation, there were surprisingly few integration issues. The majority of changes resulting from the testing were to the laptop PC user interface, primarily selecting the proper keystrokes and actuator increments. Most of the rest of the testing was to build operator confidence in the nonintuitive driving style. Maximum speed for the vehicle was 33 mph, limited by the constraints of the testing sites rather than by the DCS.

The vehicle had no trouble maneuvering on public roads with varying levels of traffic, although in these circumstances, the safety driver was especially vigilant.

Figure 2. DCS nodes are small and compact and can interface to a variety of devices. Identical hardware simplifies installation and field replacement. (Photo by Matt Bulvony, courtesy of RedZone Robotics.)

Figure 3. The master unit is a VME cage that contains a 68040 processor card, an interface card to the CAN, and a hard disk to store master programming. The cage also includes power converters for the VME cage and the nodes, and a power-monitoring module. (Photo by Matt Bulvony, courtesy of RedZone Robotics.)


Because of safety concerns, we did not take the vehicle onto high-speed roads or out with heavy traffic. We demonstrated the safety, monitoring, and maintenance features of the DCS by simulating failures and showing recovery by the intelligent nodes, and by swapping nodes in the field. The project's scope limited the extent of the testing we could do, but the results were very positive.

Where intelligent vehicles are headed

While this project successfully developed and demonstrated a controller for remotely operated vehicles, the resultant product will work on a wide range of applications. Because the programming is at a high level, this controller is appropriate for intelligent vehicles of all sorts, industrial processes or factory automation, commercial building automation, and motion-control applications, such as aircraft simulators (Stewart platforms). Any application that has remote sensing and control points would benefit from distributed control. However, hardware cost would be a significant issue for commercial applications.

Other intelligent-vehicle projects have demonstrated the benefits of distributed control. Recent military projects include

• Omnitech Robotics' Standard Teleoperation System (http://www.omnitech.com/sts.html),

• Robotics Systems Technology's Mobile Detection Assessment Response System-Exterior (MDARS-E) program (http://www.nosc.mil/robots/land/mdars/mdars.html), and

• a controller developed by SAIC (Science Applications Int'l Corp.) for the Unmanned Ground Vehicles/Systems Joint Program Office, sponsored by the United States Army and Marine Corps (http://www.redstone.army.mil/ugvsjpo).

The next generation of intelligent vehicles will likely use distributed control exclusively.

ACKNOWLEDGMENTS
This work was funded under DARPA's Small Business Innovation Research program and was sponsored by the Robotics Office of the US Army Tank-Automotive Research, Development, and Engineering Center (Tardec) in Warren, Michigan. For more about RedZone Robotics, access http://www.redzone.com. RedZone products, including the DCS, are listed in the Aviation Week & Space Technology / Assoc. for Unmanned Vehicle Systems Int'l 1997–98 Int'l Guide to Unmanned Vehicles (McGraw-Hill, 1997).

Jeffrey Callen is an electrical engineer and the program manager for intelligent vehicles at RedZone Robotics. He currently is managing the development of an automated transport system to move large, sophisticated modular optics units for the National Ignition Facility laser-fusion project being built at Lawrence Livermore National Laboratory. He previously managed RedZone's Non-Line-of-Sight Leader/Follower project, which demonstrated convoying of a human-driven lead vehicle and a computer-driven follower vehicle for the US Army TACOM (Tank-automotive and Armaments Command). He has a BS from Carnegie Mellon University and an MS from the University of Pittsburgh, both in electrical engineering. He is a member of the IEEE and the Association for Unmanned Vehicle Systems International. Contact him at RedZone Robotics, Inc., 2425 Liberty Ave., Pittsburgh, PA 15222-4639; [email protected].


Distributed control reduces wiring costs

Lockheed Martin Astronautics studied comparative wiring costs for their Surrogate Semiautonomous Vehicles, which were based on a US Army HMMWV (High-Mobility Multipurpose Wheeled Vehicle, a Humvee). The study compared the wiring costs for centralized control with an estimate of wiring costs for a distributed controller design. The results (see Table A) show important savings in total wire weight, total connections, and labor to fabricate and install wire harnesses into the vehicle. An 85% savings of installation labor costs is significant when constructing multiple vehicles.

Table A. Comparison of wiring for centralized and distributed control of the Lockheed Martin Surrogate Semiautonomous Vehicle (presented by Roy Greunke, Lockheed Martin Astronautics, at the DARPA Unmanned Ground Vehicle/Demo II Workshop, Feb. 22, 1995, Killeen, Texas).

                            Centralized    Distributed control    Savings
                            control        (estimated)            (%)
No. of cables               33             14                     57.6
Cable length (ft.)          117            54.5                   53.4
Total wire weight (lb.)     30.56          7.76                   74.6
No. of connections          1,004          536                    46.6
Fabrication time (hrs.)     98.3           44.6                   54.6
Installation time (hrs.)    88             13                     85.2

