

Multi-sensor management for information fusion: issues and approaches

N. Xiong, P. Svensson *

Department of Data and Information Fusion, FOI, S-172 90 Stockholm, Sweden

Received 2 July 2001; received in revised form 28 September 2001; accepted 18 October 2001

Abstract

Multi-sensor management concerns the control of environment perception activities by managing or coordinating the usage of multiple sensor resources. It is an emerging research area, which has become increasingly important in the research and development of modern multi-sensor systems. This paper presents a comprehensive review of multi-sensor management in relation to multi-sensor information fusion, describing its place and role in the larger context, generalizing the main problems from existing application needs, and highlighting problem solving methodologies. © 2002 Elsevier Science B.V. All rights reserved.

Keywords: Sensor management; Multi-sensor system; Data fusion; Information fusion; Sensor deployment; Sensor behavior assignment; Sensor coordination

1. Introduction

Multi-sensor systems are becoming increasingly important in a variety of military and civilian applications. Since a single sensor generally can perceive only limited, partial information about the environment, multiple similar and/or dissimilar sensors are required to provide sufficient local pictures, with different foci and from different viewpoints, in an integrated manner. Further, information from heterogeneous sensors can be combined using data fusion algorithms to obtain synergistic observation effects. Thus, the benefits of multi-sensor systems are to broaden machine perception and enhance awareness of the state of the world, compared to what could be acquired by a single-sensor system.

With advancements in sensor technology, sensors are becoming more agile. Also, more of them are needed in a scenario, in response to the increasingly intricate nature of the environment to be sensed. The increased sophistication of sensor assets, along with the large amounts of data to be processed, has pushed the information acquisition problem far beyond what can be handled by a human operator. This motivates the emerging interest in research into automatic and semi-automatic management of sensor resources to improve overall perception performance beyond basic fusion of data.

1.1. The fundamental purpose of sensor management

Multi-sensor management is formally described as a system or process that seeks to manage or coordinate the usage of a suite of sensors or measurement devices in a dynamic, uncertain environment, to improve the performance of data fusion and ultimately that of perception. It is also beneficial to avoid overwhelming storage and computational requirements in a sensor- and data-rich environment by controlling the data gathering process such that only the truly necessary data are collected and stored [60]. The why and what issues of both single-sensor and multi-sensor management were thoroughly discussed in [1,7,53,54,56]. To reiterate, the basic objective of sensor management is to select the right sensors to perform the right service on the right object at the right time. The sensor manager is responsible for answering questions like:

• Which observation tasks are to be performed and what are their priorities?

• How many sensors are required to meet an information request?

* Corresponding author. Tel.: +46-855-50-3696; fax: +46-855-50-3686. E-mail addresses: [email protected] (N. Xiong), [email protected] (P. Svensson).

1566-2535/02/$ - see front matter © 2002 Elsevier Science B.V. All rights reserved.
PII: S1566-2535(02)00055-6
Information Fusion 3 (2002) 163–186
www.elsevier.com/locate/inffus


• When are extra sensors to be deployed and in which locations?

• Which sensor sets are to be applied to which tasks?

• What is the action or mode sequence for a particular sensor?

• What parameter values should be selected for the operation of sensors?

The simplest job of sensor management is to choose optimal sensor parameter values for one or more sensors with respect to a given task; see, for example, [72]. This is also called active perception, where sensors are to be configured optimally for a specific purpose. More general problems of (multi-)sensor management are, however, related to decisions about which sensors to use and for which purposes, as well as when and where to use them. It is widely acknowledged that it is not realistic to continually observe everything in the environment; selective perception therefore becomes necessary, requiring the sensor management system to decide when to sense what, and with which sensors. Typical temporal complexities, which must be accommodated in the sensor management process, were discussed in [46].

1.2. The role of sensor management in information fusion

Sensor management merits incorporation in information fusion processes. Although terminology has not yet fully stabilized, it is generally acknowledged that information fusion is a collective concept comprising situation assessment (level 2), threat or impact assessment (level 3), and process refinement (level 4) in the so-called JDL model of data fusion [76]. As pointed out in [70], in addition to intelligence interpretation, information fusion should be equipped with techniques for proactive or reactive planning and management of own collection resources, such as sensors and sensor platforms, in order to make best use of these assets with respect to identified intelligence requirements. Sensor management, which aims at improving data fusion performance by controlling sensor behavior, plays the role of level 4 functions in the JDL model.

Sensor management indeed provides information feedback from data fusion results to sensor operations [7,54]. The representation of the data fusion process as a closed-loop feedback structure is depicted in Fig. 1, where the sensor manager on level 4 uses the information from levels 0–3 to plan future sensor actions. The feedback is intended to improve the data collection process, with expected benefits of earlier detection, improved tracking, and more reliable identification, or to confirm what might be tactically inferred from previously gathered evidence. Timeliness is a necessary requirement on the feedback management of sensors for fast adaptation to environment changes. That is to say, a prompt decision on sensor functions has to be made before the development of the tactical situation has made such a decision obsolete.
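The closed-loop structure of Fig. 1 can be sketched in a few lines of Python. This is a minimal illustration, not taken from the paper: the fusion, planning, and measurement functions are placeholders, and a single scalar estimate stands in for a full situation picture.

```python
def fuse(estimate, measurements):
    # Levels 0-3 placeholder: fold each measurement into the running
    # estimate with a fixed gain (a crude stand-in for a tracking filter).
    for z in measurements:
        estimate = 0.8 * estimate + 0.2 * z
    return estimate

def plan(estimate, sensors):
    # Level 4 placeholder (sensor manager): cue every sensor toward
    # the current fused estimate.
    return {s: estimate for s in sensors}

def measure(cues, truth):
    # Sensor models: one (noise-free, for brevity) look per cue.
    return [truth for _ in cues]

sensors = ["radar", "esm"]
estimate, truth = 0.0, 10.0
for _ in range(20):                     # the feedback loop of Fig. 1
    cues = plan(estimate, sensors)      # level 4 plans future actions
    estimate = fuse(estimate, measure(cues, truth))
```

With the feedback in place the estimate converges toward the true state; replacing the placeholders with real filters and planners yields the architecture of Fig. 1.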

As a categorization of process refinement, Steinberg and Bowman [68] classified responses of resources (including sensors) as reflexive, feature-based, entity-relation based, context-sensitive, cost-sensitive, and reaction-sensitive, in terms of input data/information types. This categorization can be viewed as an expansion of Dasarathy's model [18], in which data fusion functions are subdivided considering merely data, features, and objects as possible input/output types.

1.3. Multi-sensor management architectures

The architecture of a multi-sensor management system is closely related to the form of the data fusion unit. Typically there are three alternatives for the system structure, namely:

1. Centralized. In a centralized paradigm the data fusion unit is treated as a central mechanism. It collects information from all the different platforms and sensors and decides the jobs that must be accomplished by the individual sensors. All commands sent from the fusion center to the respective sensors must be accepted and followed by proper sensor actions.

Fig. 1. Feedback connection via sensor manager in a data fusion process.

2. Decentralized. In a decentralized system data are fused locally by a set of local agents rather than by a central unit. In this case, every sensor or platform can be viewed as an intelligent asset having some degree of autonomy in decision-making. Sensor coordination is achieved through communication in the network of agents, in which sensors share locally fused information and cooperate with each other. Durrant-Whyte and Stevens [23] stated that decentralized data fusion exhibits many attractive properties by being:

• scalable in structure, without being constrained by centralized computational bottlenecks or communication bandwidth limitations;

• survivable in the face of on-line loss of sensing nodes and dynamic changes of the network;

• modular in the design and implementation of fusion nodes.

However, the effect of redundant information is a serious problem that may arise in decentralized data fusion networks [16]. It is not possible, within most filtering frameworks, to combine pieces of information from multiple sources unless they are independent or have known cross-covariance [36]. Moreover, without any common communication facility, data exchange in such a network must be carried out strictly on a node-to-node basis. A delay between sender and receiver could result in transient inconsistencies of the global state among different parts of the network, causing degradation of overall performance [28].

3. Hierarchical. This can be regarded as a mixture of the centralized and decentralized architectures. In a hierarchical system there are usually several levels of hierarchy, in which the top level functions as the global fusion center and the lowest level consists of several local fusion centers [56]. Every local fusion node is responsible for the management of a sensor subset. The partitioning of the whole sensor assembly into different groups can be realized based on sensors' geographical locations or platforms, sensor functions performed, or sensor data delivered (to ensure commensurate data from the same sensor group).
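The unknown cross-covariance caveat noted for decentralized fusion above is often handled with covariance intersection, a standard technique (not developed in this paper) that yields a consistent fused estimate without knowing the correlation between sources. A scalar sketch, with illustrative numbers:

```python
def covariance_intersection(xa, Pa, xb, Pb, omega):
    # Fuse two scalar estimates (xa, Pa) and (xb, Pb) whose
    # cross-covariance is unknown; omega in [0, 1] weights the sources.
    # The fused covariance is consistent for any actual correlation.
    info = omega / Pa + (1.0 - omega) / Pb      # fused information
    P = 1.0 / info
    x = P * (omega * xa / Pa + (1.0 - omega) * xb / Pb)
    return x, P

# Two local track estimates of the same target from different nodes.
x, P = covariance_intersection(10.0, 4.0, 12.0, 2.0, omega=0.5)
```

In practice omega is chosen to minimize the fused covariance (its trace or determinant in the matrix case), trading off the two sources' quality.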

Two interesting instances of hybrid and hierarchical sensor management architectures are described in the following for illustration.

The macro/micro architecture proposed in [7] can be classified as a two-level hierarchical system. It consists of a macro sensor manager playing a central role and a set of micro sensor managers residing with the respective sensors. The macro sensor manager is in charge of high-level strategic decisions about how best to utilize the available sensing resources to achieve the mission objectives. The micro sensor manager schedules the tactics of a particular sensor to best carry out the requests from the macro manager. Thus, every managed sensor needs its own micro manager.

Another hybrid distributed and hierarchical approach was suggested in [60] for sensor-rich environments, exemplified by an aircraft health and usage monitoring system. The main idea is to distribute the management function across system functional or physical boundaries, with global oversight of mission goals and information requests. One such model for the management of numerous sensors is shown in Fig. 2. At the top of the model is the mission manager, tasked with converting mission goals into information needs, which are then mapped by the information instantiator into a set of measurement patterns in accordance with those needs. The role of the meta-manager is to enable natural subdivision of a single manager into a set of mostly independent local resource managers, each being responsible for a particular sensor subset. Occasionally, these local managers need to be coordinated by the meta-manager if there is a request for information which cannot be satisfied by a single sensor suite. A major difficulty in implementing this hybrid architecture of sensor management lies with the meta-manager. It is not yet obvious how best to translate global functional needs into a set of local resource managers, nor how to coordinate the disparate local managers distributed across functional or physical boundaries.

Fig. 2. A distributed and hierarchical sensor management model for a sensor-rich environment (cited from [60]).

1.4. Classification of multi-sensor management problems

Multi-sensor management is a broad concept referring to a set of distinct issues in the planning and control of sensor resource usage to enhance multi-sensor data fusion performance. Various aspects of this area have been discussed in the open literature. Generally, these problems fall into three main categories: sensor deployment, sensor behavior assignment, and sensor coordination.

1.4.1. Sensor deployment

Sensor deployment is a critical issue for intelligence collection in an uncertain, dynamic environment. It concerns making decisions about when, where, and how many sensing resources need to be deployed in reaction to the state of the world and its changes. In some situations, it can be beneficial to proactively deploy sensing resources according to a predicted tendency of situation development, in order to be prepared to observe an event which is likely to happen in the upcoming period.

Sensor placement [59] needs special attention in sensor deployment. It consists of positioning multiple sensors simultaneously in optimal or near-optimal locations to support surveillance tasks when necessary. Typically it is desired to locate sensors within a particular region, determined by the tactical situation, so as to optimize a certain criterion, usually expressed in terms of global detection probability, quality of tracks, etc. This problem can be formulated as one of constrained optimization of a set of parameters. It is subject to constraints due to the following factors:

• sensors are usually restricted to specified regions due to tactical considerations;

• critical restrictions may be imposed on the relative positions of adjacent sensors to enable their mutual communication when sensors are arranged as distributed assets in a decentralized network;

• the amount of sensing resources that can be positioned in a given period is limited due to logistical restrictions.
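The constrained-optimization formulation above can be made concrete with a brute-force sketch in Python. All models and numbers here are invented for illustration: detection probability decays exponentially with distance, candidate sites are restricted to a grid (first constraint), a communication-range check enforces the second constraint, and a limit of two sensors stands in for the third.

```python
import itertools
import math

def detection_prob(sensor, target, r0=5.0):
    # Notional single-sensor detection model: decays with distance.
    return math.exp(-math.dist(sensor, target) / r0)

def global_detection(sensors, targets):
    # P(at least one sensor detects), averaged over targets.
    total = 0.0
    for t in targets:
        miss = 1.0
        for s in sensors:
            miss *= 1.0 - detection_prob(s, t)
        total += 1.0 - miss
    return total / len(targets)

# Constraint 1: candidate sites restricted to a specified region.
sites = [(x, y) for x in range(0, 10, 2) for y in range(0, 10, 2)]
targets = [(1.0, 1.0), (8.0, 8.0), (5.0, 2.0)]

# Constraints 2 and 3: mutual communication range, at most two sensors.
feasible = [p for p in itertools.combinations(sites, 2)
            if math.dist(p[0], p[1]) <= 12.0]
best = max(feasible, key=lambda p: global_detection(p, targets))
```

Exhaustive search works only for toy instances; practical placement problems call for the numerical or heuristic optimization methods surveyed later in the paper.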

In simple cases, decisions on sensor placement are made with respect to a well-prescribed, stationary environment. As examples, we may consider application scenarios such as:

• placing radars to minimize the terrain screening effect in the detection of aircraft approaching a fixed site;

• arranging a network of intelligence gathering assets in a specified region to target another well-defined area.

In the above scenarios, mathematical or physical models, such as terrain models and propagation models, are commonly available and are used as the basis for evaluating sensor placement decisions.

More challenging are those situations in which the environment is dynamic and sensors must repeatedly be repositioned in order to refine and update the state estimates of moving targets in real time. Typical situations where reactive sensor placement is required are:

• submarine tracking by means of passive sonobuoys in an anti-submarine warfare scenario;

• locating moving transmitters using ESM (electronic support measures) receivers;

• tracking tanks on land by dropping passive acoustic sensors.

1.4.2. Sensor behavior assignment

The basic purpose of sensor management is to adapt sensor behavior to dynamic environments. By sensor behavior assignment is meant the efficient determination and planning of sensor functions and usage according to changing situation awareness or mission requirements. Two crucial points are involved here:

1. Decisions about the set of observation tasks (referred to as system-level tasks) that the sensor system is supposed to accomplish currently or in the near future, on the grounds of the current/predicted situation as well as the given mission goal;

2. Planning and scheduling of the actions of the deployed sensors to best accomplish the proposed observation tasks and their objectives.

Owing to limited sensing resources, it is common in real applications that the available sensors are not able to serve all desired tasks and achieve all their associated objectives simultaneously. Therefore a reasonable compromise between conflicting demands must be sought. Intuitively, more urgent or important tasks should be given higher priority in their competition for resources; thus a scheme is required to prioritize observation tasks. Information about task priority can be very useful in the scheduling of sensor actions and for negotiation between sensors in a decentralized paradigm.



To concretize this class of problems, let us consider a scenario including a number of targets as well as multiple sensors, each capable of focusing on different objects with different modes for target tracking and/or classification. The first step for the sensor management system should be to use the gathered evidence to decide which objects are of interest and to prioritize which of them to look at in the following time interval. Subsequently, in the second step, the different sensors, together with their modes, are allocated across the objects of interest to achieve the best situation awareness. In fact, owing to constraints on sensor and computational resources, it is in general not possible to measure all targets of interest with all sensors in a single time interval. Also, improvement of the accuracy on one object may lead to degradation of performance on another object. What is required is a suitable compromise among the different targets.
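The two-step procedure just described — prioritize objects, then allocate sensors across them — can be illustrated with a greedy assignment in Python. The gain table and the sensor/target names are hypothetical; a real system would derive the gains from predicted accuracy improvements weighted by target priority.

```python
def assign_sensors(sensors, targets, gain):
    # Greedy sensor-to-target assignment: repeatedly commit the
    # highest-gain (sensor, target) pair; each sensor serves at most
    # one target per time interval.
    pairs = sorted(((gain[s][t], s, t) for s in sensors for t in targets),
                   reverse=True)
    assignment = {}
    for g, s, t in pairs:
        if s not in assignment:
            assignment[s] = t
    return assignment

# Priority-weighted expected gains (illustrative numbers).
gain = {"radar": {"T1": 0.9, "T2": 0.4},
        "ir":   {"T1": 0.5, "T2": 0.7}}
plan = assign_sensors(["radar", "ir"], ["T1", "T2"], gain)
```

Greedy selection is not globally optimal; when the gains are additive, an optimal sensor-to-task assignment can instead be computed with the Hungarian algorithm.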

It is worth noting that although several distinct terms appear in the literature, such as sensor action planning in [41], sensor selection in [25,37,38], and sensor-to-task assignment in [52,53], these terms inherently signify the same aspect of distributing resources among observation tasks, and thus belong to the second issue of this problem class. In this paper we present the more general concept of sensor behavior assignment, which involves not only the arrangement of operations for individual sensors but also inferences about the system-level tasks and objectives to be accomplished. Actually, specification of tasks at the system level can be considered as postulating the expected overall behavior of the perception system as a whole, while planning and scheduling of sensor actions defines local behaviors residing with specific sensors. Dynamic information associated with time-varying utility and availability serves here as the basis for decision making about sensor behaviors.

1.4.3. Sensor coordination in a decentralized sensor network

There are two general ways to integrate a set of sensors into a sensor network. One is the centralized paradigm, where all actions of all sensors are decided by a central mechanism. The other alternative is to treat the sensors in the network as distributed intelligent agents with some degree of autonomy [74]. In such a decentralized architecture, bi-directional communication between sensors is enabled, so that the communication bottlenecks which may exist in a centralized network can be avoided. A major research objective of decentralized sensor management is to establish cooperative behavior between sensors with little or no external supervision.

An interesting scenario requiring sensor coordination is shown in Fig. 3, where five autonomous sensors cooperatively explore an area of interest. The sensing ranges of the sensors are shown as shaded circles in the figure, and Vi denotes the velocity vector of sensor i. That is, every sensor has its own dynamics, can perceive only part of the area, and thus has its own local view of the world. These local views can be shared by some members of the sensor community. Intuitively, a local picture from one sensor can be used to direct the attention of other sensors, or to transfer tasks such as target tracking from one sensor to another. An interesting question is how the participating sensors can autonomously coordinate their movements and sensing actions, on the grounds of shared information, to develop an optimal global awareness of the environment with parsimonious consumption of time and resources.

1.5. Organization of the paper

Following the overview of multi-sensor management issues given above, Section 2 of this paper presents a general perspective on sensor management problem solving. We will discuss principles and methodological foundations upon which sensor management systems may be based, and then suggest a hierarchical top-down procedure by which sensor managers may solve a complex perception control problem requiring comprehensive functionality.

Nevertheless, there is no single mechanism capable of accomplishing the functional activities of all levels of the sensor management hierarchy. Although a number of published papers incorporate the term sensor management in their titles, each individual paper usually discusses only one topic within this general field. In the following sections, we will therefore discuss known approaches to sensor management in terms of separate issues: principles of sensor placement (Section 3), observation task evaluation (Section 4), measurement policies for information collection (Section 5), sensor resource allocation (Section 6), and sensor behavior cooperation (Section 7). Section 8 concludes the paper and discusses issues for future research.

We will not dwell on sensor parameter control and sensor job (time) scheduling, since these topics seem more relevant to single-sensor systems than to the multi-sensor systems on which this paper is mainly focused.

Fig. 3. A team of mobile sensors cooperatively observing an area of interest.



Approaches to sensor scheduling will be briefly mentioned when discussing level 1 management activities in the top-down problem solving procedure (Section 2).

2. A perspective on sensor management problem solving

2.1. Principles and methodological considerations

Inherently, the purpose of sensor management is to optimize data fusion performance by feedback control of sensor resources. The performance index was called a figure of merit in [7]. Its definition depends on what one wishes to optimize, typical quantities including probability of target detection, track/identification accuracy, probability of loss-of-track, probability of survival, probability of target kill, etc. Basically, we desire the performance index established for optimization to transcend the diversity of sensors and to be analytically and computationally tractable. Further, in order to avoid potentially myopic sensor management strategies, a prospective figure of merit should take into account both short-term and long-term interests, so as to arrive at a good balance in final outcomes. Optimization according to long-term objectives is critical to the development of global improvements of the perception process with respect to an evolving scenario. Recent research efforts to realize non-myopic sensor management functionality can be found in [35,71,73,77].

From the viewpoint of decision theory, sensor management is a decision making task: determine the most appropriate sensor action to perform in order to achieve maximum utility. Such a decision problem was treated in [41] based upon a Bayesian decision tree, where distinct perception actions were assessed against each other in terms of combined costs and benefits. Fung et al. [24] and Musick and Malhotra [54] discussed the applicability of influence diagrams as a graphical modeling technique in support of decision analysis for the selection of sensor actions. Decision making for sensor management tends to be a sequential process, in the sense that intermediate decisions have to be made while a dynamic situation evolves, and the penalties and rewards of decisions made are only revealed over time. It has been recognized that in many cases sensor management can be modeled as a Markov decision problem subject to uncertainty, solutions to which were studied by Castanon [14,15] based on stochastic dynamic programming. Although dynamic programming offers a mathematically tractable structure for finding the optimum sequence of decisions, it is likely to suffer from combinatorial explosion when solving practical problems of even moderate size.
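A finite-horizon sketch of the dynamic programming formulation may help fix ideas. Everything here — the two belief states, the two sensor modes, the rewards, and the transition probabilities — is invented for illustration; the formulations in [14,15] are far richer.

```python
def optimal_policy(states, actions, reward, transition, horizon):
    # Backward stochastic dynamic programming over sensor choices.
    # transition[(s, a)] lists (probability, next_state) pairs;
    # reward[(s, a)] is the expected immediate information gain.
    V = {s: 0.0 for s in states}
    policy = {}
    for _ in range(horizon):
        newV = {}
        for s in states:
            q = {a: reward[(s, a)]
                    + sum(p * V[s2] for p, s2 in transition[(s, a)])
                 for a in actions}
            best = max(q, key=q.get)
            newV[s] = q[best]
            policy[s] = best
        V = newV
    return policy, V

states = ["searching", "tracking"]
actions = ["wide_scan", "narrow_beam"]
reward = {("searching", "wide_scan"): 1.0, ("searching", "narrow_beam"): 0.2,
          ("tracking", "wide_scan"): 0.3, ("tracking", "narrow_beam"): 1.5}
transition = {
    ("searching", "wide_scan"):   [(0.6, "tracking"), (0.4, "searching")],
    ("searching", "narrow_beam"): [(0.1, "tracking"), (0.9, "searching")],
    ("tracking", "wide_scan"):    [(1.0, "searching")],
    ("tracking", "narrow_beam"):  [(0.8, "tracking"), (0.2, "searching")],
}
policy, value = optimal_policy(states, actions, reward, transition, horizon=5)
```

The state space here has two elements; with many targets, sensors, and modes it grows combinatorially, which is exactly the tractability problem noted above.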

By viewing sensors as constrained communication channels [33], sensor management is intended to control the functions of such channels so as to provide the maximum quantity of useful information within a limited time period. That is to say, we would like to optimize the information passed through the sensor channels by proper arrangement of their operations. Since uncertainty about the environment is reduced by means of a measurement, each sensor action has the potential merit of contributing information as it is performed. The key point here is how to assess the relative merits of candidate sensing plans in terms of information gained or uncertainty reduced. One direct means serving this purpose is to borrow the concept of entropy from information theory as a measure of the uncertainty about the state of the environment. In this way, we can quantify the amount of information gained through the accomplishment of a sensing plan as either entropy change (in Shannon's definition) or cross-entropy (in Kullback–Leibler's definition); see Section 6.2.
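Both information measures just mentioned — entropy change in Shannon's sense and cross-entropy in the Kullback–Leibler sense — are straightforward to compute for a discrete belief. The prior/posterior numbers below are illustrative only:

```python
import math

def entropy(p):
    # Shannon entropy (bits) of a discrete probability distribution.
    return -sum(pi * math.log2(pi) for pi in p if pi > 0.0)

def kl_divergence(p, q):
    # Kullback-Leibler divergence D(p || q) in bits.
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0.0)

# Belief over four target classes before and after a candidate
# measurement (posterior numbers are illustrative).
prior = [0.25, 0.25, 0.25, 0.25]
posterior = [0.70, 0.10, 0.10, 0.10]

gain_entropy = entropy(prior) - entropy(posterior)   # Shannon form
gain_kl = kl_divergence(posterior, prior)            # Kullback-Leibler form
```

For a uniform prior the two measures coincide; in general they differ, and either can serve as the figure of merit when ranking candidate sensing plans (Section 6.2).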

In recent years, entropy-based information metrics have been adopted in many studies of sensor management in various scenarios, including: controlling a single sensor to track multiple targets [33], sensor-target pairings in multi-sensor, multi-target tracking [19,61,62], search area determination [39], and search versus track trade-offs in a simulated environment [49]. However, all these papers consider only the amount of information rather than the value of information. They aim to maximize the quantity of information provided by sensors without taking into account whether the derived information is useful or interesting. Occasionally, this might lead to directing sensor attention to non-significant aspects of the environment in order to reach a superficial maximum of information attainment. Further research is thus needed to develop more comprehensive information measures, in order to drive sensor management towards optimization in terms of the utility of the obtained information with respect to mission goals, rather than merely its amount [3]. Regarding recent efforts in this direction, we note the attempts of [32,48], in which goal lattices were employed to prioritize observation tasks in terms of their mission-accomplishing value rather than the quantity of obtained information.

As sensor management plays the role of feedback control of sensor resources, the development of optimal control laws and strategies suitable for real applications is a challenging problem. Pre-designing a management algorithm that ensures best control behavior at all times is desirable, but can be extremely difficult due to the high uncertainty and complexity of the environment to be sensed. A perhaps more feasible way of achieving optimality of sensor management performance is to incorporate machine learning techniques, enabling sensor managers to learn from experience. In principle, there are two general classes of machine learning useful for arriving at optimal management strategies, namely off-line learning and on-line learning. Off-line learning, as practiced in [40,65,66], attempts to generalize valuable control (management) knowledge from a sufficient number of examples. It is actually a form of inductive learning, which requires a comprehensive training database of scenarios covering a wide range of situations. Using a database with only a small number of scenarios could result in a sensor manager that is effective in some cases but ineffective in others. On the other hand, on-line learning represents the idea of the sensor manager directly learning to manage sensing assets through its interactions with the environment. This mode of learning is more difficult to realize than off-line learning, in that an assessment signal has to be created for every sensing action conducted, in order for management strategies to be updated immediately upon completion of a perception. In particular, reinforcement learning techniques [8,47] were found effective for on-line learning of sensor search strategies for static targets. Generally speaking, despite some preliminary work (as mentioned above), machine learning for sensor management is still an under-researched area and much room remains for further study in this direction.
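The on-line learning idea for search strategies can be sketched with a bandit-style incremental update. The cells, detection probabilities, and learning parameters below are all invented for illustration; [8,47] treat the problem far more thoroughly.

```python
import random

random.seed(1)

cells = [0, 1, 2]                     # places a static target may hide
p_detect = {0: 0.1, 1: 0.8, 2: 0.3}   # true odds, unknown to the manager
Q = {c: 0.0 for c in cells}           # learned value of searching each cell
N = {c: 0 for c in cells}             # visit counts
epsilon = 0.2                         # exploration rate

for look in range(2000):
    # epsilon-greedy: mostly exploit the best-known cell, sometimes explore.
    if random.random() < epsilon:
        c = random.choice(cells)
    else:
        c = max(cells, key=Q.get)
    # assessment signal: 1 if this look detects the target, else 0.
    r = 1.0 if random.random() < p_detect[c] else 0.0
    N[c] += 1
    Q[c] += (r - Q[c]) / N[c]         # running-average value update

best_cell = max(cells, key=Q.get)
```

Each look yields an immediate assessment signal, which is exactly the requirement on on-line learning noted above; over many looks the learned Q values approach the true detection probabilities, so the manager concentrates its searches on the most rewarding cell.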

In addition, in the authors' view, behavior-based artificial intelligence [13] could be a useful asset in information fusion process refinement. As it is known that behavior-based systems [2] exhibit strong robustness and flexibility against a dynamically changing world, incorporating reactive skills in sensor management would enable the perception system to have reflexes for coping with eventual deviations and to recover instantaneously from unexpected events. In realizing this purpose, a deliberative-reactive architecture needs to be established in the process refinement level for integration of abstract planning and behavior-based reactive control in a coherent manner.

2.2. Top-down problem solving by a sensor manager

As we know, sensor management is a complex procedure intended to guide the information collection process with respect to a broad range of activities. It is not realistic for a sensor manager to solve its complete problem by a single mechanism, because the huge and perplexing space of possible alternatives tends to make this goal infeasible in real time. On the other hand, problem solving for sensor management can be made much easier and more effective by partitioning tasks into activities along different layers. Thus, a sensor manager might follow a top-down policy and proceed step by step from meta-issues to detailed reasoning. The following five activity levels would be characteristic of a sensor management system having a comprehensive functionality along these lines:

Level 4 (mission planning). This is the highest level of sensor management, responsible for deciding system-level tasks based on information generated internally or externally. It concerns meta-sensor management issues such as:

• Which services to perform (e.g. search, target tracking or target updating)?

• Which accuracy level (e.g. desirable error covariance) to aim at?

• How frequently to measure?
• On which area of the environment to focus?
• Which targets to select?
• How to rank the importance or priorities of the required tasks?

Mission planning plays the role of indirectly managing the behavior of the sensor system but does not deal with implementation details. Meta-reasoning is required here, to be conducted according to results from other data fusion functions. At the same time it must also provide access to human operators, enabling them to impose their special requests when necessary.

Level 3 (resource deployment). This management level is needed for surveillance purposes in a dynamic, uncertain environment. In such cases, extra sensing devices have to be deployed whenever necessary in order to upgrade sensing capabilities or to catch up with quick environment changes. This involves proactive or reactive planning of the sensor assets to be deployed in a scenario. More concretely, the level of resource deployment is responsible for answering questions such as:

• When are extra sensors required, and how many?
• Where to place the newly required sensors?

Level 2 (resource planning). The purpose of this level is to propose requests on individual sensors by deciding what jobs/actions are expected to be performed by them. Typical activities on this level are:

• sensor selection for multi-sensor multi-target tracking;

• sensor allocation for simultaneous classification of numerous objects;

• sensor cueing (sensor handing over, target acquisition by another aiding sensor);

• movement planning for mobile sensors and platforms;

• negotiation and cooperation between sensors in a decentralized sensor network.

Level 1 (sensor scheduling). This level has the role of setting up a timeline of commands for every sensor to carry out, based on sensor availability and capabilities, given the job/action requests from level 2. Each sensor is assigned a detailed schedule of what to do in each time


interval. This is a non-trivial problem considering the distinct characteristics of the jobs/actions to be performed, such as hard or fluid deadlines, as well as varying job/action priorities. As important solutions to scheduling we refer to such methods as brick packing [7], best first [7], genetic search [17], OGUPSA [78], and the enhanced version of OGUPSA [51], with its usage demonstrated in a sensor management simulation [50].
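To make the best-first flavor of scheduling concrete, the following sketch (our own hypothetical simplification, not the algorithm of [7] or OGUPSA) greedily assigns the highest-priority pending job to each free time slot and silently drops jobs whose deadlines have already passed:

```python
import heapq

def best_first_schedule(jobs, horizon):
    """Greedy best-first scheduling sketch: at each time slot, run the
    highest-priority pending job whose deadline has not yet passed.
    jobs: iterable of (name, priority, deadline); one slot per job."""
    # Max-heap on priority via negation.
    heap = [(-prio, deadline, name) for name, prio, deadline in jobs]
    heapq.heapify(heap)
    timeline = []
    for t in range(horizon):
        chosen = None
        while heap:
            _, deadline, name = heapq.heappop(heap)
            if deadline >= t:       # still feasible at slot t
                chosen = name
                break               # expired jobs are simply dropped
        if chosen is None:
            break                   # nothing left to schedule
        timeline.append((t, chosen))
    return timeline
```

A real scheduler would additionally handle job durations, sensor-specific capabilities, and fluid deadlines, which this one-slot-per-job sketch ignores.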

Level 0 (sensor control). At the lowest location in the hierarchy, this level is in charge of controlling every degree of freedom of the sensor under a given command. It involves definition of all parameters that specify the details for realization of that command. For example, a multi-functional radar accepts general commands such as search, target tracking or target updating. Each of these command categories, in turn, is equipped with a set of degrees of freedom. It is the task of the sensor control on level 0 to determine the concrete sensor parameters appropriate for carrying out commands submitted by sensor scheduling, in order to optimize the performance of the overall system.

The possible layers of sensor management discussed above are illustrated in Fig. 4, where typical issues are marked beside their corresponding levels. It should be noted, however, that this discussion of levels of sensor management is intended only for better understanding and more convenient description of the general policy of top-down problem solving by a sensor manager. It is not the intention of this paper to give a generic prescription for system design, recognizing that functional requirements on sensor management are always problem dependent.

We note that this five-layered sensor management procedure does not exclude the use of feedback mechanisms. Indeed, situation awareness explicitly delivered to level four represents all feedback information from data fusion. Although not directly indicated in Fig. 4, we understand that required fusion results should be forwarded to other levels of the top-down procedure to support control and decision making therein.

This top-down problem solving procedure for sensor management is a kind of process model, describing the sequence of steps to be performed. Therefore it has only a loose correspondence with the JDL function model. A set of resource management levels that more closely correspond to the JDL data fusion levels has been presented in [67], as an extension of the data fusion/resource management duality [11].

3. Principles of sensor placement

3.1. General description

As stated in the problem description in Section 1.4.1, sensor placement is viewed as a parameter optimization problem, possibly subject to constraints. Two basic points are of concern here, namely, the goal function for placement and the strategy to find optimal or near-optimal solutions. In order to keep pace with environment changes, e.g., the movements of targets, the optimization algorithms adopted must not impose a high computational load. Three optimization algorithms that have been employed for real-time sensor placement are gradient descent, used in [57,58], greedy local search in [77], and simulated annealing in [59]. Worth noting is the fact that both greedy search and simulated annealing have the sometimes-important merit of requiring no derivatives of the objective function in their search procedures. Various optimization criteria for sensor placement have been suggested. Generally they can be classified into short-term and long-term strategies, depending on whether future performance is taken into account in the objective function.

Short-term sensor placement seeks to optimize some performance measure of the fusion system at the current stage but does not consider its future behavior. In view of this, sensor positions are determined simply according to one of the following criteria:

• maximizing the probability of detection of the target [57,58],

• minimizing the standard deviation in the target estimates [58],

• maximizing the figure of merit for the sensor net [59].
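As an illustration of the derivative-free, greedy local search flavor mentioned above, the sketch below moves sensors step by step in whichever axis direction most increases the joint detection probability. The exponential-decay detection model is a hypothetical stand-in, not the sensor models of [57,58,77]:

```python
import math

def detect_prob(sensor, target, scale=1.0):
    """Hypothetical model: detection probability decays exponentially
    with the sensor-target distance."""
    return math.exp(-math.dist(sensor, target) / scale)

def joint_detect_prob(sensors, target):
    # Probability that at least one sensor detects the target.
    miss = 1.0
    for s in sensors:
        miss *= 1.0 - detect_prob(s, target)
    return 1.0 - miss

def greedy_place(sensors, target, step=0.5, iters=50):
    """Greedy local search: repeatedly apply whichever single-sensor
    move (one step along an axis) most improves the objective; stop
    at a local optimum. No derivatives of the objective are needed."""
    sensors = [list(s) for s in sensors]
    moves = [(step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)]
    for _ in range(iters):
        base = joint_detect_prob(sensors, target)
        best_gain, best_move = 0.0, None
        for i, s in enumerate(sensors):
            for dx, dy in moves:
                trial = [row[:] for row in sensors]
                trial[i] = [s[0] + dx, s[1] + dy]
                gain = joint_detect_prob(trial, target) - base
                if gain > best_gain:
                    best_gain, best_move = gain, (i, dx, dy)
        if best_move is None:       # no improving move: local optimum
            break
        i, dx, dy = best_move
        sensors[i][0] += dx
        sensors[i][1] += dy
    return sensors
```

Simulated annealing would differ only in occasionally accepting non-improving moves, which helps escape local optima at extra computational cost.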

Fig. 4. Top-down problem solving by a sensor manager.

Long-term sensor placement seeks to prolong the usage of deployed sensor(s) by taking care of possible future paths. Such an attempt complies with the general principle that sensor management should ‘‘value long-term


goals of survival and success, not just accuracy and identity’’ [54]. Two examples of long-term placement are described in [34,35,71] and [77], respectively. They both aim to find sensor locations such that the expected variance of the target state will be kept below a prescribed limit for as many consecutive time steps as possible after the present one. However, they differ in the state estimation methods used. The work by the FOI group applies Kalman filtering to identify target states, so that it is computationally more efficient. On the other hand, the method by Williams may be more generally applicable due to the employment of particle filtering methodology.

It bears mentioning that sensor placement in this context only refers to the issue of determining the positions where sensors are initially located. It does not involve planning the movements of mobile sensors after their deployment, which we believe is a kind of sensor behavior assignment rather than sensor deployment. By the same token, motion planning for movable sensors should be a function belonging to level two (resource planning) of the top-down procedure for sensor management proposed in Section 2.2. This is an under-addressed but challenging topic for future research.

3.2. Signal filtering related to sensor placement

In target tracking applications, the sensor placement algorithm is inherently related to the filtering algorithm used for state estimation. The most commonly used is the Kalman filter, which assumes linear system models with additive Gaussian noise. A more recently investigated alternative is the particle filter [27], employed by [58,77] for sensor placement in submarine tracking. The latter has the advantage of being able to handle any functional non-linearity and any distribution of system or measurement noise. Despite seeming quite different, both filtering approaches can be formulated within the general framework of recursive Bayesian estimation.

The Bayesian signal filter consists of essentially two stages, prediction and update, in order to construct the probability density function (pdf) of the state based on all available information. The prediction stage uses the system model to predict the state at the next time step, and the predicted state pdf is then modified in the update stage using the newly acquired measurement data. The update operation is an application of Bayes' theorem, which provides a mechanism to update prior knowledge in light of new evidence.

The goal is to derive the pdf of the current state $x_k$ given the full set of measurements $D_k = \{y_1, \ldots, y_k\}$. Suppose the required pdf $p(x_{k-1} \mid D_{k-1})$ of the preceding time step $k-1$ is available. Then, depending on the system model, the prior pdf of the state at time $k$ can be obtained by

$$p(x_k \mid D_{k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid D_{k-1})\, dx_{k-1}, \quad (1)$$

where $p(x_k \mid x_{k-1})$ is the probabilistic model of the state evolution, defined by the system equation and the known statistics of the system noise. Given a measurement at time $k$, the predicted prior pdf may be updated via Bayes' rule as:

$$p(x_k \mid D_k) = \frac{p(y_k \mid x_k)\, p(x_k \mid D_{k-1})}{\int p(y_k \mid x_k)\, p(x_k \mid D_{k-1})\, dx_k}. \quad (2)$$

Likewise, $p(y_k \mid x_k)$ is another probabilistic model, defined by the measurement equation and the known statistics of the measurement noise.

The recurrence relations (1) and (2) constitute the formal solution to the recursive Bayesian estimation problem. Unfortunately, analytic expressions of concrete results are only available under certain restrictions on the system and measurement models. For linear-Gaussian estimation problems, the required pdf remains Gaussian at each iteration of the filter, and the Kalman filter can be used as a rigorous, explicitly formulated solution to Eqs. (1) and (2) by propagating and updating the mean and variance of the state distribution. In other cases of a non-linear or non-Gaussian nature, however, there is no general analytic (closed form) expression available for the required pdf.
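For the scalar linear-Gaussian case, the closed-form solution of Eqs. (1) and (2) reduces to the familiar Kalman recursion over a mean and a variance. A minimal sketch, assuming a one-dimensional state and measurement:

```python
def kalman_step(x, P, y, F, Q, H, R):
    """One predict/update cycle of a scalar Kalman filter, the
    closed-form solution of Eqs. (1) and (2) for linear-Gaussian
    models: the pdf stays Gaussian, so only the mean x and the
    variance P need to be propagated."""
    # Prediction, cf. Eq. (1): push the Gaussian through the system model.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update, cf. Eq. (2): Bayes' rule with a Gaussian likelihood.
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (y - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

In the vector case the same recursion holds with matrix products and a matrix inverse in the gain, as in Eqs. (12)-(14) of Section 5.2.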

Particle filtering is indeed a functional approximate implementation of the general recursive Bayesian filter, utilizing a random-sample-based representation of the state pdf. Given a set of independent random samples $\{x_{k-1}(i) : i = 1, \ldots, N\}$ drawn from the pdf $p(x_{k-1} \mid D_{k-1})$, the particle filter is a mechanism for propagating and updating these samples to get a set of new values $\{x_k(i) : i = 1, \ldots, N\}$ which are approximately distributed as independent random samples of $p(x_k \mid D_k)$. In view of this, the equations of prediction and update in the original Bayesian filter need to be approximated accordingly to comply with the sample representation requirement. The basic strategies for prediction and update used in the standard form of particle filtering are given in the following.

Prediction. Considering the prior pdf $p(x_k \mid D_{k-1})$ to be approximated as

$$p(x_k \mid D_{k-1}) = \int p(x_k \mid x_{k-1})\, p(x_{k-1} \mid D_{k-1})\, dx_{k-1} \approx N^{-1} \sum_{i=1}^{N} p(x_k \mid x_{k-1} = x_{k-1}(i)), \quad (3)$$

its required sample set $\{x_k^*(i) : i = 1, \ldots, N^*\}$ can be generated by repeating the procedure below $N^*$ times:

(a) uniformly resample with replacement from the set of values $\{x_{k-1}(i) : i = 1, \ldots, N\}$;
(b) pass the resampled value through the system model to produce a sample $x_k^*(i)$ from the prior at time step $k$.


Note that $N^*$ is not necessarily greater than $N$.

Update. On receipt of the measurement $y_k$, evaluate a normalized weight, $q_i$, for each sample from the prior using the measurement model

$$q_i = \frac{p(y_k \mid x_k^*(i))}{\sum_{j=1}^{N^*} p(y_k \mid x_k^*(j))}. \quad (4)$$

This defines a discrete distribution over $\{x_k^*(i) : i = 1, \ldots, N^*\}$ with probability mass $q_i$ associated with sample $x_k^*(i)$. Now resample $N$ times from the discrete distribution to produce samples $\{x_k(i) : i = 1, \ldots, N\}$ such that $\mathrm{Prob}\{x_k(j) = x_k^*(i)\} = q_i$. In this way we obtain a set of samples that tend in distribution to the posterior pdf $p(x_k \mid D_k)$ as $N$ tends to infinity. The justification for the update phase of the particle filter was given in [64], where it was proved that Bayes' theorem can be implemented as a weighted bootstrap.
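The prediction and update steps above can be sketched in a few lines for a hypothetical scalar linear-Gaussian model (with the number of prediction samples taken equal to N for simplicity):

```python
import math
import random

def gauss_pdf(y, mean, var):
    """Gaussian density, used as the measurement likelihood."""
    return math.exp(-(y - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def bootstrap_step(samples, y, F=1.0, q_var=0.1, H=1.0, r_var=0.5):
    """One prediction/update cycle of the standard (bootstrap) particle
    filter, following steps (a)-(b) and Eq. (4) above. `samples`
    approximate p(x_{k-1}|D_{k-1}); the return value approximates
    p(x_k|D_k). The scalar model x' = F*x + noise, y = H*x + noise
    is a stand-in for whatever system is being tracked."""
    n = len(samples)
    # Prediction: (a) uniform resampling, (b) pass through system model.
    prior = [F * random.choice(samples) + random.gauss(0.0, math.sqrt(q_var))
             for _ in range(n)]
    # Update, Eq. (4): normalized measurement-likelihood weights.
    w = [gauss_pdf(y, H * x, r_var) for x in prior]
    total = sum(w)
    q = [wi / total for wi in w]
    # Resample n times from the weighted discrete distribution.
    return random.choices(prior, weights=q, k=n)
```

Because no linearity or Gaussianity is actually exploited, the system-model line and `gauss_pdf` can be swapped for arbitrary dynamics and likelihoods, which is exactly the flexibility noted in Section 3.2.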

A major drawback associated with the particle filter is its heavy computational demand, due to the necessity of simulating the behavior of a large cloud of particles. Only recently, with the availability of high speed computers, have applications of this methodology become possible. On the other hand, many techniques have been proposed to improve the computational performance and/or accuracy of the particle filter method, one interesting example among them being the Rao-Blackwellised version [20].

4. Observation task evaluation

Specification of the importance (priorities) of different observation tasks, based on the current tactical situation, is significant for adapting sensor operations to changing environments. Information about task priorities can be very useful in scheduling sensor actions and for negotiation among sensors in a decentralized architecture.

4.1. Evaluation based on fuzzy decision trees

Molina Lopez et al. [53] developed a symbolic reasoning process to infer numeric evaluations of defense surveillance tasks using fuzzy decision trees. Each node in the decision tree is a linguistic variable whose possible values are fuzzy subsets in the universe of discourse for its corresponding concept, and the relations between nodes are defined by fuzzy if–then rules. Starting from information provided by situation assessment and/or other data fusion processes, the inference engine proceeds through the tree by generating intermediate conclusions, until the task priority associated with the root node has been drawn. A tool based on parser generator technology was described in [69] for creating a fuzzy decision tree from rules (or grammar clauses).

A decision tree presented in [53] for search task priority is redrawn in Fig. 5 with small simplification. The root node of this tree is search task priority, which is directly affected by the nodes: Vulnerability of Sector, Danger of Sector, and Targets in the Last Search. Further, among the three concepts directly related to the root node, there are two intermediate conclusions (vulnerability and danger) whose values are to be deduced from leaf nodes.

Five fuzzy labels: very low, low, medium, high, and very high are used to characterize Vulnerability of Sector and Danger of Sector, while the concept Targets in the Last Search corresponds to another three fuzzy labels: normal, many, and too many. Taking into account every AND combination of these fuzzy labels, a rule set consisting of $5 \times 5 \times 3 = 75$ linguistic rules is available to describe the dependence of the search task priority on the three nodes directly related to it. A rule in this rule base is of the type:

$$R_i: \text{IF } (\text{vulnerability} = A_{i1}) \text{ and } (\text{danger} = A_{i2}) \text{ and } (\text{targets} = A_{i3}) \text{ THEN } (\text{Priority} = B_i), \quad i = 1, 2, \ldots, 75.$$

In the above example, $A_{i1}$, $A_{i2}$ are labels from {very low, low, medium, high, very high}, $A_{i3}$ is one label from {normal, many, too many}, and $B_i$ is a fuzzy set for search task priority with its membership function $B_i(y)$ defined

Fig. 5. A decision tree to determine the value of search task priority (modified from [53]).


on the domain of priority degrees $Y$. Under the Mamdani model of fuzzy reasoning, the output fuzzy set $F_i$ inferred by the $i$th rule is

$$F_i(y) = s_i \wedge B_i(y), \quad (5)$$

where $s_i$ denotes the firing strength of the $i$th rule, defined for given crisp values of vulnerability, danger, and targets by

$$s_i = A_{i1}(\text{vulnerability}) \wedge A_{i2}(\text{danger}) \wedge A_{i3}(\text{targets}). \quad (6)$$

The output fuzzy sets $F_i$ inferred by the individual rules are then aggregated by an OR operator, resulting in an overall output fuzzy set $F$ for search task priority, where

$$F(y) = \bigvee_{i=1}^{75} F_i(y) = \bigvee_{i=1}^{75} \left[ s_i \wedge B_i(y) \right]. \quad (7)$$

Finally, the crisp value of search task priority is obtained by defuzzification of the output fuzzy set $F$, deriving a crisp representation of it. Two methods are commonly available for this purpose. The first is calculation with the Center of Area (COA) method, i.e.,

$$y_{COA} = \int_Y y\, F(y)\, dy \Big/ \int_Y F(y)\, dy. \quad (8)$$

The other alternative is to define the value of priority as the Mean of Maxima of the membership function $F(y)$. That is,

$$y_{MOM} = \sum_{y_j \in G} y_j \Big/ \mathrm{Card}(G), \quad (9)$$

where $G$ is a subset of $Y$ consisting of those elements in the domain which have the maximum value of $F(y)$; thus

$$G = \{y \in Y \mid F(y) = \max_y F(y)\}. \quad (10)$$

At this point we have outlined all the necessary steps to derive the priority value based on the three nodes directly related to the root node. The intermediate conclusions like Vulnerability of Sector and Danger of Sector in the decision tree can be inferred from their respective leaf nodes using the same procedure as above.
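The inference chain of Eqs. (5)–(8) can be sketched as follows, using min for AND, max for OR-aggregation, and a discretized centre-of-area defuzzification. The two-rule base and the triangular membership functions are hypothetical miniatures standing in for the 75-rule system of [53]:

```python
def tri(a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return mu

# Hypothetical two-rule miniature of the 75-rule base.
low, high = tri(-0.5, 0.0, 0.5), tri(0.5, 1.0, 1.5)
rules = [
    # (A_i1: vulnerability, A_i2: danger, A_i3: targets, B_i: priority)
    (high, high, high, high),
    (low, low, low, low),
]

def infer_priority(vul, dan, tar, n=201):
    """Mamdani inference: firing strengths s_i by Eq. (6) (min),
    rule outputs by Eq. (5), OR-aggregation by Eq. (7) (max), and
    COA defuzzification, Eq. (8), over a discretized Y = [0, 1]."""
    num = den = 0.0
    for i in range(n):
        y = i / (n - 1)
        # F(y) = max_i min(A_i1(vul), A_i2(dan), A_i3(tar), B_i(y)).
        F = max(min(A1(vul), A2(dan), A3(tar), B(y))
                for A1, A2, A3, B in rules)
        num += y * F
        den += F
    return num / den if den > 0.0 else 0.5
```

Note that min over all four memberships equals $\min(s_i, B_i(y))$, so Eqs. (5) and (6) collapse into a single expression inside the loop.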

4.2. Evaluation using neural networks

Komorniczak and Pietrasinski [40] proposed to assign target priorities using a neural network, which accepts features of detected targets as its inputs and offers importance levels of these targets as its outputs. Given sufficient learning examples, the neural network can be built via offline training based on a learning algorithm such as back-propagation. The authors utilized a rather simple neural network for this purpose, as shown in Fig. 6, where each input is multiplied by the corresponding weight $W_i$ and these products are summed up, yielding a net linear output $u$. The activation function used to compute target importance levels is of the form:

$$f(u) = 1/(1 + \exp(-\beta u)) \quad (11)$$

with $\beta$ being a parameter determining the slope of the function.

However, to create the required training set, an experienced human operator is needed to evaluate beforehand the target importance levels with respect to a large number of various situations. An issue could be whether it is possible or convenient to collect sufficient examples for the network training.

In addition, in our view, identifying significant features as inputs before neural network modeling could be advantageous in order to filter out irrelevant features, alleviating computational demands and reducing the risk of overfitting in data-based training. This motivates including the dashed block feature selection in Fig. 6 as a potentially desirable component, although this issue was not addressed in the cited paper.
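A minimal sketch of the network of Fig. 6, a single neuron applying the sigmoid of Eq. (11) to a weighted feature sum; the weights are assumed to have been obtained beforehand by offline back-propagation training:

```python
import math

def target_priority(features, weights, beta=1.0):
    """Single-neuron priority assignment: the net linear output u is
    the weighted sum of target features, squashed into (0, 1) by the
    sigmoid of Eq. (11). Weights are assumed pre-trained offline."""
    u = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-beta * u))
```

With zero weights the output is 0.5, the indifference point of the sigmoid; larger weighted sums push the priority monotonically toward 1.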

4.3. Evaluation based on goal lattices

A goal-lattice-based methodology was addressed in [32,48] with the purpose of quantifying the relative contribution of actual observation tasks in the context of a complex mission. In doing this, one has to identify all goals relevant to that mission and then define an ordering relation among them, leading to the construction of a partially ordered set. The ordering relationship used is a precedence ordering induced by the simple

Fig. 6. Neural network for (target) priority assignment.


statement ‘‘this goal is necessary in order for the other goal to be satisfied’’. The established partially ordered set can be represented as a lattice, allowing for straightforward apportionment of goal values from higher, relatively abstract goals to lower, increasingly concrete goals. At each layer of the lattice, each goal receives a value equal to the sum of the values passed down from the (higher) goals in which it is included. On the other hand, a goal also distributes its value among the (lower) goals that it includes. In particular, the bottom layer of the lattice comprises the nodes for actual observation tasks, with the apportioned values reflecting their relative importance for accomplishing the mission. A web browser implementation of a goal lattice constructor has recently been reported in [30].

An illustrative example was presented in [48], applying the goal lattice approach to an Air Force mission in an ‘‘in harm's way’’ scenario. Seventeen goals relevant to the ‘‘in harm's way’’ situation were identified, with associated interrelations among them, as shown in Table 1. The resulting goal lattice is depicted in Fig. 7, where the value of each goal is uniformly distributed among its included goals, presuming that all included goals contribute equally to their including goal. The bottom layer corresponds to three fundamental observation tasks: tracking, identifying, and searching, with importance degrees of 0.36, 0.46, and 0.18, respectively. As a further aside, the 17 goals listed in Table 1 constitute a subset of applicable Air Force Doctrine goals. In publications of the United States Air Force, a total of 90 Air Force mission goals were defined within the six mission areas Offensive Counterair, Defensive Counterair, Air Interdiction, Battlefield Air Interdiction, Close Air Support, and Suppression of Enemy Air Defenses. A global goal

lattice for the whole set of mission goals was also established in [48].

One major difficulty involved with the goal lattice approach lies in the distribution of goal values. A uniform distribution as shown in Fig. 7 is easy to implement but may be a gross distortion of the true nature of the underlying problem, since it is likely that some included goals contribute more or are more important for the accomplishment of the including goal than others. Weights of arcs in the lattice reflect the preferences of some goals over others, and these preferences can change between different phases of a mission. It is doubtful whether it is realistic for an operator to define an exact distribution function for every goal and in every mission stage, particularly in real time. Perhaps (fuzzy) expert system technology could be incorporated to automate the process of determining arc weights during a mission, in response to changes in the information provided by data fusion. However, the issue of how to define distribution functions for individual goals was not addressed by the authors.
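The uniform apportionment scheme can be sketched as a single top-down pass over the lattice; the data structure and function names here are our own, not from [32,48]:

```python
from collections import deque

def apportion(includes, top_goals):
    """Distribute goal values down a goal lattice. `includes` maps each
    goal to the (lower) goals it includes; the top goal(s) share a total
    value of 1, and every goal splits its value uniformly among its
    included goals (the uniform scheme of Fig. 7)."""
    value = {g: 0.0 for g in includes}
    for g in top_goals:
        value[g] += 1.0 / len(top_goals)
    # Count how many goals include each goal, so that a goal passes its
    # value down only after receiving shares from all including goals.
    indeg = {g: 0 for g in includes}
    for kids in includes.values():
        for c in kids:
            indeg[c] += 1
    queue = deque(g for g in includes if indeg[g] == 0)
    while queue:
        g = queue.popleft()
        kids = includes[g]
        share = value[g] / len(kids) if kids else 0.0
        for c in kids:
            value[c] += share
            indeg[c] -= 1
            if indeg[c] == 0:
                queue.append(c)
    return value
```

Applied to the goal set of Table 1 with goal 1 as the single top goal, this pass reproduces the importance degrees 0.36, 0.46, and 0.18 for tracking, identifying, and searching quoted above. A non-uniform weighting would only change the `share` line.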

5. Measurement policies for information collection

What measurement policies to follow for the collection of adequate information is an issue which needs to be taken into account by the sensor manager, involving the type (pattern) of measurement to be used, the frequency of observation of targets, the strategy of target detection by sensors, etc. It was suggested in [31] that the proper measurement type may be determined on the basis of information requests posed by mission goals or human operators. The term information instantiation

Table 1
‘‘In Harm's Way’’ goals

Goal number   Goal                                                  Included goals
1             To obtain and maintain air superiority                2, 3, 4, 5
2             To minimize losses                                    6, 7, 8
3             To minimize personnel losses                          6, 7, 8
4             To minimize weapons expenditure                       6, 8
5             To seize the element of surprise                      8
6             To avoid own detection                                9, 10
7             To minimize fuel usage                                10, 11
8             To minimize the uncertainty about the environment     12, 13
9             To navigate                                           15, 16
10            To avoid threats                                      15, 16
11            To route plan                                         15, 17
12            To maintain currency of the enemy order of battle     14, 16
13            To assess state of the enemy's readiness              14
14            To collect intelligence                               15, 16, 17
15            To track all detected targets
16            To identify targets
17            To search for enemy targets

Fig. 7. Lattice of the ‘‘In Harm’s Way Goals’’ (cited from [48]).


was used to signify a collection of methods which map information needs to measurement functions. The problem of sensor-target detection is another significant topic related to measurement policies in the context of target searching. Inherently, a strategy to detect an unknown target can be viewed as a measurement sequencing process under uncertainty, which has been addressed by some researchers using stochastic dynamic programming and reinforcement learning (RL) techniques, see [8,14,47]. In the following, we will detail sensor measurement policies in terms of measurement pattern, target observation frequency, and target detection strategy, respectively.

5.1. Which measurement pattern to use

In the case of search, the characteristics of the various measurement patterns include high- or low-bearing resolution, high- or low-range resolution, direct velocity measurement (Doppler), or any combination of these capabilities. A general idea proposed by [31] is to determine the measurement pattern in two steps. The first step is to down-select from all available measurement options that can be performed by the sensor system to the admissible ones, subject to constraints. Then a local optimization criterion can be applied in the second step to make a choice from the set of admissible solutions, maximizing the incidental information value of the pattern to be performed.

In response to requests for object identification, table look-up was suggested in [31] for deciding what pattern of measurement will enable the desired identification. For instance, if only the type of target is needed, it is sufficient to use electronic support measures (ESM) to observe signals emanating from the platform and then classify the target in terms of the electronic order of battle (EOB). Facing the more detailed task of hull-to-emitter correlation, however, we would have to arrange a longer observation period due to certain characteristics of ESM sensors. Such prior knowledge, along with numerous identification techniques, their applicability, and operational constraints, has to be stored in a predetermined list for real-time usage during decision making.

5.2. When to observe a tracked target

When to make observations is an important issue affecting the accuracy of target tracking. Since the uncertainty of the target position increases in the absence of measurements, one has to update the track before the growth of uncertainty exceeds an acceptance limit. Assuming the use of a Kalman filter as the tool for state estimation, the error covariance matrix can be adopted as an indicator of the state variance. The problem is therefore to find the latest time for the next measurement which still results in an updated covariance matrix falling within the specified constraint. According to [31], the maximum number of intervals between two successive measurements can be derived as follows.

The starting point of the analysis is the error covariance prediction equation

$$P(k \mid k-1) = F(k-1) P(k-1 \mid k-1) F(k-1)^T + Q(k-1) \quad (12)$$

and the error covariance update equation

$$P(k \mid k) = [I - K(k) H(k)]\, P(k \mid k-1), \quad (13)$$

where

$$K(k) = P(k \mid k-1) H(k)^T \left[ H(k) P(k \mid k-1) H(k)^T + R(k) \right]^{-1} \quad (14)$$

with $F(k)$, $H(k)$ denoting the state evolution matrix and the system observation matrix, respectively. $Q(k)$ is the system noise covariance and $R(k)$ the measurement noise covariance.

Provided no measurement was performed at time $k-1$, the observation matrix $H(k-1)$ becomes null, so that we have

$$P(k-1 \mid k-1) = P(k-1 \mid k-2) \quad (15)$$

and

$$P(k \mid k-1) = F(k-1) P(k-1 \mid k-2) F(k-1)^T + Q(k-1). \quad (16)$$

Similarly to Eq. (12), the predicted error covariance at time $k-1$ can be written as

$$P(k-1 \mid k-2) = F(k-2) P(k-2 \mid k-2) F(k-2)^T + Q(k-2). \quad (17)$$

Substituting (17) into (16) results in

$$P(k \mid k-1) = F(k-1) F(k-2) P(k-2 \mid k-2) F(k-2)^T F(k-1)^T + F(k-1) Q(k-2) F(k-1)^T + Q(k-1). \quad (18)$$

Supposing now that $n_t$ intervals have been skipped without observation until time step $k$, we can reprocess Eq. (18) backwards to step $k - n_t$, obtaining the following recursive expression:

$$P(k \mid k-1) = \left[ \prod_{j=1}^{n_t} F(k-j) \right] P(k-n_t \mid k-n_t) \left[ \prod_{j=1}^{n_t} F(k-n_t+j-1)^T \right] + \sum_{j=1}^{n_t-1} \left[ \prod_{m=1}^{n_t-j} F(k-m) \right] Q(k-n_t+j-1) \left[ \prod_{m=1}^{n_t-j} F(k-n_t+j+m-1)^T \right] + Q(k-1). \quad (19)$$


Given that the upper limit of the updated error covariance is known, the allowed maximum covariance before update is derived from

$$P(k \mid k)^{-1} = P(k \mid k-1)^{-1} + H(k)^T R(k)^{-1} H(k). \quad (20)$$

Finally, having obtained the allowed maximum $P(k \mid k-1)$ and the last updated error covariance matrix $P(k-n_t \mid k-n_t)$, we can acquire the maximum value of $n_t$ from Eq. (19).

In our opinion, identification of the maximum time between successive track updates while maintaining the desired uncertainty level is the most valuable contribution of the paper [31]. The implication is that observation of the target at every time step is not a compelling demand. The question of when it is necessary to make the next measurement is answered by Eq. (19). This provides rigorous and useful guidance for reducing sensor resource usage, which is of particular significance for target-rich but sensor-poor applications.
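The reasoning can be illustrated with a scalar sketch (hypothetical time-invariant F and Q, a variance in place of the covariance matrix): propagate the prediction recursion without updates and count the intervals for which the predicted variance stays within the allowed maximum:

```python
def max_skippable_intervals(P0, F, Q, P_limit, n_max=1000):
    """Scalar analogue of the use of Eq. (19): starting from the last
    updated variance P0, apply the prediction-only step P <- F*P*F + Q
    repeatedly and return the largest number of intervals for which the
    predicted variance still does not exceed P_limit."""
    P, n = P0, 0
    while n < n_max:
        P = F * P * F + Q          # one prediction step, no update
        if P > P_limit:
            break
        n += 1
    return n
```

In the matrix case the same loop applies Eq. (16) repeatedly, with the acceptance test expressed on the covariance matrix (e.g. on its trace or largest eigenvalue) instead of a scalar comparison.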

5.3. In what sequence to detect an unknown target

In the target detection problem, one maintains a vector of probability estimates $[p_1(t), p_2(t), \ldots, p_k(t), \ldots]$ for all cells in the frame, with $p_k(t)$ denoting the conditional probability of a target being located in cell $k$ given all the previous measurements made in that cell. At every stage $t$ a noisy measurement is made on a certain cell $k$, reporting whether a detection was made or not, and the prior probability estimate $p_k(t-1)$ for cell $k$ is thereby updated based on Bayes' theorem by

$$p_k(t) = \begin{cases} \dfrac{(1 - P_{MD}) \cdot p_k(t-1)}{(1 - P_{MD}) \cdot p_k(t-1) + P_{FA} \cdot (1 - p_k(t-1))}, & \text{if reporting detection;} \\[2ex] \dfrac{P_{MD} \cdot p_k(t-1)}{P_{MD} \cdot p_k(t-1) + (1 - P_{FA}) \cdot (1 - p_k(t-1))}, & \text{if reporting no detection,} \end{cases} \quad (21)$$

where $P_{FA} = P(\text{false alarm}) = P(\text{detect} \mid \text{absent})$ and $P_{MD} = P(\text{missed detection}) = P(\text{not detect} \mid \text{present})$.

Combined with this problem, we need a target detection strategy which determines the sequence of cells to measure, such that the sensor(s) can focus on a proper area at every stage to get the best final result. So far, the following three strategies have been proposed for this purpose.

1. Direct detection. This is an uninformed detection pattern, which advances through the whole frame in a predetermined cell sequence. Upon completing the frame, the procedure of measurements is repeated in the same order as before. In order to ensure that equal attention is dedicated to each cell, the total number of measurements is usually chosen to be a multiple of the number of cells in the frame, so that incomplete frames of measurements are obviated. Indeed, because of its low efficiency, this mode of detection is only used as a default mode in some field systems.

2. Index rule detection. This strategy chooses the most likely cell (i.e. the cell with the highest probability estimate) to be measured at every stage. At the beginning, all cells are assumed to be equally probable, so a random cell is selected for measurement to initialize the detection procedure. Later, with probability updates per Eq. (21), measurements are expected to gradually concentrate on the target cell, whose probability estimate comes to dominate the others' due to repeated sensor reports of target detection there. Castanon [14] analyzed this detection strategy using concepts from stochastic dynamic programming and concluded that the probability of finding the target is maximized by a simple index rule under the condition of symmetric measurement densities. Since the index rule makes use of the available information, the time required to correctly locate a target is shorter than with direct detection.

3. RL detection. As a machine learning technique, reinforcement learning (RL) attempts to arrive at optimal system performance by trial and error through interactions with the environment. There may be several ways of using RL techniques to learn to conduct target detection tasks. The scheme proposed in [47] employs an RL-detection network for each cell to evaluate the reward of making a measurement there. The inputs to the one-cell network include the probability estimate p_k(t) for the cell and the number of measurements that have been performed therein. Based on the networks' outputs, this strategy selects the cell with the largest predicted reward to receive the next measurement. At the beginning, the network's weights are set randomly, so its predicted reward values are incorrect. But upon taking a new measurement, one can acquire an updated probability distribution of the target location using Bayes' rule and then calculate the expected reward value of this sensory action in terms of a predefined metric. The difference between the expected reward value and the one predicted by the network provides useful information for correcting the weights of the network by means of a backpropagation algorithm.
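The index rule (strategy 2) can be sketched in a few lines. The simulation below is our own illustration: to keep the example deterministic we assume an idealized noise-free sensor report (detection if and only if the target is in the measured cell), while the Bayes update of Eq. (21) still uses nonzero P_FA and P_MD; all names and parameter values are ours.

```python
def update_cell_prob(p, detection, P_FA, P_MD):
    # Bayes update of a cell's target probability, Eq. (21)
    if detection:
        num, den = (1 - P_MD) * p, (1 - P_MD) * p + P_FA * (1 - p)
    else:
        num, den = P_MD * p, P_MD * p + (1 - P_FA) * (1 - p)
    return num / den

def index_rule_search(n_cells, target_cell, n_steps, P_FA=0.1, P_MD=0.2):
    """Index-rule detection: always measure the currently most likely cell."""
    probs = [1.0 / n_cells] * n_cells        # uniform prior over cells
    for _ in range(n_steps):
        k = probs.index(max(probs))          # most likely cell (ties -> lowest index)
        report = (k == target_cell)          # idealized, noise-free report
        probs[k] = update_cell_prob(probs[k], report, P_FA, P_MD)
    return probs.index(max(probs))           # cell believed to hold the target

print(index_rule_search(n_cells=3, target_cell=2, n_steps=6))  # -> 2
```

Measurements sweep the cells until the target cell is hit once, after which the index rule keeps revisiting it and its probability estimate grows towards one.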

6. Sensor resource allocation

Given that the tasks of observation have been postulated at the system level, the sensor manager should be

176 N. Xiong, P. Svensson / Information Fusion 3 (2002) 163–186


able to distribute available sensing resources across them to best satisfy the information requests as a whole. Frequently, an observation task cannot be accomplished by a single sensor, and thus a combination of sensors must be applied to it, giving rise to the sensor selection problem. In other cases, such as multiple-object classification, we may need dynamic allocation of sensors to achieve the best identity accuracy after a certain number of measurement stages. In the sequel, existing approaches to this subject are summarized from a methodological viewpoint.

6.1. Search-based approaches to sensor selection

The selection of a sensor subset to be applied to a given task can be considered as a search problem in a combinatorial space. The goal is to find the most appropriate solution among all possible sensor combinations. Hence a key issue is how to evaluate trials in the problem space.

Sensors' suitability was assessed in [25] using fuzzy set theory to establish a sensor preference graph, based on which the best sensor subset for a given task can be explored. Molina Lopez et al. [52] introduced some heuristics associated with nodes of the search tree for sensor subset evaluation in terms of sensor load and sensor suitability. The sensor load constitutes an estimate of the sensor resources required to accomplish a task, while the sensor's suitability to perform a task is defined as a function of the necessity of fulfilling the task and the ability of the sensor to perform it. The objective function for a search algorithm should be constructed so as to reflect a good balance between the two factors.

In multi-sensor multi-target tracking applications, the problem of sensor selection simply means deciding suitable sensor combinations to be applied to the measurement of different targets. Application of all sensors to all targets is usually not possible due to constraints on sensing and/or computational resources. Kalandros and Pao [37] proposed the so-called covariance control strategy for the purpose of maintaining a desired covariance level on each target while reducing system resource requirements. The covariance controller, as shown in Fig. 8, is responsible for determining a sensor combination for each target to meet the desired covariance level, while the sensor scheduler prioritizes the sensing requests from the covariance controller and issues commands regarding which actions are to be executed during every scanning interval.

Assume that the target and its measurements can be described by the equations:

x(k) = F(k-1) x(k-1) + w(k-1),   (22)

y_j(k) = H_j(k) x(k) + v_j(k).   (23)

In the above expressions, x(k) is the current state of the target; F(k), H_j(k) are system matrices; and y_j(k) denotes the measurements of the target from the jth sensor in the sensor combination Φ_i (containing k_{Φ_i} sensors). w(k) and v_j(k) represent the system noise and measurement noise, respectively, both of which are assumed to have zero-mean, white, Gaussian probability distributions. Based on the above assumptions, the sequential Kalman filter performs a separate filtering step for each sensor in the combination and then propagates its estimate to the next filter. Therefore we write:

x̂_1(k|k) = F(k-1) x̂(k-1|k-1) + K_1(k) [y_1(k) - H_1(k) F(k-1) x̂(k-1|k-1)],

x̂_j(k|k) = x̂_{j-1}(k|k) + K_j(k) [y_j(k) - H_j(k) x̂_{j-1}(k|k)],   j = 2, ..., k_{Φ_i},   (24)

x̂(k|k) = x̂_{k_{Φ_i}}(k|k),

where

P(k|k-1) = F(k-1) P(k-1|k-1) F(k-1)^T + Q(k-1)

and

K_1(k) = P(k|k-1) H_1(k)^T [H_1(k) P(k|k-1) H_1(k)^T + R_1(k)]^{-1},

K_j(k) = P_{j-1}(k|k) H_j(k)^T [H_j(k) P_{j-1}(k|k) H_j(k)^T + R_j(k)]^{-1},   j = 2, ..., k_{Φ_i},   (25)

with Q(k) and R_j(k) denoting the system noise covariance and the measurement noise covariance of the jth sensor (in the sensor combination), respectively.

The updating of the state covariance for each filter is performed by

P_1(k|k) = (I - K_1(k) H_1(k)) P(k|k-1),

P_j(k|k) = (I - K_j(k) H_j(k)) P_{j-1}(k|k),   j = 2, ..., k_{Φ_i},

P(k|k) = P_{k_{Φ_i}}(k|k).   (26)

Alternatively, the covariance of the state estimates can be calculated in a single step [4] by:

Fig. 8. Covariance control for sensor selection (modified from [37]).



P(k|k)^{-1} = P(k|k-1)^{-1} + Σ_{j=1}^{k_{Φ_i}} H_j(k)^T R_j(k)^{-1} H_j(k).   (27)
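As a sanity check on the two formulations, the scalar sketch below (our own illustration; the sensor parameters are assumed values) confirms that folding in one sensor at a time per Eqs. (25) and (26) yields the same posterior covariance as the single-step information form of Eq. (27):

```python
def sequential_covariance(P_pred, sensors):
    """Sequential Kalman covariance update (scalar analogue of Eqs. (25)-(26)):
    fold in one sensor at a time.  `sensors` is a list of (H, R) pairs."""
    P = P_pred
    for H, R in sensors:
        K = P * H / (H * P * H + R)   # scalar Kalman gain, Eq. (25)
        P = (1.0 - K * H) * P         # covariance update, Eq. (26)
    return P

def information_covariance(P_pred, sensors):
    """Single-step information form (scalar analogue of Eq. (27))."""
    info = 1.0 / P_pred + sum(H * H / R for H, R in sensors)
    return 1.0 / info

sensors = [(1.0, 1.0), (1.0, 2.0)]            # (H_j, R_j) for two assumed sensors
print(sequential_covariance(2.0, sensors))    # -> 0.5
print(information_covariance(2.0, sensors))   # -> 0.5
```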

Defining J_i = Σ_{j=1}^{k_{Φ_i}} H_j(k)^T R_j(k)^{-1} H_j(k) as the sensor information gain corresponding to Φ_i, the demand on covariance requires that the value of J_i be as close to ∇P as possible, where

∇P = P_d(k)^{-1} - P(k|k-1)^{-1}   (28)

with P_d(k) being the desired covariance for the target. As the J_i matrix for each sensor combination can be computed offline and stored in a library, only one matrix inverse, P(k|k-1)^{-1}, needs to be calculated in each scan. On the other hand, the number of sensors applied to a target should be kept small in order to avoid an excessive computational burden. Considering both factors concurrently, the following three optimization objectives were suggested in [37] for the covariance controller to assign a desirable sensor combination to a target.

1. Eigenvalue/sensors. Use the sensor combination with the smallest number of sensors while maintaining all eigenvalues of the matrix J_i - ∇P positive. Selecting the fewest possible sensors ensures elimination of redundant sensor allocations, which was stressed in [54] as one of the general principles of sensor management.

2. Matrix norm. Use the sensor combination which minimizes the norm of the covariance error P_d(k) - P(k|k). The argument for this optimality criterion is to regard positive eigenvalues of the covariance error as excess resources applied to a target, and negative eigenvalues as insufficient resources applied to that target. However, finding the minimum norm of the covariance error does not guarantee that the resulting covariance P(k|k) will always stay within the desired limit.

3. Norm/sensors. Use the sensor set with the fewest sensors while keeping the norm of the covariance error within a predefined boundary.
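In one dimension, objective 1 collapses to a simple inequality: "all eigenvalues of J_i - ∇P positive" becomes J_i ≥ ∇P, and "fewest sensors" means enumerating subsets by size. The sketch below is a scalar illustration of ours (sensor parameters and variances are assumed values, not from [37]):

```python
from itertools import combinations

def select_fewest_sensors(P_pred, P_desired, sensors):
    """Scalar sketch of the 'eigenvalue/sensors' objective of covariance
    control: pick the smallest sensor subset whose information gain
    J_i = sum(H^2/R) meets the demand grad_P = 1/P_d - 1/P(k|k-1)."""
    grad_P = 1.0 / P_desired - 1.0 / P_pred       # Eq. (28), scalar case
    for size in range(1, len(sensors) + 1):       # smallest subsets first
        for subset in combinations(range(len(sensors)), size):
            J = sum(sensors[i][0] ** 2 / sensors[i][1] for i in subset)
            if J >= grad_P:                       # scalar analogue of J_i - grad_P >= 0
                return subset
    return None                                   # demand cannot be met

# Three assumed sensors as (H, R) pairs; desired variance 0.8 from prediction 2.0.
sensors = [(1.0, 1.0), (1.0, 2.0), (1.0, 4.0)]
print(select_fewest_sensors(2.0, 0.8, sensors))   # -> (0,)
```

A single sensor with gain 1.0 already meets the demand ∇P = 0.75 here, so no larger subset is considered.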

A preferred merit of the covariance control strategy is that it decomposes the sensor allocation problem into independent sub-problems for individual targets, each dealing with target-specific covariance goals. If the desired covariance changes for a target, then only the sensor set for that target needs to be reassigned. However, such a strategy is based on the assumption that sequential Kalman filtering is used to estimate the states of targets, and it is therefore not applicable to cases where other filtering techniques, such as particle filtering, are involved. Additionally, the sensing requests from the covariance controller occasionally have to be delayed by the sensor scheduler due to resource constraints. The influence of possible execution delays on tracking performance remains for further study.

Once the search objective for a given task has been established, the second important issue for sensor selection is to determine which search algorithm to employ. The simplest approach, as practiced in [52], is to examine every sensor combination that is capable of performing a given task before deciding on the best one. This mechanism was termed the global search algorithm in [38], and its computational complexity was given as O(2^{N_S} n^3) for target tracking problems, where N_S denotes the total number of available sensors and n is the size of the state vector of the target. In order to avoid the prohibitively high computational burden associated with global search, Kalandros and Pao [38] discussed two heuristic search algorithms that reduce the computational demand at the cost of possibly choosing a non-optimal sensor combination:

1. Greedy search. Pick the best sensor into the combination at each iteration, until no improvement of the evaluation value can be acquired. The computational complexity of this algorithm is at most O((n^3/2)(N_S^2 + N_S)) for target tracking applications.

2. Randomization and super-heuristics. Begin with a base sensor combination (usually obtained with a priori knowledge or heuristic methods) and then generate alternative solutions via random perturbations of the initial combination. As a descendant of the super-heuristics technique introduced in [42], this search algorithm requires random perturbations to be performed on a non-uniform distribution, so as to increase the chance of finding a near-optimal combination in a few trials. Heuristic information, such as frequency of selection and the solution from the greedy algorithm, can be used to bias the random search towards those sensor combinations that are more likely to be good candidates for a given task.
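The greedy heuristic can be sketched concisely. The scalar illustration below is ours (it uses the matrix-norm objective of covariance control in one dimension as the evaluation function; all parameter values are assumed):

```python
def greedy_sensor_search(P_pred, P_desired, sensors):
    """Greedy heuristic for sensor selection: repeatedly add the sensor that
    most reduces the (scalar) covariance error |P_d - P(k|k)|, stopping when
    no single addition improves it.  `sensors` is a list of (H, R) pairs."""
    def posterior(info_gain):
        return 1.0 / (1.0 / P_pred + info_gain)    # information-form update

    chosen, info = [], 0.0
    best_err = abs(P_desired - posterior(info))
    while True:
        best_i = None
        for i in range(len(sensors)):
            if i in chosen:
                continue
            H, R = sensors[i]
            err = abs(P_desired - posterior(info + H * H / R))
            if err < best_err:                     # strict improvement only
                best_err, best_i = err, i
        if best_i is None:
            return chosen                          # no further improvement
        chosen.append(best_i)
        H, R = sensors[best_i]
        info += H * H / R

sensors = [(1.0, 1.0), (1.0, 2.0), (1.0, 4.0)]
print(greedy_sensor_search(2.0, 0.8, sensors))     # -> [0]
```

Each iteration costs one pass over the remaining sensors, which is where the O(N_S^2) factor in the stated complexity comes from.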

6.2. Information-theoretic approaches to sensor selection

The idea of using information theory in sensor management was first proposed in [33]. From the information-theoretic point of view, sensors are applied to observe the environment in order to increase the information about (or reduce the uncertainty in) the state of the world. It is the task of the sensor management system to make a reasonable allocation of sensing assets among various tasks such that the greatest possible amount of information is obtained at every measurement opportunity. In surveillance scenarios, information is gained when localizing a target or increasing the accuracy of the state estimate of a target being tracked. Observation of the target by sensor(s) enables the probability density distribution of the target location estimate to be updated, reducing the uncertainty about the target location. To quantify the information gained (uncertainty reduced) about a target due to sensor observation, two measures are available, based on Shannon's entropy and Kullback–Leibler's cross-entropy, respectively.

6.2.1. Information gain based on Shannon's entropy

According to the information theory of Shannon, the entropy of a random process is computed by

H_x = -K Σ_i p(X_i) log p(X_i)   for the discrete case

    = -K ∫ p(X) log p(X) dX   for the continuous case   (29)

with K being any positive constant and p(·) denoting the probability density (mass) function in the continuous (discrete) case. In particular, for the n-variate normal distribution, Shannon's entropy (with K = 1) becomes:

H_x = (n/2) log(2πe) + (1/2) log(|P|),   (30)

where |P| is the determinant of the covariance matrix P.

Interpreting this entropy as a measure of uncertainty, information can be quantified as the difference in entropy between two given probability distributions of a random process. In view of this, we express the information gain, I, achieved on a target per measurement as

I = H_{before observation} - H_{after observation}.   (31)

Further, under the assumption of a normal distribution of the target state, the information gain in Eq. (31) becomes

I(P_2, P_1) = [(n/2) log(2πe) + (1/2) log(|P_1|)] - [(n/2) log(2πe) + (1/2) log(|P_2|)]

            = (1/2) log(|P_1| / |P_2|),   (32)

where P_1 and P_2 are the covariance matrices before and after the measurement, respectively.
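Eqs. (30)–(32) reduce to a few lines of code. In the scalar sketch below (our own illustration, with assumed variances) the constant (n/2) log(2πe) term cancels, so the gain depends only on the ratio of covariance determinants:

```python
import math

def shannon_entropy_gaussian(det_P, n=1):
    """Entropy of an n-variate Gaussian with covariance determinant det_P, Eq. (30)."""
    return 0.5 * n * math.log(2 * math.pi * math.e) + 0.5 * math.log(det_P)

def information_gain(det_P1, det_P2, n=1):
    """Entropy reduction when the covariance determinant shrinks from
    det_P1 (before measurement) to det_P2 (after), Eqs. (31)-(32)."""
    return 0.5 * math.log(det_P1 / det_P2)

# A measurement shrinking a scalar track variance from 2.0 to 0.5 yields
# log(2) nats of information, whichever way it is computed:
g = information_gain(2.0, 0.5)
print(g, shannon_entropy_gaussian(2.0) - shannon_entropy_gaussian(0.5))
```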

6.2.2. Discrimination gain based on Kullback–Leibler's cross-entropy

The discrimination between the target's predicted density before observation, p_1(X), and the updated density after observation, p_2(X), is defined in terms of Kullback–Leibler's cross-entropy as

D(p_2, p_1) = ∫ p_2(X) log(p_2(X)/p_1(X)) dX.   (33)

Suppose both p_1(X) and p_2(X) are Gaussian with means M_1 and M_2 and covariances P_1 and P_2; the discrimination gain of p_2(X) with respect to p_1(X) is then given by

D(p_2, p_1) = (1/2) tr[P_1^{-1} (P_2 - P_1 + (M_2 - M_1)(M_2 - M_1)^T)] - (1/2) log(|P_2| / |P_1|).   (34)
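In the scalar case, Eq. (34) can be coded directly (a sketch of ours with assumed values; in one dimension the trace and the determinants are just the variances themselves):

```python
import math

def kl_discrimination(M1, P1, M2, P2):
    """Scalar form of Eq. (34): Kullback-Leibler discrimination of the
    updated Gaussian N(M2, P2) with respect to the prediction N(M1, P1)."""
    return (0.5 * (P2 - P1 + (M2 - M1) ** 2) / P1
            - 0.5 * math.log(P2 / P1))

# Identical densities carry no discrimination; shifting the mean by one
# standard deviation contributes half a nat:
print(kl_discrimination(0.0, 1.0, 0.0, 1.0))  # -> 0.0
print(kl_discrimination(0.0, 1.0, 1.0, 1.0))  # -> 0.5
```

Note that, unlike the entropy difference of Eq. (32), this measure also rewards a shift of the mean, not only a shrinkage of the covariance.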

Using either of the information measures described above, we can now develop a control paradigm for sensor selection in a multiple-target situation. The purpose is to make decisions from a global information perspective by maximizing the amount of information gained across all targets. Given the expected information/discrimination gain for every pair of sensor subset and target, this problem can be addressed using a linear programming formulation as follows.

Let the sensors be indexed from 1 to N_S and the targets from 1 to N_T. Each sensor k has its own tracking capacity λ_k, representing the maximum number of targets that can be sensed by the sensor at each scan. Denoting by G_ij the information/discrimination gain when sensor subset i (i = 1, ..., 2^{N_S} - 1) is allocated to target j, the sensor selection problem is given by

Maximize   Gain = Σ_{i=1}^{2^{N_S}-1} Σ_{j=1}^{N_T} G_ij x_ij   (35)

subject to the constraints:

Σ_{i=1}^{2^{N_S}-1} x_ij ≤ 1   for j = 1, ..., N_T,   (36)

Σ_{i ∈ J(k)} Σ_{j=1}^{N_T} x_ij ≤ λ_k   for k = 1, ..., N_S,   (37)

with x_ij ∈ {0, 1} for all pairs i, j, where J(k) is the integer set containing all the indices of the sensor subsets in which sensor k is included.
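For a tiny instance, the 0-1 program of Eqs. (35)–(37) can be solved by brute-force enumeration, which makes the structure of the constraints explicit. The sketch below is our own (the gain values, subsets, and capacities are invented for illustration; a real system would use an integer programming solver):

```python
from itertools import product

def best_assignment(gains, subsets, capacities):
    """Brute-force solution of the 0-1 program in Eqs. (35)-(37) for tiny
    instances.  gains[i][j] is G_ij; subsets[i] is the set of sensors in
    subset i; capacities[k] is the tracking capacity of sensor k.
    Each target receives at most one subset (choice None = no subset)."""
    n_subsets, n_targets = len(gains), len(gains[0])
    best_gain, best_x = 0.0, None
    for choice in product([None] + list(range(n_subsets)), repeat=n_targets):
        load = {k: 0 for k in capacities}          # measurements per sensor
        gain, feasible = 0.0, True
        for j, i in enumerate(choice):             # Eq. (36) holds by construction
            if i is None:
                continue
            gain += gains[i][j]                    # objective, Eq. (35)
            for k in subsets[i]:
                load[k] += 1
                if load[k] > capacities[k]:        # capacity constraint, Eq. (37)
                    feasible = False
        if feasible and gain > best_gain:
            best_gain, best_x = gain, choice
    return best_gain, best_x

# Two sensors, three subsets ({1}, {2}, {1,2}), two targets, capacity 1 each.
gains = [[3.0, 1.0], [2.0, 4.0], [6.0, 5.0]]       # assumed G_ij values
subsets = [{1}, {2}, {1, 2}]
capacities = {1: 1, 2: 1}
print(best_assignment(gains, subsets, capacities))
```

Here the globally best choice splits the sensors across the two targets (total gain 7) rather than spending both sensors on the higher-gain subset for a single target.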

In the literature, both information measures have been shown to be useful for sensor selection in multi-sensor, multi-target tracking applications. Shannon's entropy was adopted in [49,61], where the expected information gain can be derived by extrapolating the Kalman filter state covariance and then calculating the updated covariance matrix after a measurement. On the other hand, Schmaedeke and Kastella [62] and Dodin et al. [19] applied Kullback–Leibler's cross-entropy to measure the discrimination of information in combination with the interacting multiple model Kalman filter. For the time being, however, we have no knowledge about the comparative performance of the two entropy measures in sensor management applications.

In comparison with the covariance control strategy, information-theoretic approaches provide a globally optimal solution obeying all resource constraints, so that later scheduling of sensing requests is not needed. On the other hand, a higher computational burden should be expected when using them, owing to the required calculation of a large number of information/discrimination gains at every cycle.

6.3. Decision-theoretic approaches to sensor planning

Kristensen [41] treated the problem of choosing proper sensing actions from a family of candidates as one of decision-making, and developed a framework based on Bayesian decision analysis (BDA) for its solution. It was strongly argued in his thesis that uncertainty has to be dealt with in sensor planning, since the ultimate purpose of performing sensory actions is to reduce uncertainty about the external world. On the other hand, BDA offers a powerful mechanism for representing and reasoning under uncertainty. The theory of BDA was originally applied in the area of economics to evaluate various actions, e.g., investments, against each other.

Fig. 9 shows the decision tree which was used in [41] for sensor action planning, where we can see two different types of nodes: boxes and circles. A box is a decision node followed by paths decided by the system, while a circle corresponds to a chance node with its branches determined by chance. The root decision node is associated with sensory actions such as A_i, which produce random outputs x_j related to the report chance node. The second decision node, called the actuator node, represents the consequent type of action, e.g. a_k, leading to completion of the task. Furthermore, the state chance node corresponds to the world state after an actuator action, with U(a_k, z_l) denoting the utility or payoff for completing a given task. From the viewpoint of buying information about an uncertain world state,

BDA was used to evaluate distinct sensory actions against each other in a cost/benefit manner. To account for the increase in expected utility per cost unit compared with no sensing, the measure EISI (expected interest from sample information) was defined as:

EISI(A_i) = [Σ_{j=1}^{n} P(x_j) EU(x_j) - EU(A_0)] / C(A_i),   (38)

where EU(A_0) and EU(x_j) denote the expected utilities of no sensing and of receiving report x_j, respectively, P(x_j) the probability of receiving x_j, and C(A_i) the cost of performing the sensory action A_i. Finally, the Bayesian decision rule simply selects the action with the maximum EISI value as the sensory action to be performed. That is,

A_opt = argmax_{A_i ∈ {A_0, ..., A_m}} EISI(A_i).   (39)
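Eqs. (38) and (39) amount to a per-cost-unit value-of-information comparison. The sketch below is our own illustration (the action names, report distributions, utilities, and costs are all invented; they are not from [41]):

```python
def eisi(reports, eu_no_sensing, cost):
    """Expected interest from sample information, Eq. (38).
    reports       -- list of (P(x_j), EU(x_j)) pairs for a sensory action
    eu_no_sensing -- EU(A_0), expected utility of taking no measurement
    cost          -- C(A_i), cost of performing the action"""
    expected_utility = sum(p * eu for p, eu in reports)
    return (expected_utility - eu_no_sensing) / cost

def best_action(actions, eu_no_sensing):
    """Bayesian decision rule of Eq. (39): pick the action with maximum EISI."""
    return max(actions, key=lambda name: eisi(actions[name][0],
                                              eu_no_sensing,
                                              actions[name][1]))

# Two hypothetical sensory actions: ((P(x_j), EU(x_j)) pairs, cost C(A_i)).
actions = {
    "cheap_scan":    ([(0.5, 6.0), (0.5, 2.0)], 1.0),
    "detailed_scan": ([(0.5, 9.0), (0.5, 3.0)], 4.0),
}
print(best_action(actions, eu_no_sensing=3.0))  # -> cheap_scan
```

Although the detailed scan buys more raw utility (6 vs 4 expected), its fourfold cost makes the cheap scan the better purchase per cost unit, which is exactly what EISI captures.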

Although the method outlined above provides an explicit, coherent, and theoretically well-founded framework to plan sensory actions under uncertainty, the construction of decision trees is still problem-dependent, in particular with respect to the subjective definition of utilities for completing a task. Whether it is possible to devise appropriate utilities in various applications remains for future investigation. The other potential difficulty is how to formulate actuator actions in the decision tree. It was noted in [41] that such actions are not restricted to operations of real actuators in the traditional sense, but refer to somehow productive actions marking the completion of a task. However, we are not clear about what should be treated as actuator actions in accordance with the Bayesian decision tree when designing sensor managers for surveillance purposes.

Fig. 9. Bayesian decision tree for sensor action planning (cited from [41]).

Closely related to the work of Kristensen is the paper [43], which was based on the novel idea of learning the utility of using a sensor, and which addressed the problem of finding a set of sensors that solves a given problem in the cheapest possible fashion. Another decision-theoretic approach, to so-called deliberate sensing, was proposed in [21]. However, its authors did not give implementation details but pointed out that decision theory itself does not constitute a solution to the problem.

6.4. Fuzzy logic resource manager

Gonsalves and Rinkus [26] described an intelligent fusion and asset management processor whose fourth level is a fuzzy logic collection manager, responsible for mapping the current situation state and enemy track information into (job) requests on sensing assets. The mapping depends on prior knowledge of sensor capabilities and enemy tactical doctrine. However, this paper does not explain how to incorporate fuzzy logic in the knowledge representation and reasoning.

A work somewhat similar to that of Gonsalves, called the fuzzy-logic based resource manager, was presented in [65], which aims at optimal allocation of various resources distributed over many dissimilar platforms. A fuzzy decision tree for resource management was constructed by codifying related military expertise, with the parameters of root concept membership functions being optimized by a genetic algorithm. The fitness function for the genetic algorithm was established based upon various considerations of geometry, physics, engineering, and military doctrine. A later paper [66] by the same authors extends the idea of the former paper [65] by incorporating data mining techniques [6], which are generally understood as the efficient discovery of valuable, implicit knowledge and information from a large collection of data, for performance optimization of the fuzzy decision tree. In particular, a genetic algorithm was applied in the data mining process to yield the parameters of the root concept membership functions based upon a database of scenarios. Two possible ways to construct a database of good quality were discussed. A comprehensive scenario database is beneficial for the data mining technique to extract knowledge covering a wide range of behaviors and providing robust strategies for resource allocation. Conversely, a small data set containing a narrow spectrum of scenarios could result in a resource manager effective in some cases but ineffective in others.

6.5. Markov decision for sensor allocation in classification

For simultaneous classification of a number of unknown objects, it is important to properly distribute the available sensors across the different objects. The dynamic sensor allocation problem consists of selecting sensors of a multi-sensor system to be applied to various objects of interest using feedback strategies. Mathematically, this is a partially observed Markov decision problem [44,75], as formulated in the following.

Consider a problem of N objects, with x_i ∈ {1, ..., K} denoting the true class of object i. It is assumed that the x_i are modeled as independent random variables with discrete values in a finite space. Moreover, each object i is associated with a prior probability distribution representing the a priori knowledge obtained on its class variable. Now, in order to obtain further information about the state of each object, appropriate sensors should be assigned to the various objects at the time intervals t ∈ {0, 1, ..., T-1}. The collection of sensors applied to object i during interval t is represented by a vector

U_i(t) = [u_{i1}(t), ..., u_{iM}(t)],   (40)

where

u_ij(t) = 1 if sensor j is used on object i at interval t, and u_ij(t) = 0 otherwise,   (41)

and M is the total number of sensors in the sensing system. Because of the limited resources sustaining the whole system, the planned sensor distributions must satisfy the following constraint for every t ∈ {0, 1, ..., T-1}:

Σ_{i=1}^{N} Σ_{j=1}^{M} r_ij(t) u_ij(t) ≤ R(t),   (42)

where R(t) is the maximum amount of resources that can be consumed by the whole sensor system during interval t, and r_ij denotes the quantity of resources consumed by sensor j on object i.

Here, the goal of sensor allocation is to achieve an accurate classification of all the objects after T stages. Let [v_1, v_2, ..., v_N] be the final classification decision and c(x_i, v_i) the error function representing the penalty of classifying an object of class x_i as type v_i. Sensor allocation can then be defined as the problem of finding a sequence of decisions {u_ij(t), v_i | i = 1, ..., N; j = 1, ..., M; t = 0, ..., T-1} which minimizes the expected total cost J_u in (43), subject to the constraint (42):

J_u = E{ Σ_{i=1}^{N} c(x_i, v_i) }.   (43)
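At the final stage, the terminal decision part of Eq. (43) decomposes per object: given the posterior over classes, each v_i is chosen to minimize the posterior expected penalty. The sketch below is our own illustration (the posteriors and the 0-1 cost matrix are assumed values, as would be produced by some measurement sequence):

```python
def bayes_decision(posterior, cost):
    """Class decision minimizing the expected penalty for one object:
    v = argmin_v sum_x posterior[x] * cost[x][v]  (cf. Eq. (43))."""
    n_decisions = len(cost[0])
    risks = [sum(posterior[x] * cost[x][v] for x in range(len(posterior)))
             for v in range(n_decisions)]
    return risks.index(min(risks))

def expected_total_cost(posteriors, cost):
    """Expected value of J_u in Eq. (43) under per-object Bayes decisions."""
    total = 0.0
    for post in posteriors:
        v = bayes_decision(post, cost)
        total += sum(post[x] * cost[x][v] for x in range(len(post)))
    return total

# Two objects, two classes, 0-1 penalty; the assumed posteriors reflect
# how confidently each object has been classified so far.
cost = [[0.0, 1.0], [1.0, 0.0]]
posteriors = [[0.9, 0.1], [0.3, 0.7]]
print(expected_total_cost(posteriors, cost))
```

The hard part of the problem, of course, is not this terminal decision but choosing the sensor assignments u_ij(t) that shape these posteriors over the T stages.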

The above problem is a special case of the finite-state, finite-observation, partially observed Markov decision problems addressed in [44,75].

In principle, problems of this kind can be solved by means of stochastic dynamic programming (SDP). We can, in particular, convert the partially observed Markov decision problem into a standard fully observed one by defining the problem state in terms of its conditional probability distribution given the available information about the sensors applied and the measurement data acquired. Bertsekas [5] presented an SDP recursion to solve this problem. The main idea is to select prospective decisions at every stage such that the expected value of the cost-to-go function at the next stage is minimized. Unfortunately, the presence of the constraint equation (42) makes it impossible to decompose the whole task of decision-making into smaller problems for individual objects. As a result, the dynamic programming algorithm must operate on the entire problem space, and the associated computation becomes prohibitive even for a moderate number of objects.

The near-optimal feedback strategy proposed in [15] might be useful to alleviate the computational burden associated with strict solutions to sensor allocation using SDP. By relaxing a hard constraint like that in (42) to an average resource constraint, the suggested approximate solution uses Lagrangian relaxation to decompose the multi-object optimization problem into many single-object problems that are coordinated through the introduced Lagrange multipliers. In this way, the relaxed sensor allocation problem is decoupled hierarchically into two levels. At the lower level, the sensors for single objects are determined using the SDP algorithm, under the given values of the Lagrange multipliers. At the higher level, based on the lower-level solutions, the values of the Lagrange multipliers are optimized via certain non-differentiable optimization techniques. The advantage of this approximate strategy is the reduction of computational complexity obtained by decomposing the whole decision problem into decoupled sub-problems at the lower level. However, this benefit is earned at the cost of accepting a weaker condition, namely that the average amount of consumed resources does not exceed that which is available. It is debatable whether the relaxed resource constraint has practical value, since it allows violation of the original constraint in quite a few cases.

In addition to dynamic programming, state-based search methods were discussed in [10] for fully observable and non-observable Markov decision processes. In particular, it was pointed out that search techniques can be applied to partially observable problems as well, for example by searching the space of belief states [9]. The presented methodology of state-based search may be worth investigating in the context of sensor allocation, to achieve potentially more effective and efficient decisions about sensor distributions in real time.

7. Towards cooperative sensor behavior

A team-theoretic formulation of multi-sensor systems was presented in [29], with sensors being considered as members or agents of a team, able to offer opinions and to bargain in group decision situations. Coordination and control under this structure was analyzed and discussed based on a theory of team decision-making. It was assumed that a manager/coordinator makes group decisions according to criteria of maximum team utility. The most general form of team utility incorporates utility measures of both team-member and joint team actions, combining local and global objectives. Alternatively, in some cases, the team utility function can disregard individual team members' opinions, thereby allowing the manager to make optimum decisions regardless of individual team members' losses. It is clear, however, that since a team utility represents group preference with respect to an underlying problem in a specific scenario, its definition is situation-dependent. Thus, establishing a comprehensive function for assessing team utility may become difficult, if not impossible, in complex applications.

Bowyer and Bogner [12] analyzed the issue of integrating distributed sensor assets based on the concept of cooperating machines. It was argued that heterogeneous sensors in a multi-sensor system should be treated as cooperative members of a community. Here, the concept of community refers to a plastic association rather than a rigid connection of sensors, considering the possible departure of a previously participating sensor as the situation unfolds. The authors also pointed out that certain social behavior principles are desired of a sensor network to foster cooperation between individual members and to enhance their mutual awareness. Such considerations led to the proposal of exploiting agent technology for managing temporal requirements and implementing the desired social behaviors within a community of sensors. A multi-agent architecture was suggested for this purpose, allowing agents to interact with the sensor system. As indicated by the proposed agent system shown in Fig. 10, the information from an existing sensor is wrapped up by its local sensor agent, which then communicates and negotiates with other agents via a suitable communication medium.

A multi-agent resource management system was also studied by Luo et al. [45] to support the integration of data from multiple sensors and of associated important sensory information located in different agents. Two kinds of agents were involved in this study, with service agents handling local sensor information and a global decision agent conducting the coordination activities necessary to achieve cooperative behavior of the whole system. An Internet-based network architecture was used for communications between the service agents and the global decision agent.

Although approaches based on intelligent agents seem to offer a useful framework for realizing autonomous cooperation between different sensors, few concrete policies have been reported in the literature answering the question of how such agents could coordinate their decisions in a complex environment. However, for the simple problem of switching between sensors when tracking a moving target, a fuzzy controller has been designed for the cueing process [55] and shown to be very successful in simulations.

Currently, the Argus project [22] is being carried out by the Australian Centre for Field Robotics in Sydney, to develop a decentralized ground-based sensor network. The nodes include vision and multi-spectral cameras, lasers, and (in the future) radars. The Argus network will be used both for air-target tracking and for hand-off of ground targets from air to land sensors.

8. Conclusion and discussion

This paper highlights the emerging area of multi-sensor management from the perspective of multi-sensor information fusion. Characteristics of three problem classes of multi-sensor management are elaborated. Furthermore, a general perspective on problem-solving for sensor management is presented in terms of certain methodological considerations and a top-down working procedure. As a collection of activities controlling information gathering, the scope of sensor management is wide, and many of the problems involved seem to be case-specific. In view of this, this paper discusses the state of the art in terms of solutions to specific problems.

Research in sensor management is still in its infancy. Early work on this topic has largely focused on sensor scheduling and sensor resource allocation. With the advance of information fusion systems into more and more complex environments, an increasing demand will likely occur for sensor management functions on high levels such as mission planning, sensor deployment and sensor coordination. Unfortunately, these issues have not yet been extensively addressed.

Contemplating sensor management requirements on high levels leaves us with the sense that methods of qualitative reasoning may be needed as complementary tools, to deal with abstract but imprecise information. In general, highly strategic decisions in sensor management might require the involvement of human operators, who are expected to "discuss" and "negotiate" with the software for working out intelligent solutions. Usually human operators tend to express their opinions in the form of propositions formulated in natural language, and they will also want to receive information as linguistically understandable descriptions. Moreover, tolerance of imprecision seems a necessary requirement when making high-level decisions for achieving tractability, robustness, low computational cost, and better rapport with the inexact nature of human thinking.

We feel that the term sensor management is becoming inappropriate to capture what it really means to cope with increasingly high-level functionality in the future. This term was originally put forward to refer mainly to single-sensor scheduling and control. It does not accurately capture the nature of process refinement in the context of information fusion. Indeed, what we are now talking about is a procedure to manage data acquisition from the external world, driven by information requests. Whether a new term is necessary or beneficial to better reflect the gist of this fast developing area is open for debate and discussion by the information fusion community.

Future research work can be conducted on several interesting but under-researched problems, including:

• Combined sensor placement and sensor selection in multi-sensor and multi-target applications. So far in the literature these two problems have been addressed in isolation from each other. Some papers discuss sensor selection for various targets under the assumption of known fixed sensor locations, whereas other papers focus on sensor placement in tracking a single target, so that sensor selection is not required. However, in real applications of multi-sensor and multi-target tracking we frequently have to solve these two problems in combination.

• Management of movable sensor assets or platforms such as unmanned air vehicles for surveillance purposes. An interesting problem may be planning time-constrained movements of sensor(s) to keep track of moving targets in a dynamic environment.

• Cooperation between sensors in a decentralized paradigm. Cooperation is particularly needed for a team of sensors and platforms that collectively perform surveillance tasks across a region. Each team member is expected to make its own decision about where to go and what to sense. But these decisions must be coordinated with other sensors and platforms to arrive at a globally optimal perception for the system as a whole. Further, in the case of failure of one or more sensors or platforms, the system should be able to reconfigure the control and coordination of sensor behaviors.

Fig. 10. A multi-agent based architecture for sensor integration (cited from [12]).

• Sensor management under multiple objectives. Prevalent in military applications is the situation that there are other objectives to be accounted for, such as remaining covert and recognizing enemy spoofing, in addition to information fusion quality. Sometimes such demands conflict with each other, requiring on one hand measuring in a highly covert way and on the other hand getting as much information as possible. In order to achieve a good balance between different objectives, an appropriate arbitration mechanism is needed.

• Adaptive signal filtering with respect to sensor placement. Up to now, Kalman filtering and particle filtering have been used in sensor placement for target state estimation. Both techniques have advantages and weaknesses. An adaptive mechanism is desirable to decide which filter to use based on the situation and the mission goal. Moreover, both Kalman and particle filters give average performance of state estimation. However, in some cases we may want to ensure the worst-case accuracy, especially in military applications. This could make it advantageous to use so-called H∞ filtering (indeed, Kalman filtering is also termed H2 filtering). H∞ filtering minimizes the "worst-case" estimation error and assumes no a priori knowledge of noise statistics [63]. Automatic switching among the three filters could be valuable for implementation of adaptive signal filtering.
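To make the idea of automatic filter switching concrete, the sketch below tracks a scalar random-walk state and switches between a standard Kalman gain and a crudely "robust" gain based on an innovation-consistency test. The robust gain, the switching threshold, and all constants are invented for illustration; this is not the H∞ filter of [63], and a particle-filter branch would slot into the same selector.

```python
def kalman_gain(P, R):
    """Standard scalar Kalman gain for measurement model z = x + v, v ~ N(0, R)."""
    return P / (P + R)

def robust_gain(P, R, gamma=1.5):
    """Illustrative H-infinity-flavoured gain: inflate the prior covariance to
    hedge against misspecified noise statistics (gamma and this form are
    assumptions, not the full H-infinity Riccati solution)."""
    P_infl = gamma * P
    return P_infl / (P_infl + R)

def adaptive_filter(zs, R_assumed=1.0, q=0.01, nis_threshold=4.0):
    """Track a scalar random-walk state; switch to the robust gain whenever the
    normalized innovation squared (NIS) suggests the assumed noise model is
    wrong. All constants here are illustrative."""
    x, P = zs[0], 1.0
    for z in zs[1:]:
        P += q                              # predict step (random-walk model)
        innovation = z - x
        S = P + R_assumed                   # innovation variance under the model
        nis = innovation ** 2 / S
        K = robust_gain(P, R_assumed) if nis > nis_threshold \
            else kalman_gain(P, R_assumed)
        x += K * innovation                 # update estimate
        P *= (1.0 - K)                      # update covariance
    return x
```

The NIS test is only one plausible switching criterion; a mission-aware sensor manager could instead select the filter from the situation assessment, as the bullet above suggests.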

Acknowledgements

We would like to sincerely thank Johan Schubert for his helpful suggestions, Ronnie Johansson and Karsten Jöred for beneficial discussions and remarks, and the anonymous referees for many useful suggestions for the improvement of this paper.

References

[1] R.A. Adrian, Sensor management, in: Proceedings of the AIAA/IEEE Digital Avionics System Conference, Ft. Worth, TX, 1993, pp. 32–37.
[2] R.C. Arkin, Behavior-Based Robotics, MIT Press, Cambridge, MA, 1998.
[3] S. Arnborg, H. Artman, J. Brynielsson, K. Wallenius, Information awareness in command and control: precision, quality, utility, in: Proceedings of the International Conference on Information Fusion, 2000, pp. ThB1-25–32.
[4] Y. Bar-Shalom, X. Li, Estimation and Tracking: Principles, Techniques, and Software, Artech House, Boston, 1993.
[5] D.P. Bertsekas, Dynamic Programming and Optimal Control, vols. I–II, Athena Scientific, Belmont, MA, 1995.
[6] J.P. Bigus, Data Mining with Neural Nets (Chapter 1), McGraw-Hill, New York, 1996.
[7] S. Blackman, R. Popoli, Sensor management, in: Design and Analysis of Modern Tracking Systems, Artech House, Norwood, MA, 1999, pp. 967–1068.
[8] E. Blasch, R. Malhotra, S. Musick, Using reinforcement learning for target search in sensor management, in: Proceedings of the National Sensor and Data Fusion Conference, Boston, MA, 1997, pp. 267–275.
[9] B. Bonet, H. Geffner, Learning sorting and decision trees with POMDPs, in: Proceedings of the 15th International Conference on Machine Learning, Madison, WI, 1998, pp. 73–81.
[10] C. Boutilier, T. Dean, S. Hanks, Decision-theoretic planning: structural assumptions and computational leverage, Journal of Artificial Intelligence Research 11 (1999) 1–94.
[11] C. Bowman, A. Steinberg, A systems engineering approach for implementing data fusion systems, in: D. Hall, J. Llinas (Eds.), Handbook of Data Fusion, CRC Press, Boca Raton, 2001, pp. 16-1–39 (Chapter 16).
[12] R.S. Bowyer, R.E. Bogner, Cooperative behavior in multi-sensor systems, in: Proceedings of the 6th International Conference on Neural Information Processing, Perth, Australia, 1999.
[13] R.A. Brooks, Intelligence without reason, in: Proceedings of the 12th International Joint Conference on Artificial Intelligence, Sydney, 1991, pp. 569–595.
[14] D.A. Castanon, Optimal detection strategies in dynamic hypothesis testing, IEEE Transactions on Systems, Man, and Cybernetics 25 (7) (1995) 1130–1138.
[15] D.A. Castanon, Approximate dynamic programming for sensor management, in: Proceedings of the 36th IEEE Conference on Decision and Control, 1997, pp. 1202–1207.
[16] C. Chong, S. Mori, K. Chang, Distributed multitarget multisensor tracking, in: Y. Bar-Shalom (Ed.), Multitarget-Multisensor Tracking: Applications and Advances, Artech House, Boston, 1990 (Chapter 8).
[17] M. Cone, S. Musick, Scheduling for sensor management using genetic search, in: Proceedings of the IEEE National Aerospace and Electronics Conference, Dayton, OH, vol. 1, 1996, pp. 150–156.
[18] B.V. Dasarathy, Decision Fusion, IEEE Computer Society Press, Silver Spring, MD, 1994.
[19] P. Dodin, J. Verliac, V. Nimier, Analysis of the multisensor multitarget tracking resource allocation problem, in: Proceedings of the International Conference on Information Fusion, 2000, pp. WeC1-17–22.
[20] A. Doucet, N. de Freitas, K. Murphy, S. Russell, Rao-Blackwellised particle filtering for dynamic Bayesian networks, in: Proceedings of the 16th International Conference on Uncertainty in Artificial Intelligence, Stanford, 2000, pp. 176–183.
[21] D.L. Draper, S. Hanks, Sensing as a deliberate act, in: Working Notes from the AAAI Spring Symposium on Control of Selective Perception, Stanford, CA, 1992, pp. 39–42.
[22] H. Durrant-Whyte, A Beginner's Guide to Decentralised Data Fusion, Technical Document of Australian Centre for Field Robotics, University of Sydney, Australia, 2000.
[23] H. Durrant-Whyte, M. Stevens, Data fusion in decentralised sensing networks, in: Proceedings of the International Conference on Information Fusion, 2001, pp. WeA3-19–24.
[24] R. Fung, E. Horvitz, P. Rothman, Decision theoretic approaches to sensor management, DTIC #AB B172227, Wright-Patterson AFB, OH, Feb. 1993.
[25] C. Giraud, B. Jouvencel, Sensor selection in a fusion process: a fuzzy approach, in: Proceedings of the IEEE Conference on Multisensor Fusion and Integration for Intelligent Systems, Las Vegas, NV, 1994, pp. 599–606.
[26] P.G. Gonsalves, G.J. Rinkus, Intelligent fusion and asset management processor, in: Proceedings of the International Conference on Information Technology, Syracuse, NY, 1998, pp. 15–18.
[27] N. Gordon, A. Marrs, D. Salmond, Sequential analysis of nonlinear dynamic systems using particles and mixtures, in: W. Fitzgerald, R. Smith, A. Walden, P. Young (Eds.), Nonlinear and Nonstationary Signal Processing, Cambridge University Press, Cambridge, 2001 (Chapter 2).
[28] P. Greenway, R. Deaves, Sensor management using the Decentralised Kalman Filter, in: Sensor Fusion VII, Boston, MA, Proceedings of the SPIE 2355 (1994) 216–225.
[29] G. Hager, H.F. Durrant-Whyte, Information and multi-sensor coordination, in: J.F. Lemmer, L.N. Kanal (Eds.), Uncertainty in Artificial Intelligence, vol. 2, Elsevier, North-Holland, 1998, pp. 381–394.
[30] K.J. Hintz, GMUGLE: a goal lattice constructor, in: Proceedings of the SPIE International Symposium on Aerospace/Defense Sensing & Control, Orlando, FL, vol. 4380, 2001, pp. 324–327.
[31] K.J. Hintz, G.A. McIntyre, Information instantiation in sensor management, in: Proceedings of the SPIE International Symposium on Aerospace/Defense Sensing & Control, Orlando, FL, vol. 3374, 1998, pp. 38–47.
[32] K.J. Hintz, G.A. McIntyre, Goal lattices for sensor management, in: Proceedings of the SPIE International Symposium on Aerospace/Defense Sensing and Control, Orlando, FL, vol. 3720, 1999, pp. 249–255.
[33] K.J. Hintz, E.S. McVey, Multi-process constrained estimation, IEEE Transactions on Systems, Man, and Cybernetics 21 (1) (1991) 434–442.
[34] K. Johansson, P. Svensson, Submarine tracking by means of passive sonobuoys, Technical Report of Swedish Defence Research Establishment, FOA-R97-00440-505SE, 1997.
[35] K. Jöred, P. Svensson, Target tracking in archipelagic ASW: a not-so-impossible proposition?, in: Proceedings of Underwater Defence Technology Europe 98, London, 1998, pp. 172–175.
[36] S. Julier, J.K. Uhlmann, General decentralized data fusion with covariance intersection (CI), in: D. Hall, J. Llinas (Eds.), Handbook of Data Fusion, CRC Press, Boca Raton, 2001, pp. 12-1–25 (Chapter 12).
[37] M. Kalandros, L.Y. Pao, Controlling target estimate covariance in centralized multisensor systems, in: Proceedings of the American Control Conference, Philadelphia, PA, 1998, pp. 2749–2753.
[38] M. Kalandros, L.Y. Pao, Randomization and super-heuristics in choosing sensor sets for target tracking applications, in: Proceedings of the IEEE Conference on Decision and Control, Phoenix, AZ, 1999, pp. 1803–1808.
[39] K. Kastella, Discrimination gain to optimize detection and classification, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 27 (1) (1997) 112–116.
[40] W. Komorniczak, J. Pietrasinski, Selected problems of MFR resources management, in: Proceedings of the International Conference on Information Fusion, 2000, pp. WeC1-3–8.
[41] S. Kristensen, Sensor planning with Bayesian decision analysis, Ph.D. Dissertation, Laboratory of Image Analysis, Aalborg University, Aalborg, Denmark, 1996.
[42] T.W.E. Lau, Y.C. Ho, Super-heuristics and their applications to combinatorial problems, Asian Journal of Control 1 (1) (1999) 1–13.
[43] J. Lindner, R.R. Murphy, E. Nitz, Learning the expected utility of sensors and algorithms, in: Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, Las Vegas, NV, 1994, pp. 583–590.
[44] W.S. Lovejoy, A survey of algorithmic methods for partially observable Markov decision processes, Annals of Operations Research 28 (1991) 47–66.
[45] R.C. Luo, A.M.D. Shr, C.-Y. Hu, Multiagent based multisensor resource management, in: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Victoria, BC, Canada, 1998, pp. 1034–1039.
[46] R. Malhotra, Temporal considerations in sensor management, in: Proceedings of the IEEE National Aerospace and Electronics Conference, 1995, pp. 86–93.
[47] R. Malhotra, E. Blasch, D. Johnson, Learning sensor-detection policies, in: Proceedings of the IEEE National Aerospace and Electronics Conference, Wright Laboratory, Dayton, OH, 1997, pp. 769–776.
[48] G.A. McIntyre, A comprehensive approach to sensor management and scheduling, Doctoral Dissertation, George Mason University, Fairfax, VA, 1998.
[49] G.A. McIntyre, K.J. Hintz, An information theoretic approach to sensor scheduling, in: Proceedings of the SPIE International Symposium on Aerospace/Defense Sensing and Control, Orlando, FL, vol. 2755, 1996, pp. 304–312.
[50] G.A. McIntyre, K.J. Hintz, Sensor management simulation and comparative study, in: Proceedings of the SPIE International Symposium on Aerospace/Defense Sensing & Control, Orlando, FL, vol. 3068, 1997, pp. 250–260.
[51] G.A. McIntyre, K.J. Hintz, Sensor measurement scheduling: an enhanced dynamic, preemptive algorithm, Optical Engineering 37 (2) (1998) 517–523.
[52] J.M. Molina Lopez, F.J. Jimenez Rodriguez, J.R. Casar Corredera, Fuzzy reasoning for multisensor management, in: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 1995, pp. 1398–1403.
[53] J.M. Molina Lopez, F.J. Jimenez Rodriguez, J.R. Casar Corredera, Symbolic processing for coordinated task management in multi-radar surveillance networks, in: Proceedings of the International Conference on Information Fusion, 1998, pp. 725–732.
[54] S. Musick, R. Malhotra, Chasing the elusive sensor manager, in: Proceedings of the IEEE National Aerospace and Electronics Conference, 1994, pp. 606–613.
[55] G.W. Ng, K.H. Ng, L.T. Wong, Sensor management control and cue, in: Proceedings of the International Conference on Information Fusion, 2000, pp. WeB1-16–21.
[56] G.W. Ng, K.H. Ng, Sensor management – what, why and how, Information Fusion 1 (2) (2000) 67–75.
[57] D. Penny, The automatic management of multi-sensor systems, in: Proceedings of the International Conference on Information Fusion, 1998, pp. 748–755.
[58] D. Penny, M. Williams, A sequential approach to multi-sensor resource management using particle filters, in: Proceedings of the SPIE International Conference on Signal and Data Processing of Small Targets, 2000, pp. 598–609.
[59] C.L. Pittard, D.E. Brown, D.E. Sappington, SPA: Sensor placement analyzer, an approach to the sensor placement problem, Technical Report, Department of Systems Engineering, University of Virginia, 1994.
[60] C.G. Schaefer, K.J. Hintz, Sensor management in a sensor rich environment, in: Proceedings of the SPIE International Symposium on Aerospace/Defense Sensing and Control, Orlando, FL, vol. 4052, 2000, pp. 48–57.
[61] W. Schmaedeke, Information based sensor management, in: Signal Processing, Sensor Fusion, and Target Recognition II, Orlando, FL, Proceedings of the SPIE 1955 (1993) 156–164.
[62] W. Schmaedeke, K. Kastella, Information based sensor management and IMMKF, in: Proceedings of the SPIE International Conference on Signal and Data Processing of Small Targets, Orlando, FL, 1998, pp. 390–401.
[63] D. Simon, Minimax Filtering, http://www.innovatia.com/software/papers/minimax.htm, 2000.
[64] A.F.M. Smith, A.E. Gelfand, Bayesian statistics without tears: a sampling–resampling perspective, The American Statistician 46 (2) (1992) 84–88.
[65] J.F. Smith III, R.D. Rhyne II, A fuzzy logic algorithm for optimal allocation of distributed resources, in: Proceedings of the International Conference on Information Fusion, 1999, pp. 402–409.
[66] J.F. Smith III, R.D. Rhyne II, A fuzzy logic resource manager and underlying data mining techniques, in: Proceedings of the International Conference on Information Fusion, 2000, pp. WeB1-3–9.
[67] A. Steinberg, Problem-solving approach to data fusion, in: Proceedings of the International Conference on Information Fusion, 2001, pp. FrB2-17–26.
[68] A. Steinberg, C. Bowman, Revisions to the JDL data fusion model, in: D. Hall, J. Llinas (Eds.), Handbook of Data Fusion, CRC Press, Boca Raton, 2001, pp. 2-1–19 (Chapter 2).
[69] D. Strömberg, G. Pettersson, Parse tree evaluation – a tool for sensor management, in: Proceedings of the International Conference on Information Fusion, 1998, pp. 741–747.
[70] P. Svensson, Information fusion in the future Swedish RMA defence, Swedish Journal of Military Technology 69 (3) (2000) 2–8.
[71] P. Svensson, K. Johansson, K. Jöred, Submarine tracking using multi-sensor data fusion and reactive planning for the positioning of passive sonobuoys, in: Proceedings of Hydroakustik 1997, Stockholm. Technical Report of Swedish Defence Research Establishment, FOA-R–97-00552-409–SE.
[72] K.A. Tarabanis, P.K. Allen, R.Y. Tsai, A survey of sensor planning in computer vision, IEEE Transactions on Robotics and Automation 11 (1) (1995) 86–104.
[73] R. Washburn, A. Chao, D. Castanon, D. Bertsekas, R. Malhotra, Stochastic dynamic programming for far-sighted sensor management, in: IRIS National Symposium on Sensor and Data Fusion, MIT Lincoln Laboratory, Lexington, 1997.
[74] R. Wesson et al., Network structures for distributed situation assessment, in: A. Bond, L. Gasser (Eds.), Readings in Distributed Artificial Intelligence, Morgan Kaufmann, Los Altos, CA, 1988, pp. 71–89.
[75] C.C. White, A survey of solution techniques for the partially observed Markov decision processes, Annals of Operations Research 32 (1991) 215–230.
[76] F.E. White, Managing data fusion systems in joint and coalition warfare, in: Proceedings of the European Conference on Data Fusion, Malvern, UK, 1998, pp. 49–52.
[77] M. Williams, P. Svensson, K. Jöred, Multi-sensor tracking by particle-filter-based predictive sensor deployment, Manuscript, 2001.
[78] Z. Zhang, K.J. Hintz, OGUPSA sensor scheduling architecture and algorithm, in: Proceedings of the SPIE International Symposium on Aerospace/Defense Sensing and Control, Orlando, FL, vol. 2755, 1996, pp. 296–303.
