Progress in Using Entity-Based Monte Carlo Simulation With Explicit Treatment of C4ISR to Measure IS Metrics
Corporate Headquarters: 11911 Freedom Drive, Suite 800, Reston, VA 20190-5602; (703) 787-8700 (voice); (703) 787-3518 (fax)
Simulation Sciences Division (SSD): 512 Via de la Valle, Suite 301, Solana Beach, CA 92075-2715; (858) 792-8904 (voice); (858) 792-2719 (fax)
http://www.metsci.com
Prepared by Dr. Bill Stevens, Metron
for the OASD (C3I) Information Superiority Metrics Workshop, 28-29 March 2000
OUTLINE
Approach
Key Metrics Related Details
– Basic Monte Carlo Metrics and Statistics
– Cause-and-Effect Analysis
– Sensitivity Analysis
– Hypothesis Testing
Examples
– CINCPACFLT IT-21 Assessment
– FBE-D
Lessons-Learned and Challenges
Entity-Based Monte Carlo Simulation with Explicit C4ISR
Provides one means to directly measure relevant IS metrics in mission-to-campaign level scenarios, and to assess the impact of IT and WPR improvements on warfighting outcomes.
Explicit C4ISR includes representation of:
– Platforms, systems, and commanders,
– Command organization (group, mission, platform),
– Commander's plans and doctrine,
– Information collection,
– Information dissemination,
– Tactical picture processing, and
– Warfighting interactions.
Provides a means to capture, simulate/view, and quantify the performance of alternative C4ISR architectures and warfighting plans.
Key Metrics Related Details: Basic Metrics and Statistics
Typical Monte Carlo metrics are random variables X computed for each replication ($X_n$ = the value in replication n). Examples:
– Percent of threat subs tracked/trailed/killed on D+10,
– Average threat sub AOU on D+0, etc.
Three key quantities should be computed for each X:
– Estimated mean (typical value) of X over N replications:
  $\mu \approx m = \frac{1}{N}\sum_{n=1}^{N} X_n$
– Estimated variance of X (for normal X, 95% of values lie within $2\sigma$ of $\mu$):
  $\sigma^2 \approx s^2 = \frac{1}{N-1}\sum_{n=1}^{N}\left(X_n - m\right)^2$
– Estimated variance of m, yielding a 95% confidence bound on the estimated mean:
  $\mathrm{Var}(m) \approx \frac{s^2}{N}$
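The briefing gives no code, but these three estimators are straightforward to compute. A minimal Python sketch, not from the original; the replication values below are hypothetical:

import numpy as np

def monte_carlo_stats(x, z95=1.96):
    """Estimate mean, variance, and a ~95% confidence interval on the
    mean from per-replication values of a Monte Carlo metric X."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = x.mean()                       # estimated mean of X
    s2 = x.var(ddof=1)                 # estimated variance of X (unbiased)
    var_m = s2 / n                     # estimated variance of the mean
    half_width = z95 * np.sqrt(var_m)  # ~95% confidence half-width on m
    return m, s2, (m - half_width, m + half_width)

# Hypothetical metric: percent of threat subs killed on D+10, per replication.
replications = [62.0, 58.5, 71.0, 66.5, 60.0, 69.5, 64.0, 67.0]
mean, variance, ci = monte_carlo_stats(replications)
print(f"mean={mean:.1f}  variance={variance:.1f}  95% CI=({ci[0]:.1f}, {ci[1]:.1f})")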
Key Metrics Related Details: Cause-and-Effect Analysis
Relate the data recorded in the operational sequence below to force effectiveness metrics: force attrition and damage, resources expended, and commander's objectives attained.
Example C4ISR operational sequence, with cause-and-effect metrics recorded for each threat presentation and at force level:
– Threat emission: record time(s) of key threat emissions.
– Wide-area sensor (WAS) detection: record time, accuracy, and completeness of each WAS detection. Force level: WAS tasking loads vs. capacity vs. time; percent mis-allocations vs. time.
– Engagement sensor cue: record cue receipt times and time from cue receipt to acquisition. Force level: engagement sensor cueing loads vs. capacity vs. time; percent mis-allocations vs. time.
– Weapon allocation: record weapon system allocation times. Force level: weapon system tasking loads vs. capacity vs. time; percent mis-allocations vs. time.
– Engagement: record weapon launch and intercept times and engagement results.
– BDA collection: record BDA collection times, associated engagement events, and BDA data. Force level: BDA collection system tasking loads vs. capacity vs. time; percent mis-allocations vs. time.
– Re-engagement.
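As an illustration of the per-presentation recording above, a minimal Python sketch, not from the original briefing; the class, event names, and times are hypothetical:

from dataclasses import dataclass, field

@dataclass
class ThreatPresentation:
    """Hypothetical per-presentation event log for cause-and-effect metrics."""
    threat_id: str
    events: dict = field(default_factory=dict)  # event name -> sim time (hours)

    def record(self, event, t):
        self.events[event] = t

    def interval(self, start, end):
        """Elapsed time between two recorded events, or None if either is missing."""
        if start in self.events and end in self.events:
            return self.events[end] - self.events[start]
        return None

# Example: time from engagement-sensor cue receipt to target acquisition.
p = ThreatPresentation("threat-001")
p.record("cue_receipt", 4.2)
p.record("acquisition", 4.9)
print(p.interval("cue_receipt", "acquisition"))  # 0.7 hours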
Key Metrics Related Details: Excursion Analysis
Monte Carlo runs can be organized in the form of a
scenario baseline + scenario excursion sets + selected
metrics and metric breakdowns. Example excursion sets:
– SA-10 Pks: [0.0, 0.2, 0.4, 0.6]
– CV-68 VA Squadron: [squadron-x, squadron-y, squadron-z]
Resulting excursion set sensitivity graphs can be
generated:
[Graph: number of BLUE fighters killed vs. SA-10 Pk, one curve each for Squadron X, Squadron Y, and Squadron Z.]
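A minimal Python sketch of organizing runs as a baseline plus excursion sets. The per-replication simulation here is a hypothetical stand-in toy model, not the entity-based simulation described in this briefing:

import itertools
import random
import statistics

# Hypothetical excursion sets layered onto the scenario baseline.
EXCURSIONS = {
    "sa10_pk": [0.0, 0.2, 0.4, 0.6],
    "va_squadron": ["squadron-x", "squadron-y", "squadron-z"],
}

def run_replication(sa10_pk, va_squadron, rng):
    """Stand-in for one Monte Carlo replication of the real simulation.
    Toy model: expected BLUE losses grow with SA-10 Pk."""
    base = {"squadron-x": 8.0, "squadron-y": 6.0, "squadron-z": 7.0}[va_squadron]
    return max(0.0, rng.gauss(base * (0.5 + sa10_pk), 1.5))

def sensitivity_table(n_reps=30, seed=1):
    """Mean BLUE fighters killed for every excursion combination,
    suitable for plotting one curve per squadron against Pk."""
    rng = random.Random(seed)
    table = {}
    for pk, sqdn in itertools.product(EXCURSIONS["sa10_pk"],
                                      EXCURSIONS["va_squadron"]):
        vals = [run_replication(pk, sqdn, rng) for _ in range(n_reps)]
        table[(pk, sqdn)] = statistics.mean(vals)
    return table

for (pk, sqdn), mean_killed in sorted(sensitivity_table().items()):
    print(f"Pk={pk:.1f}  {sqdn}: {mean_killed:.1f} BLUE fighters killed")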
Key Metrics Related Details: Hypothesis Testing
Many typical study objectives can be addressed through the
use of statistical hypothesis testing.
As an example, one could employ hypothesis testing to test H0 vs. H1:
H0: $\mu_X = \mu_Y$
H1: $\mu_X \neq \mu_Y$
and thus determine whether or not squadron X is statistically more or less survivable than squadron Y for a given SAM configuration.
Standard tests can be applied as a function of $(\alpha, \beta)$, where $\alpha$ ($\beta$) = the probability of falsely rejecting (accepting) H0.
[Graph: number of BLUE fighters killed vs. SAM Pk, with curves and means for Squadron X and Squadron Y compared at a given Pk'.]
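A minimal Python sketch of such a test, using a two-sample Welch's t-test as one standard choice; the per-replication kill counts are hypothetical:

from scipy import stats

# Hypothetical per-replication BLUE fighters killed for two squadrons
# at the same SAM configuration (Pk).
killed_x = [5, 7, 6, 8, 5, 6, 7, 6, 5, 7]
killed_y = [8, 9, 7, 10, 8, 9, 8, 7, 9, 10]

# Two-sided Welch's t-test of H0: mu_X == mu_Y vs. H1: mu_X != mu_Y.
t_stat, p_value = stats.ttest_ind(killed_x, killed_y, equal_var=False)

alpha = 0.05  # probability of falsely rejecting H0 (Type I error)
print(f"t={t_stat:.2f}, p={p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the squadrons differ significantly in survivability.")
else:
    print("Fail to reject H0 at the chosen alpha.")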
Examples: CINCPACFLT IT-21 Assessment
IT-21 tactical picture is significantly improved: more tracks, and more tracks with ID info.
[Charts: ACC ground picture, current vs. IT-21. Each shows number of tracks (0-160) vs. time (0-80 hours), broken out into total tracks, unknown, mechanized, artillery, armor, and infantry.]
Simulation revealed that the IT-21 ground picture would have a much improved ID rate …
Examples: CINCPACFLT IT-21 Assessment
[Chart: IT-21 with/without on-the-fly ATO. Counts (0-3000) of sorties flown, engagements, kills, and SAM attacks, comparing on-the-fly ATO vs. fixed killboxes.]
IT-21 with on-the-fly ATO: more kills for the same number of sorties, and fewer BLUE losses. Degree of improvement: 36% more kills, 46% fewer losses.
The on-the-fly ATO concept was proposed to leverage the improved ID rates …
Examples: CINCPACFLT IT-21 Assessment
Strike OODA loop phases and IT-21 improvements:
– IPB Time: faster processing and communication of annotated imagery data; faster and surer assertion of target types.
– Time Within Sensor Range: pre-positioned surveillance and engagement asset holding points.
– Initial Indication of Target Time: overhead/all-source detection of specific targets.
– Actionable Time: distributed fusion efficiencies decrease correlation times; rapid prioritization of targets; dynamic allocation of assets to identified targets; better weapon-to-target pairings.
– Engagement Time: better-positioned engagement assets; in-flight target updates lead to shorter localization times.
– Assessment Time: all-source BDA data married with the common tactical picture; quicker relay of BDA; anticipatory scheduling of pre- and post-strike imagery assets shortens the engagement cycle.
IT-21 Strike OODA Loop for High-Priority Targets Reduced from 13.5 to 5.5 Hours.
Artillery Attrition Goal Achieved in 34 vs. 64 Hours.
50% Increase in Critical Mobile Target Kills.
Combined IT and process improvements yield speed-of-command and commander’s attrition goal timeline improvements …
Examples: Fleet Battle Experiment Delta (FBE-D)
The MBC/C7F hypothesized that distributed surface picture management and distributed localization/prosecution asset allocation, leveraging planned IT-21 improvements, would result in significant improvements in CSOF mission effectiveness …
[Diagram: traditional centralized C2 vs. FBE-D distributed C2. Under the USN Surface Warfare Commander, distributed surface picture management nodes (Picture Manager 1 … Picture Manager N) and distributed battle management nodes (Battle Manager 1 … Battle Manager N).]
Examples: Fleet Battle Experiment Delta (FBE-D)
M&S was employed to model the CSOF threat and US/ROK surveillance, localization, andprosecution assets. Live operators interacted with the simulation by making surveillance, localization, and prosecution asset allocations. These asset allocations were fed into the simulation in order to provide operator feedback and for the purpose of assessing the effectiveness of the experimental distributed C2 architecture.
[Diagram: live C2 to NSS simulation interface. The live Maritime CSOF Commander C2 system (LAWS) sends prosecution tasking to the NSS simulation, which models attack assets (USAF AC-130 aircraft, AH-64 Apaches, USAF/ROK ACC strike aircraft, and USN CV strike aircraft), threat assets (nK SOF force transport boats), fusion (USN C2 ships), and sensors (P-3C and SH-60); sensor reports, weapons, and BDA reports flow between them.]
Examples: Fleet Battle Experiment Delta (FBE-D)
A novel live operator-to-simulation voice- and GUI-based approach was employed to effect the desired virtual experimentation environment. Pictured here is the air asset interface ...
[Diagram: air asset interface linking the NSS simulation (USS BLUE RIDGE) via MASOC/LAWS and the TACCIMS network with HTACC and 6 CAB, pairing simulated US/ROK air surveillance and engagement assets against simulated nK SOF insertion assets. Message flow: air support request; air support grant; notification of air support grant; check in; contact investigate/engage order; contact or engage report; check out.]
HTACC: USAF ACC Hardened Tactical Command Center (F-16C, F-15E, and ROK F-4 command), Osan, ROK. 6 CAB: US Army Sixth Cavalry Brigade (Apache helicopter command), Kangnung, ROK.
Examples: Fleet Battle Experiment Delta (FBE-D)
The FBE-D distributed C2 architecture plus new in-theater attack asset capabilities yielded the surprise result that the assessed CSOF threat could be countered on Day 01 of the Korean war plan. Post-analysis, pictured below, was employed to assess the sensitivity of this result to different force laydowns.
[Charts: FBE-D "live" runs, showing targets remaining vs. hours for Team A (baseline), Team B (no AC-130), and Team C (offsite, no AC-130), annotated where AH-64s land, rearm, and launch and where a communications failure occurred; and FBE-D excursion analysis, showing cumulative excursions: baseline, no AC-130, lower Hellfire Pk, and 30% fewer AH-64s.]
Lessons-Learned and Challenges
Lessons-Learned
• C4ISR architectures and C2 decision processes can be explicitly represented at the commander, platform, and system levels. Detailed alternatives can be explicitly represented and assessed.
• Simulation supports detailed observation of C4ISR architecture in n-sided campaign- and mission-level scenarios.
• Facilitates/forces the community to think through proposed C4ISR architectures.
• Identification of key performance drivers and assessment of the warfighting impact of technology initiatives using Monte Carlo simulation are feasible.
Challenges
• Detailed C4ISR assessments require consideration of nearly all details associated with planning and executing a C4ISR exercise or experiment.
• Collection of valid platform, system, and (in particular) C2 data and assumptions for friendly and threat forces is an issue.
• Campaign-level decisions (e.g., determining the commander's objectives) are not easily handled.
• Scenarios in which major re-planning (e.g., modifying the commander's objectives) is warranted are not easily handled.
• Execution times limit the analyses that can reasonably be performed.