Flight Test Data Pipeline - International Test and...


Flight Test Data Pipeline

Aaron Payne

Georgia Tech Research Institute

Electronic Systems Laboratory

aaron.payne@gtri.gatech.edu

Outline

• Data Pipeline

– The problem

– The tools

– The implementation

• Lessons Learned

– Pipeline/data enablers

– Viewing the results

Analysis Types

• Flight Test Analysis

– Generally event specific

– Detections

• Ranges, sensitivity, accuracy

– Performance

• Processing load, response timing, state transitions

• Developmental Analysis

– Generally aggregate

– Improve performance

– Algorithm development

– Algorithm tuning

Flight Test Analysis

[Diagram: Internal recorder(s) -> download -> ffextract converts binary Firefly (1553) data to engineering units (CSV); instrumentation, TSPI, and track data feed analyses of detection, end-to-end timing, accuracy, and system loading]

Developmental Analysis Example

• Problem: Correlate a large number of diverse tracks, primarily based on kinematics

• Given: A correlator capable of grouping tracks based on uncertainty-based kinematic comparison scores

• Characterize sensors!

[Diagram: Sensor sources to characterize: Fire Control Radar, Targeting Pod, Missile, Link 16, Intra-flight Data Link, RWR, Jammer]

Characterizing Sensors

• 1.5+ Terabytes of Flight Test Data, but not tested with this functionality in mind

• Data needed:

– Track data – preferably many samples across all relevant modes and different target types

– Aircraft position data

– Target truth

• Issues:

– Very little “real” target truth

– Data being recorded in new format, IRIG Chapter 10

Preparing the Data: Tracks

[Diagram: Firefly (1553) internal recorder(s) produce an IRIG Ch 10 recording -> Ethernet Playback Utility (IRIG Chapter 10, pcap) -> Ethernet packets -> WinPcap / IRIG106 -> track data in CSV, SQLite, HDF5]

Preparing the Data: Aircraft Position

[Diagram: IRIG Ch 10 recording -> 1553 data -> IRIG Ch10-to-Firefly converter -> TSPI in CSV, SQLite, HDF5]

Preparing the Data: Target Truth

• Truth requires additional assets or special ranges, i.e. unavailable

• How are the sensors tested/used now?

– Sorties are typically 2-4 ships

– Target one another

• Can we use recorded aircraft position TSPI as truth?

• Yes, but how do we associate TSPI to tracks?

Preparing the Data: Track Assignment

[Diagram: Track data (tracks TA-TD) and TSPI feed the PC Correlator, the existing track-to-track assignment tool; TSPI A is injected as a track so that sensor tracks are assigned to it]
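In Python (the deck's own scripting ecosystem), the TSPI-injection idea can be sketched as: treat each ship's TSPI as one more track and run a kinematic assignment against it. The greedy nearest-neighbor gate below is a stand-in for the PC Correlator's uncertainty-based scoring; the function name, gate value, and flat-earth geometry are all illustrative assumptions, not the actual tool.

```python
import math

def assign_tracks_to_truth(tracks, tspi, gate_m=500.0):
    """Pair sensor tracks with injected TSPI pseudo-tracks by nearest
    neighbor within a distance gate (hypothetical stand-in for the
    real uncertainty-based track-to-track assignment tool)."""
    pairs, used = {}, set()
    for t_name, (tx, ty) in tracks.items():
        best, best_d = None, gate_m
        for s_name, (sx, sy) in tspi.items():
            if s_name in used:
                continue
            d = math.hypot(tx - sx, ty - sy)  # flat-earth range, metres
            if d < best_d:
                best, best_d = s_name, d
        if best is not None:
            pairs[t_name] = best
            used.add(best)
    return pairs

# Track TC falls outside the gate, so it stays unassigned
tracks = {"TA": (0, 0), "TB": (1000, 50), "TC": (5000, 5000)}
tspi = {"TSPI_A": (10, 5), "TSPI_B": (990, 60)}
print(assign_tracks_to_truth(tracks, tspi))  # {'TA': 'TSPI_A', 'TB': 'TSPI_B'}
```

A real implementation would score over time windows and track covariances rather than a single position snapshot, but the injection principle is the same.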

Inadvertent Pipeline (Single Ship)

[Diagram: IRIG Ch 10 recording -> WinPcap / IRIG106 -> Ethernet packets and 1553 data -> IRIG Ch10-to-Firefly converter -> track data and TSPI in CSV, SQLite, HDF5]

Inadvertent Pipeline (Flight Group)

[Diagram: Track data #1-#4 and TSPI #1-#4 feed the PC Correlator; in runs 1-4, each ship's track data is correlated against all four TSPI sources]

Inadvertent Pipeline (Aggregate)

• Batch process all flights over the multi-year repository

• Retrieve all track report samples that were time coincident with "truth"

• Generate statistics on the aggregated data
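The time-coincidence retrieval step can be sketched as a tolerance-gated nearest-time lookup over sorted timestamps. This is a stdlib-only illustration (the function name and tolerance value are assumptions, not the pipeline's actual code):

```python
import bisect

def time_coincident(track_times, truth_times, tol_s=0.5):
    """Return track sample times within tol_s seconds of some truth
    (TSPI) sample. Both inputs are sorted lists of epoch seconds."""
    hits = []
    for t in track_times:
        i = bisect.bisect_left(truth_times, t)
        # only the truth samples bracketing t can be the nearest ones
        for j in (i - 1, i):
            if 0 <= j < len(truth_times) and abs(truth_times[j] - t) <= tol_s:
                hits.append(t)
                break
    return hits

print(time_coincident([0.0, 1.2, 2.6, 9.9], [1.0, 2.0, 3.0]))  # [1.2, 2.6]
```

At repository scale the same join is a one-liner with Pandas' `merge_asof`, which is one reason Pandas appears in the recommended tools.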

Tool Chain Goals

• Simple operation – add new data, get new results

• Provide an architecture for multiple analyses

• Simple analysis integration

• Simple analysis aggregation

• Avoid redundancy – both in engineer work and data crunching

• Relative flexibility in analysis programming language

• Portable

• Leverage existing analysis scripts

Pipeline Functional Layout – Stage 1

Pipeline Controller

• Crawl • Processor • Memory • Filtering

• Pipeline Controller Responsibilities

– Crawl raw data repository

– Send data to correct parser

– Maximize hardware horsepower (threads, etc.)

– Filter data and analyses/conversions

Stage 1
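The Stage 1 responsibilities (crawl, dispatch to the correct parser, use the hardware, filter) can be sketched in Python with a thread pool. Everything here, the extension map and converter names included, is an illustrative assumption rather than the pipeline's actual code:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Hypothetical converters; the real Stage 2 parsers handle IRIG and Firefly.
def convert_ch10(path):
    return ("ch10", path)

def convert_firefly(path):
    return ("firefly", path)

PARSERS = {".ch10": convert_ch10, ".ff": convert_firefly}

def crawl_and_dispatch(root, max_workers=4, name_filter=None):
    """Walk the raw-data repository, filter, and fan each recognized
    file out to the correct parser on a thread pool."""
    jobs = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                ext = os.path.splitext(name)[1].lower()
                parser = PARSERS.get(ext)
                if parser is None:
                    continue  # not recognized raw flight data
                if name_filter and name_filter not in name:
                    continue  # user-requested filtering
                jobs.append(pool.submit(parser, os.path.join(dirpath, name)))
    # the pool has shut down here, so every future is complete
    return [j.result() for j in jobs]
```

A workflow engine like Luigi (from the recommended tools) covers the same dispatch-and-dependency role with far more robustness.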

Pipeline Functional Layout – Stage 2

Data Converters

IRIG

Firefly

• Data Converter Responsibilities

– Two Inputs

• File to process

• Results Location

– Results in engineering units and consumable format (CSV, SQLite, etc.)

– Determine whether data needs to be analyzed (i.e. only re-run if needed)

Time Synchronize

Common Frame

Stage 2
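The "only re-run if needed" rule is essentially a make-style freshness check. A minimal sketch (the function name and mtime convention are assumptions):

```python
import os

def needs_rerun(raw_file, result_file):
    """Make-style freshness check: convert only when the result is
    missing or older than the raw input it was built from."""
    if not os.path.exists(result_file):
        return True
    return os.path.getmtime(raw_file) > os.path.getmtime(result_file)
```

Each converter (and each analysis stage after it) can call this before doing any work, so re-running the whole pipeline only touches new or changed data.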

Pipeline Functional Layout – Stage 3

• Single Ship Analysis Responsibilities

– Inputs

• Raw filename

• Converted data directory

• Results directory

– Results in engineering units and consumable format (CSV, SQLite, etc.)

– Determine whether data needs to be analyzed (i.e. only re-run if needed)

Stage 3: Single Ship Analyses

Route Plot

Max Detections

Software Loaded

Errors and Utilization

Pipeline Functional Layout – Stage 4

• Multi-Ship Analysis Responsibilities

– Inputs

• Group filenames

• Group converted directory paths

• Results directory

– Results in engineering units and consumable format (CSV, SQLite, etc.)

– Determine whether data needs to be analyzed (i.e. only re-run if needed)

– Use common library to find files

Stage 4: Multi-Ship Analyses

PC Correlator

Truth Statistics

Pipeline Functional Layout – Stage 5

• Aggregate Analysis Responsibilities

– Inputs

• Data directory

• Results directory

– Find results

– Results in engineering units and consumable format (CSV, SQLite, etc.)

– Determine whether data needs to be analyzed (i.e. only re-run if needed)

– Use common library to find files

Stage 5: Aggregate Analyses

Sensor Uncertainty

Data Coverage

Failure Prevalence

Use Cases

Pipeline Functional Layout

[Diagram: All five stages combined. Stage 1: Pipeline Controller (crawl, processor, memory, filtering) -> Stage 2: Data Converters (IRIG, Firefly; time synchronize, common frame) -> Stage 3: Single Ship Analyses (route plot, max detections, software loaded, errors and utilization) -> Stage 4: Multi-Ship Analyses (PC Correlator, truth statistics) -> Stage 5: Aggregate Analyses (sensor uncertainty, data coverage, failure prevalence) -> use cases]

Pipeline Enablers

• Machine-consumable ICD

• Common reference frame

• Metadata

– Input data – Who, What, When, Where, Why, Software Versions, Hardware Versions, etc.

– Analysis data

• When was it generated?

• What source data was used?

– Choose your data storage formats with this in mind
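One lightweight way to carry this metadata alongside results is a JSON sidecar per output file: generation time, script version, and the exact source files (with hashes) that produced it. The schema and function name below are purely illustrative:

```python
import datetime
import hashlib
import json

def write_sidecar(result_path, source_files, script_version):
    """Record provenance for an analysis result: when it was generated,
    from which source files (with hashes), by which script version.
    The schema here is an illustrative convention, not a standard."""
    sources = []
    for p in source_files:
        with open(p, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        sources.append({"path": p, "sha256": digest})
    meta = {
        "generated": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "script_version": script_version,
        "sources": sources,
    }
    sidecar = result_path + ".meta.json"
    with open(sidecar, "w") as f:
        json.dump(meta, f, indent=2)
    return sidecar
```

With sidecars in place, "what combination of source data and script versions created this result?" becomes a file lookup rather than an archaeology exercise.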

General Hurdles

• Reviewing script failures/successes

• Managing data (while maintaining flexibility)

– Protecting source data

– Versioning

• What combination of source data and script versions created a set of results?

– Comparing results

• User access

• Reviewing the results

– More plots is not always better

– Dynamic/Interactive results

Single Track Uncertainty Analysis

• We started with graphs that look like this

• Excellent for evaluating a particular instance

• Frustrating when you have 5000 to look through

Interactive Plot Attempt #1

• Excel

– Build plot template based on data connection

– Perl dynamically copies template and sets data connection

• Pros

– Easy to modify template

– Data is available for additional computation

• Cons

– Sometimes slow

– Plots are limited

Interactive Plot Attempt #2

• JavaScript plotting (specifically Highcharts)

– Build client-side web page (no server)

– Drag and drop CSV data

• Pros

– Freedom

– Portable

• Cons

– Learning curve

– Can struggle with large data

Recommended Tools

• Pandas – Python Data Analysis Library

• D3.js – Data-Driven Documents

• Dygraphs

• Highcharts

• Luigi

• Tableau

• Cesium

Questions?
