Air Traffic Control
Guy Mark Lifshitz
Sevan Hanssian
Flight Communications
• Ground Control
• Terminal Control
• En Route Centers
Flight Communications
• Radar Controllers
• Data Controller
Architecture
Stakeholders
• The Federal Aviation Administration (FAA)
• Large corporate contractor
• Air traffic controllers are the end users
Scale
• Up to 210 consoles per en route center
• Each console contains an IBM RS/6000 CPU
• 400 to 2,440 aircraft at a time
• 16 to 40 radars
• Each center has 60 to 90 control positions
• 1 million lines of Ada code to implement.
Tasks
• Acquire radar reports
• Input and retrieve flight plans
• Handle conflict alerts and potential plane collisions
• Provide recording capability for later playback
• Convert reports for display on consoles
• Broadcast reports to the consoles
• Provide extensive monitoring & control info
• Provide GUI facilities on the consoles
• Allow for custom data displays on each console
Principal Requirements
• Ultra-High Availability
Target of 0.99999 availability (about 5 minutes/year of unavailability)
• High Performance
Many planes, radars, and consoles
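The five-nines target can be sanity-checked with one line of arithmetic: the permitted downtime is the unavailable fraction multiplied by the minutes in a year.

```python
# Downtime budget implied by an availability target.
availability = 0.99999                   # "five nines"
minutes_per_year = 365.25 * 24 * 60      # ~525,960
downtime = (1 - availability) * minutes_per_year
print(f"{downtime:.1f} minutes/year")    # ~5.3 minutes/year
```

This is where the "5 minutes/year" figure on the slide comes from.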
Other Requirements
• Openness
Incorporate other software components
• Field subsets of the system
Handle budget reduction
• Modifiability
Handle changes to hardware and software
• Interoperability
Interface with various external systems
Physical View (1/9)
Physical View (2/9)
-"Host Computer System"
-Provides processing of both surveillance and flight plan data
-Each en route center has two host computers
Physical View (3/9)
-Air traffic controllers' workstations
-Provide displays of aircraft position data
-Allow controllers to modify the flight data
Physical View (4/9)
-"Local Communication Network" (LCN)
-Four parallel token ring networks
-One is a spare (redundancy/load balancing)
Physical View (5/9)
-Connect the networks
-Able to substitute the spare ring if one fails
-Provide other alternative routings
Physical View (6/9)
-Each Host is interfaced to the LCN via dual LCN interface units
-Each one is a fault-tolerant redundant pair
Physical View (7/9)
-Provides a backup display of aircraft position
-Used if the display data provided by the host is lost
-"External System Interface"
-Interfaces radar data from the EDARC
Physical View (8/9)
-"Backup Communications Network" (BCN)
-Ethernet network (TCP/IP protocols)
-Used for the EDARC interface
-Used as a backup network (to the LCN)
Physical View (9/9)
-Test new hardware and software
-Training without interfering with the ATC
Module Decomposition View (1/2)
ISSS Modules:
• Computer Software Configuration Items (CSCIs):
– correspond to work assignments
– form units of deliverable documentation & software
– contain small software elements grouped by a coherent theme/rationale
Modifiability:
• semantic coherence for allocating responsibilities to each CSCI
• abstraction of common services
• anticipate expected changes, generalize the module, maintain interface stability
Module Decomposition View (2/2)
• Display Management
• Common System Services
• Recording, Analysis and Playback
• National Airspace System Modification
• IBM AIX operating system (as the underlying OS)
Process View
Requirement:
• Avoid cold system restarts
• Immediate switchover to a standby component was desired

Design Solution:
– In applications: PAS/SAS scheme
– Across applications: fault-tolerant hierarchy
Process View
• Functional groups
– Run independently on different processors
– Separate instances of the program
– Maintain their own state and message handling
• Operational Units
– One executing copy is the primary, called the primary address space (PAS)
• Updates the SASs when messages arrive
– Other copies are called standby address spaces (SASs)
Process View
• In the event of PAS failure:
– A SAS is promoted to the new PAS
– The new PAS connects with the clients of the operational unit
– A new SAS is started to replace the failed PAS
– The new SAS announces itself to the new PAS
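The failover sequence above can be sketched in a few lines. This is an illustrative Python model, not ISSS code; the class and method names are invented, and real state resynchronization is far more involved.

```python
# Toy model of the PAS/SAS scheme: one primary copy, several standbys.
class Copy:
    """One executing copy of an operational unit (PAS or SAS)."""
    def __init__(self, state=None):
        self.state = state or {}

    def handle(self, message):
        self.state.update(message)

    def sync_state(self, state):
        self.state = dict(state)

class OperationalUnit:
    def __init__(self, copies):
        self.pas = copies[0]          # primary address space
        self.sas_list = copies[1:]    # standby address spaces
        self.clients = []

    def update(self, message):
        """The PAS handles a message, then updates every SAS."""
        self.pas.handle(message)
        for sas in self.sas_list:
            sas.sync_state(self.pas.state)

    def on_pas_failure(self):
        # 1. A SAS is promoted to be the new PAS.
        self.pas = self.sas_list.pop(0)
        # 2. The new PAS reconnects with the unit's clients.
        for client in self.clients:
            client.reconnect(self.pas)
        # 3. A fresh SAS is started to replace the failed copy,
        # 4. and it announces itself to the new PAS.
        self.sas_list.append(Copy(state=dict(self.pas.state)))
```

Because every SAS already shadows the PAS state, promotion is just a pointer switch rather than a cold restart.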
Process View
Client-Server View
Within Applications (PAS/SAS):
– Client Operational Units send the server a service request message
– Server replies with an acknowledgement message

Issues:
– Clients and servers designed to have consistent interfaces
– ISSS used simple, well-defined message-passing protocols for interaction
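The request/acknowledgement exchange might look like the sketch below. The message fields are assumptions for illustration, not the actual ISSS wire format.

```python
# Minimal request/ack exchange over a defined message format.
import json

def make_request(client_id, service, payload):
    """A client operational unit builds a service request message."""
    return json.dumps({"type": "request", "from": client_id,
                       "service": service, "payload": payload})

def server_handle(raw):
    """The server's consistent interface: every request gets an ack."""
    msg = json.loads(raw)
    return json.dumps({"type": "ack", "to": msg["from"],
                       "service": msg["service"], "status": "ok"})

req = make_request("console-42", "flight_plan_update", {"flight": "AC101"})
ack = json.loads(server_handle(req))
print(ack["type"], ack["status"])   # ack ok
```

Keeping the protocol small and explicit is what lets clients and servers evolve independently behind stable interfaces.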
Code View (1/2)
• Applications decomposed into Ada packages
– Basic (type definitions)
– Complex (reused across applications)
• Packages allow for
– Abstraction
– Information hiding
• Processes are schedulable by OS
Code View (2/2)
• The ISSS is composed of several programs
• Ada programs are created from one or more source files
• Ada programs may contain multiple tasks
Layered View (1/5)
Layered View (2/5)
OS Kernel
Layered View (3/5)
Added C programs
OS Kernel
Layered View (4/5)
Message Helpers
Added C programs
OS Kernel
Layered View (5/5)
Applications
Message Helpers
Added C programs
OS Kernel
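The stack built up across these slides can be rendered as a strict layering, where each layer calls only the layer directly beneath it. This toy Python sketch (the function names just mirror the layer names) shows how a request from an application passes down through every layer:

```python
# Strict layering: Applications -> Message Helpers -> Added C programs -> OS Kernel.
def os_kernel(data):
    return f"kernel({data})"

def added_c_programs(data):
    return os_kernel(f"c_ext({data})")

def message_helpers(data):
    return added_c_programs(f"helper({data})")

def application(data):
    return message_helpers(f"app({data})")

print(application("radar report"))
# kernel(c_ext(helper(app(radar report))))
```

A layer never reaches past its neighbor, which is what makes lower layers replaceable without touching the applications.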
Fault-Tolerance View
Each level asynchronously:
• detects errors in self, peers & lower level
– Heartbeat tactic
– Ping/echo tactic
• handles exceptions from lower levels
• diagnoses, recovers, reports and raises exceptions
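The two detection tactics differ in direction: with heartbeat the monitored component pushes periodic "I'm alive" messages, while with ping/echo the monitor actively asks and expects an echo. A hedged sketch (timeouts and names are invented):

```python
# Sketch of heartbeat vs. ping/echo failure detection.
import time

class Echoer:
    """A healthy component that answers pings."""
    def echo(self, msg):
        return msg

class Monitor:
    def __init__(self, timeout=1.0):
        self.timeout = timeout
        self.last_beat = {}

    def heartbeat(self, component_id):
        """Called by a monitored component on its own schedule (push)."""
        self.last_beat[component_id] = time.monotonic()

    def is_alive(self, component_id):
        """Heartbeat check: failed if no beat arrived within the timeout."""
        beat = self.last_beat.get(component_id)
        return beat is not None and time.monotonic() - beat < self.timeout

    def ping(self, component):
        """Ping/echo check: the monitor asks; the component must echo (pull)."""
        try:
            return component.echo("ping") == "ping"
        except Exception:
            return False
```

Running both tactics at every level of the hierarchy is what lets each level detect failures in itself, its peers, and the level below.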
Fault-Tolerance View
Requirement:
• Avoid cold system restarts
• Immediate switchover to a standby component was desired

Design Solution:
– In applications: PAS/SAS scheme
– Across applications: fault-tolerant hierarchy
Adaptation Data
• Uses the modifiability tactic of configuration files called adaptation data
• User- or center-specific preferences
• Configuration changes
• Requirements changes
• A complicated mechanism for maintainers
• Increases the state space
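The adaptation-data idea is that center-specific behavior comes from a configuration file rather than a code change. A minimal sketch, with an invented file format and invented parameter names:

```python
# Adaptation data: per-center overrides merged over system-wide defaults.
import json

DEFAULTS = {"units": "knots", "conflict_warning_seconds": 120}

def load_adaptation_data(text):
    """Merge a center's overrides over the system-wide defaults."""
    config = dict(DEFAULTS)
    config.update(json.loads(text))
    return config

# A center changes one preference without touching the software:
center_file = '{"conflict_warning_seconds": 180}'
config = load_adaptation_data(center_file)
print(config)
```

The slide's caveat shows up even in this sketch: every key in the file is another dimension of the system's state space that maintainers must reason about.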
Code Templates
• The primary and secondary copies are never doing the same thing
• But they have the same source code
• A continuous loop services incoming events
• Makes it simple to add new applications
• Coders and maintainers do not need to know about message handling
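The template idea can be sketched as one shared event loop whose behavior depends only on the role it is started with; application code supplies handlers and never touches message handling. All names here are invented for illustration:

```python
# One code template for both roles: PAS applies handlers, SAS shadows state.
def run(role, inbox, handlers):
    """Generic main loop shared by primary and standby copies."""
    state = {}
    for event in inbox:                      # continuous event loop
        if event["kind"] == "shutdown":
            break
        if role == "SAS" and event["kind"] == "state_update":
            state.update(event["data"])      # standby just mirrors the PAS
        elif role == "PAS":
            handlers[event["kind"]](state, event["data"])
    return state

# Adding an application means supplying handlers, nothing more:
handlers = {"track": lambda st, d: st.update(d)}
pas_state = run("PAS", [{"kind": "track", "data": {"AC101": (45.5, -73.6)}}],
                handlers)
print(pas_state)
```

Because the loop is identical in both copies, the same source runs as primary or standby, which is exactly why the two copies are "never doing the same thing" yet never diverge in code.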
How the ATC System Achieves Its Quality Goals
• Goal: High Availability
• How Achieved: Hardware redundancy, software redundancy
• Tactic(s) Used: State resynchronization; shadowing; active redundancy; removal from service; limit exposure; ping/echo; heartbeat; exceptions; spare
How the ATC System Achieves Its Quality Goals
• Goal: High Performance
• How Achieved: Distributed multiprocessors; front-end schedulability analysis; network modeling
• Tactic(s) Used: Introduce concurrency
How the ATC System Achieves Its Quality Goals
• Goal: Openness
• How Achieved: Interface wrapping and layering
• Tactic(s) Used: Abstract common services; maintain interface stability
How the ATC System Achieves Its Quality Goals
• Goal: Modifiability
• How Achieved: Templates and adaptation data; module responsibilities; specified interfaces
• Tactic(s) Used: Abstract common services; semantic coherence; maintain interface stability; anticipate expected changes; generalize the module; component replacement; adherence to defined protocols; configuration files
How the ATC System Achieves Its Quality Goals
• Goal: Ability to Field Subsets
• How Achieved: Appropriate separation of concerns
• Tactic(s) Used: Abstract common services
How the ATC System Achieves Its Quality Goals
• Goal: Interoperability
• How Achieved: Client-server division of functionality and message-based communications
• Tactic(s) Used: Adherence to defined protocols; maintain interface stability