Analysis of paravirtualization tools supporting isolation in multicore,
mixed-criticality aerospace systems
D. Andreetti², F. Federici¹, V. Muttillo¹, D. Pascucci², L. Pomante¹, G. Valente¹
¹ Center of Excellence DEWS, University of L'Aquila, Italy; ² Thales Alenia Space Italy, Rome, Italy
EMC²: Mixed Criticality Applications and Implementation Approaches
Overview
Context
Univaq EMC2 use case
Description
Preliminary implementation
State of the work
Conclusions and future developments
Prague, 20/01/2016 HIPEAC 2016
EMC²
Living Lab 3: Space Applications (TASE, TASI)
UC MPSoC Hardware for Space (TRT, TASI, TTT, TUW, Tecnalia)
UC MPSoC SW and Tools for Space (TASE, TASI, TTT, Integrasys)
UC Payload Applications (Integrasys, Space Apps)
UC Platforms Applications (TASI, TASE, Univaq)
UC Radar Payload Applications (TASI)
Context
Avionics and aerospace applications: stringent requirements in terms of determinism, robustness and security
From the federated approach to Integrated Modular Avionics (IMA): IMA systems as mixed-criticality systems
Transition to multicore architectures: use of virtualization
(Figure: in the federated approach, each function runs on its own processing system (Func. 1 on PS 1, Func. 2 on PS 2); in IMA, multiple functions F1..F4 share processing systems, and on a multicore platform functions F1 and F2 share a single PS across CORE 1 and CORE 2.)
Application scenario
(Figure: satellite demo platform, hardware and software. The target multicore processing platform connects to two peripheral devices over SpaceWire links and to a test console over JTAG, serial and Ethernet. The reference software comprises the application stack (telemetry, file transfers) running on the demo platform and the test software (test input, analysis and benchmarking) running on the console.)
I/O transactions
Three different types of I/O transactions:
Level 1 transactions (telemetry)
Level 2 transactions (telemetry)
Level 3 transactions (file transfer)
Criticality with respect to timing requirements (and resource sharing)
Goals
Migrate a typical aerospace application to a modern multicore platform
Benchmark hypervisors
Compare different virtualization solutions
Target Platform
GR-CPCI-LEON4-N2X: designed for evaluation of the Cobham Gaisler LEON4 Next Generation Microprocessor (NGMP) functional prototype device.
Processor: quad-core 32-bit LEON4 SPARC V8 with MMU and IOMMU
Preliminary work:
Platform Support Package development
SpaceWire drivers for the LEON4 platform
Application structure
(Figure: application structure. The application (telemetry management, file transfers) runs on top of the I/O Manager, which in turn uses the device drivers.)
I/O Manager
The I/O Manager handles all I/O-related activities:
Collects the I/O requests from all the applications
Performs the required transactions on the links
Provides the required data to the application
High-level I/O Manager: formats the data according to specific protocols.
Low-level I/O Manager: manages the data provided by the application and device interfaces, implementing a specific scheduling policy.
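As a rough sketch of this two-layer split (all names, queue sizes and the dummy frame header below are hypothetical, not taken from the actual implementation), the high-level manager could format a payload and enqueue it at its criticality level, while the low-level manager dequeues the most critical pending request:

```c
#include <stdint.h>
#include <string.h>

#define LEVELS   3      /* level 1..3 transactions            */
#define QUEUE_SZ 8      /* requests per level (arbitrary)     */
#define FRAME_SZ 64     /* max formatted frame size (arbitrary) */

/* One I/O request handed from the high-level to the low-level manager. */
struct io_request {
    int     level;            /* 1 = telemetry .. 3 = file transfer */
    uint8_t frame[FRAME_SZ];  /* protocol-formatted payload          */
    size_t  len;
};

/* One FIFO per criticality level. */
static struct io_request queue[LEVELS][QUEUE_SZ];
static int head[LEVELS], tail[LEVELS], count[LEVELS];

/* High-level manager: wrap the raw payload in a dummy protocol header
 * and enqueue it at its criticality level. Returns 0 on success. */
int hl_submit(int level, const uint8_t *payload, size_t len)
{
    int q = level - 1;
    if (q < 0 || q >= LEVELS || count[q] == QUEUE_SZ || len + 2 > FRAME_SZ)
        return -1;
    struct io_request *r = &queue[q][tail[q]];
    r->level    = level;
    r->frame[0] = (uint8_t)level;   /* dummy header: level ...      */
    r->frame[1] = (uint8_t)len;     /* ... and payload length       */
    memcpy(r->frame + 2, payload, len);
    r->len = len + 2;
    tail[q] = (tail[q] + 1) % QUEUE_SZ;
    count[q]++;
    return 0;
}

/* Low-level manager: dequeue the most critical pending request, if any. */
struct io_request *ll_next(void)
{
    for (int q = 0; q < LEVELS; q++) {
        if (count[q] > 0) {
            struct io_request *r = &queue[q][head[q]];
            head[q] = (head[q] + 1) % QUEUE_SZ;
            count[q]--;
            return r;
        }
    }
    return NULL;   /* nothing pending */
}
```

A real low-level manager would issue the dequeued frame on the corresponding SpaceWire link according to its scheduling policy rather than simply returning it.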
Low level scheduling
The application starts its transactions by making the data available to the I/O Manager.
The low-level I/O manager implements a periodic scheduler managing all the low-level transactions from the application to the various peripheral devices.
(Figure: the time window along the time axis is divided into Time Slot 1, Time Slot 2, ..., Time Slot n.)
Each transaction is scheduled on a specified time frame and has a fixed deadline:
Level 1 transactions (deadline = 1)
Level 2 transactions (intermediate deadline)
Level 3 transactions (background transactions)
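A minimal sketch of this periodic scheme, under assumed parameters (the slot budget, window length and counter-based queues below are illustrative only): in each slot the scheduler serves pending level-1 transactions first, then level-2, and fills any remaining budget with level-3 background transfers.

```c
#define SLOTS_PER_WINDOW 4   /* slots per time window (arbitrary)        */
#define SLOT_BUDGET      2   /* transactions that fit in one slot (arbitrary) */

/* Pending transaction counters per level, standing in for real queues:
 * pending[0] = level 1, pending[1] = level 2, pending[2] = level 3. */
static int pending[3];

/* Serve one time slot: level 1 first (its deadline is the current slot),
 * then level 2, then level 3 as background filler.
 * Returns the number of transactions issued in this slot. */
int run_slot(void)
{
    int issued = 0;
    for (int lvl = 0; lvl < 3 && issued < SLOT_BUDGET; lvl++) {
        while (pending[lvl] > 0 && issued < SLOT_BUDGET) {
            pending[lvl]--;   /* a real system would start the transfer here */
            issued++;
        }
    }
    return issued;
}

/* Run one full time window of SLOTS_PER_WINDOW slots. */
int run_window(void)
{
    int total = 0;
    for (int s = 0; s < SLOTS_PER_WINDOW; s++)
        total += run_slot();
    return total;
}
```

The fixed per-slot budget is what bounds the interference of background (level-3) traffic on the deadline-critical telemetry transactions.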
Application structure
(Figure: virtualized application structure. The application (telemetry, file transfers), the high-level I/O manager, the low-level I/O manager and the device drivers run on top of the hypervisor, which runs on the multicore platform.)
Possible mapping
(Figure: possible mapping. The device drivers, the low-level I/O manager, the high-level I/O manager and the application are each placed in their own partition on top of the hypervisor and communicate through shared memory.)
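The shared-memory channel in this mapping could be realized, for instance, as a single-producer/single-consumer ring buffer placed in a region the hypervisor maps into both partitions. The structure below is a generic sketch under that assumption, not the actual XtratuM or PikeOS inter-partition API:

```c
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 4     /* message slots (arbitrary; capacity is one less) */
#define MSG_SZ     32    /* fixed message size (arbitrary)                  */

/* Layout of the shared region visible to both partitions. With exactly one
 * writer and one reader, head and tail each have a single owner, so no lock
 * is needed (volatile stands in for proper memory barriers here). */
struct shm_ring {
    volatile uint32_t head;          /* written by the consumer only */
    volatile uint32_t tail;          /* written by the producer only */
    uint8_t msg[RING_SLOTS][MSG_SZ];
};

/* Producer side (e.g. the application partition). Returns 0 on success. */
int ring_put(struct shm_ring *r, const void *data, size_t len)
{
    uint32_t next = (r->tail + 1) % RING_SLOTS;
    if (next == r->head || len > MSG_SZ)
        return -1;                   /* ring full or message too large */
    memcpy(r->msg[r->tail], data, len);
    r->tail = next;
    return 0;
}

/* Consumer side (e.g. the low-level I/O manager partition). */
int ring_get(struct shm_ring *r, void *out, size_t len)
{
    if (r->head == r->tail)
        return -1;                   /* ring empty */
    memcpy(out, r->msg[r->head], len > MSG_SZ ? MSG_SZ : len);
    r->head = (r->head + 1) % RING_SLOTS;
    return 0;
}
```

Keeping the channel lock-free matters for isolation: neither partition can block the other by holding a lock across a partition switch.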
XtratuM
(Figure: XtratuM architecture. XtratuM runs in kernel mode on the multicore processor (cpu0..cpu3); user and supervisor partitions run in user mode on virtual CPUs (vcpu0, vcpu1, ...) and access the hypervisor through the hypercall interface.)
PikeOS
(Figure: PikeOS architecture. Partitions run in user mode on top of the PikeOS System Software; the PikeOS separation microkernel, together with its Architecture Support Package and Platform Support Package, runs in kernel mode.)
Scheduling
In XtratuM, each CPU holds its own cyclic scheduler, with individual scheduling plans defined for each core. XtratuM also includes fixed-priority scheduling as an alternative scheduling policy.
In PikeOS, the same time-partition scheme is scheduled on all CPUs at the same time. The kernel could schedule different schemes on each CPU, but the PikeOS System Software (PSSW) does not support this yet.
Conclusions and future work
In the context of the EMC² project, Univaq and TASI are collaborating on a case study related to the implementation of a simplified satellite platform.
Two different hypervisors, SYSGO PikeOS and FentISS XtratuM, have been considered for the implementation of the reference application on the quad-core LEON4 platform.
Platform support packages and device drivers have been developed.
Possible implementation approaches have been analyzed.
In the near future, the application will be finalized and deployed on the target platform. The performance of the system will be analyzed against specific real-time, robustness and safety requirements.
Further work
Multiprocessor LEON3 system: performance comparison of the LEON3 and LEON4 processors
Evaluation of possible hardware support strategies for hypervisor functionalities
Heterogeneous platform with a dual-core ARM Cortex-A9 and a multiprocessor LEON3 system, implemented on a Xilinx Zynq: analysis of isolation in heterogeneous platforms sharing memory resources
Thank you for your attention
Questions?