CONTREX/STM/R/D4.2.1 Public
Definitions and intermediate implementation of demonstrator’s execution platforms and run-
time systems
Page 1
Public
FP7-ICT-2013-10 (611146) CONTREX
Design of embedded mixed-criticality CONTRol
systems under consideration of EXtra-functional
properties
Project Duration 2013-10-01 – 2016-09-30 Type IP
WP no. Deliverable no. Lead participant
WP4 D4.2.1 STM
Definitions and intermediate implementation of
demonstrator’s execution platforms and run-time
systems
Prepared by Alberto Rosti (STM)
Issued by STM
Document Number/Rev. CONTREX/STM/R/D4.2.1/1.0
Classification CONTREX Public
Submission Date 2015-04-20
Due Date 2015-03-31
Project co-funded by the European Commission within the Seventh Framework Programme (2007-2013)
© Copyright 2015 OFFIS e.V., STMicroelectronics srl., GMV Aerospace and Defence
SA, Cobra Telematics SA, Eurotech SPA, Intecs SPA, iXtronics GmbH, EDALab srl, Docea
Power, Politecnico di Milano, Politecnico di Torino, Universidad de Cantabria, Kungliga
Tekniska Hoegskolan, European Electronic Chips & Systems design Initiative, ST-Polito
Societa’ consortile a r.l..
This document may be copied freely for use in the public domain. Sections of it may
be copied provided that acknowledgement is given of this original work. No responsibility is
assumed by CONTREX or its members for any application or design, nor for any
infringements of patents or rights of others which may result from the use of this document.
History of Changes
ED. REV. DATE PAGES REASON FOR CHANGES
RG 0.1 2014-11-18 5 Initial version
SB 0.2 2015-03-26 18 AT Demonstrator Platform (COBRA)
AR 0.3 2015-04-08 27 Unmanned Aerial Vehicle Demonstrators (OFFIS-GMV)
CB 0.4 2015-04-09 34 Run-time management and abstractions description
CB 0.5 2015-04-13 42 Final description of POLIMI contribution
EUTH 0.6 2015-04-13 44 EUTH contribution
JF 0.7 2015-04-13 45 INTECS contribution
RVP 0.8 2015-04-13 46 RPA demonstrator platform model
AR 0.9 2015-04-14 47 Integrated version
DQ 0.10 2015-04-16 47 Updating figure 4.4
Contents
1 Introduction ........................................................................................................................ 4
2 Use Case 1: Unmanned Aerial Vehicle Demonstrators ..................................................... 5
2.1 Brief description of the demonstrator platform ........................................................... 5
2.2 Use Case 1a: Multi-Rotor System demonstrator ......................................................... 8
2.3 Use Case 1b: Remotely Piloted Aircraft I/O module Demonstrator Platform ......... 16
3 Automotive Telematics Demonstrator Platform .............................................................. 20
3.1 Brief description of the demonstrator platform ......................................................... 21
3.2 AT demonstrator platform models ............................................................................ 30
3.3 AT platform runtime management ............................................................................ 35
4 Telecom Demonstrator Platform ...................................................................................... 39
4.1 Brief description of the demonstrator platform ......................................................... 39
4.2 Telecom demonstrator platform models .................................................................... 44
5 Conclusions ...................................................................................................................... 46
6 References ........................................................................................................................ 47
1 Introduction
This report describes the demonstrators’ implementation platforms for the three use cases: their definition, their implementation at the intermediate stage, and the supporting run-time systems.
The proper execution platform for each industrial use case is developed at the appropriate platform abstraction level. Platforms are built “from the bottom up”, that is, starting from the concrete hardware components. The abstraction for each use-case platform is derived from the actual hardware platform, so that a model of the runtime properties can be analysed and optimised for that platform.
The runtime mechanisms implemented by the platform are modelled, taking into account the
link between runtime mechanisms and critical extra-functional properties, enabling the
runtime control and optimization of those properties.
In the project, the demonstrators are layered HW/SW structures composed of a HW execution platform, operating system, run-time manager, node abstraction layer, and application. All the platforms pass through definition and implementation stages and also have to provide software support services for the applications. It is thus possible to envisage a development pattern common to the three use-case platforms, encompassing the refinement to their final implementation in support of their target applications.
All the platforms are defined by the industrial partners, in collaboration with the research
partners. Platforms’ definition is carried out by specifying the architecture of building
components and communication facilities. This approach leads to a straightforward implementation on a SoC or MPSoC, building up the platform from its components in a bottom-up fashion starting from the concrete hardware components.
The intermediate implementation can lead to a real platform on FPGA, as in the unmanned aerial vehicle demonstrator, where a virtual platform or native simulation is in place to deal with non-functional properties. It can also consist of a system of components for sensing (SecSec, iNemo), data processing, an automotive gateway, a cloud communication infrastructure and a remote services infrastructure, as in the automotive use case. Or, as in the telecommunication use case, it can be composed of two cards connected to each other by a GE cable supporting Power over Ethernet; originally implemented with a Freescale MPC880 microprocessor and a Lattice FPGA, it has to be re-implemented on a Xilinx Zynq.
Execution platforms are then endowed with a supporting run-time system: the GMV AIR hypervisor and the Real-Time Resource Manager in use case 1, a cloud of semantic services in use case 2, and in use case 3 a Linux OS (Xenomai), network stack, SNMP agent and web server.
An important contribution is the development or exploitation of virtual platforms to link and handle the extra-functional properties. The unmanned aerial use case resorts to a virtual platform for the Xilinx ZYNQ 7000 family available from Cadence [2], the automotive use case resorts to its point tools or web frameworks to model and analyse the full system, and the telecommunications use case exploits the Open Virtual Platform environment and SCNSL for network simulation.
The document is organized into chapters corresponding to each of the three use cases targeted
within the project.
2 Use Case 1: Unmanned Aerial Vehicle Demonstrators
Use Case 1 was split into two different demonstrators, corresponding to two independent
systems, driven by OFFIS and GMV respectively.
The first one (Use Case 1a) consists of an overall UAV controller that executes several tasks of heterogeneous criticality levels. It is based on a pre-existing multi-rotor system used as an aerial platform, which will be extended by a Multi-Processor System on Chip (MPSoC) to implement the safety-, mission- and non-critical functions of an autonomous civilian UAV.
The second one (Use Case 1b) is an adaptation of the I/O module of the Flight Control Computer (FCC) of a pre-existing Remotely Piloted Aircraft (RPA). It consists of a series of
tasks (handling the data from different sensor devices) with different criticality levels
deployed on a Multi-Processor System on Chip (MPSoC).
These two demonstrators are aimed at two different sub-flows within the overall CONTREX design and development flow, as well as at different goals. While the first demonstrator is focused on VP-based simulation, concentrating mainly on power and temperature features, the second one focuses on (higher-level) native simulation and automatic DSE, concentrating mainly on timing and power features. Moreover, while the first demonstrator is focused on the dissemination of methodologies and tools (by means of the demonstrable flying system), the second one focuses on their exploitation.
Both demonstrators use the same MPSoC execution platform, which has then been customized for each one according to the specific demonstrator’s requirements and needs.
Section 2.1 provides a general description of the common execution platform, while sections 2.2 and 2.3 explain how this platform has been customized for the Use Case 1a and Use Case 1b demonstrators, respectively.
2.1 Brief description of the demonstrator platform
This section provides a general description of the execution platform chosen for Use Case 1 (including both the Use Case 1a and Use Case 1b demonstrators). It consists of a Xilinx ZYNQ 7020 MPSoC, which combines multicore processors and programmable logic and thus serves CONTREX’s purposes well. Both demonstrators use the real platform, which is furthermore being simulated, for the consideration of the extra-functional properties, using different technologies: virtualization and native simulation.
Section 2.1.1 briefly describes the main characteristics of the Xilinx ZYNQ 7000 family,
while sections 2.1.2 and 2.1.3 show the models of the execution platform that have been set
up for system simulation.
2.1.1 Xilinx ZYNQ 7000 family
As previously said, the Xilinx ZYNQ 7000 MPSoC family [2] combines multicore processors and programmable logic. The processing system is a dual-core ARM Cortex-A9 MPCore at 866 MHz, and the programmable logic is an Artix-7 FPGA with 85k logic cells. The development with this
MPSoC is fully supported by the Xilinx Vivado Toolchain. The structure of this MPSoC is
shown in Figure 2.1.
Figure 2.1 Overview of the structure of the Xilinx ZYNQ family [2]
The ARM dual core is connected to the peripherals by the AMBA® Interconnect. On the left side of the figure, the available interfaces are shown, which can be connected to the pinout of the MPSoC by the Processor I/O Mux. The AMBA Interconnect also provides the interface to the Multiport DRAM Controller and the Flash Controller, as well as the connection to the Programmable Logic (FPGA) part of the MPSoC.
With this flexible and heterogeneous MPSoC it becomes possible to process the presented safety- and real-time-critical flight algorithms together with the mission-critical payload tasks on a single chip. With the Artix-7 FPGA it is possible to define and build further interfaces, processing elements (e.g. a MicroBlaze soft core) or specialized hardware for the payload processing tasks. Since the development and production of a custom board is very time- and cost-intensive, it was decided to use an industry board available from Trenz Electronic GmbH [3]. The TE0720-01-2IF board, shown in Figure 2.2, has dimensions of 50 mm x 40 mm and a weight of 20 g. It is connected to an individual breakout board via the assembled industry connectors on the bottom of the board.
Figure 2.2 TE0720-01-2IF industry board of Trenz Electronic GmbH [3]
Since the ZYNQ provides only two UART interfaces by default, extra UART interfaces (a different number for each of the demonstrators) had to be implemented in the Programmable Logic part. The communication rates between the components were chosen so that they meet the demonstrators’ requirements and their timing constraints.
2.1.2 Virtual Platform for ZYNQ 7000
A virtual platform for the Xilinx ZYNQ 7000 family is available from Cadence [2]. The so-
called ‘Cadence Virtual System Platform for the Xilinx Zynq-7000 All Programmable SoC’ is
a fast and extensible simulation platform based on SystemC TLM2 intended for software
development (Figure 2.3).
Figure 2.3 Cadence Virtual Platform for ZYNQ SoC
The heart of the virtual platform is an instruction set simulator (ISS) for the ARM Cortex-A9 dual core, provided by Imperas as part of OVP. The Imperas ARM ISS is an instruction-accurate processor model including a debugger interface and performance monitor registers. It
provides a native TLM interface for integration in SystemC TLM2-based virtual platforms. The ISS is extended with models for memory, on-board standard peripherals, interconnects, and display. Altogether, this builds a virtual platform that allows running software, e.g. booting a Linux OS. Further features of the virtual platform are its extensibility and its ability to be connected to real-world interfaces. The latter allows forwarding virtual interfaces of the virtual platform to real interfaces of the host PC. For instance, one can connect the Ethernet port or USB ports; as a result, the Ethernet or USB devices are accessible from inside the virtual platform.
The other feature, extensibility, allows the integration of additional, user-defined components in the virtual platform. They can be given either as SystemC TLM2 components or as HDL (VHDL/Verilog/SystemVerilog) implementations that are co-simulated via the Cadence HDL simulator framework. To do so, Cadence provides configuration files that describe the virtual platform’s components, its internal structure and its communication links. With this feature, it is possible to integrate components that are meant to be implemented in the ZYNQ SoC’s FPGA fabric on the one hand, or components that are used for simulation only, such as observer modules connected to a bus, on the other.
In conclusion, the virtual platform provided by Cadence matches the requirements of the CONTREX use cases. It allows running the demonstrator applications, and its extensibility enables us to integrate extra-functional property models.
2.1.3 UML/Marte model of the demonstrator platform
In order to allow performance analysis and simulation, an execution platform model has been developed using the CONTREX design methodology (based on UML/MARTE) and supported by the analysis and simulation tools; it is used for native simulation with UC’s VIPPE tool. The model is not shown here since it is similar to the one described in section 2.3.2.
2.2 Use Case 1a: Multi-Rotor System demonstrator
As mentioned in deliverable D4.1.1, OFFIS uses the described processing platform on its multi-rotor system demonstrator. The real part of the demonstrator, already flying with the ZYNQ platform, is shown in Figure 2.4.
Figure 2.4 Flying multi-rotor system with its avionics based on the ZYNQ platform
The ZYNQ MPSoC concurrently processes the safety-critical flight algorithms as well as the
mission-critical on-board video processing. To do so the ZYNQ platform is configured and
extended with specific parts for the multi-rotor demonstrator. This configuration and the extensions are shown in section 2.2.1. For analyzing extra-functional properties of the avionics, a virtual platform of the real system will be used. These platform models are presented in section 2.2.2. In section 2.2.3 the runtime management methods used for the safety- and mission-critical parts of the system are introduced.
2.2.1 Multi-Rotor specific extensions of the demonstrator platform
The multi-rotor system requires a specific configuration and also specific extensions of the ZYNQ MPSoC. Since the safety-critical flight algorithms must not be influenced by the mission-critical video processing tasks, and the ARM Cortex-A cores of the ZYNQ are not suited to processing hard real-time algorithms, two MicroBlaze soft cores are implemented in the programmable logic. An overview of the architecture implemented in the programmable logic is shown in Figure 2.5.
Figure 2.5 Used hardware architecture of the ZYNQ MPSoC
The first MicroBlaze is responsible for the data mining of all data from the sensors (MPU9150, BMP085 and battery guards) and from the remote control receiver. The interfaces for connecting the external components are also implemented in the programmable logic, so that only the data mining MicroBlaze has access to the interfaces. The received data is stored in a dual-ported RAM, to which the data mining MicroBlaze is the only component with write access. The second MicroBlaze is used for processing the flight algorithms (e.g. attitude and altitude controllers) and calculating the set points for the motor drivers. This soft core has only read access to the first dual-ported RAM, from which it obtains data such as attitude angles and the altitude computed by the data mining MicroBlaze. The flight control MicroBlaze places its data in the second dual-ported RAM, to which it is the only element with write access. All other processing elements connected to the AMBA interconnect have only read access to this RAM. In this way a safe one-way communication is realized. For transmitting the calculated set points to the motor drivers, another I2C interface is implemented in the programmable logic, with exclusive access by the second MicroBlaze.
In addition to these IP cores, some general-purpose inputs/outputs (GPIOs) are mapped in the programmable logic for simple debugging outputs in the form of LEDs, a buzzer or plain pins. These GPIOs are directly mapped to the responsible components.
The mission-critical video processing tasks are executed on the ARM dual-core system. These tasks use the already existing interfaces, which are available over the AMBA interconnect. The USB interface is used to connect the camera (for capturing the video image), the gimbal (for aligning the camera), and the Wi-Fi dongle (for transmitting data to the ground control station). The SDIO interface is used to load the system configuration, the bitstream for the programmable logic, and the binaries and operating systems for the processing elements while the system is booting. Finally, a UART interface is used to transmit telemetry data for the pilot to the remote control display.
2.2.2 Multi-Rotor demonstrator platform models
To analyze extra-functional properties of the demonstrator, a virtual platform of the avionics with all major parts will be designed. For this, the ‘Cadence Virtual System Platform for the Xilinx Zynq-7000 All Programmable SoC’ mentioned in section 2.1.2 will be used. This virtual platform has the needed extensibility for the elements of the programmable logic described in the previous section. Regarding the extra-functional properties, the virtual platform will be connected to three different models to analyze the timing, the power consumption and the temperature of the MPSoC. These models will be characterized on the real avionics, to obtain data specific to the system used.
2.2.3 Multi-Rotor platform runtime management
In Figure 2.6 the mapping of the tasks of different criticalities to the processing elements is
shown.
Figure 2.6 Mapping of tasks to processing elements of the architecture
Besides the mapping, the figure also shows the runtime management methods used. The safety-critical tasks of the system are scheduled statically and are mapped to the MicroBlazes. This is possible because the data mining and flight control algorithms consist of a sequential data flow, with only rare branches for exception handling. Otherwise the
data flow performs the same operations in every cycle. The mission-critical tasks are executed and scheduled dynamically by a Linux operating system, because the video processing has no static data flow and features can be switched on and off at runtime. So far the system has no runtime management to handle requirements regarding the extra-functional properties.
2.2.4 UAV platform runtime management
The Run-time Resource Manager (RTRM) will be in charge of assigning computing resources
to non-critical tasks, taking into account extra-functional requirements, i.e., timing, power and
thermal constraints mentioned in deliverable D1.2.1.
Since the two ARM CPU cores provided by the ZYNQ platform are shared between mission-critical and non-critical tasks, it is mandatory to bound the effects of executing non-critical tasks. For instance, we must avoid an unacceptable increase in chip temperature and keep the energy budget consumed by this class of tasks under control, so that the mission-critical tasks can fully exploit the resources provided by the platform.
2.2.5 Run-time resource management actions
As introduced in deliverable D3.3.1, the RTRM framework allows controlling resource allocation and usage, both at application and system level.
Since the non-critical tasks are expected to be image processing applications, it is reasonable to consider tunable algorithms. This means implementing adaptive applications, capable of producing results with different levels of accuracy according to the amount of available computing resources. For instance, if an application needs to run at a specific frame rate, and the assigned resources do not allow reaching that goal, it can decide to decrease the accuracy of its results. This behaviour also has an impact on resource utilization, and thus on the power consumption and temperature of the processor.
As depicted in Figure 2.7, the application is notified of the assigned resources (the Application Working Mode selected by the RTRM) through the Run-Time Library (RTLib). Being aware of the amount of exploitable resources, the application can implement the behaviour explained above, enabling a kind of cooperative resource management.
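As a minimal sketch of such cooperative behaviour (illustrative names and thresholds only, not the actual RTLib API), an adaptive application could map the CPU quota granted by the RTRM to an accuracy level:

```cpp
// Hypothetical adaptive-application logic: the RTRM grants a CPU quota
// (percent of one core) via an Application Working Mode, and the
// application picks the highest accuracy level that the quota sustains.
enum class Accuracy { Low, Medium, High };

// Illustrative per-level CPU demand to keep the target frame rate.
inline int quotaNeeded(Accuracy a) {
    switch (a) {
        case Accuracy::High:   return 90;
        case Accuracy::Medium: return 60;
        default:               return 30;  // Accuracy::Low
    }
}

// Pick the highest accuracy that fits within the granted quota.
inline Accuracy selectAccuracy(int grantedQuotaPercent) {
    if (grantedQuotaPercent >= quotaNeeded(Accuracy::High))   return Accuracy::High;
    if (grantedQuotaPercent >= quotaNeeded(Accuracy::Medium)) return Accuracy::Medium;
    return Accuracy::Low;
}
```

When the RTRM shrinks the Application Working Mode, the next frame is simply processed at the lower accuracy, trading result quality for a smaller resource footprint (and hence lower power and temperature).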
Independently of the cooperative behaviour of the applications, the RTRM exploits suitable interfaces provided by the Linux OS to monitor the status of the hardware and to enforce resource allocation.
The monitoring of the hardware relies on the availability of suitable sensors on the platform. Commonly, we are able to observe:
- the current clock frequency of the CPU;
- the temperature of the CPU;
- the state of charge of the battery.
However, some versions of the ZYNQ 7000 series are also equipped with controllers for voltage and current monitoring, making it possible to read the current power consumption of the computing platform. The enforcement of resource allocation, instead, exploits specific frameworks included in the Linux kernel, such as cpufreq and Linux Control Groups.
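On a Linux platform these observations are typically exposed through sysfs; the sketch below shows one plausible way to read them (the paths are the common defaults and may differ on a given board and kernel; the helper names are ours, not the RTRM’s):

```cpp
#include <fstream>
#include <string>

// Read a single integer value from a sysfs file, returning a fallback
// when the file is absent (e.g. no such sensor on this platform).
inline long readLongFrom(const std::string& path, long fallback) {
    std::ifstream f(path);
    long v = 0;
    return (f >> v) ? v : fallback;
}

// Current CPU clock frequency, in kHz (Linux cpufreq framework).
inline long cpuFrequencyKHz() {
    return readLongFrom(
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", -1);
}

// CPU temperature in Celsius (thermal zones report millidegrees).
inline double cpuTemperatureCelsius() {
    return readLongFrom("/sys/class/thermal/thermal_zone0/temp", -1000) / 1000.0;
}

// Battery state of charge, in percent (Linux power-supply class).
inline long batteryChargePercent() {
    return readLongFrom("/sys/class/power_supply/BAT0/capacity", -1);
}
```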
According to deliverable D4.1.1, non-critical tasks are not subject to timing requirements. Therefore the resource allocation policy to be implemented must assign resources taking into account the constraints on the battery lifetime and on the temperature of the avionics, with the RTRM ready to react to constraint violations. The actions that the RTRM can execute to guarantee these constraints are:
- perform frequency scaling on the CPU;
- re-define the CPU resource allocation;
- stop an application.
The first action requires setting the cpufreq framework governor to “userspace”. This allows the RTRM to set the CPU frequency according to the requirements of the non-critical tasks and the extra-functional constraints previously mentioned.
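A minimal sketch of this action, assuming the standard Linux cpufreq sysfs layout (writing these files requires root privileges; the helper names are illustrative, not the actual RTRM code):

```cpp
#include <fstream>
#include <string>

// Write a string value into a (sysfs) file; returns false on failure.
inline bool writeTo(const std::string& path, const std::string& value) {
    std::ofstream f(path);
    return static_cast<bool>(f << value);
}

// Switch the given CPU to the "userspace" governor, then set an explicit
// frequency in kHz, as the RTRM would do to enforce a thermal/energy goal.
inline bool setCpuFrequencyKHz(int cpu, long freqKHz) {
    const std::string base =
        "/sys/devices/system/cpu/cpu" + std::to_string(cpu) + "/cpufreq/";
    if (!writeTo(base + "scaling_governor", "userspace")) return false;
    return writeTo(base + "scaling_setspeed", std::to_string(freqKHz));
}
```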
The second consists of bounding the CPU utilization of the applications, by dynamically changing the number of CPU cores and/or the CPU time bandwidth assigned.
Figure 2.7 RTRM interactions with the hardware platform and the (non-critical) managed application
This is done thanks to the interface provided by the Linux Control Groups framework. …
shows an example of this resource management action.
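As an illustration (not the actual RTRM code), with the cgroup v1 cpu controller available in Linux kernels of this period, capping an application group at a fraction of one CPU amounts to writing a quota/period pair into the group’s directory:

```cpp
#include <fstream>
#include <string>

// Write an integer value into a cgroup control file.
inline bool writeValue(const std::string& path, long value) {
    std::ofstream f(path);
    return static_cast<bool>(f << value);
}

// Cap the CPU bandwidth of a cgroup: the group may run for quotaUs
// microseconds in every periodUs window, i.e. quotaUs/periodUs of one CPU.
// The cgroup directory (e.g. /sys/fs/cgroup/cpu/rtrm_app, a hypothetical
// group name) is assumed to exist and to be writable.
inline bool limitCpuBandwidth(const std::string& cgroupDir,
                              long quotaUs, long periodUs) {
    return writeValue(cgroupDir + "/cpu.cfs_period_us", periodUs)
        && writeValue(cgroupDir + "/cpu.cfs_quota_us", quotaUs);
}
```

For example, limitCpuBandwidth("/sys/fs/cgroup/cpu/rtrm_app", 50000, 100000) would restrict that group to 50% of one core; the RTRM can tighten or relax the quota at run time.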
Finally, since we are dealing with non-critical tasks, the RTRM can decide to stop an application (process killing), especially if some misbehaviour is detected, e.g., a very fast increase in temperature.
2.2.6 Power and thermal management abstractions
For the sake of portability, the RTRM framework implements an internal abstraction on top of the OS monitoring interfaces. This abstraction relies on the (C++) classes described below, and on their member functions.
PowerManager
Most of the member functions defined by this class are so-called “setter” and “getter” functions, and take as first parameter a “resource descriptor” object referencing a system hardware resource (e.g. the CPU).
Figure 2.8 Example of the RTRM re-assigning the CPU bandwidth time quota assigned to the application

Member function             Arguments   Description
GetLoad()                   Resource    Percentage of current resource usage.
GetTemperature()            Resource    Current temperature of the specified hardware resource, in degrees Celsius.
GetClockFrequency()         Resource    Current working frequency, in kHz.
GetAvailableFrequencies()   Resource    List of the available clock frequencies that can be set, in kHz.
SetClockFrequency()         Resource    Set a new clock frequency, in kHz.
GetVoltage()                Resource    Current voltage value, in mV.
GetVoltageInfo()            Resource    Voltage range allowed to power the hardware.
GetFanSpeed()                           Current speed of the fan installed on the platform for thermal control (if available), in RPM.
SetFanSpeed()                           Set the speed of the fan installed for thermal control.
GetFanSpeedInfo()                       Speeds supported by the fan installed for thermal control, in RPM.
ResetFanSpeed()                         Reset the speed of the fan to its default value.
GetPowerUsage()             Resource    Current power consumption of the specified resource (or of the system), if sensors are available.
GetPowerState()             Resource    Current power state, defined as an integer number. A power state implicitly defines a voltage/frequency configuration.
SetPowerState()             Resource    Set a power state, defined as an integer number. A power state implicitly defines a voltage/frequency configuration.
BatteryManager
This class implements the module for monitoring the status of the battery, returning run-time
data to other client modules. The module will provide information about the estimated
lifetime of the battery, allowing the resource allocation policy to take decisions accordingly.
Member function           Arguments   Description
IsDischarging()           Battery ID  Status of the battery, indicating whether it is discharging.
GetChargeFull()           Battery ID  Full-charge capacity, in mAh.
GetChargeMAh()            Battery ID  Charge level of the battery, in mAh.
GetChargePerc()           Battery ID  Charge level of the battery, as a percentage.
GetDischargingRate()      Battery ID  Current discharging rate, i.e., current drained, in mA.
GetEstimatedLifetime()    Battery ID  Remaining lifetime, in seconds, given the current charge level.
GetTargetLifetime()       Battery ID  The target lifetime constraint.
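The lifetime estimate behind GetEstimatedLifetime() can be sketched as follows (our own simplification, assuming a constant discharge rate; the function name is illustrative):

```cpp
// Remaining battery lifetime in seconds, from the remaining charge (mAh)
// and the current drain (mA): hours = mAh / mA, seconds = hours * 3600.
// Returns -1 when the battery is not discharging (rate unknown or zero).
inline long estimatedLifetimeSeconds(double chargeMAh, double dischargeRateMA) {
    if (dischargeRateMA <= 0.0) return -1;
    return static_cast<long>(chargeMAh / dischargeRateMA * 3600.0);
}
```

A 2200 mAh residual charge drained at 1100 mA gives two hours; the policy can compare such an estimate against GetTargetLifetime() and throttle non-critical tasks when it falls below the target.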
2.2.7 Development status
The RTRM mechanisms and interfaces described in the previous paragraphs have already been implemented and tested on different platforms running a Linux operating system, as summarized in the following table.
Platform           Processor architecture                                           Class
Desktops/Laptops   Intel x86                                                        32 and 64 bits
Pandaboard         ARM Cortex-A9 dual-core                                          32 bits
Insight ARNDALE    ARM Cortex-A15 quad-core                                         32 bits
ODROID XU-3        ARM Cortex-A15 quad-core + Cortex-A7 quad-core (big.LITTLE HMP)  32 bits
The ODROID XU-3, for instance, is an example of a platform comparable with the ZYNQ 7000, featuring an ARM CPU; it was chosen also for the availability of on-board sensors providing voltage and current monitoring. This has made possible the implementation of most of the monitoring interfaces defined in the PowerManager class.
Intel x86 laptops have been also used to test an implementation of the BatteryManager class,
on top of the ACPI interface.
Further developments will include the porting and testing of the RTRM on the computing
platform chosen for the avionics, and the development of a specific resource management
policy.
2.3 Use Case 1b: Remotely Piloted Aircraft I/O module Demonstrator Platform
As said in section 2.1, GMV has reused a subset of the FCC (Flight Control Computer) software developed for a medium-sized Remotely Piloted Aircraft. This subset, which includes most of the FCC’s I/O module components, has been adapted (according to the needs and the objectives pursued in CONTREX) into a mixed-criticality threaded model to be deployed onto the Xilinx Z-7020 MPSoC execution platform.
The ZYNQ MPSoC concurrently processes the sensor data processing tasks, encompassed by the Flight Control, Mission Control and Logging components, with different associated criticalities. In order to support the connections with all the sensor devices, the real ZYNQ platform has been extended by synthesizing extra UART ports in the programmable logic (the resulting platform, connected to the specific sensor devices, is shown in Figure 2.9).
Figure 2.9 Customized ZYNQ platform connected to IO sensor devices
For the study of the relevant extra-functional properties, system performance analyses and simulations will be performed. In order to enable them, different models of the execution platform had to be developed, according to the different analysis and simulation technologies and tools.
Although the Use Case 1b demonstrator is mainly focused on native (high-level) simulation, to be performed using UC’s VIPPE tool, a Virtual Platform model of the execution platform is also being developed, to be used as a reference for the evaluation of the VIPPE simulation technology (in terms of speed and accuracy).
2.3.1 RPA IO specific extensions of the demonstrator platform
As mentioned above, in order to support the connections with all the sensor devices, the real ZYNQ platform needed to be extended with extra UART ports. These ports, which were synthesized in the ZYNQ platform’s programmable logic, together with the two pre-existing
UART ports, the SDIO controller and the dual-core ARM Cortex-A9 processing system, form the basic HW architecture of the Use
Case 1b demonstrator's execution platform (sketched in Figure 2.10).
Figure 2.10 HW architecture of the customized ZYNQ platform
One of GMV’s main concerns in CONTREX is to improve the avionics development flow by
introducing an extra Design Space Exploration (DSE) stage. The main goal of the
DSE applied to Use Case 1b is to select a system configuration (i.e., a specific HW/SW
architecture) that fulfils the requirements and constraints on the extra-functional
properties specified for the system while minimizing the overall power consumption.
Thus, different HW/SW architectures, some of them requiring additional extensions to the
ZYNQ platform, will be evaluated.
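The selection step of such a DSE can be sketched as a constrain-then-minimize loop; the configuration names and the latency and power figures below are invented placeholders for illustration, not project data.

```python
# Exhaustive DSE sketch: keep the configurations meeting the
# extra-functional constraint (here, a single latency bound), then
# pick the one with the lowest power. All figures are hypothetical.

CANDIDATES = [
    # (name, worst-case latency in ms, average power in mW)
    ("dual_A9_linux",            4.0, 950.0),
    ("A9_linux_plus_microblaze", 6.5, 780.0),
    ("single_A9_xenomai",        9.0, 610.0),
]

LATENCY_BOUND_MS = 7.0

def explore(candidates, latency_bound):
    """Filter by the constraint, then minimize power.
    Returns None if no configuration is feasible."""
    feasible = [c for c in candidates if c[1] <= latency_bound]
    return min(feasible, key=lambda c: c[2]) if feasible else None

best = explore(CANDIDATES, LATENCY_BOUND_MS)
```

A real exploration would evaluate each candidate with the VIPPE simulations described above rather than with fixed numbers.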
For instance, two basic design alternatives (that form the design space) have initially been
considered. These alternatives reflect different mappings of application components
to platform resources; in one of them, it would be necessary to synthesize a MicroBlaze soft-core
processor in the ZYNQ platform's programmable logic, as shown in Figure 2.11.
Figure 2.11 Two basic architectural alternatives to be explored in Use Case 1b
It has to be noted that the final design space is still under consideration and that different or additional
design alternatives (to those shown above), with different implications on the execution
platform, might be defined. Out of those, some might actually be implemented in the real
platform while others could only be reflected in the platform models used for system
simulation (either VP-based or native).
2.3.2 RPA IO demonstrator platform models
In order to enable the high-level native performance analysis and simulation, the execution
platform needed to be modelled using the CONTREX design methodology (based on
UML/MARTE) and be supported by the analysis and simulation tools.
As was mentioned, the HW platform of the demonstrator consists of a Xilinx Z-7020 MPSoC,
which integrates a dual-core ARM® Cortex™ A9-based processing system and 28nm Xilinx
programmable logic in a single device. Extra UART ports have been synthetized in the
programmable logic in order to enable the communications with all sensor devices.
Figure 2.12 shows the architectural description of the demonstrator using the CONTREX
UML/MARTE modelling language. It includes the architecture of the HW platform as well as
the preliminary approach to the SW platform (Linux OS and three different memory spaces),
which will be later refined in order to fit the design alternatives shown in Figure 2.11.
Figure 2.12 Use Case 1b demonstrator’s platform architecture
The platform's model depicted in the previous figure reflects the architectural details of the
platform, including additional details regarding the interconnection architecture (buses,
bridges traversed) that were not included in Figure 2.10 but can have a potential impact on
performance. Technological implementation aspects (e.g., whether a UART device is a pre-
existing device of the Zynq processing system or a configurable block within an FPGA) have
been omitted. Instead, these aspects can be captured in the characterization of the IO devices.
This information, which can be useful in the performance assessment (and thus in DSE), is
not captured at this stage of the development; however, the model is ready to incorporate it
easily.
Figure 2.13 shows the definition of the HW resources that have been instantiated in Figure
2.12. There, it can be observed, for instance, that different component declarations are used
for the UARTs integrated in the Zynq processing system (“ZynqPS_UART”) and for the
UARTs implemented on the FPGA (“AXI_UART_16550”).
Figure 2.13 HW resources used in Use Case 1b demonstrator’s platform
2.3.3 RPA IO platform runtime management
As shown in Figure 2.11, the two different design alternatives would include the Linux OS
and the Xenomai real-time kernel. In the case of the tasks running on the basic Linux kernel, a
simple scheduling mechanism based on a FIFO rule would be used. In the case of the tasks
running on the Xenomai kernel, a priority-based dynamic scheduling mechanism would be
employed, according to the results obtained from the corresponding schedulability analysis
performed over the real-time system.
The tasks corresponding to the Flight Control IO component in the second design alternative
would be statically scheduled and then implemented bare-metal on the MicroBlaze processor.
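The schedulability analysis mentioned above is typically a fixed-priority response-time analysis; a minimal sketch is shown below, with a purely hypothetical task set rather than the actual Use Case 1b workload.

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i under preemptive fixed-priority
    scheduling: R = C_i + sum over higher-priority j of ceil(R/T_j)*C_j.
    tasks: list of (C, T) pairs ordered by decreasing priority; deadlines
    are assumed equal to periods. Returns None if the task misses it."""
    C_i, T_i = tasks[i]
    R = C_i
    while True:
        R_next = C_i + sum(math.ceil(R / T_j) * C_j
                           for C_j, T_j in tasks[:i])
        if R_next == R:
            return R
        if R_next > T_i:
            return None
        R = R_next

# Hypothetical task set: (worst-case execution time, period) in ms.
tasks = [(1, 4), (2, 8), (3, 16)]
worst_case = [response_time(tasks, i) for i in range(len(tasks))]
```

The iteration converges because the interference term is monotonic; a task set is deemed schedulable when every response time stays within its deadline.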
As already mentioned in section 2.3.1, the actual design space is still under consideration and
thus different or additional design alternatives, including different runtime management
systems to those mentioned above, might be finally defined.
During system simulations, extra features are required in order to manage extra-functional
properties. Power and temperature models have to be attached to the simulator, which
moreover needs to include some kind of introspection mechanism that enables gathering the
evolution of extra-functional properties over time. A tracing engine that records extra-
functional property values for further evaluation will also be provided.
3 Automotive Telematics Demonstrator Platform
Several non-automotive companies provide private and/or fleet vehicle drivers with a support
service in case of accident. The architecture is based on three main components: a sensing
unit for acceleration measurements, a localization unit for GPS reading, and a data processing
and communication unit for the identification of accidents and the communication of position data either
to public authorities (hospital, police) or to private support providers. The Automotive
Telematics Demonstrator will extend the commercial solution provided by Cobra to a new
generation that will provide more functionality with a better performance/power trade-off, as
already described in D1.2.1.
A general overview of the CONTREX tools and workflow used in the Automotive Telematics
Demonstrator is shown in Figure 3.1, where the highlighted parts are the subject of this
section.
Figure 3.1 CONTREX flow for the Automotive Telematics use case
3.1 Brief description of the demonstrator platform
At the moment, the preferred architectural option is reported in Figure 3.2.
Figure 3.2 Final architecture of the complete Automotive Telematics application
From a functional point of view, the iNemo represents the low-cost sensing unit, the SeCSoC is the
high-end sensing node, and the Eurotech gateway with the Kura platform is the main ECU
that will interface with the cloud. The corresponding block diagram is reported in Figure 3.3.
The communication between the sensing units will be realized through a proprietary serial
protocol.
Figure 3.3 Block Diagram of the Automotive Telematics use case
3.1.1 The low cost sensing unit: iNemo-M1
The low-cost sensing unit will be implemented on the iNEMO-M1 platform, provided by ST.
Figure 3.4 The iNEMO M1 System-on-Board
The iNEMO-M1 is the first 9-DOF motion sensing System-on-Board (SoB) of the iNEMO
module family. It integrates multiple MEMS sensors from ST and a computational core:
The LSM303DLHC e-compass module. The e-compass module LSM303DLHC is a
system-in-package featuring a 3D digital linear acceleration sensor and a 3D digital
magnetic sensor. The accelerometer has full scales of ±2g/±4g/±8g/±16g and the
magnetometer has full scales of ±1.3/±1.9/±2.5/±4.0/±4.7/±5.6/±8.1 gauss. All full
scales available are selectable by the user.
The L3GD20 digital gyroscope. The L3GD20 is a low-power digital gyroscope able
to sense the angular rate on the three axes. It has a full scale of ±250 / ±500 / ±2000
dps and is capable of measuring rates with several bandwidths, selectable by the user.
The STM32F103REY6 ARM® Cortex™-M3 32-bit microcontroller.
This 9-DoF inertial system represents a fully integrated solution that can be used in a broad
variety of applications such as robotics, personal navigation, gaming and wearable sensors for
healthcare, sports and fitness. A complete set of communication interfaces, motion-sensing
capabilities in a small form factor (13×13×2 mm) and the possibility to embed
ST's sensor-fusion software make the iNEMO-M1 system-on-board a flexible solution for
high-performance, effortless orientation estimation and motion-tracking applications. The
STM32F103REY6 high-density performance line microcontroller is the computational core
of the iNEMO-M1 module. It operates as the system coordinator for the on-board sensors and
the several communication interfaces. Exploiting the features of the MCU, the iNEMO-M1
offers a wide set of peripherals and functions such as 12-bit ADCs, DAC, general-purpose 16-
bit timers plus PWM timers, I2C, SPI, I2S, USART, USB and CAN, that enable different
operative conditions and several communication options.
3.1.2 The High end Sensing Unit: SeCSoC
The prototypical version of the high-end node is based on a system-on-chip specifically
intended for processing images captured by a low-power CMOS camera. The architecture of
the SeCSoC (shown in Figure 3.5) extends the typical structure of a high-end microcontroller
with specific modules for image processing, ultra-low-power analog modules, and power-island
and clock-gating capabilities. The SoC is based on the multi-core R4MP processor.
The board (in Figure 3.6) that will be provided has a size similar to a credit card, and it
includes two VGA sensors, optional sensor connectors, USB, SPI, UART and JTAG host
connectivity, digital stereo microphones, and pressure and temperature sensors. The adoption
of multi-core technology also in some of the sensing units will allow for an improved
management of extra-functional aspects such as power consumption, execution time, quality
of service, security and reliability. The proprietary cores can be programmed through a
development toolchain (based on gcc and GNU binutils) released by ST. Moreover, peripheral
library APIs are available.
Figure 3.5 System-on-Chip SeCSoC architecture
Figure 3.6 SeCSoC board
3.1.3 The main ECU: EUTH MiniGateway High
The EUTH MiniGateway is a compact device designed to support M2M (Machine to
Machine) applications. As described in the D3.2.1 deliverable, the EUTH MiniGateway will
be adopted in the automotive use case (UC2) as a bridge between the Cobra main ECU
installed in the car and the cloud platform that will gather all the data obtained from the
vehicles.
The D3.2.1 document introduced the feasibility analysis of the porting of the Kura open-
source Java-based framework to the Cobra ECU. From the feasibility analysis it emerged that,
with the HW and SW resources available on the current version of the Cobra ECU, the
porting is not possible; it has therefore been decided to adopt a decoupled solution that uses the
MiniGateway as a bridge between the Cobra ECU and the cloud platform. This allows Kura to be
integrated and used directly on the EUTH MiniGateway, reducing potential
integration issues and splitting the management across different devices with separate
tasks.
Since the description provided in the D3.2.1 deliverable, considerable effort has been spent with the
project partners to clarify and organize the different steps of the proposed solution. In this
context, Eurotech worked to produce a first working prototype of the EUTH MiniGateway,
following the initial specifications defined in the D3.2.1 document. The result of this work is
explained in the next section.
After the first set of prototypes, EUTH started the design of a new version of the
MiniGateway, called MiniGateway v2, with the objective of solving the issues and bugs
identified in the first version and of introducing useful new features.
3.1.3.1 EUTH MiniGateway Prototype description
Starting from the hardware and software specifications provided in D3.2.1, EUTH developed
the first set of prototypes of the MiniGateway.
Figures 3.7 to 3.10 provide some photos of the device.
Figure 3.7 Front view of the MiniGateway enclosure
Figure 3.8 Back view of the prototype
Figure 3.9 Overview of the prototype’s interfaces
Figure 3.10 Overview of a different prototype where the antennas are external
The final result is a very small device, measuring only 140 × 80 × 32 mm and weighing
only 160 g. In line with the objectives defined in the D3.2.1 deliverable, the device is
completely fanless, has no moving parts, and has a peak power consumption of only 2.5 W.
These characteristics, combined with the fact that the device can be powered by a 12 V
supply, make the MiniGateway well suited for automotive environments.
The prototypes have been tested and only minor issues have been identified. This version will
be used in the first version of the automotive demonstrator.
Figure 3.11 Screw-type connector pinout
3.1.3.2 EUTH MiniGateway V2
Based on the previous version of the MiniGateway, EUTH decided to improve the quality of
the prototype and to introduce new functionalities. The design activities for the gateway
evolution will focus on two main areas:
solving the issues found in the first version of the MiniGateway;
introducing new functionalities that extend the set of services that can be provided.
The new version of the MiniGateway will try to solve some bugs found in the first prototype
related to file-system corruption. During the test and debugging phase, it emerged that the
file system becomes corrupted if a write is interrupted, for example by a device
reset or a memory-card ejection.
To cope with this problem, the software of the first prototype has been adapted to keep
frequently written data (logs, databases, etc.) in memory as much as possible, preventing
unwanted file corruption and reducing the number of disk write operations. In this
way the likelihood of file-system corruption is minimized while preserving the speed and
functionality of the device.
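The write-reduction strategy described above can be sketched as an in-memory buffer flushed in batches; this is only an illustration of the idea, not Eurotech's actual implementation.

```python
class BufferedLog:
    """Keep log records in RAM and flush them to persistent storage in
    batches, reducing the number of (corruption-prone) write operations."""

    def __init__(self, flush_threshold=64):
        self.buffer = []
        self.flushed = []          # stands in for the on-disk file
        self.flush_threshold = flush_threshold
        self.write_ops = 0

    def append(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flushed.extend(self.buffer)   # one write per batch
            self.buffer.clear()
            self.write_ops += 1

log = BufferedLog(flush_threshold=4)
for i in range(10):
    log.append(f"event {i}")
log.flush()                        # final explicit flush
```

With ten records and a threshold of four, only three write operations reach the storage instead of ten, narrowing the window in which a reset can corrupt the file system.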
The new version of the MiniGateway will be designed and implemented adopting software
and, mainly, hardware solutions that will prevent this issue.
Regarding the new features that will be introduced in the new design, the following list
reports the main improvements and key features that will be provided with MiniGateway V2:
a more powerful processor with a clock of 800 MHz;
512 MB of DDR3 RAM, extendable up to 1 GB;
4 GB of eMMC storage (extendable up to 16 GB);
2 x general purpose USB ports;
1 x USB port usable for add-on module connection;
1 x USB OTG port (on mini PCI express connector);
2 x isolated digital inputs;
2 x isolated digital outputs;
2 x RS-232/RS-485 configurable serial ports;
1 x slot for expandable storage;
Cellular modem with integrated GPS functionalities;
802.11 Wireless LAN;
Bluetooth 4.0 BLE.
Optionally, the gateway can have:
2 x CAN ports;
a 3-axis accelerometer;
a temperature sensor;
a Trusted Platform Module;
a stand-alone GPS module.
The new device will natively run the Yocto embedded Linux operating system, version 1.6.
Regarding the physical characteristics, the new device will have roughly the same size as
the first prototype, with comparable power consumption.
Figure 3.12 depicts a possible port arrangement of the new device. This design is currently
under discussion and can be adapted depending on the needs that the automotive use case will
raise in future steps. The same approach applies to the enclosure (see Figure 3.13). The
dark grey part on the side of the enclosure is the external cellular module ReliaCELL 10-20,
already available in the EUTH product portfolio.
Figure 3.12 Possible port position for the new device
Figure 3.13 First study of the enclosure of the EUTH MiniGateway
3.2 AT demonstrator platform models
3.2.1 Energy/Timing models for the node sensors
3.2.1.1 The low cost sensing unit: iNemo-M1
Based on a first, coarse-grained set of power figures derived from the detailed documentation
of the sensor devices and of the MCU, provided by ST and ST-POLITO,
important architectural decisions have been made (see Section 5.3.4). POLIMI and ST-
POLITO have then defined the optimal granularity for the refined characterization. These
results will be used in the non-functional node simulator developed by POLIMI.
The iNEMO-M1 application model used for node simulation will consider several power
characterization profiles according to the application scenarios defined for this use case.
In each of them, every component of the iNemo can be in its normal or low-power running mode or,
alternatively, in its sleep mode, where the power consumption can be considered very low.
The general diagram shown in Figure 3.14 can be considered as the typical state machine
representing the operating cycle of each sensor device (accelerometer, magnetometer and
gyroscope) belonging to the iNemo system:
Figure 3.14 iNemo sensor device operating states for power characterization
For each state, the model shall be fed with the information summarized below:
Power-down & sleep
Average current consumption when the device is only power supplied, but it is in its lowest
power consumption state.
Idle (not sampling, not reading)
Average current consumption when a sensor device is neither involved in any data sensing or
conversion nor communicating the acquired data to a host device (the MCU in our case).
Sampling
Average current consumption and conversion time, or, equivalently, energy per operation.
This state refers to a single operation of sampling and conversion of the three (or six) axes for
each inertial motion device included in iNemo-M1.
Reading
Average current consumption for reading out a single triplet of samples, or, equivalently, average current
consumption for reading the entire FIFO.
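Given average-current figures for each state of Figure 3.14, the node simulator can accumulate energy over a trace of (state, duration) pairs roughly as follows; the supply voltage and current values below are invented placeholders, not characterization data.

```python
SUPPLY_V = 3.0  # hypothetical supply voltage, in volts

# Hypothetical average current per operating state (mA); real values
# come from the device characterization described in the text above.
AVG_CURRENT_MA = {
    "power_down": 0.001,
    "idle":       0.2,
    "sampling":   4.0,
    "reading":    1.5,
}

def energy_mj(trace):
    """trace: list of (state, duration_in_seconds) pairs.
    Returns the total energy in millijoules: E = sum(V * I * t)."""
    return sum(SUPPLY_V * AVG_CURRENT_MA[state] * dt
               for state, dt in trace)

# One hypothetical sampling cycle: mostly idle, brief sample and readout.
trace = [("idle", 0.9), ("sampling", 0.05), ("reading", 0.05)]
```

The same accounting extends to the MCU by enumerating its core/peripheral/frequency state combinations instead of the four sensor states.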
Concerning the microcontroller core, the simulation model defines different states by
combining the state of the peripherals (on, no clock, no power), the state of the core (normal
mode, stop mode, stand-by mode) and the operating frequency. A precise characterization of
all possible combinations of such states is not necessary for the level at which the simulation
will be performed and most of the power figures can be found in the data sheet.
In addition to the core and the devices, the iNEMO M1 System-on-Board integrates power
supplies, voltage regulators and a few other analog components. To provide a reliable model for
simulation, the quiescent power consumption (current absorption) of these sections will be
provided.
3.2.1.2 Power Supplies
Battery SOC (and possibly lifetime, in case the discharge exceeds the battery capacity) will
be tracked using a circuit-equivalent battery model that is able to calculate, at any point in time,
the residual available charge (battery State-Of-Charge or SOC) based on a power request. The
model is one of the two options described in D3.1.1 (see Figure 3.15), and can be
characterized using the method described in that deliverable, starting from a set of
specifications easily available in most datasheets.
Figure 3.15 Battery model used in the simulation
The figure shows the conceptual behaviour of the model, which can be described either in
SystemC/AMS or in Matlab/Simulink depending on the needs.
Since the voltage value(s) used in the device are likely to differ from the nominal
battery output voltage (around 4 V for a Li-Ion cell), and also for voltage regulation reasons, a
DC-DC converter is needed. A model of a converter can be cascaded to the battery model;
this converter model can again be one of two types: a circuit-equivalent model (using the
netlist specified on the datasheet of the target devices to be used), or a functional model
expressing the converter efficiency, i.e., the ratio of output (converter) power to input
(battery) power, in terms of the relevant quantities (typically, the output (load) current and
the difference between input and output voltages).
In this case, based on our experience, it is more general to use such a functional model rather
than a circuit model.
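The coulomb-counting core of such a battery model, cascaded with a functional converter-efficiency model, can be sketched as follows; the capacity, voltage and efficiency figures are illustrative, not taken from any datasheet.

```python
class BatteryWithConverter:
    """Sketch of the cascaded model: coulomb counting for the SOC, plus
    a functional DC-DC stage mapping load power back to battery power
    through a (here constant) efficiency factor."""

    def __init__(self, capacity_mah=1000.0, v_batt=4.0, efficiency=0.9):
        self.capacity_mah = capacity_mah
        self.charge_mah = capacity_mah
        self.v_batt = v_batt
        self.efficiency = efficiency   # P_out / P_in of the converter

    def step(self, v_load, i_load_ma, dt_h):
        """Consume one trace sample: load voltage (V), load current (mA),
        duration (hours). Returns the updated SOC in percent."""
        p_load = v_load * i_load_ma            # mW at the converter output
        p_batt = p_load / self.efficiency      # mW drawn from the battery
        i_batt_ma = p_batt / self.v_batt
        self.charge_mah = max(0.0, self.charge_mah - i_batt_ma * dt_h)
        return self.soc()

    def soc(self):
        return 100.0 * self.charge_mah / self.capacity_mah

batt = BatteryWithConverter()
soc = batt.step(v_load=3.3, i_load_ma=100.0, dt_h=1.0)
```

A "battery exhausted" condition like the one described for the SOC signal is then simply a threshold test on the returned percentage; a real implementation would make the efficiency depend on load current and voltage difference, as the text notes.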
The overall model used in the simulator, and in particular its interface, is depicted in Figure
3.16. The basic signals to be traced
are the V and I signals; these are inputs to the model, and are derived from a
trace where, at each time stamp, the load current (requested from the battery) and the
corresponding voltage level are reported.
The model has two extra interfaces.
Signal SOC is a status signal and reports the SOC (as a percentage) of the battery. It can be used
to generate a condition such as "battery exhausted" when it falls below some pre-defined
value, which can also be different from 0%.
Signal Enable is a control signal that can be used to "detach" the battery
from the load in particular cases. It will not initially be used in our simulations.
Figure 3.16 Battery and converter model interface
3.2.1.3 The High end Sensing Unit: SeCSoC
The SeCSoC platform will be modelled using Docea Power's Aceplorer; the modelling flow
being developed is shown in Figure 3.17.
A cycle-approximate simulator provided by ST has enough detail to execute the actual
embedded software, but is still fast enough to run non-trivial applications. This model must be
augmented with non-functional aspects to show the developer the effect of running the
embedded software from the power point of view. The results of the simulations can hardly be
as precise as RTL or gate-level simulation, but they are the only option for early system-level
modelling, and an improvement compared to handwritten scenarios. The simulation provides
functional stimuli to the Docea Power model for the power simulation.
Figure 3.17 ACEplorer and Cycle simulator
The ST simulator provides power- and performance-related activity information and recognizes
the power states of the system.
Four power states have been defined; their characterization is an ongoing activity, and the
results will be provided as input to the Aceplorer power model of the platform.
3.3 AT platform runtime management
3.3.1 Node-level extra-functional monitoring infrastructure
This section describes the lightweight framework used to monitor extra-functional properties
of applications in microcontroller-based nodes, and specifically in the iNemo-based low-cost
sensing unit used in the Automotive Use-Case.
The main purpose of the framework is to monitor system properties that depend on the run-
time context and activities of the application but that are not directly related to the
application's functionality. The most common metrics that have been considered for the
first implementation of the framework are:
Counters. This class of metrics is devoted to monitoring specific events occurring during the
application execution, such as accesses to device drivers, statistics on data-dependent
execution paths, errors, and so on. Such metrics are expressed in terms of dimensionless
counters.
Timing. This class of metrics collects time-related properties of the application. The main
metrics in this class are execution time (at task level and, if required, at a finer granularity),
latencies, response times, throughput and so on.
Power/Energy. This class of metrics collects information related to the power consumption.
From a logical point of view, power consumption can be associated both with physical devices
(sensors, regulators, microcontroller, …) and with software functions. In the latter case, the
measure refers to the power consumption associated with the microcontroller during the
execution of the specific function.
Costs. This class of metrics has been conceived to account for the costs associated with data
communication over the radio interface and is mostly related to the amount of data being
transmitted, the details of protocol-specific packet formats (size, overhead, …) and
handshake policies (retransmissions).
From the implementation point of view, the framework is implemented as an external library
to be linked with the application. The framework is meant to be general in terms of the
extra-functional properties to be monitored, yet it is closely tied to both the application and
the platform. For this reason, the framework has been designed to be used within UC2 as a
monitoring library.
The framework provides an application with the ability to monitor the desired metrics at runtime
with function-level granularity. For each function, the developer defines the relevant metrics
to observe for every device on the platform (in the general case the system is composed of
several processing elements besides the main processor, such as accelerometers and I/O
devices). Additionally, in order to gather the measures, the developer must instrument the
region of code to be observed. Once these operations are performed, the framework
automatically senses the system, stores the observed values while the application is
running, and makes them available to the specific portions of the application responsible for using
such measures to implement local run-time management and/or to export them to the
cloud.
The framework is based on four main concepts:
Devices. A device is a physical component of the system that can be profiled in terms of
extra-functional properties.
Metrics. A metric is any extra-functional property relevant for the application, even if it is not
related to a physical quantity. For example, time and energy are metrics, but the amount
of data transferred by the system can also be a metric.
Measures. A measure is defined by the metric, a numeric value and the related device. The
metrics and the devices must be defined in the configuration of the framework.
Events. The events are meant to express concepts such as “the frequency of the
microcontroller has changed to 200 MHz” or “the UART has consumed 20 mJ”.
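These four concepts map naturally onto a small data model. The sketch below is written in Python for brevity (the actual library targets a microcontroller), and the device and metric names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Device:
    """A physical component of the system that can be profiled."""
    name: str

@dataclass(frozen=True)
class Metric:
    """Any extra-functional property of interest (time, energy, bytes...)."""
    name: str
    unit: str

@dataclass
class Measure:
    """A metric, a numeric value, and the device the value refers to."""
    metric: Metric
    value: float
    device: Device

@dataclass
class Event:
    """E.g. 'the UART has consumed 20 mJ'."""
    description: str
    measure: Measure

class Monitor:
    """Minimal store of runtime measures, queryable per metric/device."""
    def __init__(self):
        self.measures = []

    def record(self, metric, value, device):
        self.measures.append(Measure(metric, value, device))

    def total(self, metric, device):
        return sum(m.value for m in self.measures
                   if m.metric == metric and m.device == device)

uart = Device("uart")
energy = Metric("energy", "mJ")
mon = Monitor()
mon.record(energy, 12.5, uart)
mon.record(energy, 7.5, uart)
event = Event("the UART has consumed 20 mJ",
              Measure(energy, mon.total(energy, uart), uart))
```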
The library has currently been tested by integrating it into a dummy application composed of a
limited number of tasks, each executing one or two functions. The models required for
metric estimation are under development by the Use-Case partners and will be adapted,
implemented and integrated in the coming months.
3.3.2 BBQLite
In the automotive use-case, and in particular on the low-cost sensor node, the monitoring
infrastructure, in conjunction with compile-time simulation-based characterizations, will be
used as the source of information driving the behaviour of the system in varying operating
conditions. The management of the node’s hardware devices and software activities
(functions and tasks) will be under control of the BBQLite run-time manager.
A simplified view of the interaction of BBQLite with the configuration tools (design-time),
the application and the monitoring infrastructure (run-time) is shown in D4.1.1 and is reported
here in Figure 3.18 for the sake of readability.
Figure 3.18 Design-time and run-time interactions of BBQLite in UC-2
Run-time actions performed by BBQLite are based on three classes of information, namely:
Functional Status. The application's functional status, also referred to as the operating mode.
Given the requirements of the automotive application, a completely autonomous management
system based solely on non-functional properties will not satisfy the availability and functional
needs. For this reason it is necessary to introduce the notion of operating mode, which expresses
the current functional status of the system, e.g. the motion status of the vehicle or the status of
the dashboard key. Associated with such states, different sets of functions shall be mandatorily
enabled/disabled, leaving to the non-functional manager the role of managing power (and
other) optimizations, possibly at the cost of a degradation of the function being performed.
Extra-functional status. This information consists of the collection of metrics exposed by the
non-functional monitoring infrastructure. A separate configuration is required to generate the
necessary code and hooks enabling non-functional metric estimation, collection and access.
Design-time configuration. The configuration depends on the results of application and node
event-driven simulation, combined with user-defined policies explicitly specified by the
application's developer. All this information is fed to the BBQLiteConf configuration tool to
generate the application-specific portions of the BBQLite run-time manager. Note that in a
more general scenario, characterized by less stringent memory/performance constraints, the
configuration generated by BBQLiteConf takes the form of a (serialized) data structure parsed
by the BBQLite engine at runtime and does not require code generation and re-linking of the
manager engine.
Figure 3.19 shows the relation among the different components of the run-time infrastructure
and the nature of the information being exchanged.
Figure 3.19 Integration of BBQLite with the extrafunctional monitoring infrastructure
As the figure shows, the BBQLite engine is not executed at the task level, but rather
as a sort of service that tasks can rely upon. This implementation (currently being finalized)
can be improved by splitting the BBQLite engine into two portions:
A core part running as a service and implementing the interactions between lower level
information and actions and the higher level tasks.
A task-level part belonging to the application level that can be executed periodically, upon
request or a combination of these two cases from all other tasks. The interaction between
functional tasks and the task-level BBQLite portion will be implemented with standard IPC
mechanisms.
In addition to this extension, the current implementation is being revised and completed in
order to facilitate integration with the ARM RL-RTX microkernel.
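The interaction between the functional tasks and the task-level BBQLite portion could be organized around a simple message queue. The sketch below is portable C with purely illustrative message types; on RL-RTX the ring buffer would typically be replaced by a kernel mailbox, and this is a sketch under those assumptions, not the finalized design:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative message types exchanged between functional tasks and the
 * task-level BBQLite portion (assumed names, not the actual protocol). */
typedef enum {
    BBQ_MSG_METRIC_UPDATE,  /* a monitored metric crossed a threshold */
    BBQ_MSG_MODE_CHANGE,    /* the operating mode changed             */
    BBQ_MSG_RECONF_REQ      /* a task asks for reconfiguration        */
} bbq_msg_type;

typedef struct { bbq_msg_type type; uint32_t arg; } bbq_msg;

/* Single-producer/single-consumer ring buffer standing in for an IPC
 * mailbox; on RL-RTX this would map onto the kernel mailbox service. */
#define QLEN 8
static bbq_msg queue[QLEN];
static unsigned head, tail;

static int bbq_post(bbq_msg m)      /* called by functional tasks  */
{
    if ((tail + 1) % QLEN == head) return 0;  /* queue full: drop  */
    queue[tail] = m;
    tail = (tail + 1) % QLEN;
    return 1;
}

static int bbq_poll(bbq_msg *out)   /* drained by the BBQLite task */
{
    if (head == tail) return 0;     /* nothing pending             */
    *out = queue[head];
    head = (head + 1) % QLEN;
    return 1;
}
```

Decoupling the two portions through messages like these keeps the core service free of application-level blocking and lets the task-level part run periodically or on demand, as described above.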
4 Telecom Demonstrator Platform
4.1 Brief description of the demonstrator platform
4.1.1 The current platform
The Ethernet over Radio System mainly consists of an OutDoor Unit (Figure 4.1), the ODU-
IP-LC (also abbreviated as ODU or ODU-IP). The ODU Unit encapsulates Ethernet packets
in a GFP frame, modulates, and sends them on the radio channel. Management packets are
redirected to a Controller Unit.
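The GFP encapsulation step mentioned above prepends, among other fields, a core header consisting of a 16-bit payload length indicator (PLI) protected by a CRC-16 check field (cHEC), as specified in ITU-T G.7041. The sketch below assumes the CCITT polynomial 0x1021 with zero initialization and no final XOR, which is our reading of the specification and should be verified against it:

```c
#include <assert.h>
#include <stdint.h>

/* Bitwise CRC-16, polynomial x^16 + x^12 + x^5 + 1 (0x1021), zero init,
 * no final XOR -- assumed parameters for the GFP cHEC. */
static uint16_t crc16_ccitt(const uint8_t *data, int len)
{
    uint16_t crc = 0;
    for (int i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Fill the 4-byte GFP core header for a payload area of 'len' bytes:
 * 2-byte PLI in network byte order, followed by the cHEC over the PLI. */
static void gfp_core_header(uint16_t len, uint8_t hdr[4])
{
    hdr[0] = (uint8_t)(len >> 8);
    hdr[1] = (uint8_t)(len & 0xff);
    uint16_t chec = crc16_ccitt(hdr, 2);
    hdr[2] = (uint8_t)(chec >> 8);
    hdr[3] = (uint8_t)(chec & 0xff);
}
```

The cHEC is what allows the receiving modem to delineate frames on the radio channel by hunting for positions where the checksum over the candidate PLI verifies.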
Figure 4.1 The current implementation of the OutDoor Unit (ODU)
The ODU implements all functionalities required by the system, in particular:
Signal base band processing
Modem stage
RF interface
User Ethernet interface
The ODU Card houses a Freescale communication processor based on the PowerPC core
(MPC880) with abundant RAM (64MB) and FLASH (32MB), one Ethernet interface (PoE)
and I2C/SPI bus support. It is capable of running a pre-emptive real-time operating system
such as Linux with Xenomai and a full-featured network stack that gives the developer access
to a pool of ready-to-run software applications. In particular the board can run an SNMP agent, a
WEB server and different processes to control the devices forming the system. The operating
system is capable of managing hard real-time events.
The full version of the current ODU Hardware architecture is depicted in Figure 4.2.
Figure 4.2 ODU Hardware Architecture (full version)
The current ODU board Hardware details are listed below:
Freescale MPC880 PowerQUICC
The main functional control blocks are:
- Microprocessor (uP) manages the ODU card, handles alarms, handles the
protection switch protocol and communicates with a remote Element Manager,
a Local Craft Terminal and the ODU partners (local and remote);
- Memory and Peripherals, e.g. SDRAM, Flash EPROM (serial and parallel),
FPGA and other devices used for management of RF channel.
LATTICESEMI LFE2M35E-5F256C FPGA (Ethernet Layer 2 Switch + MODEM
functionalities)
The Switch performs the following functions:
- Routing of LCT controller messages based on MAC address and optionally on
VLAN tag.
- Routing of CT traffic from/to Cable Interface to/from INT A of Modem, based on
proprietary VLAN tag. No buffering (apart from that required by the
store-and-forward mechanism) or flow control is applied to this traffic between
Switch and Modem. When the radio channel is not available the traffic is dropped.
- Routing of DUT (Data User Traffic) from/to Cable Interface to/from INT B of
Modem, based on MAC address and optionally on VLAN tag. This traffic to the
radio link can be served by four priority queues to implement QoS, based on IEEE
802.1p, IP TOS&DS, VLAN ID and MAC address. Because of the limited
capacity available on the radio link, a PAUSE flow control is applied on INT B to
stop forwarding frames from Switch to Modem when the Modem Tx buffer is full.
The incoming messages from the Cable interface are buffered in the Switch
priority queues according to VLAN tag. The queues are emptied according to their
priority. When the queues are full, subsequent messages are discarded.
- Routing of Management information from/to System controller to/from INT B of
Modem (radio channel), based on MAC address and optionally on VLAN tag.
Memories:
o 32-bit-wide 64 Mbyte DDR3 SDRAM for data handling and program
execution
o 32 MByte NOR Flash memory for SW code and FPGA netlist
o 1MB Serial Flash EPROM for station data
Parallel and serial buses to/from memory and peripheral devices:
o microprocessor local bus at 66 MHz with demultiplexed address, data and control
lines required to access SDRAM, Flash EPROM, FPGA and other devices
o SPI bus required to access the external serial Flash EPROM for station
inventory data and some of the RF section's devices
o I2C bus required to access some devices used in the RF section.
The Serial Management Controller is used as a Debug Serial Interface for HW and
SW debugging purposes.
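The four-queue strict-priority behaviour described above for the DUT routing (classify each frame, drop it when its queue is full, serve the highest-priority non-empty queue first) can be sketched in a few lines of C. The queue depth and the mapping from the 3-bit 802.1p priority code point onto the four queues are illustrative assumptions, not the actual Switch configuration:

```c
#include <assert.h>

#define NQUEUES 4   /* four priority queues towards the radio link */
#define DEPTH   16  /* assumed per-queue capacity, in frames       */

static int fill[NQUEUES];  /* frames currently held per queue */

/* Map the 3-bit 802.1p PCP onto the four queues (assumed mapping). */
static int classify(int pcp) { return pcp / 2; }

/* Enqueue a frame; returns 1 if accepted, 0 if dropped (queue full). */
static int enqueue(int pcp)
{
    int q = classify(pcp);
    if (fill[q] >= DEPTH) return 0;
    fill[q]++;
    return 1;
}

/* Strict-priority service: drain the highest non-empty queue first.
 * Returns the index of the serviced queue, or -1 if all are empty. */
static int dequeue(void)
{
    for (int q = NQUEUES - 1; q >= 0; q--)
        if (fill[q] > 0) { fill[q]--; return q; }
    return -1;
}
```

In the real design the dequeue side is additionally gated by the PAUSE flow control towards the Modem, so draining stops while the Modem Tx buffer is full rather than running freely as in this sketch.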
4.1.2 The new platform
Because a baseline implementation of the demonstrator already runs on MPC880 hardware,
the main Intecs goal within CONTREX is to provide a basis for extending the company's
know-how in the CONTREX innovation domains (in particular the virtual platform and power
estimation domains) in order to integrate them into future product development processes.
The Telecom Use case platform model, mainly focused on the exploitation of the Open
Virtual Platform environment, is fully explained in a dedicated section of this deliverable.
Moreover, Intecs is interested in cross-verification of the results obtained using the virtual
platform and the power/thermal tool-set. For that reason, Intecs
intends to port its current implementation of the Ethernet Over Radio application to a real
Zynq board (Zynq-7000 XC7Z045 FFG900 – 2), depicted in Figure 4.3.
Figure 4.3 Telecom demonstrator real board - ZC706 board features
Note: the platform proposed by the consortium for the OVP modelling was the Xilinx
Zynq-7020 SoC (XC7Z020-1CLG484C); nevertheless, we do not expect significant
differences between the two platforms in terms of component modelling within the Open
Virtual Platform environment.
The Zynq-7000 XC7Z045 FFG900 – 2 platform has the following key features:
Configuration
o Onboard configuration circuitry
o 2x16 MB Quad SPI Flash
o SDIO Card Interface (boot)
o PC4 and 20-pin JTAG ports
Memory
o DDR3 Component Memory 1 GB (PS)
o DDR3 SODIMM Memory 1 GB (PL)
o 2x16 MB Quad SPI Flash (config)
o IIC – 1 kB EEPROM
Communication & Networking
o PCIe Gen2 x4
o SFP+ and SMA Pairs
o GigE RGMII Ethernet (PS)
o USB OTG 1 (PS) – Host USB
o IIC Bus Headers/HUB (PS)
o 1 CAN with Wake on CAN (PS)
o USB UART (PS)
Video/Display
o HDMI 8 color RGB 4.4.4 1080P-60 OUT
Expansion Connectors
o 1st FMC LPC expansion port (34 LVDS Pairs on LA Bus, 1 GT)
o 2nd FMC HPC expansion port (34 LVDS Pairs on LA Bus, 8 GT – no HA or HB bus)
o Dual Pmod (8 I/O shared with LEDs)
o Single Pmod (4 I/O)
o IIC access to 8 I/O
Clocking
o 33 MHz PS System Clock
o 200 MHz PL Oscillator (Differential)
o SMA Connectors for external clock (Differential)
o GTX Reference Clock port with 2 SMA connectors
o OBSAI/CPRI – SFP+ Received clock
o EXT Config CLK
Control & I/O
o 2 User Push Buttons/DIP Switch, 2 User LEDs
o IIC access to GPIO
o SDIO (SD Card slot)
o 3 User Push Buttons, 2 User Switches, 8 User LEDs
o IIC access to 8 I/O
o IIC access to a WTClock
Power
o 12 V wall adaptor
o Current measurement capability of supplies
Analog
o AMS interface (Analog) System Monitor, also available for external sensors
4.2 Telecom demonstrator platform models
The platform models used for the demonstrator will constitute an abstracted version of the
ODU suitable for demonstrating the functioning of the Telecom application (described in
D4.1.1) using the CONTREX innovation technologies.
The demonstrator platform models for the Telecom demonstrator are depicted in the indicated
area in Figure 4.4, providing support to the demonstrator application.
[Figure 4.4 diagram: it relates the legacy HW/SW model (actual HW/SW package description,
power datasheet, radio channel, Ethernet), ForSyDe/SDF modelling and analysis, the OVP
virtual platform (HW/SW in C/C++/SystemC) with a functional/extra-functional simulation
testbench (SCNSL), power and thermal models fed by execution-time and power
measurements, HIFSuite HW abstraction and SW synthesis (SystemC, VHDL, Verilog,
IP-XACT), analytical DSE for timing, simulation-based DSE for power and temperature, and
the target Zynq platform with a HW-in-the-loop facility.]
Figure 4.4 Demonstrator platform modelling for UC3
EDALab is considering the processing of UML/MARTE models for use in system level DSE
with SCNSL, providing network simulation / modelling functionality to the demonstrator
platform.
The primary demonstrator platform model is envisioned to be the Open Virtual Platform,
which will host the chosen Zynq platform; this platform is being used as widely as possible
throughout the consortium in order to provide useful comparative implementation experiences
across the use cases. At the time of this delivery (Month 18), investigations are being
completed into the choice of the final simulation hosting environment, originally postulated to
be an implementation provided by partner OFFIS but which could potentially shift to a
Cadence offering.
OFFIS is preparing a critical part of the Telecom demonstrator platform modelling
environment, the OVP plugin that will generate power and thermal traces from the simulation
for subsequent offline input to the modelling tools provided by both Docea and OFFIS for
(offline) analysis and optimization of thermal and power related properties.
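As an illustration of the offline workflow, one plausible shape for such a trace is a stream of timestamped per-component power samples in a simple text format. The CSV record below is an assumption made for illustration, not the actual format emitted by the OFFIS plugin:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format one hypothetical trace record: simulation time in microseconds,
 * component name, and instantaneous power in milliwatts.  Returns the
 * number of characters written (excluding the terminator). */
static int format_sample(char *buf, int size, double t_us,
                         const char *component, double mw)
{
    return snprintf(buf, (size_t)size, "%.1f,%s,%.2f\n", t_us, component, mw);
}
```

A trace in this style can be replayed offline by the power/thermal analysis tools without any coupling to the simulator, which is precisely the decoupling the Telecom use case relies on.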
Note that the Telecom demonstrator does not include a full-blown run-time modelling
component in the same sense as the other use cases, since the principal business case for the
modelling activities lies in the design of the new generation of Intecs telecom product lines
and services rather than in dynamic run-time monitoring of an operational system. The
emphasis is therefore on maximizing the simulation and trace generation capability, for use in
extensive offline analysis of extra-functional characteristics, which feeds back not so much
into dynamic run-time reconfiguration as into design-time architecture decisions (aided by the
application and system modelling innovations of CONTREX treated in the companion
deliverable D4.1.1).
5 Conclusions
The execution platforms for the three use cases have been defined starting from the actual
hardware implementations on which the applications run.
This definition has been followed by modelling the platforms in a bottom-up fashion,
providing an abstraction of the actual behaviour of the system running the applications on
virtual platforms. The run-time mechanisms of the platforms have been modelled both to
allow execution of the applications and to provide a link between those run-time mechanisms
and some critical extra-functional properties.
Even though at this stage of the project the treatment of extra-functional properties and their
optimization is not yet complete, the state of the work already gives a glimpse of how the
project's approach allows the extra-functional properties to be controlled.
6 References
[1] "Description of Work". CONTREX – Design of embedded mixed-criticality
CONTRol systems under consideration of EXtra-functional properties, FP7-ICT-
2013- 10 (611146), 2013.
[2] Xilinx Inc. (2014) Zynq-7000 Platform Devices. [Online]. Available:
http://www.xilinx.com/products/silicon-devices/soc/zynq-7000/index.htm
[3] Trenz Electronic GmbH. (2014) Trenz Electronic TE0720 Series. [Online]. Available:
http://www.trenz-electronic.de/products/fpga-boards/trenz-electronic/te0720-
zynq.html
[4] “Cadence Virtual System Platform”.
http://www.cadence.com/products/sd/virtual_system/pages/default.aspx