FotoInMotion D4.1 - Overall system architecture
H2020 Contract No. 780612 Final 1.0, 30/06/2018
Page | 1 Dissemination level: PU - Public
Project Acronym: FotoInMotion
Grant Agreement number: 780612 (H2020-ICT-20-2017-1, RIA)
Project Full Title: Repurposing and enriching images for immersive storytelling
through smart digital tools
Project Coordinator: INTRASOFT International
DELIVERABLE
D4.1 - Overall system architecture
Dissemination level PU - Public
Type of Document Report
Contractual date of delivery M6 – 30/06/2018
Deliverable Leader ATC
Status - version, date v1.0, 30/06/2018
WP / Task responsible 4
Keywords: Architecture, platform, applications, backend,
database structure, database, technologies
Ref. Ares(2018)5554369 - 30/10/2018
Executive Summary

In the current deliverable we describe the architectural guidelines and system requirements
that drive the overall system architecture. The architecture is presented using formalised
representations of the different views of the system: diagrams describing the components,
their interactions and their deployment towards an integrated platform.
We present the identified architecture based on the user requirements (as described in
D1.2) and we then analyse the architectural blueprint and the components of the platform.
A logical view of the components depicts the communication among them and the
information view reveals the data flow processes that will take place in order to realise the
functionalities of the platform.
We finally describe the integration methodology we are going to follow and the steps of the
integration time plan that will drive the development activities towards the delivery of the
first version of the user applications.
Deliverable Leader: ATC
Contributors: QDEPQ, INESC
Reviewers: George Zissis (INTRA)
Approved by: All Partners
Document History
Version Date Contributor(s) Description
0.1-0.3 21/05/2018 ATC ATC internal early revisions
0.4 4/06/2018 QDEPQ, INESC Version distributed for review and partner
input (on reviewers list and risks)
0.5 12/06/2018 ATC Consolidated version after review and inputs
0.6 19/06/2018 ATC, INTRA Version for QM
1.0 27/06/2018 ATC Submitted version
Table of Contents

EXECUTIVE SUMMARY ......... 2
DEFINITIONS, ACRONYMS AND ABBREVIATIONS ......... 6
1 INTRODUCTION ......... 7
PURPOSE OF THIS DOCUMENT AND RELATIONS WITH OTHER WPS ......... 7
DOCUMENT STRUCTURE ......... 7
2 SYSTEM ARCHITECTURE ......... 8
ARCHITECTURE DESIGN METHODOLOGY ......... 8
HIGH LEVEL OVERVIEW ......... 10
COMPONENTS OVERVIEW ......... 12
Presentation Layer Components ......... 12
Business and Services Layer Components ......... 12
Data Layer Components ......... 15
3 PRESENTATION LAYER ......... 16
WEB APPLICATION ......... 16
MOBILE APPLICATIONS (ATC) ......... 16
Native vs Hybrid (ATC) ......... 16
Supported Platforms ......... 17
TECHNOLOGIES ......... 17
Angular 6 ......... 17
4 INTEGRATION ......... 19
AGILE METHODOLOGY ......... 19
ORCHESTRATION MECHANISM ......... 20
IMPLEMENTATION ROADMAP ......... 21
5 PHYSICAL DEPLOYMENT ......... 23
HARDWARE SPECIFICATIONS ......... 23
DEPLOYMENT DIAGRAM ......... 23
6 CONCLUSIONS ......... 25
Table of Figures

FIGURE 1: LAYERED ARCHITECTURE ......... 9
FIGURE 2: HIGH LEVEL OVERVIEW OF THE FOTOINMOTION PLATFORM ......... 11
FIGURE 3: ORCHESTRATION MECHANISM ......... 20
FIGURE 4: INTEGRATION TIME PLAN ......... 22
FIGURE 5: DEPLOYMENT DIAGRAM ......... 24
Definitions, Acronyms and Abbreviations

Acronym Title
API Application Programming Interface
HTTP Hypertext Transfer Protocol
SSL Secure Sockets Layer
XML Extensible Markup Language
UI User Interface
UX User Experience
ICT Information & Communication Technologies
JWT JSON Web Token
JSON JavaScript Object Notation
Term Definition
Beneficiary EC term used to designate the legal entity which has signed the
Grant Agreement. This term is often substituted by the common
language term ‘partner’.
TypeScript TypeScript is an open-source programming language developed and
maintained by Microsoft. It is a strict syntactical superset of JavaScript,
and adds optional static typing to the language.
RESTful API A RESTful API is an application programming interface (API) that
uses HTTP requests to GET, PUT, POST and DELETE data. A RESTful API, also
referred to as a RESTful web service, is based on representational state
transfer (REST), an architectural style and approach to communications often
used in web services development.
Consortium Group of beneficiaries that have signed the Consortium Agreement
and the Grant Agreement (either directly as Coordinator or
by accession through the Form A).
Deliverable Leader Responsible for ensuring that the content of the deliverable meets the
required expectations, both from a contractual point of view and in
terms of usage within the project. Also responsible for ensuring that
the deliverable follows the deliverable process and is delivered on time.
1 Introduction

Purpose of this Document and Relations with other WPs
The objective of this document is to present the overall architecture of the FotoInMotion
platform and application in terms of the supported functionalities, the respective processes
and the components that realise them. This document will serve as a reference point for the
development work that will take place in WP2, WP3 and WP4. The decisions presented in
this deliverable are subject to refinements and modifications, based on the progress of the
technical work packages, as well as the validation and evaluation phases. Possible
modifications will be reported in future deliverables regarding the platform, i.e. D4.2
FotoInMotion platform and APIs (M15).
WP4 aims at providing the architectural and implementation aspects for the delivery of the
FotoInMotion tools in an integrated platform, taking into account the full range of
requirements for such a service. The design of the FotoInMotion platform is driven by the
usage scenarios and user requirements defined earlier in WP1 and will drive the design and
implementation of the various components produced in the context of work packages WP2,
WP3 & WP4, taking of course into consideration the constraints and characteristics of the
individual components. The decisions presented in this deliverable are subject to
refinements and modifications, based on the progress of the technical work packages, as
well as the validation and evaluation phases that will take place in the context of WP5.
Document Structure

This document comprises the following chapters:
• Section 2 provides a high-level overview of the FotoInMotion platform architecture
and describes the different components
• Section 3 goes into more depth on the Presentation Layer of the platform
• Section 4 provides more details on the integration methodology and presents a clear
time plan until M18
• Section 5 presents the overall physical deployment of the FotoInMotion platform
• Section 6 includes a short conclusion of this document, making references to next
steps and activities.
2 System Architecture

This section presents the initial version of the FotoInMotion platform architecture. The
design methodology is presented first, in order to provide the background knowledge that
drives the designed architecture. After that, a high-level architecture of the platform is
described in order to set the stage for the development of the first prototype. It must be
noted that the decisions presented in this section are subject to refinements and
modifications, based on the progress of the technical work packages, as well as the
validation and evaluation phases.
Architecture Design Methodology

The design of the FotoInMotion architecture follows the principles of the Layered
Architecture pattern. The layers in a generic layered architectural design of any software
platform are described as follows:
• Presentation Layer
o Contains all the User Interfaces, the Visualization Modules and the Mobile
apps
• Services Layer
o Exposes multiple APIs in the form of web services, defining a set of
resources/methods as well as message structures
• Business Layer
o Encapsulates all the business logic, as well as core domain entities of the
system. It implements all system’s workflows and offers a simplified API
(system facade) to the top layers for fulfilling the business workflows.
• Data Layer
o Consists of all Data Access Objects as well as external service consumers. It is
the broker to all the persistence storage and external data.
• Cross-Cutting Layer
o Although it is not one of the basic four layers, it contains a set of features and
modules which do not belong to a specific layer, since they are collaborating
with all layers of the platform. These modules refer mainly to security and
communication.
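To make the layer separation concrete, the sketch below shows how each layer could depend only on an abstraction of the layer beneath it. This is purely illustrative; all interface, class and method names here are hypothetical and are not actual FotoInMotion components.

```typescript
// Hypothetical sketch of the layer separation; names are illustrative only.

// Data Layer: broker to persistence storage.
interface PhotoDao {
  findById(id: string): { id: string; uri: string } | undefined;
}

class InMemoryPhotoDao implements PhotoDao {
  private photos = new Map<string, { id: string; uri: string }>();
  save(photo: { id: string; uri: string }): void {
    this.photos.set(photo.id, photo);
  }
  findById(id: string) {
    return this.photos.get(id);
  }
}

// Business Layer: encapsulates workflows behind a simplified facade.
class PhotoFacade {
  constructor(private dao: PhotoDao) {}
  describePhoto(id: string): string {
    const photo = this.dao.findById(id);
    return photo ? `Photo ${photo.id} at ${photo.uri}` : "not found";
  }
}

// Services Layer: exposes the facade as a resource/method pair.
class PhotoService {
  constructor(private facade: PhotoFacade) {}
  handleGet(id: string): { status: number; body: string } {
    const body = this.facade.describePhoto(id);
    return { status: body === "not found" ? 404 : 200, body };
  }
}

// The Presentation Layer would consume PhotoService over HTTP;
// here the layers are wired together directly for illustration.
const dao = new InMemoryPhotoDao();
dao.save({ id: "p1", uri: "/photos/p1.raw" });
const service = new PhotoService(new PhotoFacade(dao));
```

Because each layer only sees the interface of the layer below, a layer's implementation (e.g. the in-memory store above) can be swapped without touching the layers above it.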
The following diagram depicts the layered software architecture:
Figure 1: Layered Architecture
The layered architectural design pattern addresses all the aspects that we are targeting.
Together with other software architectures and standards that are followed, it helps us to
apply the best standards in all the design aspects we are focusing on. Specifically:
• Usability: Following a layered architecture isolates the presentation modules from
the logic layer, making it possible to focus on good User Experience (UX) design.
Thus, a web designer or a usability expert can work separately on the User Interface,
unaffected by the backend system developers, and focus distinctly on maximising
the quality of the user experience. Internally, inside the presentation layer, we are
going to follow a Model View Controller design pattern for building the web
application, starting from a plain, well-defined user interface that consumes the
services provided by the backend. The use of modern web technologies for advanced
visualisations and interactive, responsive dashboards (e.g. Angular, TypeScript,
HTML5, CSS3) will offer the best set of front-end features to provide a clean and
fully functional interface. Finally, we are going to follow an agile methodology for
developing the platform, based on rapid prototyping and frequent iterations. This
enables more frequent evaluations close to the end-users of the platform and better
results in terms of meeting the usability requirements.
• Performance: For tackling performance issues, we are going to rely on two factors:
caching and distribution. The system's architecture logic is implemented in the
backend and is exposed through an API of RESTful web services. These services can
be deployed independently and remotely, in a distributed way, among several nodes
of the cloud, following a Service Oriented Architecture. If needed, load balancing
and caching will be applied between the presentation and the services layers.
Additionally, caching of data can take place even between the business layer and the
data layer, when frequent fetching of the same data is required.
• Security: The security features will be applied system-wide, covering all layers of the
architecture. An authentication and authorization server will be set up to ensure that
only users with the appropriate permissions can access the data residing in the data
repositories. The external API (Service Interfaces) exposed by the platform can be
secured by means of encryption of HTTP via SSL/TLS, the standard protocol for
security over the Internet.
• Maintainability: For addressing maintainability, we are going to follow a standards-
oriented and technology-independent architecture. By focusing on well-defined
generic standards (e.g. RESTful APIs), we do not rely on specific knowledge of
proprietary solutions and standards. We will use only open-source solutions that are
cross-platform and fully flexible. Regarding code collaboration tools, we are going to
use Git for source code and version control. For communication between the
various developer teams, we are going to use a ticket tracking system, so that all
tickets are traceable and all code commits can be examined and explained by
referring to specific tickets.
• Scalability: The system is going to be able to scale up based on the volume of the
requests. The elasticity will be provided by the cloud platform itself by configuring
the resources appropriately. The architecture is free to scale up easily due to the
service oriented distributed nature of the backend. By introducing the Apache Kafka
framework as the messaging backbone of the system we can take advantage of its
highly scalable nature.
High Level Overview

The following diagram provides a high-level overview of the platform, pertaining to the
general layered architecture schema presented in the previous section.
Figure 2: High level overview of the FotoInMotion Platform
On the bottom layer we have the data sources, which include environmental sensor data
collected from the mobile devices, location-based sensor data, images, 3D effects,
narrations, etc.
The components responsible for the analysis and processing of the data are found at the
next layer. Multimedia content, contextual and content related metadata, visual features
extracted as results of applying image processing algorithms and 3D effects are the major
categories of data generated at a first stage. At the services layer, a set of RESTful services
will handle the communication between the lower layers and the user interface. A
controller component will be deployed so as to manage the sequence of needed
communications between the components as well as between the interface and the data
layer.
Finally, the user interface will include the three applications that correspond to the relevant
pilot cases (the first one targeting photojournalism, the second one targeting the fashion
market and the third one targeting festivals) as well as the corresponding mobile
application. The basic output presented in these applications will mainly include
FotoInMotion D4.1 - Overall system architecture
H2020 Contract No. 780612 Final 1.0, 30/06/2018
Page | 12 Dissemination level: PU - Public
visualisations and graphical elements to be directly used by end-users, there might be a
need to also present results at a lower level, i.e. in a machine readable format. More
specifically, a set of RESTful services providing internal results coming from e.g. the image
processing modules would be one of the outputs of interest for some of the use cases. In
this case, the application will provide a way to access the relevant data and perhaps an
interactive environment showing how to use them, e.g. a Swagger (https://swagger.io/)
based page for the offered services.
Components Overview

The following paragraphs provide a brief description of the components that will realise the
architecture seen in the high level overview. Besides the description, each component
comes with a set of inputs and outputs that are provided by the various processing and
analysis components.
Presentation Layer Components
Environment Context Acquisition Tool (ECAT)
The ECAT will be a middleware layer, directly interacting with the built-in sensors and low-
level services of the mobile platform (i.e. Android).
This module will collect various context information (pertaining to the context of photo
acquisition), such as environmental conditions, geolocation, etc. It will expose an API
enabling the retrieval of the recently collected contextual information. This API will enable
FotoInMotion's MobileApp to acquire the context information at the time of photo
acquisition and send it (together with the media content in scope) to the
DigitalEventHandler for Digital Event creation.
Business and Services Layer Components
Digital Event Handler (DEH)
The Digital Event Handler (DEH) manages or intermediates the access to, and manipulation
of, Digital Events (DEs).
DEs are complex informational objects which merge and interrelate multimedia content with
multi-layer interpretative/characterising metadata. The latter may pertain to the base media
content, to its context of acquisition, or to other aspects. The part of a DE which consists
of the original captured media information and the related contextual and image-interpretative
metadata (automatically or user produced) is termed the core area of a DE. The part of a DE
comprising any further metadata is termed the peripheral area. The two parts will not
necessarily be integrated or collocated; more likely, the latter will reference the former and
vice versa.
Therefore, in more precise terms, the DEH manages the interaction with the core part of
DEs. Overall, the DEH will provide the necessary functionalities for DE uploading into,
retrieval from, and manipulation at, a DE repository. This means that it also intermediates
the delivery of services by the Image Analysis Module (IAM), the Inference Module (InfM)
and the Annotation Assistance Module (AAM). It will thus handle or intermediate the:
▪ original creation of a DE – this means that it will handle the reception of the original
media content (original photo), its storage at the DE repository, the requesting of an
initial analysis of the photo by the IAM, the construction of the overall metadata
structure of the DE and the insertion of both the contextual metadata (received from
the terminal side) and the interpretative metadata provided by the IAM.
▪ on-demand requesting of DE analysis by the IAM and subsequent addition of that
module’s interpretative conclusions to the DE’s metadata;
▪ assistance to user annotation of media content (user annotation of photos, or user
validation of automatically generated tags), by intermediating the interaction with
the AAM, and addition of the resulting information to the DE’s metadata section;
▪ the realization of inference analysis over the entire metadata of a DE, by
intermediating with the InfM, and the addition of the resulting information to the
DE’s metadata section;
▪ the retrieval of full DE core, or the retrieval of the metadata or media part of the DE
core.
This module will provide a comprehensive API to enable the execution of the above
mentioned processes.
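As a hedged sketch of the first handled process (original creation of a DE), the structure of a DE core and its assembly could look as follows. The exact metadata model is defined elsewhere in the project; every type, field and function name below is an assumption for illustration only.

```typescript
// Hypothetical shape of a Digital Event's core area; not the project's actual model.

interface ContextualMetadata {
  capturedAt: string;                    // time of photo acquisition
  geolocation?: { lat: number; lon: number };
  environment?: Record<string, number>;  // e.g. sensor readings
}

interface InterpretativeMetadata {
  tags: string[];          // e.g. produced by the IAM
  validatedByUser: boolean;
}

interface DigitalEventCore {
  id: string;
  mediaUri: string;        // location of the original photo in the DE repository
  contextual: ContextualMetadata;
  interpretative: InterpretativeMetadata;
}

// Original creation of a DE: store the media, request an initial analysis
// (the IAM call is simulated by the injected `analyse` function), and
// assemble the overall metadata structure.
function createDigitalEvent(
  id: string,
  mediaUri: string,
  contextual: ContextualMetadata,
  analyse: (uri: string) => string[]
): DigitalEventCore {
  return {
    id,
    mediaUri,
    contextual,
    interpretative: { tags: analyse(mediaUri), validatedByUser: false },
  };
}
```

The same structure would then be enriched by the on-demand IAM, InfM and AAM steps listed above, each one appending to or validating the `interpretative` section.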
Image Processing & Narration Toolbox (IPNT)
The IPNT component is a service hosted by QdepQ systems. The goal of this component is to
supply functionality that can be used as a tool for narration. It contains tools both for the
processing and preparation of 2D (RAW) images and 3D effects, and for videos.
This component will be implemented in three stages:
▪ The first stage will be a prototype with a limited set of non-configurable effects.
▪ The second stage will be another prototype, this time improved by incorporating
data from WP2 (obtained by pulling from the database) to enhance its effects. It
will also extend the toolbox and introduce some configuration options for the
effects in the toolbox.
▪ The third and final stage contains the full toolbox of effects and rendering tools. On
top of that, it will provide suggestions for effects according to the content, under the
user's artistic supervision.
Input: Because of the nature of the 3D reconstruction algorithm and the heavy calculations
involved, the image should be submitted to the QdepQ server, as RAW and uncompressed
as possible. Compression and modifications of the original image have major impact on the
reconstructions made by the algorithm. The process is also computationally very expensive,
making it mandatory to tune the algorithm to the hardware that executes it.
Output: After the user has selected all operations they want to execute on the image, the
service can return the modified image as a video file.
API Methods: All communications will be done through a RESTful interface.
Technologies (On the QdepQ server): OpenCL, OpenCV, Deep learning
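To illustrate the input constraint and the RESTful interface described above, the sketch below builds a hypothetical render request and enforces the RAW-input requirement. The endpoint path, field names and the `.raw` extension check are all assumptions for illustration; the actual IPNT API is defined by QdepQ.

```typescript
// Hypothetical request shape for the IPNT's RESTful interface.

interface IpntRenderRequest {
  imageUri: string;            // RAW, as uncompressed as possible
  effects: string[];           // selected effects from the toolbox
  outputFormat: "mp4" | "webm";
}

function buildRenderRequest(
  imageUri: string,
  effects: string[]
): { method: string; path: string; body: IpntRenderRequest } {
  if (!imageUri.toLowerCase().endsWith(".raw")) {
    // Compression and modifications of the original image have a major
    // impact on the 3D reconstruction, so reject non-RAW input up front.
    throw new Error("IPNT expects a RAW image");
  }
  return {
    method: "POST",
    path: "/api/render", // hypothetical endpoint
    body: { imageUri, effects, outputFormat: "mp4" },
  };
}
```

Validating the input on the client side like this avoids submitting an image that the reconstruction algorithm would handle poorly anyway.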
Image Analysis Module (IAM)
This module (IAM) will handle automatic image analysis for information extraction. It shall
comprise an integrated set of visual content analysis algorithms and recognition
mechanisms targeting the specific requirements of the project's use cases (combining image
processing and computer vision algorithms with machine learning techniques), which will
enable it to perform tasks such as: region of interest identification; object, or object set,
recognition; person recognition and identification; etc.
The IAM will perform its action at the request of the DEH. The DEH will employ the IAM to
acquire interpretative information with which it may enrich DEs (expressed in accordance
with the digital events metadata model), by mapping visual features to descriptive
metadata. The IAM’s API will thus receive as input the location of the media resource to be
analysed (photo), the description of the analysis to be performed and further information to
assist in the analysis process. It will return a batch of metadata with the information that it
was able to extract from the images (typically associating perceptive and semantic
information to regions of the image).
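The batch of metadata returned by the IAM, associating semantic information with regions of the image, could be shaped as in the sketch below. These types and the confidence-threshold filter are illustrative assumptions, not the project's actual metadata model.

```typescript
// Hypothetical shape of an IAM result batch.

interface ImageRegion {
  x: number;
  y: number;
  width: number;
  height: number;
}

interface RegionAnnotation {
  region: ImageRegion;
  label: string;       // semantic information, e.g. "person"
  confidence: number;  // 0..1
}

interface IamResultBatch {
  mediaUri: string;
  annotations: RegionAnnotation[];
}

// Before enriching a DE, the DEH might keep only annotations above a
// confidence threshold (an assumed post-processing step, for illustration).
function filterConfident(
  batch: IamResultBatch,
  threshold: number
): RegionAnnotation[] {
  return batch.annotations.filter((a) => a.confidence >= threshold);
}
```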
Inference Module (InfM)
This module will employ ML mechanisms to derive further knowledge from contextual or
content related metadata. It will thus subject the acquired contextual metadata, as well as
the user produced or automatically generate media content related metadata, to further
interpretive processing. The inferred knowledge will enable high level description of the real
world situation related to the captured picture.
The InfM will perform its action at the request of the DEH. The DEH will employ the InfM to
acquire higher value interpretative information with which it may enrich DEs.
The InfM’s API will thus receive as input the location of the metadata resources to be
inferred upon (it may instead receive the actual resources as input), and the description of
the inference analysis to be performed. It will return a batch of metadata with the resulting
inferences.
Annotation Assistance Module (AAM)
This module will provide the necessary services to assist users with their manual annotation
of media content. It will deliver automatically generated annotations for user validation, and
handle user responses to such solicitations. This is an important functionality, as it will
progressively increase the reliability of the interpretative metadata employed in the
training of the automated media analysis and inference mechanisms, thus making them
more accurate.
The interaction with the AAM will be intermediated by the DEH. The DEH establishes the
connection between the AAM and relevant terminal side provisions. The AAM handles the
remaining interaction with said provisions and then delivers the resulting information to the
DEH for its inscription into the metadata component of the corresponding DE.
The AAM will receive as input, from the DEH, the core part of the DE (media content and
metadata) whose (automatically generated) metadata is to be subjected to user validation,
as well as an indication of which parts of said metadata are actually to be subjected to the
referred process. It will also receive from the DEH the necessary information to establish
communication with the terminal-side provision interfacing with the metadata-validating user.
In its interaction with the terminal side provision (and thus with the user), the AAM will
send the latter the image (photo) in question and the associated metadata to be reviewed.
Once the validation operation is completed at terminal side, the altered/validated metadata
is sent to the AAM which will forward it to the DEH.
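The merge step at the end of this round-trip, applying the user's validation decisions before the result is forwarded to the DEH, can be sketched as below. The proposal type and the merge rules (keep accepted tags, drop rejected ones, append user-added ones) are illustrative assumptions.

```typescript
// Hypothetical representation of a user's validation decisions on
// automatically generated tags.
interface TagProposal {
  tag: string;
  accepted: boolean; // the user's validation decision at terminal side
}

// Apply the decisions: keep accepted tags, drop rejected ones, and append
// any tags the user added manually (deduplicated against the kept set).
function applyValidation(
  proposals: TagProposal[],
  userAdded: string[]
): string[] {
  const kept = proposals.filter((p) => p.accepted).map((p) => p.tag);
  return [...kept, ...userAdded.filter((t) => !kept.includes(t))];
}
```

The resulting validated tag list is what the AAM would hand back to the DEH for inscription into the DE's metadata section.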
Data Layer Components
DE Repository
The DE repository stores the media and metadata component of the core part of DEs. It
stores this information as sets of media and XML files, which carry the actual information.
The DE Repository is not an individual module on its own. Instead, it is a functionality (or set
of functionalities) provided by the FotoInMotion Data module (e.g. a file repository, a set of
database tables, and an API to access those resources).
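A minimal sketch of that functionality, media bytes plus an XML metadata document stored per DE core behind a small access API, is shown below. The in-memory maps stand in for the file repository and database tables; the class and method names are assumptions for illustration.

```typescript
// Illustrative DE Repository sketch: media + XML metadata per DE core.

class DeRepository {
  private media = new Map<string, Uint8Array>();   // stands in for file storage
  private metadataXml = new Map<string, string>(); // stands in for DB-backed XML

  store(id: string, mediaBytes: Uint8Array, xml: string): void {
    this.media.set(id, mediaBytes);
    this.metadataXml.set(id, xml);
  }

  getMedia(id: string): Uint8Array | undefined {
    return this.media.get(id);
  }

  getMetadata(id: string): string | undefined {
    return this.metadataXml.get(id);
  }
}
```

The DEH would be the main client of such an API, retrieving either the full DE core or just its media or metadata part, as listed in the DEH section above.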
3 Presentation Layer

Web Application
A web-based dashboard will expose the functionality offered by the various FotoInMotion
backend components through a user-friendly and responsive user interface (UI). Both
characteristics are considered key elements of the desired user experience (UX) that the
FotoInMotion dashboard aims to offer to its intended users. The user experience is, simply
put, the overall impression users form, and refers to the interaction of the end user with
the services offered by the FotoInMotion components.
A dashboard is considered user friendly if it successfully addresses the following features:
• Simplicity/Clarity
• Low response time
• Visibility of system status
• Error prevention
• Minimalistic design
• Error recovery
The responsiveness of the UI is equally important. Responsive web design is an approach
that facilitates the proper rendering of a web application on a variety of devices and
screen sizes. This means that the web application can respond to changes in the browser
width by adjusting the placement of the UI elements to fit the available space.
Mobile Applications (ATC)

The FotoInMotion mobile application plays a significant role in the workflow of the use cases.
In most of the cases it is the starting point of the workflow as it acts as the capturing
mechanism of the photographs/images to be further processed. It will enable the end users
to immediately act and capture any event that happens suddenly in front of them.
Moreover, it will allow the end users to transparently associate the photograph they have
shot with contextual information, such as environmental conditions as well as information
related to location, time, etc. This information will be collected through the Environment
Context Acquisition Tool (ECAT) and will then be submitted to the central repository for
further processing.
Native vs Hybrid (ATC)
Two approaches will be investigated regarding the implementation of the FotoInMotion
mobile application. One option is to build a hybrid mobile application that will simply be a
web view of the FotoInMotion web dashboard application. Another option is to build a native
mobile application designed for the Android OS. There are key differences between the two
approaches and both have advantages and disadvantages.
In general, hybrid applications are compatible with both the Android and the iOS platform
but they usually offer a poor user experience that many end users find disappointing. On the
other hand, native applications are designed for a specific mobile operating system such as
Android or iOS; they offer a better user experience and exceptional performance, but they
need to be developed separately for each target platform. Beyond performance, native
applications also offer a polished look and feel that most mobile users find more
comfortable to interact with.
Supported Platforms
The ECAT will be developed for the Android operating system (currently available in its Oreo,
8.1 iteration). The development of the ECAT will be based on the Intel Context Sensing SDK,
an existing framework dedicated to tasks such as sensor analysis and context data gathering.
This framework delivers multiple built-in context type providers, allowing easy access to
context data extracted and processed from native hardware sensors. Apart from real-time
information, Intel's platform is also capable of gathering historical context data,
delivering a complete overview of user contexts over time. A dynamic rules engine,
dedicated to understanding user preferences and establishing behaviour rules, will also
be implemented.
Technologies
Angular 6
Angular [2] is a platform and framework for building client applications in HTML and
TypeScript. Angular is itself written in TypeScript. It implements core and optional
functionality as a set of TypeScript libraries that you import into your apps.
The basic building blocks of an Angular application are NgModules, which provide a
compilation context for components. NgModules collect related code into functional sets;
an Angular app is defined by a set of NgModules. An app always has at least a root module
that enables bootstrapping, and typically has many more feature modules.
• Components define views, which are sets of screen elements that Angular can
choose among and modify according to your program logic and data. Every app has
at least a root component.
• Components use services, which provide specific functionality not directly related to
views. Service providers can be injected into components as dependencies, making
your code modular, reusable, and efficient.
Both components and services are simply classes, with decorators that mark their type and
provide metadata that tells Angular how to use them.
The metadata for a component class associates it with a template that defines a view. A
template combines ordinary HTML with Angular directives and binding mark-up that allow
Angular to modify the HTML before rendering it for display.
[2] https://angular.io/guide/architecture
The metadata for a service class provides the information Angular needs to make it available
to components through Dependency Injection (DI).
An app's components typically define many views, arranged hierarchically. Angular provides
the Router service to help you define navigation paths among views. The router provides
sophisticated in-browser navigational capabilities.
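The component/service relationship described above can be sketched language-agnostically. The Python sketch below uses hypothetical class names to illustrate constructor-based dependency injection; it does not reproduce actual Angular APIs, where the wiring is driven by decorator metadata and performed by the injector.

```python
class LoggingService:
    """A service: reusable functionality not tied to any particular view."""

    def log(self, message: str) -> str:
        return f"[log] {message}"


class PhotoListComponent:
    """A component: owns a view and receives its services as dependencies."""

    def __init__(self, logger: LoggingService) -> None:
        # The dependency is injected rather than constructed here, which
        # keeps the component testable and the service reusable.
        self.logger = logger

    def render(self, photos: list) -> str:
        self.logger.log(f"rendering {len(photos)} photos")
        return ", ".join(photos)


# In Angular the injector performs this wiring automatically based on the
# metadata of the component class; here we do it by hand.
component = PhotoListComponent(LoggingService())
```

In Angular the same pattern appears as a service provider declared in an NgModule and requested through a component's constructor signature.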
4 Integration
Agile Methodology
Agile software development [3] (ASD) is a group of software development methodologies
based on iterative and incremental development, where requirements and solutions evolve
through collaboration between self-organising, cross-functional teams, addressing the
development efforts performed in the various stages of a project.
There are many specific ASD methods. Most of them promote development, teamwork,
collaboration, and process adaptability throughout the life-cycle of the project. ASD goes
beyond traditional software development processes (such as Waterfall) and exploits an
evolutionary method that is an iterative and incremental approach to software
development and integration. Thus, the requirements and design phases are iteratively
interleaved with the development phase to incrementally produce system software releases,
which can be assessed for suitability, maturity and immediate business value. On top of
this, ASD foresees an intense testing phase, in which unit testing is performed from the
developer’s perspective and acceptance testing is conducted from the customer’s
perspective. Thus, the major difference from the iterative approaches of plan-based
methodologies (such as RUP and Spiral) is that requirements and testing are part of the
iterative development process itself, and the target stakeholders can be progressively
involved in the development process, with the aim of delivering high-quality software.
“The best architectures, requirements, and designs emerge from self-organising teams.”
The Agile manifesto also states that “working software is the primary measure of progress”.
Thus, agile methodology addresses the requirements of a software environment, such as the
cloud market, iteratively by frequently delivering working software prototypes. These
prototypes enable the project development and business teams to work together and maximise
the quality of the produced output.
ASD can be suitable for the development strategy of an ICT solution, mainly because:
• A parallel process between the development of the planned software solution and
the verification of the requirements can be followed, leading to business oriented
people to actively participate in the specification of use cases and the evaluation of
the system developments and provide valuable feedback in an iterative way;
• The work on individual and independent development fields is split among small
groups comprising the separate development teams;
• As a ready-to-market solution is envisaged rapidly, the solution can benefit from
frequent releases that align the work done by the individual teams;
[3] http://agilemethodology.org/
• The produced releases can be exchanged between senior technical teams and
business-oriented groups to evaluate the effectiveness of the solution in real
business situations.
Orchestration Mechanism
The orchestrator component will act as a backbone for the communication of the various
FotoInMotion processing and analysis backend components. It will encompass any
synchronization needs that will arise in order to set up an efficient execution workflow of
the backend components. For example, when the environmental metadata have been
uploaded to the central repository we need a way to notify the component that has to
process and analyse those metadata in order to proceed. Having a message bus between
the processing components can solve this issue in a scalable and asynchronous way. For this
purpose, a Kafka [4] message broker will be used and will be part of the orchestrator component.
A set of topics will be created, each serving a specific data pipeline. The components
that produce messages (those that need to notify others once they finish their job)
publish a relevant message on a topic; the orchestrator, which has already subscribed to
that topic, can then parse the message and call another service API, as depicted in the
figure that follows.
Figure 3: Orchestration Mechanism
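The topic-based notification pattern above can be sketched with an in-memory stand-in for the Kafka broker. The component, topic and field names below are hypothetical; in the actual platform the publish/subscribe mechanics are provided by Kafka itself.

```python
from collections import defaultdict
from typing import Callable


class Orchestrator:
    """Minimal in-memory stand-in for the Kafka-based message bus."""

    def __init__(self) -> None:
        # One list of handlers per topic; a topic with no subscribers is
        # simply an empty list, so publishing to it is a no-op.
        self._subscribers: dict = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        """Register a handler to be called for every message on a topic."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        """Deliver a message to every handler subscribed to the topic."""
        for handler in self._subscribers[topic]:
            handler(message)


# Example wiring: when the mobile app finishes uploading contextual
# metadata, the orchestrator notifies the analysis component by invoking
# its (here mocked) service API.
bus = Orchestrator()
processed = []
bus.subscribe("metadata-uploaded", lambda msg: processed.append(msg["photo_id"]))
bus.publish("metadata-uploaded", {"photo_id": "img-001"})
```

In the real deployment the handler would issue a REST call to the next backend service rather than append to a list, and Kafka would provide the durability and asynchronous delivery that this synchronous sketch omits.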
The synchronization and communication pipeline is one of the responsibilities of the
orchestrator component. It will also act as the dashboard controller. Specifically, it will
handle the user management as well as any authentication and authorization aspects. In
order to support the authentication workflow a FotoInMotion Auth server will be deployed
[4] https://kafka.apache.org/
and connected to the web dashboard. The step by step Authentication/Authorization
process workflow is described below:
• A user is registered in the FotoInMotion web dashboard through the FotoInMotion
Auth server
• A user logs in to the dashboard using their credentials
• A JWT token is returned and saved in the dashboard's local storage
• The JWT token is used to authorize access based on the user's role
• The dashboard controller (orchestrator component) must send the JWT token in the
Authorization header using the Bearer scheme, along with the request
• The validation of the JWT token is performed by the FotoInMotion Auth server
The authentication and authorization process will follow the JWT open standard. JSON Web
Token (JWT [5]) is an open standard (RFC 7519 [6]) that defines a compact and self-contained
way of securely transmitting information between parties as a JSON object. This
information can be verified and trusted because it is digitally signed. JWTs can be signed
using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA.
Although JWTs can also be encrypted to provide secrecy between parties, we will focus on
signed tokens. Signed tokens allow verification of the integrity of the claims contained
within them, while encrypted tokens hide those claims from other parties. When tokens are
signed using public/private key pairs, the signature also certifies that only the party
holding the private key could have signed them.
Authorization is the most common scenario for using JWT. Once the user is logged in, each
subsequent request will include the JWT, allowing the user to access routes, services, and
resources that are permitted with that token. Single Sign-On widely uses JWT nowadays
because of its small overhead and its ability to be easily used across different domains.
Regarding information exchange, JSON Web Tokens are a good way of securely transmitting
information between parties. Because JWTs can be signed (for example, using public/private
key pairs), one can be sure the senders are who they say they are. Additionally, as the
signature is calculated using the header and the payload, one can also verify that the
content has not been tampered with.
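As a minimal illustration of the signed-token flow (a standard-library sketch, not the production Auth server implementation), the code below creates and verifies an HS256-signed JWT; the claim names and secret are hypothetical.

```python
import base64
import hashlib
import hmac
import json
from typing import Optional


def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as required by RFC 7519."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_jwt(payload: dict, secret: bytes) -> str:
    """Produce a compact HS256 JWT: header.payload.signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = ".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"


def verify_jwt(token: str, secret: bytes) -> Optional[dict]:
    """Return the payload if the signature checks out, otherwise None."""
    try:
        head_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None
    signing_input = f"{head_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    # Constant-time comparison prevents timing attacks on the signature.
    if not hmac.compare_digest(_b64url(expected), sig_b64):
        return None
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

In the platform workflow, the Auth server would issue such a token at login, the dashboard would attach it as `Authorization: Bearer <token>`, and verification against the shared secret (or a public key, for RSA/ECDSA tokens) would gate each request.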
Implementation Roadmap
The following integration roadmap presents the steps leading to the first release of the
FotoInMotion applications in Month 18 of the project. The functionalities and components
for every step are listed in order to give an overview of the integration process during the
forthcoming milestones. The integration of future versions and outcomes of the platform
components will be planned and documented in future deliverables.
[5] https://jwt.io/introduction/
[6] https://tools.ietf.org/html/rfc7519
M11 – November 2018:
• Data collection processes defined, i.e. data APIs, photos, possible audio file
• FotoInMotion Data storage setup
• Storing and Retrieving data mechanism available
M15 – March 2019:
• First set of 3D and semantic depth reconstruction and rendering tools available
• Initial integration of outputs to the FotoInMotion Data storage
• Definition of the required services to enable communication between components
• First Version of Platform and APIs
M17 – May 2019:
• First set of contextual data generation and annotation tools
• Well-defined and documented services comprising the first version of the
FotoInMotion API
M18 – June 2019
• Use Case Applications v1
• The first version of the applications supporting the 3 use cases
Figure 4: Integration Time plan
5 Physical Deployment
Hardware Specifications
The execution of the ECAT will require a mid-range or high-end smartphone or tablet
running the Android operating system (version 4.4, KitKat, at least). A minimum set of sensors is
required for most of the functionalities available within Intel Context Framework SDK:
accelerometer, gyroscope, proximity and compass. Apart from these specific sensors,
hardware such as built-in microphones and location services (GPS for instance) are also
required for a more complete demonstration of the usage of relevant features such as audio
recognition and location processing.
The automated interpretation and inference capabilities of modules such as the IAM, InfM
and AAM require significant computing capabilities. A desktop PC with at least the
following characteristics should be employed: a 5th-generation Intel i7 quad-core CPU,
16 GB of RAM, the Microsoft Windows 8.1 operating system and a high-end graphics board.
For the DEH, IAM, InfM and AAM modules and the DE Repository, it is not yet possible to
accurately assess the requirements.
The IPNT component, the service provided by QdepQ Systems, will run on a machine with
substantial computational power. Although 8 GB of RAM is the minimum, at least 32 GB is
advised. In addition, multiple powerful GPUs and a powerful i7 processor will be used.
These requirements stem from an iterative algorithm that stores many versions of a picture
in RAM while performing large convolutions on the data.
The exact specifications of the hardware will be determined later in the project based on
advances in technology.
Deployment Diagram
The infrastructure of the FotoInMotion platform is distributed on several nodes. There is
the central infrastructure, which resides in the cloud and hosts the web dashboard, the
orchestrator component, the authentication server and part of the data repository.
There is also the QdepQ infrastructure, which hosts the services related to the image
processing and narration component. In addition, all of the WP2 components (specifically
the Digital Event Handler, Image Analysis, Inference and Annotation Assistance modules)
and the Digital Event repository will be deployed on the corresponding INESC
infrastructure. By central, QdepQ or INESC infrastructure we refer to a virtual cluster of
nodes that may be physical or in the cloud. Furthermore, all of the components, both those
in the central repository and those in the remote locations, communicate through secure
REST APIs over TLS channels.
Figure 5: Deployment Diagram
6 Conclusions
This deliverable reports the work carried out within Task “T4.1 System Architecture and
Specification” being the first task within work package “WP4: Platform Integration and
Application Development”. It includes the architecture specifications and design of the
integrated FotoInMotion platform and serves as the basis for the development tasks of the
project. Information about the system use cases, the characteristics of the components of
the system and the data flow between them is presented in detail. Moreover, the
methodology of integration and an initial integration plan are also described.
This architecture description document will be very useful to define and communicate the
initial blueprint of the FotoInMotion platform. The architecture will continue to evolve
throughout the project and the most important point is to make sure that it is consistent
and in line with the design and implementation work being described in the other technical
work packages, as well as with the early pilot activities of the project. This deliverable acts
as the reference point for the actual development of the platform and offers a shared and
common background for the Consortium participants on the envisaged technologies that
are necessary to build such a platform.