

Continuous Security and Privacy-by-Design Mechanisms – Early Release
Deliverable D4.1

Editors: Manos Papoutsakis, Kyriakos Kritikos, Kostas Magoutis
Reviewers: Demetris Trihinas (UCY), Giannis Ledakis (Ubitech)
Date: 30 March 2018
Classification: Public


Version History

#   Name                 Partner   Description
1   Manos Papoutsakis    FORTH     Table of Contents (ToC), partner contribution assignment, text on Introduction, Section 2.2, Section 3.2, and Chapter 5
2   Kyriakos Kritikos    FORTH     Text and editing pass on Section 3.1, Chapter 6, Conclusions
3   Panagiotis Gouvas    UBITECH   Chapter 2.1 & 4
4   Kostas Magoutis      FORTH     Finalized Executive Summary, introduction, conclusions, and general editing pass


Contents

1 INTRODUCTION
1.1 Document Purpose and Scope
1.2 Document Relationship with other Project Work Packages
1.3 Document Structure
2 STATE OF THE ART AND KEY TECHNOLOGY AXES
2.1 Authorization
2.2 Perimeter Security
2.3 Vulnerability Assessment
2.3.1 Vulnerability Modelling, Classification & Storage
2.3.2 Vulnerability Scanning and Risk Assessment
2.3.3 Work Directions
2.4 Model Driven Security
3 UNICORN SECURITY AND PRIVACY ARCHITECTURE
3.1 Security Meta-Model
3.2 Architecture
3.2.1 Security Flow
3.2.2 Vulnerability Assessment Flow
4 PRIVACY-BY-DESIGN MECHANISMS
4.1 Requirements and User Roles
4.2 Reference Architecture
4.3 Exposed Functionality
4.4 Implementation
4.5 Interaction with other Unicorn Services and Components
5 PERIMETER SECURITY
5.1 Requirements and User Roles
5.2 Exposed Functionality
5.3 Reference Architecture
5.4 Implementation
5.4.1 Docker Compose approach
5.4.2 Prototype
6 VULNERABILITY ASSESSMENT
6.1 Requirements and User Roles
6.2 Exposed Functionality
6.3 Reference Architecture
6.4 Implementation
6.5 Evaluation
6.5.1 Comparative Vulnerability Database Evaluation
6.5.2 Experimental Vulnerability Scanning Tool Evaluation
7 CONCLUSIONS
8 REFERENCES
APPENDIX A


List of Figures

Figure 1: Unicorn Reference Architecture
Figure 2: ABAC Indicative Information Flow
Figure 3: Security Model
Figure 4: Model-driven security enforcement and vulnerability assessment architecture
Figure 5: Security Flow
Figure 6: Vulnerability Assessment Flow
Figure 7: Policy Engine Components
Figure 8: Usage of XML Artefacts
Figure 9: Sample XSD of Policy
Figure 10: Sample XSD of Request
Figure 11: Sample XSD of Response
Figure 12: Expert System Basic Components
Figure 13: Forward Chaining Execution Flow
Figure 14: Perimeter Security reference architecture
Figure 15: Security configuration model - IDS configuration Details
Figure 16: Preliminary implementation on Google Cloud Platform
Figure 17: Application (Web server) deployment across GCP cloud regions
Figure 18: Average CPU utilization of application VMs (workers) under increasing load (JMeter users)
Figure 19: The two different paths in vulnerability addressing
Figure 20: The architecture of the security framework dedicated to vulnerability assessment

List of Tables

Table 1: Privacy by Design Actors
Table 2: Exposed functionality and fulfilment level of Perimeter Security
Table 3: The exposed functionality and the level of requirement fulfilment
Table 4: Qualitative Evaluation of Vulnerability Databases
Table 5: The evaluation results concerning scanning mode, accuracy and time
Table 6: Scanning Accuracy Results per Vulnerability Area


Executive Summary

The aim of this Deliverable is to provide a comprehensive overview and report for the early release of the

Continuous Security and Privacy-by-Design Mechanisms of the Unicorn reference architecture. These

mechanisms include Privacy-By-Design and Encrypted Persistency, Perimeter Security, and Vulnerability & Risk

Assessment Mechanisms. The work presented in this report is within the scope of Work Package 4 (WP4).

The Deliverable begins by presenting an overview of the state-of-the-art and key technology axes relevant to

the Unicorn targets of continuous security and privacy by design. We introduce the technologies that are being

used in the context of the project along with their adaptation and integration within the Unicorn architecture.

We present the overall architecture of the security and privacy part of the Unicorn platform, describing an early

release of a meta-model devised to organize concepts, relationships, and relevant information about the

perimeter security and vulnerability assessment design libraries. Features and policies of these design libraries

will be exposed as functionality either at design-time, using annotations, or at run-time, using the service graph.

A core part of this report is devoted to describing the key mechanisms of Privacy-By-Design and Encrypted

Persistency, Perimeter Security, and Vulnerability & Risk Assessment. For each mechanism we present the

requirements specified to drive their design, the roles of involved users, the exposed functionalities, the

reference architecture, and the prototype implementation to date. The deliverable concludes by outlining the

remaining work to be conducted towards the final release of the Unicorn continuous security and privacy-by-

design mechanisms.


Table of Abbreviations

AMI Amazon Machine Image

API Application programming interface

ARP Address Resolution Protocol

CAM Content Addressable Memory

CAPEC Common Attack Pattern Enumeration and Classification

CPE Common Platform Enumeration

CPU Central Processing Unit

CVE Common Vulnerabilities and Exposures

CVSS Common Vulnerability Scoring System

CWE Common Weakness Enumeration

DAQ Data Acquisition library

DDoS Distributed Denial of Service

DoS Denial of Service

ENISA European Union Agency for Network and Information Security

FPVA First Principles Vulnerability Assessment

GPU Graphics Processing Unit

HPI-VDB Hasso Plattner Institut - Vulnerability Database

IDE Integrated Development Environment

IDS Intrusion Detection System

IP Internet Protocol

IPS Intrusion Prevention System

NVD National Vulnerability Database

OSVDB Open Source Vulnerability Database

OVAL Open Vulnerability and Assessment Language

OWASP Open Web Application Security Project

SQL Structured Query Language

UML Unified Modeling Language

VDB Vulnerability Database

VM Virtual Machine

WASC Web Application Security Consortium

XEE XML External Entity


1 Introduction

Deliverable D4.1 provides a comprehensive overview of the early release of the

Continuous Security and Privacy-by-Design Mechanisms, part of the overall Unicorn reference architecture. The

main software components of the Unicorn Reference Architecture that comprise the Continuous Security and Privacy-by-Design Mechanisms are:

Authorization Enforcement

The role of this component is to ensure that requests that are handled by microservices comply with

specific security policies that are defined beforehand. This will be achieved using a holistic authorization

framework which has to be seamlessly integrated with the service graph.

Security Enforcement

The role of this component is to ensure that the traffic exchanged between a service and the outside

world at any time will not harm application instances. This is achieved via information-flow tracking

using an Intrusion Detection System (IDS) deployed within the execution environment of the application

instance. This component sends comprehensive reports to the cloud application administrator

whenever a security incident occurs.

Vulnerability Assessment

The role of this component is to continuously assess and report any vulnerabilities detected that may

impact an application, across a comprehensive range of vulnerability areas. The component creates a joint report evaluating the overall risk at the application level and communicates this

report to the cloud application administrator.

Figure 1 depicts the current version of the Unicorn Reference Architecture, highlighting the components

comprising the aforementioned mechanisms. A Security Service is bundled within each containerized execution

environment by the Unicorn Platform upon deployment. The Security Service is configured according to the

requirements and constraints defined at design time in the application service description via annotations. The

Unicorn Security Service is particularly tailored for containerized environments such as Docker, as well as any

other container format adopting the Open Container Specification. After deployment, events of

interest, such as security and privacy incidents and vulnerability alerts, are automatically published by the

Security Service to the Real-time Notification Service, part of the Unicorn Runtime Enforcement Layer.


Figure 1: Unicorn Reference Architecture
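To make the runtime flow described above more concrete, the following minimal Python sketch illustrates how a bundled Security Service could publish a security incident or vulnerability alert to the Real-time Notification Service. The endpoint, topic names and payload fields are illustrative assumptions, not the actual Unicorn API.

import json
import urllib.request
from datetime import datetime, timezone

# Hypothetical endpoint of the Real-time Notification Service; the actual
# Unicorn API, topics and payload schema are defined by the platform itself.
NOTIFICATION_ENDPOINT = "http://notification-service.local/events"

def publish_security_event(app_id: str, kind: str, details: dict) -> None:
    """Publish a security/privacy incident or vulnerability alert (sketch)."""
    event = {
        "application": app_id,
        "kind": kind,                  # e.g. "ids.alert" or "vulnerability.report"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "details": details,
    }
    request = urllib.request.Request(
        NOTIFICATION_ENDPOINT,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)    # fire-and-forget, for illustration only

# Example: an IDS alert raised inside the application's execution environment.
publish_security_event("shop-backend", "ids.alert",
                       {"rule": "port-scan detected", "severity": "medium"})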

1.1 Document Purpose and Scope

The purpose of this deliverable is to provide a comprehensive overview of the early release of the Continuous

Security and Privacy-by-Design Mechanisms, which are part of the overall Unicorn reference architecture. In this

deliverable, we will give an overview of the Privacy-by-Design and Encrypted Persistency mechanism. This

mechanism is mainly responsible for offering the authorization functionality along with the encryption business

logic. Such business logic comprises the privacy preservation layer of UNICORN.


We will next present the early release of the Security Service being incorporated in the Unicorn platform. We

will present the exposed functionality of this service, the Unicorn components it comprises, and details about

the status of the current implementation of this service, demonstrating how the Security Service can be

containerized, using Docker and Docker Compose. We will also describe the Security Enforcement Service and

Vulnerability Assessment Service of the Unicorn platform, their exposed functionality, reference architecture,

and describe the tools that were chosen to materialize the vulnerability scanning and the risk computation tasks.

Finally, we will present an evaluation of the chosen tools within the current early release of the Unicorn

Continuous Security and Privacy-by-Design mechanisms.

1.2 Document Relationship with other Project Work Packages

This deliverable provides an initial description of main components of the Unicorn platform, such as privacy-by-

design and encrypted persistency, perimeter security and vulnerability assessment mechanisms. In order to reach

this point, business and technical requirements, the reference architecture, and use cases were needed, all of which

were the product of WP1 (D1.1). These products guided the development of the Unicorn components presented

in this document. After the first release of the Unicorn platform, Unicorn Demonstrators and the Performance

Evaluation (WP6) will provide feedback to WP4 as well as other technical work packages.

The components described in this deliverable will be building blocks of the overall Unicorn platform. The

integration of the developed components (subsystems) to form the Unicorn framework and integrated platform

will take place in task T1.4 and will be documented in D1.3.

1.3 Document Structure

The rest of this deliverable is structured as follows:

Section 2 presents the key technology axes and State of the Art in data privacy, intrusion detection/prevention

systems and vulnerability & risk assessment. As far as authorization is concerned, the existing authorization

models are discussed along with their advantages and disadvantages.

We present a brief classification and discussion on the state-of-the-art of intrusion detection/prevention

systems. Regarding vulnerability & risk assessment, we present recent work related to vulnerability modelling,

classification, and storage. We also present directions that guided the design and realisation of the

risk/vulnerability assessment part of Unicorn’s framework.

Section 3 focuses on the security and privacy architecture. First the security meta-model is presented as an

Eclipse Ecore model, along with a documentation about the classes and their attributes. In addition, this section

shows the information flow that governs application deployment along with configuration and runtime

management of intrusion detection and vulnerability assessment.

Section 4 elaborates on the privacy-by-design mechanisms. More specifically, the architectural components that realize the authorization and the encryption functionality are analysed. As already discussed, these components comprise the privacy preservation layer.

Section 5 is devoted to the perimeter security framework part of the Unicorn platform. It focuses on the

reference architecture of perimeter security, describing the internal sub-components of the main components

of the security mechanism and their functionality. Moreover, a reference is made to the way Docker Compose


can be used to deploy the aforementioned components alongside an application. Next, a prototype is described,

along with its architecture and an evaluation.

Section 6 focuses on the vulnerability assessment framework part of the Unicorn platform. It approaches

application protection by scanning an application for a wide range of vulnerabilities and assessing the overall risk arising from these vulnerabilities.

Section 7 provides our conclusions.


2 State of the Art and Key Technology Axes

2.1 Authorization

Several Access Control Mechanisms (ACMs) have been proposed to realize the various

logical access control models. These models provide a framework and set of boundary conditions upon which

the objects, subjects, operations, and rules may be combined to generate and enforce an access control decision.

Each model has its own advantages and limitations. In UNICORN, the ABAC model has been selected mainly

because of its generality and flexibility. However, for the sake of completeness, the most dominant ACMs will

be listed:

MAC/DAC: An early application of logical access control was applied in Department of Defense (DoD)

applications in the 1960s and 1970s with the emergence of the concepts of Discretionary Access Control

(DAC) and Mandatory Access Control (MAC) [34]. In discretionary access control (DAC), the owner of the

object specifies which subjects can access the object. DAC is called discretionary because the control of

access is based on the discretion of the owner. Most operating systems, such as Windows, Linux, Macintosh, and most flavors of Unix, are based on DAC models. In these operating systems, when you

create a file, you decide what access privileges you want to give to other users; when they access your

file, the operating system will make the access control decision based on the access privileges you

created. In MAC, the system (and not the users) specifies which subjects can access specific data objects.

The MAC model is based on security labels. Subjects are given a security clearance (secret, top secret,

confidential, etc.), and data objects are given a security classification (secret, top secret, confidential,

etc.). The clearance and classification data are stored in the security labels, which are bound to the

specific subjects and objects. When the system is making an access control decision, it tries to match

the clearance of the subject with the classification of the object. For example, if a user has a security

clearance of secret, and he requests a data object with a security classification of top secret, then the

user will be denied access because his clearance is lower than the classification of the object. The MAC

model is usually used in environments where confidentiality is of utmost importance, such as a military

institution.

IBAC/ACLs: As networks grew, the need to limit access to specific protected objects spurred the growth

of Identity Based Access Control (IBAC) capabilities. IBAC employs mechanisms such as access control

lists (ACLs) to capture the identities of those allowed to access an object. If a subject presents a

credential that matches the one held in the ACL, the subject is given access to the object. Individual

privileges of a subject to perform operations (read, write, edit, delete, etc.) on an object are managed

by the object owner. Each object needs its own ACL and set of privileges assigned to each subject [36].

In the IBAC model, the authorization decisions are made statically prior to any specific access request

and result in the subject being added to the ACL. For each subject to be placed on an ACL, the object

owner must evaluate the identity, object, and context attributes against the high-order policies that

relate to the object and decide whether to add the subject to the ACL. This decision is static and a

notification process is required for the owner to reevaluate and perhaps remove a subject from the ACL

to represent subject, object, or contextual changes. Failure to remove or revoke access over time leads

to users accumulating privileges.

RBAC: [36,37,38] employs pre-defined roles that carry a specific set of privileges associated with them

and to which subjects are assigned. For example, a subject assigned the role of Manager will have access

to a different set of objects than someone assigned the role of Analyst. In this model, access is implicitly


predetermined by the person assigning the roles to each individual and explicitly by the object owner

when determining the privilege associated with each role. At the point of an access request, the access

control mechanism evaluates the role assigned to the subject requesting access and the set of

operations this role is authorized to perform on the object before rendering and enforcing an access

decision. Note that a role may be viewed as a subject attribute that is evaluated by the access control

mechanism and around which object access policy is generated. As the RBAC specification gained

popularity, it made central management of enterprise access control capabilities possible and reduced

the need for ACLs.

ABAC: ACLs and RBAC are special cases of ABAC. ACLs work on the attribute of “identity”. RBAC works

on the attribute of “role”. The key difference with ABAC is the concept of policies that express a complex

Boolean rule set that can evaluate many different attributes. While it is theoretically possible to achieve

ABAC objectives using ACLs or RBAC (though the administrative work to implement and maintain ABAC

policies with ACLs or RBAC is prohibitive), demonstrating access control requirements compliance is

difficult and costly due to the level of abstraction required between the access control requirements

and the ACL or RBAC model [36]. Another problem with ACL or RBAC models is that if the access control

(AC) requirement is changed, it may be difficult to identify all the places where the ACL or RBAC

implementation needs to be updated.

In general, ABAC avoids the need for capabilities (operation/object pairs) to be directly assigned to subject

requesters or to their roles or groups before the request is made. Instead, when a subject requests access, the

ABAC-compliant engine can make an access control decision based on the assigned attributes of the requester,

the assigned attributes of the object, environment conditions (also expressed as assigned attributes), and a set

of policies that are specified in terms of those attributes and conditions. Furthermore, policies can be created

and managed without directly affecting users and objects, and users and objects can be provisioned without

affecting policies. Moreover, in any non-ABAC multi-organizational access scenario [36],

authenticated access to objects outside of the subject’s originating organization would require the subject’s

identity to be pre-provisioned in the target organization and pre-populated on an access list.

For the sake of clarity, in the remainder of this deliverable we will differentiate between the concepts of ABAC and ABAC

Reference Implementations. Following the strict definition of NIST [39] we will refer to ABAC as the “access

control method where subject requests to perform operations on objects are granted or denied based on assigned

attributes of the subject, assigned attributes of the object, environment conditions, and a set of policies that are

specified in terms of those attributes and conditions”. Following this definition, the terms “attribute”, “subject”,

“object”, “operation”, “policy” and “environmental condition” will be unambiguously used based on the following

definitions:

Attributes are characteristics of the subject, object, or environment conditions. Attributes contain

information given by a name-value pair.

A Subject is a human user or a systemic entity, such as a device, that issues access requests to perform

operations on objects. Subjects are assigned one or more attributes.

An Object is a system resource for which access is managed by the ABAC system, such as devices, files,

records, tables, processes, programs, networks, or domains containing or receiving information. In fact,

an object can be anything upon which an operation may be performed.


An Operation is the execution of a function at the request of a subject upon an object. Operations

include read, write, edit, delete, copy, execute.

Policy is the representation of rules that makes it possible to determine if a requested access should be

allowed, given the values of the attributes of the subject, object, and possibly environment conditions.

Environment conditions represent operational or situational context in which access requests occur.

Environment conditions are detectable environment characteristics that are modelled as attributes.

Environment characteristics are independent of the subject or object, and may include the current time, day of

the week, location of a user, or the current threat level. It should be noted that some ABAC implementations

(e.g. XACML 3.0) allow policy creators to define their own attribute categories extending the set of subject,

resource, action and environment.

Any ABAC system should implement the conceptual flow that is depicted in Figure 2. According to this flow, any subject can perform an access request for a specific operation regarding a specific target object (step 1).

Figure 2 - ABAC Indicative Information Flow

The access request is handled by the ABAC Access Control Mechanism which consults a policy repository (step

2a) in order to obtain the set of attributes that have to be examined in order to reach a decision of “allow” or

“deny”. The attribute examination phase checks subject attributes (step 2b), object attributes (step 2c) and

environmental attributes (step 2d) in order to perform the actual assessment (step 3).
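The following minimal Python sketch illustrates the conceptual flow of Figure 2: the access control mechanism looks up the applicable policy (step 2a), gathers subject, object and environment attributes (steps 2b-2d), and evaluates the rule to reach a decision (step 3). The attribute names and the single policy rule are illustrative only, not part of the Unicorn implementation.

from datetime import datetime

# Illustrative policy repository: managers may read customer records
# during working hours. Real policies would be expressed declaratively
# (e.g. XACML), not as Python lambdas.
policies = [
    {
        "object_type": "customer_record",
        "operation": "read",
        "condition": lambda subj, obj, env:
            subj["role"] == "manager" and 9 <= env["hour"] < 17,
    },
]

def evaluate(subject: dict, operation: str, obj: dict) -> str:
    environment = {"hour": datetime.now().hour}              # step 2d
    for rule in policies:                                     # step 2a
        if rule["object_type"] == obj["type"] and rule["operation"] == operation:
            if rule["condition"](subject, obj, environment):  # steps 2b-2d, step 3
                return "allow"
    return "deny"                                             # default decision

print(evaluate({"role": "manager"}, "read", {"type": "customer_record"}))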


2.2 Perimeter Security

Intrusion Detection and/or Prevention Systems (IDPS) are considered an indispensable tool for the early

detection of hostile activity and prevention of potential damage to applications and their supporting computing

infrastructure. The use of such a system can lead to the timely notification of the system administrator, preventing

the success of an attack.

A taxonomy of IDPSs can be organized in the layers of monitoring, detection, alarm management and reaction

[1]. In the monitoring layer, we find network-based, host-based and application-based IDPSs. Network-based

systems monitor and analyse network traffic and try to identify potential attacks. Host-based systems monitor

the dynamic behaviour and the state of the application system, including the detection of which program

accesses which resources. Application-based systems analyse an application's logs and measure its performance.

The second layer, detection, includes two sub-categories, signature-based intrusion detection and anomaly

detection. According to the first, a set of rules or signatures is used to decide that a given pattern identifies an

intruder. Anomaly detection identifies events that appear to be deviating from normal system behavior, a

technique that is applicable to detecting unknown attacks at different levels. Statistical tests applied to the observed behavior determine whether it is legitimate or not.

The third layer, alarm management, includes two more sub-categories, alarm quality improvement and alarm

correlation. The first sub-category aims to improve the quality of the alert. This is done by adding information

such as vulnerability reports or alert context. The second sub-category aims to identify the incident that caused the generated alerts; this procedure is called correlation.

The fourth layer is the reaction layer. The first sub-category of this layer includes IDPSs with active responses,

which alter the state of the system that they protect, including reconfiguration of the IDPS and/or the firewalls.

The second category includes IDPSs with passive responses, including termination of the connection of the

attacker.

Network intrusion detection and prevention are typically resource-intensive processes. They rely on a set of

rules compared against network packets, where most of the time, each byte of every packet needs to be

processed as part of the string searching algorithm looking for matches among a large set of strings from all

signatures that apply for a particular packet. If a string contained in a packet payload satisfies all the conditions

of one of the rules, an associated action is taken. String comparison, part of network intrusion detection and

prevention, is highly computationally-intensive, accounting for about 75% of the total CPU processing time of

modern network IDSs [4], [5]. Previous work has demonstrated the use of Graphics Processing Units (GPUs) as a means of accelerating the inspection process of an IDS [6]. The use of GPUs in security applications can be found

in other areas such as cryptography [7], data carving [8] and intrusion detection [5], [8]. The authors of [7]

boosted the processing throughput of the Snort IDS tool by a factor of three by offloading the string matching operations to the GPU. There are also other attempts that use specialized hardware

to accelerate the inspection process of intrusion detection systems with the use of content addressable memory

(CAM) [9], [10] or specialized hardware circuits implemented on FPGAs [11]. All of these approaches either have a high cost or are difficult and time-consuming to apply.
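To illustrate why payload inspection dominates CPU time, the following naive Python sketch compares every packet payload against a set of signature strings. The signatures are made-up examples, not real Snort rules; production IDSs use optimised multi-pattern algorithms (e.g. Aho-Corasick) rather than this byte-by-byte scan.

# Naive multi-pattern payload inspection: every byte of every packet is
# effectively compared against many signature strings.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection attempt",
    b"<script>":    "cross-site scripting attempt",
}

def inspect(payload: bytes):
    """Return the descriptions of all signatures found in the payload."""
    return [desc for pattern, desc in SIGNATURES.items() if pattern in payload]

print(inspect(b"GET /index.php?file=../../etc/passwd HTTP/1.1"))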

In Unicorn, we leverage security-by-design aspects in the configuration of the security and privacy enforcing mechanisms, allowing cloud application developers to express security requirements and policies in code


through appropriate model specifications, which are realized through corresponding enforcement

libraries. More specifically, application developers that use the Unicorn framework are able to describe the

security to be applied to their application by adding security annotations in their application model. Regarding

IDS/IPS, they may set the speed, accuracy and cost, the number and the variety of the detection rules and the

action to take upon the detection of an intrusion. The desired security enforcement mechanisms are configured

and deployed in the same execution environment in which the application is running, in the form of a component called a security detector. Detected security incidents trigger automatic responses, such as increasing service capacity to avoid deterioration of service for legitimate users. A regression-based approach [3] will be used to estimate the strength

of an attack so as to more accurately provision service capacity. In Unicorn we leverage Docker Compose as the

technology to deploy and configure the intrusion detection system along with the application. More details

about IDS containerization in Unicorn can be found in section 5.4.1 and Appendix A.
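As a hedged illustration of the annotation-driven configuration described above, the Python sketch below turns a handful of hypothetical design-time annotations (speed/accuracy trade-off, rule sets, reaction) into a configuration object for the bundled security detector. The annotation keys and the resulting structure are assumptions for illustration; the actual Unicorn annotations and the Docker Compose bundling are described in Section 5.4.1 and Appendix A.

# Hypothetical design-time annotations attached to the application model.
annotations = {
    "ids.profile": "balanced",     # trade-off between speed, accuracy and cost
    "ids.rulesets": ["web-attacks", "scan-detection"],
    "ids.on_incident": "notify_and_scale",
}

def build_detector_config(annotations: dict) -> dict:
    """Derive a (sketched) security-detector configuration from annotations."""
    profile = annotations.get("ids.profile", "balanced")
    return {
        "mode": "inline" if profile == "strict" else "passive",
        "rulesets": annotations.get("ids.rulesets", ["default"]),
        "response": annotations.get("ids.on_incident", "notify"),
    }

print(build_detector_config(annotations))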

2.3 Vulnerability Assessment

The state-of-the-art analysis can be separated into two parts. The first deals with analysing work related to

vulnerability modelling, classification & storage. The second deals with analysing work related to vulnerability &

risk assessment. In the end, we supply some directions of work that are derived from the analysis which has

driven the design and realisation of the risk/vulnerability assessment part of our framework.

2.3.1 Vulnerability Modelling, Classification & Storage

Vulnerability Classification.

The authors in [16] identify vulnerabilities that are either intrinsic to cloud computing core technologies or prevalent in state-of-the-art implementations of relevant technologies. They especially supply examples of vulnerabilities for core cloud characteristics, like unauthorised access to the management interface and internet protocol vulnerabilities.

They also claim that the most prevalent cloud-specific vulnerabilities in respective implementations include

injection and weak authentication vulnerabilities. Finally, they present a novel layered cloud reference

architecture in which each component is associated with a set of vulnerabilities that can affect it. In [21], the

authors propose a list of cloud computing threats, classified into cloud security and internet security threats. All

these threats are associated with 6 main security objectives that concern the properties of confidentiality,

integrity, availability, multi-party trust, mutual auditability and usability. This mapping enables assessing the overall risk of the

relevant threats for a specific cloud or cloud-based application. The work of ENISA [1] is extended in [24] via the

proposal of a more elaborate threat and vulnerability set taken from various sources. Threats are classified into

3 groups (network, system/data and organisational) based on the components of the information system

affected. Each threat is associated with some example vulnerabilities as well as mapped to the property affected

(confidentiality, integrity or availability).

Vulnerability Modelling & Standards. MITRE1 has developed the Common Vulnerabilities and Exposures (CVE®2), a dictionary of common names (i.e.,

CVE Identifiers) for publicly known information security vulnerabilities. CVE promotes using common identifiers

to enable sharing data across separate network security databases and tools, while supplying a baseline for evaluating the coverage of an organisation's security tools. This dictionary includes the following main fields for each

vulnerability: its (textual) description and its related references (other URLs identifying it).

1 https://www.mitre.org/
2 https://cve.mitre.org


MITRE has also developed via a community-coordinated effort the Common Weakness Enumeration (CWE)3, a

list of common security weaknesses. CWE can be utilised as a common language, a repository for software

security tools and a baseline for weakness identification, mitigation and prevention. Two viewpoints have been

employed to enable navigating this enumeration: (a) the Research Concepts representation supporting

research over weakness types organised by behaviours; (b) the Development Concepts representation

facilitating organising items according to concepts frequently used or encountered during development.

Seven Pernicious Kingdoms [25] is a taxonomy of common, well-known coding errors that might lead to security

vulnerabilities, organised into 8 error categories: (1) input validation & representation; (2) API abuse; (3) security

features; (4) time and state; (5) errors; (6) code quality; (7) encapsulation; (8) configuration / environment. A

tool for searching this taxonomy is offered at https://vulncat.hpefod.com/en/weakness. OWASP4 Top Ten5 is derived from 40 data submissions from firms specialised in application security and from an

industry survey completed by 515 individuals. Out of these 2 sources, the collected data span vulnerabilities

gathered from hundreds of organizations and over 100,000 real-world applications and APIs. From all these

vulnerabilities, the Top 10 areas are selected and prioritised based on this prevalence data by also considering

consensus estimates of exploitability, detectability and impact. Thus, these top-10 areas map to the most serious

application security risks for a broad spectrum of organisations. These risks are coupled with generic information

concerning their likelihood and technical impact using a simple ratings scheme based on the OWASP Risk Rating

Methodology6. This scheme utilises 4 main dimensions: exploitability, weakness prevalence, weakness

detectability, and technical impact. Each dimension is mapped to a scale of 3 quality values which differ for each

dimension but are associated with the same risk level.
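As a worked illustration of combining the four rating dimensions, the Python sketch below averages the likelihood-related factors (each on a 1-3 scale) and multiplies by the impact level to obtain a coarse risk label. The averaging and the thresholds are assumptions for illustration only, not the official OWASP Risk Rating arithmetic.

def risk_area_score(exploitability: int, prevalence: int,
                    detectability: int, impact: int) -> str:
    """Combine the four Top 10 dimensions (1 = low ... 3 = high) into a label."""
    likelihood = (exploitability + prevalence + detectability) / 3.0
    score = likelihood * impact
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# e.g. an injection-like area: easy to exploit, common, easy to detect, severe impact
print(risk_area_score(exploitability=3, prevalence=2, detectability=3, impact=3))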

Vulnerability Databases.

The National Vulnerability Database (NVD)7 is a SCAP-compliant vulnerability database, integrating vulnerability

information from various interrelated vulnerability standards like CVE, CWE, CPE8, and CVSS9. At the time of writing, the NVD contains 102,129 CVE entries.
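The Python sketch below shows how CVE identifiers, descriptions and CVSS v3 base scores could be extracted from a downloaded NVD JSON data feed. The file name and the field names assume the NVD JSON 1.0 feed schema as available at the time of writing; they should be verified against the current schema before use.

import json

def summarise_feed(path: str):
    """Yield (CVE id, CVSS v3 base score, short description) tuples (sketch)."""
    with open(path, encoding="utf-8") as fh:
        feed = json.load(fh)
    for item in feed.get("CVE_Items", []):             # assumed feed structure
        cve_id = item["cve"]["CVE_data_meta"]["ID"]
        description = item["cve"]["description"]["description_data"][0]["value"]
        score = item.get("impact", {}).get("baseMetricV3", {}) \
                    .get("cvssV3", {}).get("baseScore")
        yield cve_id, score, description[:80]

# Hypothetical usage with a locally downloaded feed file:
# for cve_id, score, summary in summarise_feed("nvdcve-1.0-2018.json"):
#     print(cve_id, score, summary)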

HPI-VDB10 is a comprehensive and up-to-date repository, taking the form of a high performance database,

including a large set of known software vulnerabilities (96,410 at the time of writing). It has been developed by the IT-

Security Engineering Team at HPI in Germany. The vulnerabilities are collected and stored by following a two-

step process: (i) the vulnerability information is initially collected from the internet, like public portals of other

vulnerability databases or software vendors; (ii) the collected information is normalised before it is stored in the

repository. Various API types over this repository are offered for IT developers to enable using this repository

during software development. vFeed11 is a detective and preventive security information repository unifying

information collected from various internet resources. vFeed supports CVE, CWE and OVAL standards as well as

3 https://cwe.mitre.org
4 https://owasp.org
5 https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project
6 https://www.owasp.org/index.php/OWASP_Risk_Rating_Methodology
7 https://nvd.nist.org
8 https://cpe.mitre.org
9 https://www.first.org/cvss
10 https://hpi-vdb.de
11 https://vfeed.io


connects to well-known exploitation databases and frameworks to associate vulnerabilities with the way they

can be exploited. It also supports the CPE (Common Platform Enumeration) dictionary, enabling it to connect

vulnerabilities to the components they concern, i.e., systems, software and packages of more than 190000

products. CVSS standard is also supported to associate vulnerabilities with their risk. Mitigations are offered via

the CAPEC standard.

2.3.2 Vulnerability Scanning and Risk Assessment

The zone-based approach in [14] focuses on risk and performance assessment comprising five main steps: (a)

application of vulnerability detection tools, including Nessus12, NMap13 & Nikto14; (b) collection of all

vulnerabilities and evaluation of respective risk leading to an assessment report production; (c) conducting

vulnerability analysis via vulnerability identification related to process security events that involves the level of

effectiveness in reducing the vulnerabilities; (d) assessing risk by computing the relative degree of risk to the

facility with respect to the expected effect on each critical asset as a function of consequence and occurrence

probability; (e) conducting countermeasure analysis by identifying suitable counter-measures based on the

computed risks and identified vulnerabilities. This approach looks quite interesting as: (a) it combines the use of

different vulnerability detection tools which is a major reason for having a better vulnerability coverage. To

achieve such a coverage, we adopt multiple vulnerability detection tools in our framework; (b) it employs a nice

risk calculation approach related to the actual facility and assets involved which is in line with our argument that

risk assessment should be performed by considering multiple abstraction levels.

As a cloud solution is more vulnerable from inside than outside and a multi-tenant environment raises more

risks than a single-tenant one, a vulnerability detection approach was proposed in [20], able to assess the

vulnerabilities and respective risks by scanning both within the current installation and outside of it. The

assessment is conducted by using the Nessus vulnerability scanner. The proposed approach was applied on both

Windows- and Unix-based VMs. Two main conclusions were derived from this application: (1) Windows VMs are

more vulnerable than Unix ones based on their original configuration; (2) once Windows VMs are hardened via

patching and reconfiguration, the same security level is reached as the one exhibited by Unix VMs. The proposed

approach strengthens the idea that not just the user application must be scanned but also other components,

like the operating system hosting it. As such, there is a need to use vulnerability scanning tools that can cover

detecting vulnerabilities also for such components. This also relates to the fact that possibly multiple

vulnerability scanners need to be employed and not just one as it is expected that the scanners’ capabilities will

vary and there could be a more specialised focus only on certain component types.

As VM templates are widely exploited for VM instantiation, the work in [1] focused on assessing their security

level in the context of one cloud provider, Amazon. As such, a vulnerability assessment over a big number of

public AMIs was conducted, focusing on covering external attacks, malicious image providers and image

sanitisation. This assessment involved using 5 main tools: (1) Nmap for open port detection; (2) Nessus for

conducting vulnerability scanning; (3) special-purpose software for conducting a suite of 24 tests over Amazon

Machine Images (AMIs) organised in 4 main groups; (4) audit tools for discovering well-known rootkits, Trojans, backdoors plus hidden processes and sockets; (5) the ClamAV15 antivirus for malware detection. The assessment

12 https://www.tenable.com/products/nessus/nessus-professional
13 https://nmap.org
14 https://cirt.net/Nikto2
15 https://clamav.net


led to a set of findings, the most important of which are now outlined: (a) there exists outdated software in AMIs

that can lead to vulnerabilities, especially for Windows VMs; (b) some unsolicited connections have been

detected that could signify potential vulnerabilities; (c) leftover keys and backdoors were discovered; (d)

forgotten private data belonging to the users who prepared and published the AMIs were also detected. This is a very

interesting work which signifies that: (1) not only containers of applications should be checked for vulnerabilities

but also the VMs hosting them; (2) a VM template selection process must be employed to have a ranking of VM

templates based on their respective risk. The latter could then enable users to select the most secure VM

templates for deploying their applications.

[15] advocates that VM security in cloud environments can be enhanced by the combined use of 2 components:

(a) Nessus for VM scanning, to discover software that is outdated or affected by vulnerabilities; (b) OpenVAS16 for

conducting periodic or pre-rollout security vulnerability scans. As such, a certain architecture is proposed,

involving the use of adapters to support the configuration and execution of these 2 components as well as the

transformation of their reports in a common format to assist in producing a unified, consolidated vulnerability

assessment report. While Nessus and OpenVAS can be considered as two competing vulnerability scanners, the

proposed approach highlights the idea that vulnerability scanners can be orchestrated together for potentially

reaching a better coverage of the possible application vulnerabilities. Further, the use of adapters to support

the orchestration seems to be a suitable solution that can scale well, especially if we consider that adapters can

provide a uniform abstraction level to the vulnerability scanner orchestrator. Such a solution was more or less

adopted by our implementation, as it will be shown in section 6.3. The Vulcan vulnerability assessment

framework was proposed in [17] which relies on an ontology-based VDB, constructed from NVD. This framework

exhibits powerful reasoning capabilities to search the underlying knowledge base as well as discover new

vulnerabilities from existing ones. It is also configurable to exploit any kind of penetration tool like MetaSploit17.

As demonstrated in that work, the use of ontologies can be quite beneficial to vulnerability assessment. So,

potentially we might resort to coupling existing VDBs with a respective knowledge base to exhibit such

powerful reasoning and discovery capabilities. [22] claimed that current assessment tools cannot

discover all possible vulnerabilities within a distributed system, especially the most critical and complex ones.

As such, the Attack Vector Analyzer automated vulnerability assessment approach was proposed, enabling the discovery of both the middleware components that need to be assessed and the reasons for this assessment. This

approach follows the First Principles Vulnerability Assessment (FPVA) analyst-centric methodology [18] but

attempts to automate its steps. This methodology supports a detailed analysis of those code parts related to key

system resources and their trust relations. It comprises 4 main steps: (a) architecture analysis to identify key

system elements like modules and processes; (b) resource analysis to identify key resources used by these

elements and the operations allowed on these resources; (c) privilege analysis to identify trust relations for

components with special focus on how they are accessed and protected. Such analysis also enables to detect

trust delegations for unveiling which operations a component performs on behalf of another; (d) middleware

source code inspection according to the information collected from the first 3 steps. The methodology’s

automation is increased by covering the manual gap between the third and fourth steps by replacing the

involvement of a security practitioner with a method that covers the needed knowledge (rules, metrics & scores)

by extracting it from already existing vulnerability classification systems. Once this knowledge is produced and

the FPVA graph is generated from the application of the first three methodology steps, an analyzer engine is

16 www.openvas.org
17 https://www.metasploit.org


employed, exploiting both information sources, to supply security alerts on those components that must be

analysed. An interesting conclusion from this work is that vulnerabilities should not be scanned for only within individual

components but also for interactions between application components and between the application and its

external users.

A framework for gathering and correlating information from different sources, such as exploit DBs, malware

signature repositories, and bug tracking systems, is suggested in [26]. The correlated information can then be

utilised for automatically generating new plugins or extending existing ones, with the main benefit that up-to-

date information is used in these plugins to better address new vulnerabilities. The proposed work also suggests

a certain vulnerability lifecycle where the time spaces between the lifecycle activities (discovery, disclosure and

patch time) are associated with different risk levels. In this mapping, the order of a security level seems to

correlate with the capability to effectively address the vulnerability and the time of such a reaction/addressing.

As such, it can then be inferred that current vulnerability scanning tools can be applied only in the initial risk-level period, and are thus not able to cater for pro-activeness. This generates the idea that vulnerability scanning should not

be a one-shot but a continuous process. This is essential as an application system is a live system that can evolve, as can the mechanisms for vulnerability exploitation. Thus, it is of paramount importance that the right

scanning tools are used in the different phases of the application lifetime in accordance with their capabilities

as well as the respective application security requirements that have been posed. As will be explained later

on, such a use is dependent on the respective trade-off that needs to be established between different

conflicting criteria that are involved in the user security requirements.

The QUIRC quantitative risk assessment approach in [21] takes into account the occurrence probability and

severity/impact of the threats identified. To compute the threat severity, guidelines are first supplied prescribing

when a qualitative value should be given to a certain threat. Then, based on these guidelines, a method called

Delphi [19] is used to generate a qualitative impact value of the considered threat, which is subsequently

mapped to a quantitative range (LOW ― 1-5, MODERATE ― 6-10, HIGH ― 11-15). A threat’s occurrence

probability is calculated, by exploiting the SANS report [23] content, as the percentage of times a certain threat

appeared in the examined systems by assuming a uniform probability distribution. Once both risk factors are

computed, the Delphi method is utilised to: (a) compute the weights on six security objectives (see previous sub-

section); (b) calculate the overall risk as the weighted sum of the 6 objective-specific risks, which are in turn

computed as the average risk for the threats mapping to each objective. This advanced approach in risk

calculation could be adopted in our security framework. However, it needs to be updated with the capability to

consider not only the application level, but the whole application hierarchy and the respective risks that might

hold for all the components involved in that hierarchy.
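The following worked Python sketch illustrates the QUIRC-style calculation described above: each threat has an occurrence probability and an impact value (LOW 1-5, MODERATE 6-10, HIGH 11-15), the risk per security objective is the average threat risk, and the overall risk is the weighted sum over the objectives. The objectives modelled, the weights and the threat values are made up for illustration.

# Illustrative threat model: objective -> [(occurrence probability, impact), ...]
threats = {
    "confidentiality": [(0.30, 12), (0.10, 7)],
    "integrity":       [(0.20, 9)],
    "availability":    [(0.40, 14)],
}
weights = {"confidentiality": 0.4, "integrity": 0.3, "availability": 0.3}

def objective_risk(entries):
    """Average risk (probability x impact) of the threats mapped to one objective."""
    return sum(p * impact for p, impact in entries) / len(entries)

overall = sum(weights[obj] * objective_risk(entries)
              for obj, entries in threats.items())
print(round(overall, 2))   # weighted sum over the modelled objectives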

Apart from academic approaches, there also exist both proprietary and open-source tools for vulnerability scanning and risk assessment. It is not within the scope of this deliverable to enumerate and assess all these tools. On

the contrary, we have followed a certain evaluation approach, analysed in section 6.5.2, on the open-source

subset of these tools which attempts to assess the following: (a) their coverage in terms of different vulnerability

areas; (b) their performance and accuracy in vulnerability scanning. The ultimate goal of this evaluation is to

check which tools can be combined to reach a solution that both advances the state-of-the-art and is

also configurable to establish different scanning accuracy and performance levels which can well follow the

security requirements posed by application users.


2.3.3 Work Directions

The careful, extensive and suitable orchestration of different vulnerability scanning tools taken from a certain

portfolio can be considered as one of the main innovations of our proposed security framework in terms of

vulnerability scanning, as there seems to be no existing scanning framework or tool able to exhibit such a feature. In fact, we

can mostly see the combined use of only 2-3 fixed scanners in some approaches. This feature is beneficial as it

enables the scanning functionality to be tuned based on the different kinds of applications / components that

can exist as well as the different requirements that can be posed on them. As section 6.2 will indicate, such a

tuning can be supported by a two-level approach: (a) the generation of the right profiles that can precisely target

the specificities of different applications and their requirements; (b) the dynamic selection of the right scanning

tools from a portfolio based on current application structure and its security requirements.
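The Python sketch below illustrates the two-level tuning idea just described: a scan profile selects scanners from a portfolio by the vulnerability areas they cover, and per-tool adapters normalise their reports into one unified format. The tool names, areas and report fields are illustrative assumptions; the actual Unicorn design is described in Sections 6.2 and 6.3.

class ScannerAdapter:
    """Common interface that hides tool-specific invocation and report formats."""
    name = "generic"
    areas = set()

    def scan(self, target: str) -> list:
        """Return findings as dicts: {"tool", "vulnerability", "severity"}."""
        raise NotImplementedError

class WebScannerAdapter(ScannerAdapter):
    name, areas = "web-scanner", {"web"}
    def scan(self, target):
        return [{"tool": self.name, "vulnerability": "XSS", "severity": "medium"}]

class HostScannerAdapter(ScannerAdapter):
    name, areas = "host-scanner", {"os", "services"}
    def scan(self, target):
        return [{"tool": self.name, "vulnerability": "outdated OpenSSL", "severity": "high"}]

PORTFOLIO = [WebScannerAdapter(), HostScannerAdapter()]

def run_profile(target: str, required_areas: set) -> list:
    """Select scanners covering the requested areas and merge their reports."""
    selected = [s for s in PORTFOLIO if s.areas & required_areas]
    report = []
    for scanner in selected:
        report.extend(scanner.scan(target))
    return report

print(run_profile("http://app.example", {"web", "os"}))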

Another planned innovation concerns the need for introducing a framework able to scan the whole application hierarchy for vulnerabilities. For instance, as indicated in [1][15], the VMs and especially the software that

they run can also be vulnerable, especially when it is outdated. However, apart from considering the whole

hierarchy, we must also focus on the interactions between all these components as there we can also find

potential vulnerabilities (see [22]) which could not be discovered through the components' individual

assessment. Apart from detecting and covering all possible vulnerabilities, we need to also assess the respective

risk. Assessment frameworks like those analysed above ([14][21]) seem to be the right choice but need to be extended to cover the whole application hierarchy, as they seem to be restricted to the higher, global level of the whole application or to a very limited number of levels (at most two).

We must also enable the assessment of VM templates to assist users in the best possible selection of the most

secure VM template for their application deployment and reconfiguration. This could enable focusing more on the internal vulnerability assessment of the application without requiring further checks of the respective VM or OS vulnerabilities, at least for a certain amount of time.

As it seems that outdated software plays a major role in vulnerability exploitation, it is advocated that there is a

need to: (a) check the signatures of source/binary code to inspect that it has not been intentionally altered; (b)

perform static analysis on components to detect both security and normal bugs that might have an effect on

their operation. In our opinion, this necessitates using different kinds of tools, specialised in malware and static

source code analysis. In fact, we rarely see the combined use of such tools in the literature. Added to this is the fact that component bugs might evolve over time and new ones might be discovered, not necessarily by the scanner used by a certain framework. As such, it is not always adequate to use just a static source analysis tool but

complement it with fresh bug information coming from bug tracking systems [26], especially for software re-

used in the context of a certain application. In this way, by collecting such information to complement scanning reports, the application developer will be able to assess: (a) how bug-free the currently used software is; (b) whether there is a need to replace a software component with another, more secure and robust one, which might then affect both the application code and its architecture; (c) whether to migrate to new component versions, when security or normal bugs

are detected and subsequently addressed in these newer versions.
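As a minimal illustration of point (a) above, the sketch below (plain JDK, no Unicorn-specific API) compares the SHA-256 digest of a deployed binary against a digest published by its vendor; full cryptographic signature verification (e.g., GPG) would go beyond this simple check.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

class IntegrityCheck {
    /** Returns true if the file's SHA-256 digest matches the expected (hex-encoded) value. */
    static boolean matchesExpectedDigest(Path binary, String expectedHex) throws Exception {
        byte[] content = Files.readAllBytes(binary);
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(content);
        return HexFormat.of().formatHex(digest).equalsIgnoreCase(expectedHex);
    }
}
```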

Following the idea of retrieving fresh bug information, a scanning framework must also be able to harvest new vulnerabilities once they are detected, even by other frameworks or VDBs. This would make it possible to more rapidly pinpoint the need to detect and address these vulnerabilities, e.g., by introducing new detection plugins. New vulnerabilities might also be accompanied by their detection code, which would allow a new vulnerability to be detected immediately once it is entered in the scanning framework. Complementary to this direction, an ontology-based approach (as in [17]) could be followed for vulnerability specification which, in conjunction with real-time system state information, could enable a scanning framework to reason over and derive new vulnerabilities. As such, the scanning framework does not only rely on external sources but can derive new vulnerabilities which could be made available to the wider community. Such vulnerabilities could then obtain suitable identification through their proposal to respective standards like CVE.

The above direction also showcases that much work remains to be done in addressing vulnerabilities. Until now, the state of the art has mainly focused on detecting and reporting vulnerabilities rather than addressing them. In many cases, mitigations are published for vulnerabilities, but they usually take the form of guidelines rather than automated scripts. As such, great manual effort must be spent to address even an already known vulnerability. This highlights the need to create a knowledge base to which the overall community would contribute vulnerability mitigation scripts. A community-based approach would alleviate the great effort that a single organisation would otherwise have to invest in this matter. Further, such an approach would make it possible to evaluate the different mitigations proposed for each vulnerability, thus giving application developers hints on the optimal ways to harden their applications. Such a knowledge base would then be easy for scanning frameworks to adopt, possibly assisted by application feedback in the form of preferences that would allow the most suitable of the different mitigations of each vulnerability to be selected and then executed. As populating a community-enhanced knowledge base might take a long time, a first step should be to equip vulnerability scanners with the ability to execute mitigation scripts, which could initially be supplied by the application developer or be implemented after browsing mitigation information related to the vulnerabilities. This is a direction that we intend to follow in our security framework.
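A minimal sketch of such a capability is given below: a hypothetical runner that maps CVE identifiers to developer-supplied mitigation scripts and executes the matching script via the operating system shell. The mapping and script locations are assumptions for illustration only.

```java
import java.util.Map;

class MitigationRunner {
    // Developer-supplied mapping from CVE identifiers to local mitigation scripts (illustrative).
    private final Map<String, String> scriptsByCve;

    MitigationRunner(Map<String, String> scriptsByCve) { this.scriptsByCve = scriptsByCve; }

    /** Executes the mitigation script registered for the given CVE, if any; returns its exit code. */
    int mitigate(String cveId) throws Exception {
        String script = scriptsByCve.get(cveId);
        if (script == null) return -1;                  // no mitigation registered for this CVE
        Process p = new ProcessBuilder("/bin/sh", script)
                .inheritIO()                            // surface the script's output in the scanner log
                .start();
        return p.waitFor();
    }
}
```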

A scanning framework must also be extensible. Such extensibility should come in different forms: (a) the framework must be able to incorporate new plugins for vulnerability detection; (b) the framework should be able to incorporate new tools, in case such tools exhibit better scanning performance and accuracy. Concerning (a), suitable mechanisms must be in place to allow users to first implement these plugins accordingly and then register them in the framework. Respective realisations from the current scanning tool market could be examined for this purpose so as to adopt the best possible solution. Concerning (b), there is a need to evaluate the performance and accuracy of scanning tools by exploiting a certain benchmark that could assist in the (semi-)automatic conduction of such an evaluation. Section 6.5.2 will indicate not only that such benchmarking frameworks exist but also that one of them seems to prevail. Once such an assessment is performed and the tool is considered suitable for integration, the framework should provide the right points and mechanisms for this integration. In contrast to vulnerability detection plugins, which can be realised in a concise manner, vulnerability scanning tools are more difficult and complex to handle. The way to approach the integration of such tools is by introducing adapters. In particular, as also advocated in [15], an adapter can intervene between the scanning framework (tool orchestrator) and the tool to support the configuration, execution and report normalisation of this tool based on a suitable and uniform abstraction. The actual implementer of an adapter can vary, depending on who requires the introduction of the tool. However, this approach can scale very well, as a uniform interface can be established that allows any kind of vulnerability scanning tool to be integrated. To this end, this is the integration approach used in our framework for supporting scanning tool orchestration (see section 6.3 for more details).
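The following sketch illustrates what such a uniform adapter contract could look like in Java; the interface and type names are illustrative and do not correspond to the actual Unicorn code base, but they capture the three responsibilities mentioned above (configuration, execution and report normalisation).

```java
import java.util.List;
import java.util.Map;

// Illustrative, tool-agnostic report entry (one finding with its CVE id and CVSS score).
record Finding(String cveId, String description, double cvssScore) {}

/** Uniform contract that each scanning tool adapter implements for the orchestrator. */
interface ScannerAdapter {
    /** Translate the framework-level configuration into tool-specific settings. */
    void configure(Map<String, String> options);

    /** Run the underlying tool against the given target (e.g., a host or component endpoint). */
    void execute(String target) throws Exception;

    /** Convert the tool's native output into the framework's uniform report format. */
    List<Finding> normalizedReport();
}
```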


2.4 Model Driven Security

The need to rapidly prototype cloud applications with solid security features requires security by design [29],

namely specification of security requirements, integration of security solutions, and testing throughout the

application software stack, rather than use of isolated security solutions. There has been prior work in model-

driven cloud application description frameworks [28] that can drive automated cloud application deployment

frameworks, such as Kubernetes [30]. Previous model-driven security work [31] has applied model-driven

engineering techniques towards making systems and infrastructures more secure. That work focuses mostly on

enforcing access control via model transformation from high-level business policies tied to certain system [32]

or business process elements [33], to IT-specific security policies deployed in the fixed security services of the

target infrastructure.


3 Unicorn Security and Privacy Architecture

In this section, we first present the security meta-model that has been developed for specifying the configuration of the security part of the Unicorn platform architecture. Next, we analyse this architecture and the way it exploits this meta-model to fulfil the respective security tasks. This early release of the meta-model initially focuses on concepts, relationships, and relevant information regarding perimeter security and vulnerability assessment design libraries (additional detail is provided in Section 3.3 of Deliverable D2.1, Unicorn Libraries, IDE Plugin, Container Packaging and Deployment Toolset Early Release). The meta-model incorporating privacy and authorization concepts is described in Section 3.4 of D2.1. Features and policies of these design libraries will be exposed as functionality either at design time, using annotations, or at run time, using the service graph. Our current prototype supports simple run-time configuration and adaptation. Extensions to offer design-time annotations and design-library support are planned for the final release.

3.1 Security Meta-Model

The security meta-model was developed with the goal of covering two main aspects: (a) requirements-based configuration (and re-configuration); (b) security-driven reporting. After investigating and collecting the respective modelling requirements, this meta-model was developed as an Eclipse Ecore model (a kind of UML model), depicted in Figure 3. There are three main classes that cover the above two aspects: SecurityConfiguration, SecurityReport and SecurityRule. SecurityConfiguration captures information for configuring two of the three main functionalities of the security & privacy framework, i.e., intrusion detection and continuous risk assessment. In this respect, two different classes were developed to cover the information needs for the configuration of the main framework components realising these two functionalities.


Figure 3: Security Model

The PerimeterSecurityConfiguration class covers the IDS configuration with information that spans the following:

intrusionHandlingType: There are two main categories of intrusion handling tools, namely, intrusion

detection and intrusion prevention tools (IDS & IPS). Tools in the first category detect a possible

intrusion, while tools in the second also react to prevent the intrusion from taking place, typically by

dropping connections or packets. Logically speaking, an IPS might be more appealing to the user but it

may affect application efficiency and/or lead to false positives. Alternatively, the user may prefer to combine an IDS with an adaptation system that controls all security and adaptation-related components and drives application behaviour based on a set of adaptation policies/rules. Thus, users are able to select the appropriate tool category to be enforced over their cloud applications from the IntrusionDetectionSolution enumeration.

flavour: This represents an optimisation alternative with respect to the IDS/IPS configuration. The

parameters considered include detection performance, cost and accuracy. Each of the three flavour

alternatives (see Flavours enumeration) represents a trade-off in optimising two of the above

parameters while reducing the quality of the third. When users select the speed+accuracy alternative,

they declare their preference for high detection speed and accuracy without caring about cost. This

leads to configuring the IDS/IPS to run at full capacity in a resource-rich VM or container. The speed+cost

alternative indicates the user preference for detection speed and low cost, at the expense of accuracy.


The system will then configure the IDS/IPS to run based on a selected set of detection rules favouring

detection speed. By installing IDS/IPS in a resource-limited VM, detection cost can also be minimised.

The accuracy+cost alternative represents user desire to optimise detection accuracy and cost but not

detection speed. This means that IDS/IPS will be configured to run in full capacity (full set of rules) over

a resource-constrained VM, impacting detection speed.

categories: The user can select one or more IDS categories which involve the selection of rules that

target a specific application scenario/use-case. For instance, the server-webapp category configures an

IDS/IPS with a set of rules that optimise intrusion detection over web application servers. The object of

focus could instead be a communication protocol, as in the case of protocol-ftp, or a specific operating

system like Windows, as in the case of os-windows. Thus the selection of a category depends also on

the user application structure and nature. For example, if the user application involves a web server and

runs over Windows, the user could select to apply rules mapping only to this component and respective

OS.

location: This indicates whether the IDS/IPS should be run for each application component or configured

in the application load balancer, such that only one instance of it needs to be installed. The first option

is costlier but enables control of network traffic at a fine granularity level. The second option can be less

costly but leads to performing detection at a coarser granularity level.

Depending on the user application, combinations of the above parameters lead to different IDS/IPS configurations and capabilities. Next, we highlight the dependencies and correlations between the different parameters. The location affects the flavour: whenever the detection granularity is fine, a high cost could be incurred, which could be resolved by using the cost-optimised flavour alternatives. Which of the two alternatives is chosen depends on user preferences regarding detection accuracy or speed. Location does not affect the IDS categories to be selected, just the way they apply to application components. In particular, if the granularity is coarse, all categories are configured within one IDS/IPS instance; otherwise, a subset of the selected categories is applied depending on the application component on which the IDS/IPS is to be installed.

The selection of categories can also affect the flavour. In particular, when a large number of categories is selected, accuracy should be increased, which then leaves room to optimise either detection speed or cost. Users should select which of the two based on their preferences. Inconsistencies in the values of the different types of configuration parameters can be identified and pointed out to the user so that they can be remedied.

The PerimeterSecurityConfiguration class offers methods that can be called whenever its information needs to be modified on demand, thus leading to the reconfiguration of perimeter security. The modifications allowed

include: (a) turning perimeter security on or off; (b) adding or removing one IDS/IPS category; (c) sending a

report concerning the perimeter security status for the user application; (d) firing a certain security rule.

The RiskAssessmentConfiguration class includes the information needed to configure the way vulnerability detection and the respective risk assessment are performed for the user application. This information spans the following attributes:

frequencyValue: Indicates how frequently risk assessment should be performed, in terms of a value which is accompanied by the next attribute.


frequencyUnit: the time unit of the frequency. Along with the previous attribute, we can determine

precisely the respective frequency of risk assessment execution. For instance, we can indicate that

such an execution should be performed every 2 days (i.e., frequencyValue -> 2, frequencyUnit ->

days).

mode: Explicates the way vulnerability scanning is performed on the user application, i.e., externally, internally or both. Each option has its own pros and cons. For example, external scanning checks how vulnerable the user application is from the outside. This might lead to faster scanning but may not detect all possible vulnerabilities. On the other hand, internal scanning can check the whole host of an application component, which leads to better scanning recall but takes much longer to finish and requires significant resources. Combined internal and external scanning can provide an accurate scanning result but also takes longer to execute. This latter mode of scanning interferes strongly with the application; thus, it is better performed mainly at application deployment time.

accessMode: Indicates the kind of user privileges with which the scanning is performed, with two valid options: scanning is conducted either with normal user privileges or with administrative ones. In the latter case, the scanning has access to sensitive OS files and information, which can be beneficial for the scanning result. However, this might increase the risk of the scanning software being taken over, which could then jeopardise the user application.

componentNames: If this piece of information is not supplied, the scanning concerns the whole user application. Otherwise, the user can select only some application components on which to perform the scanning, for a more focused assessment. This could occur when some components have been deemed vulnerable via external scanning, such that the user also needs to perform internal scanning on them to identify additional vulnerabilities.

accuracyLevel: This is a qualitative attribute enabling the user to determine the level of scanning accuracy. Setting this level high means that more vulnerabilities can be discovered, at the expense of increased scanning time. Setting it low means that fewer vulnerabilities may be discovered but scanning time is reduced and fewer application resources are spent.

performanceLevel: This and the previous attribute express a trade-off between accuracy and performance for the scanning, where performance mainly means scanning time. Setting both to the maximum possible value leads to increased cost, and the user VMs to be examined must have the necessary resources. While the two attributes interfere with each other, they also affect the required user VM characteristics and thus its cost.

RiskAssessmentConfiguration allows modification of the above attributes at application pre-deployment

and run time and thus enables the on-demand adaptation of the scanning configuration.
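To make the above attributes concrete, the sketch below shows what a risk-assessment configuration instance could look like if mirrored as a plain Java object; the class and the example values are illustrative and are not the Ecore-generated meta-model code.

```java
import java.util.List;

// Illustrative mirror of the RiskAssessmentConfiguration attributes described above.
class RiskAssessmentConfigurationExample {
    int frequencyValue = 2;                                  // run the assessment every 2 ...
    String frequencyUnit = "days";                           // ... days
    String mode = "external";                                // external, internal, or both
    String accessMode = "user";                              // normal user or administrative privileges
    List<String> componentNames = List.of("web-frontend");   // empty list => whole application (name is hypothetical)
    int accuracyLevel = 3;                                   // higher finds more vulnerabilities, takes longer
    int performanceLevel = 2;                                // trade-off against accuracy (scanning time)
}
```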

Reports on the results of perimeter security execution and risk assessment are modelled in an abstract class called SecurityReport, with the following attributes common to both report types: (a) the time the report was generated (timestamp); (b) the recipients of the report (a list of users in the system); (c) the report's delivery medium (e.g., email or SMS); (d) the actual address to which the report is sent. A PerimeterSecurityReport is produced whenever there is a violation of a security rule. Apart from referring to this rule, it indicates the IDS category matching it (e.g., server-webapp, indicating that a rule related to web application servers was violated).


A RiskAssessmentReport records the results of a vulnerability scan, which include the overall risk score for the whole application as well as a set of mappings from application components to the vulnerabilities found for them (at the local level). Each vulnerability discovered is represented by the Vulnerability class, which includes the vulnerability's CVE (Common Vulnerabilities and Exposures) id, its textual description, an explanation of the counter-measures to be performed to resolve it, as well as a CVSS (Common Vulnerability Scoring System) score expressing the local risk score mapped to this vulnerability.

Finally, the SecurityRule class represents a security condition-action rule. Depending on the security medium targeted, an action can take the different forms enumerated in SecurityAction, including well-known IDS actions, such as alert and log, as well as actions related to vulnerability/risk assessment, which take the form of reconfiguring this assessment or initiating it on demand. Thus, security rules enable the security system to reconfigure itself on demand depending on the current security situation of the user application. This makes it possible to raise the system's security level when certain events occur, such as vulnerability detections. It could also lead to rapid or extensive adaptation actions involving other parts of the application management system. For example, when perimeter security detects a DDoS attack, a rule could enforce that the application scales rapidly to maintain availability.
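A minimal sketch of such a condition-action rule is shown below; the event handling and the ElasticityController interface are hypothetical, but the logic mirrors the DDoS example just mentioned: when perimeter security reports a matching alert category, an elasticity action is requested from the application management side.

```java
// Illustrative condition-action rule: scale out when the IDS reports a DDoS alert.
class ScaleOutOnDdosRule {
    interface ElasticityController { void scaleOut(String application, int extraInstances); }

    private final ElasticityController controller;

    ScaleOutOnDdosRule(ElasticityController controller) { this.controller = controller; }

    /** Evaluate an IDS alert (category taken from the perimeter security report). */
    void onAlert(String application, String alertCategory) {
        if ("ddos".equalsIgnoreCase(alertCategory)) {    // condition
            controller.scaleOut(application, 2);         // action: rapid scaling to maintain availability
        }
    }
}
```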

3.2 Architecture

Figure 4 depicts the information flow that governs application deployment along with the configuration and runtime management of intrusion detection and vulnerability assessment. The application model (top left) describes the application structure and its security configuration. Taking it as input, the Deployment Manager deploys the application (green arrows) as a set of containers created by the Container Bundler. The security configuration information supplied drives the selection of the tools to be installed by the Intrusion and Vulnerability Detectors as well as their configuration.


Security configuration parameters guide the Intrusion Detector in setting the speed, accuracy and cost of

intrusion detection, the number and the variety of detection rules and the action(s) to take upon the detection

of an intrusion. The Intrusion Detector uses the rule sets configured to detect possible attacks and creates a log

every time a rule is fired. The logs are sent to Security Enforcement (red arrows). If a detected attack calls for

IDS reconfiguration, the Security Enforcement undertakes that action. For instance, a response may be to drop

incoming packets from a specific IP address, change the parameters of a specific IDS rule, or call an elasticity

action (e.g., in case of a DDoS attack) to sustain user-perceived response time.

The application model provides input to the Vulnerability Detector to configure vulnerability detection tools, in

terms of the desired detection quality, the allowable detection overhead, and expected detection time at the

global or local (i.e., component) level of the application. It also indicates how often the risk assessment /

vulnerability detection should be performed. The Vulnerability Detector operates on a periodic basis (Figure 4,

blue arrows). Upon receiving the detected vulnerabilities from the orchestrated tools (in case more than one

tool is selected), it creates a joint report by drawing extra information from the Vulnerability Database. This

report includes critical information about each vulnerability detected, including its CVSS score and

recommended remedies to resolve it, which are essential for risk assessment and vulnerability remediation

purposes.

The Vulnerability Assessment completes the joint report with the evaluation of the overall risk at the application level, and hands it over to the Real-Time Notification component and then to the User Interface of the platform so that an administrator can decide how to deal with the detected vulnerabilities. The Vulnerability Assessment also informs the Analytics component about the evaluated risk to enable further assessment of the current threat level.

Figure 4: Model-driven security enforcement and vulnerability assessment architecture


The current information flow does not cover automated handling of a vulnerability: upon alert for a certain

vulnerability, the platform requires user involvement on a case-by-case basis. Our eventual goal is automated

enforcement, which requires that the application owner specifies appropriate responses (e.g., workflows,

scripts) to be executed by the Vulnerability Detector. We should note that cooperation between the Security

Enforcement and Vulnerability Assessment is supported. For instance, reconfiguration of the IDS could be

triggered when critical vulnerabilities are detected to protect the application until they have been resolved.

3.2.1 Security Flow

Figure 5: Security Flow

Figure 5 is a flow diagram regarding Security Enforcement. The green arrow shows that the Compile, Building & Deployment Enforcement installs the application itself and the Security Service in the appropriate container on the host machine. The Security Service includes the Intrusion Detector component analysed in section 5.3. The Security Service detects possible attacks and creates logs accordingly. The logs that are created are sent to the Security Enforcement, as step 2 in Figure 5 depicts.

Based on the logs, the Security Enforcement generates security reports which include the recipient user, the action that the recipient should take, the rule that was violated and a priority number. Security reports are communicated to the Real-time Notification component, which sends them to the User Interface of the platform. In this way the report reaches the corresponding recipient, who could be the Cloud Application Administrator or the Cloud Application Developer. This path is depicted by the red arrows.

The Security Enforcement component undertakes the re-configuration of the Intrusion Detector in response to detected attacks. For example, we may decide to drop incoming packets from a specific IP address or change the parameters of a specific Snort rule. In such a case, the Security Enforcement communicates these changes directly to the Intrusion Detector (yellow arrow).

Finally, there could be a scenario where a DoS attack is detected and scaling of the application is the preferable solution. For such a solution to be possible, the Security Enforcement must communicate with the Intelligent Decision Module, which is responsible for any changes in the deployment of the application (brown arrow).

3.2.2 Vulnerability Assessment flow

Figure 6: Vulnerability Assessment Flow

The interactions between Unicorn platform components in the context of risk assessment are depicted in the flow diagram of Figure 6. In step 0, annotations and the application model are communicated to the Security Service. We want the tools exploited by this service to be aware of the respective application requirements. Such tools are picked by the Security Service from a certain tool pool. The Vulnerability Database includes all the information needed for specifying known vulnerabilities and assists especially the reporting functionality of the Security Service.

Using all this information, the Security Service picks the right tools, configures them and initiates their execution. Once all vulnerabilities are detected, a report is compiled and sent to the Vulnerability Assessment. There, the overall risk is estimated, and the report is updated accordingly. The enhanced report is sent through the Real-time Notification to the Security & Monitoring in the Cloud IDE Plugin. Security & Monitoring is the sub-component of the Cloud IDE Plugin, part of the Unicorn platform, where monitoring data and security reports are published for the Cloud Application Administrator. More details can be found in D1.2, section 4 [28].

There is another path, where the detected vulnerabilities call for a reconfiguration of the Security Service, which is conducted via the Security Enforcement component. In this path, the reconfiguration is enabled through the introduction of respective security rules which act upon the occurrence of certain vulnerabilities. A third path involves sending the vulnerability information to the Monitoring & Analytics component in order to conduct analytics over the detected vulnerabilities, the result of which is then communicated to the Security & Monitoring.


4 Privacy-by-Design Mechanisms

4.1 Requirements and User Roles

The privacy-by-design functionality involves several roles, each with many responsibilities. Some of these responsibilities may overlap, which might lead to a misleading interpretation of their duties. For the sake of clarity, an actor terminology has been defined which will be respected throughout the entire deliverable. The following table summarizes the roles along with their descriptions.

Table 4.1 – Privacy by Design Actors

Actor Description

Unicorn Administrator Responsible for maintaining the UNICORN ecosystem, i.e. the Context Model, the Annotation libraries and the various certified security Enablers that are used. His/her role is vital to privacy-by-design since the context model plays a critical role in the definition of the security policies.

Application Developer Person who codes and packages an application that will run on a UNICORN-compliant infrastructure. Responsible for a) defining and implementing the service graph, b) defining the Policy Enforcement Points (a.k.a. PEPs) and c) using encryption during data persistence.

Application Operator (DevOps) Person responsible for deploying an application on a UNICORN-compliant infrastructure, ensuring that the service runs reliably and efficiently while respecting the security policies. S/he is also responsible for defining the security policies that bind to the PEPs.

Cloud Application User Person who uses a UNICORN application deployed on an IaaS provider. Most of the authorization requests are issued by this role.

IaaS Provider Organization or service provider that provides raw (virtualized) computing resources (CPU, RAM, storage, network connectivity, etc.) according to a service-level agreement (implicit or explicit) to run applications. Most of the time it is considered completely untrusted.

As can be seen, there are five beneficiaries that relate to the security and privacy functionalities. The role of each of them is analysed in the table above.

4.2 Reference Architecture

The Policy Engine consists of five main components (see Figure 7) that handle access decisions, namely the Policy Enforcement Point (PEP), the Policy Administration Point (PAP), the Policy Decision Point (PDP), the Policy Information Point (PIP), and a Context Handler.


Figure 7 – Policy Engine Components

The functional purpose of the main components is as follows:

The Policy Administration Point (PAP) provides an interface or API to manage the policies that are stored in the repository and provides the policies to the Policy Decision Point (PDP).

The Policy Enforcement Point (PEP) is the interface to the external world. It receives the application-specific access requests and translates them to XACML access control requests; it then denies or allows access based on the result provided by the PDP.

The Policy Decision Point (PDP) is the main decision point for the access requests. It collects all the necessary information from the other actors and yields a decision.

The Policy Information Point (PIP) is the point where the necessary attributes for the policy evaluation are retrieved from several external or internal actors. The attributes can be retrieved from the resource to be accessed, the environment (e.g. time), the subjects, and so forth.

It should be noted that these components are in line with the XACML standard. As already mentioned, XACML uses XSD notation to model the three basic artefacts required in an authorization scenario, i.e. the policy, the request and the response. Thus, as depicted in Figure 8, three types of XML documents are processed or produced by a policy engine in order to reach a decision: Policy.xml, which serializes an actual policy; Request.xml, which serializes an authorization request; and Response.xml, which serializes the output of the engine.


Figure 8 - Usage of XML Artefacts

Part of the XSD schema of the policy is depicted in Figure 9. As shown, a policy contains some informative elements (description, issuer etc.) and some elements that relate to the definition of variables and their usage in policy expressions.

Figure 9 - Sample XSD of Policy

In an analogous way, Figure 10 depicts part of the XSD specification of the request and Figure 11 part of the XSD

specification of the response.


Figure 10 - Sample XSD of Request

Figure 11 - Sample XSD of Response


Delving into the details of the elements is outside the scope of this document. The reader should consult [16]

for more details.
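Purely for illustration, a heavily abbreviated Policy.xml fragment in the style of XACML 3.0 is shown below; namespace declarations and several mandatory elements are omitted for brevity, and the policy and rule identifiers are invented, so the fragment conveys the shape of a policy rather than a schema-valid document.

```xml
<!-- Abbreviated XACML-style policy (illustrative): permit the "read" action on the protected resource. -->
<Policy PolicyId="example-read-policy" Version="1.0"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-overrides">
  <Description>Permit read requests; all other requests fall through to the default decision.</Description>
  <Target/>
  <Rule RuleId="permit-read" Effect="Permit">
    <Target>
      <AnyOf>
        <AllOf>
          <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
            <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">read</AttributeValue>
            <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action"
                                 AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
                                 DataType="http://www.w3.org/2001/XMLSchema#string"
                                 MustBePresent="false"/>
          </Match>
        </AllOf>
      </AnyOf>
    </Target>
  </Rule>
</Policy>
```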

4.3 Exposed Functionality

The exposed functionality of the privacy-by-design layer is provided in the table below.

Table 4.2 – Privacy by Design Exposed Functionality

Method Description

registerPEP This method registers a Policy Enforcement Point (PEP) to the authorization controller. A PEP becomes effective only if one or more policies are associated with it.

removePEP It removes a Policy Enforcement Point (PEP) from the authorization controller.

attachPolicytoPEP It associates a specific authorization Policy with a Policy Enforcement Point.

addRuleToPolicy It appends a rule (using AND) to a specific Policy. The update takes place even when the Policy is already associated with a PEP.

removeRuleFromPolicy It removes a rule from a specific Policy. The update takes place even when the Policy is already associated with a PEP.

evaluatePoliciesForPePForARequest It infers whether or not a request is allowed for a PEP that is already associated with authorization policies.

encryptObject The method allows the symmetric encryption of a serialized entity using a specific symmetric algorithm and a specific key.

decryptObject The method allows the symmetric decryption of an encrypted serialized entity using a specific symmetric algorithm and a specific key.

4.4 Implementation

As already discussed, the privacy-by-design layer has to provide authorization and encryption functionalities. The most crucial aspect is to efficiently evaluate policies for each request. The heart of the authorization engine is an Expert System. Expert Systems use knowledge representation to facilitate the codification of knowledge into a knowledge base which can be used for reasoning, i.e., we can process data with this knowledge base to infer conclusions. The basic components of an Expert System are presented in Figure 12. The two foundational concepts are Rules and Facts. Rules represent static knowledge (a.k.a. templates) while facts represent dynamic knowledge. In the frame of UNICORN, we rely on the Drools [40] expert system. The reason behind our choice is the outstanding performance of the engine [41], which makes it capable of supporting near real-time decision making.

Production-memory rules correspond to the ABAC policies that are associated with a PEP, while working-memory facts correspond to Attributes that can belong to a Request, a Resource or an Environmental element. Facts may be modified or retracted after their insertion. A system with a large number of rules and facts may result in many rules being true for the same fact assertion; these rules may be in conflict. The Agenda (see Figure 12) manages the execution order of these conflicting rules using a Conflict Resolution strategy.


Figure 12 - Expert System Basic Components18

There are two methods of execution for a rule system: Forward Chaining and Backward Chaining; systems that

implement both are called Hybrid Chaining Systems. Forward chaining is "data-driven" and thus reactionary,

with facts being asserted into working memory, resulting in one or more rules being concurrently true and

scheduled for firing by the Agenda. Backward chaining is “goal-driven” and it is outside the scope of this

document.

Figure 13 - Forward Chaining Execution Flow

In UNICORN, we use forward chaining for the evaluation of each request. For each request, the rules of the associated policies are loaded into production memory, while the attributes that relate to the request are queried and asserted as facts into working memory. As depicted in Figure 13, their combination generates the final result.

18 https://docs.jboss.org/drools/release/5.2.0.Final/drools-expert-docs/html/ch01.html
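As a minimal sketch of how one ABAC-style policy rule could be expressed in Drools' rule language, consider the fragment below; the Request and Attribute fact types are hypothetical placeholders for the working-memory facts described above, not actual Unicorn classes.

```java
// Drools (DRL) sketch: permit a request when the subject carries the required role attribute.
rule "PermitAdminRead"
when
    $req : Request( action == "read", decision == null )
    Attribute( subjectId == $req.subjectId, name == "role", value == "admin" )
then
    modify( $req ) { setDecision( "PERMIT" ) }
end
```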


4.5 Interaction with other Unicorn Services and Components

Privacy-by-design functionality is integrated at both design time and run time. We first analyse the design-time aspects. First of all, in order for the authorization business logic to be effective, an administrator must configure a base model that is used or extended by a DevOps user. Strictly speaking, this model is not an organic part of the privacy-by-design framework since it should be pre-existing; hence it constitutes a first integration point. During the execution of the rules, a specific component referred to as the PIP is responsible for querying all attributes (related to the Request, Resource and Environment) that participate in the policies of a PEP. The “PIP querying business logic” also lies outside the core authorization engine, since attributes exist (i.e. are persisted) in third-party platforms. Therefore, it is a crucial integration point.

Another runtime integration point is “PEP forwarding”. As already discussed, according to the ABAC logic a request has to be handled by an interceptor referred to as a Policy Enforcement Point. Without an ABAC authorization layer, the business logic that infers whether a request is allowed or denied would have to be hardcoded in the logic of the application. In the UNICORN approach, however, the request must be redirected to the Policy Evaluation Engine without the application being aware of it.

Finally, regarding encryption services, an application developer should be able to use a Secure Secret Storage engine in order to protect the sensitive keys that are used during symmetric encryption/decryption (e.g. Vault19). The integration of such an engine is optional, yet it boosts the security guarantees.

19 https://www.vaultproject.io
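As an illustration of the encryptObject functionality, the sketch below performs AES-GCM encryption of a serialized entity using the standard Java crypto API; in a hardened deployment the raw key would be fetched from a secret store such as Vault, a step that is only hinted at in a comment here, and all names are illustrative.

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

class ObjectEncryption {
    private static final int GCM_TAG_BITS = 128;
    private static final int IV_BYTES = 12;

    /** Encrypts a serialized entity with AES-GCM; returns the IV followed by the ciphertext. */
    static byte[] encryptObject(byte[] serializedEntity, byte[] rawKey) throws Exception {
        SecretKey key = new SecretKeySpec(rawKey, "AES");   // rawKey would come from a secret store (e.g., Vault)
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(GCM_TAG_BITS, iv));
        byte[] ciphertext = cipher.doFinal(serializedEntity);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```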


5 Perimeter Security

5.1 Requirements and User Roles

The first user role that we should mention here is that of the Cloud Application Owner. This role provides the high-level application requirements regarding the security level of the application or its operational cost. These requirements are translated into a specific configuration of Perimeter Security. The Cloud Application Product Manager defines the cloud application architecture and implementation plan and is also responsible for packaging the cloud application and enriching the deployment assembly with runtime enforcement policies. As a result, this role is the one that defines the PerimeterSecurityConfiguration part of the application’s security model, mentioned in chapter 3, taking decisions such as (a) turning perimeter security on or off; (b) adding or removing an IDS/IPS category; (c) sending a report concerning the perimeter security status of the user application; and (d) firing a certain security rule.

Next, the Cloud Application Administrator, as the role that ensures that the application runs reliably and efficiently while respecting the defined policies and constraints, is the one who will be informed in case a perimeter security incident occurs. He might, for example, decide to stop part of the running application, or the application as a whole, in order to take actions regarding the detected intrusion. The aforementioned actions may include the mobilization of other DevOps team roles such as the Cloud Application Developer and the Cloud Application Tester. The Cloud Application Developer is the one who develops the application and may need to take countermeasures by changing the way the application works. In such a case, the Cloud Application Tester may need to retest the changed application before it is put into production mode again.

The Unicorn platform provides Perimeter Security, which must respect certain requirements. Based also on the scenarios mentioned in the last two paragraphs, which explain how the user roles are involved, the respective requirements are the following:

R1: Capability to cover different categories of possible attacks. Such categories are created based on the protocol, the file type, etc. in question.

R2: Capability to create a report rich enough to include all the necessary information needed by the corresponding recipient (one of the Unicorn roles) in order to react accordingly.

R3: Capability to configure the Intrusion Detector tool during runtime in terms of creating or modifying a specific rule.

R4: Capability to create the appropriate input for the Intelligent Decision Module in order to cause changes in the deployment of an application.

5.2 Exposed Functionality

With the requirements mentioned in the previous section in mind, we have been developing Perimeter Security as part of the Unicorn platform. Its functionality includes the following:

Functionality | Fulfilment level | Requirement

Capability to cover different categories of possible attacks | The mechanism for selecting the categories has been decided but the categories themselves are not fixed yet | R1

Capability to create a report rich enough to include all the necessary information | Able to draw the needed information from Intrusion Detection but the actual form of the report is not decided yet | R2

Capability to configure the Intrusion Detector tool during runtime | Able to achieve the initial configuration but not the dynamic one | R3

Capability to create the appropriate input for the Intelligent Decision Module | The form of the input is not yet decided | R4

Table 3: Exposed functionality and fulfilment level of Perimeter Security

5.3 Reference Architecture

Figure 14 depicts the architecture of Perimeter Security of the Unicorn platform. Perimeter Security consists of

two main components: (i) Security Enforcement and (ii) Intrusion Detector.

Figure 14: Perimeter Security reference architecture

The Intrusion Detector is installed in the application container during the deployment of the application itself.

The Compile, Building & Deployment Enforcement module is responsible for this action, based on the application

and security configuration model (arrows 1, 2).

The Intrusion Detector is materialized through the use of an Intrusion Detection System (IDS). Such a system usually focuses on performing real-time traffic analysis and packet logging on Internet Protocol (IP) networks. It can also be configured to run in different modes: (a) logging mode, where the packets of the network are displayed on the console and/or stored in a certain DB or log file; (b) intrusion detection mode, where network traffic is not only monitored but also analysed against a set of rules which are supplied by default by the tool and can be extended by the user. The output of the analysis is then stored or logged, which can be beneficial for remediation purposes; this action is depicted by the Log component in the Intrusion Detector. For instance, a log or DB watcher might observe the analysis result in order to enable the system to adapt, either by taking new measures or by reconfiguring the IDS to block certain network activity.

In fact, the actual parsing of the information stored in a log file or an IDS DB is performed by the Security Enforcement, which enables this component to have control over the remedy actions that can be performed. The Log Parser is the component that undertakes this action. Security Enforcement also has the responsibility of creating a “security report” (see the PerimeterSecurityReport class in section 3.1) for every alert created and communicating it to the Unicorn platform user interface. Another component, named Report Creator, is responsible for putting together all the necessary information so that the “security report” is comprehensive enough.

5.4 Implementation

5.4.1 Docker Compose approach

Among the main objectives of the Unicorn project is to deliver a platform that follows the micro-service architectural paradigm. Such a paradigm dictates the decomposition of a single service into a set of services which are independent but inter-communicate.

Docker Compose is used to support this approach. Docker Compose is a tool for defining and running multi-container Docker applications. A YAML file is used to configure the application’s services, and only a single command is needed for the application owner to create and start all the services from their configuration specification in this file.

A two-step process is needed. First, the environment of the application is defined in a Dockerfile. This is a text file with a list of steps to perform in order to create an image. Configuring the operating system, installing the needed software or copying files to the right place could be included in the aforementioned steps. When the Dockerfile is built, an image is created which is a template for creating the environment for the application. A running instance of this image constitutes the container where the services are deployed. In the second step, a docker-compose.yml file is created, where the services that make up the application are defined. In this way the services can be run together in an isolated environment.

Docker Compose was used for the deployment of Snort, the Intrusion Detection System we have decided to use in the Intrusion Detector component of the Perimeter Security of the Unicorn platform. As mentioned in the previous section, the Intrusion Detector is installed in the same container in which the application is deployed, and its installation takes place during the deployment of the application. A figure with snippets of the Dockerfile used for the deployment of Snort and the docker-compose.yml file, along with descriptions of their contents, can be found in Appendix A.
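The actual files are listed in Appendix A; purely as an illustration of the two-step process described above, a compose file and Dockerfile bundling the application together with Snort in a single image could look roughly as follows (image tags, package names and options are assumptions, not the exact Unicorn configuration).

```yaml
# docker-compose.yml (illustrative): one service whose image bundles the application together with Snort
version: "3"
services:
  app-with-ids:
    build: .                 # Dockerfile below installs the application server and Snort in the same image
    ports:
      - "80:80"
    cap_add:
      - NET_ADMIN            # allow Snort to capture traffic inside the container
```

```dockerfile
# Dockerfile (illustrative)
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2 snort     # application server plus the IDS
COPY snort.conf /etc/snort/snort.conf                      # selected rule categories are referenced here
CMD service apache2 start && snort -c /etc/snort/snort.conf -i eth0
```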

5.4.2 Prototype

Our prototype, while still under development, is used as a testbed to validate and evolve the security meta-model outlined in the previous section. A security configuration model conforming to the security meta-model can be created, and based on this model a deployment can be derived. Such a security configuration model, regarding the intrusion detection system, is depicted in Figure 15. As can be seen in the figure, the model indicates that the IDS should be configured at the load balancer level and that it should focus on supporting only rule sets from the server-webapp and server-apache categories. As the focus is only on certain rule sets and there is a need to support fast application scaling, the configuration flavour to be supported is speed+accuracy. This means that apart from packet processing speed, we also need the highest accuracy in the application of the respective IDS rules. Furthermore, the specified model also indicates that the IPS mode of the IDS/IPS should be supported in order to allow the reconfiguration of the system to handle security incidents.

Based on the property values of the security configuration model, we selected Snort, one of the most versatile and widely deployed intrusion detection and prevention systems. References to the two rule categories, server-webapp and server-apache, are included in the Snort configuration file, as sketched below. Since load balancing is offered as a service by the Google Cloud Platform, Snort cannot be installed on the same machine as the load balancer. As a result, Snort is installed on a separate VM placed in front of the load balancer, as we will see in the Prototype Implementation section below.

Figure 15: Security configuration model - IDS configuration Details
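In Snort's main configuration file this restriction amounts to including only the rule files of the selected categories, roughly as sketched below; the exact paths depend on the rule set installed.

```
# snort.conf excerpt (illustrative): enable only the selected rule categories
include $RULE_PATH/server-webapp.rules
include $RULE_PATH/server-apache.rules
```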

What follows is a description of the prototype implementation and of a geo-distributed deployment topology for a specific use case, along with results of a DoS defense via automatic scaling driven by IDS (Snort) feedback. We believe that the benefits demonstrated in this proof-of-concept implementation will be preserved as we better integrate the prototype with the core implementation of the Unicorn platform.

Prototype implementation: Figure 16 depicts the overall prototype implementation. First of all, starting from

the top of this figure, we use JMeter as the traffic generator. JMeter is a Java application designed to simulate a

heavy load on a server, group of servers, network or object in order to test its strength or to analyse its overall

performance under different load types.


Figure 16: Preliminary implementation on Google Cloud Platform

The black box with the rounded corners below represents a VM instance, where Snort, Barnyard2 and MySQL

are installed. Barnyard2 is an open source interpreter for Snort unified2 binary output files. It allows Snort to

write to disk and leaves the task of parsing binary data to a separate process that will not cause Snort to miss

network traffic. The Snort output populates a MySQL database. Both Barnyard2 and MySQL were used for

parsing the logs created by Snort. Since Barnyard2 populated the MySQL database, a convenient way to parse

the logs is to query the database. Within the blue box are resources provisioned at Google cloud platform (GCP),

also leveraging GCP’s Stackdriver monitoring system, and Kubernetes to power the Container Engine, a managed

environment for deploying containerized applications. To deploy our application, we need a GCP Instance Group

based on an Instance Template. In the current figure, the depicted VM instances constitute our Instance Group.

These instances are identical due to the use of the same Instance Template for them. Instance templates define

the machine type, image, zone, labels, and other instance properties. The Google Cloud Platform automatically creates a load balancer on top of an Instance Group whenever an application is deployed.

In order for Kubernetes to handle the scaling of our application, it requires the definition of a certain metric whose measurements need to be supplied. Based on the values of this metric, Kubernetes decides whether more VM instances should be spawned or destroyed (i.e., whether to scale the user application out or in). In this case, we have created a custom metric using Stackdriver. The values of the metric are produced by the VM where Snort is installed, based on the Snort alerts.
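A minimal sketch of how such a metric value could be derived from the Snort alerts stored in MySQL is given below; the event table and its timestamp column follow the usual Snort/Barnyard2 database schema but are treated as assumptions here, and the actual push to the Stackdriver Monitoring API is omitted.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class AlertMetricSource {
    /** Counts Snort alerts recorded in the last minute; this count feeds the custom scaling metric. */
    static long alertsInLastMinute(String jdbcUrl, String user, String password) throws Exception {
        String sql = "SELECT COUNT(*) FROM event WHERE timestamp > NOW() - INTERVAL 1 MINUTE";
        try (Connection c = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement ps = c.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);   // value is then written to the Stackdriver custom metric (not shown)
        }
    }
}
```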


Figure 17: Application (Web server) deployment across GCP cloud regions

Deployment topology: The geo-distributed application deployment topology we have experimented with is depicted in Figure 17. The model describing this application states that the IDS/IPS need not be cost-conscious, and thus motivates a solution that defends against a DoS attack by scaling resources rather than by dropping packets (to avoid false positives in case of a flash crowd). The deployment topology includes an instance of JMeter (a synthetic-load generator), an instance of the Snort IDS, and a set of instances that constitute the cluster where a (Web-server-based) application is deployed. The JMeter instance is of type n1-standard-4 (4 vCPUs, 15 GB memory), created in zone us-west1-a to generate client traffic. The Snort instance is an n1-standard-8 machine (8 vCPUs, 30 GB memory) created in zone us-central1-a. This instance receives the traffic generated by JMeter. It has sufficient resources to ensure that Snort (a highly CPU-intensive activity) is never a bottleneck in our experiments. An alert is created whenever an intrusion is detected and is inserted into a MySQL database. The application cluster consists of n1-standard-1 (1 vCPU, 3.75 GB memory) instances. The GCP load balancer spreads load across the instances, each of which is an apache2 Web server serving a page and performing some amount of computation to increase server-side CPU usage.


Figure 18: Average CPU utilization of application VMs (workers) under increasing load (JMeter users)

Evaluation: At initial deployment, the application cluster consists of a single instance. We simulate a DoS attack

by a fast stream of HTTP requests produced by JMeter, expressed as a number of simultaneous users (our tests

range from 1 to 50 concurrent users). The HTTP packets are analyzed by Snort rules and when a rule is fired, an

alert is created and stored in MySQL. A Java program reads these alerts as well as the information received from

each VM instance of the application cluster to create time series data for our custom metric. The time series

data reach the GCP autoscaler via the GCP Stackdriver Monitoring API. The HTTP requests are then sent on for

processing by the application cluster. Based on the created custom metric, the autoscaler creates new instances

in the managed instance group and adds them to the target pool that process HTTP requests. The load balancer

becomes aware of the new instance(s) and redirects incoming traffic to them too. Figure 18 depicts the average

CPU utilization of each worker (application instance) as client load (intensity of the DoS attack) increases from 3

to 50 concurrent users, triggering an increase in the number of workers from 1 to 8 to maintain end-user

response time below 180ms (a large part of which is intrinsic to the deployment due to geographical distance

between GCP sites). Our evaluation demonstrates a case where high-level security requirements on perimeter

security can be translated into an appropriate selection and configuration of a complex IDS solution (combining

a standard IDS component with autoscaling functionality) in a cross-cloud-region deployment. This solution

actually provides an effective response to DoS attacks while also serving the needs of highly variable workloads.

The prototype is based on standard cloud service components, ensuring portability and applicability across cloud

providers.
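To make the metric-publishing step more concrete, the following minimal Java sketch shows how a custom metric value could be written through the Stackdriver (Cloud) Monitoring API. The metric type name, the project id and the use of a plain alerts-per-minute value are assumptions made only for this illustration and do not reproduce the exact code of the prototype.

import com.google.api.Metric;
import com.google.api.MonitoredResource;
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.CreateTimeSeriesRequest;
import com.google.monitoring.v3.Point;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.monitoring.v3.TimeSeries;
import com.google.monitoring.v3.TypedValue;
import com.google.protobuf.util.Timestamps;
import java.util.Collections;

public class SnortMetricPublisher {

    // Hypothetical names, used only for this sketch.
    private static final String PROJECT_ID = "my-gcp-project";
    private static final String METRIC_TYPE = "custom.googleapis.com/snort/alerts_per_minute";

    /** Writes one data point of the custom metric (the current Snort alert rate). */
    public static void publish(double alertsPerMinute) throws Exception {
        try (MetricServiceClient client = MetricServiceClient.create()) {
            TimeInterval interval = TimeInterval.newBuilder()
                    .setEndTime(Timestamps.fromMillis(System.currentTimeMillis()))
                    .build();
            Point point = Point.newBuilder()
                    .setInterval(interval)
                    .setValue(TypedValue.newBuilder().setDoubleValue(alertsPerMinute).build())
                    .build();
            TimeSeries series = TimeSeries.newBuilder()
                    .setMetric(Metric.newBuilder().setType(METRIC_TYPE).build())
                    .setResource(MonitoredResource.newBuilder()
                            .setType("global")
                            .putLabels("project_id", PROJECT_ID)
                            .build())
                    .addPoints(point)
                    .build();
            CreateTimeSeriesRequest request = CreateTimeSeriesRequest.newBuilder()
                    .setName(ProjectName.of(PROJECT_ID).toString())
                    .addAllTimeSeries(Collections.singletonList(series))
                    .build();
            client.createTimeSeries(request);
        }
    }
}

The autoscaler of the managed instance group can then be configured to target a desired per-instance value of this custom metric, which is what drives the scale-out behaviour shown in Figure 18.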


6 Vulnerability Assessment

6.1 Requirements and User Roles

The whole DevOps team of the respective application can be involved in vulnerability/risk assessment in one way or another, as can the application owner, albeit indirectly. The Cloud Application Owner is the role responsible for supplying high-level requirements. Such requirements can concern the overall security level and operation cost of the application and thus indirectly affect the way vulnerability assessment is performed. These requirements are then mapped to concrete vulnerability (configuration) requirements by the Cloud Application Product Manager, who thus has a direct impact on and control over application vulnerability assessment. This is therefore the role responsible for defining the RiskAssessmentConfiguration part of an application's security model that conforms to the Security Meta-Model of chapter 3.

The other roles in the DevOps team will mainly be responsible for reacting to the discovered vulnerabilities. In particular, the Cloud Application Developer might need to change the application code in case some vulnerabilities or inconsistencies have been detected. Please note that the Vulnerability Detector will also include the execution of tools covering both the static and dynamic analysis of application code, so the application code might also need to be modified to address some quality assurance problems. As addressing certain vulnerabilities might also involve moving to different versions of external components, the Cloud Application Product Manager might also be involved, once the Cloud Application Developer has modified the respective application code to migrate to a new version of these components. The Cloud Application Administrator, apart from observing the application and how well it satisfies the given requirements, might need to cooperate with other DevOps team roles in the context of addressing vulnerabilities. This role might, for example, stop the current application deployment (possibly in an incremental manner with respect to the number of instances that might have already been spawned for it) in order to apply the needed changes to the application code and architecture before bringing the application up again. The Cloud Application Tester would need to retest an application when it has been changed, before it is moved to production. This role might also decide to initiate vulnerability scanning in a non-periodic manner, especially when security incidents are detected and their root causes need to be identified by this scanning. Such security incidents might also be unveiled through a previous (possibly periodic) scanning, but a more fine-grained analysis is needed for them, which means that the previous scanning configuration will need to be modified in order to allow this fine-grained analysis.

Overall, we foresee two different reaction paths in vulnerability addressing, which are depicted in Figure 19. The first path (coloured in green) is a short and non-critical one. In this path, small code changes are performed which might individually affect some (service-based) application components. These changes would then be applied by the Cloud Application Developer and be immediately enforced by the Cloud Application Administrator. We foresee that, as the changes are minor, no thorough testing needs to be performed, so the Cloud Application Tester does not have to be involved.

The second reaction path (coloured in orange) is a wider and more critical one, resembling the path followed when radical application changes are performed, possibly even the migration to a new and safer application version. In this case, all the DevOps team roles will be involved in order to realise their part in the overall reaction process. First, the Cloud Application Developer, in cooperation with the Cloud Application Product Manager, will modify the application code. Then, the Cloud Application Tester will test the application. If any issue is raised, the initial step will be re-conducted. Otherwise, the application can be packaged in production mode and the Cloud Application Administrator will take care of migrating the current application instances to this mode.

Figure 19 - The two different paths in vulnerability addressing

Based on the above analysis, we can infer some particular requirements that should be respected by the vulnerability assessment functionality to be delivered by the Unicorn platform. These requirements are also related to the innovation aspect, especially in terms of the work directions identified in section 2.3.3. These requirements, which unify these two aspects, include the following:

R1: Capability to cover the whole application hierarchy in vulnerability scanning and all respective abstraction levels involved. It is at least obligatory to cover the two highest levels (the overall application and its components).

R2: Capability to cover multiple vulnerability areas in a deep manner.

R3: Capability to cover the detection of vulnerabilities for different kinds of applications or components. This might require creating dedicated profiles for the scanning of different application/component kinds.

R4: Capability to produce a report which covers the risk at different levels of abstraction, i.e., at least at the level of the whole application and at the level of the application components. Risk assessment could go down to lower levels, like those mapping to infrastructural components (e.g., containers or operating systems).

R5: Reporting should be rich in terms of the information covered. Apart from the identified risk level, this should include a proper and standardised identification of the vulnerability (e.g., via adopting standards like CVE) as well as the different ways the respective vulnerability can be resolved.

R6: Capability to tune the configuration of vulnerability assessment on demand in order to cater for: (a) more fine-grained analysis in case certain high-level vulnerabilities are detected; (b) addressing possibly conflicting requirements given by the user.

R7: Optionally, automatic support for vulnerability handling might be realised by exploiting suitable input from the user side (e.g., scripts mapped to certain vulnerabilities).

Please note that requirements R1, R4 and R6 are partially satisfied through the introduction of the security meta-model of chapter 3. This meta-model makes it possible to change the configuration of vulnerability assessment at the modelling side, so it only needs to be coupled with the respective implementation of the mechanisms that support the actual configuration based on the modelled change. Requirements R4 and R6 are partially covered in the sense that the meta-model already covers all the information needed for the reporting. However, catering for the report structuring is different from populating it; thus, appropriate mechanisms dedicated to this population should also be in place.

6.2 Exposed Functionality

Based on the above requirements, a particular vulnerability assessment system has been designed with the focus on delivering functionality that satisfies all of them. In this respect, the following functionality is intended to be realised; for each item we also indicate its current level of fulfilment:

Vulnerability Scanning (addresses R1)
Details: Capability to scan for vulnerabilities across the whole application hierarchy.
Fulfilment level: Partial - coverage of the two higher levels for this release.

Deep Vulnerability Coverage (addresses R2)
Details: Capability to deeply cover the vulnerabilities of an application according to all vulnerability areas.
Fulfilment level: Almost complete - the right tools have been found, but we still need to assess their complementarity and how well they cover the respective vulnerability areas.

Application Vulnerability Profiling (addresses R3)
Details: Capability to create profiles for different applications/components in order to have a more focused vulnerability scanning.
Fulfilment level: For the 2nd release - all selected tools enable the customisation of scanning rules; it remains to investigate and realise how to formulate the respective profiles for each application/component kind across these tools.

Rich Report Production (addresses R4, R5)
Details: Capability to produce a rich vulnerability/assessment report which covers the risk at different abstraction levels, the identified vulnerabilities and the way they can be resolved.
Fulfilment level: Partial - able to draw additional information for each vulnerability from a vulnerability database, but it is still pending how to derive the overall risk and how to unify the reports from the different vulnerability scanning tools.

Risk Assessment Tuning (addresses R6)
Details: Ability to tune the application vulnerability scanning / risk assessment based on the initial application requirements as well as on demand.
Fulfilment level: Partial - able to perform the initial tuning based on application requirements, but the dynamic configuration is not yet implemented.

Automatic Vulnerability Handling (addresses R7)
Details: Capability to automatically react to detected vulnerabilities in order to resolve them.
Fulfilment level: Space has been left for the realisation of this functionality in the designed architecture, but its implementation has not started yet.

Table 4 - The exposed functionality and the level of requirement fulfilment

6.3 Reference Architecture

The internal architecture of the part of the security framework dedicated to vulnerability assessment is depicted in Figure 20. As can be seen, the architecture comprises components situated either in the Unicorn middleware platform or in the user VMs. The components in the platform are mainly utilised for delivering high-level functionality and supplying it to other parts of the platform, while the components in the user VMs deliver low-level functionality. There is also a database layer involving the Vulnerability DB, which represents the means via which vulnerability reports can be enriched with additional vulnerability information.

Figure 20 - The architecture of the security framework dedicated to vulnerability assessment

At the platform level, the Vulnerability Assessment component was already mentioned in chapter 3. This component is split into two sub-components, dedicated to realising its two main functionalities, respectively. The Risk Evaluator is the sub-component responsible for evaluating the overall risk for the whole application based on the risk derived for the application components, and for recording it in the vulnerability report. The Vulnerability Detection Configurator is the sub-component responsible for (re-)configuring the various vulnerability detectors that have been installed in the user VMs to cover the vulnerability scanning of the respective application components. This component is also responsible for initiating the scanning in both a periodic and a non-periodic manner.
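As a minimal illustration of the kind of aggregation the Risk Evaluator performs, the sketch below combines per-component risk scores (e.g., CVSS-derived values in the 0-10 range) into an overall application risk. The class and method names, as well as the simple maximum/average policy, are assumptions made only for this example and do not reflect the final implementation.

import java.util.Map;

/* Illustrative sketch: aggregates per-component risk into an application-level risk. */
public class RiskEvaluator {

    /**
     * @param componentRisks risk score per application component, e.g. derived from the
     *                       CVSS base scores of the vulnerabilities detected in it (0..10)
     * @return overall application risk in the same 0..10 range
     */
    public double evaluateOverallRisk(Map<String, Double> componentRisks) {
        if (componentRisks.isEmpty()) {
            return 0.0;
        }
        // The application is at least as exposed as its riskiest component; the average
        // term slightly raises the result when many components carry non-trivial risk.
        double max = componentRisks.values().stream().mapToDouble(Double::doubleValue).max().orElse(0.0);
        double avg = componentRisks.values().stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        return Math.min(10.0, 0.8 * max + 0.2 * avg);
    }
}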


At the user VM level, the Vulnerability Detector is a sub-component of the Security Service (see section 0) which takes care of orchestrating the execution of the vulnerability scanning tools as well as producing the overall uniform vulnerability report from the reports delivered by these tools. This sub-component might be installed at the container level, to detect vulnerabilities related to application components, as well as at the user VM level, in order to detect vulnerabilities that concern the overall user VM (e.g., OS-related ones). Based on the functionality it is intended to deliver, it is split into the following three sub-components:

Vulnerability Tool: This is a state-of-the-art vulnerability scanning tool (like Nessus or Nmap) which can be configured and invoked by the Tool Orchestrator. Please note that multiple vulnerability tools can be employed in the respective (user) container or VM.

Tool Orchestrator: This is the component responsible for selecting the right vulnerability tools based on the user requirements and for their orchestrated invocation. It is also responsible for unifying the reports produced by the invoked tools. The unified report, as already indicated, is sent for enrichment to the Vulnerability Assessment component and especially the Risk Evaluator.

Adapter: This component encompasses the actual adapter functionality for the proper interaction with a Vulnerability Tool. It thus intermediates between the Tool Orchestrator and the Vulnerability Tool, providing an abstraction interface through which the Tool Orchestrator can configure and invoke vulnerability tools. Furthermore, this is the component responsible for consolidating the reports produced by each Vulnerability Tool according to the format adopted by the security framework. Apart from report consolidation, enrichment is also performed by consulting the Vulnerability Database.
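To make the interplay between the Tool Orchestrator and the Adapters more concrete, the following Java sketch shows one possible shape of the adapter abstraction. The interface and class names, as well as the ScanProfile and Finding types, are illustrative assumptions and not the actual Unicorn code.

import java.util.ArrayList;
import java.util.List;

/* Illustrative types only; the real framework defines its own report format. */
record Finding(String cveId, String component, double cvssScore) {}
record ScanProfile(String applicationKind, List<String> enabledRuleSets) {}

/* Abstraction behind which each vulnerability tool (e.g. ZAP, OpenVAS) is wrapped. */
interface VulnerabilityToolAdapter {
    String toolName();
    void configure(ScanProfile profile);        // map the profile to tool-specific scanning rules
    List<Finding> scan(String targetEndpoint);  // run the tool and normalise its XML report
}

/* Invokes the selected adapters and unifies their findings into a single report. */
class ToolOrchestrator {
    private final List<VulnerabilityToolAdapter> adapters;

    ToolOrchestrator(List<VulnerabilityToolAdapter> adapters) {
        this.adapters = adapters;
    }

    List<Finding> runScan(ScanProfile profile, String targetEndpoint) {
        List<Finding> unified = new ArrayList<>();
        for (VulnerabilityToolAdapter adapter : adapters) {
            adapter.configure(profile);
            unified.addAll(adapter.scan(targetEndpoint));
        }
        return unified; // forwarded to the Risk Evaluator for risk calculation and enrichment
    }
}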

6.4 Implementation

The implementation of all components has reached a certain level, which is also reflected in Table 4. All components have been implemented in Java following a micro-service architecture. The ZapProxy and OpenVAS vulnerability scanning tools have been selected for realising the actual vulnerability scanning functionality. In addition, the FindSecurityBugs and SonarQube tools have been selected for supporting static code analysis; the former works mainly on Java programs, while the latter supports multiple programming languages.

More tools will be included depending on the results of our on-going evaluation, which is presented in the next section of this chapter. Fortunately, all tools produce XML-based reports, so the implementation of the Adapter with respect to report unification was easy. However, the configuration of these tools is more challenging and demanding. In this respect, the current implementation is able to invoke the tools in two main modes: (a) full scanning and (b) partial scanning, the latter through scanning rule profiles created in the form expected by these tools. These two modes also enable an initial, high-level tuning of the scanning based on the user requirements, but more opportunities for (user requirement) optimisation need to be explored through the introduction of additional scanning modes. The vFeed VDB was exploited for the storage and retrieval of standardised vulnerability information, as also explained in section 2.3.1.

ZapProxy: https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
FindSecurityBugs: https://find-sec-bugs.github.io/
SonarQube: https://www.sonarqube.org
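As an illustration of the enrichment step, the sketch below looks up a CVE identifier in a locally downloaded vFeed SQLite database using plain JDBC. The table and column names in the query are placeholders only; the actual vFeed schema must be consulted before reusing this code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

/* Hypothetical sketch of report enrichment against a local vFeed SQLite database. */
public class VulnerabilityEnricher {

    private final String dbPath; // e.g. "/opt/unicorn/vfeed.db" (illustrative path)

    public VulnerabilityEnricher(String dbPath) {
        this.dbPath = dbPath;
    }

    /** Returns additional descriptive information for a CVE id found in a tool report. */
    public String describe(String cveId) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:" + dbPath);
             PreparedStatement stmt = conn.prepareStatement(
                     // Placeholder query: adjust table/column names to the real vFeed schema.
                     "SELECT summary FROM cve_information WHERE cve_id = ?")) {
            stmt.setString(1, cveId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("summary") : "no additional information";
            }
        }
    }
}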


6.5 Evaluation

We have performed two different kinds of evaluation, aiming at examining: (a) which vulnerability database is the most suitable; (b) what the coverage of current vulnerability scanning tools is and whether it makes sense to orchestrate them together to achieve an even better vulnerability coverage. These two kinds of evaluation are analysed in the following two sub-sections, respectively.

6.5.1 Comparative Vulnerability Database Evaluation

As vulnerability databases make it possible to associate important information with the detected vulnerabilities, which can be exploited for reporting and vulnerability mitigation purposes, we have conducted a qualitative evaluation of the vulnerability databases (VDBs) discussed earlier, with the main goal of deciding which one to select. This evaluation relied on different categories of criteria: (a) standards: we evaluate which standards are supported; the higher the number of supported standards, the more appealing the respective VDB; (b) community support: we evaluate whether the VDB has a very active community behind it which enables it to be sustained and extended, the latter being essential for handling the dynamicity of the security world, where new vulnerabilities may be detected even on a daily basis; (c) interfacing: we evaluate what kinds of interfaces the VDB supplies for interacting with it in order to obtain the right vulnerability information; (d) freshness: how often the VDB is updated with new vulnerabilities; (e) information coverage: the level of information covered by the VDB; the higher this level, the more suitable the VDB.

The following table highlights the respective evaluation results for the examined VDBs.

NVD | Standards: CVSS, CPE, CVE, CWE | Community support: NIST | Interfacing: web search, file/data feeds, hierarchical browsing, change log | Freshness: daily | Information coverage: medium

OSVDB | Standards: CVE | Community support: Open Security Foundation, High-Tech Bridge, Risk Based Security | Interfacing: open-source relational database | Freshness: shut down in 2016 | Information coverage: medium

HPI-VDB | Standards: CVE, CWE, CVSS, CPE | Community support: Hasso-Plattner Institute | Interfacing: API, web search | Freshness: daily | Information coverage: good

vFeed | Standards: CVE, CWE, CPE, CVSS, CAPEC, OVAL, WASC | Community support: vFeed.io | Interfacing: API, SQLite database | Freshness: daily | Information coverage: good

Table 5 - Qualitative Evaluation of Vulnerability Databases

CAPEC: https://capec.mitre.org; OVAL: https://oval.mitre.org; WASC: http://projects.webappsec.org/w/page/13246978/Threat%20Classification

From the above table, it can be seen that the best candidates across all criteria are HPI-VDB and vFeed. However, vFeed takes a slight precedence due to its support for more standards. In particular, the adoption of CAPEC and the supply of rich mitigation information also made us lean towards selecting this VDB. Overall, if we neglect the community criterion, this VDB scores well over all the considered criteria, so it constitutes a very good solution. We should also highlight that it now seems to enjoy some widespread adoption, which could be another reason for selecting it.

6.5.2 Experimental Vulnerability Scanning Tool Evaluation

In order to assess the selection of the right vulnerability scanning tools, we have examined different vulnerability scanning benchmarks. From these, we finally selected the OWASP Benchmark for the following reasons: (a) it is quite complete and covers multiple vulnerability areas; (b) it supports the use of certain vulnerability scanning tools out of the box; (c) it has an open-source code base which is updated frequently, in contrast to the other benchmark tools; (d) it employs a well-defined measure to assess the scanning accuracy of the tools and provides code facilitating this measurement. Even for tools not currently supported by the benchmark, it is possible to develop the right adapters for the tools' generated reports in order to enable their assessment. Concerning vulnerability coverage, we should highlight that the benchmark provides a deliberately vulnerable yet modern application, adopting modern design patterns, which suffers from 3000 vulnerabilities spread over 11 vulnerability areas. We should also mention that the benchmark is able not only to report a single overall accuracy measure but also to show the accuracy of a tool with respect to each of the vulnerability areas covered. It supplies a table which splits the results into the different vulnerability areas and shows, for each area, not only the accuracy metric but also the way it has been computed, i.e., from true and false positives and negatives (TP, FP, TN, FN). In our opinion, this is the most essential characteristic, since it enables us to assess the complementarity of the vulnerability scanning tools. For instance, if one tool covers 2 areas and another tool covers 3 other areas, their agglomeration can assist in the coverage of 5 vulnerability areas.
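For reference, the per-area accuracy reported in the rest of this section can be reproduced from the TP/FP/TN/FN counts. The sketch below follows our reading of the OWASP Benchmark convention, which scores a tool by the difference between its true-positive rate and its false-positive rate; the class and method names are ours.

/* Computes a benchmark-style score from the confusion-matrix counts of one vulnerability area. */
public final class BenchmarkScore {

    private BenchmarkScore() {}

    public static double score(int tp, int fp, int tn, int fn) {
        // True-positive rate: fraction of real vulnerabilities the tool reported.
        double tpr = (tp + fn) == 0 ? 0.0 : (double) tp / (tp + fn);
        // False-positive rate: fraction of safe test cases the tool wrongly flagged.
        double fpr = (fp + tn) == 0 ? 0.0 : (double) fp / (fp + tn);
        // The score is the distance from the random-guess diagonal of the ROC plot.
        return tpr - fpr;
    }

    public static void main(String[] args) {
        // Example: 60 TP, 10 FP, 90 TN, 40 FN gives TPR = 0.60, FPR = 0.10, score = 0.50.
        System.out.printf("score = %.2f%n", score(60, 10, 90, 40));
    }
}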

Thanks to the selection of this quite advanced and suitable benchmark, we were able to evaluate the vulnerability scanning accuracy of different tools, especially those already supported by the benchmark. We also attempted to examine the respective trade-offs between accuracy and scanning time by reducing the number of scanning rules employed in the examined tools. The tools inspected included the following:

FindSecurityBugs: This is an extension of the FindBugs tool, which focuses on conducting static analysis to assess the quality of Java source code. The extension enhances FindBugs with the capability to find security-oriented bugs in the source code. FindSecurityBugs does not make it easy to filter the security bug detection rules; the only possibility actually explored was to apply either only the security rules or all the bug detection rules, including those originally constituting the rule base of FindBugs.

SonarQube: This is another static analysis checker which is, however, able to check the quality of code implemented in additional programming languages apart from Java. This tool comes with a more configurable rule base, which allowed us to filter out some security rules, especially those detecting minor security issues. As will also be indicated later on, this filtering enabled reaching better accuracy levels, since the minor security issues that would otherwise be reported map to false positives in the context of the OWASP Benchmark. In other words, these security issues were not initially foreseen as important and are not supposed to be reported by the respective tools. This is an interesting factor related to the OWASP Benchmark, indicating that special handling of each tool needs to take place in order to appropriately resolve the false-positive effect.

FindBugs: http://findbugs.sourceforge.net/


ZapProxy: This is a dynamic analysis tool, the OWASP Zed Attack Proxy (ZAP). In contrast to the previous tools, it is executed over the external interface of an application and not over its source code. It can be executed either remotely or inside the respective user VM. It is considered rich, as it includes various plugins that cover many vulnerability areas. However, some vulnerability areas covered by the tool are not captured by the OWASP Benchmark, which means that: (a) we cannot evaluate this tool over these areas; (b) we need to configure the tool not to use the respective plugins.

The respective evaluation results are depicted in Table 6 and Table 7. The first table reports, for each tool, its operation mode, its accuracy and its scanning time, as well as additional information related to the interpretation of the respective result. The second table reports the accuracy results per tool (along with its mode) over all vulnerability areas.

FindSecBugs (ONLY_SEC mode): accuracy 39.10%, scan time 3:06.

FindSecBugs (FULL mode): accuracy 39.10%, scan time 3:41.

Sonarqube (FULL mode): accuracy 10.00%, scan time 3:37. Found 23,582 vulnerabilities, 1,176 bugs and 15,448 code smells.

Sonarqube (PART_SEC mode): accuracy 19.00%, scan time 3:41. The accuracy increase is due to the following effect: (a) 100% accuracy on two categories; (b) many minor security issues were creating false positives in these two categories, so their removal enabled reaching 100% precision there. 13 security rules were removed for this reason, and the total number of reported vulnerabilities dropped to 242.

ZAPProxy (FULL REMOTE mode): accuracy 15.65%, scan time 12:00:00 (interrupted). The scan reached 48% progress and then the application went down. The poor accuracy is due to many false negatives; had the scan not been interrupted, the accuracy could have been much higher.

ZAPProxy (4 CATEGORIES, REMOTE mode): accuracy 15.44%, scan time 501:40 (minutes:seconds). A reduced configuration (only the most appropriate vulnerability scanners were kept) that still took more than 8 hours to finish. Accuracy was almost the same, but one more category was also found (command injection), possibly because the previous mode was interrupted before it had the chance to apply the corresponding plugin.

ZAPProxy (FULL LOCAL mode): accuracy 7.75%, scan time 2:42:00. Scanning time was much reduced; however, accuracy became worse because the performance on the path traversal and SQL injection vulnerability areas was bad compared to remote scanning.

Table 6 - The evaluation results concerning scanning mode, accuracy and time

Vuln. Area | FindSecBugs | Sonarqube_Full | Sonarqube_Part | ZapFullRemote | ZapPartRemote | ZapFullLocal
Command Injection | 11.20% | 0.00% | 0.00% | 0.00% | 13.49% | 12.70%
Cross-Site Scripting | 37.32% | 0.00% | 0.00% | 55.69% | 55.69% | 8.13%
Insecure Cookie | 100.00% | 55.56% | 100.00% | 55.56% | 55.56% | 55.56%
LDAP Injection | 15.63% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
Path Traversal | 9.57% | 0.00% | 0.00% | 11.28% | 11.28% | 1.50%
SQL Injection | 9.48% | 5.53% | 5.53% | 49.63% | 33.82% | 7.35%
Trust Boundary Violation | 18.60% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
Weak Encryption Algorithm | 54.31% | 50.77% | 100.00% | 0.00% | 0.00% | 0.00%
Weak Hash Algorithm | 68.99% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
Weak Random Number | 100.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
XPath Injection | 5.00% | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%

Table 7 - Scanning Accuracy Results per Vulnerability Area

As can be seen from the above results, the best tool with respect to overall accuracy is FindSecurityBugs. This might seem unexpected at first, as this tool only includes a small set of security rules. However, it can be justified by the following rationale: (a) the tool achieves a high accuracy on many vulnerability areas; (b) it operates directly on the source code, so it is able to indicate with high accuracy whether a certain issue holds or not. In fact, based on this tool category, we were expecting SonarQube to attain a higher accuracy level; however, this was not the case. The main issue that prevented SonarQube from surpassing FindSecurityBugs is that it reports additional minor security issues that were not originally foreseen in the benchmark. After filtering the respective rules leading to the detection of these issues, the accuracy of SonarQube was raised from 10% to 19%, but we could still not reach the accuracy level of FindSecurityBugs. Possibly this is due to the fact that FindSecurityBugs concentrates only on Java source code (which also characterises the considered benchmark) and is thus better optimised for this programming language, whereas SonarQube puts equal focus on different programming languages, so some required scanning rules for Java programs might still be missing.


ZAPProxy performs poorly in terms of both scanning time and accuracy. The former can be easily justified by the fact that ZAPProxy needs to perform a kind of extensive attack on the whole application in order to detect the respective vulnerabilities. In this attack, testing each vulnerability area requires executing a set of tests on each part of the application. As such, this extensive attack takes a considerable amount of time, in contrast to source code inspection, which is much faster. Fortunately, by configuring ZAPProxy to cover only certain vulnerability areas, as well as by running it at the user VM level, we were able to reduce its scanning time significantly. In fact, even in full scanning but local mode, the scanning time is much better than that of (partial) scanning in remote mode. Concerning the scanning accuracy, the bad results can be due to the following reasons: (a) it is harder to find vulnerabilities when the source code is not also inspected; (b) it is more difficult to have very high confidence for each kind of vulnerability; (c) some vulnerability areas were not covered at all. Concerning (c), the problem may be that it is difficult to detect issues in these areas if the source code is not inspected.

Thus, based on the results of Table 6, it seems that there is a trade-off between scanning time and accuracy only when the scanning is dynamic. In fact, it has been shown that by varying the number of scanners/vulnerability rules, we can reach different levels of scanning time and accuracy. However, when the scanning is static, over the application source code, the scanning time is more or less the same, independently of the number of security rules considered. This might also be due to the fact that the number of security rules is small, so using different partitions of the security rule set does not have a significant effect. In this respect, we advocate that static analysis tools should rely on full-mode scanning, while dynamic scanning should be more focused, which maps to the requirement to create suitable scanning profiles covering different kinds of applications or components.

Considering the results of Table 7, we can see that FindSecBugs covers all vulnerability areas, although not always deeply. On the other hand, SonarQube touches 3 vulnerability areas and does well in only two of them. This means that FindSecBugs would be recommended for use when the application is written in Java, due to its coverage, while SonarQube could be exploited for applications written in different programming languages, but it certainly needs to be complemented with another tool in order to cover additional vulnerability areas. ZAP covers 5 vulnerability areas but performs well in terms of accuracy in only 2 of them. Compared to FindSecBugs, it is slightly better in two vulnerability areas (command injection and path traversal) and much better in another two (cross-site scripting and SQL injection). The latter means that the agglomeration of these tools will surely enable better coverage of the vulnerability areas and reach a much higher overall accuracy level. However, we should highlight that some vulnerability areas still need deeper coverage: command injection, LDAP injection, path traversal, trust boundary violation and XPath injection. This signifies the need to consider another scanning tool which complementarily covers these areas more deeply.
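As a simple illustration of such an agglomeration, the sketch below picks, for each vulnerability area, the tool with the highest per-area score from results such as those in Table 7. The class and method names are illustrative only.

import java.util.HashMap;
import java.util.Map;

/* Illustrative sketch: choose, per vulnerability area, the best-scoring tool. */
public class ToolAgglomerator {

    /**
     * @param perToolScores tool name -> (vulnerability area -> score in [0, 100])
     * @return vulnerability area -> name of the tool that scores best on it
     */
    public static Map<String, String> bestToolPerArea(Map<String, Map<String, Double>> perToolScores) {
        Map<String, String> bestTool = new HashMap<>();
        Map<String, Double> bestScore = new HashMap<>();
        perToolScores.forEach((tool, areas) -> areas.forEach((area, score) -> {
            if (score > bestScore.getOrDefault(area, -1.0)) {
                bestScore.put(area, score);
                bestTool.put(area, tool);
            }
        }));
        return bestTool; // e.g. SQL Injection -> ZapFullRemote, Weak Hash Algorithm -> FindSecBugs
    }
}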


7 Conclusions

This deliverable has presented the early release prototype of the Continuous Security and Privacy-by-Design Mechanisms. Security requirements and policies are employed to configure and deploy security components in certain parts of the multi-cloud application, protecting it and enforcing a certain security level. Our model-driven security framework exploits models and requirements to guide an optimal (re-)configuration of the security infrastructure of a multi-cloud application. In contrast to the state-of-the-art, the framework exhibits the following unique capabilities: (a) coverage of multiple security aspects, and not just a very limited number of them; (b) capability to reconfigure itself on demand according to the current context; (c) capability for optimal (re-)configuration of the security infrastructure supporting the user application by considering the current context and the security requirements posed; (d) capability to support the orchestrated execution of different security services, even functionally-equivalent ones (as in the case of vulnerability assessment); (e) suitable selection of the right security services from a portfolio of existing services, in contrast to the state-of-the-art, where the security services to choose from are usually fixed.

The proposed framework comprises three main parts, each focusing on different security aspects. The first part is dedicated to managing access control to the multi-cloud application, in order to cater for the enforcement of security policies, as well as to providing a protection net at the information level, in order to satisfy the privacy requirements expressed over the data manipulated by this application.

The second part focuses on securing the application at the communication level through the employment of a highly-configurable IDS/IPS solution. Configurability is achieved by considering certain configuration requirements expressed via a security model. This configurability concerns the optimal tuning of the IDS/IPS solution by considering different trade-offs between processing speed, accuracy and cost, and also enables more focused protection through the appropriate selection of the most suitable IDS rules according to the actual characteristics of the application to be protected.

The third part focuses on preventive application protection by delivering the capability to scan an application for a wide range of vulnerabilities and to assess the overall risk stemming from these vulnerabilities. Its unique features are that, apart from the wide vulnerability area coverage, it also enables assessing the respective risks across the whole application architecture, while it also supports the suitable dynamic configuration of the scanning according to the security requirements that have been posed. Another interesting capability of the vulnerability assessment part of the security framework is the ability to orchestrate the execution of different vulnerability scanners, which can enable better coverage and accuracy in the scanning, as well as the use of the right tools for different parts of the application architecture. A side-effect of this orchestration, which follows the adapter pattern, is that the vulnerability assessment reports of the orchestrated tools can also be unified. Apart from this report unification, the unified report is also enriched through the retrieval of additional information from a vulnerability database.

Towards the final release of the security framework, additional work remains for each of the three framework parts. In the context of perimeter security and vulnerability assessment, we plan to express the exposed functionality in the form of Java annotations, which will allow application developers to perform security configuration at design time. For perimeter security, this targets optimal IDS configuration in different settings. For vulnerability assessment, the directions to be followed to complete the intended functionality include: (a) scanning coverage across the whole application hierarchy; (b) calculation of the overall application risk; (c) selection of the right scanning tools; (d) creation of suitable scanning profiles to cater for the satisfaction of conflicting user requirements as well as for the different kinds of applications to be supported; (e) the dynamic reconfiguration of vulnerability assessment on demand; (f) the automated addressing of vulnerabilities.
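Purely as an illustration of the annotation-based configuration mentioned above, the sketch below shows what such a design-time annotation could look like; the annotation name and its attributes are hypothetical and do not prescribe the final design.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/* Hypothetical design-time annotation; name and attributes are illustrative only. */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface VulnerabilityScan {
    /** Scanning mode or a named profile for this component kind, e.g. "FULL". */
    String profile() default "FULL";
    /** How often periodic scanning should be triggered, in hours. */
    int periodHours() default 24;
}

// Possible usage on an application component:
// @VulnerabilityScan(profile = "web-frontend", periodHours = 12)
// public class PaymentService { ... }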

While work remains to be done, the proposed design and implementation plan for Continuous Security and

Privacy-by-Design mechanisms offers innovative capabilities that advance the state-of-the-art in this space.




Appendix A

According to the first step of the Docker Compose process, a Dockerfile defining the environment that Snort needs was created. The code snippet below shows parts of such a Dockerfile. As we can see, we use Ubuntu 16.04 as the base image (the FROM ubuntu:16.04 statement); this is the image on which the Snort image is based. All the instructions that follow this line modify the base image and lead us to the new "Snort image". What follows is a RUN instruction, which installs all the prerequisites from the respective Ubuntu repositories. These packages include parsers required by DAQ (the Data Acquisition library), libraries for network traffic capture, regular expression support, and other functionality used by Snort.

After that, we can see the instructions that download and install DAQ and Snort. The required versions of both are stored in appropriate variables, and both software components are downloaded from the Snort website. A COPY instruction follows; with this instruction, Docker copies content from the build context into the image being created. In this case, we copy the Snort configuration file, which contains all the settings that Snort will use.

The next instruction is a CMD one. With this instruction we set the command to be executed when running the image. In this case, we want to run the "snort -T -i eth0 -c /etc/snort/snort.conf" command in order to validate the Snort installation. The last instruction in the file is EXPOSE. This instruction informs Docker that the container we are going to create listens on the specified ports at runtime. By default, a container's ports are not published and incoming requests do not reach it. EXPOSE alone does not actually publish the mentioned ports; to do so, you have to use the -p flag on the docker run command. The EXPOSE instruction serves more as a form of documentation between the person who builds the image and the person who runs the container, about the ports that should be published.

# Snort in Docker

FROM ubuntu:16.04

RUN apt-get update && \

apt-get install -y \

wget \

build-essential \

# Pre-requisites for Snort DAQ (Data AcQuisition library)

bison \

flex \

# Pre-Requisites for snort

libpcap-dev \

libpcre3-dev \

libdumbnet-dev \

zlib1g-dev \

# Optional libraries that improve functionality

liblzma-dev \

openssl \

libssl-dev && \


rm -rf /var/lib/apt/lists/*

# download and install DAQ

ENV DAQ_VERSION 2.0.6

RUN wget https://www.snort.org/downloads/snort/daq-${DAQ_VERSION}.tar.gz \

&& tar xvfz daq-${DAQ_VERSION}.tar.gz \

&& cd daq-${DAQ_VERSION} \

&& ./configure; make; make install

# download and install Snort

ENV SNORT_VERSION 2.9.11.1

RUN wget https://www.snort.org/downloads/snort/snort-${SNORT_VERSION}.tar.gz \

&& tar xvfz snort-${SNORT_VERSION}.tar.gz \

&& cd snort-${SNORT_VERSION} \

&& ./configure; make; make install

# copy our snort.conf into the container

COPY src/snort.conf /etc/snort

ENV NETWORK_INTERFACE eth0

# Validate an installation

# snort -T -i eth0 -c /etc/snort/etc/snort.conf

CMD ["snort", "-T", "-i", "echo ${NETWORK_INTERFACE}", "-c", "/etc/snort/snort.conf"]

# make container listen on port 80

EXPOSE 80

Snippets from Snort Dockerfile

According to the second step of the Docker Compose process, a docker-compose.yml file must be created for the definition of the services that make up the application. A very simple docker-compose.yml file is depicted below. This Compose file defines only one service, called "snort". The "snort" service uses an image called "ubuntu-apache-snort", which can be found under the Docker Hub account "paputsak". The image is first downloaded from the Docker Hub registry and then used to create a container and deploy our service. Moreover, the "snort" service maps port 80 on the host machine to the exposed port 80 of the container.


version: '2'
services:
  snort:
    image: paputsak/ubuntu-apache-snort
    ports:
      - "80:80"

Simple docker-compose.yml file

The last step of the Docker Compose process is to run "docker-compose up". Compose pulls the ubuntu-apache-snort image and runs it in order to create a container and deploy our service.