Federal University of Rio Grande do Norte
Center of Exact and Earth Sciences
Department of Informatics and Applied Mathematics
Graduate Program in Systems and Computing
Academic Master's Degree in Systems and Computing

Self-adaptive Authorisation in Cloud-based Systems
Thomás Filipe da Silva Diniz
Natal-RN
April 2016
Thomás Filipe da Silva Diniz
Self-adaptive Authorisation in Cloud-based Systems
Master's dissertation presented to the Graduate Program in Systems and Computing of the Department of Informatics and Applied Mathematics of the Federal University of Rio Grande do Norte, as a partial requirement for obtaining the degree of Master in Systems and Computing.

Research line: Distributed systems
Supervisor
Nelio Cacho, PhD
Co-supervisor
Carlos Eduardo da Silva, PhD
PPgSC – Graduate Program in Systems and Computing
DIMAp – Department of Informatics and Applied Mathematics
CCET – Center of Exact and Earth Sciences
UFRN – Federal University of Rio Grande do Norte
Natal-RN
April 2016
Cataloguing of the publication at source. UFRN / SISBI / Sectional Library of the Center of Exact and Earth Sciences – CCET.

Diniz, Thomás Filipe da Silva.
Self-adaptive authorisation in cloud-based systems / Thomás Filipe da Silva Diniz. - Natal, 2016. 60 pp.: ill.
Supervisor: PhD Nélio Alessandro Azevedo Cacho.
Co-supervisor: PhD Carlos Eduardo da Silva.
Dissertation (Master's) – Universidade Federal do Rio Grande do Norte. Centro de Ciências Exatas e da Terra. Programa de Pós-Graduação em Sistemas e Computação.
1. Distributed systems – Dissertation. 2. Self-adaptive systems – Dissertation. 3. Access control – Dissertation. 4. Cloud computing – Dissertation. 5. OpenStack – Dissertation. I. Cacho, Nélio Alessandro Azevedo. II. Silva, Carlos Eduardo da. III. Title.
RN/UF/BSE-CCET CDU: 004.75
Master's dissertation entitled Self-adaptive Authorisation in Cloud-based Systems, presented by Thomás Filipe da Silva Diniz and accepted by the Graduate Program in Systems and Computing of the Department of Informatics and Applied Mathematics of the Federal University of Rio Grande do Norte, approved by all members of the examining board specified below:

Dr. CARLOS ANDRE GUIMARÃES FERRAZ
External Examiner (outside the institution)
UFPE – Universidade Federal de Pernambuco

Dr. CARLOS EDUARDO DA SILVA
External Examiner (outside the program)
UFRN – Universidade Federal do Rio Grande do Norte

Dr. THAIS VASCONCELOS BATISTA
Internal Examiner
UFRN – Universidade Federal do Rio Grande do Norte

Dr. NELIO ALESSANDRO AZEVEDO CACHO
President
UFRN – Universidade Federal do Rio Grande do Norte

Natal-RN, 2 May 2016.
Acknowledgments
Firstly, I am grateful to God for the good health and wellbeing that were necessary to complete this work.

I would like to express my sincere gratitude to my supervisors, Nelio Cacho and Carlos Eduardo, for their continuous support of my MSc study and related research, and for their patience, motivation, and immense knowledge. Their guidance helped me throughout the research. I could not have imagined having better supervisors and mentors for my MSc study.

I must express my very profound gratitude to my wife Mileny, for providing me with unfailing support and continuous encouragement throughout my years of study and through the process of researching and writing this thesis. This accomplishment would not have been possible without you. I love you =D.

Finally, I would like to express my gratitude to my mother Maristella, my grandmother Dadá, my sister Lara and my uncles Guilherme and Lúcio; thank you for the support given on this journey.
Life is not about how hard you hit, but how hard you can get hit and keep moving
forward. That is how winning is done.
Balboa, Rocky.
Self-adaptive Authorisation in Cloud-based Systems
Author: Thomás Filipe da Silva Diniz
Supervisor: Nelio Cacho, PhD. Co-supervisor: Carlos Eduardo da Silva, PhD.
Resumo

Although major advances have been made in protecting cloud platforms against malicious attacks, little has been done regarding the protection of these platforms against insider threats. This work addresses this challenge by introducing self-adaptation as a mechanism to handle insider threats in cloud platforms, which is demonstrated in the context of the authorisation mechanisms of the OpenStack platform. OpenStack is a popular cloud platform that relies mainly on Keystone, its identity management component, to control access to its resources. The use of self-adaptation for handling insider threats was motivated by the fact that self-adaptation has been shown to be quite effective in dealing with uncertainty in a wide range of applications. Malicious insider attacks have become a major cause for concern, since users, even ill-intentioned ones, may have access to resources and, for example, steal a large amount of information. The key contribution of this work is the definition of an architectural solution that incorporates self-adaptation into the authorisation mechanisms of OpenStack in order to deal with insider threats. To this end, several insider threat scenarios were identified and analysed in the context of this platform, and a prototype was developed to experiment with and evaluate the impact of these scenarios on the authorisation systems of cloud platforms.

Keywords: Self-adaptive systems, Access control, Cloud computing, OpenStack.
Self-adaptive Authorisation in Cloud-based Systems
Author: Thomás Filipe da Silva Diniz
Supervisor: Nelio Cacho, PhD. Co-supervisor: Carlos Eduardo da Silva, PhD.
Abstract
Although major advances have been made in the protection of cloud platforms against malicious attacks, little has been done regarding the protection of these platforms against insider threats. This dissertation looks into this challenge by introducing self-adaptation as a mechanism to handle insider threats in cloud platforms, which is demonstrated in the context of OpenStack authorisation. OpenStack is a popular cloud platform that relies on Keystone, its identity management component, for controlling access to its resources. The use of self-adaptation for handling insider threats has been motivated by the fact that self-adaptation has been shown to be quite effective in dealing with uncertainty in a wide range of applications. Malicious insider attacks have become a major cause for concern, since legitimate, though malicious, users might have access to, and for example steal, a large amount of information. The key contribution of this work is the definition of an architectural solution that incorporates self-adaptation into OpenStack in order to deal with insider threats. To that end, we have identified and analysed several insider threat scenarios in the context of the OpenStack cloud platform, and have developed a prototype that was used for experimenting with and evaluating the impact of these scenarios upon the self-adaptive authorisation system for cloud platforms.

Keywords: Self-adaptive Systems, Access Control, Cloud Computing, OpenStack.
List of figures
1 Authentication, Authorization, Audit . . . . . . . . . . . . . . . . . . . p. 19
2 ABAC Access control mechanisms. . . . . . . . . . . . . . . . . . . . . p. 20
3 OpenStack Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 24
4 OpenStack overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 24
5 MAPE-K Feedback loop . . . . . . . . . . . . . . . . . . . . . . . . . . p. 25
6 OpenStack/ABAC component mapping . . . . . . . . . . . . . . . . . . p. 29
7 Overview of target system adaptation . . . . . . . . . . . . . . . . . . . p. 31
8 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 32
9 Package Diagram of the Prototype. . . . . . . . . . . . . . . . . . . . . p. 36
10 Probe/Monitor workflow . . . . . . . . . . . . . . . . . . . . . . . . . . p. 37
11 Analyse, plan and execute workflow . . . . . . . . . . . . . . . . . . . . p. 38
12 Architecture of our experimental deployment. . . . . . . . . . . . . . . p. 42
List of tables
1 Summary of insider threat scenarios. . . . . . . . . . . . . . . . . . . . . p. 33
2 Summary of responses. . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 33
3 Summary of impacts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 34
4 Analysis of insider abuse scenarios. . . . . . . . . . . . . . . . . . . . . p. 35

5 Controller Performance Metrics . . . . . . . . . . . . . . . . . . . . . . p. 44

6 Elapsed Time for the scenarios . . . . . . . . . . . . . . . . . . . . . . p. 45
Contents
1 Introduction p. 12
1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 13
1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 15
1.3 Work organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 16
2 Background p. 17
2.1 Insider Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 17
2.2 Identity Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 18
2.3 Cloud Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 21
2.3.1 Service Models . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 22
2.3.2 Openstack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 23
2.4 Self-adaptive systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 25
2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 27
3 Adding Self-adaptation to OpenStack Authorization Mechanisms p. 28
3.1 User Access Control in OpenStack . . . . . . . . . . . . . . . . . . . . . p. 28
3.2 Our approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 30
3.3 Insider Attacks Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . p. 32
3.4 Implementation details . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 35
4 Results and Validation p. 41
4.1 Environment description . . . . . . . . . . . . . . . . . . . . . . . . . . p. 41
4.2 Use Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 42
4.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 43
4.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 46
5 Related Works p. 47
6 Conclusion p. 50
6.1 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 50
6.2 Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 51
6.3 Future works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . p. 51
References p. 53
Appendix A -- Scenarios Rules p. 56
1 Introduction
Cloud computing is an ever-evolving paradigm (MELL; GRANCE, 2011). According to (VAQUERO et al., 2008), a cloud is a pool of easily usable and accessible virtualized resources (such as hardware, platforms or services), in which these resources are configured dynamically according to demand. It also features a pay-per-use model, in which the quality of the offered service is guaranteed through agreements.
This paradigm is finding its place in the digital world, to the point that nowadays it is common for standard systems to interact with a cloud platform. Due to the large adoption of this type of service, a large number of cloud providers have appeared on the market. Currently, this kind of service is being adopted on a large scale by different segments of society: from ordinary people, who use public cloud services such as Google Drive or Dropbox in their daily lives, to enterprise segments that prefer to keep a local infrastructure with internal access, characterizing private clouds. Amazon Web Services (https://aws.amazon.com/) and Microsoft Azure (https://azure.microsoft.com), as well as the open-source OpenNebula (http://opennebula.org/), CloudStack (https://cloudstack.apache.org/) and OpenStack (https://openstack.org), are examples of well-known cloud providers today.
OpenStack appears as a set of software tools used to build public or private cloud infrastructures. Currently, it offers various services, such as data storage (Swift), processing (Nova), networking (Neutron), identity management (Keystone), orchestration (Heat) and databases (Trove), among others. Since 2010, OpenStack has been evolving and improving its services each year, involving several companies and big open-source projects, such as Ubuntu, IBM, Red Hat, Huawei, Dell and VMware, among others, thereby establishing itself as one of the most used IaaS platforms today.
Some use cases of OpenStack-based cloud deployments are known, such as CERN (http://docs.openstack.org/openstack-ops/content/cern.html), which has deployed a cloud with 4,700 processing nodes and approximately 120,000 cores. NeCTAR is a research institute whose cloud spans several sites, with approximately 4,000 cores per site (http://docs.openstack.org/openstack-ops/content/nectar_deploy.html). CNC (Cloud Computing for Science, https://cnc.rnp.br/) is another project, sponsored by the Brazilian NREN (National Research and Education Network), that aims to provide a large storage cloud based on OpenStack Swift. CNC has deployed an OpenStack environment that spreads throughout the Brazilian territory and aims to support 10,000 users, adding 10,000 new users per year.
Despite this rapid growth, we believe that aspects related to security and data privacy are challenges that need to be addressed. The Cloud Security Alliance (CSA, https://cloudsecurityalliance.org) and ENISA (European Network and Information Security Agency, https://resilience.enisa.europa.eu) list some security issues in cloud computing, such as data breaches, data loss, unsafe APIs, denial of service, internal attacks and abuse of services (e.g., DDoS). Among the security problems mentioned above, the issue of insider attacks is particularly relevant. According to (SILOWASH DAWN CAPPELLI et al., 2012), although the number of external attacks is higher than that of internal ones, in 34% of the cases internal attacks caused more damage to the organization than external attacks, which account for 31% of the cases. Although those numbers are not entirely related to insider attacks in cloud computing environments, they are an important fact to consider, since in the cloud the damage is potentially bigger. Thus, this work focuses primarily on treating and exploring security aspects related to insider attacks in the context of an OpenStack cloud platform.
1.1 Motivation
When an internal attack takes place, the damage to the organization can be catastrophic, sometimes resulting in financial losses (COLE, 2015). In a cloud computing scenario, this damage can be amplified given the large number not only of files and resources, but also of users who have access to the data.
A famous example of an internal attack took place in July 2010, when an intelligence analyst of the US Army accessed and published more than 250,000 secret documents from the US Department of Defense. Apparently the analyst had access to the system, i.e., he was an authorized user. However, there were insufficient mechanisms to detect that downloading 250,000 documents in a short period of time would characterize abnormal behaviour.
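A self-adaptive authorisation system would need to detect exactly this kind of pattern automatically. As a minimal illustration (this is not the dissertation's prototype; the class name, threshold and window size are invented for the sketch), a sliding-window counter is enough to flag such a burst of downloads:

```python
from collections import deque
import time


class DownloadRateMonitor:
    """Flag a subject whose download count within a time window
    exceeds a threshold. Purely illustrative of the abnormal-behaviour
    check discussed in the text."""

    def __init__(self, max_downloads, window_seconds):
        self.max_downloads = max_downloads
        self.window = window_seconds
        self.events = deque()  # timestamps of recent downloads

    def record(self, timestamp=None):
        """Register one download; return True if behaviour is abnormal."""
        now = time.time() if timestamp is None else timestamp
        self.events.append(now)
        # Drop events that fell out of the observation window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_downloads
```

With, say, a limit of 100 downloads per 60 seconds, a burst of thousands of requests would be flagged on the 101st event, while the same volume spread over days would pass unnoticed by this particular check.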
CERT, a division of the Software Engineering Institute at Carnegie Mellon University, maintains a document that reports more than 700 cases of attacks by companies' internal employees and their consequences. One reported case happened at a mortgage company. The organization notified an employee who worked as a software engineer on Unix systems that he would be fired because of an error in a system development. However, after the notification, he was allowed to finish his working day. Maliciously, he ran an algorithm that disabled monitoring systems and alerts, deleted the credentials of more than 4,000 of the organization's servers, and deleted all data, including backups.
In general, many organizations have several processes that rely on information systems and computer infrastructure. These systems rely on human labour for activities related to monitoring and auditing malicious behaviour (ANALYTICS, 2008). In addition, a human system administrator is not able to monitor a large number of requests in the system, even if they are similar and occur in a short period of time. If a situation like this happens, it must be immediately identified and then immediately mitigated to prevent further damage to the systems.
Some efforts have been made to mitigate the occurrence of insider attacks (DUNCAN et al., 2013; GARKOTI; PEDDOJU; BALASUBRAMANIAN, 2014; STOLFO; SALEM; KEROMYTIS, 2012), but few of them consider that such an attack might come from a malicious user with authorized access to the data.

In an internal attack, a user who has or had authorization to access the cloud system and its data abuses (BAILEY; CHADWICK; LEMOS, 2014) the service. His intention is to misuse the access he has in order to adversely affect the system, compromising the confidentiality, integrity and availability of data. It is important to note that these insider attacks are different from those mentioned in (DUNCAN; CREESE; GOLDSMITH, 2012), which consider system components as malicious agents.
Self-adaptive systems are a good approach for treating these problems due to their efficiency and effectiveness in dealing with uncertainty in a wide range of applications, including some related to user access control (BAILEY; CHADWICK; LEMOS, 2014; PASQUALE et al., 2012; SCHMERL et al., 2014). Self-adaptive systems consist of mechanisms that allow them to change their own structure or behaviour at run time (OREIZY et al., 1999). These changes happen when the system needs to adapt to new requirements or new environmental conditions. IBM decomposes an autonomic element into four main functions: Monitor, Analyse, Plan and Execute. These parts communicate with one another and exchange information through a knowledge base, implementing a feedback control loop known as the MAPE-K control loop (IBM, 2006). In this loop, the Monitor element obtains, aggregates and filters status information from the target system and sends it to the Analyse element. The Analyse element evaluates the data sent by the Monitor in detail, in order to detect the need for adaptation of the target system. Once the need to adapt is detected, the Plan phase builds a sequence of steps with the goal of carrying out the adaptation of the target system. These four phases work together with a component called the knowledge base.
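The four MAPE-K phases can be sketched in a few lines of Python. This is only an illustration of the control-loop structure: the monitored metric, the threshold and the adaptation steps below are invented and are not part of the dissertation's prototype.

```python
class MapeKLoop:
    """Minimal sketch of the MAPE-K feedback loop (IBM, 2006).
    The four phases share a knowledge base; the concrete threshold
    and adaptation steps are illustrative only."""

    def __init__(self, target_system):
        self.target = target_system
        self.knowledge = {}  # shared knowledge base

    def monitor(self):
        # Obtain, aggregate and filter status information from the target.
        self.knowledge['status'] = self.target.read_status()

    def analyse(self):
        # Evaluate the monitored data to detect the need for adaptation.
        rate = self.knowledge['status'].get('requests_per_minute', 0)
        self.knowledge['needs_adaptation'] = rate > 100

    def plan(self):
        # Build the sequence of steps that will adapt the target system.
        if self.knowledge['needs_adaptation']:
            self.knowledge['plan'] = ['restrict_policy', 'notify_administrator']
        else:
            self.knowledge['plan'] = []

    def execute(self):
        # Apply each planned step to the target system.
        for step in self.knowledge['plan']:
            self.target.apply(step)

    def run_once(self):
        self.monitor()
        self.analyse()
        self.plan()
        self.execute()
```

In a real deployment, the monitor would consume audit logs from the authorisation infrastructure, and the execute phase would change policies in it; here a plain object with `read_status` and `apply` methods stands in for the target system.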
An example of such a solution is SAAF, the Self-adaptive Authorisation Framework (BAILEY; CHADWICK; LEMOS, 2014). This framework is capable of modifying security policies in the authorization infrastructure at runtime using self-adaptation mechanisms. SAAF's objective is to monitor the usage of authorization infrastructures, analysing subject interactions, and to adapt the infrastructure accordingly. SAAF was applied in the context of an application called PERMIS. PERMIS has a particular architecture, which makes SAAF specific to it. Thus, applying SAAF to OpenStack would require considerable refactoring of the framework's source code, since OpenStack has a different authorization infrastructure.
1.2 Objectives
Considering the above issues, this study aims to propose an approach for self-adaptation of authorization in OpenStack authorization infrastructures. The solution is based on the MAPE-K (Monitor, Analyse, Plan, Execute) model (IBM, 2006) and uses the Kilo version of OpenStack. To this end, the following specific objectives are listed:

1. Define an architecture for the solution based on MAPE-K concepts, with a focus on the Analysis phase, that is, on identifying abnormal behaviour and providing possibilities to mitigate it. It considers an OpenStack cloud with processing and storage services enabled.

2. Design and implement a prototype as a proof of concept.

3. Identify insider attack scenarios in the OpenStack authorization context, and evaluate the possible responses and the impacts of each response on the cloud environment.
These internal attack scenarios are based on the components of OpenStack authorization. This exercise aims to evaluate and discuss possible scenarios, responses to attacks, and the impacts created by these responses.
1.3 Work organization
This master's dissertation is structured in six chapters, including this introduction. The second chapter presents the main concepts used in this work, covering the following topics: insider attacks, identity management, cloud computing and self-adaptive systems. Chapter 3 begins with an overview of the proposed solution, followed by the description of the insider attack scenarios, and ends with implementation details of the solution. Chapter 4 presents the validation and results of our approach, including the environment description, a concrete use case, experiments and a discussion. Chapter 5 presents a selection of related works. Lastly, we conclude by presenting the main contributions, followed by the limitations and possible future works.
2 Background
This chapter presents the main concepts used throughout this work. Section 2.1 describes some concepts about insider attacks. Section 2.2 introduces concepts related to identity management, with emphasis on the processes related to AAA (Authentication, Authorisation and Audit), exploring the access control models considered in this work (ABAC and RBAC). Section 2.3 presents some cloud computing concepts regarding its service and deployment models, followed by the main OpenStack concepts and how the platform works. The last section describes the main concepts related to self-adaptive systems, covering their characteristics and properties.
2.1 Insider Attacks
According to (SCHULTZ, 2002), an internal attack can be seen as a misuse of the system by authorized users. Thus, the first element of such an attack is the internal user. CERT defines an internal user as an employee, former employee, contractor or business partner who has access to system data or company information. In this way, to characterize an internal attack, this user needs to have the intention to abuse, or take negative advantage of, the company's data, affecting the confidentiality, integrity and availability of its systems (SILOWASH DAWN CAPPELLI et al., 2012).
On the other hand, (COLWILL, 2009) divides insiders into two main groups: intentional and unintentional. In (SILOWASH DAWN CAPPELLI et al., 2012), only the first group is considered. However, some internal attacks caused by an innocent user may have high damage potential as well, for example inappropriate Internet use, which opens possibilities for virus and malware infection and exposure of the enterprise, affecting its reputation and future valuation.
The CERT database contained about 700 registered cases of internal attacks in 2012. Looking at these cases, it was possible to categorize them by analysing the patterns in 371 of these attacks (SILOWASH DAWN CAPPELLI et al., 2012). Thus, the following categories were identified:
• Sabotage: when an internal user has access to some information and uses it with the intent of harming a company, for example leaking sensitive information so that other companies gain an advantage in a market dispute.

• Data theft: an internal attack category in which a user steals information with the intent of compromising privacy or obtaining confidential information.

• Fraud: happens when an insider uses the IT infrastructure for unauthorized operations on a company's data for personal gain, or steals information leading to identity crimes.

• Others: cases in which insider attacks were performed with other intentions, or a combination of sabotage, data theft and fraud.
This work considers that insider attacks are carried out by users or former users of a particular organization who have access, across the network, to a particular system or to company information. In addition, it takes into consideration that they are authorized to access such data. Among the above categories, our case studies and implementation focus on data theft.
2.2 Identity Management
Identity management consists of an integrated system of policies, technologies and business processes that enables organizations to treat and handle the identities (identity attributes) of their members (JØSANG; POPE, 2005).

From the perspective of a service provider (SP), which makes services and resources available through the Internet, identity management allows the SP to know who its users are (by means of authentication) and to manage which services they are entitled to use (by means of authorization).
Different identity management models have been proposed to deal with issues related to user authentication (BHARGAV-SPANTZEL et al., 2007) (JØSANG et al., 2005). One such model is federated identity management, where different providers form an association, establishing a relationship of trust between them (CHADWICK, 2009). In this model, Identity Providers (IdPs) are responsible for authenticating users, sending messages containing authentication credentials. In this way, a user can access resources offered by different SPs using a single set of credentials issued by a federation IdP.
Authorization services aim to control user access after the authentication process (Figure 1). Audit services involve the registration of all user requests and activities in the system for future analysis, completing the AAA tripod (authentication, authorization and auditing). To support robust systems, federated access control depends on authentication and authorization models such as RBAC (Role-Based Access Control) (SANDHU et al., 1996) and ABAC (Attribute-Based Access Control) (HU et al., 2014).
Figure 1: Authentication, Authorization, Audit
The basic purpose of access control mechanisms is to protect objects, whether data, services, applications or distributed systems. This type of operation involves the creation, deletion, discovery, reading and running of objects. Access to these objects is requested by subjects. In general, a subject is an entity that performs an operation on objects; it can be a human or a non-person entity (NPE), such as an autonomous system.
Since the 1970s, the concept of role-based access control (RBAC) has been implemented in multiple multi-user applications and web systems. According to (SANDHU et al., 1996), the central notion of RBAC is that permissions are associated with roles, and users are assigned to appropriate roles, which greatly simplifies permissions management. For example, in a given system, a user logs in and the application identifies the user's role. Based on that role, the application interacts with the policy repository to search for the permissions associated with it. Based on those permissions, the system is able to identify the actions allowed for that user.
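The role-to-permission lookup described above can be sketched as follows. The roles, permissions and users here are made up for illustration; a real system would keep these mappings in a policy repository rather than in dictionaries.

```python
# Minimal RBAC check: users are assigned roles, roles carry
# permissions, and access is decided from the roles' permissions.

ROLE_PERMISSIONS = {
    'admin': {'create_vm', 'delete_vm', 'read_logs'},
    'operator': {'create_vm', 'read_logs'},
    'auditor': {'read_logs'},
}

USER_ROLES = {
    'alice': ['admin'],
    'bob': ['auditor'],
}

def is_allowed(user, permission):
    """Return True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, []))
```

For instance, `is_allowed('bob', 'delete_vm')` is denied because the auditor role carries only `read_logs`; changing what auditors may do requires editing one role entry, not every user, which is the simplification RBAC is praised for.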
ABAC (Attribute-Based Access Control) is an access control model that aims to inherit the best practices of previous models and to treat changeable and environment-specific attributes. As the name suggests, access control is based on attributes and takes into account the following elements: subject, object, operation, policies and environmental conditions. Attributes are characteristics of a subject, of an object, or conditions of an environment, and are generally described by a name and a value. A subject can be a human user or another system/device that aims to access and perform actions on a given object; a subject is associated with one or more attributes. An object is the resource that the ABAC-based access control system is meant to protect, for example files, databases, web systems, cloud resources, etc. An operation is the execution of a request from the subject on an object. Finally, environment conditions are additional attributes used when making access control decisions; examples of this type of attribute are the current date/time and the geographic location. For example, a company has different levels of employees (Manager, Director, Staff, etc.) and only a Director is able to access the data centre room at the weekend. So, when the Director (subject) enters his credentials, the system evaluates policies about the subject, for example whether it has the Director attribute, as well as the environmental conditions (whether it is a weekend day or a weekday). After that, the subject (Director) is given access to the object (the data centre room).
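The data-centre example above can be written as a small policy function combining a subject attribute with an environmental condition. The attribute names and the weekday rule are assumptions made for this sketch, not part of any real policy.

```python
from datetime import datetime

def abac_decision(subject, obj, operation, environment):
    """Evaluate the illustrative data-centre policy: only a subject
    whose 'role' attribute is 'Director' may 'enter' the
    'datacenter_room' at the weekend; on weekdays any employee
    level may enter."""
    if obj != 'datacenter_room' or operation != 'enter':
        return False
    # Environmental condition: is the request made at the weekend?
    weekend = environment['date'].weekday() >= 5  # 5 = Saturday, 6 = Sunday
    if weekend:
        return subject.get('role') == 'Director'
    return subject.get('role') in ('Director', 'Manager', 'Staff')
```

So `abac_decision({'role': 'Staff'}, 'datacenter_room', 'enter', {'date': datetime(2016, 4, 2)})` is denied, since 2 April 2016 was a Saturday, while the same request from a Director is granted.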
Figure 2: ABAC Access control mechanisms.
In Figure 2, when an individual requests an operation on a given object, the request is intercepted by the PEP. The PEP has the role of protecting the object, redirecting the request to the PDP. The PDP must make a decision, allowing the subject's access or not. This is done first based on the PIP. The PIP provides the PDP with the information necessary for decision-making, based on environment conditions and a repository of attributes (e.g., object attributes). In the background, the PDP also requests information from the system's policy repository. This policy repository is managed by the PAP, through which system administrators or an administrative system manage the policies in use. Only after consulting these components may the PDP decide whether the subject's request will or will not be carried out on the given object.
For example, in a hospital management system, a given user requests access to the test results of a critical patient. The gateway module of the system (PEP) intercepts this request and redirects it to a module able to allow this access or not (PDP). This process is performed on the basis of additional information about the user (department, employment, access level): for example, to see test results the user must be a Doctor (attribute repository). It is also checked whether the user requested those results on a business day (environment conditions). With this information, the system decides (PDP) whether the user has access to the test results or not, and redirects the request (PEP) to the object (the test results).
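The PEP/PDP/PIP/PAP decomposition and the hospital example can be sketched as four cooperating classes. The class names mirror the ABAC components; the attribute names, the policy and the returned strings are invented for this sketch.

```python
class AttributeRepository:
    """PIP backend: stores subject attributes (contents illustrative)."""
    def __init__(self, attributes):
        self._attributes = attributes

    def attributes_of(self, subject):
        return self._attributes.get(subject, {})


class PolicyAdministrationPoint:
    """PAP: where administrators manage the policy repository."""
    def __init__(self):
        self._policies = []

    def add_policy(self, policy):
        self._policies.append(policy)

    def policies(self):
        return list(self._policies)


class PolicyDecisionPoint:
    """PDP: decides using attributes from the PIP and policies from the PAP."""
    def __init__(self, pip, pap):
        self.pip = pip
        self.pap = pap

    def decide(self, subject, obj, operation, environment):
        attributes = self.pip.attributes_of(subject)
        return any(policy(attributes, obj, operation, environment)
                   for policy in self.pap.policies())


class PolicyEnforcementPoint:
    """PEP: intercepts every request and enforces the PDP's decision."""
    def __init__(self, pdp):
        self.pdp = pdp

    def request(self, subject, obj, operation, environment):
        return 'granted' if self.pdp.decide(subject, obj, operation,
                                            environment) else 'denied'


# Wiring for the hospital example: test results are readable only
# by a Doctor, and only on a business day.
pip = AttributeRepository({'dr_house': {'role': 'Doctor'},
                           'nurse_joy': {'role': 'Nurse'}})
pap = PolicyAdministrationPoint()
pap.add_policy(lambda attrs, obj, op, env:
               obj == 'test_results' and op == 'read'
               and attrs.get('role') == 'Doctor'
               and env.get('business_day', False))
pep = PolicyEnforcementPoint(PolicyDecisionPoint(pip, pap))
```

A request such as `pep.request('dr_house', 'test_results', 'read', {'business_day': True})` passes through the PEP, is decided by the PDP with PIP attributes and PAP policies, and is granted; the same request from `nurse_joy`, or on a non-business day, is denied.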
2.3 Cloud Computing
The use of cloud services is being widely adopted: nowadays they can be found in a large number of companies, research centres and universities. According to (MELL; GRANCE, 2011), there are some essential features that all cloud computing systems must possess: on-demand service, network access, resource pooling, rapid elasticity and service measurement.
• On-demand service: a service distribution method in which customers use cloud resources according to their needs.

• Network access: cloud resources must be accessible over the network by a heterogeneous range of clients (laptops, smartphones, workstations, etc.).

• Resource pooling: resources are pooled to serve multiple consumers. These resources may be virtual and assigned on demand. Some examples of cloud computing resources are storage, processing, memory and network.

• Rapid elasticity: the cloud property of offering more or fewer resources according to the consumption required; it is understood as the capacity for dynamic allocation and deallocation of resources.

• Service measurement: the cloud can itself measure the use of its resources through monitoring and management techniques. This may help with charging consumers and service providers.
Regarding deployment models, cloud computing infrastructures follow different strategies, depending mainly on the physical location and on how resources are made accessible to users. According to (MELL; GRANCE, 2011), there are four main cloud deployment models: private cloud, community cloud, public cloud and hybrid cloud. In a private cloud, the infrastructure is provisioned for exclusive use by a single organization comprising many consumers. A community cloud is provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns. A public cloud provides services openly to the general public; it may be owned, managed and operated by a company, an academic institution, a government organization, or some combination of them. A hybrid cloud is a composition of two or more cloud infrastructures (private, community or public) that remain separate entities but are linked by standardized or proprietary technology that enables data communication and application portability.
2.3.1 Service Models
Cloud computing offers its services according to certain models. These models work in a layered manner: the SaaS layer depends on a platform to run the software, and such software requires an infrastructure to host it. The main models are:

Software as a Service (SaaS) is a model where users access applications running on cloud infrastructures in such a way that clients do not need to install or manage software, servers, operating systems or storage infrastructure. Examples include Google Docs1, Microsoft OneDrive2 and Dropbox3.
In Platform as a Service (PaaS), users are provided with an environment where applications and programs they have created or acquired can be deployed and developed using programming languages, libraries, services and tools supported by the cloud infrastructure. Some examples are Google App Engine4 (which currently supports applications developed in languages such as Python, Java, PHP and Go), Microsoft Windows Azure5, Heroku6, Cloud Foundry7, etc.
Infrastructure as a Service (IaaS) is the model that provides users with computing infrastructure through storage, processing and network services, on which the customer can install and configure software in general, including operating systems. Some examples of IaaS are Amazon AWS8, OpenNebula9 and OpenStack10.

1http://www.google.com/docs/about/
2https://onedrive.live.com/
3https://www.dropbox.com/
4https://cloud.google.com/appengine/docs
5http://azure.microsoft.com/
6https://www.heroku.com/
7http://www.cloudfoundry.org/
Other service models are becoming known, such as Network as a Service (NaaS) and Access Control as a Service (ACaaS). We do not cover them in this work since they are not mentioned in (MELL; GRANCE, 2011).
2.3.2 Openstack
OpenStack11 is a free and open source platform that offers tools to build cloud infrastructures. It is composed of a set of software projects, which are used to provide different cloud services. The project gained momentum in 2010, when Rackspace Hosting and NASA released its first version, Austin, in which part of the code came from the Nebula platform (related to processing) and another part from Cloud Files (related to storage). Since 2011, open source software communities, such as the Ubuntu community, have been contributing to the project. Since then, OpenStack has released at least two versions per year, the latest one being Liberty, released at the end of 2015.
Figure 3 presents some of the main projects of OpenStack: Nova (Compute), Swift (Object Storage), Neutron (Network), Horizon (Dashboard), Glance (Image), Cinder (Block Storage) and Keystone (Identity). Each one is responsible for providing a different kind of service in the cloud.

Nova allows the management and provisioning of virtual machines in the cloud infrastructure, characterizing a processing cloud. Nova supports the most commonly used hypervisors (e.g., Xen, KVM, vSphere), as well as emulation software (e.g., QEMU12). Neutron provides virtual network services (virtual balancers, switches, etc.); for example, Neutron is used by Nova to provide software-defined networking to virtual machines. Swift is an object storage service with characteristics of scalability, availability, performance, and data replication. Its architecture is based on two types of nodes: proxy and storage. Proxy nodes receive client requests (e.g., upload, delete, list objects) and redirect them to the storage nodes where the data is stored.
Keystone is the OpenStack identity management component responsible for managing
user access to cloud resources. Keystone uses an access control model based on tokens.
Figure 3: OpenStack Services

8aws.amazon.com
9http://opennebula.org/
10Openstack
11https://www.openstack.org/
12www.qemu.org/

According to Figure 4, the flow to access an OpenStack cloud resource begins when a certain user wants, for example, to create a virtual machine (VM) instance in Nova. It is important to note that OpenStack provides a REST API for each service, to facilitate the interaction of users and clients with the cloud environment.
To create the VM, the user first authenticates with Keystone. To do so, the user sends its access credentials (user name, password and tenant), receiving an access token. The user then performs a request to the desired service, attaching the received token. The cloud infrastructure performs two internal authorization procedures before serving the request. The first is performed by the service together with Keystone: on receiving the request, the service checks whether the token is actually valid. After that, the service validates whether the request is in accordance with its own policies, executes the requested action and sends a response to the user.
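The token acquisition step can be illustrated with the Identity v3 password-authentication request (`POST /v3/auth/tokens`), for which Keystone returns the token in the `X-Subject-Token` response header. The sketch below only assembles the JSON body; the user, password, project and domain values are placeholders, and real clients would typically use an OpenStack SDK instead.

```java
// Sketch: building a Keystone v3 password-authentication request body.
// A real client would POST this JSON to http://<keystone-host>:5000/v3/auth/tokens
// and read the token from the X-Subject-Token response header.
public class KeystoneAuthRequest {

    // Returns the JSON body for password authentication scoped to a project.
    static String passwordAuthBody(String user, String password, String project) {
        return "{ \"auth\": {"
             + " \"identity\": {"
             + " \"methods\": [\"password\"],"
             + " \"password\": { \"user\": {"
             + " \"name\": \"" + user + "\","
             + " \"domain\": { \"id\": \"default\" },"
             + " \"password\": \"" + password + "\" } } },"
             + " \"scope\": { \"project\": {"
             + " \"name\": \"" + project + "\","
             + " \"domain\": { \"id\": \"default\" } } } } }";
    }

    public static void main(String[] args) {
        System.out.println(passwordAuthBody("alice", "secret", "demo"));
    }
}
```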
Figure 4: OpenStack overview
2.4 Self-adaptive systems
A self-adaptive software system is able to modify its own structure and/or behaviour at run-time in order to deal with changes in its requirements, in the environment in which it is deployed, or in the system itself (ANDERSSON et al., 2009b). One way to adapt systems is through the implementation of a feedback control loop comprising the monitoring of the target system, data analysis, planning of adaptation actions, their execution on the target system, and a knowledge base that is used to share data between all activities. One widely followed model providing these features is the MAPE-K (Monitor, Analyse, Plan, Execute and Knowledge) loop defined by IBM (IBM, 2006).
Figure 5: MAPE-K Feedback loop
Figure 5 presents an overview of the MAPE-K phases and how they interact with the target system. The monitoring phase (Monitor) is responsible for obtaining, aggregating and filtering status information from the target system. This information is captured by the probes in a raw form. Once the information obtained by the Monitor is handled and filtered, it is sent to the analysis phase. The analysis phase (Analyser) is responsible for evaluating the data sent by the Monitor in detail. The goal of this stage is to detect the need for target system adaptation. It is possible to split the analysis phase into two subphases: the problem domain and the solution domain. In the problem domain, the goal is to implement mechanisms that identify triggers for adaptation. Based on the detected problem, the solution domain aims to point out the possible adaptation solutions that fill this need. Once the need to adapt is detected, the planning phase (Plan) builds a sequence of steps with the goal of ensuring the adaptation of the target system. Once this sequence is defined, the execution stage receives the plan and acts on the target system through the effectors. These four steps work together with a component called the knowledge base (Knowledge). By communicating with the knowledge base, all components of the feedback loop can use shared information to optimize and assist the processes of the MAPE-K cycle.
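As an illustration of the cycle described above (not the dissertation's actual controller), the four stages and the shared knowledge base can be sketched as follows; all names and the toy "download" filter are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal MAPE-K skeleton: each stage reads/writes a shared knowledge base.
public class MapeK {
    // Knowledge base shared by all stages.
    static Map<String, Object> knowledge = new HashMap<>();

    // Monitor: collect raw events from probes and filter the relevant ones.
    static List<String> monitor(List<String> rawEvents) {
        return rawEvents.stream()
                .filter(e -> e.startsWith("download"))
                .collect(Collectors.toList());
    }

    // Analyse: detect an anomaly (here: event count reaching a threshold).
    static boolean analyse(List<String> events, int threshold) {
        knowledge.put("eventCount", events.size());
        return events.size() >= threshold;
    }

    // Plan: produce a sequence of adaptation steps.
    static Deque<String> plan() {
        Deque<String> steps = new ArrayDeque<>();
        steps.add("disable-user");
        return steps;
    }

    // Execute: apply each step through an effector (here: just record it).
    static void execute(Deque<String> steps) {
        knowledge.put("lastAction", steps.poll());
    }

    public static void main(String[] args) {
        List<String> filtered = monitor(List.of("download a", "login b", "download c"));
        if (analyse(filtered, 2)) {
            execute(plan());
        }
        System.out.println(knowledge.get("lastAction")); // prints disable-user
    }
}
```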
Self-adaptive systems can be categorized into two main approaches: top-down and bottom-up (CHENG et al., 2009). In the first category, a centralized controller is responsible for managing all aspects related to the system adaptation. The second is characterized by a decentralized approach, where adaptive control is done by distributed components that individually do not have full knowledge of the system but, together, contribute to the adaptation of the target system. This approach is also usually referred to as self-organising.
Another way to categorize self-adaptive systems is according to their properties. In (IBM, 2006), IBM defines four main properties of self-adaptive systems, usually called self-* properties: self-healing, self-configuration, self-protection, and self-optimisation, where:

• Self-healing: the capability to detect, diagnose and treat problems (software errors, exceptions, fault tolerance).

• Self-configuration: the capability to act on its own components (updating, installation, removal, reconfiguration) to better fit a situation.

• Self-protection: the capability to detect, identify and protect against attacks.

• Self-optimisation: the capability of a self-adaptive system to monitor its resources and tune them automatically.
Regarding the type of adaptation, there are also two approaches: parametric or structural (ANDERSSON et al., 2009a). Parametric adaptation consists of changing the parameters of components according to the context. One problem with this approach is that the parameters are limited, which implies a fixed number of behaviours. Structural adaptation differs mainly because it allows the replacement of system components as needed, i.e., if a component does not provide a feature, it is possible to replace it with one that does.
Another important aspect of the categorization of self-adaptive systems concerns the type of decision-making process, which can be static or dynamic (SALEHIE; TAHVILDARI, 2009). In the static approach, decisions are defined during the development of the self-adaptive software. Dynamic decision-making, on the other hand, takes place at execution time and considers the experience acquired during execution to make the appropriate decision (OREIZY et al., 1999).
Given those characteristics, the solution proposed in this work adopts a top-down approach (using a central controller). The adaptation type is parametric, where only system parameters are modified. In addition, the property in focus is self-protection.
2.5 Conclusion
This chapter presented the background necessary to understand this work: insider attacks, identity management with emphasis on authorization issues, cloud computing as applied in OpenStack, and self-adaptive systems. This background is important because our work aims to provide a solution for dealing with insider attacks in an OpenStack authorization infrastructure; for that, self-adaptive mechanisms are implemented to detect abnormal behaviour and mitigate it.
3 Adding Self-adaptation to OpenStack Authorization Mechanisms
This chapter presents the architecture of the proposed solution. First, it is necessary to analyse the architectural components of OpenStack in order to map them to ABAC ones (Section 3.1). After this analysis, we present an approach that integrates the OpenStack architecture with self-adaptation mechanisms. Moreover, we have created several insider threat scenarios, possible responses, and their impact on the cloud platform (Section 3.3). These scenarios were used as a basis for developing a prototype.
3.1 User Access Control in OpenStack
OpenStack employs the RBAC model for handling access control. A user in OpenStack is assigned a role associated with a tenant, which represents a cloud resource in one of the services provided by the platform. The service can specify policies associating the roles with permissions to conduct operations on the service, such as the permission to download a particular object from the storage service.
Since OpenStack employs the RBAC model, it is possible to identify all the functional components necessary for providing authorisation. However, due to its distributed nature and its ability to support multiple heterogeneous services, these components are arranged differently when compared with traditional systems. In OpenStack, access decisions are computed at two different points when processing a user request. While Keystone, as the first authorization point, is based on the RBAC model, in the second authorization point each service has an access control list (ACL) with its own policies. This heterogeneity demands some effort to map those different models and, from that, propose a self-adaptive infrastructure.
Figure 6 presents a general view of the OpenStack architecture in terms of ABAC components, identifying the components responsible for managing access control, along with the flow of operations performed in the system during a user's attempt to access a particular service, e.g., Swift.
Figure 6: OpenStack/ABAC component mapping
The flow begins with a user sending its credentials for performing authentication (Step 1), and then receiving a token as the reply to a successful authentication (Step 2). The user then requests an operation in the cloud (Step 3). The Swift PEP intercepts this request, protecting the service from a possible unauthorised operation, and asks the PDP in Keystone (Step 4) to validate the token and check whether the user has access permissions for this service. In order to validate the token, the Keystone PDP consults the Keystone PIP (Step 5a) and obtains its security policies (Step 5b) for deciding whether the user has access to the Swift service, returning its decision (Step 6). At this point, the first part of the authorisation has finished, but the OpenStack platform has an additional second authorisation step that is performed by the service. After consulting Keystone, Swift needs to evaluate the request against its own policies, to check whether the user has permission to conduct the requested operation. The Swift PEP then activates the Swift PDP to decide (Step 7) whether the user can conduct the requested operation (e.g., upload a file). The Swift PDP obtains the access control policies for the service (Step 8a) and uses the Swift PIP (Step 8b) to obtain any information that it needs for evaluating the access control policy. Once an access decision is made (Step 9), the Swift PEP allows the user to perform the requested operation (represented by Step 10 towards the Object) and returns to the user a response to the request (Steps 11 and 12).
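The two-step decision can be condensed into a small sketch; the interfaces below are hypothetical stand-ins for Keystone and the service-local PDP, illustrating only the order of the checks (Keystone first, then the service's own policies).

```java
// Illustrative sketch of OpenStack's two-step authorisation
// (names are hypothetical, not OpenStack's actual internal API).
public class TwoStepAuthz {

    interface KeystonePdp { boolean tokenValidAndServiceAllowed(String token, String service); }
    interface ServicePdp  { boolean operationAllowed(String role, String operation); }

    // PEP logic: first ask Keystone, then the service's local policies.
    static boolean authorise(KeystonePdp keystone, ServicePdp service,
                             String token, String role, String operation) {
        if (!keystone.tokenValidAndServiceAllowed(token, "swift")) {
            return false; // step 1 failed: invalid token or no access to service
        }
        return service.operationAllowed(role, operation); // step 2: local policy check
    }

    public static void main(String[] args) {
        KeystonePdp keystone = (t, s) -> "valid-token".equals(t);
        ServicePdp swift = (r, op) -> "admin".equals(r) || "download".equals(op);
        System.out.println(authorise(keystone, swift, "valid-token", "member", "download")); // prints true
    }
}
```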
Each OpenStack service contains a Log component, as shown in Figure 6. These components represent the Audit service (presented in Section 2.2) and are used to record the different activities related to access control within the system. Among the information logged, we can mention access requests, access control decisions, operations performed in the service and unauthorised attempts.
3.2 Our approach
The idea of this work, in proposing an OpenStack cloud with self-adaptive authorization mechanisms, is that the dynamic evolution of authorization policies is capable of mitigating malicious user threats by limiting their scope of action.

A particularity of OpenStack is the fact that there are two PDPs and two sets of policies in the same system. Because of this, solutions for dealing with authorisation, such as (BAILEY; CHADWICK; LEMOS, 2014), cannot be directly applied to OpenStack. For this reason, we have defined an architectural solution for allowing the addition of self-adaptive capabilities into OpenStack, which is presented in Figure 8, together with the flow of activities related to self-adaptation.
Figure 7 presents a general view of our approach, where the MAPE-K loop adapts the target system, composed of the AAA mechanisms: Authentication, Authorization and Audit. The Monitor stage is responsible for obtaining information about the access control infrastructure (target system) and its environment through the use of probes. This information may include user attributes, access control policies, and event logs such as access requests and authorisation decisions, which can be used to update behaviour models in the Knowledge. The Analyse stage is responsible for assessing the collected information in order to detect any malicious behaviour. This stage also identifies possible solutions for mitigating the perceived malicious behaviour and preventing future occurrences. The Plan stage is responsible for deciding what to do and how to do it, by selecting an appropriate solution for dealing with the malicious behaviour and producing the respective adaptation plan. Finally, the Execute stage adapts the authorisation infrastructure by means of effectors, following the instructions of the adaptation plan.
Figure 7: Overview of target system adaptation
Figure 8 presents the architecture of our approach, where a Controller implementing the MAPE-K feedback loop monitors the cloud platform and performs adaptations when malicious behaviour is detected. The target system is composed of the different services provided by the OpenStack platform, including its identity service, Keystone.
Each OpenStack service has its own set of probes and effectors, which allow the Controller to interact with OpenStack. The information collected by the probes includes the different activities related to access control that take place in the OpenStack platform, such as access requests and access control decisions. Each OpenStack service contains a Log component that can be queried for this information (Steps 1a and 1d). There are also probes for obtaining the access control policies currently in place (Steps 1c and 1f), and information about users (Step 1e) and about the objects being protected (Step 1b), by means of their respective PIPs. It is important to mention that, for the moment, we are not considering changes in the authentication mechanisms. The collected information is fed into the Monitor (Steps 2a and 2b). Steps 3, 4, and 5 represent the Controller activities that have been previously described. Finally, the Execute stage employs effectors (Steps 6a and 6b), which alter the access control policies in place through the PAP of each service, i.e., the Keystone PAP (Step 7a) and the Swift PAP (Step 7b).
Figure 8: Architecture
3.3 Insider Attack Scenarios
This section describes some insider threat scenarios that are representative of an OpenStack cloud platform. These scenarios essentially capture data theft by malicious insiders, where users with legitimate access to the system abuse their rights for stealing sensitive data. They also capture the distributed and heterogeneous nature of the OpenStack cloud platform, in which multiple services are protected by means of a two-step token-based authorisation. This has prompted us to perform an analysis of different insider threat scenarios and their impact on the cloud platform and its users.
Table 1: Summary of insider threat scenarios.

Scenario  Description
SCE#1     One user exploits one role for abusing one specific service
SCE#2     One user exploits one role for abusing several services
SCE#3     One user with several roles abuses one service
SCE#4     One user exploits several roles for abusing several services
SCE#5     Several users exploit one role for abusing one service
SCE#6     Several users exploit one role for abusing several services
SCE#7     Several users exploit several roles for abusing one service
SCE#8     Several users exploit several roles for abusing several services
OpenStack users can have access to different services with one or more distinct roles, and we have used this characteristic as the basis for defining the insider threat scenarios. For defining these scenarios, we have considered three variables: the number of users abusing the system, the number of roles involved in the abuse, and the number of services being abused. Each variable can assume two values, one (1) or many (N). Based on this, we have defined a total of eight abuse scenarios, which are listed in Table 1, ranging from the case where one user exploits one role for abusing one specific service (SCE#1), through one user with several roles abusing one service (SCE#3) and several users exploiting one role for abusing one service (SCE#5), to several users exploiting several roles and abusing several services (SCE#8).
Table 2: Summary of responses.

Acronym  Meaning
DU       Disable user
DR       Disable role
ER       Exchange user role for one with stricter permissions
RRA      Restrict role actions by modifying the permissions of a role regarding a service action
RUR      Remove user role by removing the role associated with the user in Keystone
DUT      Disassociate user's tenants by removing access to all tenants the user has access to
TSO      Turn the service off
In addition to the scenarios, we have identified possible responses that can be adopted by the MAPE-K controller, which are captured in Table 2. The responses can be executed either over Keystone or over the service being abused. Among the responses, we consider disabling a user (DU) or a role (DR) in Keystone, exchanging a user's role for another (ER), completely removing a role (RUR) or a tenant (DUT) from the user, restricting role actions (RRA), and shutting down the service (TSO).
These responses may have different levels of impact on the user, the role, or the service being accessed. It is possible that some of the responses may disrupt access for legitimate users whilst removing access from insider attackers. Based on this, we have summarized these possible impacts in Table 3.
Table 3: Summary of impacts.

Impact  Description
IMP1    User does not have access permissions to the cloud
IMP2    Access permissions are revoked for all users associated with a particular role
IMP3    Role is disabled in the system
IMP4    With the new role, access permissions for the user are restricted
IMP5    Service must be configured with new access permissions and restarted for deploying modifications
IMP6    User does not have access permissions to any resource in the cloud
IMP7    Service must identify which role is used for the abuse, since the user is assigned to many roles
IMP8    Service(s) will become unavailable
Finally, Table 4 combines the information from the previous tables in order to present a complete picture of the identified insider threat scenarios, their possible responses over Keystone or the service, and the impact of these responses on users, roles and services. The first column of the table identifies the scenario number. The next three columns describe the scenarios in terms of the number of users, roles and services involved (as summarised in Table 1). The following two columns identify the types of responses expected from a controller when handling abuse. These responses are associated either with Keystone or with the service, and they are summarised in Table 2. For example, in scenario SCE#1 (see Table 4), once the abuse is identified by the controller, there is a set of responses that can be performed by the controller, either over Keystone or over the service, such as "DU" or "DR", meaning, respectively, "disable user" and "disable role". Finally, the last column identifies the types of impact that the scenario might have on the users, roles and services, as summarised in Table 3.
Table 4: Analysis of insider abuse scenarios.

Scenario  Users  Roles  Services  Keystone  Service  Impact
SCE#1     1      1      1         DU        -        IMP1
                                  DR        -        IMP2 and IMP3
                                  ER        -        IMP4
                                  -         RRA      IMP4 and IMP5
SCE#2     1      1      N         RUR       -        IMP2
                                  DUT       -        IMP6
                                  -         RRA      IMP4, IMP5 and IMP6
SCE#3     1      N      1         DU        -        IMP1
                                  ER        -        IMP4
                                  -         RRA      IMP4, IMP5 and IMP7
SCE#4     1      N      N         DU        -        IMP1
SCE#5     N      1      1         DR        -        IMP2 and IMP3
                                  -         RRA      IMP4, IMP5 and IMP6
SCE#6     N      1      N         DR        -        IMP2 and IMP3
SCE#7     N      N      1         -         TSO      IMP8
SCE#8     N      N      N         -         TSO      IMP8

It is important to note that one response may cause more than one impact. For instance, in the first scenario (SCE#1), if the response is to disable the user in Keystone (DU), the user loses access permissions to the cloud (IMP1), while disabling a role (DR) impacts all users that are assigned that particular role (IMP2), which might hinder the use of the role in the future (IMP3). Although removing a role might be an inappropriate response when dealing with scenario SCE#1, the response might be more efficient for scenario SCE#5, which considers that several users are abusing a service with the same role.
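To illustrate how a controller could consult Table 4, the scenario-to-responses mapping can be encoded as a simple lookup table; this sketch is ours, not part of the prototype.

```java
import java.util.List;
import java.util.Map;

// Sketch: encoding the scenario -> candidate-responses mapping of Table 4.
public class ResponseCatalog {
    enum Response { DU, DR, ER, RRA, RUR, DUT, TSO }

    // Candidate responses per scenario number (from Table 4).
    static final Map<Integer, List<Response>> CANDIDATES = Map.of(
        1, List.of(Response.DU, Response.DR, Response.ER, Response.RRA),
        2, List.of(Response.RUR, Response.DUT, Response.RRA),
        3, List.of(Response.DU, Response.ER, Response.RRA),
        4, List.of(Response.DU),
        5, List.of(Response.DR, Response.RRA),
        6, List.of(Response.DR),
        7, List.of(Response.TSO),
        8, List.of(Response.TSO)
    );

    static List<Response> responsesFor(int scenario) {
        return CANDIDATES.getOrDefault(scenario, List.of());
    }

    public static void main(String[] args) {
        System.out.println(responsesFor(5)); // candidate responses for SCE#5
    }
}
```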
3.4 Implementation details
Aiming to validate our approach, we have implemented a prototype of the solution. In this section, some implementation details are presented to clarify how the Controller works. It is important to note that we do not delve into each of the stages of the MAPE-K loop; only the main concepts are presented, in terms of the key components of our prototype.
Figure 9 gives a general view of our solution, in which it is possible to observe the package structure. There are two main packages: java and resource. The java package contains the probe and effector packages: the probes observe the logs created by Keystone, Swift and Nova, while the effectors act on those services (Nova and Swift). The controller package implements the MAPE-K loop functionalities, i.e., the Monitor, Analyse, Plan and Execute activities.
Figure 9: Package Diagram of the Prototype.

The prototype was developed in Java using the JBoss Drools1 library. The probes work by listening to the OpenStack log files that store the information needed by the Controller. This was implemented using threads that monitor each new entry in the log file; the data is captured in a raw format. The probe module is composed of three main classes: App, LogFileTailer and LogFileTailerListener. The App is instantiated based on the log file path.
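The tailing behaviour can be sketched as a poller that returns only the lines appended since the last check; the real LogFileTailer classes run this in a thread with a listener callback, and this standalone version is only an assumption of how they work.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Sketch of a log-file tailer: remembers how many lines it has already
// seen and returns only the new ones on each poll. A real tailer would
// run this in a thread, sleeping between polls, and push each new line
// to a listener callback (LogFileTailerListener in the prototype).
public class LogTailerSketch {
    private final Path logFile;
    private int linesSeen = 0;

    public LogTailerSketch(Path logFile) {
        this.logFile = logFile;
    }

    // Poll the file once, returning the lines appended since the last poll.
    public List<String> poll() throws IOException {
        List<String> all = Files.readAllLines(logFile);
        List<String> fresh = new ArrayList<>(all.subList(linesSeen, all.size()));
        linesSeen = all.size();
        return fresh;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("openstack", ".log");
        Files.writeString(tmp, "line1\n");
        LogTailerSketch tailer = new LogTailerSketch(tmp);
        System.out.println(tailer.poll()); // [line1]
        Files.writeString(tmp, "line1\nline2\n");
        System.out.println(tailer.poll()); // [line2]
    }
}
```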
Each log line captured by the probe is then passed to the Monitor (controller package). This module filters the log entries of interest to the analysis phase of the feedback loop, according to the established thresholds. This filter is needed because records of other operations are saved in the same log. Once a significant log entry is passed to the Monitor, the data is standardized according to the model of the LogInfo class and extracted in order to be used by the Analysis module. The data needed by the analysis module is:

• Operation timestamp: the timestamp at which the operation was performed.

• Operation ID: unique identifier of the operation in the cloud. Even if two or more operations are performed at the same time, their IDs are different. This ID is used as the id of the LogInfo object.

• Username: name of the user that performed the operation.

• Operation type: indicates which type of operation was performed. For example, in Swift it can represent a download, upload or delete action; in the Nova service, it can represent a user turning a virtual machine off, deleting it, among other operations.

• Tenant name: name of the tenant used by the user that performed the action.

1http://www.drools.org/
• Roles: a role or set of roles associated with the user performing the operation.
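Based on the fields above, the LogInfo model can be pictured as a plain data class; the exact field and accessor names in the prototype may differ.

```java
import java.util.List;

// Sketch of the LogInfo data model described above (field names assumed).
public class LogInfo {
    private final long timestamp;       // operation timestamp (epoch millis)
    private final String idTrans;       // unique operation ID
    private final String username;      // user that performed the operation
    private final String operationType; // e.g., download, upload, delete
    private final String tenantName;    // tenant used in the request
    private final List<String> roles;   // roles associated with the user

    public LogInfo(long timestamp, String idTrans, String username,
                   String operationType, String tenantName, List<String> roles) {
        this.timestamp = timestamp;
        this.idTrans = idTrans;
        this.username = username;
        this.operationType = operationType;
        this.tenantName = tenantName;
        this.roles = roles;
    }

    public long getTimestamp() { return timestamp; }
    public String getIdTrans() { return idTrans; }
    public String getUsername() { return username; }
    public String getOperationType() { return operationType; }
    public String getTenantName() { return tenantName; }
    public List<String> getRoles() { return roles; }
}
```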
Figure 10: Probe/Monitor workflow
Figure 10 presents the workflow in detail from the beginning. First, in step 1, the App class gets every new entry that appears in the log file. The entry is split and sent to the Monitor, which checks its relevance (step 2). This check is necessary because of the diversity of log entries that do not carry information significant to the decision process. If the entry contains significant information, the Monitor calls the Analyse class method to save the entry; otherwise, the entry is ignored. This loop is executed for each new entry.
The flow of Figure 10 continues in the Analyse class, where the new entry is stored and sent to the Drools mechanism to be compared against all registered rules (see Figure 11). Drools returns all activated scenarios to the Analyse class, which calls the Plan to choose an adaptation plan based on the activated scenarios and all possible responses to mitigate attacks in those scenarios. The Planner builds a schedule for executing the adaptation and sends the sequence of steps to the Executor.
The Executor calls the executeAdaptationAction method in the Effector class. It is important to note that many OpenStack services may exist, which, according to the proposed architecture, demands an effector dedicated to each service. This is represented in the diagram of Figure 11 by a cascade of boxes behind the OpenStackSwift Effector.
The Drools tool is central at this point of the work: it manages all the rules used to detect the insider attack scenarios. A rule in Drools is divided into two main parts that use first-order logic: when <conditions> then <actions>. The first block describes the conditions that may activate the rule; the second block describes the actions performed when the rule fires.
Figure 11: Analyse, plan and execute workflow
In this way, we represent the scenarios described in Section 3.3 as rules that, when triggered, capture the identification of one of the insider threats. In these rules, we have assumed that an abuse constitutes the download of five or more objects in an interval of one minute. Once a scenario is identified, a notification is sent to the Plan component, which needs to decide which response to employ among the ones available for the respective scenario.
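The threshold of five downloads in one minute can be checked with a sliding window over operation timestamps; the sketch below is our own illustration, independent of the Drools rules used by the prototype.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window abuse detector: flags an abuse once `limit` downloads
// occur within `windowMillis`. Mirrors the "five downloads per minute"
// threshold assumed by the rules, but as a standalone illustration.
public class DownloadWindow {
    private final Deque<Long> times = new ArrayDeque<>();
    private final int limit;
    private final long windowMillis;

    public DownloadWindow(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    // Record a download at `now` (epoch millis); returns true if the
    // threshold has been reached within the window.
    public boolean record(long now) {
        times.addLast(now);
        // Drop timestamps that fell out of the window.
        while (!times.isEmpty() && now - times.peekFirst() >= windowMillis) {
            times.removeFirst();
        }
        return times.size() >= limit;
    }

    public static void main(String[] args) {
        DownloadWindow w = new DownloadWindow(5, 60_000);
        boolean abuse = false;
        for (int i = 0; i < 5; i++) {
            abuse = w.record(i * 1_000L); // five downloads within 5 seconds
        }
        System.out.println(abuse); // prints true
    }
}
```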
Since our intention was to validate the impact caused by each response, our Plan was configured to select the response being evaluated at the moment. Each response has been implemented as a parameterised script, in order to allow the modification of the users, roles or permissions involved in the abuse.
The Drools rules are implemented in the rules package of Figure 9. Since the rules are able to interact with Java objects, the analysis module receives each new log entry encapsulated in an object, which is passed to the Drools rule file and analysed. For example, the rule that represents scenario 1 is:
1  rule "Rule_Scenario1"
2  when
3      $r : LogInfo( $id : idtrans,
4                    $time : timestamp )
5
6      $c : LogInfo( idtrans < $id,
7                    $r.username == username &&
8                    $r.role.size() == 1 &&
9                    $r.serviceName == serviceName,
10                   $time.getTime() - timestamp.getTime() < $r.getMAX_DURATION()
11                 )
12
13     $m : MapLogResquest()
14 then
15     $m.setScenario1( $m.getScenario1() + 1 );
16     $m.setScenario2( $m.getScenario2() + 1 );
17     $m.setScenario5( $m.getScenario5() + 1 );
18     $m.setScenario6( $m.getScenario6() + 1 );
19 end
The received object is an instance of the LogInfo class, so Drools can access each attribute of the class as a local object. The when operator is a conditional operator that checks the conditions for activating the rule. In the case of Rule_Scenario1, we check whether the entry belongs to the same user (line 7), is associated with one role (line 8) abusing the same service (line 9), and whether the request is within the time interval established by the threshold (line 10). It is important to note that the function getMAX_DURATION returns a time interval hard-coded in the core of the application, since we are proposing a prototype that aims to validate the proposed solution.
If the conditions are satisfied, the instructions in the then block are executed; in this case, controller variables are incremented. Because the rules that represent scenarios 2, 5 and 6 are probably also activated whenever rule 1 is, their counters are incremented as well (lines 15 to 18).
This module first checks the thresholds; in other words, if it detects that some user has an abnormally high download rate (for example, 50 per second), an attack is flagged. The second analysis concerns how the attack was performed, in terms of OpenStack elements. The scenarios were specified as Drools rules (see Appendix A), so that, when rules are activated, an internal control verifies which rules were activated and how many times. Based on the activated rules, we are able to identify possible responses that can be applied to mitigate the attack. These are sent by the analysis module to the Plan.
The Plan module chooses one possible response and sends this instruction to the Executor module. The Executor calls the appropriate effector to execute the adaptation action on the target system.
The effectors implement mechanisms to modify the target system, in this case OpenStack. For that, OpenStack offers a REST API to interact with all services and perform a large set of actions in the cloud. In this work we have used version 3.0 of the API.
An effector that acts directly on an OpenStack service needs to be able to manipulate
policies described in JSON, as well as the mechanisms to update these policies. In the case
of Swift in the Kilo version, local policies are described as ACLs, which demands that the
Swift effector implement calls to the Swift 1.0 API.
4 Results and Validation
As described in Chapter 3, the incorporation of self-adaptive authorisation into
OpenStack comprises many steps, all of which need to be well integrated to ensure
a secure cloud platform. The goal of this chapter is therefore to evaluate our approach
and to present important considerations and results about its behaviour. For that we have
deployed an OpenStack environment with our controller, and used it to simulate some
of the scenarios described in the previous chapter. We then conducted performance
experiments on the controller's behaviour in different scenarios, as well as experiments
measuring the elapsed time between the detection and mitigation of an insider attack,
to demonstrate the effectiveness of the controller.
4.1 Environment description
In order to validate the proposed approach, we have implemented a prototype of the
MAPE-K controller for evaluating the scenarios and their impact. This prototype has
then been applied for monitoring and controlling an experimental OpenStack deployment,
version Kilo, running as a private cloud in our laboratory.
Figure 12 presents the structure of our experimental deployment, which is distributed
over five nodes. Each node is a physical machine with 8 GB of RAM, a Core i7 processor
and a 500 GB disk. Two nodes are dedicated to storage (Storage Nodes), two are
Processing Nodes, and one acts as the OpenStack Management Node. The OpenStack
Management Node contains the following OpenStack components: Swift Proxy, Nova
Controller and Keystone. Swift Proxy is the component responsible for managing access
to the storage service, Nova Controller takes care of virtual machine management, and
Keystone deals with identity management. The MAPE-K controller is hosted in the
OpenStack Management Node.
Figure 12: Architecture of our experimental deployment.
4.2 Use Case
An example of an insider attack aimed at stealing information is given below to illustrate
the operation of our solution. ACME is an Information and Communication Technology
company that runs a private cloud based on OpenStack, exploiting its processing (Nova)
and storage (Swift) services. Multiple users with different functions have access to the
services offered by the cloud. The actions and privileges of each user vary according to
the permissions associated with their roles. These roles are associated with users through
OpenStack Keystone, and the permissions are set for each service according to its ACLs.
Alice has been working for some time as a consultant on several ACME projects, and
she needs full access to files stored on Swift, as well as to multiple folders within it. This
is possible because she is associated with a consultant role, which has full access
to the system's files and folders. The consultancy is completed, but Alice's user remains
enabled in the cloud. Days later she discovers that she still has access to
the system and starts abusing the service, indiscriminately downloading the company's
current projects, since she does not know for how long this gap will remain open. This
scenario characterizes Alice as a malicious user.
With a MAPE-K-based controller, the cloud monitors all download actions
performed in it. In Alice's case, the system would detect a high number of downloads
in a short time, characterizing abnormal behaviour. By identifying this abuse as coming
from a single user (Alice in this case), the system would classify it as scenario SCE 1.
Once the attack scenario was detected, the possible responses would be to: disable the
user (DU), disable the user's role (DR), exchange the user's role (ER) or restrict the user's
actions by modifying the role's permissions on the service. These different possible
responses bring different impacts (as described in Table 3).
4.3 Experiments
The experimental step of this work comprises feasibility and performance experiments. The
feasibility experiments focused on rule validation and, for that, considered a low number of
users and requests. The second set of experiments consists of more realistic simulations,
i.e., using a considerable number of users and requests through load tests, considering the
environment described in Figure 12.
Part of the feasibility experiments were performed in the private cloud deployed in our
laboratory (as described in Figure 12) and another part in a simulated environment. The
simulated environment consists of a local setup using synthesized logs, into which
we inserted log entries equivalent to those created in the real environment when a user
performs a cloud operation. This adaptation was necessary due to access problems to the real
environment and the lack of optimized tests, since every change demands the creation of
a new version of the controller and its deployment to the cloud server. With the
local environment it is not necessary to generate a new version and deploy it remotely.
Once the rules were implemented, it was possible to carry out experiments with larger
user and request loads.
In the second set of experiments, 100 different users were created on the OpenStack
cloud, all associated with the same role and a single tenant. In
order to generate the request load, the JMeter software (version 1.0) was used. It was configured to
import a .csv file containing the credentials of the 100 users used to make requests in the
cloud. Furthermore, it was configured to perform two HTTP requests. The first one is an
authentication request, in which JMeter uses each user entry in the .csv file to acquire an
access token and saves it internally in a per-user variable. The second HTTP request
attaches the token and performs an operation in the cloud, in this case downloading a
file. If the request is successful, it returns status 200; otherwise it returns an
error status.
The set of tests basically consisted of generating request loads that might or might not
violate a pre-established threshold. For these tests, it was established that a load of more
than 50 download actions in less than 2 seconds would be considered
abnormal behaviour.
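This threshold check can be sketched as a sliding window over request timestamps. The implementation below is our illustration of the "more than 50 downloads within 2 seconds" criterion, not the controller's actual code.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sliding-window detector for the download threshold used in
// the experiments: more than 50 downloads within 2 seconds is abnormal.
public class ThresholdDetector {
    static final int MAX_DOWNLOADS = 50;
    static final long WINDOW_MS = 2000;

    private final Deque<Long> window = new ArrayDeque<>();

    // Records one download request; returns true when the request that just
    // arrived violates the threshold.
    boolean record(long timestampMs) {
        window.addLast(timestampMs);
        // Drop entries older than the 2-second window.
        while (!window.isEmpty() && timestampMs - window.peekFirst() >= WINDOW_MS) {
            window.removeFirst();
        }
        return window.size() > MAX_DOWNLOADS;
    }

    public static void main(String[] args) {
        ThresholdDetector d = new ThresholdDetector();
        boolean abnormal = false;
        for (int i = 0; i < 60; i++) {
            abnormal = d.record(i * 10L); // 60 downloads, 10 ms apart
        }
        System.out.println(abnormal); // prints true: 60 downloads within 0.6 s
    }
}
```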
In this context, two sets of metrics were captured. The first set was related
to the controller's performance and its impact when deployed on the same cloud controller node.
The software used to collect measurement data was htop¹. In this case, the following data
were obtained:
Table 5: Controller Performance Metrics

                     Normal Behaviour           Abnormal Behaviour
Number of Users    CPU        Memory          CPU        Memory
1                  4.7%       1.7%            57%        5.2%
10                 26%        3.3%            97%        10.2%
100                40.3%      5.3%            128%       9.7%
As can be seen in the table, the controller consumed less memory and CPU
when users were behaving normally. Resource consumption increases with the number of
users, since the number of distinct entries the controller must analyse grows.
In this case, the controller's resource consumption does not decisively affect the overall
resources of the cloud.
In the test that simulates an abuse situation, it is important to note that we do not
analyse each user's behaviour individually; the experiment considers whether the behaviour
of the set of users violates the threshold. We note that the consumption of CPU
and memory increased, with CPU exceeding 100%: for 100 users we measured 128%,
meaning that one core of the cloud controller node was fully busy and 28% of another core
was in use during this test. Based on these examples, it is possible to conclude that there are
situations where the resources required by the controller have a significant impact on the
cloud server, potentially influencing cloud performance.

¹hisham.hm/htop/
Table 6: Elapsed time for the scenarios

Number of Users    Elapsed Time    Detected Scenario
1                  0.508 s         Scenario 1
10                 3.418 s         Scenario 5
100                3.378 s         Scenario 5
The second set of metrics, collected during the same experiment, concerned the
elapsed time of the controller given the abnormal behaviours already described. The elapsed
time is the interval between the instant at which the abnormal behaviour
is detected and the instant at which the controller receives from the cloud API an HTTP
200 status indicating that the actions to mitigate the abnormal behaviour were completed
successfully. For each set of users we collected the elapsed time between the identification
of the attack scenario and the end of the mitigation action. From that moment on, all
requests were interrupted.
For the tests with 10 and 100 simultaneous users, we note that the identified scenario was
the same (Scenario 5), which is expected given the triple: N users, associated with 1 role, abusing
1 service. For 1 user, the elapsed time between the identification of the
scenario and the end of the mitigation action was less than 1 second, while in the other
cases it was around 3 seconds. This difference is due to the number of requests made to
mitigate the attack in each case.
In the first case, only two requests are sent to mitigate the attack (Disable User): the
first obtains a valid token and the second disables the user, passing its ID as a
parameter. In the second case, the request to obtain the token is also performed but,
in order to disable the role, it is necessary to obtain its ID first. Thus, an additional request is
made to retrieve all roles in detail. Once we have this information, the effector sends the
request to disable the current role, passing its ID as a parameter.
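The difference between the two flows can be sketched as follows. The user-disable payload matches the Identity API v3 (PATCH /v3/users/{user_id} with {"user": {"enabled": false}}); the role-disable step is shown only as the text describes it, since its exact endpoint is not given there, and the helper names are ours.

```java
// Illustrative sketch of the two mitigation request sequences.
public class MitigationRequests {
    // Body of the user-disable call, per the Keystone Identity API v3.
    static String disableUserBody() {
        return "{\"user\": {\"enabled\": false}}";
    }

    // Disable User: two requests in total.
    static String[] disableUserFlow(String userId) {
        return new String[] {
            "POST /v3/auth/tokens",      // 1. obtain a valid token
            "PATCH /v3/users/" + userId  // 2. disable the user by its ID
        };
    }

    // Disable Role: one extra request to resolve the role ID first.
    static String[] disableRoleFlow(String roleId) {
        return new String[] {
            "POST /v3/auth/tokens",      // 1. obtain a valid token
            "GET /v3/roles",             // 2. list roles to resolve the role ID
            "disable role " + roleId     // 3. disable the role (flow as described)
        };
    }

    public static void main(String[] args) {
        // The extra lookup request helps explain the ~3 s vs. <1 s elapsed times.
        System.out.println(disableUserFlow("u1").length); // prints 2
        System.out.println(disableRoleFlow("r1").length); // prints 3
    }
}
```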
In the first case, it is possible to optimize the response operation because the user
ID is present in the log entry. The role ID, however, is not registered in the log, so the
effector needs to obtain it before disabling the role. The restriction of only accepting the
role ID as a parameter is a decision of the OpenStack API², which justifies the flow
adopted to mitigate scenario 5.

²http://developer.openstack.org/api-ref-identity-v3.html
4.4 Discussion
Our experiments have demonstrated that we are able to detect the insider
threat scenarios described in Table 4. However, smarter analysis techniques are needed for
detecting abuses and deciding which response to apply and when. Our detection
rules considered that some scenarios are built on top of others.
For example, SCE#7 can be seen as an evolution of scenario SCE#3, and SCE#5
as an evolution of scenario SCE#1, which can lead to a kind of progressive blocking,
where n applications of response A might escalate to response B. Although we
obtained the expected response, an abuser might perceive the "progressive blocking" and
change their attack strategy.
Although we simplified the decision-making process associated with the selection of a
response, we have confirmed the impact caused by each response. This provides valuable
insight into some of the criteria and trade-offs that must be considered in this decision
making, even for responses that might look excessive. For example, scenarios
SCE#7 and SCE#8 characterise massive abuse of the cloud platform, and may justify
turning the affected services off while further investigation is conducted to determine
the root cause and decide on a more appropriate response. The identified scenarios, and
their respective responses, cover the main situations involving users, roles and services, and
by no means try to exhaust the subject.
5 Related Work
Although there are several contributions regarding security in the cloud, little has
been done on applying self-adaptation to solve security problems, in particular
solutions for handling insider threats. An overall view of insider threats and their
categories in cloud environments is presented in (DUNCAN; CREESE;
GOLDSMITH, 2012). Although it elucidates the profile of insider attackers, whether
human or bot, it does not provide any insight on how to deal with this type of threat.
In terms of specific application domains, concerns have been raised about malicious
insider attacks on healthcare systems (GARKOTI; PEDDOJU; BALASUBRAMANIAN,
2014). Although the proposed solution prevents insiders from modifying medical data, it
is not sufficient to protect the system from information theft when someone has legitimate
access to a resource. Another solution proposes the use of disinformation against malicious
insiders, preventing them from distinguishing real sensitive customer data
from fake, worthless data (STOLFO; SALEM; KEROMYTIS, 2012). However, if attackers
know precisely what they are looking for, disinformation might not hinder the theft of
information. On the other hand, sample data sniffers have great potential for mitigating
attacks during virtual machine (VM) migrations (DUNCAN et al., 2013), since once a VM is
reallocated to a different hypervisor, a malicious attacker could exploit its vulnerabilities
and obtain a large amount of data.
From the above, and to the best of our knowledge, no similar attempts
have been made to use self-adaptation techniques to deal with the uncertainties
related to insider threats.
The approach most similar to ours is the Self-Adaptive Authorisation Framework (SAAF) (BAILEY;
CHADWICK; LEMOS, 2014), which also adapts authorisation policies at run-time.
A major restriction of SAAF is that it is implemented around PERMIS (CHADWICK et
al., 2008). This dependence reduces its applicability and scope: it cannot
be applied, for example, in an OpenStack/Keystone context, since SAAF is tailored very
specifically towards PERMIS. As the authorisation flow in OpenStack/Keystone is quite
different from that of PERMIS, it would not be a simple task to refactor the SAAF controller
for the OpenStack/Keystone context.
Another form of self-protection in access control is SecuriTAS (PASQUALE et al., 2012),
a tool that enables dynamic decisions in granting access, based on the perceived state of the
system and its environment. A distinctive aspect of this work is that it is aimed essentially
at physical security. SecuriTAS may change the conditions for accessing an office, for
example, based on the presence of high-cost resources or of highly authorised
staff. This is achieved through an autonomic controller that updates and analyses, at
run-time, a set of models that define system objectives and vulnerabilities, threats to the
system, and the importance of resources in terms of a cost value.
In our approach, we decided to build an autonomic controller from scratch instead
of using an existing one, such as Rainbow (SCHMERL et al., 2014), because the adaptation of
authorisation policies is parametric rather than structural (ANDERSSON et
al., 2009a). For example, when adapting authorisation policies there is no need to deal
with an architectural representation of the system, which is a fundamental aspect of
Rainbow.
(SHAIKH; ADI; LOGRIPPO, 2012) presents a method for risk-based access control
decisions in multi-level security (MLS) systems. The authors propose a way of calculating
the trust and risk associated with subjects' access to objects. These values are constantly
updated based on the historical behaviour of granted accesses, with reward/penalty
points assigned according to the access outcome. In this way, their approach is able to
dynamically update trust/risk values, which are used for access control decisions. The
approach is also able to deal with exceptional situations (granting access to unauthorized
good users, or denying access to authorized bad users) when the trust outranks the risk.
The authors of (CHENG et al., 2007) propose a model for adaptive risk-based access
control. Risk is modelled by a fuzzy MLS (Multi-Level Security) model, quantified, and
used to define several thresholds, which in turn define risk bands according to risk
tolerance. Based on a risk-versus-benefit trade-off analysis, the solution grants access but
with additional actions to mitigate risk, depending on the risk band, supporting a "multi-
decision access control". Examples of such additional actions include stronger logging,
extra charges for the user, and different access-level tokens.
Another model considered in this work is RAdAC (MCGRAW, 2009). This model
introduces the idea that access control decisions must consider situational factors and
operational need, so that decisions are based on whether the
benefits of sharing information outweigh the potential security risk. The main idea is to
adapt access control decisions to the situation at hand. The RAdAC model considers
a (probabilistic) Security Risk Determination Function, which calculates a Security Risk
Level based on characteristics of people, IT components and objects, environmental and
situational factors, and heuristics (historical information).
(BIJON; KRISHNAN; SANDHU, 2013) proposes a formalisation of an adaptive, quantified
risk-aware RBAC system. It identifies the RBAC components that can be made risk-
aware, and shows how to use estimated risk values and thresholds in access control decision
making. An adaptive quantified risk-aware access control system assumes that risk can
be estimated as a metric, and that this metric can be used for access control decisions. Moreover, it
also implies that risk values/thresholds can be modified dynamically, identifying the need
for monitoring, anomaly detection and risk re-estimation functions, together with auto-
mated_role_revocation and automated_permission_revocation to automatically revoke
roles from users and permissions from roles, respectively.
Other related work is the Policy Manager¹ GE (Generic Enabler) of the FIWARE
platform. FIWARE is a platform that supports the development and deployment of future
internet applications. The Policy Manager provides rule-based management of cloud resources.
Currently, this GE works with an OpenStack processing environment, in which it
evaluates the state of a knowledge base and acts on it. At present, the possible actions of
this mechanism deal with virtual machines according to context
data (memory, CPU, etc.). In the context of smart cities, another component of the FIWARE
platform is the Complex Event Processor (CEP). This Generic Enabler analyses event data
in real time and enables instant responses to changing conditions. CEP is thus
a possible candidate to replace the Drools engine in our approach, but further study of the
CEP is required to assess the impact of using it.
(SIBAI; MENASCÉ, 2011) proposes an autonomic system named AVPS (Autonomic
Violation Prevention System). Its focus is to provide mechanisms based on self-adaptive
techniques against insider attacks. Instead of dealing with classic issues such as malware and
exploit-based intrusions, this approach acts mainly on security policy violations.

¹https://wiki.fiware.org/FIWARE.ArchitectureDescription.Cloud.PolicyManager
6 Conclusion
This chapter presents the conclusions of this master's dissertation and is divided into
contributions, limitations and future work. Section 6.1 presents the points through which
this work contributes to the state of the art, Section 6.2 presents its limitations
and, finally, Section 6.3 describes possible directions for this research.
6.1 Contribution
This master's dissertation has presented an architectural approach to handle insider
threats in cloud platforms, which incorporates self-adaptation into OpenStack's authoriza-
tion mechanisms. A first step towards integrating an autonomic
controller into OpenStack was to identify the OpenStack authorization components and
the models each component implements. This step allowed us to conclude that, given
the different models used in the same infrastructure, a mapping of those models onto a single
model (in this case, ABAC) was necessary. The effort of mapping those different models
into one characterized the second step.
A fully working prototype was built, and several scenarios representative of insider
threats were identified. Building those scenarios required a complex analysis, since
it was necessary to create the scenarios, specify different responses for each one, and
assign an impact to each response. These scenarios were used to guide the experiments and
evaluate the impact of self-adaptive authorisation approaches on cloud platforms.
In the experiments we observed that, in scenarios without abnormal behaviour, the
autonomic controller has no significant impact on cloud performance, but during a
simulated attack the performance of the cloud can be affected. Another important
result concerns the time elapsed between the identification of an insider attack scenario
and its mitigation. Since the possible actions described in the scenarios are almost all
executed through API calls, the elapsed time is significantly affected by the
number of requests made. This means that the actions to mitigate the attacks must be
implemented using as few calls as possible.
From the results obtained, we have confirmed the potential, in terms of effectiveness
and efficiency, of self-adaptation for mitigating insider threats and protecting cloud
platforms against them, starting from the fact that self-adaptive authorisation
infrastructures are able to react dynamically to attacks by evolving their access control
policies at run-time.
6.2 Limitations
We have identified some limitations to the work presented in this dissertation.
In our approach, the main focus was on developing the analysis phase of MAPE-K,
and on implementing the possible attack scenarios using rules. Such an approach is limited
when the scenarios become more complex or when a rule is the composition of
two or more others; in our implementation, some changes in the code were required in order to
represent that.
Another limitation of our approach is that the use of rules for representing scenarios
limits the scope of action of the controller, since the only attack scenarios it can
identify are those previously configured.
Regarding the attack scenarios, we chose three variables: the number of users, the number of
roles associated with the user, and the number of services. Other variables could be analysed,
for example tenants/projects or similar groups and domains. This would originate more
scenarios and thus more responses and impacts.
Another interesting point concerns establishing what characterizes an attack. Our
approach is simplistic regarding the variability of the behaviour of each user or user
type. For example, for user A it could be normal to perform downloads at a rate of 100
per second, while for user B that could characterize unusual behaviour. This user
profile analysis can become complex as the number of users in the environment increases,
or when detecting group attacks.
6.3 Future work
This section presents some possible directions and future work identified during the
development of the research presented in this dissertation.
One aspect is related to obtaining meaningful behavioural patterns and handling the
uncertainty originating from different and disparate sources of information. This is
particularly important when detecting insider threats, since the logs provided by OpenStack
may not be sufficient to detect an attack. Moreover, some kind of mechanism might be
employed to allow the automatic identification of new attack scenarios.
In the controller's plan phase, more advanced strategies can be developed to choose the
best way to adapt the target system. This can be done based on the impact that each
scenario generates, for example by associating weights with impacts and analysing the
cost-benefit of taking action A rather than action B, or even both.
Since in self-adaptive solutions much of the responsibility is shifted from the security
administrator to the autonomic controller, assurances need to be provided, at run-time,
that the decisions taken by the controller are indeed the correct ones. An important issue
that needs to be investigated is the new types of vulnerabilities that are introduced into
the system when the security administrator is replaced by an autonomic controller.
Another possible effort would be to evaluate the behaviour of the solution on other
platforms and to extend it to support other cloud authorization infrastructures
(OpenNebula, CloudStack, Amazon, etc.).
During the development of this work, we also identified the possibility of directing this
research towards the self-adaptation of the authorization infrastructure of the FIWARE
platform. FIWARE is a middleware platform designed to support the development of
applications for the future internet and the internet of things. As mentioned in Chapter
5, the FIWARE Policy Manager GE is an approach with mechanisms close to our solution,
in the sense that it uses self-adaptation mechanisms to update policies. Nevertheless, this
is a point that needs detailed investigation.
References
ANALYTICS, I. (Ed.). White paper: analysis of internal data theft. [S.l.], 2008.

ANDERSSON, J. et al. Modeling Dimensions of Self-Adaptive Software Systems. In: CHENG, B. H. et al. (Ed.). Software Engineering for Self-Adaptive Systems. Berlin, Heidelberg: Springer-Verlag, 2009. p. 27–47. ISBN 978-3-642-02160-2.

BAILEY, C.; CHADWICK, D. W.; LEMOS, R. de. Self-adaptive federated authorization infrastructures. Journal of Computer and System Sciences, v. 80, n. 5, p. 935–952, 2014. ISSN 0022-0000. Special Issue on Dependable and Secure Computing: The 9th IEEE International Conference on Dependable, Autonomic and Secure Computing.

BHARGAV-SPANTZEL, A. et al. User Centricity: A Taxonomy and Open Issues. J. Comput. Secur., IOS Press, Amsterdam, The Netherlands, v. 15, n. 5, p. 493–527, Oct. 2007. ISSN 0926-227X.

BIJON, K. Z.; KRISHNAN, R.; SANDHU, R. A framework for risk-aware role based access control. In: Communications and Network Security (CNS), 2013 IEEE Conference on. [S.l.: s.n.], 2013. p. 462–469.

CHADWICK, D. W. Federated Identity Management. In: Foundations of Security Analysis and Design V. [S.l.]: Springer Berlin Heidelberg, 2009. (Lecture Notes in Computer Science, v. 5705). p. 96–120. ISBN 978-3-642-03828-0.

CHADWICK, D. W. et al. PERMIS: A Modular Authorization Infrastructure. Concurr. Comput.: Pract. Exper., John Wiley and Sons Ltd., Chichester, UK, v. 20, n. 11, p. 1341–1357, Aug. 2008. ISSN 1532-0626.

CHENG, B. H. et al. Software Engineering for Self-Adaptive Systems: A Research Roadmap. In: CHENG, B. H. et al. (Ed.). Software Engineering for Self-Adaptive Systems. Berlin, Heidelberg: Springer-Verlag, 2009. p. 1–26. ISBN 978-3-642-02160-2.

CHENG, P. C. et al. Fuzzy multi-level security: An experiment on quantified risk-adaptive access control. In: 2007 IEEE Symposium on Security and Privacy (SP '07). [S.l.: s.n.], 2007. p. 222–230. ISSN 1081-6011.

COLE, D. E. Insider Threats and the Need for Fast and Directed Response. [S.l.], 2015.

COLWILL, C. Human factors in information security: The insider threat - who can you trust these days? Inf. Secur. Tech. Rep., Elsevier Advanced Technology Publications, Oxford, UK, v. 14, n. 4, p. 186–196, Nov. 2009. ISSN 1363-4127.

DUNCAN, A.; CREESE, S.; GOLDSMITH, M. Insider Attacks in Cloud Computing. In: Trust, Security and Privacy in Computing and Communications (TrustCom), 2012 IEEE 11th International Conference on. [S.l.: s.n.], 2012. p. 857–862.

DUNCAN, A. et al. Cloud Computing: Insider Attacks on Virtual Machines during Migration. In: Trust, Security and Privacy in Computing and Communications (TrustCom), 2013 12th IEEE International Conference on. [S.l.: s.n.], 2013. p. 493–500.

GARKOTI, G.; PEDDOJU, S.; BALASUBRAMANIAN, R. Detection of Insider Attacks in Cloud Based e-Healthcare Environment. In: Information Technology (ICIT), 2014 International Conference on. [S.l.: s.n.], 2014. p. 195–200.

HU, V. C. et al. SP 800-162. Guide to Attribute Based Access Control (ABAC) Definitions and Considerations. McLean and Clifton, VA, United States, 2014.

IBM (Ed.). An Architectural Blueprint for Autonomic Computing. [S.l.], Jun. 2006.

JØSANG, A. et al. Trust Requirements in Identity Management. In: Proceedings of the 2005 Australasian Workshop on Grid Computing and e-Research - Volume 44. Darlinghurst, Australia: Australian Computer Society, Inc., 2005. (ACSW Frontiers '05), p. 99–108. ISBN 1-920-68226-0.

JØSANG, A.; POPE, S. User Centric Identity Management. In: AusCERT Asia Pacific Information Technology Security Conference. [S.l.: s.n.], 2005. p. 77.

MCGRAW, R. W. Risk Adaptable Access Control (RAdAC). McLean and Clifton, VA, United States, 2009.

MELL, P. M.; GRANCE, T. SP 800-145. The NIST Definition of Cloud Computing. Gaithersburg, MD, United States, 2011.

OREIZY, P. et al. An architecture-based approach to self-adaptive software. IEEE Intelligent Systems and their Applications, v. 14, n. 3, p. 54–62, May 1999. ISSN 1094-7167.

PASQUALE, L. et al. SecuriTAS: A Tool for Engineering Adaptive Security. In: Proceedings of the ACM SIGSOFT 20th International Symposium on the Foundations of Software Engineering. New York, NY, USA: ACM, 2012. (FSE '12), p. 19:1–19:4. ISBN 978-1-4503-1614-9.

SALEHIE, M.; TAHVILDARI, L. Self-adaptive software: Landscape and research challenges. ACM Trans. Auton. Adapt. Syst., ACM, New York, NY, USA, v. 4, n. 2, p. 14:1–14:42, May 2009. ISSN 1556-4665. Available at: <http://doi.acm.org/10.1145/1516533.1516538>.

SANDHU, R. S. et al. Role-Based Access Control Models. Computer, IEEE Computer Society Press, Los Alamitos, CA, USA, v. 29, n. 2, p. 38–47, Feb. 1996. ISSN 0018-9162.

SCHMERL, B. et al. Architecture-based Self-protection: Composing and Reasoning About Denial-of-service Mitigations. In: Proceedings of the 2014 Symposium and Bootcamp on the Science of Security. New York, NY, USA: ACM, 2014. (HotSoS '14), p. 2:1–2:12. ISBN 978-1-4503-2907-1.

SCHULTZ, E. A framework for understanding and predicting insider attacks. Comput. Secur., Elsevier Advanced Technology Publications, Oxford, UK, v. 21, n. 6, p. 526–531, Oct. 2002. ISSN 0167-4048.

SHAIKH, R. A.; ADI, K.; LOGRIPPO, L. Dynamic risk-based decision methods for access control systems. Computers & Security, v. 31, n. 4, p. 447–464, 2012. ISSN 0167-4048.

SIBAI, F. M.; MENASCÉ, D. A. Defeating the insider threat via autonomic network capabilities. In: 2011 Third International Conference on Communication Systems and Networks (COMSNETS 2011). [S.l.: s.n.], 2011. p. 1–10. ISSN 2155-2487.

SILOWASH, G.; CAPPELLI, D.; MOORE, A. et al. Common Sense Guide to Mitigating Insider Threats. [S.l.], 2012.

STOLFO, S.; SALEM, M.; KEROMYTIS, A. Fog Computing: Mitigating Insider Data Theft Attacks in the Cloud. In: Security and Privacy Workshops (SPW), 2012 IEEE Symposium on. [S.l.: s.n.], 2012. p. 125–128.

VAQUERO, L. M. et al. A break in the clouds: Towards a cloud definition. SIGCOMM Comput. Commun. Rev., ACM, New York, NY, USA, v. 39, n. 1, p. 50–55, Dec. 2008. ISSN 0146-4833.
APPENDIX A -- Scenarios Rules
This appendix presents all Drools rules implemented to validate the attack scenarios proposed in Section 3.3.
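For readability, the sketch below shows a plausible shape for the LogInfo and MapLogResquest fact types that the rules match on. Only the field and accessor names are taken from the rule bodies; everything else (the Java types, the MAX_DURATION value, the counter layout) is an assumption and may differ from the actual implementation.

```java
import java.util.Date;
import java.util.List;

// Assumed shape of a parsed authorisation-log entry (a Drools fact).
// Only the names used in the rules are known; types are guesses.
class LogInfo {
    static final long MAX_DURATION = 60_000L; // assumed correlation window (ms)

    long idtrans;           // transaction identifier
    Date timestamp;         // time the request was logged
    String username;        // subject that issued the request
    List<String> role;      // roles held by the subject
    String serviceName;     // cloud service that was accessed

    long getIdtrans()       { return idtrans; }
    Date getTimestamp()     { return timestamp; }
    String getUsername()    { return username; }
    List<String> getRole()  { return role; }
    String getServiceName() { return serviceName; }
    long getMAX_DURATION()  { return MAX_DURATION; }
}

// Assumed accumulator fact: one counter per attack scenario.
// scenario3..scenario8 follow the same getter/setter pattern.
class MapLogResquest {
    private int scenario1;
    private int scenario2;

    int getScenario1()       { return scenario1; }
    void setScenario1(int v) { scenario1 = v; }
    int getScenario2()       { return scenario2; }
    void setScenario2(int v) { scenario2 = v; }
}
```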
rule "Rule_Scenario1"
when
    $r : LogInfo( $id : idtrans,
                  $time : timestamp )
    $c : LogInfo( idtrans < $id,
                  $r.username == username &&
                  $r.role.size() == 1 &&
                  $r.serviceName == serviceName &&
                  $time.getTime() - timestamp.getTime() < $r.getMAX_DURATION() )
    $m : MapLogResquest()
then
    $m.setScenario1( $m.getScenario1() + 1 );
    $m.setScenario2( $m.getScenario2() + 1 );
    $m.setScenario5( $m.getScenario5() + 1 );
    $m.setScenario6( $m.getScenario6() + 1 );
end
rule "Rule_Scenario2"
when
    $r : LogInfo( $id : idtrans,
                  $time : timestamp )
    $c : LogInfo( idtrans < $id,
                  $r.username == username &&
                  $r.role.size() == 1 &&
                  $r.serviceName != serviceName &&
                  $time.getTime() - timestamp.getTime() < $r.getMAX_DURATION() )
    $m : MapLogResquest()
then
    $m.setScenario2( $m.getScenario2() + 1 );
    $m.setScenario6( $m.getScenario6() + 1 );
end
rule "Rule_Scenario3"
when
    $r : LogInfo( $id : idtrans,
                  $time : timestamp )
    $c : LogInfo( idtrans < $id,
                  $r.username == username &&
                  $r.role.size() > 1 &&
                  $r.serviceName == serviceName &&
                  $time.getTime() - timestamp.getTime() < $r.getMAX_DURATION() )
    $m : MapLogResquest()
then
    $m.setScenario3( $m.getScenario3() + 1 );
    $m.setScenario4( $m.getScenario4() + 1 );
    $m.setScenario7( $m.getScenario7() + 1 );
    $m.setScenario8( $m.getScenario8() + 1 );
end
rule "Rule_Scenario4"
when
    $r : LogInfo( $id : idtrans,
                  $time : timestamp )
    $c : LogInfo( idtrans < $id,
                  $r.username == username &&
                  $r.role.size() != 1 &&
                  $r.serviceName != serviceName &&
                  $time.getTime() - timestamp.getTime() < $r.getMAX_DURATION() )
    $m : MapLogResquest()
then
    $m.setScenario4( $m.getScenario4() + 1 );
    $m.setScenario8( $m.getScenario8() + 1 );
end
rule "Rule_Scenario5"
when
    $r : LogInfo( $id : idtrans,
                  $time : timestamp )
    $c : LogInfo( idtrans < $id,
                  $r.username != username &&
                  $r.role.size() == 1 &&
                  $r.serviceName == serviceName &&
                  $time.getTime() - timestamp.getTime() < $r.getMAX_DURATION() )
    $m : MapLogResquest()
then
    $m.setScenario5( $m.getScenario5() + 1 );
    $m.setScenario6( $m.getScenario6() + 1 );
end
rule "Rule_Scenario6"
when
    $r : LogInfo( $id : idtrans,
                  $time : timestamp )
    $c : LogInfo( idtrans < $id,
                  $r.username != username &&
                  $r.role.size() == 1 &&
                  $r.serviceName != serviceName &&
                  $time.getTime() - timestamp.getTime() < $r.getMAX_DURATION() )
    $m : MapLogResquest()
then
    $m.setScenario6( $m.getScenario6() + 1 );
end
rule "Rule_Scenario7"
when
    $r : LogInfo( $id : idtrans,
                  $time : timestamp )
    $c : LogInfo( idtrans < $id,
                  $r.username != username &&
                  $r.role.size() > 1 &&
                  $r.serviceName == serviceName &&
                  $time.getTime() - timestamp.getTime() < $r.getMAX_DURATION() )
    $m : MapLogResquest()
then
    $m.setScenario7( $m.getScenario7() + 1 );
    $m.setScenario8( $m.getScenario8() + 1 );
end
rule "Rule_Scenario8"
when
    $r : LogInfo( $id : idtrans,
                  $time : timestamp )
    $c : LogInfo( idtrans < $id,
                  $r.username != username &&
                  $r.role.size() != 1 &&
                  $r.serviceName != serviceName &&
                  $time.getTime() - timestamp.getTime() < $r.getMAX_DURATION() )
    $m : MapLogResquest()
then
    $m.setScenario8( $m.getScenario8() + 1 );
end