The Green Evolution of EMOTIVE Cloud
EMOTIVE Cloud: The BSC’s IaaS open-source solution for Cloud Computing
Alexandre Vaqué Brull
Master in Computer Architecture, Network and Systems Department of Computer Architecture
Universitat Politècnica de Catalunya
Advisors: Jordi Torres and Jordi Guitart
September 2011
Acknowledgements
I would like to take these lines to give my sincere thanks to all the people who have helped me
carry out this project:
I would like to express my full gratitude to Drs. Jordi Torres and Jordi Guitart for the trust they
placed in me, for giving me the opportunity to be part of this magnificent team, and for their constant
and invaluable help. I have learned a great deal from them, both professionally and personally.
I would also like to thank Dr. Íñigo Goiri for his constant help, his patience, and the knowledge he
shared with me.
I have learned a lot from all three of them, and they have been fundamental pillars in the elaboration
of this master thesis. Without their help I could not have achieved it, and I will always keep in mind
everything they have done for me.
I would also like to thank my partner, Sara Serra, for her support in this project, and my family for
the constant help they have always offered me. Many thanks to all my colleagues, professors (especially
Dr. David Carrera), friends and relatives who have supported me and cared about me at all times;
even if I do not mention them explicitly, I cannot deny them my sincere gratitude.
Table of Contents
1 Abstract ............................................................................................................................................. 1
2 Introduction ....................................................................................................................................... 2
2.1 Motivation .................................................................................................................................. 2
2.2 Goals ........................................................................................................................................... 3
2.3 Document Structure .................................................................................................................... 4
3 Background and Related Work ......................................................................................................... 5
3.1 Virtualization .............................................................................................................................. 5
3.1.1 Virtualization technologies ................................................................................................. 7
3.1.2 Libvirt: The virtualization API ............................................................................................ 8
3.2 Cloud Computing ....................................................................................................................... 9
3.2.1 Virtualization Unlocks Cloud Computing ........................................................................... 9
3.3 Cloud Middleware .................................................................................................................... 10
3.3.1 OpenNebula ...................................................................................................................... 10
3.3.2 Eucalyptus .......................................................................................................................... 11
3.3.3 OpenStack ........................................................................................................ 12
3.4 Interoperability in the Cloud .................................................................................................... 13
3.4.1 API OCCI .......................................................................................................................... 14
3.4.2 TCloud .............................................................................................................................. 17
3.5 Green Computing ..................................................................................................................... 18
3.5.1 The greening of the Cloud ................................................................................................ 19
4 Contribution .................................................................................................................................... 21
4.1 EMOTIVE ORIGINAL ............................................................................................................ 21
4.2 The Evolution of EMOTIVE .................................................................................................... 23
4.2.1 Introduction ....................................................................................................................... 23
4.2.2 New Modular Architecture................................................................................................ 24
4.2.3 Green IT evolution ............................................................................................................ 25
4.3 Extended Virtualization support in EMOTIVE ........................................................................ 25
4.4 EMOTIVE Networks ............................................................................................................... 28
4.4.1 VLAN................................................................................................................................ 28
4.4.2 VPN ................................................................................................................................... 31
4.4.3 Networks by Software are Green ...................................................................................... 32
4.5 EMOTIVE Interoperability ...................................................................................................... 34
4.5.1 API OCCI and Web Services ............................................................................................ 34
4.5.2 REST vs. SOAP ................................................................................................................ 35
4.5.3 API OCCI in EMOTIVE................................................................................................... 35
4.6 EMOTIVE for Green Computing............................................................................................. 39
4.6.1 Green Hypervisor Comparison ......................................................................................... 39
4.6.2 Architecture comparison (Atom-Xeon-Hybrid) ................................................................ 42
4.6.3 Middleware scheduling comparison (OpenNebula and EMOTIVE) ................................ 46
4.6.4 Middlewares qualitative comparison ................................................................................ 50
5 Conclusions ..................................................................................................................................... 52
5.1 Summary .................................................................................................................................. 52
5.2 Publications .............................................................................................................................. 52
5.3 Suggestions for future work ..................................................................................................... 52
6 References ....................................................................................................................................... 55
1 Abstract
In recent years, research projects on Cloud and Green Computing have been growing. A new
computational generation is emerging in which ecological concerns have strengthened. Technological
growth leads to increased energy consumption, and thus this new computational generation is emitting
large amounts of CO2 into the atmosphere.
Cloud Computing has managed to move centralized physical resources to shared virtual resources,
reducing costs and maintenance while increasing efficiency. Virtualization plays an important role in
Cloud Computing (especially IaaS) because it makes it possible to create environments "on demand"
within large Cloud platforms. This way, we can accommodate more than one virtual machine on the same
host, avoiding the expense of a traditional physical machine dedicated to only one service.
In this sense, we can say that virtualization environments together with Cloud Computing platforms
provide the IT market with extraordinary flexibility, a wide range of possible configurations and virtually
unlimited resources.
At BarcelonaTech (UPC) and the BSC (Barcelona Supercomputing Center), we have a Cloud platform
based on virtualized environments called EMOTIVE Cloud (Elastic Management of Tasks in
Virtualized Environments). The EMOTIVE middleware provides virtualized environments to users
and allows them to execute tasks.
The main aim of this project is to expand and evolve the capabilities of the EMOTIVE platform in order
to overcome certain limitations. This includes enabling easy interoperability with other Cloud providers,
thanks to a new architecture, and adding new functionalities such as support for new hypervisors and
network management. In addition, we perform new studies to evaluate these new features and extend
the research related to EMOTIVE and Cloud Computing.
We also provide a qualitative comparison of EMOTIVE running over different hypervisors
(e.g. Xen, KVM and VirtualBox), with respect to other IaaS open-source solutions (e.g. OpenNebula)
and different kinds of computer architectures (Xeon, Atom and hybrid solutions). This will help users
find the best solution according to their needs: performance, greenness, agility, usability, etc.
Keywords: Virtualization, Service provider, Cloud, Resource management, Green Computing,
Scheduling, Consolidation, EMOTIVE Cloud, IaaS, Open source.
2 Introduction
2.1 Motivation
The use of Cloud Computing and Virtualization is increasing, as is evident from the wide current offer
of Cloud Computing services. If we focus on Cloud infrastructure providers, we can find a wide variety
of services and products. The Cloud is attractive to companies, whether small or large, because Cloud
Computing will replace many traditional internal IT services; for example, many internal IT resources
will be externalized to the Cloud. But keep in mind that the IT and Cloud Computing sector is
constantly growing, thus increasing energy consumption. This is an important problem that affects
world CO2 emissions. According to Greenpeace, the datacenters that host Cloud Computing
services will triple their emissions into the atmosphere by 2020. There is currently active research
trying to improve the consumption/performance ratio, especially for datacenters and supercomputers.
BSC and UPC are researching Cloud and Green Computing. The EMOTIVE Cloud middleware
(Elastic Management Of Tasks In Virtualized Environments) is being used by BSC to do research in
Cloud Computing (thanks to its Infrastructure as a Service (IaaS) capabilities), as well as in research
projects such as BREIN (1), OPTIMIS (2), VENUS-C (3), NUBA (4) and others. EMOTIVE
enables the smart management of virtual environments using different scheduling policies. Additionally,
it is very easy to extend, thanks to its modular Web Service architecture, which has evolved in this project.
In this master thesis, we research how to improve this middleware and its green features.
Similarly to EMOTIVE, there are many other Cloud providers, like Amazon EC2 (5), and Cloud
middlewares, such as OpenNebula (6). These new technologies allow the creation of virtual machines on
demand, and they even allow outsourcing virtual machines to external Clouds and migrating these
machines between Clouds. These are powerful features that can help to improve service availability and
enhance resource management and power consumption. However, the current problem is that there are
no established standards for using these new features homogeneously across providers. In general, most
providers only offer their own proprietary interface or support only a given virtualization technology
(which constrains what kind of virtual machine images they can use).
For this reason, new research is emerging to define possible standards that would allow providers to offer
a common interface, and thus to interoperate among themselves and, above all, to create big Cloud
communities. Additionally, Cloud middleware is evolving to support different virtualization technologies,
or at least different image formats, to enable real interoperability.
As mentioned, providers must also be conscious of their energy impact and try to reduce their energy
usage. For this reason, they are currently incorporating more complex management policies that allow
them to use energy in an efficient way, or to consider the ecological impact when taking decisions.
In this sense, building providers on heterogeneous architectures, with different energy
consumption profiles, is a powerful tool for these policies to really achieve their energy-related goals.
This is one of the reasons why an efficient management of Cloud providers is mandatory
today. There is an important tradeoff to be solved between the performance of the applications running
in the provider and its power consumption. The goal is to fulfill the performance requirements of the
applications while minimizing power consumption. Apart from this, there are other aspects that require
complex management in Cloud providers. For instance, offering resources in the Cloud is no longer
about offering raw virtual machines. Clients require virtual machines that come ready to deploy
distributed services without painful configuration steps. In this sense, support for the creation
and management of virtual networks among virtual machines is a must.
2.2 Goals
In this section, we describe this thesis' goals, which aim to resolve some of the limitations presented in
the motivation section.
The main goal of the project is to extend the capabilities of the EMOTIVE Cloud platform and, in part,
to expand our research with a focus on being greener. Accordingly, each new feature added to EMOTIVE
will try to contribute to this green orientation.
The main project goals are:
1. To add new features that expand EMOTIVE with new functionalities, easier management, and a green
approach. We will add support for new hypervisors and virtual network management.
2. To redesign the architecture and interfaces of EMOTIVE Cloud, moving from SOAP Web Services
to RESTful ones.
3. To add a new OCCI interface to provide interoperability with other Cloud providers and middlewares.
4. To be able to use hybrid computer architectures to exploit the Green factor in EMOTIVE.
5. To study, compare and evaluate EMOTIVE:
- Running on different computer architectures (Xeon, Atom and hybrid
solutions).
- Running with different hypervisors (KVM, Xen and VirtualBox).
- Against other middlewares such as OpenNebula, Eucalyptus and OpenStack.
Next, we detail these goals:
(Goal #1) The most important new feature added to EMOTIVE is the substitution of the Xen API with
the Libvirt API. Initially, EMOTIVE could only use the Xen hypervisor through the Xen API. The
Libvirt API expands the number of hypervisors EMOTIVE can drive, so now we can use the Xen, KVM,
and VirtualBox hypervisors. In addition, Libvirt can be used to manage virtual networks between virtual
machines, so we also add network management to EMOTIVE Cloud, including VPN creation. To sum
up, we are able to create, destroy, list and edit VLANs and VPNs (point-to-point and
multipoint-to-point).
(Goal #2) Another important new feature is the restructuring of the EMOTIVE architecture, adapting
the SOAP Web Services to RESTful Web Services.
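To illustrate the difference in interaction style, the sketch below contrasts how the same "create a VM" operation might look as a SOAP call versus a RESTful one, using only the Python standard library. The endpoint paths, operation name and VM identifier are hypothetical, chosen for illustration; this is not EMOTIVE's actual interface.

```python
import urllib.request

# Hypothetical EMOTIVE endpoint and VM name, used only for illustration.
BASE_URL = "http://emotive.example.org:8080"

# SOAP style: one generic service endpoint; the operation ("createVM")
# travels inside an XML envelope in the POST body.
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <createVM><name>vm01</name><memoryMB>512</memoryMB></createVM>
  </soap:Body>
</soap:Envelope>"""
soap_req = urllib.request.Request(
    BASE_URL + "/services/vmm",
    data=soap_body.encode(),
    headers={"Content-Type": "text/xml", "SOAPAction": "createVM"},
    method="POST",
)

# REST style: the VM itself is a resource identified by a URI, and the
# HTTP verb (here PUT, to create the resource) carries the operation.
rest_req = urllib.request.Request(
    BASE_URL + "/vm/vm01",
    data=b"memoryMB=512",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="PUT",
)

print(soap_req.get_method(), soap_req.full_url)
print(rest_req.get_method(), rest_req.full_url)
```

The REST variant maps cleanly onto HTTP itself (verbs plus URIs), which is what makes it a natural base for interoperability interfaces such as OCCI.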
(Goal #3) Given the new REST architecture, we adapt the RESTful methods to be compatible with the
OCCI API, which makes interoperability between Clouds possible. With this new feature we also keep
in mind EMOTIVE's compatibility with Amazon EC2, so EMOTIVE now supports interoperability
through both the OCCI API and the Amazon EC2 interfaces.
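As a rough sketch of what an OCCI-style call looks like (following the OCCI HTTP rendering in broad strokes; the endpoint URL is hypothetical and this is not EMOTIVE's actual implementation), creating a compute resource amounts to a POST whose `Category` header names the resource kind and whose `X-OCCI-Attribute` header carries its attributes:

```python
import urllib.request

# Hypothetical EMOTIVE OCCI endpoint, used only for illustration.
OCCI_URL = "http://emotive.example.org:8080/compute/"

# In OCCI's HTTP rendering, the resource kind goes in the Category header
# and resource attributes go in X-OCCI-Attribute headers.
headers = {
    "Category": ('compute; '
                 'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                 'class="kind"'),
    "X-OCCI-Attribute": ('occi.compute.cores=1, '
                         'occi.compute.memory=0.5, '
                         'occi.compute.hostname="vm01"'),
    "Content-Type": "text/occi",
}

# POSTing to the compute collection asks the provider to create the resource.
req = urllib.request.Request(OCCI_URL, headers=headers, method="POST")

print(req.get_method(), req.full_url)
```

Because the rendering is plain HTTP headers over standard verbs, any OCCI-speaking client can address any OCCI-speaking provider, which is precisely the interoperability this goal pursues.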
(Goal #4) After adding these features, we have further developed the EMOTIVE middleware to give it
more green aspects, trying to improve and adapt EMOTIVE in this sense. An interesting green
aspect is that we can use hybrid computer architectures that combine Xeon and Atom servers.
(Goal #5) Having achieved these goals, to test the new features we made some comparisons between
EMOTIVE running on different computer architectures, hypervisors and middlewares. These comparisons
allow us to see the new power of EMOTIVE, explore the new possibilities, and study the green approach.
We expect the research in this project to be useful for future work and to be used in other research
projects. For example, with these new functionalities we can adapt EMOTIVE to the requirements of the
NUBA national research project, or the VENUS-C European project.
2.3 Document Structure
This document is organized as follows. Chapter 2 explains the goals and main motivations of this
master thesis. Chapter 3 presents background and related work on the basic concepts, including the
state of the art.
Chapter 4 is the most important, because it describes the actual work completed as part of this master
thesis. It introduces the technical aspects of EMOTIVE, describes and analyzes the implementation,
presents its new architecture and how it is implemented, and covers the new features added to the
middleware. Section 4.6 explains the evaluation results, describing how EMOTIVE has been
tested and comparing and summarizing the results.
Finally, Chapter 5 presents the conclusions and proposes possible future work related to this project.
3 Background and Related Work
First of all, it is necessary to understand the basic ideas of Virtualization and Cloud Computing in order
to follow this master thesis. In this chapter, we present Virtualization concepts and technologies,
Cloud Computing infrastructure ideas, middleware products similar to EMOTIVE, interoperability for
federation and hybrid Clouds, and the state of the art of Green Computing.
These topics are very important and useful to better understand the Cloud Computing puzzle. It is
important to know that, without Virtualization, Cloud Computing could not offer any of the new kinds
of services it offers nowadays.
We will talk about the middleware layer and Infrastructure-as-a-Service (IaaS) systems. Cloud
providers use middlewares to offer their services. Service providers in the Cloud offer complex
services ready to be used, and customers pay depending on the volume of consumed services, as we do
with electricity or water.
This middleware manages a set of resources. These resources can form Private Clouds, Public Clouds
and Hybrid Clouds. These days, one of the challenges we face is to better manage this wide range of
Clouds, as the number of companies that need more than one Cloud is increasing.
Moreover, they can migrate VMs from one Cloud to another and exploit new advantages. This
represents a challenge, and we will show a new approach for future research.
New technologies go mainly in this direction. The most well-known ones that help to organize Cloud
Computing and Virtualization infrastructures are VMware, Xen, KVM, OpenNebula,
Eucalyptus, Libvirt and others. In the following sections we will talk about them. They are all good
technologies to virtualize environments, manage them, create abstraction layers, etc.
With these new technologies and their possibilities, we can both contribute to improving green
computing and tackle other new challenges such as Manageability and Self-*, Federation &
Interoperability, Virtualization, Elasticity and Adaptability.
3.1 Virtualization
A virtual machine is an implementation of a machine (i.e. a computer) that executes programs like a
physical machine. The main idea of Virtualization is to simulate machines inside machines.
There are many advantages to virtualizing. A person who has not experienced the benefits of
virtualization often asks what the big deal is. A commonly repeated argument against virtualization is
flawed: after all, instead of running 10 VMs, each serving a single application, you could have one
multi-purpose server, and maintaining a single system means less work. True? Not quite. There are good
reasons for separating applications into either physical or virtual computers. The first option is clearly
too expensive in many ways (not only initial costs, but also power, cooling, space and maintenance), and
VMs even have some advantages over real computers.
Some of the advantages are listed below:
- Isolation: virtual machines are independent of each other.
- Security: each machine has independent privileged access, and it is very easy to back up and restore
virtual machines.
- Hardware and software flexibility (CPU, memory, disk, network, OS, etc.).
- Agility: instant servers.
- Portability: being a file, a VM is easy to clone or transport to another server.
- Service consolidation, datacenter cost savings, high availability and other similar features.
- Lab environments, for example to test new software.
- Cloud Computing environments: they help to deal with increased load.
A hypervisor is a supervising master program that manages virtual machines: a computing layer which
allows multiple operating systems to run on a host computer at the same time. The role of the
hypervisor is to support guest operating systems on a single machine. It was originally developed in
the 1970s as part of the IBM S/360, and many modern variants have been created by different developers.
Figure 1 - Hypervisor architecture
Virtualization types:
Computer systems are divided into levels of abstraction separated by well-defined interfaces. There are
several ways to achieve virtualization, at different levels of abstraction, each with different advantages
and disadvantages.
Virtualization introduces an abstraction layer that shows the higher layers a different underlying system.
Virtualization can be classified by the system-layer interface it abstracts. The main types are Hardware
Emulation, Full Virtualization, and Paravirtualization.
Hardware Emulation simulates complete hardware, allowing an unmodified OS to be run: every
instruction is simulated on the underlying hardware. The drawback is that much performance is lost;
emulation is slower than both full virtualization and paravirtualization. An emulator of this kind can
even run multiple virtual machines at once.
Many techniques are used to implement emulation. Some examples of emulation are VirtualBox,
Bochs and QEMU.
Full virtualization uses a virtual machine monitor that mediates between the guest operating system
and the native hardware. It is slower than native hardware because there is another layer, the
hypervisor, which mediates between the hardware and the OS.
One of the biggest advantages of full virtualization is that the guest OS can run unmodified. Certain
machine instructions must be trapped and handled within the hypervisor, because the underlying
hardware is not owned by a single operating system but is instead shared through the hypervisor.
Figure 2 - Hardware Emulation
Figure 3 - Full virtualization
Some examples of full virtualization are VMware ESX, Parallels and KVM.
Paravirtualization is similar to full virtualization: it uses a hypervisor for shared access to the underlying
hardware, but integrates some virtualization-related parts into the operating system, so the guest system
needs to be modified for the hypervisor.
To implement this method, the hypervisor offers an API to be used by the guest OS; such a call is known
as a hypercall. This increases performance with respect to full virtualization.
A disadvantage is that the guest OS needs to be modified; an advantage is that it can run multiple
different operating systems concurrently. Xen is an example of paravirtualization.
3.1.1 Virtualization technologies
Xen: Xen is a virtual machine monitor for x86, x86-64, Itanium and PowerPC that allows several guest
operating systems to execute concurrently on the same physical machine. Xen is free software
(GPL-licensed), provides paravirtualization and supports x86 and x64 processors. The first guest OS is
known as Domain-0 in Xen terminology. Domain-0 boots automatically whenever the Xen software
boots, and users need to log in on Domain-0 to launch other guest OSes.
KVM: Kernel-based Virtual Machine is a Linux kernel virtualization infrastructure. KVM currently
supports native virtualization using Intel VT or AMD-V. Limited support for paravirtualization is also
available for Linux and Windows guests, in the form of a paravirtual network driver, a balloon driver to
affect the operation of the guest virtual memory manager, and CPU optimizations for Linux guests.
VMware: commercial virtualization software that provides full virtualization. VMware comes in many
flavors, such as VMware Workstation, VMware Server and VMware ESX, that provide different levels
of flexibility and functionality. VMware is highly portable, as it is independent of the underlying
physical hardware, making it possible to create one instance of a guest OS using VMware and copy it
to many physical systems.
VirtualBox: a newcomer to the virtualization market, Sun Microsystems' VirtualBox is a software
package that provides full virtualization. It was initially developed by a German company,
Innotek, but is now under the control of Sun Microsystems as part of the Sun xVM virtualization
platform. It supports Linux, Mac OS X, Solaris, and Windows as the host OS.
Amazon EC2: Amazon Elastic Compute Cloud (Amazon EC2) (5) is a web service that provides
resizable compute capacity in the Cloud, designed to make web-scale computing easier for developers.
This technology allows having virtual machines in the Cloud, so EC2 is more an IaaS solution than a
virtualization technology.
Figure 4 - Paravirtualization
3.1.2 Libvirt: The virtualization API
Introduction
Libvirt (7) is a generic virtualization API that interacts with the virtualization capabilities of many
different hypervisors (Xen, QEMU, KVM, ESX, VirtualBox, etc.) and can also be used with User Mode
Linux. It provides a common layer of abstraction and control for virtual machines, and it can also manage
virtual networks and virtual storage. The main components of Libvirt are a control daemon, a stable C
language API and a shell environment. Libvirt has a long-term-stable C API and a Java API, the latter
being the one used in EMOTIVE Cloud.
A number of open-source projects, such as virt-manager and virt-install, use Libvirt as their virtual
machine control mechanism. It is important to know that Libvirt stores configuration information in
XML, independently of the hypervisor.
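As an illustration of this hypervisor-independent format, a minimal Libvirt domain definition for a KVM guest might look like the sketch below. The VM name and disk path are hypothetical, and the element set shown is only the essential core; changing the `type` attribute (e.g. to `xen`) targets a different hypervisor:

```xml
<domain type='kvm'>
  <name>vm01</name>
  <!-- memory is expressed in KiB by default: 524288 KiB = 512 MB -->
  <memory>524288</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/vm01.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
```

Libvirt parses this one definition and translates it into the configuration format of whichever hypervisor is selected, which is what makes the description portable.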
In conclusion, Libvirt is a toolkit for interacting with the virtualization capabilities of recent versions of
Linux and other OSes. It is free software available under the GNU Lesser General Public License, and
it now has a set of bindings for common languages.
Libvirt is a virtualization technology used in many Cloud Computing infrastructure products. This
technology helps to unlock virtualization in the Cloud, similarly to Amazon EC2 and other Cloud
technologies.
Libvirt API:
The C and Java APIs expose, among others, these main methods:
· initialize
· getDomain
· getDomainID
· getDomainNameList
· create
· destroy
· pause / unpause
· save
· restore
· migrate
· getNodeCPUNum
· getNodeCPUSpeed
· getCPUNum
· setCPU
· updateCache
Libvirt (Virsh)
Virsh is a command-line user interface for easily managing virtual machines. Some common commands:
List domains: virsh list --all
Boot a domain: virsh create /etc/xen/machine1.xml
Connect to a domain console: virsh console machine1
Reboot a domain: virsh reboot machine1
Shut down a domain gracefully: virsh shutdown machine1
Kill a domain: virsh destroy machine1
Remove a domain definition: virsh undefine machine1
All virsh options: virsh help
Figure 5 - Libvirt
3.2 Cloud Computing
Cloud Computing is composed of three layers: IaaS (Infrastructure as a Service), PaaS (Platform as a
Service) and SaaS (Software as a Service). In this master thesis we focus on the lowest layer of Cloud
Computing, so we talk about Infrastructure as a Service (8) (9) (10).
Within Cloud Computing IaaS there are three deployment types: Public, Private, and Hybrid Cloud. A
Public Cloud is a totally outsourced, paid service that does not depend on the company. Good examples
are Google's service products, Salesforce.com and LotusLive iNotes, which are open to any user
through a subscription; other examples are pay-as-you-go compute capacity offerings like Amazon EC2
and GoGrid. A Private Cloud is created specifically for a company, maximizing
existing resources and acquiring new ones. This is achieved through virtualization, and these Clouds are
only for internal use of the company. This is a new model, but with a wide scope and range; for
example, we can find Eucalyptus, OpenNebula and IBM Smart Business Storage Cloud. Finally, a Hybrid
Cloud is a fusion of the two previous types: a company may occasionally need more resources than those
offered by its Private Cloud, so when needed the Cloud can be extended from private to public, giving
the company an optimal environment for its production processes and projects.
Hybrid Clouds will be widely used in the near future, because they can improve and profit from existing
private resources and, if needed, extend themselves using Public Cloud resources. To sum up, hybrid
Clouds take advantage of both Public and Private Cloud features and also contribute new ones.
There is a growing number of providers offering IaaS solutions for elastic capacity, whereby server
instances are executed in their own infrastructure and billed on a utility computing basis (typically
virtual machines on a per-instance, per-hour basis). There are also a number of commercial and open-
source products which seek to replicate this functionality in-house while exposing compatible
interfaces, so that hybrid Cloud operating environments can be created.
3.2.1 Virtualization Unlocks Cloud Computing
The use of Cloud Computing and Virtualization is increasing (11) (12), as is evident from the large
current offer of Cloud Computing services. If we focus on Cloud infrastructure providers, we find a
variety of services and content. Furthermore, these technologies have the support of big companies.
As the possibilities of Cloud Computing and the services offered through it improve and expand,
companies will be increasingly attracted to this technology for its intrinsic value and simplicity.
Cloud Computing will eliminate many aspects of IT that traditionally required internal IT
resources, which will be attractive to companies, whether small or large. To open a business in the
Cloud, it is not necessary to spend initial capital (CapEx); one only pays for usage (OpEx).
In traditional IT, researchers have found that about 80% of a company's local computing resources are
underutilized (e.g., at a given moment we may only be running two or three applications on our
computer, wasting much of its computational potential), so it would certainly make sense to move many
of these applications to the Cloud. These advantages are encouraging, not only because of the cost
reduction in software licensing, but also because of reduced IT spending and the obvious advantages
that standardization provides. Moreover, outsourcing these functions allows companies to focus on
their core competencies, leaving the management of their servers, applications and data in the hands
of companies specializing in this environment that can offer a continuous service.
Service providers feel an urgent need to transition their network architectures, computing, software
and service delivery models to better adapt to this new environment.
This is only possible if resources can be dynamically reconfigured to meet new service requests with
minimal human intervention.
Service providers should also be able to dynamically increase or decrease resource capacity with the
least possible effort, thanks to programmability, in order to respond to unpredictable demand curves;
nobody can predict which device, application or service will be the next big driver of bandwidth in
the cloud. Moreover, there are other critical issues, such as latency, security and speed of service,
which require a network that can prioritize and manage traffic reliably.
It is becoming increasingly clear that the traditional approach to service delivery cannot long
withstand the emerging innovations and their related applications. In this new era, service providers
need to adopt a totally different perspective and approach, replacing current access models with
intelligent, automated, service-oriented ones. Those who accept that their services must evolve,
adapting quickly to the innovation initiated by virtualization and Cloud Computing in order to
exploit its benefits, will succeed. Those who do not will face a number of difficulties.
"Virtualization Unlocks Cloud Computing" (13) is a phrase that captures what virtualization means
for Cloud Computing: the power of virtualization is the key enabling technology for cloud
environments. In summary, virtualization is a step toward internal Cloud Computing. It must be clear
that virtualization is not part of Cloud Computing; it is a technology that enables Cloud Computing.
Cloud means "outsourcing of IT technology". It is a model based on an elastic infrastructure that can
scale up or down according to demand, and here virtualization plays an important role.
There are at least five ways in which virtualization opens the door to Cloud Computing and pushes
organizations that adopt it to move faster:
- It enables economies of scale.
- It decouples users from the implementation.
- It provides speed, flexibility and agility.
- It breaks traditional software pricing and licensing.
- It enables and motivates chargeback.
So the evolution is: first virtualization, then the Private Cloud, and now the Cloud Computing era
is beginning.
3.3 Cloud Middleware
This section describes some Cloud Middleware whose functionalities are comparable to EMOTIVE.
3.3.1 OpenNebula
OpenNebula is a middleware to easily build any type of Cloud, designed to be integrated with any
networking and storage solution. It has a big open-source community led by UCM, and it is fully
open-source cloud software, not open core.
OpenNebula can transform any data center into a flexible and agile virtual infrastructure which
dynamically adapts to the changing demands of the service workload.
OpenNebula manages storage, network and virtualization technologies to enable dynamic placement on
distributed infrastructures, combining both data center resources and remote cloud resources
according to allocation policies. It is flexible and extensible, with excellent performance and
scalability for managing large numbers of virtual machines. Figure 6 shows the architecture of
OpenNebula.
It can manage private, public or hybrid Cloud solutions: private Clouds with Xen, KVM and VMware;
public Clouds supporting the EC2 Query, OGF OCCI, Sunstone and vCloud APIs; and hybrid Clouds with
Amazon EC2 and other providers through Deltacloud.
3.3.2 Eucalyptus
Eucalyptus (Elastic Utility Computing Architecture for Linking Your Programs To Useful Systems) is
an open-source solution that originates from an NSF-funded research project at the University of
California, Santa Barbara, primarily as a tool for cloud-computing research. Eucalyptus Cloud (14) (15)
is infrastructure software that enables enterprises and government agencies to establish their own
private cloud environments on clusters, making more efficient use of their computing capacity and
thus increasing productivity and innovation.
The main feature of Eucalyptus is its compatibility with Amazon's EC2 interface, but the
infrastructure is designed to support multiple client-side interfaces. Eucalyptus is implemented
using commonly available Linux tools and basic Web-service technologies, making it easy to install
and maintain.
Feature highlights:
- Interface compatibility with EC2 (both Web service and Query interfaces)
- Management of environments with multiple hypervisors (Xen, KVM, vSphere, ESX, ESXi) under one
management console
- Stand-alone RPMs for non-Rocks RPM-based systems
- Secure internal communication using SOAP with WS-Security
- Overlay functionality requiring no modification to the target Linux environment
- Basic "Cloud Administrator" tools for system management and user accounting
- Advanced storage integration (iSCSI, SAN, NAS) to easily connect and manage existing storage
systems from within the Eucalyptus cloud
- The ability to configure multiple clusters, each with private internal network addresses, into a
single Cloud
- Sophisticated user, group and role management allowing precise control of resources within a
private Cloud
Figure 6 – ONE Architecture
Eucalyptus Architecture: Web Service Cloud
The main Eucalyptus components are:
- Cloud Controller (CLC): responsible for exposing and managing the underlying virtualized
resources (server machines, network and storage) via user-facing APIs. The CLC uses an Amazon EC2
API and a Web-based user interface.
- Cluster Controller (CC): controls the execution of virtual machines (VMs) running on the nodes
and manages the virtual networking between VMs and between VMs and external users.
- Node Controller (NC): controls VM activities through the functionality of a hypervisor, including
the execution, inspection and termination of VM instances.
3.3.3 OpenStack
At present, people are talking about a new middleware developed by a large number of important
companies to create innovative, open-source Cloud Computing software for building reliable cloud
infrastructure. This middleware, called OpenStack, is a collaborative software project among several
big players in the Cloud Computing space, designed to create freely available code, badly needed
standards, and common ground for the benefit of both Cloud providers and Cloud customers. The
open-source software model has been proven to promote the standards and interoperability critical to
the success of the industry; the explosive growth of the Internet can be attributed to open,
universal standards like HTTP and HTML.
OpenStack is an initiative for the definition of an open architecture for IaaS Cloud Computing, and
an open-source project that currently comprises three components:
- OpenStack Compute: for large-scale deployments of automatically provisioned virtual compute
instances.
- OpenStack Object Storage: for large-scale, redundant storage of static objects.
- OpenStack Image Service: provides discovery, registration and delivery services for virtual disk
images.
OpenStack is in alpha and is immature at the moment in comparison to OpenNebula and Eucalyptus.
OpenNebula, for example, is maturing, proven, and works quite well. OpenStack, on the other hand,
has a big community with a large number of developers and companies behind it, so I have no doubt it
will become a very nice piece of software in the future. But putting it into production today is too
hard, because it is an alpha version and not even close to feature-complete.
Figure 7 – Eucalyptus Architecture
3.4 Interoperability in the Cloud
This chapter discusses the state of the art in creating an open standard for Cloud Computing.
Many people in the industry believe it is critically important for the Cloud to be open, and share
concerns about the proprietary nature of the leading Cloud platforms. In fact, there are already a
few projects focused on the goal of a truly open-source Cloud with mass adoption.
However, present Cloud offerings have not followed this trend and remain largely proprietary. No one
benefits from a fractured landscape of closed and incompatible Clouds where migration is difficult
and true Cloud transparency is impossible.
Nowadays the solution for interoperating between Clouds is to use Web Services. For example, Amazon
EC2 has a web service interface to manage its own virtual machines, VMware has the vCloud interface,
and other products have their own web service interfaces. The problem is that these interfaces are
proprietary and tied to their own Clouds. Eucalyptus Systems considers the AWS (Amazon Web Services)
API the de facto standard for the industry because of its popularity: Eucalyptus Cloud is
open-source virtualization middleware, but it uses the Amazon EC2 interface.
On the other hand, OpenNebula proposes an open-source interface called OCCI that is very easy to use
and to extend. OCCI was originally initiated by UCM (Complutense University of Madrid), and the Open
Cloud Computing Interface now comprises a set of open, community-led specifications delivered
through the Open Grid Forum.
TID (Telefónica I+D) has likewise proposed TCloud, which is based on the vCloud API specification
0.8. Red Hat's Deltacloud (δ-cloud) project defines a web service API for interacting with Cloud
service providers, and with resources in those Clouds, in a unified manner. Libcloud is a pure
Python client library for interacting with many of the popular Cloud server providers.
API wars have usually been a crucial strategic tool for controlling technology platforms and their
associated markets. I do not know whether δ-cloud or OCCI will be the reference API in the coming
years, but I have a clear conviction: an open standard API should emerge. Both have very good
fundamentals and are present in many discussion forums.
During the development of this project, we chose to use the OCCI API. We now see that δ-cloud is
gaining a lot of relevance, so we will keep an eye on the evolution of δ-cloud and Libcloud.
We would like to highlight that our OCCI API is used in the NUBA, VENUS-C and OPTIMIS projects as
well as in our own project, EMOTIVE. We presented our experience adopting the OCCI API at the 2010
OGF30 OCCI conference in Belgium (see Chapter 5).
The δ-cloud domain is not yet as rich as OCCI's, because it cannot manage storage and network yet,
but it puts a strong emphasis on virtual machine templates (in δ-cloud terms, 'hardware profiles').
Architecturally it follows a slightly different approach: unlike OCCI, which focuses on the
communication layer, δ-cloud takes a two-layered approach and focuses on the API layer built on top
of the communication layer. It is also not yet language-agnostic: currently it only supports Ruby.
The whole architectural concept reminds me of libvirt (another Red Hat library).
Since Red Hat recently submitted the library to Apache, it might gain popularity.
Another interesting point is that OpenNebula's δ-cloud driver is built on top of OCCI.
In conclusion, OCCI is supported by the Open Grid Forum (OGF). There is a great deal of duplicated
work going on in Cloud standards: the Cloud Computing Interoperability Forum, the Open Grid Forum
(OGF), the Open Cloud Computing Interface Working Group, the DMTF Open Cloud Standards Incubator,
the GoGrid API (CC licensed), the Sun Cloud API (CC licensed), and Amazon Web Services as the de
facto standard (as Eucalyptus and Nimbus have adopted it).
I believe that the next step is to create another standards working group that sits on top of them
all. OCCI and δ-cloud are in the front line.
3.4.1 The OCCI API
The initial goal of the OCCI API is to provide an extensible interface to Cloud infrastructure
services (IaaS). The OCCI API is a RESTful service, allowing the development of interoperable tools
for common tasks including deployment (create, control), autonomic scaling, and monitoring of Cloud
resources.
This API allows:
- Consumers to interact with Cloud computing infrastructure on an ad-hoc basis (e.g. deploy, start,
stop, restart)
- Integrators to offer advanced management services
- Aggregators to offer a single common interface to multiple providers
- Providers to offer a standard interface that is compatible with available tools
- Vendors of grids/Clouds to offer standard interfaces for dynamically scalable service delivery in
their products
OCCI has been designed to be as modular as possible to facilitate future extension.
The core protocol is completely generic, describing how to connect to a single entry point,
authenticate, search, and perform CRUD operations (Create, Retrieve, Update and Delete resources)
using existing standards including HTTP (plain text), TLS, OAuth, JSON and Atom/AtomPub, as well as
state control (start, stop, restart), billing, performance, etc. The scope of the specification is
all the high-level functionality required for the life-cycle management of virtual machines (or
workloads) running on virtualization technologies (or containers) supporting service elasticity.
Using a simplified service life-cycle model, it supports the most common life-cycle states offered
by Cloud providers.
Figure 8 - OCCI interaction
Figure 9 – OCCI schema
Simply by standardizing at this level, OCCI may well become the HTTP of Cloud Computing; there is a
good article on this comparison: "Is OCCI the HTTP of Cloud Computing?" (16)
RESTful web services
A RESTful Web Service offers the HTTP methods GET, PUT, POST and DELETE. The OCCI API is a RESTful
service and has methods associated with each resource type: Pool Resources (collections of elements
owned by a given user) and Entry Resources (single entries within a given collection).
- Pool Resources (PR): GET lists all the entry resources in that pool owned by the user; POST
creates a new entry resource; PUT and DELETE are not supported.
- Entry Resources (ER): GET lists the information associated with the resource; PUT updates the
resource (only supported by the COMPUTE resource); DELETE deletes the resource; POST is not
supported.
The XML format is used to represent COMPUTE, NETWORK and DISK resources, as well as collections of
them (Pool Resources, PRs).
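The verb/resource mapping above can be sketched as a small helper that maps a high-level action onto
an HTTP method and URI. The `/compute` path and the helper itself are illustrative assumptions for a
hypothetical client, not text from the OCCI specification:

```python
def occi_request(action, resource_id=None, body=None):
    """Map a high-level action onto the OCCI verb/URI scheme sketched above.

    Pool resources (e.g. /compute) accept GET (list) and POST (create);
    entry resources (e.g. /compute/123) accept GET, PUT (COMPUTE only)
    and DELETE. Returns an (HTTP method, path, body) tuple.
    """
    pool = "/compute"
    if action == "list":
        return ("GET", pool, None)
    if action == "create":
        return ("POST", pool, body)
    entry = "%s/%s" % (pool, resource_id)
    if action == "show":
        return ("GET", entry, None)
    if action == "update":
        return ("PUT", entry, body)
    if action == "delete":
        return ("DELETE", entry, None)
    raise ValueError("unsupported action: %s" % action)
```

A real client would pass the resulting tuple to an HTTPS connection, as recommended in the
Authentication subsection below.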
Figure 11 – OCCI (Compute, Network and Storage)
Figure 10 – OCCI Simple Life Cycle
POOL RESOURCE
References a URI for each ER. Example:
<COMPUTES>
  <COMPUTE href="http://www.opennebula.org/compute/234"/>
  <COMPUTE href="http://www.opennebula.org/compute/432"/>
  <COMPUTE href="http://www.opennebula.org/compute/123"/>
</COMPUTES>
NETWORK
- ID, the uuid of the network
- NAME, describing the network
- ADDRESS, of the network
- SIZE, of the network, defaults to C
Example:
<NETWORK>
  <ID>123</ID>
  <NAME>Blue Network</NAME>
  <ADDRESS>192.168.0.1</ADDRESS>
  <SIZE>C</SIZE>
</NETWORK>
STORAGE
- ID, the uuid of the image
- NAME, describing the image
- SIZE, of the image in MBs
- URL, pointer to the original image
Example:
<DISK>
  <ID>123</ID>
  <NAME>Ubuntu 9.04 LAMP</NAME>
  <SIZE>2048</SIZE>
  <URL>file:///images/ubuntu/jaunty.img</URL>
</DISK>
COMPUTE RESOURCE
The compute element defines a virtual machine by specifying its configuration attributes. It is
more complex than the resources described above:
- ID, the uuid of the virtual machine.
- NAME, describing the virtual machine.
- TYPE, a COMPUTE type specifying a CPU and memory capacity; valid types are small, medium and
large.
- STATE, the state of the COMPUTE.
- DISKS, the block devices attached to the virtual machine (DISK, SWAP, FS).
- NICS, the network interfaces, defined as a list of NIC elements (UUID, IP...).
Example:
<COMPUTE>
  <ID>123AF</ID>
  <NAME>Web Server</NAME>
  <TYPE>small</TYPE>
  <STATE>running</STATE>
  <DISKS>
    <DISK image="234" dev="sda1"/>
    <SWAP size="1024" dev="sda2"/>
    <FS size="1024" format="ext3" dev="sda3"/>
  </DISKS>
  <NICS>
    <NIC network="4567f" ip="19.12.1.1"/>
    <NIC network="0"/>
  </NICS>
</COMPUTE>
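A COMPUTE representation like the one above can be consumed with Python's standard XML library. This
is a minimal sketch for the example document only (attributes are quoted here to make the fragment
well-formed XML); a real OCCI client would have to handle the full schema:

```python
import xml.etree.ElementTree as ET

# The COMPUTE example from the text, as well-formed XML.
COMPUTE_XML = """
<COMPUTE>
  <ID>123AF</ID>
  <NAME>Web Server</NAME>
  <TYPE>small</TYPE>
  <STATE>running</STATE>
  <DISKS>
    <DISK image="234" dev="sda1"/>
    <SWAP size="1024" dev="sda2"/>
  </DISKS>
  <NICS>
    <NIC network="4567f" ip="19.12.1.1"/>
  </NICS>
</COMPUTE>
"""

def parse_compute(xml_text):
    """Extract the COMPUTE attributes described in the list above."""
    root = ET.fromstring(xml_text)
    return {
        "id": root.findtext("ID"),
        "name": root.findtext("NAME"),
        "type": root.findtext("TYPE"),
        "state": root.findtext("STATE"),
        "disks": [d.tag for d in root.find("DISKS")],     # DISK, SWAP, FS
        "nics": [n.get("ip") for n in root.find("NICS")],  # may be None
    }
```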
Return Codes
The OCCI Cloud API uses the following subset of HTTP status codes:
- 2xx, success: 200 OK (the request has succeeded; for GET, an entity corresponding to the requested
resource is sent in the response; for POST, an entity containing the result of the action), 201
Created, 202 Accepted, 204 No Content.
- 4xx, client errors, for cases in which the client seems to have erred: 400 Bad Request, 401
Unauthorized, 403 Forbidden, 404 Not Found...
- 5xx, server errors, when the server failed to fulfill an apparently valid request: 500 Internal
Server Error, 501 Not Implemented...
Authentication
Authentication follows the REST philosophy. It is recommended that server-client communication be
performed over HTTPS to avoid sending user authentication information in plain text.
3.4.2 TCloud
The TCloud API is based on the vCloud API specification 0.8. In essence, TCloud maintains
compatibility with the main operations and data types defined in vCloud, and provides extensions for
advanced Cloud Computing management capabilities, including additional shared storage for service
data, network element provisioning, monitoring, snapshot management, and so on. The TCloud API is
focused on adding network intelligence, reliability and security features to Cloud Computing. The
goal of the initiative is to provide the power of Cloud Computing with the flexibility allowed by
virtualization.
Telefónica has released the TCloud API for Cloud Computing interoperability and submitted it to the
Distributed Management Task Force (DMTF). This shows Telefónica's commitment to promoting Cloud
interoperability and standardization. The release is good for telecommunications companies in
general, as it proves that they are capable of defining the technology on which their services are
based.
3.5 Green Computing
Nowadays the concept of Green Computing (or Green IT) (17) (18) is gaining relevance due to
widespread concerns about issues such as climate change, recycling and biodegradability. In this
chapter we present Green Computing focused on Cloud Computing and virtualization, and thus on energy
consumption, which implies a certain level of CO2 emissions.
It is estimated that the IT sector causes 2% of global CO2 emissions, and some articles claim that
the IT sector produces more carbon emissions than aviation. For example, information published in
the Daily Telegraph (19) claimed that two Google searches produce the same CO2 as boiling a kettle.
Another important fact to keep in mind is that the IT sector is growing constantly, which means its
energy consumption, and hence its CO2 emissions, is growing as well. According to Greenpeace, the
world's data centers hosting Cloud Computing services will triple their emissions to the atmosphere
by 2020 (20). There are currently several European research projects aiming to improve the
consumption-performance ratio, especially for data centers and supercomputers; this master thesis
contributes research on this subject.
Most of the pollution produced by Cloud Computing is caused by the world's data centers. Power
usage effectiveness (PUE) is a measure of how efficiently a data center uses its power;
specifically, how much of the power is actually used by the computing equipment (in contrast to
cooling and other overhead). The overall average is 2, which means that for every watt used to run
the equipment, another watt is spent on keeping it cool.
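The PUE calculation itself is simple: total facility power divided by the power drawn by the IT
equipment alone. A minimal sketch (the numbers in the comment are illustrative):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power / IT equipment power.

    PUE = 1.0 would mean every watt goes to computing; the average of 2
    cited above means one watt of cooling/overhead per watt of computing.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1000 kW overall for 500 kW of IT load has PUE = 2.0.
```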
Virtualization technology enables the consolidation of multiple workloads, which increases
efficiency and usually saves energy on IT equipment, for example by using a smaller number of
machines. Nevertheless, virtualization also introduces some overheads, and as a result the rest of
the data center may run less efficiently. VM creation and VM migration are the most notable
overheads in virtualization; moreover, virtualization adds a new layer, the hypervisor, which costs
a little performance. Still, the virtualization overhead is insignificant compared to its
advantages.
BSC and UPC use EMOTIVE to research green virtualization. Virtualization favors high-density data
centers: the remaining servers run at higher power consumption levels, but there are different ways
to deal with this. Energy-aware Scheduling in Virtualized Data Centers (21) is one example. It
presents a scheduler that uses a mathematical algorithm to manage virtual machines. Basically, the
scheduler consolidates the virtual machines onto the minimum number of physical machines, migrating
them so as to use as much as possible of each physical machine's capacity; unused machines are shut
down. If a physical machine is left idle, without any virtual machine, the scheduler shuts that
server down to consume less energy.
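The consolidation idea can be illustrated with a toy first-fit packing sketch: place VMs on the
fewest hosts and report which hosts are left idle (shutdown candidates). This is an assumption-laden
simplification on CPU demand only, not the actual algorithm of (21):

```python
def consolidate(vm_loads, host_capacity, n_hosts):
    """Pack VMs onto the fewest hosts (first-fit, largest VM first).

    vm_loads: dict of VM name -> CPU demand (fraction of one host).
    Returns (placement dict VM -> host index, list of idle host indices);
    idle hosts are the ones the scheduler could shut down.
    """
    hosts = [0.0] * n_hosts  # current CPU load per host
    placement = {}
    for vm, load in sorted(vm_loads.items(), key=lambda x: -x[1]):
        for h in range(n_hosts):
            if hosts[h] + load <= host_capacity:
                hosts[h] += load
                placement[vm] = h
                break
        else:
            raise RuntimeError("no capacity for %s" % vm)
    idle = [h for h, load in enumerate(hosts) if load == 0]
    return placement, idle
```

Real energy-aware schedulers also weigh migration cost, memory, SLAs and the power curve of each
host; this sketch only shows why consolidation frees whole machines.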
Other researchers have presented other solutions, but the main idea is the same; the differences lie
in the scheduling algorithm. Nowadays commercial middlewares use generic schedulers such as
backfilling, which pack the VMs onto the cluster nodes to reduce VM fragmentation and use the
minimum number of physical servers.
Another aspect to consider is the impact of virtualization on the data center's physical
infrastructure (22). The power savings of virtualization can produce unexpected results, and you may
need to upgrade the power and cooling infrastructure to take advantage of the savings opportunity
that virtualization offers. Power and cooling must be considered when virtualizing: you may need
additional cooling in some physical areas to improve power efficiency, for example by cooling some
areas dynamically depending on the load. Care should be taken to examine the impact on power and
cooling, because every data center virtualization is different.
3.5.1 The greening of the Cloud
Traditional On-Premise vs Cloud Computing:
The business motivation for Cloud Computing is that on-demand resources promise a more flexible,
dynamic, timely and green solution than traditional on-premise computing.
Therefore, we must bear in mind that migrating certain parts, or all, of a classic on-premise IT
environment to the Cloud can provide scalability, reduce the costs of physical growth, reduce costs
overall, and reduce energy use (8). On-premise computing needs an initial capital investment,
maintenance, and the costs of future updates. In contrast, the Cloud does not need a significant
initial cost, so it has a lower initial investment, because the Cloud offers elasticity and a
pay-as-you-go cost model.
It is interesting to find out which solution is better; but it is even more interesting to use both
solutions together, keeping the best features of each.
The paper (23) offers an interesting analysis of cost and performance between traditional on-premise
computing and Cloud Computing, classifying the various types of costs as CapEx (capital
expenditures) or OpEx (operational expenditures) depending on the attribute analyzed
(infrastructure, business, physical resources, network, performance, energy, budget, etc.). In
short, Cloud Computing shifts costs toward OpEx, while traditional on-premise computing is dominated
by CapEx.
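The CapEx/OpEx contrast can be made concrete with a deliberately simple cost sketch: amortized
on-premise cost versus pay-per-use cloud cost per month. All figures are hypothetical illustrations,
not data from (23):

```python
def monthly_on_premise(capex, lifetime_months, monthly_opex):
    """On-premise: CapEx amortized over the hardware lifetime, plus fixed OpEx."""
    return capex / lifetime_months + monthly_opex

def monthly_cloud(hours_used, price_per_hour):
    """Cloud: pure OpEx, pay-as-you-go per instance-hour."""
    return hours_used * price_per_hour

# Hypothetical example: a 36,000-unit server amortized over 3 years plus
# 200/month of operations, versus 300 instance-hours at 0.5/hour.
on_prem = monthly_on_premise(36000, 36, 200)   # 1200.0 per month
cloud = monthly_cloud(300, 0.5)                # 150.0 per month
```

At low utilization the cloud wins; with constant full utilization the amortized on-premise cost can
come out ahead, which is the trade-off discussed below.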
Nowadays on-premise infrastructure generally runs at low utilization, sometimes as low as 5 to 10
percent on average. Data centers that use Cloud technologies are more efficient than traditional
data centers. Energy is usually lost through server underutilization, because most of the time these
servers are simply idle; in a Cloud environment, by contrast, the system is managed to run at the
highest efficiency. In addition, data center planning allows better power utilization, whereas
traditional data centers can have cooling problems and run out of space for more servers. There is
also a consortium of Cloud providers who assure that their members optimize their data centers to
minimize power consumption.
An on-premise solution can be better whenever we have constant full utilization of the IT
infrastructure. This often happens in large companies that offer constant services around the world.
For example, at its start Facebook used Amazon services, but eventually, due to the large growth of
its business, Facebook built its own data center, adapted to its business needs.
Cloud solutions are highly recommended in most areas. An important factor to consider, however, is
that network latency negatively affects the response time of Cloud solutions; traditional on-premise
computing usually has better network latency and therefore yields better response times. Many
companies also prefer to use on-premise infrastructure for data privacy and protection reasons, but
in this project we do not focus on Cloud security.
In conclusion, it is necessary to analyze the CapEx/OpEx balance and the energy consumption of each
particular case. As we have said, what we study is energy consumption, and in the next chapter we
show our hybrid architecture solution and how it reduces energy consumption without losing too much
performance.
Global vision of Green Computing:
We should have a global vision of both sides of the coin: Cloud Computing and Green Computing. The
relationship between the Cloud and eco-efficient IT is well known (24).
End-user adoption of the Cloud suggests a turning point in 2011: Cloud Computing utilization is
increasing, and therefore so is power consumption in the Cloud.
There are several things to consider to achieve good eco-efficiency in IT:
- Each year the number of data centers grows, and with it the power consumption.
- New legislation penalizes excessive energy consumption, so new measures should be taken into
account.
- Expensive equipment uses more energy than it truly needs.
Cloud Computing alleviates these issues:
- It allows better management of internal resource use (sharing resources or virtualizing).
- It reduces peak load by using schedulers to manage rapid provisioning and deprovisioning.
- It reduces unnecessary use of resources at some points.
- When hardware (compute or storage) limits are reached, it allows outsourcing.
Green Clouds? Not all Cloud Computing cases are greener than in-house IT:
- Energy efficiency: providers aim to have efficient operations.
- Cloud providers sometimes overlook reporting metrics on server resource consumption.
- Recycling: service providers would have to detail their IT recycling policies.
Key points in adoption and incentives: in 2008 many companies, such as Yahoo, Google and Microsoft,
struggled to position themselves as the greenest data center operators. But the thing is, it is not
that easy to measure which data center is more efficient; in fact, several studies (commercial and
non-commercial) have emerged to try to find a common standard measurement.
Follow the Moon:
Before finishing this chapter, we present a very interesting technique. The "follow the moon"
concept means reducing energy consumption and expenditure by taking advantage of nighttime
temperatures and lower electricity rates, having computing resources chase the day/night boundary,
i.e., migrating to data centers where it is nighttime. After all, it is always night somewhere in
the world. These techniques have certain limitations, however: they require similar configurations
and visibility between sites, and the bandwidth latency increases. This theory must therefore be
carefully studied, as many companies have done.
The key technologies that make it possible to follow the moon are virtualization, modularization,
consolidation and appropriate outsourcing. These are key strategies for achieving eco-efficient IT.
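A toy sketch of the site-selection step behind "follow the moon": given the UTC offsets of candidate
data centers, pick those where it is currently night. The 08:00-20:00 daytime window, the data
center names and the use of fixed offsets (ignoring DST) are all simplifying assumptions:

```python
def night_datacenters(utc_hour, dc_offsets):
    """Return the data centers where it is night (local hour outside 08-20).

    utc_hour: current hour in UTC (0-23).
    dc_offsets: dict of data center name -> UTC offset in hours.
    """
    selected = []
    for name, offset in dc_offsets.items():
        local = (utc_hour + offset) % 24
        if local < 8 or local >= 20:
            selected.append(name)
    return selected

# Hypothetical sites: at 12:00 UTC it is night in Virginia (07:00 local)
# and Tokyo (21:00 local), but daytime in Barcelona (13:00 local).
sites = {"barcelona": 1, "virginia": -5, "tokyo": 9}
```

A real implementation would combine this with migration cost, electricity prices and bandwidth
latency before moving any workload.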
4 Contribution
4.1 EMOTIVE ORIGINAL
EMOTIVE (Elastic Management of Tasks in Virtualized Environments) (25) is the Barcelona
Supercomputing Center (BSC) and Barcelona Tech (UPC) IaaS open-source solution for Cloud Computing,
which results from BSC's previous experience in European projects such as BREIN (1) and SORMA (26).
EMOTIVE provides users with elastic, fully customized virtual environments (supporting the Xen
hypervisor) in which to execute services. Further, it simplifies the development of new middleware
services for Cloud systems by supporting resource allocation and monitoring, data management, live
migration, and checkpoints.
EMOTIVE middleware can be categorized as an IaaS solution, since it provides the users with
virtualized environments where they can execute their tasks without any extra effort. These VMs,
which aim to fulfill the user requirements in terms of software and system capabilities, are
transparently managed by EMOTIVE in order to exploit the provider‟s resources. EMOTIVE can
easily be extended with multiple scheduling policies in order to manage the VMs using different
criteria.
Figure 12 - EMOTIVE Cloud architecture
Figure 12 illustrates the EMOTIVE Cloud architecture, which is mainly composed of three modular
layers: the fabrics infrastructure, the node management (VRMM), and the global Scheduler. The VRMM
component is responsible for managing the life cycle of the virtual machines (creation, destruction,
task submission, etc.). The Scheduler layer is responsible for distributing tasks and virtual
machines among physical nodes; it includes support for efficient virtual machine migration,
checkpoint management, and system configuration and organization across different virtual
environments.
The Virtualization Fabrics layer comprises the physical resources where the VMs will run. This layer
wraps the virtualized resources and offers them to the upper layers. EMOTIVE makes use of the Xen
API, enabling it to use Xen virtualization technology. Furthermore, it implements a distributed
shared file system (DFS) that supports efficient VM creation, migration (moving VMs across the
provider's hosts without stopping execution), and checkpointing (resuming VM execution upon hardware
failure). This file system also supports a global repository where users can upload the input files
needed by the applications (i.e. data stage-in) and retrieve the resulting ones (i.e. data
stage-out).
The data infrastructure offers distributed storage supporting virtualization capabilities such as
migration and checkpointing, and it can use different kinds of storage. It distributes the data
among the cluster nodes and uses NFS to make the data of every node available from the other nodes.
Thanks to this technique, VMs can be moved between nodes without losing connectivity. This
capability allows new approaches such as consolidating the global system or giving more resources to
a given application when a node is not able to do so locally.
In addition, the data infrastructure allows each VM to access the data required by the user through
a shared repository, also distributed among the nodes, and to store data in the system for later
reuse by other VMs.
The Virtual Machine Manager layer is implemented by means of the Virtualized Resource Management and
Monitoring (VRMM) component. This layer comprises all the local resource management decisions (i.e.
within a single node): it is in charge of managing the physical resources of a node and distributing
them among all the VMs running on that node. In addition, it continuously monitors the resource
usage of these services and the fulfillment of their SLAs. If any SLA violation is detected, an
adaptation process requesting more resources from the provider is started, first locally in each
node, then globally in the provider, and finally with other providers. This layer is also in charge
of creating and maintaining the whole virtual machine life cycle (create, destroy, migrate, etc.)
and of executing tasks described by means of a JSDL file (27).
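The escalating adaptation process described above (local node, then the whole provider, then
federated providers) can be sketched as an ordered chain of fallbacks. This is a conceptual
illustration only; the function names are hypothetical and not the actual EMOTIVE API:

```python
def adapt_on_violation(requesters):
    """Resolve an SLA violation by escalating through resource scopes.

    requesters: ordered list of (scope_name, request_fn) pairs, e.g.
    [("local", ...), ("global", ...), ("federated", ...)], where each
    request_fn returns True if it managed to allocate the missing resources.
    Returns the scope that resolved the violation, or None if none could.
    """
    for scope, request in requesters:
        if request():
            return scope  # stop escalating at the first scope that succeeds
    return None
```

For example, if the local node is full but another host in the provider has spare capacity, the
chain stops at the "global" scope without involving external providers.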
In addition, the VtM comprises all the local resource management decisions (i.e. within a single
host): it is in charge of managing the physical resources of a host and dynamically distributing
them among all the VMs running on that host in order to fulfill their respective Service Level
Agreements (SLAs). EMOTIVE allows specifying fine-grained resource-level guarantees in the SLA (e.g.
the amount of computing power allocated to a given VM over time), which are clearly superior to the
availability guarantees supported by common providers such as Amazon EC2 (5).
Furthermore, by means of the VtME component, EMOTIVE is also capable of using external
resources, such as those of public Cloud providers (e.g. Amazon EC2). This feature allows an
EMOTIVE-enabled provider to be involved in a Cloud federation (insourcing/outsourcing) and to create
public, private, and hybrid clouds.
On the other hand, the Resource Monitor (RM) component continuously monitors the status of tasks and
resources. This status is stored in a historical database, but it can also be used to assess the fulfillment
of the SLAs of the applications. If any SLA violation is detected, an adaptation process for requesting
more resources from the provider is started, first locally in each host, then globally in the provider, and
finally with other providers.
Finally, the Virtual Machine Scheduler layer comprises all the global VM placement decisions, both
among different providers in a Cloud federation and among different hosts in a single provider. This layer is in
charge of deciding where a VM will be executed and of managing its location during the execution (e.g.
migration of VMs across the provider's hosts, cancellation of VMs, resumption of VM execution from a
checkpoint upon hardware failure, etc.). As a rule of thumb, the Scheduler tries to consolidate the VMs
in the provider's physical resources to optimize their use, while allocating enough resources to fulfill
the agreed SLAs.
Moreover, this framework allows multiple schedulers with different policies and capabilities, such as
machine learning, prediction, economic models, fault tolerance, semantic description, or SLA enforcement. In
this sense, it can use a simplistic Round Robin or a consolidation-aware scheduler such as Backfilling.
This is achieved thanks to a common interface (a SOAP Web Service) that allows new schedulers with
different features and policies to be developed.
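As an illustration of this pluggable-policy idea, a Round Robin placement policy behind a common scheduler interface could look like the following minimal sketch (class and method names are hypothetical, not EMOTIVE's actual API):

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: a Round Robin placement policy behind a
// common scheduler interface, as one of several interchangeable policies.
interface PlacementPolicy {
    String placeVM(String vmId);
}

class RoundRobinPolicy implements PlacementPolicy {
    private final List<String> nodes;
    private int next = 0;

    RoundRobinPolicy(List<String> nodes) {
        this.nodes = nodes;
    }

    @Override
    public String placeVM(String vmId) {
        // Cycle through the provider's nodes regardless of their current load;
        // a consolidation-aware policy would instead pack VMs onto fewer nodes.
        String node = nodes.get(next);
        next = (next + 1) % nodes.size();
        return node;
    }
}

public class SchedulerDemo {
    public static void main(String[] args) {
        PlacementPolicy rr = new RoundRobinPolicy(Arrays.asList("node1", "node2", "node3"));
        for (String vm : Arrays.asList("vm1", "vm2", "vm3", "vm4")) {
            System.out.println(vm + " -> " + rr.placeVM(vm));
        }
    }
}
```

A consolidation policy such as Backfilling would implement the same interface, which is what makes the schedulers interchangeable behind the common Web Service facade.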
In conclusion, the main capabilities supported by EMOTIVE are summarized herewith:
· VM creation on demand, according to application requirements.
· Monitoring of task and resource status, including historical information.
· Consolidation of VMs in the provider's physical resources to optimize their use.
· VM placement and fine-grain dynamic resource distribution based on Service Level Agreements
(SLAs).
· Efficient live migration of VMs across provider nodes.
· A checkpoint/recovery system to resume task execution upon hardware failure (thus achieving
fault tolerance).
· Ability to create additional VMs on external clouds when the local provider is overloaded.
· Data management services for supporting VM creation and the migration and checkpoint
mechanisms, and also to allow users to provide input (i.e. data stage-in) and retrieve output (i.e. data
stage-out).
4.2 The Evolution of EMOTIVE
4.2.1 Introduction
In this chapter, we will describe the main new functionalities added to EMOTIVE in this master thesis
and how they are implemented.
We can summarize these new features of EMOTIVE in the following schema:
· New virtualization support through the Libvirt API:
- Replacement of the Xen API with the Libvirt (virsh) API.
- Support for the KVM and VirtualBox hypervisors.
- Contextualization and easy installation; virtual image management and creation from scratch.
· New modular architecture based on RESTful Web Services, which makes EMOTIVE easier to extend:
- Compatibility with the OCCI API thanks to the new RESTful architecture, enabling interoperability in the Cloud.
- GUI adaptation to the new RESTful architecture.
- Initial OVF support (alpha version).
· New EMOTIVE functionalities:
- VLAN network management.
- Easy management and creation of virtual networks (VPN support).
· EMOTIVE evolution towards being greener:
- EMOTIVE in new hybrid architectures.
- EMOTIVE adaptation to the NUBA national project.
4.2.2 New Modular Architecture
One of the main features of EMOTIVE is its modular and distributed architecture. EMOTIVE was
originally designed using a distributed SOAP architecture, but it now uses a RESTful Web Services
architecture. This architecture allows using only some parts of EMOTIVE and supports agile and
dynamic construction of new Cloud environments.
Its Web Service REST interface makes EMOTIVE highly interoperable with other Cloud solutions. In
particular, we encourage using the Open Cloud Computing Interface (OCCI) (16), which allows
EMOTIVE to be interoperable with other Cloud middleware supporting this interface.
In addition to OCCI API compatibility, EMOTIVE can use the external Amazon EC2 Cloud via the
EC2 Web Service API, and it can also interpret OVF files (28). The capability to use external
resources from Amazon EC2 allows EMOTIVE to be involved in a Cloud federation (insourcing/outsourcing) and
to create public, private, and hybrid clouds. The hybrid option allows the resource capacity to be
dynamically increased or decreased with the least possible effort, thanks to the scheduler, in order to
respond to unpredictable demand curves.
Figure 13 - Internal EMOTIVE Architecture
With the new RESTful architecture we needed to adapt every EMOTIVE module [Figure 13]: the GUI
module, the Scheduler modules, and the VtM modules. Chapter 4.5.2 presents the full details of the
new architecture and of the new interfaces used in EMOTIVE, and Chapter 4.3 describes the evolution
of the EMOTIVE API as we replaced the Xen API with the Libvirt API.
EMOTIVE is able to support different schedulers. All EMOTIVE Schedulers use Web Service
communications to receive external client requests or to send requests to the Virtual Machine Manager (VtM)
component to manage virtual machines, submit jobs, or perform other functions. A client request is a
RESTful Web Service call using the standard HTTP operations (GET, POST, PUT, DELETE). The Scheduler
client uses this RESTful interface to request work. This RESTful interface is very easy to
extend, which is a powerful feature.
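To illustrate the request flow, the following self-contained sketch stands up a stub Scheduler endpoint with the JDK's built-in HTTP server and issues RESTful requests against it. The /vm resource path and the payloads are invented for illustration only; they are not EMOTIVE's real resources:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.Scanner;

public class RestSketch {

    // Stand up a stub "Scheduler" endpoint; path and responses are invented.
    public static HttpServer startStubScheduler(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/vm", exchange -> {
            // GET lists VMs; POST/PUT/DELETE would create/update/destroy them.
            String method = exchange.getRequestMethod();
            byte[] body = ("GET".equals(method) ? "vm01,vm02" : "ok").getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    // Issue a plain HTTP request with one of the standard REST verbs.
    public static String request(String method, String url) throws Exception {
        HttpURLConnection c = (HttpURLConnection) new URL(url).openConnection();
        c.setRequestMethod(method);
        try (Scanner s = new Scanner(c.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = startStubScheduler(8099);
        System.out.println(request("GET", "http://localhost:8099/vm"));
        server.stop(0);
    }
}
```

The same four verbs map directly onto the create/show/update/destroy operations that the Scheduler exposes to clients.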
4.2.3 Green IT evolution
In this master thesis, we analyze the energy impact of running virtualized environments on different
kinds of computer architectures, such as Xeon, Atom, and hybrid approaches. We also analyze the
different kinds of hypervisors that EMOTIVE can now run thanks to the newly added features.
Additionally, we exploit EMOTIVE's capability of supporting multiple scheduling policies in order to
compare the energy impact of some of them. These include a simplistic Scheduler that uses Round
Robin to balance VMs among nodes and a backfilling-like Scheduler that performs VM consolidation to
save energy. The latter is described in previous work (29) (21). It basically uses a backfilling
scheduler to consolidate VMs in data center nodes according to multiple facets while optimizing the
provider's profit. In particular, it considers energy efficiency, virtualization overheads, fault tolerance,
and SLA violation penalties, while adding the ability to outsource resources to external providers. This
scheduler saves energy by shutting down machines that are idle. In addition, we also compare the
power consumption of this EMOTIVE Scheduler with the generic scheduler solutions found in other
middleware such as OpenNebula.
We also added new features to simplify the preparation and creation of virtual machines. For
instance, we can create virtual networks (VLANs) and VPNs (with the SSL and PPTP protocols). With this
simple and dynamic network management we can set up dynamic Cloud environments on demand. We
can create VLANs or VPNs on demand at any instant, without needing to extend the
physical network infrastructure, because it is not necessary to buy and install new hardware. In this
sense, we can say that this feature has some ecological value.
4.3 Extended Virtualization support in EMOTIVE
We see that the EMOTIVE platform needs to expand and not depend only on Xen hypervisor. We want
it to be as generic as possible. Therefore we have redefined the architecture, rewritten the API and
made it compatible with Libvirt [Figure 13]. With this compatibility we have more possibilities to ex-
pand the virtualization architecture to use other hypervisors. So a part of XEN we extend to use EMO-
TIVE with KVM and Virtual Box virtualization thanks to the new API used (Libvirt), abandoning the
old API (Xen API). So now we replace Xen API by Libvirt API, therefore we create a new API plat-
form that uses Libvirt. To support the new hypervisors also we need to deploy the operating system
environment with the hypervisor and Libvirt installation and its configuration (25).
Apart from extending the number of hypervisors supported, Libvirt has other features, such as VLAN
management. We will talk about this in the next chapters.
We now describe how the Libvirt API is integrated in the EMOTIVE Cloud platform. The starting point
was the existing Xen monitoring API (XenMonitor.java), which has been completely rewritten and
adapted to Libvirt (VirtMonitor.java). This represents a significant step forward in monitoring the
platform, as it makes the platform compatible with most existing virtualization systems, such as Xen,
KVM, and VirtualBox. Figure 14 summarizes the mapping between the methods of XenMonitor.java
and VirtMonitor.java.
We chose the Java language to develop the new monitoring API because EMOTIVE is fully developed
in Java and bash shell. In the project we use the Eclipse software development environment, together
with other tools and plugins such as Maven and Subversion. Maven dynamically downloads Java
libraries and Maven plug-ins from one or more repositories, and Subversion (SVN) is the tool used by
EMOTIVE software developers to manage changes within their source code tree.
It is worth noting that EMOTIVE Cloud relies on Linux bash scripts at the low level of its
architecture to perform virtual machine contextualization (dynamically creating VM disk images
from scratch and configuring some of their values on demand). Libvirt can be used as a C API, as a Java
API, and in console mode.
EMOTIVE has evolved together with the Libvirt API: it started with Libvirt version 0.4.0
and now uses Libvirt 0.8.3.
It is important to note that the Libvirt project, like most of the technology used in this project (e.g. KVM, OCCI),
is still under development, and some methods the library offers are in beta and have bugs.
This technology is evolving and new functionality keeps appearing. The new architecture has been
refined and continues to evolve; this has required an extensive period of development and testing on the platform.
Note also that VirtMonitor lacks some of the methods that XenMonitor has [Figure
14]. However, EMOTIVE only uses a few basic methods, so no difference in EMOTIVE
usability was noticed after the API replacement. Equally, EMOTIVE could be extended with new
functionality using all the Java methods listed.
In conclusion, whereas EMOTIVE previously virtualized only with the Xen hypervisor, it can now
virtualize with Xen, KVM, and VirtualBox. Achieving this required changing the architecture and the Java
code, as well as the bash scripts used for virtual machine contextualization, so
that all three hypervisors are 100% compatible with the middleware. Note that it is very
hard to use VMware ESX in EMOTIVE even though Libvirt is compatible with VMware ESX: VMware
uses a proprietary architecture, and supporting it would require many changes in the EMOTIVE architecture.
Libvirt communicates with VMware remotely, in contrast with the other hypervisors, which
use local requests and are therefore easier to integrate in EMOTIVE. Full support for VMware is part
of our future work.
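This local-versus-remote distinction shows up directly in the libvirt connection URIs. A rough sketch of the URIs typically used per hypervisor (the ESX host name is hypothetical):

```java
public class LibvirtUri {
    // Typical libvirt connection URIs. Xen, KVM, and VirtualBox are driven
    // through a local connection, while VMware ESX is only reachable through
    // a remote URI (the host name below is hypothetical).
    public static String uriFor(String hypervisor) {
        switch (hypervisor) {
            case "xen":  return "xen:///";
            case "kvm":  return "qemu:///system";
            case "vbox": return "vbox:///session";
            case "esx":  return "esx://esx-host.example.org/?no_verify=1";
            default:     throw new IllegalArgumentException("unknown: " + hypervisor);
        }
    }

    public static void main(String[] args) {
        for (String h : new String[] {"xen", "kvm", "vbox", "esx"}) {
            System.out.println(h + " -> " + uriFor(h));
        }
    }
}
```

The code that opens the connection stays the same; only the URI changes, which is what makes the local hypervisors easy to swap while ESX needs a remote endpoint.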
We provide a quantitative and qualitative comparison of EMOTIVE running over different hypervisors
(e.g. Xen, KVM, and VirtualBox) in Section 4.6.1, where the best solution can be identified for
each type of need: performance, greenness, agility, usability, etc.
VirtMonitor.java (API Libvirt) XenMonitor.java (API Xen)
Compute Method Summary Compute Method Summary
int availableMemory() int availableMemory()
void checkAll() void checkAll()
int resume(java.lang.String name) int resume(java.lang.String name)
int save(java.lang.String name) int save(java.lang.String name)
int restore(java.lang.String name) int restore(java.lang.String name)
int start(java.lang.String name) int start(java.lang.String name)
int shutdown(java.lang.String name) int shutdown(java.lang.String name)
void unpause(java.lang.String name) void unpause(java.lang.String name)
void pause(java.lang.String name) void pause(java.lang.String name)
int pauseDomain(java.lang.String name)
int defineXML(java.lang.String name)
int availableCPUs() int availableCPUs()
void dummy() void dummy()
float getCPUAmount(java.lang.String name) float getCPUAmount(java.lang.String name)
int getCPUCapacity(java.lang.String name) int getCPUCapacity(java.lang.String name)
int getCPUFreq() int getCPUFreq()
int getCPUPriority(java.lang.String name) int getCPUPriority(java.lang.String name)
int getCPUSpeed() int getCPUSpeed()
int setCPUCapacity(java.lang.String name, int capacity)
int setCPUPriority(int id, int priority)
int setCPUPriority(java.lang.String name, int priority)
int getDiskRD(int vm) int getDiskRD(int vm, int vbd)
int getDisksRD(int vm) int getDisksRD(int vm)
int getDisksRW(int vm) int getDisksRW(int vm)
int getDiskWR(int vm) int getDiskRW(int vm, int vbd)
int getDomainId(java.lang.String name) int getDomainId(java.lang.String name)
String getDomainName(int domain) String getDomainName(int domain)
List<String> getDomains() List<String> getDomains()
String getIP(int vm) String getIP(int vm)
int getMemory(int vm) int getMemory(int vm)
int getMemory(java.lang.String name) int getMemory(java.lang.String name)
float getMemoryAmount(int vm) float getMemoryAmount(int vm)
float getMemoryAmount(java.lang.String name) float getMemoryAmount(java.lang.String name)
int getMemoryMax(java.lang.String name) int getMemoryMax(java.lang.String name)
int getMemoryStaticMax(java.lang.String name)
int getMemoryStaticMin(java.lang.String name)
int getMemoryDynamicMax(java.lang.String name)
int getMemoryDynamicMin(java.lang.String name)
int getMemoryDomU(int vm)
int freeMemory()
int getNetRX(int vm) int getNetRX(int vm)
int getNetTX(int vm) int getNetTX(int vm)
int getNumCPU(java.lang.String name) int getNumCPU(java.lang.String name)
int getNumDisks(int vm) int getNumDisks(int vm)
int getNumDomains() int getNumDomains()
String getState(java.lang.String name) String getState(java.lang.String name)
boolean migrate(java.lang.String name, java.lang.String destHost) boolean migrate(java.lang.String name, java.lang.String destHost)
int pinCPU(java.lang.String name, int vcpu, int[] cpumap) int pinCPU(int vm, int vcpu, int cpu)
int setCPUCapacity(String name, int nvcpus) int setCPUCapacity(int id, int capacity)
int setHostMemoryDynamicMax(String name, int mem)
void setMemory(int id, int mem) setMemory(int id, int mem)
void setMemory(java.lang.String name, int mem)
void setMemoryFixed(int id, int mem) void setMemoryFixed(int id, int mem)
void setMemoryFixed(java.lang.String name, int mem)
int setMemoryMax(java.lang.String name, int mem)
void setNumCPU(java.lang.String name, int cpu)
void showMemory(java.lang.String name) void showMemory(java.lang.String name)
VirtMonitor.java (API Libvirt) XenMonitor.java (API Xen)
Network Method Summary Network Method Summary
XML defineNetwork(String name, String uuid, String bridge, String address, String netmask, String dev, String mode, String start, String end)
int createNetwork(XML)
int deleteNetwork(String network)
int listNetworks()
List<String> showNetwork(String network)
Figure 14 - Mapping of methods between VirtMonitor and XenMonitor
4.4 EMOTIVE Networks
4.4.1 VLAN
The acronym VLAN stands for Virtual Local Area Network. A VLAN is a logical local area network
(LAN) that extends beyond a single traditional LAN to a group of LAN segments, given specific
configurations. Because a VLAN is a logical entity, its creation and configuration are done completely in
software.
Libvirt is the technology used to create VLANs in EMOTIVE: it allows not only the creation and easy
management of virtual machines, but also of virtual networks.
Libvirt lets VLANs be created and managed either in console mode or through its Java/C API; in
EMOTIVE we use the Java API (30). To create a VLAN with Libvirt, it is first necessary to write
the XML description of the VLAN manually and then pass this XML to the corresponding Libvirt
function. An example XML description is shown below:
<network>
  <name>private</name>
  <bridge name="virbr2" />
  <ip address="192.168.152.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.152.2" end="192.168.152.254" />
    </dhcp>
  </ip>
  <ip family="ipv6" address="2001:db8:ca2:3::1" prefix="64" />
</network>
In EMOTIVE we have automated the XML creation and parsing to provide better VLAN
management and creation. The steps to create a VLAN with EMOTIVE are the following:
1 - Gather the information needed to build a Network.java object (an EMOTIVE class):
name, id, uuid, IP address, gateway, netmask, dev, mode (route, nat, isolated, private), bridge name,
ip_start, ip_end.
2 - Use the EMOTIVE parser to build the XML from this information (the Network object):
String xml = parsing.CreateXML(network);
or
String xml = parsing.CreateXML(name, uuid, bridge, address, netmask, dev, mode, start, end);
3 - Call the Libvirt function that creates the network from the generated XML:
conn.networkCreateXML(xml);
Libvirt has a set of elements that control how a virtual network is connected to the physical
LAN: bridge, domain, and forward (nat, route, bridge, private, vepa, passthrough). Currently
EMOTIVE uses only route, nat, isolated, and private, but in the future it could evolve to use more.
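The parser step above can be sketched as plain string assembly of the libvirt `<network>` document. This is a simplified illustration, not EMOTIVE's actual `parsing.CreateXML`; note that in libvirt an isolated network simply omits the `<forward>` element:

```java
// Simplified sketch of generating the libvirt <network> XML from the
// fields of a Network object; EMOTIVE's real parser may differ.
public class NetworkXml {
    public static String createXML(String name, String uuid, String bridge,
                                   String address, String netmask,
                                   String mode, String start, String end) {
        StringBuilder sb = new StringBuilder();
        sb.append("<network>\n");
        sb.append("  <name>").append(name).append("</name>\n");
        sb.append("  <uuid>").append(uuid).append("</uuid>\n");
        if (!"isolated".equals(mode)) {
            // Isolated networks have no <forward> element at all.
            sb.append("  <forward mode=\"").append(mode).append("\"/>\n");
        }
        sb.append("  <bridge name=\"").append(bridge).append("\"/>\n");
        sb.append("  <ip address=\"").append(address)
          .append("\" netmask=\"").append(netmask).append("\">\n");
        sb.append("    <dhcp>\n");
        sb.append("      <range start=\"").append(start)
          .append("\" end=\"").append(end).append("\"/>\n");
        sb.append("    </dhcp>\n");
        sb.append("  </ip>\n");
        sb.append("</network>\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(createXML("private", "0000-1111", "virbr2",
                "192.168.152.1", "255.255.255.0", "nat",
                "192.168.152.2", "192.168.152.254"));
    }
}
```

The resulting string is exactly what would be handed to `conn.networkCreateXML(xml)`.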
The OCCI API is used to offer these network services to the upper layers and to provide a
common language interface with other Clouds such as OpenNebula. OCCI is very useful for managing,
creating, and removing network resources in the Cloud: basically, it allows networks to be created, listed,
shown, and deleted, similarly to virtual machines (OCCI compute).
OCCI (occi-network):
NETWORK
ID, the uuid of the network
NAME, describing the network
ADDRESS, of the network
SIZE, of the network, defaults to C
FUNCTIONS:
create, list, show, delete
Example:
<NETWORK>
<ID>123</ID>
<NAME>Blue Network</NAME>
<ADDRESS>192.168.0.1</ADDRESS>
<SIZE>C</SIZE>
</NETWORK>
The EMOTIVE Scheduler can use these functionalities via OCCI RESTful web service calls to create,
list, show, etc. a VLAN. The Scheduler then communicates with the VtM, and the VtM uses the network
methods located in VirtMonitor.java to invoke the Libvirt network facilities.
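Reading the fields of such an OCCI NETWORK document in Java can be done with the standard DOM parser. A small sketch (the helper name `field` is hypothetical) that extracts a tag's text content:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;

public class OcciNetworkParser {
    // Extract the text content of one tag from an OCCI NETWORK document.
    // Tag names follow the example above; the real OCCI rendering may differ.
    public static String field(String xml, String tag) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<NETWORK><ID>123</ID><NAME>Blue Network</NAME>"
                + "<ADDRESS>192.168.0.1</ADDRESS><SIZE>C</SIZE></NETWORK>";
        System.out.println(field(xml, "NAME") + " @ " + field(xml, "ADDRESS"));
    }
}
```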
Figure 15 - EMOTIVE Cloud example with virtual machines and VLANs

The next screen [Figure 16] shows the help menu of the EMOTIVE console, where the new network
management functionality can be seen. Functions such as (net-* …) are used for VLAN management
and (vpn …) for VPN management.
localhost:~$ help
Help
server: change server to connect
create: create a VM with generic name
name: create a VM with specific name
set: set a property of the domain
+ (vmid/memory/cpu/home/exten/ip/name/taskid)
+ (id)
new: create a new domain
run [CMD]: run a task in a VM
status [TASKID]: show status of a task
pause [VMID]: pause VM
unpause [VMID]: unpause VM
destroy [VMID]: destroy VM
cancel [TASKID]: cancel a task
show
domain: domain in the system
task: tasks in the system
vm: VMs in the system
vmall: all VMs in the system
nodes: nodes in the system
vmothers: extern VM in the system (no EMOTIVE VM)
net-list: show network
net-create: create a new network
net-occi: create a new network
+ (id/name/ip/size(A,B,C)
net-destroy: destroy network
+ (name)
net-show show network
net-set: set a property of the Network
+ (name/ip/netmask/mode/start/end)
+ (quick [name-bridge] + [gateway+ip_range])
net-edit: In construction
vpn: create a vpn
multi + server + localip (openvpn)
multi + addclient + server + localip (openvpn)
ptp + server + localip (openvpn)
pptp + server + localip + user + password + tunnelname (pptp)
exit: quit this interactive terminal
Figure 16 - EMOTIVE help menu
The next screen [Figure 17] shows a demonstration of creating a VLAN with EMOTIVE.
localhost:~$ net-show
Network:
Name: vlan01
Ip: 132.168.163.1
Mode: private
Netmask: 255.255.255.0
Bridge: vlan01
Range
IP start: 132.168.163.2
IP end : 132.168.163.254
Quickmode Example: >>'net-set quick vlan00 192.168.1'
name=vlan00 bridge=vlan00 ip=192.168.1.1 ip range [start
192.168.1.2 , end 192.168.1.254
localhost:~$ net-create
localhost:~$ net-list
Networks:
1: vlan01
localhost:~$
Figure 17 - Demonstration of creating a VLAN with EMOTIVE
4.4.2 VPN
Virtual LANs allow the creation of isolated networks. We also wanted to create secure
networks using Virtual Private Networks (VPNs), so we developed the creation of private virtual networks
between VLANs or within the same network.
Virtual Private Networking is a solution that supports remote access and private data communications
over public networks, as a cheaper alternative to leased lines. VPN clients communicate with VPN
servers using a number of specialized protocols.
To provide this functionality we implemented Java functions that create VPNs and make their
management easy. We automated everything necessary to create them, using the open-source tools
OpenVPN (31) and PPTP (32) through bash commands invoked from Java, together with EMOTIVE Java
functions. To create a VPN, we automated the configuration a system administrator would perform
manually: first we create the system configuration file (/etc/openvpn/openvpn.cfg or /etc/pptpd.conf)
on the two nodes, then we add the certificates on the two nodes, and finally we launch the OpenVPN or
PPTPD daemon on each node.
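Driving the daemons from Java, as described above, amounts to building and launching the same commands an administrator would type. A minimal sketch (the daemon flags and config paths are illustrative; the real EMOTIVE scripts may differ):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class VpnCommands {
    // Build the daemon launch command for a node. Flags and config-file
    // paths are illustrative assumptions, not EMOTIVE's exact invocation.
    public static List<String> daemonCommand(String protocol) {
        if ("openvpn".equals(protocol)) {
            return Arrays.asList("openvpn", "--config",
                    "/etc/openvpn/openvpn.cfg", "--daemon");
        } else if ("pptp".equals(protocol)) {
            return Arrays.asList("pptpd", "--conf", "/etc/pptpd.conf");
        }
        throw new IllegalArgumentException("unknown protocol: " + protocol);
    }

    // Launch the daemon, much like running the equivalent bash command.
    public static Process launch(String protocol) throws IOException {
        return new ProcessBuilder(daemonCommand(protocol)).start();
    }

    public static void main(String[] args) {
        System.out.println(daemonCommand("openvpn"));
        System.out.println(daemonCommand("pptp"));
    }
}
```

Separating command construction from execution also makes the automation easy to test without actually starting a daemon.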
The protocols used are PPTP, with the PPTPD application, and SSL, with the open-source OpenVPN
application. We use these two protocols in EMOTIVE Cloud because they offer different interesting features.
OpenVPN is very useful for:
- Stronger encryption: with PPTP it is possible for someone to capture your password while
connecting, whereas OpenVPN offers stronger cryptographic protection.
- No dropped tunnels (when using TCP): if the connection is lost, you are not silently thrown back
onto the open Internet, which may be important.
- Flexibility in ports and servers: OpenVPN can be configured to run over almost any available
port, which helps when the standard configuration cannot connect.
And the PPTP VPN advantages:
- Works on mobile devices: iPhone, Android, and Windows Mobile are just a few of the devices
that work with PPTP. These are very easy to set up: with just a host name, login, and password
you are connected.
Who is the winner? To sum up, if you are looking for high security and privacy you should choose
OpenVPN; if you need an easy-to-set-up VPN, PPTP is a good choice, and for mobile devices PPTP is
often the only solution.
In EMOTIVE we can model a 1-N relationship to create VPNs with both the PPTP and OpenVPN
protocols, and we can also create an N-N relationship (only with OpenVPN).
Cloud Computing includes mobile devices, and since this thesis focuses on Cloud Computing and on
being green, supporting several kinds of devices is interesting. The PPTP VPN allows an easy VPN
tunnel to be created on different kinds of devices and services.
4.4.3 Networks by Software are Green
An EMOTIVE VLAN or VPN is a software solution installed on an existing server. Although there are
hardware solutions for creating VLANs and VPNs, we chose a software solution because it is easier to
implement and manage in a Cloud, and its maintenance and cost are cheaper than those of hardware solutions.
As we know, EMOTIVE Cloud facilitates the implementation and management of a virtualization
infrastructure. In this master thesis we have discussed the advantages of virtualization and its green
approach; we now make the same comments in relation to network virtualization by software,
in this case the software implementation of VLANs and VPNs.
In the following paragraphs we compare software and hardware network implementations, including their
green assessment.
Implementation:
Implementing a VLAN/VPN in hardware involves adding a new hardware device to the existing network
infrastructure, whereas implementing it in software involves installing software on an existing
server. Networks in software therefore save capital cost, because there is no need to buy a switch, router,
or other network appliance, and they save power, because there is no extra appliance to plug in: the same
computer with the appropriate software suffices. There is no need to cascade virtual switches or to guard
against bad virtual switch connections, and because virtual switches do not share physical Ethernet
adapters, leaks between switches do not occur. A single switch can simply be turned into multiple
virtual switches (VLANs), which simplifies network topologies.
Maintenance:
Maintaining a VLAN/VPN implemented in hardware usually requires an ongoing contract with the
vendor, who offers comprehensive support for the VLAN/VPN device. Furthermore, hardware
implementations often require additional training so that the in-house staff can manage the day-to-day
operations.
Networks implemented in software are easy to manage and allow more flexibility in network
administration. A dedicated administrator is not necessary, and it is possible to indirectly save energy and
cost because administrators can work remotely, avoiding trips to the datacenter. A virtual environment is
easy to manage, but in datacenters with many different networks, servers, and storage devices it is hard to
consolidate this software; in that case it is easier to manage these networks with physical hardware, as we
can see today in most data centers.
Cost:
A hardware VPN solution is generally more expensive up front. VPN hardware can also carry a cost in
terms of training, as it can be significantly more complex to implement and support.
Virtual switches do not require a spanning tree protocol (energy efficient), no real switch has to process
these network communications (energy efficient), the routing of broadcast traffic on the network is
reduced (energy efficient), and VLANs remove physical boundaries (energy efficient).
Implementing VLANs in software is very useful for creating virtual networks on demand in a short
period of time. This saves money on hardware purchase and installation: a VLAN or VPN can be created
easily and quickly with a couple of clicks, with no need to buy switches, hire system administrators, or
spend on power, cooling, and space. This dynamic configuration is green because it provides additional
temporary communications, and the network topology can be modified easily.
To create VPNs/VLANs between Clouds, it is necessary to find new candidate interoperability
interfaces.
Performance:
The performance of either solution is limited by the available hardware and network resources. Often a
VPN software package is installed on an existing server alongside other applications, restricting the
performance of all applications to the server's available resources. In contrast, VPN hardware is a
dedicated device limited only by its own hardware.
Security:
VPN hardware devices are generally considered more secure than VPN software solutions, largely
because the hardware device is dedicated to the sole purpose of providing the VPN and is already
equipped to handle the insecure outside network. VPN software, on the other hand, often shares a
server with other applications; as a result, those applications and the server's operating system are
vulnerable and must be "hardened", that is, secured in the face of the open public network.
Conclusions:
One advantage of a VLAN over a physical LAN is that a VLAN behaves like a physical Ethernet LAN:
the upper communication layers and the software running on the network cannot distinguish whether
the LAN is physical or virtual.
In conclusion, there are now other ways to create LANs, which brings new network features and
possibilities. One solution is probably better in some cases and the other in other cases. Currently,
hardware LANs are the most used solution in physical datacenters; but with the introduction of Cloud
Computing, managing networks in software becomes an interesting possibility, and in that scenario a
hardware network implementation is very hard to achieve. Moreover, if the network has not been
deployed yet, a software network may be chosen because it is very easy, quick, and cheap to use.
Networks in software are much more flexible, but less secure, than hardware; in an interoperable Cloud,
a software network solution would be better.
If we analyze the power consumption, a hardware network is a physical chip, appliance, or other
physical device that consumes electricity, whereas a software network demands additional performance
from the server, which produces a small increase in server power: more processes on the server mean
more load and more consumption. However, we would also have to evaluate the carbon footprint of
both solutions. We have not been able to extend these results because we do not have enough machinery
and resources, but software networks probably produce fewer carbon emissions.
4.5 EMOTIVE Interoperability
4.5.1 API OCCI and Web Services
The interoperability problem among Cloud providers is well known. As shown in Figure 18, different
Cloud providers use their own independent interfaces, which makes it difficult to communicate with and
federate multiple providers (33). Recently, the OCCI API has been proposed as a common standard to
overcome this problem. OCCI is a Cloud interaction layer that uses HTTP methods (GET, POST, PUT,
DELETE) with an XML format. This interface uses multiple data structures (i.e. Compute, Network,
Storage) to describe the different resources; using these structures, it can operate on the virtual
resources (i.e. create, list, show, update, delete) [Figure 18].
EMOTIVE was originally designed using a distributed SOAP architecture but now uses RESTful
Web Services. This architecture allows using only some parts of EMOTIVE and supports agile and
dynamic construction of new Cloud environments; its REST interface makes EMOTIVE highly
interoperable with other Cloud solutions.
Furthermore, popular Cloud solutions such as OpenNebula have adopted OCCI to define their
interfaces. Aiming at interoperability with other Cloud solutions, EMOTIVE also implements an OCCI
interface. Notice, however, that the standard OCCI interface does not support all the original
EMOTIVE functionality; for this reason, some methods for job and cluster management are supported
through EMOTIVE's original REST interface. Accordingly, EMOTIVE Cloud currently supports two
interfaces: the EMOTIVE REST API and standard OCCI (34). In the following lines we briefly
describe these two interfaces.
Figure 18 - Interfaces of different Clouds
4.5.2 REST vs. SOAP
REST (Representational State Transfer) basically means that each unique URL is a representation of some object. You can read the contents of that object with an HTTP GET, delete it with an HTTP DELETE, and create or modify it with a POST or PUT.
The main goal of migrating from SOAP Web Services to RESTful Web Services is to make EMOTIVE easier to extend and more interoperable; moreover, many new Web services nowadays are implemented with a REST architecture rather than a SOAP one.
The main advantages of REST Web services are:
● Lightweight: little extra XML markup is needed
● Human-readable results
● Easy to build: no toolkits are required
SOAP also has some advantages:
● Sometimes it is easier to consume
● Rigid: type checking, and it adheres to a contract
● Good development tool support
For consuming web services, it is sometimes a toss-up as to which is easier. For instance, Google's AdWords web service is quite hard to consume: it uses SOAP headers and a number of other things that make it difficult. Conversely, Amazon's REST web service can be tricky to parse because it can be highly nested, and the result schema can vary quite a bit depending on what you search for. Whichever architecture is chosen, it should be easy for developers to access and well documented.
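As a minimal illustration of the REST idea described above, each resource URL combined with an HTTP verb identifies one operation. The sketch below uses hypothetical resource paths (not EMOTIVE's actual endpoints):

```python
# Minimal illustration of REST: one URL per resource, HTTP verb = operation.
# The paths below are hypothetical examples, not EMOTIVE's actual endpoints.

def rest_operation(verb: str, path: str) -> str:
    """Map an HTTP verb applied to a resource path to its CRUD meaning."""
    # A path ending in an id (e.g. /vms/42) names one item; /vms names the collection.
    is_item = path.rstrip("/").split("/")[-1].isdigit()
    table = {
        ("GET", False): "list collection",
        ("POST", False): "create new resource",
        ("GET", True): "read resource",
        ("PUT", True): "update (replace) resource",
        ("DELETE", True): "delete resource",
    }
    return table.get((verb, is_item), "unsupported")

print(rest_operation("GET", "/vms"))        # list collection
print(rest_operation("POST", "/vms"))       # create new resource
print(rest_operation("DELETE", "/vms/42"))  # delete resource
```

This uniformity (no per-method XML envelopes, no toolkit-generated stubs) is what makes REST services lightweight and human-readable compared with SOAP.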
EMOTIVE Cloud improves with the new RESTful Web Services architecture because:
● EMOTIVE is now modular
● It is easy to extend and adapt
● Development is easier
● An OCCI adaptation is straightforward
● Results are human-readable, and data parsing and processing are easy
4.5.3 API OCCI in EMOTIVE
OCCI describes five methods for Compute and four each for Network and Storage. EMOTIVE supports four of the Compute methods and the four Network methods, but it does not support the Storage ones.
The methods comprising the EMOTIVE REST interface are described in Table 1; the methods with a correspondence in the OCCI interface are shown boldfaced. Our interfaces basically allow:
● Compute: create, get, list and cancel Virtual Machines (supporting CIM and OVF).
● Network: similar to the Compute methods, but used to describe virtual networks.
● Jobs: used to submit jobs to Virtual Machines (we use the JSDL format to describe them).
● Nodes: describes the system topology (used for EMOTIVE internals).
COMPUTE
· String Env-ID = createEnvironment (Compute)
· String Env-ID = createEnvironmentAndJob (Compute, JSDL)
· terminateEnvironment (String Env-ID)
· List <Env-ID> = getEnvironments ()
· Compute = getEnvironment (String Env-ID)
· String state = getEnvironmentState (String Env-ID)
NODES
· String [Node-ID or Env-ID] = getLocation (String [Env-ID or Act-ID])
· List <Node-ID> = getNodes ()
· nodeDown (String Node-ID)
· nodeUp (String Node-ID)
JOBS
· List <Act-ID> = getActivities ()
· Act-ID = submitActivity (JSDL)
· cancelActivity (String Act-ID)
· String status = getActivityStatus (String Act-ID)
· List <String Act-ID> = getAllActivities ()
NETWORK
· String Net-ID = createNetwork (Network)
· deleteNetwork (String Net-ID)
· Network = getNetwork (String Net-ID)
· List <Network> = getListNetworks ()
· String Net-ID = createVPN (Network)
Table 1 - EMOTIVE REST API. Correspondence with OCCI methods is noted in bold
Table 2 shows the equivalence between the methods of the EMOTIVE REST API and the OCCI API: it describes the mapping of the OCCI REST methods to the EMOTIVE REST methods. In fact, this is how we have implemented our OCCI support: by means of a wrapper that translates OCCI methods into EMOTIVE REST ones.
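The wrapper described above can be sketched as a simple dispatch table that routes each (HTTP verb, path pattern) pair of the OCCI interface to the corresponding EMOTIVE method name from Table 2. This is an illustrative Python sketch (EMOTIVE's real wrapper is written in Java, and the route patterns here are an assumption based on the table):

```python
# Sketch of an OCCI-to-EMOTIVE wrapper: each (HTTP verb, path pattern) of the
# OCCI REST interface is routed to the corresponding EMOTIVE REST method.
# Illustrative only; the real wrapper is implemented in Java inside EMOTIVE.
import re

ROUTES = [
    ("POST",   re.compile(r"^/compute$"),         "createEnvironment"),
    ("GET",    re.compile(r"^/compute$"),         "getEnvironments"),
    ("GET",    re.compile(r"^/compute/([^/]+)$"), "getEnvironment"),
    ("DELETE", re.compile(r"^/compute/([^/]+)$"), "terminateEnvironment"),
    ("POST",   re.compile(r"^/network$"),         "createNetwork"),
    ("GET",    re.compile(r"^/network$"),         "getNetworks"),
    ("GET",    re.compile(r"^/network/([^/]+)$"), "getNetwork"),
    ("DELETE", re.compile(r"^/network/([^/]+)$"), "deleteNetwork"),
]

def dispatch(verb: str, path: str):
    """Return the EMOTIVE method name and the captured id (if any) for an OCCI request."""
    for v, pattern, method in ROUTES:
        m = pattern.match(path)
        if v == verb and m:
            return method, (m.group(1) if m.groups() else None)
    raise ValueError(f"no EMOTIVE mapping for {verb} {path}")

print(dispatch("POST", "/compute"))       # ('createEnvironment', None)
print(dispatch("DELETE", "/compute/42"))  # ('terminateEnvironment', '42')
```

A dispatch table like this keeps the OCCI layer thin: adding a new OCCI operation only requires one new route entry, which is why the REST architecture makes the OCCI adaptation easy.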
COMPUTE
EMOTIVE Methods (Java) API OCCI (REST)
createEnvironment(Compute) /compute POST (PR)
terminateEnvironment(String id) /compute/id DELETE (ER)
getEnvironments() /compute GET (PR)
getEnvironment(String id) /compute/id GET (ER)
NETWORK
EMOTIVE Methods (Java) API OCCI (REST)
createNetwork(Network) /network POST (PR)
getNetworks() /network GET (PR)
deleteNetwork(String id) /network/id DELETE (ER)
getNetwork(String id) /network/id GET (ER)
Table 2 - Methods used in EMOTIVE Cloud
Regarding the data structures used to describe the resources, our createEnvironment(Compute) method is able to support the same Compute structure used in OpenNebula. An example of this Compute structure follows:

<COMPUTE href="http://www.opennebula.org/compute/32">
  <ID>12342-4356-12345-24324</ID>
  <NAME>Web Server</NAME>
  <STATE>running</STATE>
  <DISK>
    <STORAGE href="http://www.opennebula.org/storage/34"/>
    <TYPE>OS</TYPE>
    <TARGET>hda</TARGET>
  </DISK>
  <DISK>
    <STORAGE href="http://www.opennebula.org/storage/24"/>
    <TYPE>CDROM</TYPE>
    <TARGET>hdc</TARGET>
  </DISK>
  <NIC>
    <NETWORK href="http://www.opennebula.org/network/12"/>
    <MAC>00:ff:72:31:23:17</MAC>
    <IP>192.168.0.12</IP>
  </NIC>
</COMPUTE>
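A Compute document like the one above can be consumed with any standard XML library. The sketch below parses a trimmed version of the example with Python's standard-library ElementTree and extracts the fields a client would typically need:

```python
# Parsing the OpenNebula-style Compute structure shown above with the
# standard-library ElementTree (the XML is a trimmed copy of the example).
import xml.etree.ElementTree as ET

compute_xml = """<COMPUTE href="http://www.opennebula.org/compute/32">
  <ID>12342-4356-12345-24324</ID>
  <NAME>Web Server</NAME>
  <STATE>running</STATE>
  <NIC>
    <NETWORK href="http://www.opennebula.org/network/12"/>
    <MAC>00:ff:72:31:23:17</MAC>
    <IP>192.168.0.12</IP>
  </NIC>
</COMPUTE>"""

root = ET.fromstring(compute_xml)
vm = {
    "id": root.findtext("ID"),
    "name": root.findtext("NAME"),
    "state": root.findtext("STATE"),
    "ip": root.findtext("NIC/IP"),   # path expression into the nested NIC element
}
print(vm)
```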
Similarly, an example of the Network structure, used by the createNetwork(Network) method, is as follows:

<NETWORK href="http://www.opennebula.org/network/12">
  <MAC>00:ff:72:31:23:17</MAC>
  <IP>192.168.0.12</IP>
</NETWORK>
As an alternative to Compute, our createEnvironment(Compute) and createEnvironmentAndJob(Compute, JSDL) methods also support simple Open Virtualization Format (OVF) files to describe the features of the VMs to be created. This is an OVF example of a simple VM with 2 CPUs and 2 GB of memory:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ns1:Envelope
    xmlns:ns1="http://schemas.dmtf.org/ovf/envelope/1"
    xmlns:ns2="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettingData"
    xmlns:ns3="http://schemas.dmtf.org/wbem/wscim/1/common"
    xmlns:ns4="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">
  <ns1:References>
    <ns1:File ns1:href="/cosa/fina.img" ns1:id="root"/>
    <ns1:File ns1:size="1073741824" ns1:id="home"/>
  </ns1:References>
  <ns1:VirtualSystem>
    <ns1:Info>EMOTIVE Cloud Virtual Machine Description</ns1:Info>
    <ns1:VirtualHardwareSection>
      <ns1:Item>
        <ns4:AllocationUnits>cpu</ns4:AllocationUnits>
        <ns4:Description>Number of CPUs</ns4:Description>
        <ns4:ElementName>x86</ns4:ElementName>
        <ns4:InstanceID>1</ns4:InstanceID>
        <ns4:ResourceType>3</ns4:ResourceType>
        <ns4:VirtualQuantity>2</ns4:VirtualQuantity>
      </ns1:Item>
      <ns1:Item>
        <ns4:AllocationUnits>byte * 2^10</ns4:AllocationUnits>
        <ns4:Description>RAM Memory</ns4:Description>
        <ns4:ElementName>2046MB of Memory</ns4:ElementName>
        <ns4:InstanceID>2</ns4:InstanceID>
        <ns4:ResourceType>4</ns4:ResourceType>
        <ns4:VirtualQuantity>2046</ns4:VirtualQuantity>
      </ns1:Item>
      <ns1:Item>
        <ns4:Caption>Home drive</ns4:Caption>
        <ns4:HostResource>ovf:/file/home</ns4:HostResource>
        <ns4:InstanceID>3</ns4:InstanceID>
        <ns4:ResourceType>17</ns4:ResourceType>
      </ns1:Item>
      <ns1:Item>
        <ns4:Caption>Root drive</ns4:Caption>
        <ns4:HostResource>ovf:/file/root</ns4:HostResource>
        <ns4:InstanceID>4</ns4:InstanceID>
        <ns4:ResourceType>17</ns4:ResourceType>
      </ns1:Item>
    </ns1:VirtualHardwareSection>
  </ns1:VirtualSystem>
</ns1:Envelope>
In addition, EMOTIVE supports the Job Submission Description Language (JSDL) to submit jobs through the methods submitActivity(JSDL) and createEnvironmentAndJob(Compute, JSDL). JSDL is an extensible XML specification for describing the requirements of computational jobs. It was initially focused on Grid computing, but it is not restricted to that environment. JSDL describes the job name, description, resource requirements (RAM, swap, CPU, number of CPUs, operating system, etc.), execution limits, file staging, the command to execute… This is an example of an ANSYS CFX simulation JSDL:

<?xml version="1.0" encoding="UTF-8"?>
<jsdl:JobDefinition xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:jsdl-hpcpa="http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <jsdl:JobName>AnsysDemo</jsdl:JobName>
    </jsdl:JobIdentification>
    <jsdl:Application>
      <jsdl:ApplicationName>AnsysCfx</jsdl:ApplicationName>
      <jsdl:ApplicationVersion>PM26</jsdl:ApplicationVersion>
      <jsdl-hpcpa:HPCProfileApplication>
        <jsdl-hpcpa:Argument>-cpu_load=1.0</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>-threads_num=2</jsdl-hpcpa:Argument>
      </jsdl-hpcpa:HPCProfileApplication>
    </jsdl:Application>
  </jsdl:JobDescription>
</jsdl:JobDefinition>
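The JSDL document above is namespaced XML, so extracting fields such as the job name and the application arguments requires mapping the namespace URIs explicitly. A sketch using the standard-library ElementTree and a trimmed copy of the example:

```python
# Extracting the job name and application arguments from the JSDL example
# above, using the standard-library ElementTree with explicit namespaces.
import xml.etree.ElementTree as ET

jsdl_doc = """<?xml version="1.0" encoding="UTF-8"?>
<jsdl:JobDefinition xmlns:jsdl="http://schemas.ggf.org/jsdl/2005/11/jsdl"
    xmlns:jsdl-hpcpa="http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa">
  <jsdl:JobDescription>
    <jsdl:JobIdentification>
      <jsdl:JobName>AnsysDemo</jsdl:JobName>
    </jsdl:JobIdentification>
    <jsdl:Application>
      <jsdl:ApplicationName>AnsysCfx</jsdl:ApplicationName>
      <jsdl-hpcpa:HPCProfileApplication>
        <jsdl-hpcpa:Argument>-cpu_load=1.0</jsdl-hpcpa:Argument>
        <jsdl-hpcpa:Argument>-threads_num=2</jsdl-hpcpa:Argument>
      </jsdl-hpcpa:HPCProfileApplication>
    </jsdl:Application>
  </jsdl:JobDescription>
</jsdl:JobDefinition>"""

# Local prefixes are arbitrary; only the URIs must match the document.
ns = {"jsdl": "http://schemas.ggf.org/jsdl/2005/11/jsdl",
      "hpc": "http://schemas.ggf.org/jsdl/2006/07/jsdl-hpcpa"}

root = ET.fromstring(jsdl_doc)
job_name = root.findtext(".//jsdl:JobName", namespaces=ns)
args = [a.text for a in root.findall(".//hpc:Argument", ns)]
print(job_name, args)  # AnsysDemo ['-cpu_load=1.0', '-threads_num=2']
```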
4.6 EMOTIVE for Green Computing
Following the green approach of this master thesis, we want to evaluate the energy impact of EMOTIVE. We use three benchmarks focused on power consumption and performance. Most computer benchmarks compare features that are implicitly or explicitly linked to performance, but it is now also important to report power consumption, because it has become an important variable to consider: power is becoming ever more expensive, sometimes even more expensive than the hardware itself, and many companies already spend more money on power than on hardware. Therefore we perform a green evaluation and investigate possible new green approaches. In the next subsections we present the results of three benchmarks: a green hypervisor comparison, an Atom-Xeon-hybrid architecture comparison, and a middleware scheduling comparison.
All tests have been made with two virtualized servers managed by a middleware. The workload used in these benchmarks introduces virtual machines with running tasks into the Cloud; we then study the behavior and performance of the servers and measure the power consumption with a physical power meter, the 'WattsUp Pro' (35).
4.6.1 Green Hypervisor Comparison
Introduction
The first benchmark compares the power consumption behavior of three different hypervisors in EMOTIVE. As commented before, this is possible thanks to the new Libvirt engine used in EMOTIVE. Each hypervisor has different features, so we compare the KVM hypervisor (based on full virtualization), Xen (based on paravirtualization) and VirtualBox (emulation), and we measure the difference in power consumption (in Watts) among them. We do not extend the comparison to other aspects, because many hypervisor comparisons (at the performance level and others) can already be found in the literature.
We also think it is not necessary to compare the behavior of these hypervisors in other Cloud middlewares, because hypervisor power and performance are independent of the middleware on top of which they run. The same holds for the computer architecture, so all tests run on the same machine and operating system; only the hypervisor changes. We use Xen version 3.4.0, KVM with Linux kernel version 2.6.28.1-kvm, and VirtualBox version 3.2. The three hypervisors use Libvirt 0.8.4, Debian GNU/Linux 5.0 and an Intel Xeon E5440 2.83 GHz CPU.
Workload
In the comparison we use a workload that creates 6 virtual machines on a single server in the following order: at second 50 we create the first VM, 50 seconds later another VM, at second 150 two more VMs, and later two more virtual machines (one at second 200 and the last at second 300). Each virtual machine runs a job that executes a loop with N iterations; each iteration performs several types of arithmetic operations, so this benchmark stresses the CPU. Each job takes between 10 and 30 seconds to finish, depending on the hypervisor and the benchmark. Performance is closely linked to the power consumption results, because power consumption is strongly tied to CPU utilization.
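The CPU-stress job run inside each VM can be sketched as follows. The value of N and the exact mix of operations are illustrative assumptions; the thesis does not give the job's code:

```python
# Sketch of the CPU-stress job run inside each VM: a loop of N iterations,
# each performing several arithmetic operations. N is an illustrative value.
def stress_job(n: int) -> float:
    acc = 0.0
    for i in range(1, n + 1):
        acc += i * 3 - i / 2      # mixed multiply / divide / add per iteration
        acc = acc % 1_000_003     # keep the accumulator bounded
    return acc

result = stress_job(100_000)
print(f"finished 100000 iterations, acc={result:.2f}")
```

A purely arithmetic loop like this keeps the CPU pipeline busy with no I/O, which is why the measured power tracks CPU utilization so closely in this benchmark.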
Results
As the graph shows [Figure 19], KVM and Xen obtain similar results. VirtualBox, in contrast, loses a lot of performance when it has to scale: it has higher power consumption and worse performance than Xen and KVM, so the real contest is between Xen and KVM. VirtualBox can currently be a very useful virtualization environment on desktop machines, but not in datacenters. With only one virtual machine it is sufficient, since its performance and power consumption are similar to those of the other two hypervisors, but VirtualBox does not scale well beyond two virtual machines. On the other hand, it makes it possible to create a test Cloud environment on a desktop quickly and easily, which is very useful for EMOTIVE testers and developers, who can avoid using a big infrastructure to test new research environments. Perhaps Oracle will improve its hypervisor in the future (after the acquisition of Sun); indeed, after we finished this comparison, Oracle published VirtualBox version 4. But at this moment the best open-source or free solutions are Xen and KVM.
The next table [Table 3] shows the average power consumption in Watts for all the hypervisors. KVM is the greenest hypervisor, but Xen is very close and the difference is minimal. We also observed a smoother behavior with Xen, whereas KVM shows bigger peaks, so the decision between KVM and Xen depends on the kind of workload. Overall, KVM consumes slightly less power than Xen. In conclusion, the choice of Xen or KVM depends on the environment, the kind of workload and its uses.
Figure 19 - Power hypervisor comparison
Power Average EMOTIVE
XEN 290,2 W
KVM 289,2 W
VirtualBox 293,2 W
Table 3 - Power average
XEN or KVM?
Xen is a hypervisor that supports x86, x86_64, Itanium and ARM architectures, and can run Linux, Windows, Solaris and BSDs as guests on their supported CPU architectures. Xen can do full virtualization on systems that support virtualization extensions, but it can also work as a hypervisor on machines that do not have them: for example, Atom and ARM (which are interesting low-power processors) and older CPUs. However, to run a Xen host you need a supported kernel.
KVM is a hypervisor that is in the mainline Linux kernel. Your host OS obviously has to be Linux, but it supports Linux, Windows, Solaris and BSD guests. It runs on x86 and x86-64 systems with hardware virtualization extensions. This means that KVM is not an option on older CPUs made before the virtualization extensions were developed, and it rules out newer CPUs (like Intel's Atom CPUs) that do not include them. For the most part that is not a problem for data centers, which tend to replace hardware every few years anyway, but it means that KVM is not an option on some niche systems, like the SM10000, that are trying to use Atom CPUs in the data center.
Xen is running on quite a lot of servers, from low-cost Virtual Private Server providers like Linode to big players like Amazon with EC2. Xen has been around a bit longer, so it has had more time to mature than KVM. You will find some features in Xen that have not yet appeared in KVM, though the KVM project has a lengthy to-do list, and KVM is going to become more prominent in the future: Red Hat and Canonical have begun supporting it. KVM is not yet a mature project, but its performance is improving day by day, and it is growing quickly precisely because it is part of the mainline Linux kernel.
In our test, full virtualization is marginally faster than paravirtualization. Therefore, from the results obtained it does not appear that paravirtualization exhibits greater performance than full virtualization; one reason for this may be that our CPUs support full hardware virtualization. However, our tests were not exhaustive, because we did not intend to do a full hypervisor comparison: basically, we ran these tests to learn the behavior and power consumption of the hypervisors in EMOTIVE Cloud. A full comparison would require similar environments, for example using Xen with full virtualization instead of paravirtualization. With this comparison we show that paravirtualization is a great alternative for processors without VT instructions, which cannot run a full-virtualization hypervisor.
It is clear, however, that emulation-based virtualization (used by VirtualBox) is a poor alternative; it is only recommended for personal desktops and laptops, and it is important to know that emulation does not scale.
Xen and KVM introduce little overhead and power consumption. It is hard to choose a winner because it depends on the environment and on each case. KVM is rapidly improving, though Xen has better management tools and more robust migrations; in this respect KVM needs to improve.
Conclusions
In conclusion, this comparison shows that EMOTIVE behaves well on all hypervisors thanks to the new Libvirt API used in EMOTIVE Cloud. EMOTIVE Cloud will now evolve together with the Libvirt API: if Libvirt evolves, EMOTIVE Cloud will evolve as well, so new features added in Libvirt will be indirectly added to EMOTIVE Cloud. We also need to consider the continuous evolution of KVM and Libvirt for the future; KVM in particular is gaining strength very quickly, and these results will be outdated in a few months due to the continuous evolution of KVM and the others.
In the next comparisons we use the Xen hypervisor because we need to run on Atom platforms. This platform does not have CPU virtualization instructions, so we need to choose paravirtualization with Xen. Nowadays most Intel chips have VT instructions, but the Intel Atom and the new ARM processors do not. In any case, this kind of processor is very interesting for its low power consumption.
4.6.2 Architecture comparison (Atom-Xeon-Hybrid)
Introduction
In this second benchmark we demonstrate that EMOTIVE Cloud can work with different types of computer architectures by comparing their power consumption. The main goal of this benchmark is to find the best tradeoff between power consumption and performance.
It is important to know that the Atom processor is a CPU specialized in low power consumption, but it has low performance. On the other hand, the Xeon has good performance, but its power consumption is high. Nowadays it is important to have datacenters or Clouds with high performance and also low power consumption; EMOTIVE Cloud allows the use of hybrid architectures to achieve this. In particular, we implement a hybrid solution with Xeon and Atom processors for greener computing.
This benchmark is composed of three tests, all running the EMOTIVE Cloud middleware. The first test uses Xeon servers, the second only Intel Atom servers, and the last a hybrid solution with both processors. All tests use two physical nodes simultaneously (two Atom processors, two Xeon processors, or the mixed solution with one Atom and one Xeon). [Figure 20, Figure 22]
Workload
We use a workload that creates 6 virtual machines on two servers in the following order: the first VM is created at second 100, 3 VMs at second 300, 2 more VMs at second 500 and, finally, the last VM at second 800. Each virtual machine runs a job that executes a loop with N iterations; each iteration performs several arithmetic operations. The EMOTIVE Scheduler (which decides the placement of the VMs) used in this test is round-robin.
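The round-robin placement used in this test simply alternates new VMs between the available nodes. A sketch (node names are illustrative placeholders):

```python
# Sketch of a round-robin VM placement policy over two nodes, as used in
# this test. Node names are illustrative placeholders, not real hostnames.
from itertools import cycle

class RoundRobinScheduler:
    def __init__(self, nodes):
        self._next = cycle(nodes)   # endless rotation over the node list

    def place(self, vm_id: str) -> str:
        """Assign the next node in the rotation to the given VM."""
        return next(self._next)

sched = RoundRobinScheduler(["xeon-node", "atom-node"])
for vm in ["vm1", "vm2", "vm3", "vm4"]:
    print(vm, "->", sched.place(vm))
# vm1 -> xeon-node, vm2 -> atom-node, vm3 -> xeon-node, vm4 -> atom-node
```

Round-robin is deliberately power-agnostic: in the hybrid test it spreads VMs evenly over the Xeon and the Atom node, which is what makes the resulting consumption fall between the two homogeneous solutions.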
Results
We now present the results of the experiments. The graphic [Figure 20] shows the power consumption over time (including the two servers) for the three different architectures. It shows that the Xeon processors have the highest consumption, while the Atom processors achieve the most efficient power usage, although the Atom architecture has lower performance than the Xeon. As expected, the power consumption of the hybrid solution lies between the other two, and it has better performance than using only the Atom processors. We therefore look in more detail at the hybrid solution and its possibilities.
The other graphic [Figure 21] is a zoom of the graphic above [Figure 20], where we increase the scale to show in more detail the power consumption of the Atom solution, because in [Figure 20] it is hard to appreciate the variability of the Atom consumption relative to the other two architectures.
Figure 22 details the performance of the same benchmark: Xeon, hybrid and Atom performance. To better see the relation between power consumption and performance, compare Figure 20 with Figure 22; the results are very much as expected. In Figure 22, the X axis shows when each virtual machine was created, its duration and when it was destroyed, while the Y axis shows the CPU utilization.
The Xeon solution is faster than the Atom one, an expected result, because the Xeon CPU is designed for performance and the Atom CPU for lower power consumption. The Xeon is faster in the virtual machine execution, and its CPU utilization is lower than the Atom's. It is very interesting to play with both solutions to find the best relation between power and performance. The hybrid solution yields interesting results: three of its VMs achieve the same performance as in the Xeon solution, the other three the same performance as in the Atom solution, and yet the hybrid solution consumes less power than the Xeon solution.
Figure 20 - Xeon-Atom-Hybrid comparison
Figure 21 - Atom Zoom
Xeon performance
Hybrid performance
Atom performance
Figure 22 - Performance of 3 solutions
Ratio
To have a better understanding of the relation between power consumption and performance, we calculate the performance (measured in executions of the benchmark per day) per watt ratio. This new ratio, presented in Table 4, helps to find the best tuning configuration with both solutions. The ratio demonstrates that Atom has a better performance-per-watt relation than the Xeon solution; however, in the computing world, performance is nowadays a more important feature than power consumption. Using the hybrid solution we can approach the Xeon performance while improving the power consumption.
A Xeon machine is able to run heavy tasks and takes less time than an Atom to finish a job from the benchmark: the execution of a single VM takes 50 seconds on the Xeon node, while on the Atom node it takes 165 seconds. The power consumption of a single Xeon node running this test is 268,8 Watts, while that of a single Atom node is 38,7 Watts.
This benchmark is based on a general case, not a specific one: each virtual machine does the same kind of job, it does not matter on which server it runs, and every server runs the same workload. It is hard to draw conclusions for such a general case; the computing needs must be studied, and they depend on the type of service, calculation, etc.
                                    XEON              ATOM              HYBRID
Average Watts (2 nodes)             537,6 W           77,4 W            205,6 W
Time                                490 secs          1270 secs         880 secs
Performance (86400/Time)            176,33 exec/day   68,03 exec/day    98,18 exec/day
Ratio (Performance/Watts per node)  0,65              1,76              0,95   exec/day per watt
Table 4 – Average ratio
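The figures in Table 4 can be reproduced from the measured times and power draws. Note that the ratio row matches the performance divided by the average power of a single node (half of the two-node figure); the table's last digits appear to be truncated rather than rounded. A sketch of the computation:

```python
# Reproducing Table 4: performance = 86400 / time (executions per day), and
# the ratio uses the per-node average power (half of the two-node figure),
# which is what matches the published table values.
def table4_row(time_s: float, watts_two_nodes: float):
    performance = 86400 / time_s                  # benchmark executions per day
    ratio = performance / (watts_two_nodes / 2)   # executions/day per watt (per node)
    return round(performance, 2), ratio

for name, t, w in [("XEON", 490, 537.6), ("ATOM", 1270, 77.4), ("HYBRID", 880, 205.6)]:
    perf, ratio = table4_row(t, w)
    print(f"{name}: {perf} exec/day, {ratio:.3f} exec/day per watt")

# Energy per single job on a single node, from the measured figures:
#   Xeon: 268.8 W * 50 s  = 13440 J per job
#   Atom: 38.7 W  * 165 s = 6385.5 J per job
# so per job the Atom uses less than half the energy despite being slower.
```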
Conclusion
Using both architectures together can achieve better power consumption than the Xeon solution and better performance than the Atom solution, and the tuning of such hybrid architectures deserves further study. This experiment therefore aims to demonstrate that mixing low-power systems and high-performance systems in the same data center is a good approach for saving energy. However, it depends on the workload that needs to run. On the one hand, it is better to run HPC tasks on Xeon architectures, because they achieve much better performance than Atom processors for this kind of task; moreover, these tasks may have deadlines that cannot be met by Atom hosts. On the other hand, Atom processors, which have much more efficient power consumption, are better suited to environments that stress memory or disk access rather than CPU performance, and to applications with lower performance requirements or more relaxed SLAs, such as web servers and databases.
Finally, from the experiment in (29) it is derived that dealing with heterogeneous resources is a big challenge; the model presented there is able to automatically balance the workload among nodes with different features, such as power consumption and performance, which lets the provider obtain a better overall benefit.
In general, we performed this comparison to better understand the behavior of hybrid solutions and their possibilities. These benchmarks are general and synthetic, with a heavily synthetic workload; they do not aim to reproduce a real environment, but a synthetic one that helps us understand the hybrid possibilities.
In the next benchmark we study a more specific scenario and compare two Cloud solutions.
4.6.3 Middleware scheduling comparison (OpenNebula and EMOTIVE)
Introduction
In the next comparison, we compare the scheduling policies of two Cloud middlewares from an energy consumption point of view: the EMOTIVE middleware and OpenNebula. We chose OpenNebula (ONE) because it is probably the best and most used open-source middleware at the moment. As before, this test runs on different computer architectures (Xeon, Atom and hybrid), and all tests run on the Xen hypervisor. In this case, we want to evaluate the behavior of each middleware scheduler according to its power consumption and its performance, and whether we can take advantage of hybrid architectures to obtain better power efficiency without losing much performance. It is therefore necessary to create a benchmark that exposes the differences between the schedulers.
To perform these tests we chose the EMOTIVE Scheduler prototype (29). This scheduler was created to improve power consumption during task execution in a hybrid Cloud (with two or more servers). We compare it with one of the three schedulers included in the OpenNebula release (36), called the Packing policy, which is the one best suited to power-efficient computation. The scheduling policies that OpenNebula incorporates are very similar to those of most virtualization middleware products, such as Citrix XenServer, VMware, etc., so we can consider this a comparison between the EMOTIVE Scheduler prototype and a generic Cloud scheduler.
We chose the OpenNebula Scheduler (as commented earlier) because OpenNebula is excellent, mature software, nowadays consolidated in the open IaaS community. Moreover, we have the advantage of working together with its developers in the NUBA national project.
In this comparison, we use the same hybrid scenario (1 Xeon – 1 Atom) as in the previous one.
Before presenting the results, we explain how the schedulers operate and how they consolidate the virtual machines. The EMOTIVE Scheduler uses a backfilling policy together with a smart algorithm to decide the placement of the virtual machines on the servers. In addition, this scheduler prototype can shut down servers when they are idle.
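The behavior just described can be sketched as a greedy, power-aware placement that favors the node with the lowest power draw that still has capacity, powering off nodes left idle. This is a simplified illustration with made-up node data; the real EMOTIVE prototype's algorithm (29) is more elaborate:

```python
# Simplified sketch in the spirit of the EMOTIVE Scheduler prototype: place
# each VM on the lowest-power node with free capacity, and power off nodes
# that hold no VMs. Node names, power figures and capacities are illustrative.

class Node:
    def __init__(self, name, idle_watts, capacity):
        self.name, self.idle_watts, self.capacity = name, idle_watts, capacity
        self.vms, self.powered_on = [], True

def place(nodes, vm_id):
    """Pick the cheapest (lowest power) node that still has room for a VM."""
    candidates = [n for n in nodes if len(n.vms) < n.capacity]
    best = min(candidates, key=lambda n: n.idle_watts)
    best.powered_on = True          # wake the node if it was shut down
    best.vms.append(vm_id)
    return best.name

def shutdown_idle(nodes):
    for n in nodes:
        if not n.vms:
            n.powered_on = False    # prototype feature: power off idle servers

nodes = [Node("xeon", 268.8, 6), Node("atom", 38.7, 6)]
print(place(nodes, "vm1"))                            # atom: lowest power, has room
shutdown_idle(nodes)
print([(n.name, n.powered_on) for n in nodes])        # xeon powered off while idle
```

A simple greedy rule like this already captures why a power-aware scheduler beats a random or round-robin one on heterogeneous nodes: the placement decision, not the hypervisor, determines which machine burns watts.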
The OpenNebula Scheduler is very similar, but it does not have a smart algorithm to choose the best server on which to place the virtual machines, and it cannot shut down idle nodes. Note that the behavior of the two schedulers can coincide, because the OpenNebula Scheduler may randomly pick the correct node to run the tasks. With only two nodes in the test environment, the OpenNebula Scheduler has a 50% chance of choosing the correct node; with more than two nodes, its probability of success decreases.
Accordingly, in order to have a fair comparison, we compare the worst case of both schedulers. For the OpenNebula Scheduler, the worst case is when it chooses the wrong server node; the EMOTIVE Scheduler's algorithm always chooses the best server on which to run the tasks in order to reduce power consumption. It is also important to note that the feature to shut down idle servers is not yet in production in EMOTIVE, as it is a preliminary version, and OpenNebula does not have this feature in its current version either, though it can be incorporated via a plugin extension. We therefore simulate this feature in both middlewares, because we expect it to become a generic requirement in the future.
Therefore, the graphics in Figure 23 and Figure 24 show the worst-case results of both schedulers.
Workload
The workload in this benchmark is similar to the previous ones. Here we launch a set of 9 virtual machines, each executing a job that performs N arithmetic operations. The creation and execution order of these virtual machines is as follows: initially we launch the first VM, and when it has finished, the second VM starts. The third VM starts when the second has finished, but some time later the fourth VM runs together with the third. When these VMs have finished, we run the last 2 VMs simultaneously. Later we repeat the same workload sequence, but launching only the first three VMs; normally these final three VMs run on the other node. Notice that we limited the Xeon RAM to be equal to that of the Atom server, to make the environments more comparable; we configured the Xeon Domain-0 RAM to achieve this.
This benchmark uses 2 servers. When the benchmark starts, virtual machines run on one server, and each virtual machine runs a job that spawns threads, each executing a loop of arithmetic calculations; the work is split into 5 threads (each with a job) to fully stress the available CPUs. All virtual machines start sequentially, in serial mode. Once the jobs finish, the virtual machines are not destroyed: in this way, we simulate virtual machines offering some service over a long period of time. These virtual machines consume RAM, and when they exhaust the memory of one server, the next VM runs on the next free node. The workload is composed of 9 virtual machines: the first 6 fill one node, and the other 3 then run on the other node.
Given the better performance of the Xeon processors, the job that runs in each virtual machine finishes in less than 50 seconds on the Xeon machine, while it takes 165 seconds on the Atom one. We needed a balanced benchmark for comparing both processor architectures, because with a powerful workload the Atom processor saturates very quickly; in this benchmark, it is the Atom architecture that defines the maximum load limit. Xeon processors have superior performance capacity, but we focus this benchmark on improving green capacity, not on obtaining maximum performance.
Results
The first graphic shows the worst-case result of the ONE Scheduler, and the second the worst case of the EMOTIVE Scheduler. It is important to note that in the best case both schedulers obtain similar results, but with more than two servers there is a higher probability that OpenNebula's results will be worse. This occurs because both EMOTIVE and ONE use a backfilling scheduler, but only EMOTIVE has a smart management algorithm to choose the best node on which to place the VMs.
In the OpenNebula results [Figure 24] there is an average power consumption of 288 Watts, with a benchmark execution time of about 890 seconds. EMOTIVE [Figure 23] achieves a much better power consumption of only 81 Watts, but the benchmark execution time increases to about 1250 seconds.
When using only one architecture (Xeon-Xeon or Atom-Atom), we get the same results with both middlewares, because both use the same scheduling policy, and the smart algorithm of the EMOTIVE Scheduler is specialized for hybrid architectures, having no effect on homogeneous ones. With the Xeon-Xeon solution we get the best performance, 670 seconds, half the time EMOTIVE needs on the hybrid system; with the Atom-Atom solution we get low performance, 1865 seconds. In conclusion, the intermediate Xeon-Atom solution decreases performance slightly but has good power consumption. We go further on this by evaluating the performance-per-watt ratio for all these possibilities.
The hybrid solution could be a good solution if we use real systems as web servers, data bases, etc.
That the most important feature for this is the memory access to disc and is not more important to have
a big processing calculation capacity.
Figure 23 - EMOTIVE Scheduler
Figure 24 - OPENNEBULA Scheduler
Ratio
Table 6 shows the performance-per-watt ratio, calculated from the power values in [Table 5] and the
execution times in [Table 6]. The best ratio is obtained by the Atom-Atom solution, but the EMOTIVE
hybrid gets a ratio very close to it, which is a good result. In contrast, the green ratio is harmed
in the OpenNebula case with the hybrid architecture: the ONE ratio is even worse than the Xeon-only
solution. This demonstrates that mixing both computer architectures can produce worse results than
using a single architecture if the scheduling is not good. A smart management layer is therefore
necessary to take advantage of hybrid solutions: power is nothing without control. There is a huge
space for research in these topics. In this case we show that homogeneous architectures can beat
hybrid solutions in both power and performance, because although a Xeon consumes much more than an
Atom, it also finishes its jobs much faster.
It is also important to mention that OpenNebula is an open-source project in which many researchers
are working to improve its green capabilities, so we can expect these results to improve in the
near future.
             XEON-XEON   ATOM-ATOM   HYBRID (best case)   HYBRID (worst case)
EMOTIVE      361.9 W     50.9 W      avg. 81.2 W          avg. 81.2 W
OpenNebula   361.9 W     50.9 W     avg. 81.2 W          avg. 288 W
Table 5 - Power
                           XEON-XEON        ATOM-ATOM        EMOTIVE hybrid   OpenNebula hybrid
Time                       670 s            1865 s           1250 s           890 s
Performance (86400/Time)   128.95 exec/day  46.33 exec/day   69.12 exec/day   97.08 exec/day
Ratio (Performance/Power)  0.36 exec/day/W  0.91 exec/day/W  0.85 exec/day/W  0.34 exec/day/W
Table 6 - Performance in time - RATIO (*) higher is better
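The derivation behind Table 6 can be reproduced directly from the measured times and average powers: performance is the number of benchmark executions that fit in one day (86400 seconds), and the ratio divides that performance by the average power draw:

```python
# Reproduce Table 6: performance = 86400 / time, ratio = performance / power.
measurements = {
    # solution: (completion time in seconds, average power in watts)
    "Xeon-Xeon":         (670,  361.9),
    "Atom-Atom":         (1865, 50.9),
    "EMOTIVE hybrid":    (1250, 81.2),
    "OpenNebula hybrid": (890,  288.0),
}

for name, (time_s, power_w) in measurements.items():
    perf = 86400 / time_s   # executions per day
    ratio = perf / power_w  # executions per day per watt
    print(f"{name}: {perf:.2f} exec/day, {ratio:.2f} exec/day/W")
```

The printed values match Table 6 up to rounding.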
50
4.6.4 Middlewares qualitative comparison
Tool | Eucalyptus | OpenNebula | EMOTIVE Cloud | OpenStack
Main feature | implements cloud semantics | virtualization control framework | virtualization control framework | simple to implement and massively scalable
Highlights | similar to Amazon EC2 | full framework | scheduler research | hypervisors, virtual networks and filesystems, with the computing engine orchestrating all of them
Provisioning model | immediate | best-effort | best-effort | best-effort
Interfaces | EC2 SOAP WS API, S3 and Elastic Block Store (EBS) | EC2, Sunstone, vCloud, OCCI API (storage, virtualization, network) | WS REST / OCCI API (virtualization, network) | S3 and EC2
Support for hybrid Cloud | no | Amazon EC2 and ElasticHosts | Amazon EC2 | S3 and EC2 this year
Hypervisors | Xen, KVM, VMware | Xen, XenServer (beta), KVM, VMware/ESX | Xen, KVM and VirtualBox | Xen, XenServer, KVM, Hyper-V, VMware/ESX
Programming framework | Java and C | Ruby, with Java bindings wrapping the XML-RPC | Java and bash script | bash script, Python, others
Flexible architecture | no | yes | yes | yes (new plugins are emerging)
GUI | no | yes | beta | yes
Command-line interface | yes (unix shell) | yes (unix shell) | similar (Java client app.) | yes
Image management | no (repository) | yes | only in Debian | yes
Scheduling | yes | external | yes | yes
Placement policies | round-robin and greedy | Packing, Striping, Load-aware, Haizea, ecosystem, ... | high availability, backfilling, round-robin and other research | -
Live migration | no | yes | yes | yes
High availability and backfilling | no | yes | yes | yes
Architecture | centralized | centralized | decentralized and modular | -
Configuration | easy in Ubuntu, medium in other OSs | easy in Ubuntu, medium in other OSs | medium | beta version
Storage | S3 | NFS, SCP, ... | NFS, SFTP, FTP, Hadoop FS, ... | yes
VLAN | no | yes | yes | yes
Current version | v1.6.2 | v3.0 beta | v1.2 | current release (Cactus), next milestone (Diablo) in Q3 2011
APIs used | EC2 | Libvirt and EC2 | Libvirt and EC2 | Libvirt and Xen API
Main contributors | open community and Ubuntu | open community, Ubuntu and UCM | BSC and UPC | Rackspace and NASA
Community | big | big | BSC and UPC | more than 100 developers and architects
Popular deployments | yes | CERN, NIKHEF, D-Grid, SARA, SURF, ESAC-ESA, NCHC, CRS4, CESGA, CESCA, MPS, TID, EGEE, RESERVOIR, StratusLab, OGF OCCI, D-GRID, VENUS-C, NUBA | only research projects (NUBA, BREIN, VENUS-C, OPTIMIS) | growing a lot (Rackspace, NASA, Rightscale, Citrix, Dell, NTT Data, PEER 1, Softlayer, Cloud.com, iomart Group, Opscode, Puppet Labs, Zenoss, AMD, Intel, Spiceworks, CloudSwitch, ...)
Documentation | community, Eucalyptus site, Ubuntu Enterprise Cloud (UEC) | community, OpenNebula site, Ubuntu Enterprise Cloud (UEC) | EMOTIVE web site | http://nova.openstack.org/
Licence | BSD | Apache 2 | LGPL | Apache 2
OS | Linux, Windows | Linux | Linux | Linux
OS (Linux) | CentOS, Debian, OpenSuSE, RHEL, SLES, Ubuntu (integrated in Ubuntu UEC) | CentOS, Debian, OpenSuSE, RHEL, SLES, Ubuntu (integrated in Ubuntu UEC) | Debian, Ubuntu, Fedora, RedHat, CentOS | CentOS, Debian, OpenSuSE, RHEL, SLES, Ubuntu
Default placement policies | default placement policies | configurable placement policies: initial placement based on requirement/rank policies to prioritize the resources most suitable for the VM using dynamic information, plus dynamic placement to consolidate servers | simple scheduling and high-availability scheduling | several to choose from (simple, chance, etc.), but nova-scheduler is evolving for future releases
Configurable placement policies | no | support for any static/dynamic placement policy | easy RESTful interface to extend with some development | a hot development area for future releases of OpenStack Nova
OVF support | no | yes | yes (alpha) | -
Administration interface | only EC2 can be used (i.e. no suspend or migration of any kind) | a superior administration interface (migrate, suspend VM, ...) | a superior administration interface (migrate, suspend VM, ...) | yes
Advanced contextualization | no | complete | basic | basic
Powerful API to extend | basic (EC2 calls) | yes | yes | yes (http://www.virtualizationtimes.com/does-openstack-change-cloud-game)
User management / authorization and authentication | yes | yes | no | Amazon API, VMware's vCloud, Eucalyptus, OpenNebula and others
MySQL support | no | MySQL Lite and MySQL | no (beta) | SQLite3, MySQL and PostgreSQL
5 Conclusions
5.1 Summary
Computer Science is a discipline that evolves very quickly and is usually focused on performance
growth. Its technological impact has greatly influenced our society, and it is important to also
consider the power consumption of computing and cloud computing. Improving this parameter is more
difficult than others, since research on green computing has to deal with physical laws, as in most
engineering disciplines (37). For this reason there is nowadays a growing effort to improve the
ecological aspects.
Our tests show a first approach to using hybrid architectures to improve power consumption without
losing performance. In this project we also contribute new features that improve interoperability
between Clouds, support for new hypervisors, and other capabilities, taking advantage of the
EMOTIVE modular architecture, which makes it easy to add new schedulers, interfaces, developments
and adaptations.
In general, this project gives a global vision of this type of IaaS project. It focused on evolving
the middleware with new features and new directions for improvement, always linked to the research
conducted by UPC and BSC. It should be clear that EMOTIVE does not aim to compete with products such
as OpenNebula or OpenStack: EMOTIVE is a testing and research tool, so all environments created with
EMOTIVE are pre-production environments. This framework is mainly used by BSC and UPC to do research
in Cloud Computing (21) (38) (39) (40) (41) (42) (43), primarily in Infrastructure as a Service
environments.
5.2 Publications
This section details a list of publications related to this master thesis:
Book chapter: EMOTIVE Cloud: The BSC's IaaS Open Source Solution for Cloud Computing. Àlex
Vaqué, Iñigo Goiri, Jordi Guitart and Jordi Torres. Universitat Politècnica de Catalunya (UPC) and
Barcelona Supercomputing Center (BSC), April 2011.
Presentation: Open Grid Forum 30 (OGF30), Brussels, November 2010. Open Cloud Computing Interface
presentations (OCCI-WG) - Toward Interoperable Clouds: the EMOTIVE Experience with OCCI.
Alexandre Vaqué.
Technical report: F. Julià, J. Roldan, R. Nou, O. Fitó, A. Vaqué, I. Goiri, J. Berral. "EEFSim:
Energy Efficiency Simulator". Research report UPC-DAC-RR-CAP-2010-15, June 2010.
5.3 Suggestions for future work
EMOTIVE needs to improve some features, for example its OVF compatibility: EMOTIVE currently has
OVF support only in an unstable alpha version. To be fully interoperable, we need more compatibility
with the most popular Cloud interfaces, such as the OCCI API, d-Cloud, vCloud and EC2. In contrast,
OpenNebula has many interface compatibilities and can adapt to many Cloud environments because it
supports OCCI, EC2, vCloud and Sunstone. While there are no defined standards, supporting many
interfaces is a good solution, but it has a cost in development time, so it will be necessary to pay
attention to the standardization process. OCCI and OVF are in a good position to become the leading
open standards. KVM may also become a de facto open-source standard hypervisor, because it evolves
inside the Linux kernel; Xen, in contrast, is losing market share while KVM is gaining strength,
although Xen support is now being integrated into Linux kernel 3.0.
Coming back to EMOTIVE features, it needs a RESTful web-service communication layer with user and
password authentication, in order to secure communications and manage users.
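As a sketch of what such authenticated communication could look like, the snippet below attaches HTTP Basic credentials to a REST request; the endpoint and credentials are purely hypothetical, since EMOTIVE does not expose this API yet:

```python
# Hypothetical example: attaching HTTP Basic credentials to a REST request.
# The endpoint URL and credentials are illustrative; EMOTIVE does not expose
# this API today, so this is only a sketch of the proposed feature.
import base64
import urllib.request

def authed_request(url, user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    return req

req = authed_request("https://emotive.example.org/vms", "user", "secret")
print(req.get_header("Authorization"))  # -> Basic dXNlcjpzZWNyZXQ=
```

In practice the request would of course be sent over TLS so the Basic credentials are not exposed on the wire.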
Another aspect is that EMOTIVE, in comparison with other middlewares, has no explicit storage
management. It would be interesting to add this feature using some open-source tool or by developing
it from scratch; with it, EMOTIVE could be 100% adapted to the OCCI API, since it already supports
the compute and network parts. Virtual network management, however, is still very basic, because the
Libvirt API offers limited network management. A good way to improve this could be the open-source
tool Open vSwitch (44), or using the Linux ebtables/iptables system as ONE does. But before
developing this, we would need to research virtual networks further if we want to progress in this line.
Libvirt has a Windows installation package under development, currently an experimental version. It
is interesting to note that EMOTIVE Cloud is written in Java, which is multi-platform and can run on
Windows. Since EMOTIVE Cloud is developed in Java on top of the Libvirt API, it is worth following
the evolution of Libvirt on Windows: in the future, could we deploy EMOTIVE Cloud environments on
the Windows operating system?
New middlewares and big communities are emerging, and OpenStack is now gaining strength. When we
began this master thesis OpenStack did not exist, and during its development OpenStack started to
emerge. OpenStack promises a lot and is an important rival for OpenNebula. For now, however, we
think OpenNebula is better positioned to become a standard and is the best open-source solution: it
is strong, experienced and ready to demonstrate that it is currently a better solution than
OpenStack, Eucalyptus and others. On the other hand, OpenStack is supported and funded by big
international companies.
KVM, OpenNebula, OpenStack and others evolve continuously and quickly. In only the few months spent
writing this project they published many new features and results; for example, OpenNebula released
v1.4, later v2.0, v2.2 and now version 3.0! They show aggressive growth and continuous development.
It has been demonstrated that Cloud Computing is not the future: it is the present.
Regarding green hardware: Intel's historical evolution was focused only on performance, but the
company is now also starting to improve CPU power consumption (45). On the other side, ARM-based
processors dominate the mobile chipset market and are beginning a modest deployment in the
enterprise server space, where Intel owns the majority of the market; these chips run at lower power
consumption than Intel's. It would be interesting to extend our benchmarks to this kind of CPU
architecture, because ARM has an interesting green architecture. Performance is no longer the only
variable: power consumption is now an important variable to consider, and Intel needs to improve on it.
While finishing this master thesis we read interesting news about SNIA/CDMI (46), an important
Cloud storage initiative that collaborates with OCCI to improve the storage interface. Other news
worth mentioning is OpenCompute (47): the Open Compute Project is a Facebook-led effort to create
open industry standards for data center hardware and design, based on Facebook's work at its new
Oregon data center. The project invites the community to share open information and build an open
ecosystem around data centers, in order to improve their PUE parameter.
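For reference, the PUE metric mentioned here is simply the ratio of total facility power to the power delivered to the IT equipment, with 1.0 as the ideal value; the figures below are illustrative, not measurements from this work:

```python
# PUE = total facility power / IT equipment power (ideal value: 1.0).
# The input figures are illustrative, not measurements from this thesis.
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(pue(1600.0, 1000.0))  # -> 1.6, a less efficient facility
print(pue(1100.0, 1000.0))  # -> 1.1, a highly efficient facility
```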
6 References
1. BREIN. (http://www.eu-brein.com).
2. OPTIMIS. (http://www.optimis-project.eu).
3. VENUS-C. (http://www.venus-c.eu).
4. NUBA. (http://nuba.morfeo-project.org).
5. Amazon Elastic Compute Cloud EC2. (http://aws.amazon.com/ec2).
6. OpenNebula. (http://www.opennebula.org).
7. Libvirt: The virtualization API (http://libvirt.org).
8. Above the Clouds: A Berkeley View of Cloud Computing. Michael Armbrust, Armando Fox,
Rean Griffith, Anthony D. Joseph, Randy H. Katz, Andrew Konwinski, Gunho Lee, David A.
Patterson, Ariel Rabkin, Matei Zaharia. University of California, Berkeley: Technical Report No.
UCB/EECS-2009-28, 2009.
9. Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as
the 5th utility. Rajkumar Buyya, Chee Shin Yeo, Srikumar Venugopal, James Broberg and Ivona
Brandic. The University of Melbourne, Australia. Manjrasoft Pty Ltd, Melbourne, Australia. Vienna
University of Technology, Austria. : s.n.
10. Future Of Cloud Computing. Opportunities For European Cloud Computing Beyond 2010. Keith
Jeffery [ERCIM], Burkhard Neidecker-Lutz [SAP Research]. European Comission : Expert Group
Report, Vol. Public Version 1.0.
11. How Different is Cloud Computing from Virtualization, and How Similar Are They? Schooff,
Peter. Ebizq. [Online] 2010. http://www.ebizq.net/blogs/ebizq_forum/2010/01/how-different-is-cloud-
computing-from-virtualization-and-how-similar-are-they.php. 1595005.
12. Virtualizacion y Cloud Computing. Rojas, Elisabeth. muycomputerpro.com. [Online] 2009.
http://muycomputerpro.com/Expertos/Virtualizacion-y-cloud-
computing/_wE9ERk2XxDAknN3JQerWkRhxoXgzngOBLb_ueRTWbBVu-Q-lPa5S-ouDOwpLjHQi.
13. Virtualization unlocks Cloud Computing. Bittman, Thomas J. [Online] 2009.
http://blogs.gartner.com/thomas_bittman/2009/08/11/virtualization-unlocks-cloud-computing.
14. Eucalyptus : A technical report on an elastic utility computing architecture linking your programs to
useful systems (2008) by Daniel Nurmi , Rich Wolski , Chris Grzegorczyk , Graziano Obertelli ,
Sunil Soman , Lamia Youseff , Dmitrii Zagorodnov.
15. The Eucalyptus Open-source Cloudcomputing System. D. Nurmi, R. Wolski, C. Grzegorczyk, G.
Obertelli, S. Soman, L. Youse, and D. Zagorodnov. 9th IEEE/ACM International Symposium on
Cluster Computing and the Grid (CCGrid 2009), Shanghai, China, May 18-21, p.
16. Open Grid Forum. Open Cloud Computing Interface (OCCI) Infrastructure, Version 1. Retrieved
March 22, 2011 from http://forge.ogf.org/sf/docman/do/downloadDocument/projects.occi-
wg/docman.root.drafts.occi_specification/doc16162.
17. The Green Grid. (http://www.thegreengrid.org).
18. GreenTI. (http://greenti.wordpress.com).
19. Two Google searches produce same CO2 as boiling a kettle. Don, Akira The. Telegraph. 2009.
www.telegraph.co.uk.
20. How dirty is your data? A Look at the Energy Choices. Greenpeace International, April 2011.
21. Energy-aware Scheduling in Virtualized Data Centers. I. Goiri, F. Julià, R. Nou, J.L. Berral, J.
Guitart, and J. Torres. pp. 58-67, Heraklion, Crete, Greece : s.n., September 20-24, 2010, Vol. 12th
IEEE International Conference on Cluster Computing (Cluster'10).
22. Impact of Virtualization on Data Center Physical Infrastructure. Tom Brey, IBM. Operations Work
Group, White Paper #27. s.l.: Schneider Electric, 2010. Rev 2010-0.
23. Leveraging the Cloud for Green IT: Predicting the Energy, Cost and Performance of Cloud
Computing. Amy Spellmann, Richard Gimarc, Mark Preston. Turnersville, USA: CMG (Computer
Measurement Group), 2009.
24. Spotlight: Eco-efficient IT and the greening of the cloud. William Fellows, Andy Lawrence. New
York: Symposium 2010, May 2010.
25. EMOTIVE Cloud. Autonomic Systems and eBusiness Platforms research line. Barcelona
Supercomputing Center (BSC). (http://www.emotivecloud.net).
26. SORMA. (http://www.sorma-project.eu).
27. Job Submission Description Language (JSDL) Specification, Version 1.0. Open Grid Forum.
Retrieved March 22, 2011 from http://www.gridforum.org/documents/GFD.136.pdf .
28. Distributed Management Task Force. Open Virtualization Format (OVF) Specification, Version
1.1.0. Retrieved March 22, 2011 from
http://www.dmtf.org/standards/published_documents/DSP0243_1.1.0.pdf .
29. Energy-efficient and Multifaceted Resource Management for Profit-driven Virtualized Data
Centers. Ínigo Goiri, Josep Ll. Berral, J. Oriol Fitó, Ferran Julià, Ramon Nou, Jordi Guitart, Ricard
Gavaldà, and Jordi Torres. Barcelona : Universitat Politècnica de Catalunya and Barcelona
Supercomputing Center, 2010.
30. Format Network (Libvirt). (http://libvirt.org/formatnetwork.html)
31. OpenVPN. OpenVPN Technologies, Inc. All Rights Reserved., 2002-2011. (http://openvpn.net).
32. PPTP Client. (http://pptpclient.sourceforge.net).
33. Rochwerger, B.; Caceres, J.; Montero, R. S.; Breitgand, D.; Elmroth, E.; Galis, A.; Levy, E.;
Llorente, I. M.; Nagin, K. & Wolfsthal, Y. (2009), "The RESERVOIR Model and Architecture for Open
Federated Cloud Computing". IBM Systems Journal, September 2009.
34. OCCI in EMOTIVE. Open Cloud Computing Interface. [Online] OCCI ® OGF. (http://occi-
wg.org/2011/03/22/occi-emotive).
35. Watts up? PRO. (http://www.wattsupmeters.com).
36. OpenNebula. Scheduling Policies 2.0. [Online] 2002-2011.
http://opennebula.org/documentation:archives:rel2.0:schg.
37. Dark Silicon and the End of Multicore Scaling. Hadi Esmaeilzadeh, Emily Blem, Renée St.
Amant, Karthikeyan Sankaralingam, Doug Burger. University of Washington, University of
Wisconsin-Madison, The University of Texas at Austin, Microsoft Research. In Proceedings of the
38th International Symposium on Computer Architecture (ISCA '11), 2011.
38. Support for Managing Dynamically Hadoop Clusters. David de Nadal Bou and Yolanda Becerra.
Master thesis UPC. September 2010.
39. High Availability on Virtualized Platforms with Minimal Physical Resource Impact. Javier Alonso,
Iñigo Goiri, Jordi Guitart, Ricard Gavaldà and Jordi Torres. Universitat Politècnica de Catalunya and
Barcelona Supercomputing Center. Barcelona, Spain.
40. Introducing Virtual Execution Environments for Application Lifecycle Management and SLA-
Driven Resource Distribution within Service Providers. Íñigo Goiri, Ferran Julià, J. Ejarque, M. de
Palol, R. Badia, Jordi Guitart, and Jordi Torres. Cambridge, Massachusetts, USA, : 8th IEEE
International Symposium on Network Computing and Applications (NCA'09). , July 9-11, 2009.
41. Multifaceted Resource Management for Dealing with Heterogeneous Workloads in Virtualized Data
Centers. Íñigo Goiri, J. Oriol Fitó, Ferran Julià, Ramón Nou, J. Ll. Berral, Jordi Guitart, and Jordi
Torres. 11th ACM/IEEE International Conference on Grid Computing. (Grid 2010). Brussels, Belgium,
October 25-29 : s.n., 2010.
42. Characterizing Cloud Federation for Enhancing Providers' Profit. Íñigo Goiri, Jordi Guitart, and
Jordi Torres. UPC and BSC. Miami, Florida, USA: 3rd IEEE International Conference on Cloud
Computing (CLOUD'10), July 5-10, 2010, pp. 123-130.
43. Integració de KVM, Libvirt i Monitoring de baix nivell a Emotive Cloud Platform (in Catalan).
Marc Gonzalez Mateo. Final degree project (PFC), UPC, March 2010.
44. Openvswitch. An Open Virtual Switch. 2009-2011. (http://openvswitch.org).
45. Crothers, Brooke. Intel adds low-power Xeon chips. (http://news.cnet.com/8301-13924_3-
10169584-64.html?part=rss&subj=news&tag=2547-1_3-0-20).
46. Cloud Data Management Interface. Advancing storage & information technology, SNIA. 2010,
April, Vol. SNIA Technical Position v1.0.
47. Facebook. Open Compute Project - Hacking Conventional Computing. (http://opencompute.org).
48. Virtual Infrastructure Management in Private and Hybrid Clouds. Borja Sotomayor, R. S. Montero,
Ignacio M. Llorente and Ian Foster. University of Chicago and UCM.
49. Outsourcing Business to Cloud Computing Services: Opportunities and Challenges. Hamid R.
Motahari-Nezhad, Bryan Stephenson, Sharad Singhal. HP Laboratories.
50. Green Computing. Jose Angel Fernández (Universidad Politécnica de Madrid).
(http://internetng.dit.upm.es/2009/01/16/green-computing)