
BNS CLOUD APPLICATION SYSTEM USING OPENSTACK

MALAYSIAN INSTITUTE OF INFORMATION TECHNOLOGY

UNIVERSITI KUALA LUMPUR

2014

BET (HONS) IN NETWORKING SYSTEMS 2014


BNS CLOUD APPLICATION SYSTEM USING OPENSTACK

MOHAMMAD HAFIZ BIN ZUBAIR

Dissertation Submitted In Fulfillment of the Requirements

For the Bachelor of Engineering Technology (Hons) Networking System

in the Malaysian Institute of Information Technology,

Universiti Kuala Lumpur

2014


DECLARATION PAGE

I declare that this dissertation is my original work and all references have been cited adequately as required by the institute.

Date: Signature:

Full Name: Mohammad Hafiz Bin Zubair

ID No. : 52208211079


APPROVAL PAGE

We have examined this dissertation and verify that it meets the program and institute’s

requirements for Bachelor of Engineering Technology (Hons) Networking System.

…………………………………………….

ROZIYANI BINTI RAWI

…………………………………………….

MUHAMMAD EZRA BIN MUHAMMAD ISMAIL

SYSTEM AND NETWORKING UNIVERSITI KUALA LUMPUR

MALAYSIAN INSTITUTE OF INFORMATION TECHNOLOGY


COPYRIGHT PAGE

Declaration of Copyright and Affirmation of Fair Use of Unpublished Research Work as stated below:

Copyright © 11/6/2014 by

MOHAMMAD HAFIZ BIN ZUBAIR (52208211079)

All rights reserved for BNS CLOUD APPLICATION SYSTEM USING

OPENSTACK

No part of this unpublished research may be reproduced, stored in a retrieval system,

or transmitted, in any form or by any means, electronic, mechanical, photocopying,

recording or otherwise without the prior written permission of the copyright holder

except as provided below:

i. Any material contained in or derived from this unpublished research may

only be used by others in their writing with due acknowledgement.

ii. MIIT Universiti Kuala Lumpur or its library will have the right to make and

transmit copies (print or electronic) for institutional and academic purposes.

iii. The Universiti Kuala Lumpur’s library will have the right to make, store in

retrieval system and supply copies of this unpublished research if requested

by other universities and research libraries.


DEDICATION

I would like to dedicate this project to my family for their love and support. This dissertation could not have been written without Madam Roziyani Binti Rawi, who not only served as my final year project supervisor but also encouraged me to do my best until the project was successfully completed. I would also like to thank her for the cooperation, knowledge, and information shared during my final year project. Last but not least, many thanks to everyone who gave their time and knowledge to help me achieve my final year project objectives. Thank you for all the effort.


ACKNOWLEDGMENT

Alhamdulillah and praise to Allah for giving me the strength to complete this final year project even though there were many obstacles along the way. The project delivered in this paper could not have been accomplished without the help of many individuals. I would like to acknowledge the contributions of the following groups and individuals in successfully completing this project.

First, I would like to thank my supervisor, Madam Roziyani Binti Rawi, for her help with the documentation parts of the project and for letting me use her room to house my server. Thank you for all the guidance, valuable suggestions, comments, advice, support and concern throughout the completion of this project.

Along this period, I would also like to thank my friends for their support throughout the development and implementation of the project.

My sincere appreciation also goes to all the MIIT lecturers involved in this Final Year Project for all their support and advice, not only in this project but also during most of my study period.

Finally, I am thankful to all those whose names have not been mentioned for their encouragement, criticism, and support for this project.

Thank you.


TABLE OF CONTENTS

DECLARATION
APPROVAL PAGE
COPYRIGHT PAGE
DEDICATION
ACKNOWLEDGEMENT
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
ABSTRACT
ABSTRAK

CHAPTER I: INTRODUCTION
1.1. Introduction
1.2. Objective
1.3. Scope
1.4. Problem Statement
1.5. Proposed Solution
1.6. Limitation

CHAPTER II: LITERATURE REVIEW
2.1. Overview
2.2. Introduction to Cloud Computing
2.3. Cloud Computing Performance
2.4. Private Cloud Computing
2.5. Open Source Cloud Computing Management Platforms
2.6. Comparison between Cloud Computing Management Tools
2.7. OpenStack and OpenNebula
2.8. Cloud Computing Security Challenge

CHAPTER III: METHODOLOGY
3.1. Introduction
3.2. Methodology
3.2.1. Initiation
3.2.2. Research
3.2.3. Planning
3.2.4. Design
3.2.4.1. Cloud Computing Server
3.2.4.2. Existing Network
3.2.4.3. Users
3.2.5. Development
3.2.5.1. OpenStack Architecture
3.2.5.2. All-in-one Server
3.2.5.3. Monitoring
3.2.5.4. OpenStack Glance
3.2.5.5. Authentication
3.2.5.6. Security
3.2.6. Implementation
3.2.6.1. Client Side
3.2.6.2. Server Side
3.2.6.3. Network Side
3.2.7. Testing
3.2.8. Analysis
3.3. Work Breakdown Structure (WBS)
3.4. Gantt Chart
3.5. Project Resources
3.5.1. Hardware
3.5.2. Software
3.6. Project Costing and Budget
3.6.1. Cost for Hardware
3.6.2. Cost for Software

CHAPTER IV: DEVELOPMENT/PROTOTYPE
4.1. Introduction
4.2. Cloud Computing Server
4.3. Development Process
4.3.1. Installation Requirement
4.3.2. Preparing All-in-one: Dedicated Hardware Server
4.3.3. Creating Basic Virtual Machine Image
4.3.4. Importing Image to the OpenStack Glance
4.3.4.1. Logon to OpenStack
4.3.4.2. Navigate to Image Tab
4.3.4.3. Launching an Instance
4.3.5. Connecting to the Instance
4.3.5.1. VNC Viewer
4.4. Monitoring Performance
4.4.1. Network Management Software (NMS)
4.4.2. Install SNMPD Service on OpenStack Server
4.4.3. Configure PRTG to Monitor OpenStack Server

CHAPTER V: TESTING AND RESULT
5.1. Introduction
5.2. Testing Method Approach
5.3. Creating Instances
5.3.1. Testing
5.3.2. Result
5.3.3. Analysis
5.4. Common Approach
5.4.1. Testing
5.4.2. Result
5.4.3. Analysis
5.5. Stressing OpenStack Server
5.5.1. Testing
5.5.2. Result
5.5.3. Analysis

CHAPTER VI: CONCLUSION AND RECOMMENDATION
6.1. Conclusion
6.2. Recommendation

REFERENCES
GLOSSARY
APPENDIX A: GANTT CHART
APPENDIX B: UBUNTU SERVER 12.04.4 INSTALLATION
APPENDIX C: OPENSTACK SYSTEM OVERVIEW
APPENDIX D: ACADEMIC RESEARCH PAPER


LIST OF TABLES

Table 2.1: Comparison of cloud computing tools
Table 2.2: Comparison of OpenStack and OpenNebula features
Table 2.3: Overview of literature reviews
Table 3.1: The components involved throughout this project and the relationships between them
Table 3.2: Required installation and configuration of the components
Table 3.3: Components involved on the server side
Table 3.4: Networking devices
Table 3.5: Detailed information on each hardware item
Table 3.6: Software requirements for each component
Table 3.7: Hardware costing
Table 3.8: Software costing


LIST OF FIGURES

Figure 2.1: Three cloud computing infrastructure
Figure 2.2: Cloud computing overview
Figure 3.1: RAD Methodology Model
Figure 3.2: Cloud computing diagram
Figure 3.3: Work Breakdown Structure diagram
Figure 4.1: Network diagram of the OpenStack implementation
Figure 4.2: All the necessary files retrieved using git
Figure 4.3: Modify the local.conf with the following configuration
Figure 4.4: Installation process is started
Figure 4.5: Example of the scripted installation process
Figure 4.6: Summary of ./stack.sh
Figure 4.7: OpenStack login page at 10.6.2.230
Figure 4.8: Fill in information during the installation process
Figure 4.9: Red Hat VirtIO is the default adapter to provide connection
Figure 4.10: Installation of GNS3
Figure 4.11: Login using the username and password, both admin
Figure 4.12: Create an image pop-up filled with the necessary information
Figure 4.13: Example of how to create an instance
Figure 4.14: Summary of the created instance
Figure 4.15: Enter the VNC Server IP address
Figure 4.16: PRTG interface
Figure 4.17: First step to add a new device to PRTG
Figure 4.18: Example of device configuration
Figure 4.19: Sensors discovered at the OpenStack server
Figure 4.20: The current active sensors
Figure 5.1: Running instances handled by OpenStack
Figure 5.2: List of images available in the OpenStack server
Figure 5.3: Launch instance pop-up form
Figure 5.4: Summary of the Windows7-VNC image converted to the Win7-1 instance
Figure 5.5: VNC Viewer interface
Figure 5.6: Select location in Win7-1
Figure 5.7: Inside the instance
Figure 5.8: CPU utilization from 1:20AM to 2:20AM
Figure 5.9: Physical memory utilization from 1:20AM to 2:20AM
Figure 5.10: Bandwidth utilization from 1:20AM to 2:20AM
Figure 5.11: Three Cisco 3725 series virtual routers
Figure 5.12: CPU utilization from 10:40PM to 11:35PM
Figure 5.13: Physical memory utilization from 10:40PM to 11:35PM
Figure 5.14: Bandwidth utilization from 10:40PM to 11:35PM
Figure 5.15: Sixth and seventh instances created
Figure 5.16: Hypervisor summary
Figure 5.17: HeavyLoad software stressing CPU usage
Figure 5.18: Error occurred when starting the virtual router
Figure 5.19: CPU utilization from 9:40PM to 10:30PM
Figure 5.20: Physical memory utilization from 9:40PM to 10:30PM
Figure 5.21: Bandwidth utilization from 9:40PM to 10:30PM


LIST OF ABBREVIATIONS

AIO: All-in-one
CPU: Central Processing Unit
IaaS: Infrastructure as a Service
KVM: Kernel-based Virtual Machine
Lab: Laboratory
MIIT: Malaysian Institute of Information Technology
NMS: Network Monitoring System
OS: Operating System
PaaS: Platform as a Service
RAM: Random Access Memory
SSH: Secure Shell
SNMPD: Simple Network Management Protocol Daemon
SaaS: Software as a Service
UniKL: Universiti Kuala Lumpur
VCPU: Virtual Central Processing Unit
Instance: Virtual Machine
VNC: Virtual Network Computing


ABSTRACT

This study focuses on how cloud computing works and implements it in a real-world situation where the system can be accessed throughout the UniKL City Campus MIIT and used by students to do their work or assignments. The methodology used in this project is the Rapid Application Development (RAD) model because it provides continuous development and documentation to be reviewed about the performance of the system. The methodology consists of eight phases: initiation, research, planning, design, development, implementation, testing and analysis. The project is developed using the OpenStack cloud management software on a single all-in-one hardware server that has the capability to create five to six instances. The project is tested in a controlled environment where each of the instances is loaded with the GNS3 and HeavyLoad software to simulate a real workload in terms of CPU utilization, RAM usage and network load. During the session, the server is also monitored using a network monitoring system tool to capture its state. Even though the server is pushed to its limits, it can still create additional instances using the available resources, but the created instances face problems when loading compute-intensive applications such as GNS3, which fails to start its virtual routers.


ABSTRAK

Kajian ini ialah untuk mengkaji pada bagaimana pengkomputeran awan

bekerja dan melaksanakan ia di keadaan dunia sebenar di mana sistem boleh diakses

di seluruh UniKL City Campus MIIT dan boleh digunakan oleh pelajar untuk

melakukan kerja-kerja atau tugasan-tugasan mereka menggunakan sistem itu. Kaedah

yang digunakan dalam projek ini ialah Rapid Application Development (RAD) model

kerana ia menyediakan pembangunan berterusan dan dokumentasi untuk dikaji semula

tentang prestasi system ini. Kaedah ini terdiri daripada lapan fasa yang mana ia adalah

permulaan, penyelidikan, perancangan, merangka, membangunkan, menguji,

melaksanakan dan analisis. Projek ini di bangun menggunakan pengurusan perisian

awan OpenStack di satu pelayan perkakasan yang semua dalam satu yang

berkebolehan mewujudkan lima kepada enam “instance”. Projek ini diuji di satu

persekitaran terkawal di mana setiap “instance” dibebankan mengguna perisian GNS3

and HeavyLoad untuk mensimulasikan pekerjaan sebenar memuatkan penggunaan

CPU, penggunaan RAM dan rangkaian. Semasa sesi itu, ia juga dipantau

menggunakan aplikasi sistem pemantauan rangkaian untuk memeriksa keadaan

pelayan. Walaupun pelayan sedang berada di had maksimum, ia masih boleh mencipta

“instance” tambahan menggunakan sumber-sumber yang ada tetapi “instance”

tersebut akan menghadapi sedikit masalah apabila memuatkan aplikasi seperti GNS3

di mana ia gagal memulakan “routers” maya.


CHAPTER I: INTRODUCTION

1.1. Introduction

Nowadays, there are many applications, particularly for networking students such as those in the Bachelor of Networking Systems (BNS) programme, that require a high-specification computer to run. Therefore, cloud computing is needed in the UniKL environment so that it can be fully utilized by BNS students and they do not have to purchase an expensive laptop just to run these applications.

For this final year project, it was necessary to create a cloud computing server that BNS students can utilize as a processing station to run applications that require high computing resources, such as GNS3. Students can use the computers in any available lab, or their own laptop with VNC Viewer installed, to use this cloud computing service as long as they are on the UniKL network.

To achieve this, the server will use the Infrastructure as a Service (IaaS) and Software as a Service (SaaS) platforms to provide the cloud service to the students. The server will use an all-in-one hardware OpenStack configuration, which means all the necessary components to run the cloud are installed on one machine, and Ubuntu Server 12.04.4 will be used as the main operating system.

1.2. Objective

· To study and understand how to configure, set up and apply the OpenStack system to the existing UniKL network.

· To develop the BNS Cloud Server system using Linux Ubuntu.

· To analyze the performance of the BNS Cloud Server System among a control group of BNS students.


1.3. Scope

· The cloud management system for the cloud computing will be OpenStack.

· The testing will be done in a controlled environment involving a group of BNS students who use applications such as Packet Tracer and GNS3, to make sure the system runs as intended.

· The performance of the server will be tested when it is utilized by a number of students, with the server constantly monitored.

· The performance of the cloud server will be measured based on RAM usage, network load and CPU utilization using a network monitoring system (NMS) tool, PRTG.

· The ability to execute programs smoothly and the response time between the user and the virtual machine will not be covered.

1.4. Problem Statement

Many new applications today require a fast central processing unit (CPU) and a large amount of RAM just to meet their requirements and run smoothly, which is costly. Based on observation and the experience of using the computers in the UniKL lab, many of them are still old machines and have had many problems running applications such as Packet Tracer, where the application simply gets stuck and does not load at all. For GNS3, a minimum of a dual-core or quad-core CPU with 2GB of RAM is required to run 3-5 routers, as mentioned on its website, and the computers in the UniKL lab are not equipped for this.

It is costly for the management to replace all these computers just to run the applications. In addition, BNS students often lose the thumb drives containing their important documents without making a backup while doing their work in the lab. By investing in a single, powerful cloud computing server, the requirements students need to do their work can be fulfilled by running all the required applications on the server.


1.5. Proposed Solution

· Using the low-specification computers in the UniKL lab, students should be able to run the latest applications for learning purposes pre-installed in the cloud computing system.

· BNS students can use an application from the server in real time, meaning that they can load the application anywhere on the UniKL network using the computers in the lab or their own laptops.

· BNS students are able to store their work on the server as long as they do not exceed the storage quota allocated to them.

1.6. Limitation

There are several limitations of this project that cannot be avoided.

· The server specification will only be able to support five users at a time for optimal usage.

· The performance of the server will be affected as the number of users connected to it increases.

· A quota will be allocated to each BNS student's account so that they can store their work as long as they do not exceed the allocated quota.

· The server can only be accessed internally by BNS students, which means this is a private cloud.


CHAPTER II: LITERATURE REVIEW

2.1. Overview

Cloud computing is a new way of delivering hardware and software through a computer network such as a local area network or the Internet. Start-up consumers with potentially explosive new ideas can utilize cloud computing without being required to invest significant capital to set up new hardware and software, or the resources to maintain the components over a long period [1]. This literature review focuses on how cloud computing will deliver services over the UniKL campus local area network to be accessed by BNS students.

2.2. Introduction to Cloud Computing

The collection of software and hardware in a data center is the true body of cloud computing [1]. A private cloud refers to an internal cloud server made available only within an organization, in this case the UniKL campuses. Cloud computing is designed to allow a developer or consumer to access the provided software, platform and infrastructure as a service over the computer network from anywhere. There are many features of cloud computing that are considered advantages, such as resource pooling, rapid elasticity, measured service, on-demand self-service and broad network access [2].

There are three categories in cloud computing:

I. Software as a Service (SaaS)

II. Platform as a Service (PaaS)

III. Infrastructure as a Service (IaaS)


(http://www.cloudave.com, 2013)

Figure 2.1: Three cloud computing infrastructure

In this FYP project, only one category will be covered, which is Infrastructure as a Service, to be provided to BNS students as a learning platform. IaaS is the collection of software and hardware, namely servers, storage, networks and operating systems, to be utilized by the consumer [2]; all the resources available in the cloud can be used by the consumer without buying all of that expensive equipment [3].

2.3. Cloud Computing Performance

According to Dr. Neeraj (2013), cloud computing combines the concepts of grid and distributed computing, where it supports the sharing of hardware, such as server memory and CPU processing, and software, such as data and applications, simultaneously among multiple clients [5]. In a private cloud, virtual machine (VM) based applications such as virtual desktops are widely used, where students only need a computer, whether their own or one in the UniKL lab, to access applications on the cloud server. Commercial cloud computing is internet-based and consists of thousands of computers connected to the cloud [5]; a private cloud, however, exists only within an organization, serves a specific number of computers, and is not open to the public.

According to Dr. Neeraj (2013), the overall performance of a cloud computing server depends on how light the interface is and how available the resources at the server are. Based on this conclusion, the main performance issue in cloud computing is the cloud server itself, where the hardware and software affect the cloud computing performance experienced by the clients. To overcome this, multiple powerful computers with many resources, such as CPU cores and memory, can be used in the system. However, for the BNS Cloud Application System, which implements a private cloud infrastructure where only ten clients will be used for testing purposes, a very powerful computer is not required as the main cloud computing server.

2.4. Private Cloud Computing

A private cloud is a cloud computing model dedicated to a particular organization that hosts the cloud server on its own premises. A private cloud allows the administration to host on the cloud any applications that low-specification computers cannot support, so that any computer in the organization can use the applications without needing a costly upgrade. The security of data on the cloud computing server is also protected because it is hosted inside the organization and maintained by its own IT personnel. Although a private cloud is more expensive to set up compared to using public clouds, an Info-Tech survey shows that 76% of IT decision makers will focus on the private cloud because it is more secure and provides more control over the cloud [8].


([4], 2011)

Figure 2.2: Cloud computing overview

Figure 2.2 shows an overview of the available cloud computing models, all with the same function, which is to share resources among clients.

2.5. Open Source Cloud Computing Management Platforms

The use of open source software has increased in recent years, so it is now possible to build a cost-effective and efficiently structured cloud management platform using the open source tools available today [4]. This article compares the existing open source cloud computing management platforms and offers some recommendations to the user. Four open source platforms for structuring cloud solutions are discussed and compared: Abicloud, Eucalyptus, OpenNebula and Nimbus.

Important factors in deciding on a platform are the open source licence the software products use and the security precautions they offer. Interoperability, as well as robustness against malfunction, makes it possible to transfer applications between clouds or to use several clouds at once; these play an important role in daily use. Compatibility with other platforms also keeps the virtual machines independent of the platform decision. The comparisons clearly state the differences between the platforms, and through these differences it is possible to draw conclusions about different fields of implementation. This shows that commercial enterprises are able to construct a cloud solution on the basis of an open source platform, and that open source is particularly suited to this purpose.

It is important to get detailed information regarding the features and differences of the products, and this article provides it from two aspects. Firstly, the explanation focuses on the different functional methods in detail, and then it turns to the commonalities and differences between the four platforms. In conclusion, open source platforms for cloud computing have not yet reached their limit in the development stage; the technology is still very new and currently requires an intensive development process.

2.6. Comparison between Cloud Computing Management Tools

With the increasing demand for cloud computing systems, many open source management tools have been developed for specific types of cloud computing models. These open source applications are free to use and provide cost-effective, independent, innovative, and flexible solutions which are easily implemented on an existing system [4].

A cloud computing management tool is used to construct and manage cloud architectures, allowing an administrator to set up their own cloud computing on existing systems. Over the years, many tools have undergone strong growth in the market and are available to choose from, ranging from commercial tools with better support to open source tools that are flexible enough to work with many underlying systems, such as different hypervisors [4].

There are four well-known open source cloud computing management tools:

a) Eucalyptus: stands for Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems; it is a project from the University of California, Santa Barbara, and is used to construct an open source private cloud platform [4].

b) OpenNebula: the European Union's flagship research project in the field of virtualization and cloud computing [4].

c) Abicloud: developed by Abiquo, which is also the company that maintains and enhances the cloud system [4].

d) Nimbus: a toolkit that makes it possible to convert a cluster into a cloud in the form of IaaS [4].

Table 2.1 summarizes and compares each of the available cloud computing management tools for a better view and understanding of the features of the cloud systems.

Table 2.1: Comparison of cloud computing management tools

Architecture: Abicloud: distinctive web-based management function; Eucalyptus: hierarchically grouped from the CLC via the CC to the NC; OpenNebula: three modules contain all the components; Nimbus: three modules contain all the components.

Cloud type supported: Abicloud: public / private / hybrid cloud; Eucalyptus: private / hybrid cloud; OpenNebula: public / private / hybrid cloud; Nimbus: private / community cloud.

Hypervisors supported: Abicloud: VirtualBox, VMware, KVM, Xen; Eucalyptus: VMware, KVM, Xen; OpenNebula: VMware, KVM, Xen; Nimbus: Python, Bash, ebtables, libvirt, KVM, Xen.

Data / VM storage: Abicloud: ZFS (OpenSolaris), Logical Volume Manager; Eucalyptus: Walrus; OpenNebula: Network File System or Secure Copy; Nimbus: Cumulus file system backend storage.

User interface: Abicloud: Cloud Operator API; Eucalyptus: Euca2ools (CLI); OpenNebula: Command Line Interface (CLI); Nimbus: web services (Nimbus Web).

Robustness against error: Abicloud: no additional data backup; Eucalyptus: separated clusters to reduce correlated errors; OpenNebula: permanent database to store information about hosts and VMs; Nimbus: regular checks and backups for worker nodes.

Security: Abicloud: Code Access System (CAS); Eucalyptus: generates a public/private key pair for user authentication; OpenNebula: authentication by password, secure shell and RSA key pairings; Nimbus: Lightweight Directory Access Protocol, Public Key Infrastructure.

([4], 2011)

2.7. OpenStack and OpenNebula

Private cloud management has developed into widely used open source cloud platforms that can substitute for commercial clouds. This paper presents the functions of OpenStack and OpenNebula, compares them in several aspects, and gives some deployment recommendations. The research focuses on the features and a summary of both platforms. OpenStack is open source software for cloud providers that is used to set up cloud computing and storage, while OpenNebula is an open source project aimed at building an industry standard for managing complex cloud computing infrastructure, offering flexible ways to build public or private clouds [10].

Table 2.2 shows a detailed comparison between both cloud management platforms.


Table 2.2: Comparison of OpenStack and OpenNebula features

Provenance: OpenStack: includes three major parts: Compute, Object Storage and Image Service; OpenNebula: an accumulation of virtual machine management and distribution infrastructure for data centers.

Architecture: OpenStack: straightforward, stable; OpenNebula: provides more low-level features, can work without a shared FS, Secure Copy (SCP).

Virtualization / hypervisors: OpenStack: supports the available hypervisors; OpenNebula: only supports three: Xen, KVM and VMware.

Scale and activity (level of community): OpenStack: increasing more than OpenNebula; OpenNebula: almost increasing.

User interface: OpenStack: uses the Amazon Web Services (AWS) application interface; OpenNebula: uses the Open Cloud Computing Interface (OCCI), EC2.

Security: OpenStack: provides firewall and Virtual Private Network (VPN); OpenNebula: secure and efficient authentication and authorization via passwords and RSA key codes.

([10], 2013)

From the comparison of OpenStack and OpenNebula, OpenStack includes three parts: Nova, Swift, and Glance. It is popular among enterprises around the world. OpenNebula is an accumulation of virtual machine management and has more than 4000 downloads per month from users building their own cloud, a "Big Cloud". OpenStack offers the same kind of service as an alternative for those who do not want to use a billed cloud. OpenStack is therefore more suitable and straightforward, because documentation and guidance can be found on its websites. OpenNebula is suitable for research institutes with large data centers to support their research. It is ideal for users who like to run a cloud machine quickly, deal with large data, protect their investment and avoid vendor lock-in [10].

The research has clearly shown the differences between OpenStack and OpenNebula from the architecture, hypervisor, security and other angles. This evaluation will help people understand their goals and functions, and the result serves as a recommendation for better use of OpenStack and OpenNebula.

2.8. Cloud Computing Security Challenge

Cloud computing is an internet-based service that provides services such as computing and storage for all users, including sectors such as finance, government and healthcare. The journal is therefore a systematic review of the different types of clouds as well as the security challenges that should be solved. Since a cloud is a collection of hardware and software stored in a data center, it raises several security challenges for the service. Some characteristics have been proposed for maintaining security in a cloud computing environment. The journal also covers the software platform infrastructure for each category within cloud computing, namely SaaS, PaaS, and IaaS, and defines some of the characteristics of each category. As mentioned before, the variety of clouds is also classified into personal clouds, general clouds, domain-specific clouds and mixed clouds.

This journal gives a comparison between different service providers on different clouds. In general clouds, the challenges that are yet to be solved are DoS attacks on virtual machines (VM) and the placing of malicious code on the physical line. Domain-specific clouds, for example, include retail and financial services or organizations; their security issues include compliance and auditing, intrusion detection (IDS) and firewall features, access control and antivirus protection. Besides this, a hybrid cloud is a combination of a private cloud and a general cloud, which allows an organization to manage resources externally and internally and brings scalability and cost effectiveness to a business. The security issues in these clouds were ongoing compliance concerns, access control and identity management, data slinging and the risk of multiple cloud tenants.

As a conclusion, this journal focuses on the different types of clouds, and the writer describes possible solutions to the security threats at each level.

Table 2.3 shows a summary of all the journals and articles that have been used in this project.

Table 2.3: Overview of the literature reviews

1. A View of Cloud Computing (Michael Armbrust)
· Previous project: the potential of cloud computing.
· Current project: implementing the potential of cloud computing on the UniKL campus.

2. Performance Analysis of Cloud Computing for Distributed Client (Dr. Neeraj Bhargava)
· Previous project: using simulation tools to simulate and measure the performance and load balancing of the cloud server.
· Current project: using a real cloud system to measure the performance and utilization of the cloud server.

3. A Comparison and Critique of Eucalyptus, OpenNebula and Nimbus (Peter Sempolinski)
· Previous project: different types of cloud computing architecture.
· Current project: a suitable architecture for the UniKL network.

4. Open Source Cloud Computing Management Platforms (Stefan Wind)
· Previous project: cloud computing management tools.
· Current project: comparison of different cloud management tools.

5. Cloud Computing: Different Approach & Security Challenge (Maneesha Sharma)
· Previous project: security challenges for each cloud type.
· Current project: providing authentication for students when they access the cloud computing system.

6. Comparison of Open-Source Cloud Management Platforms: OpenStack and OpenNebula (Xiaolong Wen)
· Previous project: comparison and features of OpenStack and OpenNebula.
· Current project: using OpenStack because it is easy to set up and suitable for a small network.


CHAPTER III: METHODOLOGY

3.1 Introduction

This chapter discusses the type of methodology that is used in this project. A methodology is a documented process, or step-by-step procedure, that helps in planning and developing the project accordingly. This project requires both hardware and software to run successfully.

The Work Breakdown Structure (WBS) provides the necessary framework for detailed cost estimating and control, along with providing guidance for schedule development and control. The function of the WBS is to break a large and complex project down into smaller pieces, which gives a better draft for planning and developing the project. The WBS can also be used to facilitate resource allocation and task assignment.

A budget is listed for purchasing the hardware, such as the all-in-one server, switch, cables, and computers. Furthermore, some of the software needed is open source and can be downloaded from the internet; it will be installed on the server and client computers.

3.2 Methodology

After a great deal of research on this cloud computing system using journals, the Internet and articles, it was determined that Rapid Application Development (RAD), adopted from Dan Gielan in the mid-1970s, is the best method, because OpenStack is still rather new and RAD provides continuous development of the system and documentation to be reviewed about the performance of the system. The data that will be collected is the performance of the server when supporting multiple instances (virtual machines) at a time, such as CPU utilization, RAM usage and network bandwidth.


Figure 3.1: RAD Methodology Model

3.2.1. Initiation

The initial stage of this project is to collect data through various methods, such as investigation and observation of the hardware and software requirements for a cloud computing server.

The investigation took place in the UniKL lab on level six, where most of the computers are still of an old specification, with an Intel Pentium 4 and only 512MB of DDR1 RAM. Many of the computers are also running Windows XP, which Microsoft no longer supports as of April 2014.

From the observation, it was determined that the software most needed for learning purposes among BNS students, which their computers cannot support, is GNS3, since it requires a huge amount of processing power and RAM to run and configure many virtual routers at once. Another piece of software that always has problems in the lab is the latest version of Packet Tracer, where the application simply gets stuck while loading.


3.2.2. Research

A great deal of research was done for this project because a cloud computing system is not easy to set up and requires thorough research before any real implementation can be done on the UniKL network infrastructure. Most of the research was done through the Internet, journals and articles about the available cloud computing tools. After the research was done, OpenStack was chosen as the cloud computing management control for this project because its architecture supports a private cloud and it is easy to use. Below are some examples of why OpenStack is used in this project.

i. Control and Flexibility: OpenStack is created by an open source community, which means it can be supported with third-party technology to further enhance the system.

ii. Open Industry Standard: More than 75 leading companies from over a dozen countries are participating in OpenStack, including Cisco, Citrix, Dell, Intel and Microsoft.

iii. Openness and Compatibility: OpenStack is compatible with thousands of existing public and private clouds for a seamless transition from cloud to cloud.

iv. Flexible Technology: A global ecosystem of industry-leading vendors supplies integration support for a wide variety of features within the cloud. As an example, hypervisor support includes ESX, Hyper-V, KVM, LXC, QEMU, UML, Xen, and XenServer.


3.2.3. Planning

Planning is the rough idea or sketch of the cloud system before it can be designed and developed later on. This is the stage where all the hardware and software components involved in this project are gathered together to see how they relate to each other. It is important to have a good plan for how to integrate this cloud computing system with the existing UniKL network environment so that BNS students can access the cloud anywhere on the campus. Table 3.1 shows all the components that have been identified for use in this project.

Table 3.1: The components involved throughout this project and the relationships between them

Computer / Laptop
· Use / function: access applications from the server (GNS3, Packet Tracer); view and control the instance on the server.
· Relation: Computer -> Switch -> All-in-one server.
· Constraint: must be connected to the UniKL network; VNC Viewer must be installed to control the instance.

Switch
· Use / function: interconnects the computers and the cloud server to the UniKL network.
· Relation: Switch -> Computer -> All-in-one server.
· Constraint: must be configured to allow all connections from the UniKL network.

All-in-one hardware server
· Use / function: interface for users to request instances; controls each user's designated allocation of resources, which they are not allowed to exceed; provides authentication, security, a quota system, and more.
· Relation: All-in-one server -> Switch -> Computer.
· Constraint: must have a large amount of RAM and many CPU cores to handle many instances at once.

OpenStack
· Use / function: cloud management tool that provides instances to the users; processes user requests and controls resources; processes input from the users, retrieves the needed disk image from Glance, signals OpenStack Compute to set up an instance and then assigns an IP address.
· Relation: OpenStack -> All-in-one server -> Switch -> Computer.
· Constraint: must be properly configured so that the server can provide a reliable instance service without problems such as Nova not loading or copying the image to create a new instance, or not assigning IP addresses.

OpenStack Compute
· Use / function: provides the framework which allows instances to run; relies on a library called libvirt, designed to provide a facility for controlling the starting and stopping of instances.
· Relation: Computer -> Switch -> All-in-one server -> OpenStack.
· Constraint: must have all the required dependencies installed so it can run properly without problems creating and running instances.

OpenStack Glance
· Use / function: database of disk images that can be copied and used as the basis for new instances; when an instance is spawned, the template is copied and packaged into a disk image for the user to use.
· Relation: Glance -> Network -> OpenStack -> All-in-one server -> Switch -> Computer.
· Constraint: must have large storage to support many UniKL BNS students; the image must have an operating system and the required software pre-installed.


3.2.4. Design

Figure 3.2 shows the basic diagram of the cloud computing system, which is divided into three sections: the cloud computing server, the existing network and the clients.

Figure 3.2: Cloud computing diagram

3.2.4.1. Cloud Computing Server

This section contains two devices, an all-in-one hardware server and a switch, which reside on their own private network. The OpenStack all-in-one server controls all the user resource allocation, the monitoring of the system and the collection of input from the users to be processed. It also manages the image repository, the networking functions, and the creation and maintenance of the instances.

The other device is a switch used to connect the server to the existing UniKL network so that it is accessible anywhere in UniKL MIIT. The switch is not configured with any VLAN, as it uses the existing student VLAN on the UniKL switch.


3.2.4.2. Existing Network

This section contains only one device, the switch that is already in use to connect the existing computers in the lab to the UniKL network. Through this switch, the server can be accessed by both wired and wireless connections on the UniKL MIIT campus.

3.2.4.3. Users

On their own computers, users just have to install the RealVNC Viewer software and connect to the created virtual machine by entering the virtual machine's IP address. After that, users can use the software available inside the VM as if they were using it on their own computer.

3.2.5. Development

Development is the stage where all the components are prepared with the required software, hardware and information about how to set up the system. The following sections discuss how all the gathered information is combined to develop the system.

3.2.5.1. OpenStack architecture

OpenStack has a highly modular architecture that offers extensive support for many hypervisors, monitoring systems, storage file systems, networking services, and user management services. It will be used as the main cloud computing management control.

3.2.5.2. All-in-one Server

This is where OpenStack, codename Havana, will be installed and configured to control all the activity in the cloud system.


3.2.5.3. Monitoring

To monitor the performance of the OpenStack server, the PRTG NMS software will be used to observe the server and record any changes in RAM usage, CPU utilization and network utilization.
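PRTG collects these readings over SNMP, so the SNMP daemon has to be running on the OpenStack server (this is covered in section 4.4.2). A minimal sketch of that setup on Ubuntu is shown below; the community string public and the 10.6.2.0/24 management subnet are assumptions and would be replaced with the values used on the UniKL network.

# sudo apt-get install snmpd
# sudo nano /etc/snmp/snmpd.conf
    agentAddress udp:161                # listen on all interfaces, not only localhost
    rocommunity public 10.6.2.0/24      # read-only community for the PRTG server (assumed values)
# sudo service snmpd restart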

3.2.5.4. OpenStack Glance

Glance is a database located on the server that handles and stores the virtual machine disk image templates. A disk image can be created by the administrator to suit their needs, or any available operating system image can be added directly to the Glance database over the internet.
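As an illustration of how a prepared disk image ends up in the Glance database, a hedged example using the Havana-era glance client is shown below; the image name Windows7-VNC and the file path are taken from later chapters, but the exact options used in this project may differ.

# glance image-create --name "Windows7-VNC" --disk-format qcow2 \
  --container-format bare --is-public True \
  --file /IMG/windows7basic.qcow2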

3.2.5.5. Authentication

For a Linux-based operating system, each user must create an SSH key pair in advance before logging in to the Linux virtual machines. The Windows operating system does not need an SSH key to connect to the instances.
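For the Linux images, the key pair would typically be generated on the student's machine and registered with the cloud before launching an instance. A minimal sketch with the nova client is shown below; the key name bns-key is only an illustration.

# ssh-keygen
# nova keypair-add --pub-key ~/.ssh/id_rsa.pub bns-key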

3.2.5.6. Security

OpenStack has a built-in security feature that makes only certain ports available to the users so that they can connect to the VMs through those open ports. An incorrect port setting will make users unable to connect to the VMs.
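As a hedged example, the rule below opens the VNC port (5900) in the default security group so that RealVNC Viewer can reach the instances; the actual port list and source range used in this project may differ.

# nova secgroup-add-rule default tcp 5900 5900 0.0.0.0/0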


3.2.6. Implementation

The implementation stage is where the cloud computing system itself is deployed on the UniKL MIIT campus and BNS students can use the service provided. Implementation is divided into three sections: the client side, the server side and the network.

3.2.6.1. Client side

Table 3.2 lists the two components required on the client side, a VNC viewer and a computer; both are compulsory on the client side.

Table 3.2: Required installation and configuration of the components

1. VNC Viewer
· Needed to connect to the running instance inside the server.

2. Computer / Student Laptop
· Running any operating system that supports RealVNC Viewer.
· Connected to the UniKL local area network via a wired connection.


3.2.6.2. Server side

Table 3.3 shows the most crucial part of the system, which is located on the server side. All of this configuration must be made for the system to be usable by students.

Table 3.3: Components involved on the server side

1. All-in-one hardware server
· Ubuntu 12.04.4 will be installed.
· OpenStack Havana will be installed and configured.
· User accounts will be set up so that students are able to log in to the OpenStack server.
· The allocation of resources for each student, such as RAM and CPU, will be limited so that students cannot exceed the allocated resources.

3.2.6.3. Network side

As for the network side of the system, only one device is required, as shown in Table 3.4.

Table 3.4: Networking devices

1. Switch
· Interconnects the computers / laptops and the all-in-one OpenStack server to the UniKL network.
· Uses the default VLAN configuration, which allows the server to be used throughout UniKL MIIT.


3.2.7. Testing

The testing will be done by creating five instances (virtual machines), with each instance running GNS3 with three virtual routers using a basic Enhanced Interior Gateway Routing Protocol (EIGRP) configuration; a sketch of such a configuration is shown at the end of this subsection.

The maximum number of instances for testing purposes is restricted to seven executed at once, because the current server specification does not have the capacity in terms of CPU cores and RAM.
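For reference, the basic EIGRP configuration loaded on each of the three virtual routers would look roughly like the sketch below; the autonomous system number 100 and the advertised network are placeholders, not the exact values used in the test topology.

R1(config)# router eigrp 100
R1(config-router)# network 10.0.0.0
R1(config-router)# no auto-summary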

3.2.8. Analysis

In this stage, all aspects of the cloud computing system are closely monitored to collect data using NMS software such as PRTG, which records all activity inside the server using specific sensors such as CPU load, physical memory and traffic. These sensors use an interval of 60 seconds to capture all the relevant data inside the server and create graphs for easy viewing. For a real-time monitoring feature, the server can also be installed with the htop software, which provides a continuous real-time view of the server resources but does not keep any record of them.
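A minimal sketch of adding that real-time view on the Ubuntu server:

# sudo apt-get install htop
# htop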

This analysis can be used by the administrator to further improve the cloud computing server, by reducing the resource allocation for students or by upgrading the server so that it is able to support more users.


3.3. Work Breakdown Structure (WBS)

The WBS is the foundation of the project, as it provides the necessary step-by-step guide for the administrator to develop the system according to plan. The idea of the WBS is to take the overall large, complex project and break it down into smaller parts, making it easy for the developer to manage each part accordingly. A well-organized and detailed WBS can be an effective tool because the allocation of resources, project budgeting, and scheduling are all divided into smaller parts, making them easier to follow.

Figure 3.3: Work Breakdown Structure diagram

Cloud Computing for UniKL BNS Students
· Initiation: information gathering; investigation at the UniKL lab.
· Research: research about the available cloud computing tools; comparison between cloud computing tools; determine hardware and software; study about OpenStack.
· Planning: plan rough ideas; sketch the relation between each component; create a table to determine the function and relation of the components.
· Design: create the network diagram; link all the components; divide the devices into three sections; explain each component's function.
· Development: understand the concept of the OpenStack architecture; set up OpenStack using the UniKL network configuration; create a basic image with Windows 7 and software pre-installed; connect the server to the UniKL network.
· Implement: test server connectivity; configure user accounts; configure the client side.
· Testing: test the cloud computing server by executing seven instances at once; load a simple EIGRP GNS3 topology; load the HeavyLoad software.
· Analysis: collect information using PRTG NMS; analyze the recorded information.


3.4. Gantt Chart

The Gantt chart is used to show the duration, or time taken, for this project to be accomplished. The time taken to complete this project is approximately two semesters, or 170 days, as can be seen in the Gantt chart. It also shows every phase from the beginning until the end of the project. (Refer to Appendix A for the Gantt chart.)

3.5. Project Resources

The cloud computing system mainly consists of two important types of component, the software and the hardware required for the system to run. Each of the components of this project is discussed further in this chapter.

3.5.1. Hardware

Technically, for a cloud computing server to support all the BNS students in UniKL, a powerful server with a lot of processing power, memory and large storage would be required. For this project, the scope is only to provide at least five instances at a time, because of the limited processing power and storage of the available server. Table 3.5 below shows the specifications of both the server and the clients.


Table 3.5: Detailed information on each hardware item

1. Server: all-in-one hardware server
· Processor: AMD FX(tm)-6300 six-core 3.5GHz, 8MB L3 cache
· RAM: 12 GB DDR3 SDRAM
· OS: Ubuntu Server 64-bit 12.04.4
· Hard disk: 3.5-inch SATA 300GB
· Interface: 1 x TP-Link Gigabit PCI Express card; 1 x Gigabit Ethernet
· Power supply: 650 watt Corsair high-performance power supply

2. Client: Dell OptiPlex GX620
· Processor: Intel Pentium D 2.66 GHz
· RAM: 512 MB DDR1 SDRAM
· OS: Windows XP Professional SP2
· Hard disk: 80GB HDD
· Interface: 1 x LAN (Gigabit Ethernet)
· Graphics: integrated Intel Graphics Media Accelerator 950, Dynamic Video Memory Technology 3.0
· Power supply: 305.0 watt

3. Client: Acer Aspire 4710G
· Processor: Intel Core 2 Duo 1.66GHz
· RAM: 4 GB DDR2 SDRAM
· OS: Windows 7 Professional SP1
· Hard disk: 160GB HDD
· Interface: 1 x LAN (Gigabit Ethernet)
· Graphics: dedicated ATI Radeon HD2300
· Power supply: 44.4 W 4000 mAh 6-cell Li-ion battery pack; 3-pin 65 W AC adapter

4. Switch: Cisco Catalyst 2950
· Interface: Fast Ethernet
· Ports: 24 x 10/100
· Features: full duplex capability; VLAN support; auto-sensing per device; auto-negotiation; manageable
· Memory: 16MB RAM; 8MB flash memory
· Power: AC 120/230 V (50/60 Hz), 30 watt


3.5.2. Software

The major software component in the cloud computing system is OpenStack itself, which acts as the cloud management controller that manages all the activity on the server. On the client side, students just have to install the VNC Viewer software.

Table 3.6: Software requirements for each component

1. Operating system
· Server: Ubuntu Server 64-bit 12.04.4
· Client: any supported operating system

2. Server software
· OpenStack Havana
· Software inside the virtual machine: Microsoft Windows 7 Starter; Packet Tracer v6.0.1; GNS3 v0.8.6 all-in-one; RealVNC Server

3. Client software
· RealVNC Viewer


3.6. Project Costing and Budget

Tables 3.7 and 3.8 indicate the estimated costs involved in this project. For hardware, a total of RM 5,422 is required, while RM 366 is needed for software to complete this project.

3.6.1. Cost for Hardware

Table 3.7: Hardware costing

1. All-in-one hardware server: 1 unit, RM 1,500
2. Cisco Catalyst 2950 Switch: 1 unit, RM 2,572
3. Dell OptiPlex GX620: 1 unit, RM 600
4. Acer Aspire 4710G: 1 unit, RM 750
Total: RM 5,422

3.6.2. Cost for Software

Table 3.8: Software costing

1. Ubuntu Server 12.04.4: 1, open source
2. OpenStack Havana: 1, open source
3. Windows 7 Starter: 1, RM 269
4. RealVNC Server + Viewer: 1, RM 97
5. GNS3 v0.8.6: 1, open source
6. Packet Tracer v6.0.1: 1, free from Cisco
7. HeavyLoad: 1, freeware
Total: RM 366


CHAPTER IV: DEVELOPMENT/PROTOTYPE

4.1. Introduction

This chapter explains the development of the cloud computing server and provides a guide to setting up the user instance operating system, or basic image, to be used when users first use the cloud computing service.

4.2. Cloud Computing Server

This is where all the components needed for the OpenStack server to run are installed and configured, with the local IP address 10.6.2.230.

Figure 4.1: Network diagram of the OpenStack implementation

Figure 4.1 shows the server design, which uses the all-in-one: dedicated hardware setup, meaning that all of the components needed for OpenStack to run are installed on one physical machine. This method is not generally recommended because it takes up a portion of the server resources, approximately 2GB of RAM, which could otherwise be assigned to instances, unless the server has a huge amount of RAM. Using this method, the current server specification can spawn up to six Windows 7 instances with 1GB of RAM, 1 VCPU and 20GB of disk space each.
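In OpenStack terms this sizing corresponds to a flavor. A hedged example of defining it with the Havana nova client is shown below; the flavor name m1.bns and the ID 10 are illustrative only. The arguments are the flavor name, ID, RAM in MB, disk in GB and number of VCPUs, matching the 1GB / 20GB / 1 VCPU instance size described above.

# nova flavor-create m1.bns 10 1024 20 1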

4.3. Development Process

The development process is where all the components required for the system to run are installed and configured.

4.3.1. Installation requirement

To install OpenStack successfully without any problems, the administrator must have a fast and stable internet connection, a static IP address for the server, and a range of available IP addresses to be used by the instances later on. Another important requirement is that the server must have at least one 1Gbps network interface card to handle all incoming and outgoing traffic. The server must also be configured with the SSH service and git to download the OpenStack components.
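A minimal sketch of the static address configuration on Ubuntu 12.04 (in /etc/network/interfaces) is shown below; the interface name eth0, the netmask, the gateway and the DNS server are assumptions that would be replaced with the actual UniKL values.

# nano /etc/network/interfaces
    auto eth0
    iface eth0 inet static
        address 10.6.2.230
        netmask 255.255.255.0
        gateway 10.6.2.1
        dns-nameservers 8.8.8.8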

4.3.2. Preparing all-in-one: dedicated hardware server

1. Install Ubuntu Server 12.04.4 with the SSH service enabled so that the server can be managed remotely from another terminal. Configure the server with a static IP address (10.6.2.230 for this project) and create a user named stack; after the setup is complete, install any available updates and restart the server:

# adduser stack
# sudo apt-get update
# sudo apt-get upgrade
# sudo restart

2. Connect to the server from another computer running Linux (Windows users can use PuTTY) and invoke the following command at the terminal:

# ssh stack@10.6.2.230

3. Grant the user stack sudo privileges, because this user will make many changes to the system. The following command is invoked at the terminal:

# echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

4. Use git to download the latest OpenStack Havana scripted installation (DevStack) onto the server:

# sudo apt-get install git
# git clone https://github.com/openstack-dev/devstack.git
# cd devstack

Figure 4.2 shows the content of the devstack folder after it has been downloaded using git.

Figure 4.2: All the necessary files retrieved using git


5. After the scripted files are downloaded, navigate to the devstack folder and create a new file named local.conf with the following configuration so that the installation uses the preset values defined in the file. The local.conf file is created using the command:

# nano local.conf

Figure 4.3 shows the values that have been filled in the local.conf file; a sketch of the resulting file is given after the list below.

Figure 4.3: Modify the local.conf with the following configuration

· FLOATING_RANGE is set to a range not being used on the UniKL MIIT network, i.e. 10.6.2.240/28.

· FIXED_RANGE and FIXED_NETWORK_SIZE are set for 192.168.1.0/24 to configure the internal IP addresses.

· FLAT_INTERFACE is set to the Ethernet interface that is currently in use and configured with the static IP address.

· ADMIN_PASSWORD is set to admin. This password is used for the admin and demo accounts set up as OpenStack users.

· MYSQL_PASSWORD, the MySQL administrative password, is set to admin123.

· SERVICE_PASSWORD is set to admin123. This is used by the OpenStack services (Nova, Glance, etc.) to authenticate with Keystone.
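Based on the values listed above, a minimal local.conf would look similar to the following sketch; the [[local|localrc]] header follows the DevStack convention, and the interface name and FIXED_NETWORK_SIZE value are shown only for illustration:

[[local|localrc]]
# public (floating) IP range on the UniKL MIIT network
FLOATING_RANGE=10.6.2.240/28
# internal network used by the instances
FIXED_RANGE=192.168.1.0/24
FIXED_NETWORK_SIZE=256
# interface carrying the server's static IP (actual name depends on the server)
FLAT_INTERFACE=eth0
# passwords described in the list above
ADMIN_PASSWORD=admin
MYSQL_PASSWORD=admin123
SERVICE_PASSWORD=admin123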


6. The command ./stack.sh is invoked to start the scripted installation, as shown in figure 4.4. The remaining prompted passwords are also filled with admin123.

Figure 4.4: Installation process is started

7. The installation will take about 20 minutes (depending on the internet connection) as it automatically downloads all the components and dependencies needed for OpenStack to run successfully. Figure 4.5 shows the main component of OpenStack, nova, being downloaded.

Figure 4.5: Example of the scripted installation process


8. After the installation is completed, the following result is obtained and the administrator can begin to log on to the OpenStack interface using a web browser at 10.6.2.230, as shown in figures 4.6 and 4.7.

Figure 4.6: Summary of ./stack.sh

Figure 4.7: OpenStack login page at 10.6.2.230


4.3.3. Creating basic virtual machine image

In this part, the administrator needs two important components: a Windows 7 ISO as the installation medium, and the VirtIO driver ISO, which can be obtained via http://libvirt.org/drivers.html, to provide the network interface card driver after Windows 7 is installed so that the virtual machine has network connectivity.

1. Verifying that the libvirt default network is running is crucial because it allows the virtual machine to have an internet connection.

# virsh net-list
 Name                 State      Autostart
-----------------------------------------
 default              active     yes

If the network is not active, invoke the following command to activate it:

# virsh net-start default

2. Using the following script, the virtual machine is created according to the defined configuration:

# qemu-img create -f qcow2 /IMG/windows7basic.qcow2 15G
# virt-install --virt-type kvm --name windows7basic --ram 1024 \
  --cdrom=/IMG/windows.7.AIO.iso \
  --disk path=/IMG/windows7basic.qcow2,size=15,format=qcow2 \
  --network network=default \
  --graphics vnc,listen=0.0.0.0 --noautoconsole \
  --os-type=windows --os-variant=win7

The KVM hypervisor starts the virtual machine with the libvirt name windows7basic and 1GB of RAM. The virtual machine has a virtual CD-ROM drive associated with /IMG/windows.7.AIO.iso and a local 15GB hard disk in qcow2 format that is stored on the server at /IMG/windows7basic.qcow2. The networking is configured to use libvirt's default network.
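After virt-install returns, the state of the new domain can be confirmed with virsh; this quick check is not part of the original procedure but uses only standard libvirt commands:

# list all defined domains, running or shut off
# virsh list --all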


3. Because this is not an unattended installer, the administrator has to proceed with and monitor the installation manually, and to do that it needs to be viewed using VNC. Invoke the following command to get the VNC display number:

# virsh vncdisplay windows7basic
:1

The virtual machine windows7basic uses VNC display :1, which corresponds to TCP port 5901. The following commands are invoked in a terminal to connect to the virtual machine:

# ssh -f -L 5901:localhost:5901 [email protected] -N -p 22
# vinagre 0.0.0.0:5901

4. The installation process is continued by providing the necessary information, as shown in figure 4.8.

Figure 4.8: Fill in information during installation process


5. After the setup is complete, the virtual machine is shut down and the script in step two is edited on the following line, so that the virtual machine starts again with a new virtual CD-ROM that contains all the necessary drivers for windows7basic to use:

--cdrom=/IMG/virtio-win-0.1-74.iso

6. Missing drivers, such as the network interface driver, are installed so that the virtual machine has connectivity. Figure 4.9 shows that the Ethernet adapter is successfully installed.

Figure 4.9: Red Hat VirtIO is the default adapter to provide connection


7. All needed applications, such as GNS3, Packet Tracer, the Chrome browser and VNC Server, are installed.

Figure 4.10: Installation of GNS3

Figure 4.10 shows GNS3 being installed in the virtual machine so that every instance created by OpenStack will have GNS3 available.

8. The virtual machine is shut down and the newly created windows7basic.qcow2 is copied as a master copy to be stored as a backup, as sketched below.
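A simple way to keep that master copy, assuming the same /IMG directory used above (the destination file name is illustrative):

# copy the finished image before it is uploaded to Glance
# cp /IMG/windows7basic.qcow2 /IMG/windows7basic-master.qcow2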


4.3.4. Importing image to the OpenStack Glance

OpenStack Glance is where all the images uploaded by the administrator or users are stored. When a user requests to use an image, it is converted into an instance, which is a virtual machine handled by OpenStack Nova.

4.3.4.1. Logon to OpenStack

Figure 4.11 shows the logon screen; the dashboard can be logged into using admin as both the username and password.

Figure 4.11: Login using username admin and password both admin


4.3.4.2. Navigate to image tab

Navigate to the image tab on the left side of the OpenStack dashboard, where the administrator will see the option to create a new image for users.

Figure 4.12: Create an image pop-up filled with necessary information

In figure 4.12, each field is filled with the information corresponding to the image that was created earlier.
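As an alternative to the dashboard form in figure 4.12, the image can also be registered from the server terminal with the glance command-line client; the following is only a sketch, assuming the qcow2 file from section 4.3.3 and an illustrative image name:

# upload the Windows 7 image into Glance as a public image
glance image-create --name "Windows7Basic" --disk-format qcow2 \
  --container-format bare --is-public True \
  --file /IMG/windows7basic.qcow2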


4.3.4.3. Launching an instance

After the process of creating the image is completed, it is time to launch the image in order to test it. Select the created image and launch it using the launch button on the right side, then provide the necessary information such as the number of VCPUs, the amount of RAM and the disk size, as can be seen in figure 4.13. After the instance is launched, click on the more button and associate a floating IP address so that clients can connect to that instance.

Figure 4.13: Example of how to create an instance


Figure 4.14: Summary of the created instance

Figure 4.14 shows the summary of the created instance, with the private IP 192.168.1.2 and the public IP address 10.6.2.241 assigned to it. It also shows the state of the instance and how long ago it was created.

4.3.5. Connecting to the instance

Connecting to the instance is a straightforward task; it only requires RealVNC Viewer to be installed on the user's computer.

4.3.5.1. VNC Viewer

Install VNC Viewer on any computer the user wants to use to connect to the instance. VNC Viewer supports many platforms and is easy to use.

Figure 4.15: Enter the VNC Server IP address

Figure 4.15 shows the VNC Viewer interface; change the VNC Server field to the public IP address of the corresponding instance, which in this case is 10.6.2.241.


4.4. Monitoring Performance

4.4.1. Network Management Software (NMS)

To closely monitor the server performance, such as CPU utilization, RAM usage and network bandwidth utilization, PRTG Network Monitor has been chosen for this project because it is easy to use, it can automatically discover devices in the network, and it uses few resources to run. Figure 4.16 shows the simple PRTG interface.

Figure 4.16: PRTG interface

To actively monitor devices in the network, PRTG uses sensors. A sensor is the basic monitoring element that fetches a corresponding piece of information such as CPU load, bandwidth, RAM usage and many more.


4.4.2. Install SNMPD service on OpenStack server

Because the PRTG probe cannot be installed on a Linux operating system, administrators can still use an SNMP (Simple Network Management Protocol) agent service such as SNMPD to allow PRTG to remotely monitor the server performance. Below are the steps taken to install the SNMPD agent service.

1. The following command is invoked at the server terminal:

# apt-get update && apt-get install snmpd

2. SNMPD is configured by editing the configuration file at /etc/snmp/snmpd.conf:

# sudo vim /etc/snmp/snmpd.conf
rocommunity public
syslocation UniKL MIIT
syscontact [email protected]

3. After that, the /etc/default/snmpd file is edited with the following configuration:

# sudo vim /etc/default/snmpd
Disable the following line:
#SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid'
Create a new line:
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux -p /var/run/snmpd.pid -c /etc/snmp/snmpd.conf'

4. The SNMPD service is restarted to allow the new settings to take effect:

# sudo /etc/init.d/snmpd restart
 * Restarting network management services
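Before adding the server to PRTG, the agent can be verified from any machine that has the net-snmp tools installed; this quick check is not part of the original steps, and the community string matches the rocommunity value configured above:

# walk the standard system subtree (1.3.6.1.2.1.1) of the OpenStack server
snmpwalk -v2c -c public 10.6.2.230 1.3.6.1.2.1.1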


4.4.3. Configure PRTG to monitor OpenStack server

1. PRTG is started and, from the main interface, navigate to Devices > Add Device as shown in figure 4.17.

Figure 4.17: First step to add new devices to PRTG

2. A group for the server is selected before continuing to the next step.

3. The required information in each column is entered to ensure that the OpenStack server can be discovered by the PRTG NMS, as shown in figure 4.18.


Figure 4.18: Example of device configuration

4. After that, wait a while for all the sensors inside the OpenStack server to be discovered by PRTG.

Figure 4.19: Sensors discovered in the OpenStack server

Figure 4.19 shows that 61 sensors are automatically discovered by PRTG.


5. The unnecessary sensors that were discovered are removed so that the server can be monitored more easily later on, as can be seen in figure 4.20.

Figure 4.20: The currently active sensors

The details of the sensors are as follows:

a. SNMP CPU Load 1: Monitors all available CPU cores inside the OpenStack server. The total number of available cores in the server is six.

b. Physical Memory: Monitors the total RAM usage inside the server. The server has a total capacity of 12GB of RAM.

c. SSH Load Average: This sensor shows the average load of the server.

d. SSH Meminfo 1: This sensor is similar to Physical Memory; the difference is that it uses the SSH protocol to monitor the server state.

e. (003) eth1: Monitors the incoming and outgoing traffic on the current interface.

f. Ping 1: Used only to measure whether the server is alive or not.

g. Ping Jitter 1: Used to measure any jitter while the server is up; it has no significant value for this project because the server is accessed using the VNC protocol, which cannot be measured by this sensor.


CHAPTER V: TESTING AND RESULTS

5.1. INTRODUCTION

Analysis is the process of interpreting all the data collected during the testing of this project. In this project, testing is conducted to examine how the server performs when handling many virtual machines (instances) used by users.

The instances use Windows 7 Starter 32-bit because, by default, it does not include the Windows Aero theme and thus uses fewer resources to run, which suits a server specification that only allows 1GB of RAM per instance. The desktop wallpaper and visual styles are also left unedited by users.

To test the performance of the server, the server will spawn five instances for the first two methods, and two additional instances are then created to simulate heavy load and to test the capabilities of the server. Each instance will be loaded with GNS3 running three virtual routers using the EIGRP protocol and with the HeavyLoad software, which is designed to push the CPU load to 100%; all activity on the OpenStack server will be recorded with the network monitoring system tool during the session.


Figure 5.1: Running instances handled by OpenStack

Figure 5.1 shows all the currently running instances handled by the OpenStack compute system. Each instance is created using the medium flavor, which contains 1 VCPU, 1GB of RAM and 20GB of total hard disk.

5.2. Testing Method Approach

The OpenStack server will be monitored with the NMS using sensors that correspond to the testing requirements. The testing is conducted using three approaches:

1. Creating instances

2. Common approach

3. Stressing the OpenStack server


5.3. Creating Instances

Creating an instance is the process of creating the virtual machine for a user to use. In order to launch an instance, an administrator must first set a flavor. A flavor contains predefined settings for the instances, such as the number of VCPUs, the RAM capacity, and the total disk space. This is important because it acts as a quota system that does not allow users to exceed the resources allocated to them. A command-line sketch for creating such a flavor is shown below.
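If the flavor is created from the command line instead of the dashboard, the nova client can be used; the following is only a sketch matching the 1 VCPU / 1GB RAM / 20GB disk configuration used in this project, with an illustrative flavor name and an auto-generated ID:

# flavor-create <name> <id> <ram MB> <disk GB> <vcpus>
nova flavor-create bns.medium auto 1024 20 1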

In figure 5.2, there are seven available images ready to be used. Windows7-VNC and Windows7Basic are created using the method in chapter four; Ubuntu Server 12.04 and Fedora 20 can be downloaded from their respective websites and added directly to the OpenStack image list. The cirros images are included in the OpenStack installation for testing purposes.

Figure 5.2: List of images available in OpenStack server


5.3.1. Testing

First, users who have created an account on the OpenStack system log on to the server, navigate to the image section and click on the image they want to launch and use, in this case Windows7-VNC.

Figure 5.3: Launch instance pop-up form

Figure 5.3 shows that the instance will use the Medium flavor, configured with 1 VCPU, 20GB of total disk and 1024MB (1GB) of RAM. It also shows the project limits for that particular user. Details of each flavor can be viewed in Appendix C.


Figure 5.4: Summary of Windows7-VNC image converted to Win7-1 instance

After the instance has been successfully spawned, it must be assigned a floating IP address to make sure that users can access the created instance using VNC Viewer. By clicking on the Associate Floating IP button shown in figure 5.4, OpenStack will automatically assign any available public IP to the instance; an equivalent command-line sketch is given below.
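The same launch-and-associate workflow can also be carried out with the nova command-line client; the following is only a sketch using the image and instance names from this section, an illustrative flavor name, and the floating IP that the dashboard happened to allocate:

# boot the instance from the Windows7-VNC image
nova boot --image Windows7-VNC --flavor bns.medium Win7-1
# allocate a floating IP from the pool and attach it to the instance
nova floating-ip-create
nova add-floating-ip Win7-1 10.6.2.242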

Figure 5.5: VNC Viewer interface

In figure 5.5, users have to change the VNC Server value to the public IP address of the created instance and leave the rest of the settings at their defaults. After that, users can use the instance as if it were their own laptop, with little noticeable delay depending on the type of connection used.


Figure 5.6: Select location in Win7-1

Figure 5.6 shows the set network location pop-up that appears every time Windows connects to a new network location. In this case, the instance is connected to the internal OpenStack network with the private IP address 192.168.1.3 and the public IP address 10.6.2.242. It is advisable to select Home network because it turns on network discovery, which allows the instance to see other computers and devices on the network and allows other network users to see the instance.


Figure 5.7: Inside the instance

Figure 5.7 shows how the instance looks and the resources currently assigned to it. The warning prompted in the VNC Server console indicates that the connection is not password protected and can therefore be logged on to by anyone who knows the public IP address of the instance. VNC also allows users to transfer files back and forth between server and client, making it convenient for users to retrieve their work without using other software that might take up additional resources.


5.3.2. Result

The network monitoring system tool measured and recorded the entire session of initially creating and using the instance. The results can be viewed in figures 5.8, 5.9 and 5.10 below.

Figure 5.8: CPU utilization from 1:20AM to 2:20AM

Figure 5.8 shows the CPU utilization when the Win7-1 instance is created. The graph has been divided into three time frames based on the utilization pattern. In time frame one, only processor 1 is actively using the server resources because the instance is set with only one VCPU. In time frame two, the server has finished loading the first instance and begins to stabilize. In time frame three, all processors are active because the server is processing the creation of four additional instances.


Figure 5.9: Physical memory utilization from 1:20AM to 2:20AM

Figure 5.9 shows the RAM usage when creating instances, as can be seen in time frames one and three. The process of creating an instance took up most of the server RAM, leaving only 1.1% available for other processes; this probably happened because OpenStack had not yet recognized the server's capabilities. After the server was restarted multiple times, creating further instances did not use as many resources as the first instance creation. Time frame two shows that after the first instance finished loading, the available RAM increased again.


Figure 5.10: Bandwidth utilization from 1:20AM to 2:20AM

Figure 5.10 shows that during the process of creating the instance in time frame one, there is no network activity because at this stage the operating system has not yet booted up. After it has booted into Windows, the instance is connected using VNC Viewer for the first time and shows the network location selection because it has booted up in a new network location. By selecting Home network, the instance executes a discovery function to look for other devices in the network, which can be seen in time frames two and three, where the bandwidth usage is high.


5.3.3. Analysis

The first time the image is converted into an instance, it takes up most of the server resources in order to boot up the instance with the defined settings. During this process, the server response is very slow and the process can take more than 10 minutes. After the first instance is stable, creating another instance of the same image does not take very long and is almost instant because the configuration already exists, even though it still takes up most of the server resources.

Figure 5.8 shows that all processor cores are actively used by OpenStack to create the first instance. Figure 5.9 shows that all RAM is allocated to the process, leaving only 1.1% available. In figure 5.10, there is no significant traffic except in time frame three, where all the instances are connected simultaneously using VNC Viewer and Home network is selected in the network location selection.


5.4. Common Approach

The common approach method uses the instances with the applications available inside them. For this project, GNS3 is used to measure the server performance by executing three Cisco 3725 series virtual routers with a basic EIGRP configuration.

5.4.1. Testing

All five instances are loaded with the simple GNS3 EIGRP topology. Each Cisco 3725 virtual router is loaded with the c3725-adventerprisek9_ivs-mz.124-25b IOS and has two interfaces. The topology can be viewed in figure 5.11. The routers are pre-configured with an idle PC value that is specific to an instance with 1GB of RAM and 1 VCPU, to significantly reduce processor usage when the routers are active. If another instance is created using a different configuration, the idle PC value needs to be set again with a new value. Setting an idle PC value after adding the IOS image is compulsory to avoid heavy load on the processor.

Figure 5.11: Three Cisco 3725 series virtual routers


5.4.2. Result

For this testing session, the server was restarted and the OpenStack service started again. The results can be viewed in figures 5.12, 5.13 and 5.14. The sensor time frame is taken from 10:30 PM until 11:35 PM.

Figure 5.12: CPU utilization from 10:40PM to 11:35PM

Figure 5.12 shows the processor usage while the test is in progress. In time frame two, the processors do not show much activity except processor 1, which is booting up all the instances for the second time. Time frame three is where all the instances are successfully running the topology inside GNS3. Time frames one and four show the instances being started and shut down after the test finishes.


Figure 5.13: Physical memory utilization from 10:40PM to 11:35PM

During the testing, after the instances booted up for the second time, the RAM usage remained static throughout the test, as can be seen from time frame two. This shows that each instance only uses the RAM allocated to it, without using any extra available RAM. Time frames one and three are where the instances are started and shut down after the test.


Figure 5.14: Bandwidth utilization from 10:40PM to 11:35PM

Figure 5.14 shows only slight activity when starting up the instances in time frame one. Time frame two shows more activity because each instance is connected using VNC Viewer and the topology file is transferred to each instance; while GNS3 is executing, there is only slight network activity. Time frame three is similar, when all instances are connected simultaneously and issued the shutdown command.


5.4.3. Analysis

From the results above, GNS3 can successfully execute the EIGRP topology without any problem and the server still has resources to handle more virtual routers or a more complex configuration. Even though the routers have been configured with an idle PC value, they still show significant activity in processor usage, but the server performance remains the same and instance responsiveness is only affected by a slight delay when loading another application or opening a folder inside the instance.

For the RAM usage, it can be concluded that after OpenStack is restarted, the service becomes more stable and evenly distributes RAM among the created instances, as can be viewed in figure 5.13. This is unlike the first creation of a new instance, when the available RAM dropped to 1.1% even after the instance had finished booting up.

Network bandwidth usage does not show much activity except when the instances are connected simultaneously and when the routers inside GNS3 are first started. It also shows some activity when the instances finish loading and when the operating system is shut down.


5.5. Stressing OpenStack Server

For this testing, the server is pushed to the limit by creating a sixth instance, and each instance is also loaded with the HeavyLoad software. The software puts the processor cores at full capacity by performing complex calculations to simulate load on the processor. As a result, each processor inside the instances is put under heavy load.

5.5.1. Testing

Figure 5.15: Sixth and seventh instance created

Figure 5.15 shows the details of the sixth and seventh instances, created using the same configuration as the previous instances. Creating the sixth and seventh instances only took about five minutes before they could be logged on to using VNC Viewer, even while the other instances were running the HeavyLoad software.


Figure 5.16: Hypervisor summary

The hypervisor summary shows the OpenStack compute system, or server, that is currently handling the instances. In figure 5.16, the VCPU usage is red because the server has only six processor cores while seven instances are currently running on it. It also shows the remaining RAM and disk usage on the OpenStack server, which can be used as an indicator of the server capacity.


Figure 5.17: HeavyLoad software stressing CPU usage

Figure 5.17 shows that the HeavyLoad software is operational and is stressing the instance by making complex calculations and putting it under heavy load, as can be seen in Windows Task Manager where the CPU usage is 100%. All instances on the OpenStack server except the seventh instance are running HeavyLoad.


5.5.2. Result

During the process, the seventh instance will be loading the EIGRP topology; the results are shown in figures 5.18, 5.19, 5.20 and 5.21. The sensor timeline is taken from 11:00 PM until 11:45 PM.

Figure 5.18: Error occurs when starting the virtual routers

While the other instances are being stressed with the HeavyLoad software, the seventh instance tries to execute the simple EIGRP topology, but figure 5.18 shows that GNS3 cannot start the virtual routers because an error has occurred. It also takes a longer time to start the virtual routers compared with the previous instances.


Figure 5.19: CPU utilization from 9:40PM to 10:30PM

Time frame one in figure 5.19 shows all the instances booting up. The time taken for the instances to boot up has decreased dramatically; it only takes about six to seven minutes compared to the previous sessions. This probably happens because the OpenStack server is adjusting its resources to handle the instances efficiently. In time frame two, almost all the processors are being used by the instances after the HeavyLoad software is initiated.

Time frame three shows a slight reduction in all processor usage, although it is still quite high. This is also where the seventh instance is created; the creation process only takes a few minutes before it can be logged in to using VNC Viewer. Time frame four shows that all processors are at 100% load when the seventh instance tries to execute the EIGRP topology but fails to start the virtual routers. Time frame five shows a reduction in processor usage because all the instances are being issued the shutdown command.


Figure 5.20: Physical memory utilization from 9:40PM to 10:30PM

Figure 5.20, in time frame one, shows the first five instances being booted up and slowly taking their allocated resources. Time frame two shows the consistent usage of the first five instances without any changes. Time frame three shows the sixth instance being booted up, and time frame four shows the seventh instance booting up, with the usage remaining consistent throughout the testing session. Time frame five shows the available RAM increasing because all the instances are being issued the shutdown command.


Figure 5.21: Bandwidth utilization from 9:40PM to 10:30PM

In figure 5.21, time frame one shows the highest usage because the first five instances are all being connected simultaneously. Time frame two shows some activity because the instances are being connected to and the HeavyLoad software and EIGRP topology file are being transferred. Time frame three shows slight activity because all the instances are being issued the shutdown command.


5.5.3. Analysis

From the results of stressing the OpenStack server, it can be concluded that when the server resources are 100% used by all the instances, the server can still create additional instances without any problem. The problem comes when an instance has finished loading into the operating system and runs a compute-intensive application, in this case GNS3; it fails to execute the virtual routers. This probably happens because the server has no processor resources left to reserve for starting the routers. Other, less intensive applications have no problem executing, but they take some time to load.

When the RAM usage is compared with the common approach method, it shows that throughout the session the usage remains the same without any changes, except that creating the sixth and seventh instances required an extra 2GB of RAM to be allocated to the new instances. This shows that after the server is restarted, it becomes more stable and able to handle resources efficiently.

As for the bandwidth usage, it only shows activity when a user is connected to an instance. Other than that, nothing else affects the bandwidth utilization.


CHAPTER VI: CONCLUSION AND RECOMMENDATION

6.1. CONCLUSION

The primary objective of this project is to create the BNS Cloud Application system using the OpenStack cloud management system and to test the performance of the server when handling multiple instances. The cloud computing server has been successfully developed using an AIO physical machine installed with OpenStack codename Havana, the latest stable version at the time this project was done. The server has also been deployed in the UniKL network and can be accessed anywhere in UniKL MIIT using wired and wireless network connections.

The server has been tested using three methods in order to test its performance; the tests ran successfully and the server is able to execute an extra instance, even though it had maxed out its capacity of six instances.

The server contains multiple images for different purposes, such as Windows 7, Ubuntu Server and Fedora 20. The Windows 7 image is preinstalled with software specific to BNS students, such as GNS3 and Packet Tracer, which are essential for learning purposes in subjects like CCNA and CCNP. The Linux operating systems, Ubuntu Server and Fedora 20, are suitable for subjects that involve programming in Linux, so that students do not have to install Linux on their own laptops, which can cause problems if not done correctly. OpenStack also provides a tiny Linux image, cirros_x86, for testing purposes, which can be used by students to learn basic Linux operation.

In short, the OpenStack cloud management system can fulfil any requirements an organization wants to use the system for, if it is properly configured and provided with enough compute power to run many instances. Its flexibility means that each component of the OpenStack system can be installed separately on different physical machines for optimal and maximum use of the server performance to support many students.


6.2. RECOMMENDATION

For further deployment of the server to support many students at a time, the OpenStack server can be set up using multiple compute nodes with the same processor architecture. This can increase the capacity of the server in terms of the number of VCPUs and the RAM capacity, allowing it to handle many instances operating at a time without compromising the performance of the server.

The Windows-based images inside the OpenStack server can be further configured to use Cloud-Init and the remote desktop connection, so that every time an instance is started using the security feature inside OpenStack, the username and password used to log on to the instance are changed according to the specified security option, making it more secure so that only the authorized student who created the instance can use it.


REFERENCES

EBOOK

[1] Michael, Armbrust., et al. "A View of Cloud Computing." Communications of the ACM 53.4 (2010): 50-58. Web.

[2] Sharma, Maneesha., et al. "Cloud Computing: Different Approach & Security Challenge." International Journal of Soft Computing and Engineering (IJSCE) 2.1 (2012): 421-424. Web.

[3] Murugaboopathi, G., et al. "Study on Cloud Computing and Security Approaches." International Journal of Soft Computing and Engineering (IJSCE) 3.1 (2013): 212-215. Web.

[4] Wind, Stefan. "Open Source Cloud Computing Management Platforms." 2011 IEEE Conference on Open Systems (ICOS2011) (2011): 175-179. Web.

[5] Bhargava, Dr. Neeraj, et al. "Performance Analysis of Cloud Computing for Distributed Client." International Journal of Computer Science and Mobile Computing 2.6 (2013): 97-104. Web.

[6] Sempolinski, Peter, et al. "A Comparison and Critique of Eucalyptus, OpenNebula and Nimbus." 2nd IEEE International Conference on Cloud Computing Technology and Science (2010): 417-426. Web.

[7] OpenNebula - Flexible Enterprise Cloud Made Simple. N.p., n.d. Web. 26 Dec. 2013. <http://opennebula.org/>.

[8] "Types of Cloud Computing: Private, Public and Hybrid Clouds." Appcore - Cloud Business Blog. N.p., n.d. Web. 26 Dec. 2013. <http://blog.appcore.com/blog/bid/167543/Types-of-Cloud-Computing-Private-Public-and-Hybrid-Clouds>.

[9] "Private Cloud Computing." Interesting and informative articles. N.p., n.d. Web. 26 Dec. 2013. <http://abhijit.snydle.com/private-cloud-computing.html>.

[10] Xiaolong Wen, et al. "Comparison of open-source cloud management platforms: OpenStack and OpenNebula." Fuzzy Systems and Knowledge Discovery (FSKD), 2012 9th International Conference on, pp. 2457-2461, 29-31 May 2012.


Electronic References:

[1] http://devstack.org/guides/single-machine.html

[2] http://docs.openstack.org/image-guide/content/virt-install.html

[3] http://docs.openstack.org/image-guide/content/windows-image.html

[4] http://www.cloudbase.it/cloud-init-for-windows-instances/

[5] http://andrewpakpahan.blogspot.com/2012/09/how-to-enable-snmp-monitoring-on-ubuntu.html

[6] http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/manual-ubuntu-installation.html


GLOSSARY

1. CPU

a. The central processing unit (CPU), commonly called the processor, is the brain of the computer where most of the calculations are made. In terms of computing power, the CPU is the most important component of a computer system; a large workstation or server can contain more than one CPU, while a personal computer usually has one.

2. RAM

a. Random access memory (RAM) is computer memory that can be randomly accessed by programs and is managed by the operating system.

3. Linux

a. Linux is an open source operating system that is freely distributed and can be installed on almost any device. It uses the Linux kernel and has many variants available to be downloaded.

4. Operating System

a. An operating system (OS) is software that performs basic tasks such as recognizing input from the keyboard, sending output to the display screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.

5. NMS

a. A network monitoring system (NMS) is a software tool that allows an administrator to remotely supervise the individual components and devices within a network.

6. Instance

a. An instance is a virtual machine that is run and handled by the OpenStack Nova framework, which uses KVM through libvirt.


7. OpenStack

a. OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds. It is easy to install and manage, and is supported by many organizations.

8. Libvirt

a. Libvirt is a collection of software that provides a convenient way of managing virtual machines and other virtualization functionality, such as storage and network interface management.

9. SSH

a. SSH stands for Secure Shell, which provides a method to log in to another computer over a network, to execute commands on a remote machine, and to move files from one machine to another. It also provides strong authentication and secure communications over insecure channels.

10. VNC

a. Virtual Network Computing (VNC) is a graphical desktop sharing system that uses the Remote Frame Buffer (RFB) protocol to remotely control another computer. It transmits the keyboard and mouse activity from one computer to another, relaying the graphical screen updates back in the other direction over the network.


APPENDIX A: GANTT CHART


APPENDIX B: UBUNTU SERVER 12.04.4 INSTALLATION


Figure 1: Ubuntu Server installation menu

Figure 2: Language selection


Figure 3: Location selection

Figure 4: Manually configure network with static IP


Figure 5: Set static IP address for the server

Figure 6: Configure gateway for the network

Figure 7: Configure DNS server


Figure 8: Set hostname for the server

Figure 9: Set the full name for the server's user account

Figure 10: Set user account for the server


Figure 11: Set user password

Figure 12: Set time zone location

Figure 13: Partitioning disks


Figure 14: Write changes to disk

Figure 15: Automatic updates option

Figure 16: Software selection to be installed


Figure 17: Grub loader for Ubuntu Server

Figure 18: Installation is finished

Figure 19: Ubuntu Server is operational


APPENDIX C: OPENSTACK SYSTEM OVERVIEW


Figure 20: Admin view on overviews of usage summary

Figure 21: Hypervisors of available OpenStack server


Figure 22: Host aggregates and availability zones

Figure 23: Admin can view all instances launched by users


Figure 24: Admin can view all volume created by users

Figure 25: Admin can set flavor for users to use


Figure 26: System info on services available

Figure 27: System info on compute service available


Figure 28: System info on quotas set for users

Figure 29: Projects available to be assigned for new user


Figure 30: User accounts available on the server

Figure 31: User views on overview page


Figure 32: List of instances launched by user

Figure 33: Volumes and snapshots page for user to create additional storage on

instances


Figure 34: List of images available for users

Figure 35: Security group on access and security option for users

Figure 36: Key Pairs for Linux based instances


Figure 37: Floating IPs to be assigned for instances

Figure 38: API access option


APPENDIX D: ACADEMIC RESEARCH PAPER


APPENDIX D: ACADEMIC WRITING

Academic Writing (Final Year Project)

Mohammad Hafiz Bin Zubair, Madam Roziyani Binti Rawi

BNS Cloud Computing System

Bachelor in Networking System

University Kuala Lumpur,

Malaysian Institute of Information Technology

Jalan Sultan Ismail Kuala Lumpur, Malaysia.

[email protected]

Abstract - This study focuses on how cloud computing works and implements it in a real-world situation, where the system can be accessed throughout the UniKL City Campus MIIT and used by students to do their work or assignments. The methodology used in this project is the Rapid Application Development (RAD) model because it provides continuous development and documentation that can be reviewed regarding the performance of the system. This methodology consists of eight phases: initiation, research, planning, design, development, testing, implementation and analysis. A work breakdown structure (WBS) is created to further define the total scope of this project, giving a step-by-step view of what needs to be done in each phase until the project is completed. The project is developed using the OpenStack cloud management software on a single all-in-one hardware server capable of creating five to six instances. It is tested in a controlled environment where each of the instances is loaded with GNS3 and the HeavyLoad software to simulate real workloads in terms of CPU utilization, RAM usage and network load. During the session the server was also monitored using a network monitoring system tool to capture the server state. Even though the server was pushed to its limits, it could still create additional instances using the available resources, but the created instance faced problems when loading a high-compute application such as GNS3, where it failed to start the virtual routers.

INTRODUCTION

The BNS Cloud Application system is developed to help students with assignments that require high-compute applications such as GNS3. To achieve this, the server is developed using the OpenStack cloud management technology, which offers private cloud features such as an Infrastructure as a Service (IaaS) platform. This platform allows the creation of virtual machines pre-installed with the software needed by BNS students, accessible over the network, without burdening their own laptops with the computational workload.

PROBLEM STATEMENT

Many new applications today are designed for high-end computers, requiring a fast processor with multiple cores and a certain amount of RAM. GNS3, for example, states in its forum that it requires at least 2GB of RAM just to run three or more virtual routers efficiently. It is also very costly to change or replace obsolete hardware in the UniKL lab just to run these applications. By investing in one or more server nodes at less cost, these applications can run virtually and be accessed anywhere inside the UniKL MIIT campus network.

OBJECTIVE

The objectives of this project are as follows:


1. To study and understand how to configure, set up and to apply the OpenStack system into the existing UniKL network.

2. To develop BNS Cloud Server system using Linux Ubuntu.

3. To analyse the performance of the BNS Cloud Server System among the control group of BNS students.

LIMITATION

There are several limitations of this project that cannot be avoided.

1. The server specification will only be able to support five users at a time for optimal usage.

2. The performance of the server will be affected as the number of users connected to the server increases.

3. A quota will be allocated for each BNS student's account so that they can store their work as long as it does not exceed the quota allocated to them.

4. The server can only be accessed internally by BNS students, which means this is a private cloud.

LITERATURE REVIEW

Cloud computing is divided into three main platforms that are widely supported by most cloud computing providers (http://www.cloudave.com, 2013). The three platforms are shown in the figure below.

Three cloud computing infrastructure

In this FYP project, only one category will be covered, which is Infrastructure as a Service (IaaS), to be provided to BNS students as a learning platform. IaaS is the collection of software and hardware, namely servers, storage, networks and operating systems, to be utilized by the consumer [2]; all the resources available in the cloud can be used by the consumer without buying all that expensive equipment on their own [3]. This project also uses the private cloud computing model, which is dedicated to a particular organization that hosts the cloud server in its own building, in this case the UniKL MIIT campus [8].

Table: Overview of all journals used in this project

Title | Author | Previous Project | Current Project
A View of Cloud Computing | Michael Armbrust | Potential of cloud computing | Implementing the potential of cloud computing on the UniKL campus
Performance Analysis of Cloud Computing for Distributed Client | Dr. Neeraj Bhargava | Using simulation tools to simulate and measure performance and load-balancing of the cloud server | Using the real cloud system to measure performance and utilization of the cloud server
A Comparison and Critique of Eucalyptus, OpenNebula and Nimbus | Peter Sempolinski | Different types of cloud computing architecture | Suitable architecture for the UniKL network
Open Source Cloud Computing Management Platforms | Stefan Wind | Cloud computing management tools | Comparison of different cloud management tools
Cloud Computing: Different Approach & Security Challenge | Maneesha Sharma | Security challenges for each cloud type | Providing authentication for students when accessing the cloud computing system
Comparison of Open-Source Cloud Management Platforms | Xiaolong Wen | Comparison and features of OpenStack and OpenNebula | Using OpenStack because it is easy to set up and suitable for a small network

RESEARCH METHODOLOGY

Rapid Application Development (RAD), adopted from Dan Gielan in the mid-1970s, is used in this project because it provides continuous development of the system and documentation that can be reviewed regarding the performance of the system. The documentation can then be used by others to improve this project. The figure below shows the step-by-step cycle of the RAD model.


Rapid Application Development Model

PROJECT DEVELOPMENT

The development of the cloud computing server and the setup of the user instances' operating system, or basic image, will be discussed in this section. The figure below shows how the OpenStack system is deployed in a real situation.

Network diagram of this project

The server uses the all-in-one (AIO) configuration, which means all the components needed to run the OpenStack system are installed on one physical machine running Ubuntu Server 12.04.4. The machine has one CPU with six cores, 12GB of RAM, a 300GB hard disk and a 1Gbps Ethernet adapter. With these specifications, the server is able to handle six instances, with each instance configured with 1 VCPU, 1GB of RAM and a 20GB disk. The figure below shows an example of one instance running with this configuration.

Running instances

After the instance is executed and booted up, it is connected using third-party software, RealVNC Viewer, because the default Windows remote desktop connection cannot be used without extra configuration through the Cloud-Init add-on. The server performance, such as CPU, RAM and bandwidth utilization, is monitored during the session using the PRTG network monitoring system. The figure below shows the sensors used to monitor the server performance.

Sensors monitoring server performance

TESTING AND RESULT

For the testing, this project used three methods to test the performance and capabilities of OpenStack:

1. Creating instances

2. Common approach

3. Stressing the OpenStack server

Creating instances simply tests how the resources inside the server are managed by OpenStack when a new instance is created. The common approach tests how the server handles the instances while they execute GNS3 with three virtual routers running a simple EIGRP topology. Stressing the OpenStack server fully maxes out the server by creating seven instances, with each instance running the HeavyLoad software to simulate heavy CPU usage. The results of each method are as follows:

1. Creating instances: The first time the image is converted into an instance, it takes up most of the server resources in order to boot up the instance with the defined settings. During this process, the server response is very slow and the process can take more than 10 minutes. After the first instance is stable, creating another instance of the same image does not take very long and is almost instant because the configuration already exists, even though it still takes most of the server resources.

2. Common approach: GNS3 can successfully execute the EIGRP topology without any problem and the server still has resources to handle more virtual routers or a more complex configuration. Even though the routers have been configured with an idle PC value, they still show significant activity in processor usage, but the server performance remains the same and instance responsiveness is only affected by a slight delay when loading another application or opening a folder inside the instance.

3. Stressing the OpenStack server: When the server resources are 100% used by all the instances, the server can still create additional instances without any problem. The problem comes when the instance has finished loading into the operating system and runs a compute-intensive application, in this case GNS3; it fails to execute the virtual routers. This probably happens because the server has no processor resources left to spare to start the routers. Other less intensive applications have no problem executing, but they take some time to load.

The figure below shows an example of one of the testing results.

CPU utilization on method three

CONCLUSION

The primary objective of this project is to create the BNS Cloud Application system using the OpenStack cloud management system and to test the performance of the server when handling multiple instances. The cloud computing server has been successfully developed using an AIO physical machine, deployed in the UniKL network, and can be accessed anywhere inside UniKL MIIT using wired and wireless network connections. The server has been tested using three methods in order to test its performance; the tests ran successfully and the server is able to execute an extra instance, even though it had maxed out its capacity of six instances. In short, the OpenStack cloud management system can fulfil any requirements an organization wants to use the system for, if it is properly configured and provided with enough compute power to run many instances created by students. Its flexibility means that each component of the OpenStack system can be installed separately on different physical machines for optimal and maximum use of the server performance to support many students.

RECOMMENDATION

For further deployment of the server to support many students at a time, the OpenStack server can be set up using multiple compute nodes with the same processor architecture. This can increase the capacity of the server in terms of the number of VCPUs and RAM capacity, allowing it to handle many instances operating at a time without compromising the performance of the server. The Windows-based images inside the OpenStack server can be further configured to use Cloud-Init and the remote desktop connection, so that every time an instance is started using the security feature inside OpenStack, the username and password used to log on to the instance are changed according to the specified security option, making it more secure so that only the authorized student who created the instance can use it.

REFERENCES

[1] Michael, Armbrust. , et al. "A View of Cloud Computing." COMMUNICATIONS OF THE ACM. 53.4 (2010): 50-58. Web.

[2] Sharma, Maneesha. , et al. "Cloud Computing: Different Approach & Security Challenge." International Journal of Soft Computing and Engineering (IJSCE). 2.1 (2012): 421-424. Web.

[3] Murugaboopathi, G., et al. "Study on Cloud Computing and Security Approaches." International Journal of Soft Computing and Engineering (IJSCE) 3.1 (2013): 212-215. Web.

[4] Wind, Stefan. "Open Source Cloud Computing Management Platforms." 2011 IEEE Conference on Open Systems (ICOS2011) (2011): 175 - 179. Web.


[5] Bhargava, Dr. Neeraj, et al. "Performance Analysis of Cloud Computing for Distributed Client." International Journal of Computer Science and Mobile Computing 2.6 (2013): 97-104. Web.

[6] Sempolinski, Peter, et al. "A Comparison and Critique of Eucalyptus, OpenNebula and Nimbus." 2nd IEEE International Conference on Cloud Computing Technology and Science (2010): 417-426. Web.

[7] OpenNebula - Flexible Enterprise Cloud Made Simple. N.p., n.d. Web. 26 Dec. 2013. <http://opennebula.org/>.

[8] "Types of Cloud Computing: Private, Public and Hybrid Clouds." Appcore - Cloud Business Blog. N.p., n.d. Web. 26 Dec. 2013. <http://blog.appcore.com/blog/bid/167543/Types-of-Cloud-Computing-Private-Public-and-Hybrid-Clouds>.

[9] "Private Cloud Computing." Interesting and informative articles. N.p., n.d. Web. 26 Dec. 2013. <http://abhijit.snydle.com/private-cloud-computing.html>.

[10] Xiaolong Wen, etc. "Comparison of open-source cloud management platforms: OpenStack and OpenNebula," Fuzzy Systems and Knowledge Discovery (FSKD), 2012 9th International Conference on , vol., no., pp.2457,2461, 29-31 May 2012