
International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-

6367(Print), ISSN 0976 – 6375(Online) Volume 4, Issue 3, May – June (2013), © IAEME


RESOURCE PROVISIONING FOR VIDEO ON DEMAND IN SAAS

Praveen Reshmalal 1, Dr. S. H. Patil 2

1 Research Scholar, Bharati Vidyapeeth Deemed University College of Engineering

2 Guide, Bharati Vidyapeeth Deemed University College of Engineering

ABSTRACT

A cloud-based video-on-demand (VoD) solution is proposed for monitoring a camera that is accessible to clients on demand. The camera is attached to a server machine managed by a cloud controller, and scheduling algorithms are used to handle multiple concurrent requests. The software provides remote access to the camera through the cloud architecture. All of these actions are carried out transparently in the background by the cloud controller, without requiring any intervention from the user.

Keywords: VoD; CloudSim

I. INTRODUCTION

The term cloud computing refers to remote computing services offered by third parties over a TCP/IP connection to the public Internet [1]. Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service-provider interaction. It offers reliable services delivered through data centers that are built on compute and storage virtualization technologies [5]. It is therefore a technology that aims to deliver on-demand IT resources on a pay-per-use basis, and the cloud uses the stateless protocol HTTP to communicate with client computers. The cloud computing architectural model is shown in Figure 1.



Fig.1 The Cloud Computing Architectural model

II. WHAT IS CLOUD COMPUTING?

Cloud computing is an evolution of distributed computing, parallel computing and grid computing: a large computation is split automatically into smaller subroutines, dispatched through the grid to an extensive system of multiple servers, and the results are returned to the user after calculation and analysis. Through cloud computing, network service providers can process tens of millions or even billions of items of information in seconds, achieving network services as powerful as a "supercomputer".

Cloud computing relies on virtualization: applications are delivered over an Internet connection rather than being installed on every office computer. Using virtualization, users can access servers or storage without knowing the specific server or storage details; the virtualization layer executes each user request for computing resources by mapping it to the appropriate physical resources. Virtualization can be applied to many types of computing resource: infrastructure such as storage, network and compute (CPU, memory, etc.), platforms (such as a Linux or Windows OS), and software delivered as a service.

Cloud computing has the potential to turn the idea of "computing as a utility" into reality in the near future; the Internet is often represented as a cloud, hence the term "cloud computing". Cloud computing is the dynamic provisioning of IT capabilities (hardware, software, or services) from third parties over a network [1][2][9]. These IT services are delivered on demand and elastically, in the sense that they are able to scale out and scale in. The sections below briefly describe different types of cloud computing and how virtual machines (VMs) can be provided as cloud Infrastructure as a Service (IaaS).

III. MODELING THE VM ALLOCATION [5][6]

Cloud computing infrastructure involves the massive deployment of virtualization tools and techniques: it adds an extra layer, the virtualization layer, that acts as the creation, execution, management, and hosting environment for application services.


The VMs modeled in this virtual environment are contextually isolated, but they still share computing resources such as processing cores and the system bus. Hence, the amount of hardware available to each VM is constrained by the total processing power (CPU), memory, and system bandwidth available within the host. The choice of virtual machine means that a configuration of CPU, memory, storage, bandwidth, etc. can be selected that is optimal for a given application.

CloudSim supports VM provisioning at two levels:

- At the host level, it is possible to specify how much of the overall processing power of each core is assigned to each VM. This is known as the VM allocation policy.

- At the VM level, the VM assigns a fixed amount of its available processing power to the individual application services (task units) hosted within its execution engine. This is known as VM scheduling.
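The two levels can be illustrated with a short sketch (plain Python, not the CloudSim API; the function name and the MIPS figures are assumptions for illustration):

```python
# Time-shared provisioning at both levels: a host splits a core's MIPS
# among its VMs, and a VM splits its allotted MIPS among its task units.

def time_shared(capacity, demands):
    """Split `capacity` proportionally to the entries of `demands`."""
    total = sum(demands)
    if total == 0:
        return [0.0] * len(demands)
    return [capacity * d / total for d in demands]

# Host level (VM allocation policy): a 3000-MIPS core shared by two VMs
# requesting 2000 and 1000 MIPS.
vm_shares = time_shared(3000, [2000, 1000])      # [2000.0, 1000.0]

# VM level (VM scheduling): the first VM time-shares its 2000 MIPS among
# three equally weighted task units.
task_shares = time_shared(vm_shares[0], [1, 1, 1])
```

A space-shared policy would instead grant each VM (or task unit) exclusive use of a core until it completes.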

At each level, CloudSim implements both time-shared and space-shared provisioning policies. In this paper, we propose a VM load-balancing algorithm at the VM level (time-shared VM scheduling), in which individual application services are assigned varying amounts of the available processing power of the VMs. This reflects the real world, where the VMs in a data center do not all have a fixed amount of processing power; it varies across computing nodes at different ends.

Given VMs of different processing powers, the tasks/requests (application services) are assigned first to the most powerful VM, then to the next most powerful, and so on, according to the priority weights they are given. As a result, performance parameters such as overall response time and data processing time are optimized.
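This allocation order can be sketched as follows (a simplified illustration; `assign_by_power`, the MIPS ratings and the task lengths are hypothetical, and capacity is treated as a simple consumable number):

```python
# Greedy sketch: each task goes to the VM with the most remaining
# capacity, so more powerful VMs receive proportionally more work.

def assign_by_power(tasks, vm_mips):
    remaining = list(vm_mips)        # remaining capacity per VM
    placement = []
    for t in tasks:
        i = max(range(len(remaining)), key=lambda k: remaining[k])
        placement.append(i)          # task t runs on VM i
        remaining[i] -= t
    return placement

# Three VMs of unequal power, six equal tasks: the 1000-MIPS VM ends up
# with three tasks, the 500-MIPS VM with two, the 250-MIPS VM with one.
placement = assign_by_power([300] * 6, [1000, 500, 250])
```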

IV. LOAD BALANCING IN CLOUD COMPUTING

Load balancing is the process of distributing load among the various resources of a system. In a cloud-based architecture, load must be distributed over the resources so that each resource performs approximately the same amount of work at any point in time. The basic need is for techniques that balance requests so that the application produces its results faster.

Cloud vendors offer automatic load-balancing services, which allow clients to increase the number of CPUs or the amount of memory for their resources to scale with increased demand. This service is optional and depends on the client's business needs. Load balancing thus serves two important needs: primarily to promote the availability of cloud resources, and secondarily to promote performance [2,4].

In order to balance requests across resources, it is important to recognize a few major goals of load-balancing algorithms:

a) Cost effectiveness: the primary aim is to achieve an overall improvement in system performance at a reasonable cost.

b) Scalability and flexibility: the distributed system in which the algorithm is implemented may change in size or topology, so the algorithm must be scalable and flexible enough to allow such changes to be handled easily.

c) Priority: resources or jobs need to be prioritized beforehand within the algorithm itself, so that important or high-priority jobs receive better service, rather than all jobs receiving equal service regardless of their origin.


Brief reviews of a few existing load-balancing algorithms follow.

I. Token Routing: The main objective of this algorithm [2,4] is to minimize system cost by moving tokens around the system. In a scalable cloud system, however, agents cannot have enough information to distribute the workload, owing to communication bottlenecks, so the workload distribution among the agents is not fixed. This drawback of token routing can be removed with a heuristic, token-based approach to load balancing, which provides fast and efficient routing decisions. In this approach, an agent does not need complete knowledge of the global state or of its neighbours' workloads; to decide where to pass a token, agents build their own knowledge base, derived from previously received tokens, so no communication overhead is generated.

II. Round Robin: In this algorithm [2,5], the processes are divided among all processors: each process is assigned to a processor in round-robin order. The process allocation order is maintained locally, independent of the allocations on remote processors. Although the workload is distributed equally among processors, the processing times of different jobs are not the same, so at any point in time some nodes may be heavily loaded while others remain idle. This algorithm is mostly used in web servers, where HTTP requests are of a similar nature and can be distributed equally.
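A minimal sketch of this dispatch order (names are illustrative):

```python
# Round-robin dispatch: requests are handed to servers in cyclic order,
# regardless of how long each job will actually run.
from itertools import cycle

def round_robin(requests, servers):
    order = cycle(servers)
    return {req: next(order) for req in requests}

mapping = round_robin(["r1", "r2", "r3", "r4", "r5"], ["s1", "s2", "s3"])
# r1->s1, r2->s2, r3->s3, then wraps around: r4->s1, r5->s2
```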

III. Randomized: The randomized algorithm is static in nature. In this algorithm [2,5], a process is handled by a particular node n with probability p. The process allocation order is maintained for each processor independently of allocations on remote processors. This algorithm works well when the processes are equally loaded; problems arise when the loads have different computational complexities. The randomized algorithm does not follow a deterministic approach. It works well when the round-robin algorithm generates overhead for the process queue.
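The policy can be sketched as follows (node names and probabilities are assumptions for illustration):

```python
# Randomized (static) dispatch: each request goes to node n with a fixed
# probability p_n, independent of the nodes' current load.
import random

def randomized_dispatch(requests, probabilities, seed=None):
    rng = random.Random(seed)                  # seeded for reproducibility
    nodes = list(probabilities)
    weights = [probabilities[n] for n in nodes]
    return {req: rng.choices(nodes, weights=weights)[0] for req in requests}

mapping = randomized_dispatch(range(1000), {"n1": 0.5, "n2": 0.3, "n3": 0.2}, seed=42)
```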

IV. Central Queuing: This algorithm [1,3] works on the principle of dynamic distribution. Each new activity arriving at the queue manager is inserted into the queue. When the queue manager receives a request for an activity, it removes the first activity from the queue and sends it to the requester. If no ready activity is present in the queue, the request is buffered until a new activity becomes available; if a new activity arrives while there are unanswered requests in the queue, the first such request is removed from the queue and the new activity is assigned to it. When a processor's load falls below a threshold, the local load manager sends a request for new activity to the central load manager, which answers the request if a ready activity is found and otherwise queues the request until a new activity arrives.
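The matching rule described above, as a minimal sketch (the class and method names are hypothetical):

```python
# Central queue: ready activities and unanswered requests meet at a
# central manager; whichever arrives first waits for the other.
from collections import deque

class CentralQueueManager:
    def __init__(self):
        self.activities = deque()   # ready activities, oldest first
        self.requests = deque()     # buffered, unanswered requests

    def new_activity(self, activity):
        if self.requests:           # an unanswered request is waiting
            return (self.requests.popleft(), activity)
        self.activities.append(activity)
        return None

    def request(self, requester):
        if self.activities:         # hand out the oldest ready activity
            return (requester, self.activities.popleft())
        self.requests.append(requester)   # buffer until work arrives
        return None

mgr = CentralQueueManager()
mgr.new_activity("a1")
print(mgr.request("p1"))       # ('p1', 'a1')
print(mgr.request("p2"))       # None: buffered until an activity arrives
print(mgr.new_activity("a2"))  # ('p2', 'a2')
```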

V. Least-Connection Mechanism: A load-balancing algorithm [6] can also be based on the least-connection mechanism, which belongs to the family of dynamic scheduling algorithms. It estimates load by dynamically counting the number of connections to each server: the load balancer records each server's connection count, increasing it when a new connection is dispatched to that server and decreasing it when a connection finishes or times out.
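A minimal sketch of this counting scheme (names are illustrative):

```python
# Least-connection balancing: track live connections per server and send
# each new connection to the server with the fewest.

class LeastConnectionBalancer:
    def __init__(self, servers):
        self.conns = {s: 0 for s in servers}

    def dispatch(self):
        server = min(self.conns, key=self.conns.get)
        self.conns[server] += 1    # a new connection was dispatched
        return server

    def finish(self, server):
        self.conns[server] -= 1    # connection finished or timed out

lb = LeastConnectionBalancer(["s1", "s2"])
print(lb.dispatch())   # s1
print(lb.dispatch())   # s2
lb.finish("s1")
print(lb.dispatch())   # s1: it now has the fewest live connections
```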


Table 1 presents a comparative study of the above-mentioned load-balancing algorithms:

Algorithm          Nature    Environment     Process Migration   Steadiness   Resource Utilization
Token Routing      Dynamic   Decentralized   Possible            Unstable     More
Round Robin        Static    Decentralized   Difficult           Stable       Less
Randomized         Static    Decentralized   Difficult           Stable       Less
Central Queuing    Dynamic   Centralized     Difficult           Unstable     Less
Least Connection   Dynamic   Centralized     Difficult           Stable       Less

V. SYSTEM ARCHITECTURE

The following structure shows the architecture of the cloud-based VoD system. A node controller controls the camera; the user sends a request through the cloud controller and receives the live video feed from the camera, also through the cloud controller.

On-Demand Cloud Architecture for Video Applications: On-demand videos can be delivered to subscribers through different network structures, defined by the video server location and the network between the video servers and the subscriber. In many cases a proxy server, located closer to the subscribers, is used to decrease network traffic and delays through a high-speed and robust connection. But a proxy server has finite storage and distribution capacity, and therefore a popularity scheme is needed to assist in the selection of videos for caching. Video servers, on the other hand, have a finite capacity and can only service a limited number of requests at one time. With a large content library and unforeseen spikes in the number of active subscribers, telcos are looking for ways to keep service-call rejections to an absolute minimum. The figure shows the system architecture of the on-demand cloud for IPTV. Videos can be streamed from any of the virtual servers, irrespective of capacity, and the servers are realigned continuously, notably to handle peak loads, to avoid overload, and to achieve continuous, high utilization levels while meeting the Service Level Agreements (SLAs). In most cases performance is not affected, as each virtual server behaves like a dedicated server; however, when too many virtual servers reside on a single physical machine, services may be delivered more slowly [8].

Fig. 2 System Architecture
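The popularity scheme mentioned for proxy caching is not specified in the paper; one plausible reading is a least-frequently-used policy, sketched here under that assumption (`PopularityCache` and all values are hypothetical):

```python
# Popularity-based proxy caching: with finite capacity, the proxy keeps
# only the most-requested videos; everything else is fetched from the
# origin video server.

class PopularityCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.hits = {}             # video id -> request count

    def cached(self):
        # The `capacity` most-requested videos are the ones kept locally.
        ranked = sorted(self.hits, key=self.hits.get, reverse=True)
        return set(ranked[:self.capacity])

    def request(self, video):
        hit = video in self.cached()
        self.hits[video] = self.hits.get(video, 0) + 1
        return "proxy" if hit else "origin"   # a proxy hit avoids backbone traffic

cache = PopularityCache(capacity=2)
for v in ["v1", "v1", "v2", "v3", "v1"]:
    cache.request(v)
print(sorted(cache.cached()))   # ['v1', 'v2']: the two most-requested videos
```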


VI. SIMULATION

Simulation is a technique in which a program models the behaviour of a system (CPU, network, etc.) by calculating the interactions between its different entities using mathematical formulas, or by capturing and playing back observations from a production system. The simulation tools available for cloud computing today include SimJava, GridSim and CloudSim.

6.1 CloudSim [1][3][6]

CloudSim is a framework developed by the GRIDS Laboratory of the University of Melbourne that enables seamless modeling, simulation and experimentation in the design of cloud computing infrastructures. CloudSim is a self-contained platform that can be used to model video on demand, hosts, service brokers, and the scheduling and allocation policies of a large-scale cloud platform. The CloudSim framework is built on top of the GridSim framework, also developed by the GRIDS Laboratory. Hence, the researcher has used CloudSim to model video-on-demand hosts and VMs for experiments in a simulated cloud environment.

A virtual machine abstracts an OS, and the applications running on it, from the hardware. The internal hardware infrastructure services related to the cloud are modelled in the CloudSim simulator by a video-on-demand element that handles service requests. These requests are application elements sandboxed within VMs, which must be allocated a share of the processing power of the video-on-demand host components. The video-on-demand object manages data-management activities such as VM creation and destruction, and routes user requests.

6.2 Results

In this section, we evaluate the performance of the cloud-based load balancer. The main quantities investigated in this paper are:

- λm: effective arrival rate at the main server
- λc: effective arrival rate at the cloud server
- W: average waiting time in the system
- D: average delay in the buffer
- S: average service time in each server
- L: average number of requests in the system
- Q: average number of requests in the buffer
- X: average number of requests per server (server utilization)
- Pr: probability that a request is rejected
- Pd: probability that a request is serviced without being buffered
- Pb: probability that a request is serviced after being buffered

The simulator was validated by comparing its results with those of the proven formulas for the M/M/c/k queuing system. The formulas for the average waiting times in the system and in the buffer were compared with the counterpart results of the simulator, which draws exponentially distributed random variables for both the inter-arrival time and


the service time. Additionally, the average numbers of requests in the system and in the buffer obtained from the theoretically proven formulas and from the simulation were also compared. The theoretical and simulation results were almost identical. Figure 1 shows the comparison of the average numbers in the system; the average waiting times were too small to be presented.

Figure 1. Simulation validation. E[L]: expected number of requests in the system; E[Q]: expected number of requests in the queue.

CloudSim was used to calculate the average number of requests in the system. Little's law states that, in the steady state of a system, the average number of requests equals their average arrival rate multiplied by their average time spent in the system. Little's law was used to derive the average number of requests in the system, in the buffer, and per server. To further validate the simulation results, the following equations were used and verified to hold:

W = D + S

L = Q + X

Pr + Pd + Pb = 1
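These identities can be checked numerically with a small discrete-event simulation of an M/M/c/k queue; the sketch below uses assumed, illustrative parameters (the arrival rate, service rate, c and k are not the paper's values):

```python
# M/M/c/k simulation: Poisson arrivals, exponential service, c servers,
# at most k requests in the system; used to verify W = D + S and
# Pr + Pd + Pb = 1.
import heapq
import random

def simulate_mmck(lam, mu, c, k, n_arrivals, seed=1):
    rng = random.Random(seed)
    free_at = [0.0] * c        # heap: time each server next becomes free
    in_system = []             # heap: departure times of accepted requests
    t = 0.0
    delays, waits, services = [], [], []
    rejected = direct = 0
    for _ in range(n_arrivals):
        t += rng.expovariate(lam)              # Poisson arrival process
        while in_system and in_system[0] <= t:
            heapq.heappop(in_system)           # purge completed requests
        if len(in_system) >= k:
            rejected += 1                      # system full: reject
            continue
        start = max(t, heapq.heappop(free_at)) # FIFO, earliest-free server
        svc = rng.expovariate(mu)
        heapq.heappush(free_at, start + svc)
        heapq.heappush(in_system, start + svc)
        delays.append(start - t)               # D component (buffer delay)
        services.append(svc)                   # S component (service time)
        waits.append((start - t) + svc)        # W = D + S per request
        if start == t:
            direct += 1                        # served without buffering
    acc = len(waits)
    return {
        "W": sum(waits) / acc, "D": sum(delays) / acc, "S": sum(services) / acc,
        "Pr": rejected / n_arrivals, "Pd": direct / n_arrivals,
        "Pb": (acc - direct) / n_arrivals,
    }

stats = simulate_mmck(lam=5.0, mu=1.0, c=4, k=8, n_arrivals=2000)
```

By construction each accepted request's time in the system is its buffer delay plus its service time, so W = D + S holds exactly, and the three probabilities partition all arrivals, so Pr + Pd + Pb = 1.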

Figure 2 shows the breakdown of the average time spent in the system. The time spent in the servers is fairly constant and represents the average service time. The request response time is dominated by the service time, as the buffering time represents only a small portion of the total response time.


The average number of requests is shown in Figure 3. The figure shows that the number of requests in the main servers is 30% greater than in the cloud server. It also shows that the requests' occupancy of the buffer is noticeable only at higher offered loads, ρ ≥ 70%. This shows that, for the specification model under consideration, the buffer plays a remarkable role only when the system is under stress.

Figure 4 shows that the main server is utilized at least 30% more than the cloud-based server. This result is helpful in sizing the hardware of a load-balanced main/cloud server system under a given workload.

VII. CONCLUSION

We evaluated the performance of a load-balanced cloud server system under different offered loads. The results show that the buffer of the load balancer plays a marginal role except at very high loads. They also show that the main server handles at least 30% more requests than the cloud-based server. It would be very informative to pursue the study of optimizing the buffer size so as to minimize the rejection probability. Future work will compare the performance of systems under different combinations of service-time and inter-arrival-time distributions.


VIII. REFERENCES

[1] R. Buyya, R. Ranjan, R. Calheiros, "Modeling and simulation of scalable Cloud computing environments and the CloudSim toolkit: Challenges and opportunities", Proceedings of the Conference on High Performance Computing and Simulation (HPCS 2009), June 2009.

[2] J. Cao, G. Bennett, K. Zhang, "Direct execution simulation of load balancing algorithms with real workload distributed", Journal of Systems and Software, vol. 54, no. 3, pp. 227-237, November 2000.

[3] Y. Cheng, K. Wang, R. Jan, C. Chen, C. Huang, "Efficient failover and load balancing for dependable SIP proxy servers", IEEE Symposium on Computers and Communications, pp. 1153-1158, 2008.

[4] A. Downey, "Evidence for long-tailed distributions in the internet", Proceedings of the 1st ACM SIGCOMM Workshop on Internet Measurement, pp. 229-241, 2001.

[5] A. Downey, "Lognormal and Pareto distributions in the Internet", Computer Communications, vol. 28, no. 7, pp. 790-801, 2005.

[6] D. Ersoz, M. S. Yousif, C. Das, "Characterizing network traffic in a cluster-based, multi-tier data center", Proceedings of the 27th International Conference on Distributed Computing Systems (ICDCS'07), pp. 59-68, 2007.

[7] Bhathiya Wickremasinghe, Rodrigo N. Calheiros, Rajkumar Buyya, "CloudAnalyst: A CloudSim-based visual modeller for analysing cloud computing environments and applications", 20-23 April 2010, pp. 446-452.

[8] Cloud Computing Insights from 110 Implementation Projects, IBM Academy of Technology Thought Leadership White Paper, October 2010.

[9] Ioannis Psoroulas, Ioannis Anagnostopoulos, Vassili Loumos, Eleftherios Kayafas, "A study of the parameters concerning load balancing algorithms", IJCSNS International Journal of Computer Science and Network Security, vol. 7, no. 4, 2007, pp. 202-214.

[10] Sandeep Sharma, Sarabjit Singh, Meenakshi Sharma, "Performance analysis of load balancing algorithms", World Academy of Science, Engineering and Technology, 38, 2008, pp. 269-272.

[11] D. Asir, Shamila Ebenezer, Daniel D., "Adaptive load balancing techniques in global scale grid environment", International Journal of Computer Engineering & Technology (IJCET), Volume 1, Issue 2, 2010, pp. 85-96.

[12] Abhishek Pandey, R. M. Tugnayat, A. K. Tiwari, "Data security framework for cloud computing networks", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 1, 2013, pp. 178-181.

[13] Gurudatt Kulkarni, Jayant Gambhir, Amruta Dongare, "Security in cloud computing", International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 1, 2013, pp. 258-265.

