Sercon: Server Consolidation Algorithm using Live Migration of Virtual Machines for Green Computing

Aziz Murtazaev and Sangyoon Oh
Department of Computer and Information Engineering, Ajou University, Suwon, Korea

Abstract

Virtualization technologies have changed the way enterprise data centers utilize their server resources. Instead of using dedicated servers for each type of application, virtualization allows viewing resources as a unified pool, thereby reducing complexity and easing manageability. Server consolidation, which reduces the number of servers in use by consolidating applications, is one of the main applications of virtualization in data centers. It helps to use computing resources more effectively and has many benefits, such as reducing the costs of power and cooling, and hence contributes to the Green IT initiative. In a dynamic data center environment, where applications encapsulated as virtual machines are mapped to and released from the nodes frequently, the number of server nodes in use can be reduced by migrating applications without stopping their services, a technology known as live migration. However, live migration is a costly operation; hence, performing the periodic server consolidation operation in a migration-aware way is a challenging task. We propose a server consolidation algorithm, Sercon, which minimizes not only the overall number of servers used, but also the number of migrations. We verify the feasibility of our algorithm and show its scalability by conducting experiments with eight different test cases.

Keywords: Bin-packing problem, Green IT, Live migration, Server consolidation, Vector-packing problem, Virtualization.

1. Introduction

The terms “green computing” and “green IT” are used frequently these days to refer to energy-efficient solutions in IT. Energy efficiency is of major importance to data centers, as the costs of power and cooling make up a significant part of their operational costs. Energy-efficiency initiatives undertaken by data centers not only reduce their expenses but also contribute to the Green IT policy of minimizing carbon emissions.

There are a number of ways of reducing power usage in data centers. Berl et al. [1] identify four approaches to increasing energy efficiency: hardware-level energy optimization, energy-aware scheduling in grid systems, server consolidation by means of virtualization, and power minimization in wireless and wired networks. Our work is related to the third approach, which deals with consolidating applications onto fewer servers in a virtualized cluster environment.

A large number of underutilized servers has become a major problem in many IT departments. Large analyst firms estimate that 15 to 20 percent utilization is common in enterprises [2]. Low utilization of servers usually results from vendors' requirements to run their applications in isolation for security purposes. Installing a single application on a single machine was the only way of providing isolation until server virtualization technologies appeared in the market.

Virtualization provides a uniform view of a pool of resources in data centers, where a single server can host multiple computing environments, each encapsulated as a virtual machine and isolated from the others. In this way, applications are efficiently decoupled from the hardware, giving data center administrators easy management of virtual machines. Applications encapsulated as virtual machines can be stopped, duplicated, and suspended, which provides an additional management layer between the operating system and the hardware. Besides, saving the state of a virtual machine and keeping this data package in storage as a backup helps in recovering from disasters.

Virtualization is a cornerstone technology for “cloud computing”, a new paradigm for provisioning computing resources. Cloud computing has three major features: scalability, a pay-per-use utility model, and virtualization [3]. Scalability is the distinguishing property of Clouds; whenever Cloud users want to conform leased resources to the demand on their services, they can scale them down or up in a matter of minutes or even seconds.

In this way, Cloud users can avoid over-provisioning or under-provisioning of the resources they lease. Cloud computing provides benefits such as scalability and easy manageability owing to virtualization technology. Cloud services can be classified as Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and many other types. IaaS gives the user the most flexibility to deploy any kind of software and tune their application in a fine-grained manner; Amazon EC2 is an example of IaaS. In PaaS, users are provided a software platform over which they can deploy their services, as in Google App Engine. In SaaS, services are provided to end users directly; Customer Relationship Management (CRM) systems and other business automation applications provided to users over the network are examples of SaaS [4].

Server consolidation is one of the main applications of virtualization technology in data centers. It provides benefits such as reduced overall CPU count, reduced power and management costs, consequent reductions in switching, communication, and storage infrastructure, and a smaller data-center footprint [2,5]. We can differentiate two types of server consolidation. The first is to move a large number of applications from dedicated servers in a non-virtualized cluster to a small number of high-performance virtualized servers. This is usually a batch job, which does not happen frequently. However, server consolidation is not limited to this type of batch activity. Rather, in large virtualized data centers where applications wrapped within virtual machines (VMs) are mapped on demand and released frequently, there is a need to perform the server consolidation operation periodically. This is the second type of server consolidation, where VM mappings are changed by dynamically reallocating VMs without stopping their execution. Throughout the paper, our concern is the second type.

The traditional approach to server consolidation involves manually mapping VMs from source hosts to destination hosts [6]. However, this is not feasible where periodic server consolidation operations are performed, as in the second type. Some mechanism is needed for rearranging VMs to achieve the minimal number of used nodes, which is the goal of server consolidation. We can see this problem as a type of bin-packing problem, with nodes as bins and VMs as items. However, the bin-packing problem does not consider many factors that are important in data centers. To reach the minimal number of nodes, VMs are migrated from one node to another, and the number of migrations should also be minimal, as migration is a costly operation.

The server consolidation operation starts from some intermediate state [Figure 1] and performs several VM migrations to reach a state where a minimal number of nodes is used. We propose a server consolidation algorithm called Sercon, which minimizes not only the number of nodes used, but also the number of migrations. We first model the server consolidation problem, imposing several constraints, and discuss the metrics that estimate the efficiency of our algorithm.

The Sercon algorithm inherits some properties of the well-known heuristic algorithms for bin-packing, First-Fit and Best-Fit. However, First-Fit and Best-Fit target only minimizing the number of nodes, while Sercon targets minimizing the number of migrations as well. The key idea of Sercon is to migrate VMs from the least loaded nodes to the most loaded ones, respecting the constraints described in the problem modeling, so that the least loaded nodes can be released. We discuss how to calculate the load placed on a node by its VMs, and we provide an example illustrating the behavior of the algorithm.

We developed simulation software to test the feasibility of the Sercon algorithm. We conducted experiments with eight different test cases and compared the results with the well-known heuristic algorithm for bin-packing, First-Fit Decreasing (FFD). Our analysis of the results verifies that the algorithm achieves both of the abovementioned goals.

Figure 1: Server consolidation operation.

Besides testing the efficiency of our algorithm, we conduct another set of experiments on its scalability. Dealing with a larger number of VMs raises a scalability issue: as the number of VMs increases, the number of necessary migrations increases correspondingly. Hence, performing all these migrations in a time-efficient manner is another challenging task. We discuss two live migration schemes, parallel and sequential, and show that migrations can be performed in a feasible amount of time even without parallel migrations.

The rest of this paper is organized as follows. In Section 2, we introduce the server consolidation problem and its relationship to the bin-packing problem. Modeling the problem is described in Section 3, and our solution is given in Section 4. In Section 5, we describe the experiment environment first and then give results of the experiment. Related work is discussed in Section 6, and finally we conclude the paper in Section 7.

2. Problem Statement

In a cluster environment, we have a pool of server nodes with applications running on them. Suppose that the cluster is fully virtualized and all applications run on VMs. Virtual machines are launched on some nodes and released from others periodically. After some amount of time, we observe a dispersion of VMs across the pool of nodes. Our task is to consolidate these VMs so that a minimal number of nodes is used. This server consolidation operation is done by migrating VMs from one node to another via the live migration technology [7,8] implemented in the most popular hypervisors, such as Xen and VMware. However, live migration is a costly operation; its costs include CPU processing on the source node, link bandwidth between the source and destination nodes, downtime of the service on the migrating VM, and total migration time [7]. Therefore, our second objective is to minimize the number of migrations.

The problem of VM allocation across a pool of nodes is related to general packing problems. The bin-packing problem [9] is one of them: objects of different sizes must be packed into a finite number of bins so that a minimal number of bins is used. In our case, we can consider VMs as objects and nodes as bins. The bin-packing problem is NP-hard, and a number of heuristic algorithms give sub-optimal results quickly, such as Next-Fit, First-Fit, and Best-Fit (O(n) for Next-Fit, and O(n log n) for First-Fit and Best-Fit) [10].

A number of adaptations to the general bin-packing problem are needed to apply it to our server consolidation problem. First, multidimensionality of resources: VMs and nodes can be characterized by several dimensions, such as CPU, memory, disk, network I/O, and disk I/O. Second, the capacity of bins can be either fixed or variable; that is, unlike the classical bin-packing problem where bin sizes are equal, in our case the capacities of nodes across the cluster can differ, though the variation will be small.

When we consider the multidimensionality of the bin-packing problem, we need to differentiate two variants of the packing problem: rectangle packing and vector packing [11]. Rectangle packing is a geometric problem where a set of n-dimensional objects is packed into n-dimensional bins; objects must fit into the bin without intersecting one another. In vector packing, however, the n dimensions are independent, and the sum of each dimension of the item vectors cannot exceed the corresponding dimension of the bin vector. For n=2, we can clearly see the difference between the problems in Figure 2. In two-dimensional (2D) rectangle packing, we pack rectangles into a minimal number of boxes so that the items do not overlap and are contained entirely inside the box; in 2D vector packing, items are packed corner-by-corner in a diagonal fashion. Therefore, in our case, we consider the vector packing variant of bin packing (referred to simply as the vector packing problem throughout the paper), as resources are independent of one another.

However, our server consolidation problem via reallocating VMs differs from the general bin-packing problem. Bin-packing algorithms try to minimize the number of containers from a “clean state” [12]; that is, items have not yet been loaded, so containers are empty at the initial stage. The server consolidation problem, on the other hand, assumes that the cluster is in an intermediate state, where VMs are already allocated to nodes, and the problem is to re-allocate some of the VMs to minimize the number of used nodes.

Furthermore, there are two types of packing algorithms: online and offline. Online algorithms do not have global information about items; that is, they do not know what kind of item comes next, but pack items one-by-one as they arrive. Offline algorithms, on the other hand, receive all items to be packed as input. In our case, we use an offline algorithm, as the state of the cluster is given as an input.

Figure 2: Visual representation of 2D rectangle packing and vector packing solutions.

3. Server Consolidation Modeling

We need to take into account several constraints in our model. The first set of constraints is related to live migration technology. As we are going to re-allocate VMs by migrating them between nodes, we need to respect the constraints of live migration. Currently, live migration requires compatible virtualization software, comparable CPU types, similar network connectivity, and the use of shared storage on both source and destination nodes [12].

Second, it is necessary to limit the CPU utilization of a single node by some threshold value. This is done to prevent a node’s CPU from reaching 100% utilization, which can lead to performance degradation [13]. Furthermore, live migration itself consumes some CPU cycles; keeping CPU utilization below the threshold value allows retaining a certain level of CPU throughput.

Choosing the right CPU threshold is important: too high a threshold means that the performance of the VMs running on a node may drop significantly, while too low a threshold lowers the effectiveness of consolidation. There is no consensus among researchers on the optimal CPU threshold for consolidation; it ranges from 50% to 75% in different papers [2,14-16]. Hence, for our experiments we take a value in this range.

We do not apply such a threshold to memory because the memory allocated to VMs may exceed the physical memory; in this case, parts of memory are paged to disk [17].

Third, we migrate VMs from a node only if this results in releasing the node. We aim at minimizing both the number of nodes used and the number of migrations; migrating a VM without releasing its node contributes to neither objective. We do not start migrating VMs while running the algorithm. Rather, we first schedule which VMs ought to be moved to which nodes, and after the algorithm finishes we perform the migrations according to the schedule. If we determine that a scheduled VM migration does not release its node, the migration is removed from the schedule.

In our work we use two dimensions to characterize a VM and a node: CPU and memory. We do not consider the disk-size dimension because we assume that network-attached storage (NAS) is used as the main storage across the cluster. This assumption is inherited from live migration technology, as mentioned earlier in the paper.

Let a single node i be represented by a vector ni = (ci, mi), where ci is the overall CPU capacity in GHz and mi is the memory capacity in MB. In the same way, a VM is represented by the vector vj = (vcj, vmj), where vcj and vmj are the CPU and memory capacities of the jth VM, respectively.

Suppose there are M nodes in the cluster and K VMs. Let ‹S1, S2, …, SM› represent the state of the cluster. Si represents the state of node i: Si = {i1, i2, …, ip}, where i1, i2, …, ip ∈ [1, K] are the identification numbers of the VMs currently residing on node i. In other words, Si is the mapping of a single node to a set of VMs.

The CPU threshold value is defined by T, where T ∈ [0, 1). The ratio of the aggregate CPU capacity of the VMs on a node to the CPU capacity of that node must not exceed the threshold value. Formally, this is defined as

(Σ_{j∈Si} vcj) / ci ≤ T (1)

for every node i.

We write v1 < v2 for vectors v1 = (a1, a2, …, an) and v2 = (b1, b2, …, bn) if and only if a1 < b1, a2 < b2, …, an < bn. It is an obvious condition that

Σ_{j∈Si} vj < ni (2)

which means that the overall capacity of a set of VMs residing in the node cannot be greater than the capacity of the node itself.
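
To make the constraint checks concrete, here is a minimal Python sketch of the feasibility test implied by inequalities (1) and (2). The function name and data layout are our own illustrative choices, not taken from the paper's implementation.

```python
def migration_possible(vm, hosted_vms, node, T=0.7):
    """vm and each entry of hosted_vms are (cpu_GHz, mem_GB) tuples;
    node is a (cpu_capacity_GHz, mem_capacity_GB) tuple."""
    cpu = sum(v[0] for v in hosted_vms) + vm[0]
    mem = sum(v[1] for v in hosted_vms) + vm[1]
    # Inequality (1): aggregate VM CPU on the node must not exceed T * ci.
    # Inequality (2): every dimension must stay below the node's capacity.
    return cpu <= T * node[0] and cpu < node[0] and mem < node[1]

# e.g. a (1.0, 0.5) VM fits on a lightly loaded (6, 4) node at T = 0.7:
assert migration_possible((1.0, 0.5), [(1.5, 1.0)], (6, 4))
```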

After applying the server consolidation algorithm, the state of the cluster changes. We have three metrics that can estimate the effectiveness of the server consolidation operation:
1. Number of nodes used
2. Number of nodes released
3. Number of VM migrations

However, we can use only two metrics, the first and the third, as the number of nodes released can be derived from the number of nodes used. Bin-packing problems use the first number as their effectiveness metric.

The server consolidation algorithm should minimize the first and third numbers. If we target only the first, we can achieve a better result by re-allocating many VMs, which is what bin-packing algorithms do. However, this is not desirable due to the costs of live migration described before. Hence, we target a possibly sub-optimal result that takes the third number into account.

We can use a special metric for migration efficiency. Intuitively, we can divide the number of released nodes by the number of VM migrations:

Migration efficiency = (Number of released nodes / Number of migrations) × 100% (3)

For example, if this efficiency equals 40%, it means that each migration contributed to releasing 40% of a single node; 100% migration efficiency tells us that every VM migration resulted in releasing one node.
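
As a sanity check, Equation (3) is a one-liner; the 40% figure above corresponds, for instance, to releasing two nodes with five migrations. The helper name below is ours.

```python
def migration_efficiency(released_nodes, migrations):
    # Equation (3); defined as 0 when no migrations were performed.
    return 100.0 * released_nodes / migrations if migrations else 0.0

assert migration_efficiency(2, 5) == 40.0   # two nodes freed by five moves
assert migration_efficiency(3, 3) == 100.0  # every migration freed a node
```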

In our model, we can convert the two optimization goals (minimizing the number of nodes used and minimizing the number of VM migrations) into a single objective function by introducing a user-defined parameter. We can use one of the following two parameters:
• Maximum allowed number of migrations, m0. This allows us to fix an upper bound on the number of migrations.
• Minimum allowed percentage of migration efficiency, ME0. By this, we can guarantee a lower bound on effectiveness.

By introducing either of these parameters, we add one more constraint to our model and clearly define the objective function: “minimize the number of nodes used with at most m0 migrations” or “minimize the number of nodes used with migration efficiency at least ME0”.

4. Our Proposal: Sercon

In this section we present our proposed algorithm, Sercon, and describe the greedy algorithms for the bin-packing problem against which it will be compared. In particular, we choose one representative of the greedy bin-packing algorithms, FFD, and describe it in detail, adapted to our model. We then discuss the scalability of the Sercon algorithm and provide an example of its operation.

In our model, we consider nodes with both uniform and variable capacity. In the variable-capacity case, we assume that computing nodes in data centers are mostly homogeneous, and even if some nodes have different capacities, the deviation is small. Therefore, for simplicity, we apply bin-packing algorithms and their variations for equal-sized bins to both cases.

4.1 Sercon Algorithm

The algorithm we propose inherits some properties of First-Fit and Best-Fit, which try to minimize the number of nodes used. However, we aim to minimize not only the number of nodes used, but also the number of migrations. Hence, our algorithm behaves differently from those for bin-packing.

Our idea, in simple terms, is the following. First, we sort the nodes by their load from VMs in decreasing order. The VMs on the last (least loaded) node in this list are selected as candidates for migration and are also sorted in decreasing order of weight. We try to allocate them one-by-one to the first (most loaded) node; if the attempt is unsuccessful, then to the second node, and so on. The motivation is that the nodes are sparsely loaded at the beginning, and we try to “compact” them by sorting them first and migrating VMs from the least loaded nodes to the most loaded ones, so that the least loaded nodes can be released. This method also avoids numerous migrations, because we choose a lightly loaded node as the candidate for release on the assumption that it hosts fewer VMs in most cases. All these steps are repeated until no more migrations are possible. We also incorporate the constraints described in the previous section into the algorithm.

Two numbers represent the load placed on a node by its VMs: the CPU and memory capacities of the VMs. Let the CPU load and memory load of node i be cli and mli, respectively; they are derived from the following formulas:

cli = (Σ_{j∈Si} vcj) / ci (4)

mli = (Σ_{j∈Si} vmj) / mi (5)

In order to sort the nodes, we need a single value representing the load of each node. In [18], three ways of calculating a weight over two dimensions are discussed. Denote the weight of item i on the first and second dimensions by wi and vi, respectively, and suppose there are M items to sort. The first way is to sort items by their max{wi, vi} values. The second way, proposed by Spieksma [19], sorts items by μwi + vi, where μ is the relative weight of wi to vi; in Spieksma’s work, items are packed in several iterations, and the value of μ changes in each iteration. The third way ranks items by the “surrogate weight” si = λ·wi + (1 − λ)·vi, where

λ = Σ_{i∈M} wi / Σ_{i∈M} (wi + vi)

We choose the third way to sort nodes in decreasing order, because the parameter λ can be computed from the existing nodes and VMs, unlike the second way, where μ is either chosen arbitrarily or changes at every iteration. So we give each node a score representing its load from VMs on both dimensions:

score(Si) = λ·cli + (1 − λ)·mli (6)

where λ can be calculated by the following formula:

λ = Σ_{i∈M} cli / Σ_{i∈M} (cli + mli) (7)

Similarly, we can give scores to a set of Q VMs based on their CPU and memory parameters, vci and vmi:

score(vi) = λv·vci + (1 − λv)·vmi (8)

where

λv = Σ_{i∈Q} vci / Σ_{i∈Q} (vci + vmi) (9)
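
Equations (4)-(9) translate into a few lines of Python. The sketch below uses plain (cpu, mem) tuples for capacities and assumes at least one VM is placed somewhere (otherwise the denominators of Equations (7) and (9) vanish); all names are our own illustrative choices.

```python
def node_scores(cluster):
    """cluster: list of (hosted_vms, node) pairs, where hosted_vms is a list
    of (vc, vm) tuples and node is a (c, m) capacity tuple."""
    loads = [(sum(v[0] for v in vms) / node[0],   # cl_i, Equation (4)
              sum(v[1] for v in vms) / node[1])   # ml_i, Equation (5)
             for vms, node in cluster]
    lam = sum(cl for cl, _ in loads) / sum(cl + ml for cl, ml in loads)  # (7)
    return [lam * cl + (1 - lam) * ml for cl, ml in loads]               # (6)

def vm_scores(vms):
    """vms: list of (vc, vm) tuples."""
    lam_v = sum(vc for vc, _ in vms) / sum(vc + vm for vc, vm in vms)    # (9)
    return [lam_v * vc + (1 - lam_v) * vm for vc, vm in vms]             # (8)
```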

Note that the methods and variable names in the algorithm shown in Figure 3 are self-explanatory, e.g., calculateMigrationEfficiency, sortNodesByScoreDecr, checkIfMigrationPossible. Sercon starts by initializing two variables, totalMigrations and unsuccessfulMigrationAttempts, which count the total number of migrations and the number of unsuccessful migration attempts, respectively. Both variables are needed to decide when to stop the while loop. In the first case, we pass as input the minimum allowed migration-efficiency percentage ME0, and totalMigrations is needed to calculate the current migration efficiency and compare it with ME0. Here we should point out that we can use the maximum allowed number of migrations m0 instead of ME0, in which case we simply compare m0 with totalMigrations in the while-loop condition. In the second case, unsuccessfulMigrationAttempts is used to prevent an infinite loop. Besides these two variables, we introduce vmMigrationSchedule, which holds the list of migration schedules. A single migration schedule contains the VM to be migrated, the source node, and the destination node.

Inside the while loop, Sercon calculates the score of each node, representing its utilization level. The nodes are sorted in decreasing order of score, and the node to be released is chosen: it is the last busy node (a node hosting at least one VM) minus unsuccessfulMigrationAttempts. The VMs on this node are then sorted and checked as to whether they can be migrated to other nodes. Migration of VMs from a single node follows an all-or-nothing policy: either all VMs on the node are migrated, or, if one of them cannot be placed, none of them are. This logic is incorporated because a VM migration that does not lead to releasing its node is unnecessary. The variable m counts the migrations from a single node and is used to determine whether all its VMs were migrated. The method checkIfMigrationPossible checks whether the given VM and node satisfy inequalities (1) and (2). The algorithm outputs the number of nodes used, the total number of migrations, the number of released nodes, and the migration schedule.

Note that we do not start migrating VMs during algorithm execution; we schedule all the migrations and produce this schedule as output. Given this schedule, how to perform the migrations matters from a scalability point of view: migrating a single VM, waiting for its completion, and only then starting the next migration can be an inappropriate scheme for a large number of VMs. In Section 4.3 we discuss the scalability of the algorithm in more detail, particularly how to perform VM migrations in a time-efficient manner.

4.2 Comparison of Sercon with Greedy Bin-packing Algorithms

The general bin-packing problem has many heuristic algorithms, such as Next-Fit (NF), First-Fit (FF), Best-Fit (BF), First-Fit Decreasing (FFD), and Best-Fit Decreasing (BFD). FFD and BFD differ from FF and BF in that items are sorted by weight in decreasing order before being packed into bins; this sorting improves the solution over FF and BF dramatically [20]. We describe these algorithms for later comparison with our proposed Sercon algorithm.

In the FFD algorithm, items are sorted in decreasing order first, as mentioned before. Then we place each item in the first (lowest-indexed) bin into which it fits. If it does not fit into any of the current bins, we open a new bin and put the item there. The BFD algorithm is the same as FFD except that we place an item into the bin that will have the least amount of space left over after accommodating it.

In experiments comparing Sercon with FFD and BFD, we found that the results of BFD are the same as those of FFD in almost all cases.

Therefore, we take only one representative of the best-known simple heuristics for the bin-packing problem, FFD, and describe it in detail. We also report experiment results in Section 5.3 only for Sercon and FFD, skipping BFD for the abovementioned reason.

To be comparable with Sercon, we adapted the FFD algorithm to our model. In FFD, we assume there are no restrictions on migrating VMs and that the initial state of the cluster is ignored. That is, given the state of the cluster as input, we “take out” the objects from the bins and pack them from scratch. FFD works as follows. First, it sorts the VMs in decreasing order of weight; here we use Equation (8) to calculate the score of each VM, which serves as its weight in sorting. Then, for each VM, FFD allocates it to the first node (the node with the lowest index, given the sequence of nodes) that can accommodate it among the nodes already hosting at least one VM. If no such node is found, the first blank node (containing no VMs) is used. Pseudo-code of this FFD algorithm is given in Figure 4.

Figure 3: Our proposed Sercon algorithm.

Sercon Algorithm
Input: S, V, N, ME0
  totalMigrations ← 0
  unsuccessfulMigrationAttempts ← 0
  vmMigrationSchedule ← {}
  while (getAllowedMigrationAttempts(S) >= unsuccessfulMigrationAttempts)
    if ((this is not the first iteration) and
        (calculateMigrationEfficiency(totalMigrations, S) < ME0)) then
      break (while loop)
    for i ← 1 to |S|
      score[i] ← calculateScore(Si, V, N)
    end for
    S ← sortNodesByScoreDecr(S, score)
    l ← getNodeIndexToBeReleased(S)
    VMsToMig ← getListOfVMsFromNode(S[l], V)
    vmScore[] ← calculateVMScore(VMsToMig)
    VMsToMig ← sortVMsByScoreDecr(VMsToMig, vmScore)
    m ← 0
    foreach vm ∈ VMsToMig
      for j ← 1 to l−1
        success ← checkIfMigrationPossible(vm, S[j], vmMigrationSchedule)
        if (success) then
          create schedule to move vm to S[j]
          add this schedule to vmMigrationSchedule
          m ← m + 1
          break (for loop)
      end for
    end foreach
    if (m = |VMsToMig|) then
      totalMigrations ← totalMigrations + m
      unsuccessfulMigrationAttempts ← 0
    else
      remove the last m added schedules from vmMigrationSchedule
      unsuccessfulMigrationAttempts ← unsuccessfulMigrationAttempts + 1
  end while
Output: numberOfNodesUsed, totalMigrations, numberOfReleasedNodes, vmMigrationSchedule

where
S - current state of the cluster: S = ‹S1, S2, ..., SM›
Si - mapping in the node with id = i: Si = {i1, i2, ..., ip}; i1, i2, ..., ip are the ids of the VMs mapped to node i
V - list of virtual machine vectors: V = ‹v1, v2, ..., vK›
N - list of node vectors: N = ‹n1, n2, ..., nM›
S[i] - node mapping in the ith node after sorting
|x| - cardinality of set x
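
The pseudocode translates to Python fairly directly. The sketch below follows the structure described in the text (score and sort the nodes, try to empty the least loaded busy node, commit its migrations all-or-nothing, stop when the efficiency floor ME0 would be violated), but the identifiers, data layout, and exact stopping details are our own simplifications, not the authors' implementation.

```python
def sercon(state, vms, nodes, T=0.7, me0=10.0):
    """state: list of VM-id lists, one per node; vms/nodes: (cpu, mem) tuples.
    Returns (new_state, schedule), where schedule holds (vm_id, src, dst)."""
    state = [list(s) for s in state]
    initial_busy = sum(1 for s in state if s)
    schedule, failed = [], 0

    def fits(j, i):  # inequalities (1) and (2) for VM j joining node i
        cpu = sum(vms[k][0] for k in state[i]) + vms[j][0]
        mem = sum(vms[k][1] for k in state[i]) + vms[j][1]
        return cpu <= T * nodes[i][0] and mem < nodes[i][1]

    def node_scores():  # Equations (4)-(7)
        loads = [(sum(vms[k][0] for k in s) / n[0],
                  sum(vms[k][1] for k in s) / n[1])
                 for s, n in zip(state, nodes)]
        tot = sum(cl + ml for cl, ml in loads)
        lam = sum(cl for cl, _ in loads) / tot if tot else 0.5
        return [lam * cl + (1 - lam) * ml for cl, ml in loads]

    while True:
        scores = node_scores()
        busy = sorted((i for i in range(len(state)) if state[i]),
                      key=lambda i: scores[i], reverse=True)
        k = len(busy) - 1 - failed        # index of the candidate to release
        if k <= 0:
            break
        src = busy[k]
        tot = sum(vms[j][0] + vms[j][1] for j in state[src])
        lam_v = (sum(vms[j][0] for j in state[src]) / tot) if tot else 0.5
        moved = []
        for j in sorted(state[src], reverse=True,   # VM order, Equation (8)
                        key=lambda j: lam_v * vms[j][0]
                                      + (1 - lam_v) * vms[j][1]):
            dst = next((d for d in busy[:k] if fits(j, d)), None)
            if dst is None:
                break                     # node cannot be emptied completely
            state[src].remove(j)
            state[dst].append(j)
            moved.append((j, src, dst))
        if state[src]:                    # all-or-nothing failed: roll back
            for j, s, d in moved:
                state[d].remove(j)
                state[s].append(j)
            failed += 1
        else:                             # node released: commit its moves
            schedule.extend(moved)
            failed = 0
            released = initial_busy - sum(1 for s in state if s)
            if 100.0 * released / len(schedule) < me0:
                break                     # efficiency floor ME0 violated
    return state, schedule
```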

Now let us see the exact differences between Sercon and FFD, summarized in Table 1. For Sercon, the initial cluster state matters, because it affects the final result; for FFD it does not. In Sercon, nodes are sorted in decreasing order in each iteration, since the most loaded nodes must be picked; in FFD, the sequence of nodes does not change from beginning to end. VMs are sorted in both, but in FFD this is done only once at the beginning, over all VMs. Besides, Sercon tracks the number of migrations and stops when this number reaches the user-supplied limit m0 or when the migration efficiency falls below ME0; FFD has no such mechanism. While both algorithms aim to minimize the number of nodes used, only Sercon has the second objective of minimizing the number of migrations.

We calculated the time complexity of each algorithm: Sercon requires more time to find a solution (O(n⁴)) than FFD (O(n²)). This is understandable, as Sercon is the more complex algorithm, meeting two objectives compared to FFD’s one. This time is negligible when the number of VMs in the data center is relatively small (around 100). However, when the number is large (more than 1000), this factor should be taken into account; that is, a powerful computing node should run the algorithm to decrease computing time. We discuss the running time of the algorithm in the next subsection, and we measure execution time for various numbers of VMs in the Evaluation section.

Even though Sercon requires more running time than FFD, this is justified by one fact: as already mentioned, live migration has costs, and migration time is one of them. Live migration of a single VM takes much longer than the execution of either Sercon or FFD. Therefore, the increase in running time of Sercon (O(n⁴) against O(n²) for FFD) is more than compensated by the smaller number of migrations, and overall less time is spent on the server consolidation operation with Sercon than with FFD.

4.3 Scalability of Sercon

Table 1: Comparison between Sercon and FFD

| | Sercon | FFD |
|---|---|---|
| Initial cluster state affects result | Yes | No |
| Node sorting by weight | In each iteration | No |
| VM sorting by weight | Each time a candidate node to be released is picked | Just once, at the beginning |
| Tracking the number of migrations | Yes; the algorithm stops when this number reaches a user-defined value | No such mechanism |
| Time complexity | O(n⁴) | O(n²) |
| Minimizes number of nodes used | Yes | Yes |
| Minimizes number of migrations | Yes | No |

Figure 4: First-fit decreasing algorithm.

First-Fit Decreasing Algorithm
Input: S, V, N
  S ← clearMappings()
  vmScore[] ← calculateVMScore(V)
  sortedVMList ← sortVMsByScoreDecr(V, vmScore)
  foreach vm ∈ sortedVMList
    for i ← 1 to |S|
      success ← checkIfMigrationPossible(vm, Si)
      if (success) then
        add vm to Si
        break (for loop)
    end for
  end foreach
Output: numberOfNodesUsed, totalMigrations, numberOfReleasedNodes

where
S - current state of the cluster: S = ‹S1, S2, ..., SM›
Si - mapping in the node with id = i: Si = {i1, i2, ..., ip}; i1, i2, ..., ip are the ids of the VMs mapped to node i
V - list of virtual machine vectors: V = ‹v1, v2, ..., vK›
N - list of node vectors: N = ‹n1, n2, ..., nM›
|x| - cardinality of set x
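
Under the same assumptions as the earlier sketches (plain (cpu, mem) tuples, illustrative names), the adapted FFD is considerably shorter. Note the preference for nodes already hosting at least one VM, per the description above; the sketch silently assumes the node pool is large enough for every VM.

```python
def ffd(vms, nodes, T=0.7):
    """vms/nodes: (cpu, mem) tuples; returns a list of VM-id lists per node."""
    state = [[] for _ in nodes]
    lam_v = sum(vc for vc, _ in vms) / sum(vc + vm for vc, vm in vms)
    order = sorted(range(len(vms)), reverse=True,   # sort VMs once, Eq. (8)
                   key=lambda j: lam_v * vms[j][0] + (1 - lam_v) * vms[j][1])
    for j in order:
        used = [i for i in range(len(nodes)) if state[i]]
        blank = [i for i in range(len(nodes)) if not state[i]]
        for i in used + blank:            # prefer nodes already hosting a VM
            cpu = sum(vms[k][0] for k in state[i]) + vms[j][0]
            mem = sum(vms[k][1] for k in state[i]) + vms[j][1]
            if cpu <= T * nodes[i][0] and mem < nodes[i][1]:
                state[i].append(j)
                break
    return state
```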

In order to address the scalability of Sercon, we need to take into account several things: the running time of the algorithm, the migration time of a single VM, the number of migrations from a single node, the overall migration time for moving the cluster from one state to another, the pressure on network and CPU, and parallel versus sequential migrations. In this subsection we discuss the scalability of the algorithm from the time perspective. The time spent on the whole server consolidation operation can be crucial in determining whether the migrations are feasible. We can divide the total time for server consolidation into two parts:
• the running time of the algorithm, talg, and
• the overall migration time of the VMs, tmig.

One can expect the overall migration time tmig to contribute much more to the total time than the algorithm execution time talg: the algorithm only calculates the new placement of VMs and outputs the migration schedule, whereas the actual migrations start after algorithm execution, and each migration may take from 30 to 90 s [21]. While dealing with a small number of VMs in the cluster, we can neglect the running time, but scaling to larger numbers of VMs or nodes may raise the issue of algorithm scalability in terms of running time; if the execution time is too long, the algorithm may not be feasible to use. Hence we measure talg for different numbers of VMs to show its scalability.

Second, we need to take into account the overall migration time tmig. The time to live migrate a single VM depends on its memory size, the rate at which memory pages are dirtied, the network bandwidth between source and destination hosts, and the CPU throughput on both hosts [22]. Our algorithm outputs a schedule to live migrate multiple VMs, hence we need to consider how to perform multiple migrations in a feasible amount of time. There are two schemes for multiple live migrations: sequential and parallel. As of now, only VMware’s latest hypervisor can live migrate 4 to 8 VMs in parallel given enough bandwidth (a 1 to 10 Gb/s dedicated link), while other hypervisors, such as Xen and Microsoft Hyper-V, are limited to a single live migration at a time [23]. But even in the one-live-migration-at-a-time scheme, one can carry out a series of migrations simultaneously on different nodes, as long as those nodes are not busy with other migrations (throughout the paper, parallel migrations refer to several migrations from one source at a time, while simultaneous migrations refer to several migrations between non-overlapping sources and destinations). That is, while the first migration is in progress, we do not have to wait for it to complete; we can perform other migrations simultaneously, as long as one migration’s source (destination) node does not collide with another migration’s source (destination) node. Therefore, we can combine non-overlapping VM migration schedules into one queue, and the maximum number of live migrations in one queue is limited by number_of_hosts/2 [24].

By grouping all the migrations into a minimal number of queues of non-overlapping migrations, we can reduce the overall migration time; this addresses the scalability of Sercon in terms of overall migration time. We use a simple algorithm called Migration_Ordering to put migration source–destination pairs into migration queues. It works in the following way: we add migration pairs one-by-one to the first queue as long as a migration’s source (destination) does not collide with the source (destination) of any other migration in that queue; if it collides, we put it into the second queue, and so on. In this way, we partition the migration schedules into several non-overlapping queues.
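
The paper names the Migration_Ordering algorithm but does not list its code, so the following Python sketch is an illustrative reconstruction of the greedy rule just described.

```python
def order_migrations(schedule):
    """schedule: list of (vm_id, src, dst) triples; returns a list of queues
    whose migrations have pairwise non-overlapping source/destination nodes."""
    queues, endpoints = [], []
    for vm_id, src, dst in schedule:
        for queue, used in zip(queues, endpoints):
            if src not in used and dst not in used:
                queue.append((vm_id, src, dst))
                used.update((src, dst))
                break
        else:                             # collides with every existing queue
            queues.append([(vm_id, src, dst)])
            endpoints.append({src, dst})
    return queues

# e.g. order_migrations([("a", 3, 1), ("b", 4, 2), ("c", 4, 5)]) returns
# [[("a", 3, 1), ("b", 4, 2)], [("c", 4, 5)]]: migration "c" shares source
# node 4 with "b", so it lands in a second queue and runs afterwards.
```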

In the Evaluation section, we provide experimental results measuring the running time of Sercon and evaluating the Migration_Ordering algorithm for various numbers of VMs.

4.4 Sercon Example

To better illustrate the behavior of the Sercon algorithm, we provide an example in this subsection. Suppose we have M=6 nodes and K=15 VMs placed on them. The state of the cluster is S = ‹S1, S2, S3, S4, S5, S6›, where S1 = {1, 2, 3}, S2 = {4, 5, 6}, S3 = {7}, S4 = {8, 9}, S5 = {10, 11}, S6 = {12, 13, 14, 15}. We choose nodes with equal capacity: n1 = n2 = n3 = n4 = n5 = n6 = (6, 4); and the capacities of the VMs are: v1 = (1, 0.5), v2 = (1, 0.25), v3 = (1.5, 1), v4 = (1, 0.25), v5 = (1.2, 0.5), v6 = (1, 1), v7 = (0.5, 0.5), v8 = (0.5, 0.5), v9 = (1, 1), v10 = (1, 0.5), v11 = (1, 0.75), v12 = (1.5, 1), v13 = (1, 0.75), v14 = (1, 0.5), v15 = (0.5, 0.25).

Here S4 = {8, 9} means that node #4 hosts VMs v8 and v9; n1 = (6, 4) means that node #1’s overall CPU capacity is 6 GHz and its memory is 4 GB, with the same interpretation for VM capacities. We set the CPU threshold T = 0.7.

Figure 5 illustrates the steps of the Sercon algorithm. Following the algorithm, in the first iteration of the while loop, we calculate the score of each node from Equation (6): score(S1)=0.51, score(S2)=0.49, score(S3)=0.10, score(S4)=0.31, score(S5)=0.32, score(S6)=0.65. The nodes are then sorted by score in decreasing order: S turns from state ‹S1, S2, S3, S4, S5, S6› into ‹S6, S1, S2, S5, S4, S3›. The last item in this list, S3, is chosen to be released; it contains one VM, v7. We calculate the score of v7 using Equation (8); calculating VM scores is needed to sort them in decreasing order, which only matters when there are at least two VMs. The foreach loop thus executes only once, since there is only one VM to allocate. We check from the beginning of the list ‹S6, S1, S2, S5, S4› whether migration is possible, and we find that S1 can host this VM; that is, v7 is scheduled to be moved to S1.

In the second iteration of the while loop, the node scores are recalculated: score(S1)=0.62, score(S2)=0.49, score(S3)=0, score(S4)=0.31, score(S5)=0.32, score(S6)=0.65; and the list is sorted again in decreasing order: ‹S6, S1, S2, S5, S4, S3› (the order has not changed compared to the previous state). As S3 no longer hosts any VMs, we take the next one, S4, as the candidate node to be released. S4 hosts two VMs, v8 and v9, and we calculate the score of each using Equation (8) with Q=2: score(v8)=0.5, score(v9)=1. After sorting, we have the ordered list of VMs ‹v9, v8›. In the first iteration of the foreach loop, we check from the beginning of the list ‹S6, S1, S2, S5› whether we can allocate v9 to one of these nodes. Nodes S6 and S1 cannot host v9 while S2 can, so v9 is scheduled to be moved to S2. In the second iteration of the foreach loop, v8 is checked against ‹S6, S1, S2, S5›; we find that S5 can host it, and as a result v8 is scheduled to be migrated to S5.

In subsequent iterations, we realize that no other migrations are possible, so the node-to-VM mappings change to: S1 = {1, 2, 3, 7}, S2 = {4, 5, 6, 9}, S3 = {}, S4 = {}, S5 = {8, 10, 11}, S6 = {12, 13, 14, 15}. The algorithm outputs three migration schedules: ‹v7, S3, S1›, ‹v9, S4, S2›, ‹v8, S4, S5› (in the notation ‹v7, S3, S1›, v7 is the VM to migrate, S3 the source node, and S1 the destination node). As we can see, the number of nodes used has decreased from 6 to 4, and the two released nodes can be turned off or used for other purposes.
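
Assuming the sercon() sketch from Section 4.1 is in scope, this example can be replayed directly; VM and node ids are zero-based in code but one-based in the text. The printed schedule matches the walkthrough above: v7 to node 1, v9 to node 2, v8 to node 5.

```python
vms = [(1, 0.5), (1, 0.25), (1.5, 1), (1, 0.25), (1.2, 0.5),
       (1, 1), (0.5, 0.5), (0.5, 0.5), (1, 1), (1, 0.5),
       (1, 0.75), (1.5, 1), (1, 0.75), (1, 0.5), (0.5, 0.25)]
nodes = [(6, 4)] * 6
state = [[0, 1, 2], [3, 4, 5], [6], [7, 8], [9, 10], [11, 12, 13, 14]]

new_state, schedule = sercon(state, vms, nodes, T=0.7)
print(schedule)                         # [(6, 2, 0), (8, 3, 1), (7, 3, 4)]
print(sum(1 for s in new_state if s))   # 4 nodes remain in use
```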

5. Evaluation

In this section, we describe the experiment setup and the results. We set up eight different test cases, and we give the numerical results in Appendix 1 along with graphs comparing our approach with FFD. We also show experimental results regarding the scalability of our algorithm from the time perspective, as discussed in Section 4.3. In the first experiment, where we compare Sercon with FFD, we use VM numbers ranging from 25 to 150; we did not include comparisons above 150 VMs, because the tendency of the graphs did not change with larger numbers. As the second experiment concerns the scalability of our algorithm, there we increase the number of VMs tested up to 1000.

5.1 Experiment Setup

In our experiment, we consider eight classes of randomly generated instances. Similar test cases for the 2D vector packing problem have been proposed by Spieksma [19] and Caprara et al. [18]. Table 2 shows the characteristics of each class.

We generate random values for the VM dimensions in a given interval [a, b]. These random values follow a discrete uniform distribution; that is, each multiple of 0.1 in the interval [a, b] has an equal probability of being chosen. For node capacity, we consider two cases: first, ci=6, mi=4, and second, ci=10, mi=6. For simplicity, we choose integer values for the CPU and memory capacity of a node. ci=6 corresponds to an overall 6 GHz of CPU clock, which can be a dual-core processor at 3 GHz per core, and ci=10 corresponds to an overall 10 GHz, which can be a quad-core processor at 2.5 GHz per core or two dual-core 2.5 GHz processors.

Figure 5: Steps of Sercon algorithm.

Table 2: Test cases for the experiment

| Class | Node ci | Node mi | VM vcj | VM vmj |
|---|---|---|---|---|
| 1 | 6 | 4 | [1.0, 2.0] | [0.5, 1.5] |
| 2 | 10 | 6 | [1.0, 2.2] | [0.5, 1.5] |
| 3 | {4, 5, 6*, 7, 8} | {3, 4*, 5} | [1.0, 2.0] | [0.5, 1.5] |
| 4 | {8, 9, 10*, 11, 12} | {5, 6*, 7} | [1.0, 2.2] | [0.5, 1.5] |
| 5 | 6 | 4 | [1.0, 2.0] | [vcj/2, vcj/2+0.5] |
| 6 | 10 | 6 | [1.0, 2.2] | [vcj/2, vcj/2+0.4] |
| 7 | {4, 5, 6*, 7, 8} | {3, 4*, 5} | [1.0, 2.0] | [vcj/2, vcj/2+0.5] |
| 8 | {8, 9, 10*, 11, 12} | {5, 6*, 7} | [1.0, 2.2] | [vcj/2, vcj/2+0.4] |

We consider both equal-sized and variable-sized nodes. Classes 1, 2, 5, and 6 use equal-sized nodes, while the rest use variable-sized ones. For variable-sized nodes, we assume that nodes in a data center are mostly uniform, with small deviations in configuration. In classes 3, 4, 7, and 8, the node CPU and memory are randomly chosen from the values in braces. The distribution is such that, for example, for the node CPU of class 3, ci=6 (marked with an asterisk) is the mean of the distribution and has a high probability pm of being chosen, while the others (4, 5, 7, 8 in class 3) are uniformly distributed; that is, the probability of each value not marked with an asterisk being chosen is (1−pm)/(number of remaining values), which equals (1−pm)/4 in this case. We take pm=0.9 in our experiments.

The intervals for the VM CPU vcj and memory vmj in each class are chosen such that a node can contain on average 4 and 6 VMs along each dimension. In classes 1–4, vcj and vmj are generated independently at random in their given intervals; in classes 5–8, vcj is generated randomly in its interval, and the interval of vmj depends on the generated value of vcj. In other words, in classes 5–8, vcj and vmj are positively correlated.

For each class of test cases, we generate six different numbers of VMs: 25, 50, 75, 100, 125, and 150. Besides, we need to generate a state of the cluster in which the VMs are arbitrarily allocated across a set of nodes. The issue here is how to determine the number of nodes to generate for each test case. For this purpose, we calculate the theoretical lower bound on the number of bins for a set of VMs in the 2D vector packing problem, as mentioned by Spieksma [19]. However, Spieksma considers equal-sized bins of size 1; hence, taking into account the size of the bins, that is, the capacity of the nodes, the lower bound on the number of nodes can be derived as follows:

LC = max{ ⌈(Σ_{j∈K} vcj) / ci⌉, ⌈(Σ_{j∈K} vmj) / mi⌉ } (10)

where the notation ⌈x⌉ represents the smallest integer that is greater than or equal to x, and ci and mi here are the mean CPU and memory capacities of a node.

The lower bound LC represents the theoretical minimum number of nodes necessary to accommodate a given set of VMs. We generate a set of 3·LC nodes to obtain a sufficiently dispersed cluster state, and randomly allocate the VMs over this set of nodes. This state is given as input to our Sercon algorithm, which rearranges VMs, modifying the state to achieve a better one in terms of server consolidation.
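
A sketch of this setup in Python: Equation (10) computed with mean node capacities, plus a Class 1 instance generator that scatters K VMs over 3·LC nodes. The random placement below ignores capacity collisions for brevity; a faithful generator would retry placements that violate constraints (1) and (2).

```python
import math
import random

def lower_bound(vms, mean_c, mean_m):
    """Equation (10) over (vc, vm) tuples and mean node capacities."""
    return max(math.ceil(sum(vc for vc, _ in vms) / mean_c),
               math.ceil(sum(vm for _, vm in vms) / mean_m))

def generate_class1(K, seed=0):
    """Class 1 of Table 2: nodes (6, 4); vc in [1.0, 2.0] and vm in
    [0.5, 1.5], both discrete uniform with step 0.1."""
    rnd = random.Random(seed)
    vms = [(rnd.randrange(10, 21) / 10, rnd.randrange(5, 16) / 10)
           for _ in range(K)]
    n = 3 * lower_bound(vms, mean_c=6, mean_m=4)
    state = [[] for _ in range(n)]
    for j in range(K):                    # random initial placement
        state[rnd.randrange(n)].append(j)
    return vms, [(6, 4)] * n, state
```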

In Section 3, we discussed the constraints of the model and mentioned that a CPU threshold should be set to limit the CPU utilization of a node. Many researchers specify this threshold in the range between 50% and 75%; for our experiments we take T=0.7 (70%), which is within this range. We also take the minimum migration efficiency ME0=0.1, a low value chosen to relax the migration constraint in these experiments.

We compare our algorithm Sercon with the most commonly cited greedy algorithm for bin-packing, FFD, adapted to our environment. Pseudo-code and a description of the FFD algorithm are given in Section 4.2.

The input to both algorithms is the current state of the cluster with a number of VMs allocated on it, including both node and VM configurations. For Sercon, we may additionally specify the minimum migration efficiency ME0. Both algorithms generate a new cluster state from the input, and the outputs are the final number of nodes used, the number of released nodes, the number of migrations, and (in Sercon’s case) the migration schedule.

5.2 Experiment Environment

The experiment environment is shown in Table 3. We implemented simulation software on the .NET 3.5 framework. This software mainly performs two operations: generating random values according to the classes in the experiment setup, and analyzing data by displaying tables, graphs, or statistics. Snapshots of the simulation software can be seen in Figure 6.

In this software, we can specify the number of VMs and the experiment class described in Section 5.1, and the nodes, VMs, and cluster state are generated. We can see the generated result as a table, a graph, or statistics. We can run our Sercon algorithm, FFD, and BFD on these generated values and compare them. The graphs represent the nodes’ utilization by VMs, and the scores are also calculated and displayed. Furthermore, the software can generate values for all classes and display the aggregated results as tables or graphs; in the graphs, we can see relationships between the outputs of the Sercon and FFD algorithms, such as between the number of migrations and the number of VMs used, or between migration efficiency and the number of VMs used.

Table 3: Experiment environment (hardware and software specifications)

| Item | Value |
|---|---|
| CPU type | Intel Pentium |
| Number of cores | 2 |
| CPU clock | 2.0 GHz |
| RAM | 2 GB |
| OS | Windows 7 |
| Development framework | .NET 3.5 |
| IDE | MS Visual Studio 2010 |

Figure 6a: Sercon simulation software. Visual comparison of Sercon with First-fit decreasing and Best-fit decreasing algorithms.

Figure 6b: Sercon simulation software. Data generation for eight different test cases.

5.3 Experiment Results

Table 4 shows the experiment results for Class 1 and Class 2, comparing the two algorithms. Full results, including the other classes and further details, are given in Appendix 1. We ran both algorithms for each class and for each VM number case (K=25, 50, 75, 100, 125, 150) 10 times and averaged the results; running each case 10 times limits randomness and gives a more general picture.

We can see from the experiment results that the migration efficiency of Sercon far exceeds that of FFD: it ranges from 18% to 25% for FFD, while for Sercon it ranges from 72% to 100%. We can also see that the number of nodes used by Sercon is very close to that of FFD; FFD is better than Sercon by only 0–6%, that is, Sercon uses 0 to 6% more nodes than FFD to allocate the same set of VMs. This difference is negligible compared to the migration-efficiency benefit that Sercon achieves over FFD.

We can also notice that the results of classes 1, 3, 5, and 7 are similar to one another, as are the results of classes 2, 4, 6, and 8. This is because, within each group of classes, the average node capacity and the average VM capacity are equal, even though they are generated in different ways. In the odd-numbered classes, migration efficiency is higher (97–100%) than in the even-numbered classes (72–81%). This is because in the odd-numbered classes a node can contain on average 4 VMs, while for the even-numbered classes this number equals 6; the more VMs a node can host on average, the more migrations are needed to release a single node.

To construct the graphs, we ran the algorithms for more VM-number cases: over the interval [20, 150] with a step of 10 instead of 25. The relationship between the number of VMs and the number of migrations is given in Figure 7. The experiments across all the classes yield similar results; hence, we provide only a single graph for this case.

Figure 6c: Sercon simulation software. Generating graphs showing relationships of different parameters.

Murtazaev A and Oh S: Sercon Algorithm Using Live Migration of VMs

Page 14: Sercon: Server Consolidation Algorithm using Live Migration of Virtual Machines for Green Computing

225IETE TECHNICAL REVIEW | VOL 28 | ISSUE 3 | MAY-JUN 2011

The line for the FFD algorithm has a steeper slope than that of the Sercon algorithm, which means that FFD needs a larger number of migrations than Sercon, and the difference between their values grows as more VMs are used. The result we see here is not only that Sercon requires fewer migrations, but also that its migration count grows more slowly than FFD's.

Figure 8 shows the relationship between the number of nodes released and the number of migrations. The general trend is similar to that in Figure 7. This graph tells us that, for every node released, Sercon requires far fewer migrations than FFD, and this difference increases as more nodes are released.

In Figure 9, we can see the migration efficiency of Sercon and FFD. As noted above for the results in Appendix 1, in the odd-numbered classes the migration efficiency is higher (97-100%) than in the even-numbered classes (72-81%), which is depicted in the two graphs. Both graphs show that migration efficiency stays within a stable range for both Sercon and FFD and does not depend on the number of VMs used. Rather, in Sercon, it depends on which test case is generated: an odd-numbered or an even-numbered class. From this observation, we find that migration efficiency depends on the average number of VMs allocatable to a single node. In the odd-numbered classes, the average number of VMs per node is 4 and the migration efficiency is high (97-100%), while in the even-numbered classes the average is 6 and the migration efficiency is lower (72-81%).

Figure 9: Relationship between number of VMs used and migration efficiency (for odd-numbered class cases (a) and for even-numbered class cases (b)).

The last graph, in Figure 10, shows the number of nodes used in each VM-number case. We add a line for the theoretical lower bound LC, described in Section 5.1. We can see from this graph that the line for Sercon is very close to that of FFD: Sercon uses slightly more nodes (0-6% more) than FFD, but the difference is very small. This means that, in minimizing the number of nodes, Sercon achieves results comparable to FFD. We draw the lower-bound line taking into account the CPU threshold discussed previously, as both Sercon and FFD incorporate this constraint as well. The values of both Sercon and FFD are not far from the lower-bound line, which shows their effectiveness.

Figure 10: Relationship between number of VMs used and number of nodes used to allocate those VMs.

Table 4: Experiment results for Classes 1 and 2 (for more detailed information, refer to Appendix 1)
(Columns: # of VMs; Class; FFD # of nodes used, # of nodes released, # of migrations, migration efficiency; Sercon # of nodes used, # of nodes released, # of migrations, migration efficiency)

25    1   10.5   4.5    24.2    19%    10.8   4.2    4.3    98%
50    1   20.9   9.8    49.4    20%    21.8   8.9    8.9    100%
75    1   30.9   13.7   74.2    18%    31.7   12.9   13.1   98%
100   1   40.7   19.7   99.3    20%    41.7   18.7   18.7   100%
125   1   51     24.1   124     19%    51     24.1   24.5   98%
150   1   61.1   28.7   149     19%    61.1   28.7   28.7   100%
25    2   6.7    5.4    23.4    23%    6.4    5.7    7.4    77%
50    2   12.3   11.1   48.1    23%    12.4   11     14.4   76%
75    2   18.3   15.5   74      21%    18.5   15.3   21.1   73%
100   2   23.9   20     98.5    20%    24.3   19.6   27     73%
125   2   30.7   26.8   122.8   22%    31     26.5   33.5   79%
150   2   36.4   31.3   148.2   21%    36.7   31     40.9   76%

Figure 7: Relationship between number of VMs used and number of migrations.

Figure 8: Relationship between number of nodes released and number of migrations.


The cost of the Sercon algorithm is that it requires slightly more nodes for consolidation than FFD, but in return it achieves a several-fold decrease in the number of migrations, as can be seen in Figures 7-9. This is a very important result, as live migration is used to dynamically reallocate VMs and is a costly operation. Our experiments showed that Sercon performs well in minimizing the number of migrations, while also showing good results in minimizing the number of nodes used.

5.4 Experiments with Scalability of Sercon Algorithm

In this subsection, we provide experimental results on the scalability of the algorithm from a time perspective. As discussed in Section 4.3, we partition the total time of a server consolidation operation into two parts: the algorithm execution time t_alg and the time for migrating all VMs t_mig. We conducted several experiments to estimate each of these.

5.4.1 Running Time of the Algorithm

First, the running time of the algorithm t_alg depends on the number of VMs and the number of nodes. We conducted an experiment to measure the scalability of our algorithm in terms of running time, using the same test cases as in Table 2 but with VM numbers up to 1000. Figure 11(a) shows the results for Class 1 and Class 2. We did not include the results of the other classes, as Class 1 is representative of the odd-numbered classes and Class 2 of the even-numbered ones. The graph shows that, in the Class 1 case, it takes 18 s to calculate a new placement for 500 VMs, but the running time increases to about 2 min when the number of VMs grows to 1000. In the Class 2 case, the running time grows more slowly: it takes 8 and 52 s to calculate a new placement for 500 and 1000 VMs, respectively. The reason for the two different results with the same number of VMs is that the average number of VMs a node can contain differs between the classes: on average, 4 VMs per node in Class 1 and 6 in Class 2. Figure 11(b) shows the running time of Sercon versus the number of nodes in the cluster. It takes around 60 s to calculate a new placement for 500 nodes in Class 1, and a little less (40 s) in Class 2.

Figure 11: Sercon algorithm running time (relative to number of VMs (a) and number of nodes (b)).

5.4.2 Overall Migration Time of the VMs

Second, we estimate the overall migration time of the VMs. As discussed in Section 4.3, we can group all the migration schedules into several non-overlapping migration queues. We proposed the algorithm named Migration_Ordering, which partitions the migration schedules into the least possible number of non-overlapping queues.
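The Migration_Ordering algorithm itself is defined in Section 4.3; as an illustration only, the following is a minimal greedy sketch of one plausible way to partition a migration schedule so that no two migrations in the same queue touch the same node, which is what allows each queue to run its migrations concurrently. The node names and the greedy strategy here are hypothetical, not the paper's exact procedure.

    from typing import List, Tuple

    Migration = Tuple[str, str]  # (source node, destination node)

    def order_migrations(schedule: List[Migration]) -> List[List[Migration]]:
        """Greedily place each migration into the first queue in which
        neither its source nor its destination node is already busy."""
        queues: List[List[Migration]] = []
        busy: List[set] = []  # nodes already touched by each queue
        for src, dst in schedule:
            for queue, nodes in zip(queues, busy):
                if src not in nodes and dst not in nodes:
                    queue.append((src, dst))
                    nodes.update((src, dst))
                    break
            else:  # no compatible queue: open a new one
                queues.append([(src, dst)])
                busy.append({src, dst})
        return queues

    # Two migrations contend for node B, so two queues are needed:
    print(order_migrations([("A", "B"), ("C", "D"), ("B", "E")]))
    # [[('A', 'B'), ('C', 'D')], [('B', 'E')]]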


We conducted another experiment to calculate how many queues of non-overlapping migrations are needed in each VM-number case.

The results of the experiment in Figure 12 show that, in a cluster with 1000 VMs, the VM migration schedules produced by the Sercon algorithm can be wrapped into 1 to 2 queues on average, and at most 3 queues in our test cases. This means that, with this ordering of the VM migration schedules, the time spent on all migrations corresponds to the time of at most 3 separate migrations. (In fact, conducting multiple migrations concurrently may impose some overhead on network bandwidth, depending on the network infrastructure; however, we assume that the data center network is configured properly and that this overhead is negligible. Moreover, many hypervisor vendors currently require a dedicated link for live migration [24,25].)

If a single migration takes 30 to 90 s on average [21], we can estimate the approximate total time spent on algorithm execution and VM re-placement. For the Class 1 case with 1000 VMs, t_alg = 118 s and t_mig ranges between 30 and 90 s, so in total it takes at most t_alg + 3·t_mig, which ranges from 3.5 to 6.5 min. For the Class 2 case with the same number of VMs, t_alg = 52 s, and in total it takes at most 2.5 to 5.5 min.

Figure 12: Number of migration schedule queues per number of VMs.

Spending 2.5 to 6.5 min on a new VM placement that reduces the number of nodes used is quite acceptable, because the interval at which the algorithm runs is much longer than 6.5 min, for example, every 1 h. If we use a hypervisor that enables parallel migrations, this duration can be reduced even further.
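As a worked check of the arithmetic above, the following sketch (illustrative only; the function name is ours) computes the worst-case duration of one consolidation round from the measured values, assuming the queues produced by Migration_Ordering run one after another.

    def worst_case_consolidation_time(t_alg: float, n_queues: int,
                                      t_mig: float) -> float:
        """Algorithm run time plus sequential execution of the
        migration queues, in seconds."""
        return t_alg + n_queues * t_mig

    # Class 1, 1000 VMs: t_alg = 118 s, at most 3 queues, 30-90 s/migration
    for t_mig in (30, 90):
        total = worst_case_consolidation_time(118, 3, t_mig)
        print(f"t_mig = {t_mig} s -> {total / 60:.1f} min")  # 3.5 and 6.5 min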

This experiment shows that our algorithm is scalable enough even without parallel live-migration technologies. We verified the scalability of our algorithm by experimenting with up to 1000 VMs and up to 500 nodes, so we can say that it is suitable for medium-sized data centers.

6. Related Work

Our work is related to several fields: virtualization, server consolidation, bin-packing problems, and live migration of VMs. Virtualization, a technology that appeared in the 1960s and then lost attention for many years, is being revived, attracting both the business and research communities. Server consolidation is one of the main applications of virtualization technology used by enterprises in their data centers. Chebiyyam et al. [6] discuss the motivation for and benefits of server consolidation as well as algorithmic approaches to the consolidation operation. They point out that many data centers currently face the problem of server underutilization because applications are hosted on dedicated servers and are fully utilized only for a fraction of the time. Virtualization technologies can solve this problem and, besides increased server utilization, provide such benefits as reduced management complexity, rapid application development, and business resilience. They model the server consolidation problem as a vector-packing problem and introduce many variations of this problem along with heuristic solutions, such as genetic algorithms and ant colony optimization. However, they do not consider migration constraints, which are incorporated into our model.

Migrating a VM while it continues to run is a major achievement in the field of virtualization. This technology makes it possible to perform server consolidation, load balancing, or maintenance tasks in data centers transparently to the users of the hosted applications. Live migration has been implemented in both Xen [7] and VMware [8]. Both implementations use a pre-copy approach, in which memory pages are iteratively copied from the source host to the destination host without stopping the execution of the VM. Clark et al. [7] emphasize two costs of migration that should be minimized: service downtime and total migration time. We propose an algorithm in which the migration cost is taken into account; along with minimizing the number of servers used, its second objective is to minimize the number of migrations.





The problem of reallocating VMs to minimize the number of servers used can be seen as a bin-packing problem. There are many variations of this problem, such as vector packing, geometric packing, online and offline packing, and identical-sized or variable-sized bins. These problems are known to be NP-hard and are studied in operations research; there is an extensive literature on bin-packing and related problems [10,11,18-20,26]. The variation of the bin-packing problem related to our work is the two-dimensional vector-packing problem (2DVPP). Spieksma [19] studied this problem, introduced some heuristic approaches, and proposed a branch-and-bound algorithm. Caprara and Toth [18] provided a detailed study of 2DVPP and analyzed lower bounds and algorithms for solving it. Their experimental setup served as the background for our evaluation, and we also used their approach to calculating the weights of 2D vectors.
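For readers unfamiliar with the baseline, the following is a minimal sketch of first-fit decreasing (FFD) under the 2D vector-packing view, with each VM a (CPU, memory) demand vector and each node a unit-capacity bin. The equal weights in the size measure are illustrative only; the paper follows Caprara and Toth [18] for the actual weight calculation. Note that FFD ignores the current placement, which is exactly why it incurs many migrations.

    from typing import List, Tuple

    Vec = Tuple[float, float]  # (CPU, memory) demand, normalized to capacity 1.0

    def ffd_2d(items: List[Vec], w: Vec = (0.5, 0.5)) -> List[List[Vec]]:
        """First-fit decreasing: sort items by a weighted-sum size measure,
        then place each into the first bin where both dimensions fit."""
        capacity: List[Vec] = []       # remaining (CPU, mem) per open bin
        packing: List[List[Vec]] = []  # items placed in each bin
        for cpu, mem in sorted(items, key=lambda v: w[0] * v[0] + w[1] * v[1],
                               reverse=True):
            for i, (c, m) in enumerate(capacity):
                if cpu <= c and mem <= m:
                    capacity[i] = (c - cpu, m - mem)
                    packing[i].append((cpu, mem))
                    break
            else:  # no open bin fits: open a new one
                capacity.append((1.0 - cpu, 1.0 - mem))
                packing.append([(cpu, mem)])
        return packing

    # Four VMs fit into two nodes under this packing:
    print(len(ffd_2d([(0.5, 0.3), (0.4, 0.4), (0.3, 0.6), (0.2, 0.2)])))  # 2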

Hyser et al. [12] discuss VM mapping with the help of live migration and propose a VM placement system for data centers. They clearly state how the VM mapping problem differs from related problems such as bin-packing: bin-packing and its relatives start from a "clean state," while the VM mapping problem starts from an existing state in which VMs are already allocated to nodes. They also note that a policy must be chosen to solve the VM mapping problem, such as load balancing, server consolidation, or maximizing users' SLAs.

Gmach et al. [27] provide comprehensive research on two different types of workload placement in virtualized data centers. They combine a proactive, trace-based workload placement scheme with a reactive migration controller to manage resource pools efficiently. They examine various combinations of these two schemes and consider the trade-offs between required capacity, QoS, and the number of migrations. They show that it is possible to achieve the best results in terms of required capacity and QoS, but that scheme requires a considerable number of migrations; by constraining the number of allowed migrations, they achieve a good trade-off among all three parameters. The main focus of their research is the comparison of different workload placement policies, and they use genetic algorithms to find optimal placements of VMs on the resource pools. Our work is more specific: we focus only on the workload placement algorithm. Our algorithm differs from theirs in that its computational complexity is lower than that of genetic algorithms, while still giving good placement results.

Verma et al. [28] present an application placement scheme for virtualized systems that takes into account both power cost and migration cost. They first model the application placement problem from different perspectives: static, load-balancing, power-minimizing, and combined power- and migration-cost-minimizing schemes. They developed the pMapper architecture, which targets minimizing power subject to a fixed performance requirement, and they estimate migration-cost versus power-cost trade-offs by experimenting with different schemes. In their work, server consolidation is only one part of the architecture; other parts deal with power minimization at the hardware layer and VM resizing based on SLAs. Similarly to our paper, they show that taking the initial placement of VMs into account during server consolidation can reduce migration cost, which they determine by estimating the impact of migration on application throughput. Although they discuss minimizing migration costs, their main focus is energy efficiency in server placement, and they conclude that the power-minimizing model is the only feasible one from a practical standpoint. In contrast, we concentrate only on the server consolidation approach, and the migration cost in our case is the number of migrations performed. Moreover, they test the scalability of their algorithm with only up to 16 servers, whereas we tested our algorithm with up to 500 server nodes.

Bobroff et al. [29] present a dynamic server migration and consolidation algorithm. Their approach is based on forecasting future resource demand and allocating the appropriate resources in the next placement. To forecast utilization properly, they identify whether the resource demands show a pattern; VM workloads with particular utilization patterns, such as daily or weekly variations, benefit most from their forecasting method. The objective of their algorithm is to reduce the amount of required physical resource capacity and the rate of SLA violations. However, their algorithm does not take into account the number of migrations required to reach a new placement: it relies on first-fit bin-packing heuristics, which, as we showed, can reduce the number of nodes used well but require numerous migrations. Their approach is similar to ours in that both try to reduce the number of nodes used, but their second goal is to reduce SLA violations, whereas ours is to reduce the number of migrations.

7. Conclusion

Server consolidation is one of the main benefits that virtualization technology provides to data centers. It allows the efficient utilization of a pool of servers by consolidating computing environments, encapsulated as VMs, onto fewer nodes. By consolidating servers, we can reduce the costs of power, cooling, and management.

Live migration technology opens new frontiers for the server consolidation problem in data centers. It enables migrating VMs without stopping their execution, which in turn makes the dynamic re-placement of VMs across a cluster of nodes possible. Whenever we detect dispersion of VMs across the cluster, we can consolidate them onto fewer nodes without stopping the services running on them. The mechanism for achieving this is the subject of this paper.

The server consolidation problem can be seen as a type of bin-packing problem, where the VMs represent items and the nodes represent bins. The bin-packing problem deals with minimizing the number of bins used for a given set of items. However, we cannot apply it directly to our case, as we need to take into account the current state of the cluster; that is, we should keep migrations minimal, because live migration is a costly operation.

In this paper, we proposed a server consolidation algorithm, Sercon, which has two objectives: minimizing the number of nodes used and minimizing the number of migrations. We compared Sercon with FFD, a well-known heuristic algorithm for bin-packing. Our problem differs from the bin-packing case in that we take into account the current state of the cluster, that is, which VM is mapped to which node before the algorithm runs. This information helps us minimize VM migrations while still achieving the other goal of minimizing the number of nodes used.

We conducted two sets of experiments: first, comparing our Sercon algorithm with FFD, and second, testing the scalability of Sercon. In the first experiment, we ran the two algorithms on eight different test cases; from the perspective of minimizing VM migrations, Sercon was 3 to 5 times better than FFD. As minimizing the number of nodes used is FFD's only goal, it performs slightly better (by 0-6%) on this measure than Sercon, but this advantage is negligible compared to the several-fold benefit Sercon gains in lowering the number of migrations. Furthermore, we devised a special metric, migration efficiency, that considers both the number of nodes used and the number of VM migrations; it shows the contribution of one migration to releasing a single node. Migration efficiency in Sercon is in the range of 72% to 100%, while in FFD it is in the range of 18% to 25%. In the second experiment, we showed that the time taken for both executing Sercon and performing all the resulting migrations for 1000 VMs ranges from 2.5 to 6.5 min. This result shows that our algorithm is scalable enough for medium-sized data centers.

As future work, we want to test the Sercon algorithm in a more practical environment. Besides, we want to experiment with more complex cases, such as VMs with CPU-intensive and memory-intensive workloads.

8. Acknowledgments

This research was supported by the MKE (The Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2010-(C1090-1021-0011)). This paper was also completed with Ajou University research funds of 2010 (S-2010-G0001-00087).

References

1. A. Berl, E. Gelenbe, M. Girolamo, G. Giuliani, H. Meer, M. Quan Dang, and K. Pentikousis, “Energy-Efficient Cloud Computing,” The Computer Journal, 2009.

2. W. Vogels, “Beyond Server Consolidation,” ACM Queue, January/ February 2008.

3. L. Vaquero, L. Rodero-Merino, J. Caceres, and M. Lindner, “A break in the clouds: Towards a cloud definition,” ACM SIGCOMM Computer Communication Review, Vol. 39, No. 1, 2009.

4. I. Foster, Y. Zhao, I. Raicu, and S. Lu, “Cloud computing and grid computing 360-degree compared,” Grid Computing Environments Workshop, 2008.

5. S. Crosby, and D. Brown, “Virtualization reality,” ACM Queue, December-January 2006-2007.

6. M. Chebiyyam, R. Malviya, S. K. Bose, and S. Sundarrajan, “Server consolidation: Leveraging the benefits of virtualization,” Infosys Research, SETLabs Briefings, Vol. 7 No.1, 2009.

7. C. Clark, K. Fraser, S. Hand, J. Hansen, E. Jul, C. Limpach, et al, “Live migration of virtual machines,” NSDI, 2005.

8. M. Nelson, B. Lim, and G. Hutchins, “Fast transparent migration for virtual machines,” USENIX 2005 Annual Technical Conference, 2005.

9. E. W. Weisstein, “Bin-packing problem,” From MathWorld–A Wolfram Web Resource. Available from: http://mathworld.wolfram.com/Bin-PackingProblem.html. [cited in 2010].

10. G. Galambos, and G. Woeginger, “On-line bin packing – A restricted survey,” Mathematical Methods of Operational Research, 1995.

11. J. Beck, and D. Siewiorek, “Modeling multicomputer task allocation as a vector packing problem,” International Symposium on Systems Synthesis, 1996.

12. C. Hyser, B. McKee, R. Gardner, and B. Watson, “Autonomic virtual machine placement in the data center,” HP Laboratories. February 26, 2008.

13. S. Srikantaiah, A. Kansal, and F. Zhao, “Energy aware consolidation for cloud computing,” USENIX HotPower’08: Workshop on Power Aware Computing and Systems at OSDI, December 2008.

14. H. Magesh, and J. Smith, “Server consolidation through virtualization with quad-core intel xeon processors,” White Paper by Intel Corporation and Infosys Technologies, January 2008.

15. T. Wood, P. Shenoy, A. Venkataramani, and M. Yousif, “Black-box and gray-box strategies for virtual machine migration,” 4th USENIX Symposium on Networked Systems Design & Implementation, 2007.

16. Y. Song, H. Wang, Y. Li, B. Feng, and Y. Sun, “Multi-tiered ondemand resource scheduling for vm-based data center,” 9th IEEE/ ACM International Symposium on Cluster Computing and the Grid, 2009.

17. M. Rosenblum, “Virtual machine monitors: Current technology and future trends,” Computer, vol.38, no. 5, May 2005.



18. A. Caprara, and P. Toth, “Lower bounds and algorithms for the 2-dimensional vector packing problem,” Discrete Applied Mathematics. Vol. 111 No. 3, 2001.

19. F. Spieksma, “A branch-and-bound algorithm for the two-dimensional vector packing problem,” Computer Operations Research, Vol. 21, No. 1, 1994.

20. E. G. Coffman, M. R. Garey, and D. S. Johnson, “Approximation algorithms for bin-packing: A survey,” Approximation algorithms for NP-hard problems, pp. 46-93, 1996.

21. VMWare Vmotion server live migration explained. Available from: http://www.webhostingresourcekit.com/295.html. [cited in 2010].

22. F. Checconi, T. Cucinotta, and M. Stein. “Real-time issues in live migration of virtual machines,” Proceedings of the 4th Workshop on Virtualization and High-Performance Cloud Computing (VHPC 2009), August 2009.

23. Why choose VMware? VMware whitepaper. Available from: http:// www.vmware.com/files/pdf/vmware_advantage.pdf. [cited in 2010].

24. Windows Server 2008 R2 Hyper-V Live Migration. Microsoft Whitepaper. Published: August, 2009. Available from: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=FDD083C6-3FC7-470B-8569-7E6A19FB0FDF&displaylang=en. [cited in 2010].

25. VMware VMotion and CPU Compatibility. VMware Information Guide. Available from: http://www.vmware.com/files/pdf/vmotion_info_guide.pdf. [cited in 2010].

26. J. Kang, and S. Park, “Algorithms for the variable sized bin packing problem,” European Journal of Operational Research 147, 2003.

27. D. Gmach, J. Rolia, L. Cherkasova, and A. Kemper. “Resource pool management: Reactive versus proactive or let’s be friends,” Computer Networks, Vol. 53, Issue 17, 2009.

28. A. Verma, P. Ahuja, and A. Neogi. “pMapper: Power and migration cost aware application placement in virtualized systems,” Proceedings of the 9th ACM/IFIP/USENIX International Conference on Middleware, 2008.

29. N. Bobroff, A. Kochut, and K. Beaty. “Dynamic placement of virtual machines for managing SLA violations,” 10th IFIP/IEEE International Symposium on Integrated Network Management, 2007.


Authors

Aziz Murtazaev received his B.A. in Economics from the National University of Uzbekistan in 2007. He then worked for 2 years at Muranosoft, an outsourcing software solution provider, as a software engineer. Currently, he is pursuing his M.S. in Computer Engineering at Ajou University, South Korea, and is expected to receive the degree in July 2011. His research interests include distributed systems, cloud computing, information retrieval, and large-scale software systems.

E-mail: [email protected]

Sangyoon Oh received his M.S. in Computer Science from Syracuse University, U.S.A. and his Ph.D. from the Computer Science Department of Indiana University at Bloomington, U.S.A. He is an assistant professor in the Division of Information and Computer Engineering at Ajou University, South Korea. He previously held a research position at SK Telecom, South Korea. His main research interest is in the design and development of web-based large-scale software systems, and he has published papers in the areas of mobile software systems, collaboration systems, Grid systems, cloud computing, and service-oriented architecture (SOA).

E-mail: [email protected]

DOI: 10.4103/0256-4602.81230; Paper No. TR 330_10; Copyright © 2011 by the IETE


Appendix 1: Detailed results of the experiment
(Columns: # of VMs; Class; lower bound; FFD # of nodes used, # of nodes released, # of migrations, migration efficiency %; Sercon # of nodes used, # of nodes released, # of migrations, migration efficiency %)

25    1   6.9    10.5   4.5    24.2    19    10.8   4.2    4.3    98
50    1   13.5   20.9   9.8    49.4    20    21.8   8.9    8.9    100
75    1   19.5   30.9   13.7   74.2    18    31.7   12.9   13.1   98
100   1   25.6   40.7   19.7   99.3    20    41.7   18.7   18.7   100
125   1   31.8   51     24.1   124     19    51     24.1   24.5   98
150   1   38.9   61.1   28.7   149     19    61.1   28.7   28.7   100
25    2   4.7    6.7    5.4    23.4    23    6.4    5.7    7.4    77
50    2   9.1    12.3   11.1   48.1    23    12.4   11     14.4   76
75    2   12.9   18.3   15.5   74      21    18.5   15.3   21.1   73
100   2   17.5   23.9   20     98.5    20    24.3   19.6   27     73
125   2   21.6   30.7   26.8   122.8   22    31     26.5   33.5   79
150   2   25.6   36.4   31.3   148.2   21    36.7   31     40.9   76
25    3   7      10.7   5.2    24.5    21    10.8   5.1    5.1    100
50    3   13.1   20.3   9.8    49      20    21.4   8.7    8.8    99
75    3   19.6   30.9   15.1   74.1    20    31.4   14.6   15     97
100   3   25.8   40.4   21.2   98.5    22    42     19.6   19.7   99
125   3   32.3   51.1   25.4   124     20    51.5   25     25.1   100
150   3   38.8   61     29.2   148.3   20    61.3   28.9   29.2   99
25    4   5      6.5    5.9    23.2    25    6.6    5.8    7.5    77
50    4   9      12.4   11.7   48.2    24    12.8   11.3   14.3   79
75    4   13     18.7   15.1   72.7    21    19.1   14.7   19.5   75
100   4   17.1   24.5   21.2   97.9    22    24.8   20.9   28.4   74
125   4   21.2   30.5   25.4   122.9   21    30.8   25.1   32.5   77
150   4   25.8   36     30.7   148     21    36.8   29.9   40.3   74
25    5   7      10.6   5.1    24.2    21    10.7   5      5      100
50    5   13.1   20.4   9.3    48.7    19    21     8.7    8.8    99
75    5   19.7   30.7   15.5   74.3    21    31.4   14.8   14.8   100
100   5   25.6   40.2   20.3   98.7    21    41.4   19.1   19.2   99
125   5   31.9   50.8   25.1   124     20    51.5   24.4   24.4   100
150   5   38.5   60.9   30.3   149.2   20    61.9   29.3   29.3   100
25    6   5      6.8    5      22.4    22    6.8    5      6.7    75
50    6   8.9    12.3   10.6   47.6    22    12.6   10.3   13.7   75
75    6   13     18.5   15.1   73.9    20    18.5   15.1   21.1   72
100   6   17.3   24.4   20.7   98.6    21    24.8   20.3   27.8   73
125   6   21.8   30.7   25.9   123.5   21    31.1   25.5   33.4   76
150   6   25.9   36.6   31.7   147.6   21    37.1   31.2   40.9   76
25    7   6.9    10.1   5.1    24      21    10.5   4.7    4.7    100
50    7   13     20.1   9.8    49.1    20    21.2   8.7    8.7    100
75    7   19.4   30.2   15.5   73.7    21    31.6   14.1   14.1   100
100   7   25.6   40.2   19.9   98.8    20    41.2   18.9   19.1   99
125   7   31.9   50.6   25.8   124.3   21    52.3   24.1   24.2   100
150   7   38.2   59.7   31.8   149.1   21    61.2   30.3   30.6   99
25    8   5      6.7    5.4    23.3    23    6.5    5.6    6.9    81
50    8   9      12.5   10.6   48.8    22    12.9   10.2   13.7   74
75    8   12.9   18.1   14.9   73      20    18.4   14.6   19.4   75
100   8   17.5   24.7   21.5   97.9    22    25.1   21.1   26.3   80
125   8   21.3   30.1   25.8   123.2   21    30.8   25.1   32.9   76
150   8   25.9   36.1   32.5   148.4   22    36.6   32     42.7   75