A Unified Approach to Workload Lifecycle Management By Stephen Pollack, Founder and CEO, PlateSpin Ltd.


Executive Summary

The way organizations view the enterprise data center is changing. Traditionally, the data center has been seen as a mix of servers, operating systems, applications and data. Moreover, the data, applications and operating systems have been inextricably tied to the hardware on which they reside, impeding the optimal use of server resources, whether physical or virtual.

The rapid adoption of virtualization technologies has been a catalyst for a major reevaluation of the composition and day-to-day operations of the data center. By helping to dissolve the bonds between software and hardware, virtualization has encouraged organizations to see the data center in a different way – not as a heterogeneous mix of different servers, operating systems, applications and data, but as a set of portable workload units. At the most basic level, a workload encapsulates the data, applications and operating systems that reside on a physical or virtual host.

The ability to profile, move, copy, protect and replicate these aggregated workload units between physical and virtual hosts is rapidly emerging as a key enabler for operational and business success. New technologies that empower data centers to copy, move, cut, paste and protect entire server workloads with relative ease are already helping many organizations achieve new operational efficiencies and cost savings in the data center.

The new workload-based model is also providing an impetus for organizations to begin unifying their approach to solving common IT challenges such as server consolidation, end-of-lease hardware migration, data center relocation and disaster recovery. Rather than adopt piecemeal, project-based solutions to address these recurring challenges, organizations are starting to discover the benefits of a more long-term, holistic view of workload lifecycle management. By recognizing the importance of workloads and by unifying the approach to workload management, organizations have an opportunity to shape, refine, move and protect their data center assets with unprecedented levels of flexibility, efficiency and ease.

This white paper provides an overview of workload lifecycle management and why organizations should consider adopting a more unified approach to managing workloads in the data center.


www.platespin.com

“The IT industry is clearly moving toward an environment in which application workloads are dynamically relocated from one server to another to meet service-level agreements, address uptime requirements, more effectively use underlying server hardware or some combination of all three. Server virtualization is a technology that accommodates this type of dynamic workload reassignment. Blade servers (without virtualization) are another example.

Under this theory, you just move a workload from one piece of hardware to another. You can move the workload at the level of the application image, the operating system hosting the application or a virtual machine hosting the operating system (and, in turn, hosting the application). Accommodating this movement is not trivial in any technical sense, but plenty of solutions are available to do so.”

—Gartner, Inc., “I/O Is the New Frontier of x86 Virtualization”, by John Enck, May 21, 2007


The Current State of the Data Center

Today’s enterprise data center tends to be composed of an assortment of inflexible platforms – multiple physical servers including desktops, rack servers and blades from different hardware manufacturers, numerous image and backup archives and a growing number of virtual machines running one or more virtualization platforms such as VMware, Microsoft Virtual Server, Virtual Iron or XenSource. It is extremely rare to find a data center that has standardized on a single platform. The inherent inflexibility of most data centers can be traced to the growing heterogeneity of systems and hardware platforms and the difficulty of moving and rehosting workloads across platform boundaries. The size, complexity, heterogeneity and inflexibility of data center environments create operational headaches and drive up total cost of ownership (TCO).

But the problems don’t end there. In most cases, imbalances exist between server workloads and available resources. The majority of servers may be under-utilized, handling small or periodic workloads. This means the servers’ resources are being wasted while they continue to consume costly power and cooling assets, as well as computing resources. Over-utilization of server resources is less common, but it can still occur, typically when workloads grow more rapidly than expected. A web-based reporting application may start out with 10 users, for instance, but within a few months have 50 regular users. Depending on the business-critical nature of the workload, an over-utilized server puts business continuity at risk.
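The imbalance described above can be made concrete with a simple classification over measured utilization. The sketch below is purely illustrative: the thresholds and sample figures are invented for the example, and real capacity planning would also weigh memory, disk and network demand, and peaks as well as averages.

```python
# Classify servers as under- or over-utilized from average CPU utilization.
# Thresholds and sample data are invented for illustration only.

UNDER_THRESHOLD = 0.20   # below 20% average CPU: consolidation candidate
OVER_THRESHOLD = 0.80    # above 80% average CPU: business-continuity risk

def classify(avg_cpu_utilization: float) -> str:
    if avg_cpu_utilization < UNDER_THRESHOLD:
        return "under-utilized"
    if avg_cpu_utilization > OVER_THRESHOLD:
        return "over-utilized"
    return "balanced"

servers = {
    "print-server": 0.05,     # small, periodic workload
    "web-reporting": 0.92,    # grew from 10 users to 50
    "mail-server": 0.55,
}

report = {name: classify(u) for name, u in servers.items()}
```

In this toy data set the print server surfaces as a consolidation candidate while the reporting application flags a continuity risk, mirroring the two imbalances discussed above.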

Organizations recognize the need for capacity planning in order to optimize server utilization in the data center and combat server sprawl, but without a detailed understanding of workloads this is easier said than done. To find and maintain the optimal balance between resource supply and workload demand, data centers need the means to monitor and measure workloads over time and continuously readjust the balance by moving workloads back and forth between physical and virtual hosts. Data centers also need to size and resize the resources allocated to a given workload to accommodate its changing resource requirements, which may shrink or grow significantly throughout the workload lifecycle.


Diagram: The current data center landscape is a mix of virtual machines, physical servers and image archives.

Diagram: Imbalances between workloads and resources in the data center, with servers ranging from underutilized (large resource supply, small workload demand) to overutilized (small resource supply, large workload demand).

Understanding the Workload Lifecycle

As we have already seen, there are many different kinds of workloads running in the data center, each with its own resource and availability requirements, which may change over time. These changes may be cyclical, seasonal or completely random. For instance, financial reporting applications may place heavy demands on server resources at month’s end, or a web server may experience unpredictable traffic spikes. Moreover, a workload that is deemed business-critical, such as a web server or mail server, requires greater levels of protection and availability than a less critical workload like a print server.

Though workloads differ greatly, every workload tends to undergo a number of common changes throughout its lifecycle – workloads are provisioned, protected, consolidated and migrated to new hardware, reconfigured and retired. Many of the greatest challenges in the data center are related to changing workload requirements and managing the transitions that occur throughout the workload lifecycle.

Two Cornerstones of Workload Management

When we talk about unified workload management, we mean the combination of two key concepts – workload profiling and anywhere-to-anywhere workload portability.

Workload profiling refers to the ability to monitor workloads and the changing demands they place on server resources over time. The workload profile describes the workload in terms of its purpose, departmental owner, level of business criticality, required recovery time and recovery point objectives (RTO and RPO), and so on. Workload portability refers to the ability to move workloads to and from different host platforms and infrastructures in the data center. By combining a workload profile with the ability to move and rebalance the workload as required in any direction between physical and virtual hosts or image archives, data centers can establish a common approach to effectively managing workloads throughout their entire lifecycle.


“Viewing platforms by workload helps you rationalize your server portfolio and reduce complexity and cost.”

—Gartner, Inc., “Introducing Seven Workloads for Server Selection”, by Philip Dawson and Andrew Butler, July 20, 2006


Diagram: Workloads and the demands they place on available server resources change over time, with cyclical, seasonal and random demand patterns across the rollout, production and sunset phases of a workload’s life.

Diagram: Workload profiling and portability are the cornerstones of unified workload lifecycle management: anywhere-to-anywhere portability (physical-to-virtual, virtual-to-physical, virtual-to-virtual and physical-to-physical) combined with the workload profile (name, inventory, resource profile, application profile, cost and SLA).

The Workload Profile

To effectively manage workloads, each workload unit must have a profile attached to it. The workload profile describes key attributes such as:

• The purpose of the workload
• The owner of the workload
• The inventory of the workload
• The software and services running on the workload
• The current resource allocation and actual requirements of the workload

A workload profile not only contains the name and inventory of a server but also captures the server’s current resource requirements based on real performance data. The workload profile follows the workload throughout its entire lifecycle, allowing organizations to make decisions about how best to manage and protect the workload based on the data contained in its profile rather than relying on best guesses. Workload profiling also brings consistency and predictability to the way organizations manage workloads by allowing administrators from different departments or sites to utilize the same workload profile for operational decision making.
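The attributes listed above map naturally onto a simple record type. The sketch below is a hypothetical illustration of the idea (field names, defaults and structure are invented for the example, not PlateSpin’s actual profile schema):

```python
from dataclasses import dataclass, field

@dataclass
class WorkloadProfile:
    """Describes a workload unit independently of its current host.

    All fields are illustrative; a real profile would carry far more
    inventory and performance detail.
    """
    name: str
    purpose: str                 # e.g. "departmental reporting"
    owner: str                   # departmental owner
    inventory: list[str] = field(default_factory=list)      # software/services
    cpu_samples: list[float] = field(default_factory=list)  # measured utilization
    allocated_cpus: int = 1
    rto_minutes: int = 240       # required recovery time objective
    rpo_minutes: int = 60        # required recovery point objective

    def peak_cpu(self) -> float:
        """Actual peak requirement, from real performance data."""
        return max(self.cpu_samples, default=0.0)

profile = WorkloadProfile(
    name="web-reporting",
    purpose="departmental reporting",
    owner="finance",
    inventory=["web server", "reporting application"],
    cpu_samples=[0.10, 0.35, 0.92, 0.40],
)
```

Because the record travels with the workload, a decision such as sizing a target host can be based on `profile.peak_cpu()` rather than a best guess.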

The workload profile provides a much deeper understanding of workloads, enabling data center managers to make more informed and intelligent operational decisions. When combined with analysis and forecasting capabilities, the workload profile provides increased visibility into data center operations and allows managers and architects to effectively plan for current and future data center initiatives such as server consolidation, hardware migration and disaster recovery.

This awareness of workloads – their lifecycles and how they use resources over time – is driving more sophisticated capacity planning while reducing the costs and overhead associated with ongoing data center management. For example, organizations can use workload profiling data to plan and schedule maintenance windows during off-peak daytime hours when workloads consume fewer resources. By analyzing peaks and valleys in the daily workload lifecycle, organizations can identify periods of lesser resource utilization during the business day when a maintenance window would not impact operations.

This is where cost savings come into play. Equipped with an awareness of the hour-to-hour variability of the workload lifecycle, organizations can utilize their daytime IT staff rather than incurring overtime for evening or weekend maintenance. When workload profiling data is captured and analyzed, organizations tend to gain greater control over the data center and see marked improvements in resource utilization.
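Selecting an off-peak daytime maintenance window from profiling data, as described above, reduces to finding the quietest business hour in the samples. A minimal sketch (the hourly figures are invented for illustration):

```python
# Hour of day -> average CPU utilization across the affected workloads,
# aggregated from profiling samples (numbers invented for illustration).
hourly_avg_cpu = {
    9: 0.70, 10: 0.85, 11: 0.80, 12: 0.35,   # lunchtime lull
    13: 0.40, 14: 0.75, 15: 0.80, 16: 0.65,
}

# The best daytime maintenance window is the business hour with the
# lowest average utilization.
best_hour = min(hourly_avg_cpu, key=hourly_avg_cpu.get)
print(f"Schedule maintenance at {best_hour}:00 "
      f"(avg CPU {hourly_avg_cpu[best_hour]:.0%})")
```

With these sample figures, the lunchtime valley at 12:00 emerges as the window in which maintenance would least impact operations.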


Diagram: The workload profile captures information about the encapsulated data, applications and operating systems residing on a physical or virtual host.

Workload Portability

Virtualization has gone a long way toward increasing the portability of server workloads. By helping to dissolve the bonds between workloads and the hardware configurations on which they reside, virtualization makes it possible to move the entire software stack of a server to any physical or virtual host. Until recently, however, organizations migrating to virtual infrastructures have tended to focus on simple one-time physical-to-virtual (P2V) migrations, usually for server consolidation projects. While automating P2V migrations is one critical component of the successful adoption of virtualization, it does not address the ongoing requirement for anywhere-to-anywhere workload portability in the data center or the need to utilize workload profile data for continuous optimization.

The flexibility to move and rebalance workloads in any direction between physical and virtual hosts – physical-to-virtual, virtual-to-physical, physical-to-physical, in and out of imaging formats and so on – ensures optimal data center efficiency. It also enables organizations to better address common challenges such as end-of-lease hardware migration and periodic necessities such as the de-virtualization of applications to account for changing workloads, diagnose problems or comply with support agreements.
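The anywhere-to-anywhere directions named above can be expressed with a simple host-type model: every pairing of source and target host types is a valid move. This is an illustrative sketch of the concept, not any vendor’s API:

```python
from enum import Enum

class HostType(Enum):
    PHYSICAL = "P"
    VIRTUAL = "V"
    IMAGE = "I"   # image archive formats

def direction(source: HostType, target: HostType) -> str:
    """Label a workload move, e.g. PHYSICAL -> VIRTUAL is 'P2V'."""
    return f"{source.value}2{target.value}"

# Anywhere-to-anywhere portability: every source/target pairing is a
# legitimate move, including moves in and out of image archives.
all_moves = [direction(s, t) for s in HostType for t in HostType]
```

The familiar P2V label is just one cell of this matrix; de-virtualization (V2P) and end-of-lease moves (P2P) fall out of the same model.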

Common Workload Movement Activities

Above, we saw that workloads tend to undergo a number of changes throughout their lifecycles, which coincide with common data center initiatives. Workloads must be provisioned, protected, consolidated, migrated to new hardware when a hardware lease expires, reconfigured and eventually retired.

The challenges that data centers encounter on a day-to-day basis map directly to these regular workload movement activities. These issues, which arise over and over again, are almost always related to the lack of workload portability. If we dig a little deeper, we see that these workload portability issues can be broken into four key areas: workload relocation, workload protection, workload provisioning and workload optimization.


“Physical to virtual conversion, or P to V, is the process of disaggregating the operating system and other software from a physical server in order to move it into a virtual machine. Microsoft and VMware offer tools to handle migrations, but they are limited in functionality and only migrate in one direction. More comprehensive solutions expand upon this process and give users the tools to perform ‘any to any’ conversions. This enables firms to do things like move virtual systems back to images or physical systems for backups or if the load is too heavy on a virtual system. More advanced tools include features like scheduling and the ability to do live conversions without reboots, minimizing disruptions, and enabling users to create scheduled images to recover from disasters.”

—Forrester Research Inc.,“Decoding Virtualization’s Present and Future”, by Galen Schreck, January 9, 2007


Diagram: Workload portability enables workloads to be migrated across different data center infrastructures (image archives, physical servers, blade servers and virtual machines) by decoupling workloads from hardware.

Diagram: The four pillars of unified workload lifecycle management: workload relocation (server consolidation, hardware migration, data center moves, storage migration), workload protection (consolidated recovery, hardware-independent restore), workload provisioning (server provisioning, test lab, desktop provisioning) and workload optimization (continuous optimization).

The conventional approach to solving these problems involves investing in a variety of different software solutions, utilities and implementation consultants to address each specific problem. Organizations purchase one solution for disaster recovery, a completely different solution for automating new server provisioning, and so on.

The lack of a holistic approach to solving problems in the data center adds considerable time and expense to operations, increases complexity and contributes to the ubiquitous problem of infrastructure sprawl and underutilization. In fact, by investing in virtualization without adequate planning or a broad vision of workload lifecycles, many organizations find that they’ve merely replaced their physical server sprawl with a virtual sprawl. Utility sprawl, or the investment in multiple technologies to solve multiple IT problems, is another common issue. In the absence of a unified approach, multiple problem-solving technologies can become a significant problem in themselves, with training and support investments quickly getting out of hand.

When we recognize that the most common data center challenges can be grouped into these four key areas – workload relocation, workload protection, workload provisioning and workload optimization – we can begin to seek a common approach for addressing these challenges, rather than implementing costly, piecemeal fixes to solve each issue. In doing so, we are well on our way toward a unified approach to workload lifecycle management.

A unified workload management approach encourages data center managers to think strategically rather than tactically, moving beyond a “once-and-done” project-based view of solving data center issues. Unifying the approach to managing workloads also drives cost efficiencies since data centers are able to invest in a single technology platform to address a whole host of IT problems.

Workload Relocation

Workloads must periodically be relocated to new physical or virtual hosts. Three of the most common data center activities that necessitate workload relocations are server consolidation initiatives, end-of-life hardware migrations and data center consolidation or relocation.

Server Consolidation

As discussed above, the growing size and complexity of enterprise data centers often lead to poor resource utilization and administrative headaches. To achieve greater efficiencies, many organizations turn to server consolidation initiatives that leverage blade servers or virtual infrastructures.

The benefits of a successful server consolidation project include the retirement of old or underutilized hardware, savings in floor space and power consumption and an overall reduction in total cost of ownership. However, successful consolidation initiatives require considerable upfront planning, time and effort, and a thorough understanding of the workloads that will need to be consolidated.

Workload profiling helps organizations accelerate the server consolidation process by automating the capacity planning phase of a consolidation project. Prior to the consolidation, organizations can inventory data center assets and monitor workloads and their resource utilization over time in order to identify underutilized servers that are ideal candidates for consolidation. Workload profiling also helps data center managers plan the layout of the consolidated infrastructure and ensure sufficient physical and virtual host capacity in the consolidated environment to accommodate current and future business requirements.
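As a toy illustration of the capacity-planning step above, under-utilized workloads identified from profiling data can be packed onto consolidation hosts by peak demand. The first-fit-decreasing heuristic, the CPU-only model and the numbers below are all invented for the sketch; real consolidation planning weighs many more dimensions:

```python
# Peak CPU demand (in cores) per consolidation candidate, taken from
# workload profiling data; values are invented for illustration.
candidates = {"print": 0.5, "intranet": 1.5, "reporting": 3.0, "wiki": 1.0}
HOST_CAPACITY = 4.0  # cores available per consolidation host

def plan_consolidation(workloads: dict[str, float],
                       capacity: float) -> list[list[str]]:
    """First-fit decreasing: place each workload on the first host with room."""
    hosts: list[tuple[float, list[str]]] = []  # (used capacity, workload names)
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for i, (used, names) in enumerate(hosts):
            if used + demand <= capacity:
                hosts[i] = (used + demand, names + [name])
                break
        else:
            hosts.append((demand, [name]))   # no host fits: provision another
    return [names for _, names in hosts]

layout = plan_consolidation(candidates, HOST_CAPACITY)
```

Here four lightly loaded servers fit onto two consolidation hosts, the kind of layout decision that workload profiling data makes defensible rather than guessed.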


“A recent innovation in the x86 environment is the ability to dynamically move running workloads to other servers to provide greater capacity for the workload. Virtual-machine relocation will become a default technology for most large x86 server infrastructures within five years, and it will dramatically change how servers are managed – disaggregating operating-system instances from physical servers. For businesses, the most important benefit will be a much faster response to changing scaling requirements. Virtual-machine relocation will also make capacity planning much easier for users, shifting from an application-specific function to an infrastructure-wide function.”

—Gartner, Inc., “Server Capacity on Demand Spans many Capabilities”, by John R. Phelps, July 13, 2007


Once workload requirements have been monitored and assessed, organizations will need a workload portability solution to decouple data, applications and operating systems from the underlying hardware and stream them to any physical or virtual platform. In evaluating solutions, data centers should consider ease of use (does the solution have an intuitive drag-and-drop interface for migrating workloads from physical servers to virtual machines or blade servers?) and the solution’s ability to perform both local and remote migrations in either a staged or direct mode. The solution should also accommodate multiple concurrent migrations to ensure that workload movement activities can be completed in the most efficient and timely manner. Organizations can de-risk their server consolidation initiatives by streaming workloads to new hardware for testing prior to the production move.

To future-proof their technology investment, data centers should seek a solution with multiplatform support and broader capabilities than simply physical-to-virtual (P2V) workload migrations. As we’ve seen, workloads may eventually need to be moved off of virtual infrastructures as their resource requirements grow (virtual-to-physical scale-outs) or may have to be de-virtualized to ensure the integrity of application maintenance and support agreements, many of which have failed to keep pace with virtualization.

Hardware Migration

Data center managers spend considerable time and effort dealing with machines that are nearing the end of their life. The challenge of migrating key workloads from legacy or end-of-lease platforms to new infrastructures such as blades is significant – so much so that some companies continue to maintain old and inefficient servers rather than upgrade their hardware.

The traditional approach to hardware migrations involves building the new system from scratch by reinstalling the OS, applications and data. Not only is this process time-consuming and costly, but organizations also run the risk of lost data or business disruption.

A workload profiling and portability solution allows organizations to stream workloads (the server software layer including data, applications and operating systems) from old end-of-lease machines to new servers using a simple drag-and-drop interface for physical-to-physical (P2P) migrations. Any changes to operating systems needed to adapt to the new hardware are made automatically on-the-fly.

The first step is to capture a central inventory of workload profiling data for new hardware planning, sizing and reporting. Workloads can then be streamed to new hardware for testing prior to cutover to troubleshoot potential issues such as hardware driver problems, reducing hardware migration risks. Without the ability to move and test workloads in either a staged or direct fashion, data center personnel have often been left with no other option but to “power-up and pray” that the workloads will run properly on the new hardware.

Data Center Relocation and Consolidation

Rapid growth, globalization, mergers and acquisitions sometimes necessitate physical data center moves or consolidations. In their ongoing quest to achieve cost efficiencies and economies of scale, organizations often institute enterprise-wide mandates to reduce the total number of data centers and the size of the IT organization required to support them. To this end, organizations often view data center relocation as a good opportunity to consolidate some of their physical and virtual workloads in order to reduce the overall footprint of the data center.


“A server consolidation project should be undertaken with specific and clear objectives in mind. Although cost reductions are a frequent goal, there are many equally important reasons to spend time and money consolidating systems. IT organizations should engage in a server consolidation project with three major goals: cost reduction, agility improvement and service-level improvement.”

—Gartner, Inc., “Key Issues for Data Center Servers, 2007”, by Mike Chuba, March 12, 2007


Moving resources between data centers traditionally requires physically shipping servers to new locations or rebuilding servers from scratch. In the latter case, new hardware is typically rebuilt manually from an image that is captured on disk and shipped to the new location. Data center relocations require considerable IT planning, time and effort – especially where mission-critical servers are concerned. In these cases, business continuity is of the essence and organizations have a very short window to get new servers up and running. Organizations often can’t simply power down the servers, load them on a truck, and ship them to the new data center location.

One of the first steps in planning a successful data center move is to create a thorough inventory of all data center equipment that will need to be moved. Workload profiling enables organizations to discover and inventory server assets and complete a detailed analysis of both physical and virtual workloads including operating system, data, applications, CPU, disk, memory and network resources. This kind of detailed workload profiling data has proven invaluable in the development of data center relocation plans and helping to ensure adequate floor space, power and HVAC capabilities in the new facility.

Armed with a detailed profile of workloads in the data center including a complete inventory of data center assets, organizations can use a workload portability solution to stream workloads directly from one location to another over the network, whether locally or remotely. In cases where no high-speed WAN exists, organizations may opt to capture server images onto a CD or USB drive which can then be shipped and easily deployed at the new site. The ability to stream physical and virtual workloads to the new or consolidated data center site for testing prior to production cutover greatly reduces the risks associated with a data center move.

Workload Protection

The high cost of traditional workload protection alternatives such as clustering has meant that most data centers can only afford to protect their most mission-critical workloads. Working within these budgetary constraints, the approach to protecting less critical, lower-tier server workloads might delicately be termed “best effort.”

The reality is that organizations typically spend 80% of their disaster recovery budget to protect only their most mission-critical servers – as little as 20% of their server network. This leaves 80% of workloads under-insured or uninsured, creating the potential for inconveniences and lost productivity. The trend of overspending on high-end disaster recovery solutions such as clustering to protect a relatively small number of servers, while the remainder of the IT infrastructure is under-insured, is inherently inefficient.


Diagram: Data center relocations require considerable up-front planning, with workloads streamed over a WAN from the existing data center to the new data center.

Organizations are beginning to see the benefits of using virtual infrastructures as a flexible and affordable solution for backing up production workloads to any virtual or physical recovery environment. Whereas DR solutions such as server clustering or data replication require costly one-to-one hardware and software redundancy to protect data center assets, virtualization allows organizations to utilize virtual host capacity as an affordable recovery platform to address common disaster recovery challenges. Organizations can back up multiple servers to a single virtual recovery machine, allowing organizations to protect a greater percentage of their servers while avoiding costly hardware investments and reducing TCO. Virtualized recovery solutions are also easier to test and quicker to restore than more costly solutions.

Different workloads require different protection scenarios. The following sections will outline a few of the most common workload protection scenarios employed in the enterprise data center.

Consolidated Recovery (Physical-to-Virtual)

A consolidated recovery solution provides whole-workload protection of physical workloads by replicating multiple workloads to a single warm-standby virtual recovery environment. Workloads can be replicated locally or remotely over a WAN to a geographically dispersed recovery site.

Workload profiling is critical to inventory and monitor production workloads, develop a disaster recovery plan, and create a right-sized recovery environment with sufficient virtual resources.

When evaluating solutions, organizations should look for live workload replication capabilities, or the ability to transfer a workload into an off-line virtual machine without taking the production system offline. Live replication reduces disruptions and ensures business continuity.

Image-based Hardware-Independent Restore

For organizations that are not ready to base their disaster recovery solution on virtualization technology, a flexible image-based solution may be the answer. Many data centers already maintain vast libraries of backup images that they attempt to restore to new hardware in the event of a failure or disaster. With conventional image technologies, this leads to frequent errors and delays, because images are generally tied to the hardware from which they were originally captured. A common problem is that a workload running on older hardware fails and the data center has no additional platforms of that model to which the backup image can be restored. In such cases, huge delays can occur, costing the organization valuable time and money.


“We were surprised to learn from survey respondents that 23% were already using server virtualization to assist in some way with business continuity and disaster recovery, and 19% thought it was very suitable for doing so. Other users we’ve spoken to like the idea that they don’t have to have identical hardware and server counts at the backup site, reducing the need for idle duplicate hardware. Further, it eases system maintenance by making temporary rehosting of software on a dissimilar system a simple file copy of the virtual server across the network.”

—Forrester Research Inc.,“Pragmatic Approaches to Server Virtualization”, by Frank E. Gillett and Galen Schreck, June 19, 2006


Diagram: Whole workload protection of physical machines using a virtual recovery environment (multiple P2V replications to a single recovery host).

More flexible image-based solutions are available that enable data centers to capture any image type and restore it to any x86 hardware platform, which drastically reduces recovery time and virtually eliminates errors. Images can be captured without taking critical production servers offline and rapidly redeployed to any physical or virtual machine. Organizations can completely replicate a server by scheduling the capture and conversion of a server to a hardware-independent image. Flexible image-based solutions also combine system and data restore in a single step, reducing TCO, while workload metadata simplifies recovery configuration.

Disaster Planning and Recovery Metrics

Two metrics commonly used to evaluate disaster recovery solutions are Recovery Time Objective (RTO), which measures the time between a system disaster and the time when the system is again operational, and Recovery Point Objective (RPO), which measures the time between the latest backup and the system disaster, representing the nearest historical point in time to which a system can be recovered.

A third metric that is rapidly emerging as a key point of measurement for the effectiveness of recovery alternatives is Test Time Objective (TTO), which measures the ease with which a disaster recovery plan can be tested to ensure its effectiveness.
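The first two metrics reduce to simple time arithmetic: given the timestamps of the last backup, the disaster and the restoration, the measured RTO and RPO fall out directly. The timestamps below are invented for the illustration:

```python
from datetime import datetime, timedelta

# Event timestamps for a hypothetical incident (invented for illustration).
last_backup = datetime(2007, 6, 1, 2, 0)    # nightly backup completed
disaster    = datetime(2007, 6, 1, 14, 30)  # system failure
restored    = datetime(2007, 6, 1, 18, 30)  # system operational again

# RTO measures downtime: from the disaster to the system being operational.
measured_rto = restored - disaster

# RPO measures data-loss exposure: from the latest backup to the disaster.
measured_rpo = disaster - last_backup

print(f"Measured RTO: {measured_rto}, measured RPO: {measured_rpo}")
```

A solution meets its objectives when the measured values stay at or below the agreed targets; TTO, by contrast, is about how cheaply a drill like this can be run at all.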

Traditional disaster planning and recovery solutions, including tape backup, image capture and clustering, fail to deliver the desired RTO and RPO within reasonable budgetary constraints.

Tape Backup: Tape backup is the most economical alternative; however, it can be difficult to administer, and restores frequently take days.

Image Capture: Image capture is moderately more expensive and maintains an adequate RPO, but recovery tends to be slow and error prone due to the inflexibility of the underlying imaging technology.

Clustering: Clustering fully achieves recovery time and point objectives, but it can be prohibitively expensive and complicated to implement. Except for the most mission-critical server environments, clustering is typically not a viable disaster recovery option.

A Unified Approach to Workload Lifecycle Management

www.platespin.com 10

Diagram: Image-based system workload restore (hardware independent), showing image capture and deployment to recovery servers.

Diagram: Disaster recovery metrics.

RTO (Recovery Time Objective): the measure of downtime
RPO (Recovery Point Objective): the measure of data loss
TTO (Test Time Objective): the measure of testing ease

Importance of DR Testing

The complexity of traditional DR solutions makes testing so difficult that many organizations have never fully tested their DR solutions in production. Despite having made significant investments in disaster recovery, these organizations have no way of knowing for certain how quickly servers can be restored. When selecting a DR solution, organizations should consider not only whether it meets RTO and RPO requirements, but also how easily the solution can be tested.

DR solutions that leverage virtual infrastructures or flexible backup images tend to afford more rapid test restores than traditional approaches. Virtualized recovery solutions allow data centers to easily rehearse failure scenarios that were previously too time-consuming to test for all but the most mission-critical server assets.

Workload Provisioning

Even though server and data center consolidation is top-of-mind for most IT executives, most enterprise data centers are very large and getting larger. In our work with clients worldwide, we see that server sprawl remains a prevalent problem. One organization in the European energy industry maintained three data centers, the largest of which comprised 700 servers and was growing by some 100 servers per year. Rapid infrastructure growth of this kind is not uncommon, and every new server that enters the data center must be provisioned, creating time-consuming work for administrators.

The issue is exacerbated by the sheer number of servers that data centers must maintain to host both production and test environments, and by the frequency with which test servers and virtual test environments must be set up and torn down.

New Server Provisioning

When new servers arrive in the data center, the IT operations team typically employs a combination of imaging, scripting and manual processes to provision the new hardware. Traditional image-based provisioning solutions tend to be hardware-dependent, meaning the target system configuration must be identical to the source machine's. Highly scripted provisioning solutions are more flexible, but they still may not work consistently across multi-platform environments and often require considerable upfront manual effort to configure.

Workload portability solutions that enable network-based provisioning provide the best of both worlds by combining the flexibility of scripting with the speed and efficiency of imaging. An entire library of hardware-independent server snapshots can be captured and deployed on new servers quickly and easily over the network, eliminating the need to manually install software and configure hardware and drivers.
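To make the idea concrete, the sketch below models network-based provisioning from an image library at the file level. It is an illustration only: the `provision` function, the library layout, and the hostname-injection step are our assumptions, not any vendor's API.

```python
import shutil
from pathlib import Path

def provision(image_library: Path, image_name: str,
              target_root: Path, hostname: str) -> Path:
    """Deploy a hardware-independent gold image onto a new server.

    Two steps that hardware-dependent imaging cannot combine:
    copy the captured image payload, then inject per-machine
    configuration so no manual post-install work is needed.
    """
    source = image_library / image_name
    shutil.copytree(source, target_root, dirs_exist_ok=True)
    # Per-machine customization (hypothetical example: set the hostname)
    (target_root / "etc").mkdir(exist_ok=True)
    (target_root / "etc" / "hostname").write_text(hostname + "\n")
    return target_root
```

In practice the image library would sit on a network share and the payload would be streamed to the target over the LAN; the same call could equally target a physical disk or a virtual machine's disk, which is the point of hardware independence.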

“The testing of disaster recovery solutions is very important and should be done at least yearly and when significant processes change.”

—Gartner, Inc., “Server Capacity on Demand Spans many Capabilities”, by John R. Phelps, February 13, 2007

Diagram: Evaluating traditional disaster recovery alternatives.

Solution              Weakness                                   RPO      RTO     Cost
Tape/manual rebuild   Difficult to administer; slow, error prone 24 hr+   Days    $
Image capture         Limited restore flexibility                24 hr    Hours   $$$
Server clustering     Duplicate hardware; complicated setup      0        0       $$$$$

Gold images can be built as virtual machines and deployed over the network as needed. These gold images can be patched and updated simply by booting the virtual machine, saving organizations time and effort when provisioning new servers. It is important to note that using a virtual machine as an image format does not mean the workload must be provisioned to a virtual machine: a virtual-to-physical (V2P) migration can just as easily provision the image onto a physical machine. Flexible imaging technology thus enables data centers to unify their approach to provisioning both physical and virtual platforms within a single technology investment.

By choosing a workload portability solution with flexible imaging support, organizations can expand and manage disk allocation on deployment, and unify provisioning processes across physical and virtual infrastructure through a common approach.

Test Lab Automation

To test applications prior to production, many data centers operate large-scale test labs that are exact one-to-one replicas of production servers. The hardware required to maintain an exact duplicate of the production environment can be both costly and difficult to manage.

Virtualization allows organizations to dramatically reduce the number of servers required for testing. Using virtual machines, data centers have consolidated their test lab servers, running multiple virtual machines on a single virtual host.

However, data centers still struggle with setting up and tearing down complex virtual test labs and provisioning test environments. Workload portability provides a way to accelerate test cycles by automating the deployment, setup and teardown of test labs on the fly. Production workloads can be copied and streamed between physical servers, virtual machines and image archives over the network, allowing test labs to quickly reconfigure test environments and optimize testing resources.

Hosted Desktop Provisioning

A workload portability solution with flexible imaging support can be used to extend provisioning capabilities to desktop machines as well as servers. Organizations benefit from a unified process and a single interface for provisioning all physical and virtual workloads in the data center. The same approach applies to both servers and desktops: gold images are captured and maintained as virtual machines and deployed to physical or virtual hosts as needed. The gold images can be patched and updated simply by booting up the virtual machine. Maintenance is simplified by eliminating the need to re-image, and each workload can be easily customized, resized and configured at deployment time.

Workload Optimization

In the sections above, we have made a case for using a workload profiling and portability solution to provide a unified approach to common workload movement activities in the data center.

Let’s suppose for a moment that an organization has taken a project-based view of addressing the periodic workload movement necessities outlined above. The organization has successfully consolidated servers and leveraged virtualization and blade servers to shrink its data center's physical footprint, reducing floor and rack space as well as power and cooling requirements. The data center has never run more efficiently, and server utilization has never been more optimal. But how long will it stay that way?

Workload optimization is a moving target. As we saw above, workloads and resource utilization change over time, necessitating periodic monitoring and rebalancing of workloads and resources to keep the data center running in an optimal state. Organizations that take a project-based view, rather than a strategic one, and that invest in simple P2V and imaging tools will continually play catch-up, struggling to keep the data center in balance.

Resource Rebalancing

The real benefit of moving toward a unified approach to workload lifecycle management is the flexibility to rebalance workloads and resources at will, so that the data center runs optimally not only today or tomorrow but always. A collection of limited-functionality utilities (one tool for provisioning, another for P2V migrations, and so on) may work well for individual projects, but this disjointed, multi-tool approach is insufficient for ongoing resource rebalancing and performance optimization, and may add considerable overhead to data center administration, training and support. Capacity planning and continuous data center optimization require a broader solution set and a more harmonious approach.

To achieve this broader goal, a workload profiling and portability solution must be deployed as a key component of the data center infrastructure. With such a solution in place, data center administrators can regularly monitor, profile and assess workloads to identify resource mismatches and forecast capacity issues so they can be proactively addressed. Administrators may choose to continuously monitor the data center environment, run the workload profiling solution at periodic intervals such as monthly or quarterly, or capture workload profiling data for a few weeks or months prior to any planned data center initiative.
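As an illustration of what such periodic profiling might report, the sketch below classifies servers from CPU utilization samples collected over a profiling window. The function and its thresholds are our own invention for illustration, not product behavior.

```python
from statistics import mean

def classify_servers(samples, low=20.0, high=80.0):
    """Flag rebalancing candidates from per-server CPU samples (%).

    samples: mapping of server name -> list of utilization readings
    collected over the profiling window. The low/high thresholds
    are illustrative defaults, not product settings.
    """
    report = {}
    for server, history in samples.items():
        avg = mean(history)
        if avg < low:
            report[server] = "under-utilized: consolidation candidate"
        elif avg > high:
            report[server] = "over-utilized: needs more resources"
        else:
            report[server] = "balanced"
    return report

profile = {
    "file-srv-01": [6, 9, 4],      # barely used
    "db-srv-02":   [88, 93, 97],   # saturated
    "web-srv-03":  [45, 55, 50],   # healthy
}
print(classify_servers(profile))
```

Running this at monthly or quarterly intervals, or continuously, is the programmatic analogue of the monitoring cadence described above.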

The anywhere-to-anywhere workload portability solution can then be used to reduce waste by reallocating workloads to appropriate physical or virtual resources. Efficiencies are maximized by eliminating under-utilized assets, and risks are reduced by preventing over-utilization of server resources.
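Reallocation itself can be pictured as a packing problem. The toy heuristic below, a first-fit-decreasing placement that is our simplification of what a real capacity-planning engine would do, assigns the heaviest workloads first to the host with the most headroom:

```python
def rebalance(workloads, hosts):
    """Assign workloads (name -> resource demand) to hosts
    (name -> capacity) by first-fit decreasing: heaviest workload
    first, onto the host with the most remaining headroom.
    A toy stand-in for a real capacity-planning engine."""
    placement = {}
    free = dict(hosts)  # remaining capacity per host
    for name, demand in sorted(workloads.items(), key=lambda kv: -kv[1]):
        host = max(free, key=free.get)   # host with most headroom
        if free[host] < demand:
            raise ValueError(f"no host can fit workload {name}")
        placement[name] = host
        free[host] -= demand
    return placement

# Demands and capacities in arbitrary units (e.g. CPU shares)
plan = rebalance({"db": 60, "web": 30, "mail": 20},
                 {"host-a": 70, "host-b": 50})
print(plan)  # {'db': 'host-a', 'web': 'host-b', 'mail': 'host-b'}
```

A production engine would weigh CPU, memory, disk and network together and honor affinity constraints, but the shape of the decision, matching measured demand to available capacity, is the same.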

By embracing a workload-based paradigm and investing in workload-aware technologies that integrate workload profiling with dynamic workload movement, data centers gain agility and adaptability, along with bottom-line benefits such as lower overhead and TCO.

Summing Up

To move toward a unified view of workload lifecycle management, organizations must begin to think differently about the data center and the workloads running in it. They need to embrace anywhere-to-anywhere workload profiling and portability, in which workloads can be moved, copied and replicated at will between any physical or virtual host, regardless of virtual infrastructure or hardware configuration. Organizations must invest in technologies and solutions that provide detailed insight into workload patterns and make it easier, faster and more economical to move workloads between physical and virtual hosts or image archives. Such investments ensure that the data center is well equipped to address current and future requirements while accommodating business growth.

“The ‘art’ of capacity planning – used to anticipate workload growth for the purpose of adding required hardware as needed – has existed in data centers for more than 30 years. However, this discipline may not be effective in the Internet world, where marketing promotions and other unpredictable workloads can dramatically increase processing requirements overnight.”

—Gartner, Inc., “Server Capacity on Demand Spans many Capabilities” by John R. Phelps, February 13, 2007

About the Author

As founder and CEO of PlateSpin, Stephen Pollack leads the overall business strategy for the company. Stephen is a seasoned IT veteran, with a strong history of creating products and services that exceed customer expectations. He brings nearly 25 years of experience in marketing, sales, development and lifecycle support of successful commercial software products in a variety of horizontal technology and vertical market segments.

Before joining PlateSpin, Stephen was Vice President, Product Management at FloNetwork Inc., a leading direct marketing and email software/services ASP. Under his leadership, the company established itself as the number one supplier of scalable software services to direct marketers and on-line publishers in an ASP context.

Prior to that, Stephen held a number of senior management and business unit management positions in a variety of business, marketing and technical disciplines with companies such as Fulcrum Technologies (now part of OpenText) and NCR Corporation and as an independent technology consultant. Stephen is a product of Queen’s University, where he studied both Math and Computing Science. He has also completed business leadership training at the Ivey School of Business in London, Ontario.

About PlateSpin Ltd.

PlateSpin provides the most advanced data center automation solutions designed to optimize the use of server resources across the enterprise. PlateSpin technology liberates software from hardware and streams server workloads over the network between any physical or virtual machine. Global 2000 companies use PlateSpin solutions to lower costs, improve service levels and solve today’s most critical data center challenges including server consolidation, disaster recovery and hardware migration.

PlateSpin’s patent-pending conversion and optimization solutions transform the enterprise data center by breaking the dependency between hardware infrastructure and server software. Organizations can monitor and manage server workloads to ensure the best fit between server resources and application demands. By enabling the free and flexible interchange of data, applications and operating systems with a simple drag and drop, PlateSpin brings greater flexibility and new efficiencies to the data center.

PlateSpin Ltd.
200 - 340 King Street East
Toronto, Ontario, M5A 1K8
Phone: 416 203 6565
Toll Free: 1 877 528 3774
www.platespin.com

© 2007 PlateSpin Ltd. All rights reserved. PlateSpin and the PlateSpin logo are registered trademarks and PowerConvert, PowerRecon, Server Sync, Workload Portability and PowerSDK are trademarks of PlateSpin Ltd. PlateSpin conversion and optimization technology and related products are patent pending. All other marks and names mentioned herein may be trademarks of their respective companies.