
An Economic Study of the Hyperscale Data Center

Ericsson Hyperscale Datacenter System 8000 transforms the economics of the enterprise data center, enabling a new generation of hyperscale architecture

January 2016

White Paper

Copyright © 2016 Mainstay, LLC

TABLE OF CONTENTS

Executive Summary
Welcome to the Networked Society
Transforming the Enterprise Data Center
Virtualization and Software-Defined Infrastructure
Hyperscale Pioneers
Ericsson Hyperscale Datacenter System 8000: Hyperscale Data Center Architecture for the Enterprise
Higher Utilization
Optimized Data Center Infrastructure Management: From “Procure-to-Provision” to “Pool-to-Provision”
Business Impact and TCO Analysis
Operating Expense (OPEX) Savings
Capital Expenditure (CAPEX) Savings
Revenue and Other Strategic Impacts
5-Year TCO Impact for Enterprise Data Centers
TCO and ROI Summary
TCO Model Approach and Assumptions
The Journey to Hyperscale
About This Study
About Mainstay
Appendix
Endnotes


EXECUTIVE SUMMARY

This paper examines the economics and performance of a new data center infrastructure solution from Ericsson called Ericsson Hyperscale Datacenter System 8000 that is poised to reshape how enterprise data centers are architected to meet the challenges of the hyper-connected Networked Society. Based on the paradigm-shifting Intel® Rack Scale Architecture, Ericsson Hyperscale Datacenter System 8000 delivers a hyperscale cloud solution for enterprise data centers that will enable quantum leaps in efficiency, infrastructure elasticity, and economics.

Our research found that companies implementing Ericsson’s hyperscale platform can capture significant TCO savings compared to traditional data center infrastructures — generating CAPEX savings of up to 55%, OPEX savings of up to 75%, and a return on investment (ROI) of up to 138% for large enterprise data center operators over a five-year period.

Ericsson Hyperscale Datacenter System 8000 generates these savings by delivering performance and scalability on par with data center leaders such as Facebook, Google and Amazon in a commercially available platform. By creating a pool of infrastructure components leveraging Intel’s new RSA technology, SDI techniques, and optical networking, data center operators can better match infrastructure capacity with application and network workload requirements, thereby improving utilization of these resources. In this white paper we detail the range of value drivers that support these results and estimate the impacts across different data center scale scenarios.

In an economy where companies face exponentially rising data traffic and demands for new cloud computing capabilities, data center operators need to take a close look at new ways to provide IT services that are more flexible, cost-effective and vastly more scalable than what they offer today. Operators need to ask themselves whether they are ready for the Networked Society.


WELCOME TO THE NETWORKED SOCIETY

We live in a world where five billion people are connected and where mobile, broadband, and cloud are transforming the fabric of society. Ericsson calls it the Networked Society and it is impacting our everyday lives in profound ways, not unlike the titanic shifts of the industrial revolution more than a century ago. Today, Internet connectivity and mobile communications are taken for granted.1 And with the rise of the Internet of Things, more products are becoming “smart” and connected with other things across an expanding global cloud network. Meanwhile information streams from these networks are proliferating, spawning opportunities to harness Big Data for competitive advantage as well as social progress.

The world’s data traffic is exploding as a result. Analysts estimate that global workloads will grow by 81% between 2015 and 2018, with the highest growth (113%) coming from cloud-based workloads. Ericsson is currently forecasting compound annual growth in mobile traffic of 45% between 2015–2021.2 All of this is placing enormous pressure on businesses and data centers to keep pace. Indeed, one of the most critical challenges facing enterprise data centers in the next few years will be how to scale their computational capacity to meet exponentially growing digital workloads in an economically efficient way.
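To put these growth rates in perspective, the short calculation below converts them into cumulative multiples. It is an illustrative sketch that uses only the figures quoted in this paragraph.

```python
# Illustrative arithmetic only: converts the growth rates cited above into
# cumulative multiples over their stated time spans.

def cumulative_growth(cagr: float, years: int) -> float:
    """Total growth multiple implied by a compound annual growth rate."""
    return (1.0 + cagr) ** years

# Ericsson's forecast: 45% compound annual growth in mobile traffic, 2015-2021.
mobile_multiple = cumulative_growth(0.45, 2021 - 2015)
print(f"Mobile traffic multiple, 2015-2021: {mobile_multiple:.1f}x")        # ~9.3x

# Cloud workloads growing 113% over 2015-2018 implies this annualized rate.
implied_cloud_cagr = (1.0 + 1.13) ** (1.0 / 3.0) - 1.0
print(f"Implied cloud-workload CAGR, 2015-2018: {implied_cloud_cagr:.1%}")  # ~29%
```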

To thrive in the Networked Society, businesses will need to seize fast-emerging digital business opportunities that require immediate deployments of compute, storage and networking resources. Demand can also recede or shift to new projects just as quickly, so enterprise data centers need to be able to turn capacity on and off at short notice. This means data centers will need to provide business stakeholders with the same “infrastructure on tap” capabilities that modern cloud-service providers like Amazon Web Services have pioneered.

Indeed, as McKinsey & Company has observed, one of the top priorities for enterprise data centers will be establishing a commercial-style business relationship with internal customers. This will require an ability to standardize the data center’s service offerings, provide a solid “bottom-up” service cost structure, implement robust infrastructure management solutions, and reinvent the way the data center serves its customers.3

TRANSFORMING THE ENTERPRISE DATA CENTER

The great majority of enterprise data centers today are ill-equipped to help businesses succeed in the Networked Society. Designed primarily to deliver back-office applications to internal business users, the traditional data center is burdened with an inefficient and costly architectural design that prevents it from scaling and flexing rapidly to meet the changing workloads of a modern digital enterprise.

One of the biggest design inefficiencies of the conventional data center is the creation of separate silos of hardware and software for each application workload. Because resources are not shared between workloads, investments in capacity buffers (over-provisioning) and high availability need to be made separately for each workload, adding to the overall data center cost burden.

CHALLENGES FACING DATA CENTERS TODAY

• Scaling to support exponential growth of application and network workloads with minimal additional costs to the business (OPEX/CAPEX)

• Transforming service capabilities from back-office operations into a strategic, revenue-driving, customer-facing asset

• Increasing flexibility and speed of delivery of infrastructure services to meet the dynamics of digital industrialization


Figure 1. Traditional heterogeneous application workload management

Moreover, because traditional data centers typically follow a lengthy “procure-to-provision” process tied to the business’s annual capital investment cycle, system architects tend to over-provision the infrastructure to weather these long procurement cycles. Enterprise architects we interviewed said they typically design in 30% over-capacity for a standard workload and up to 60–70% for mission-critical workloads.
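As a rough illustration of how those per-workload buffers add up, the sketch below applies the 30% and 60–70% over-capacity figures quoted by the architects we interviewed to a small, hypothetical set of siloed workloads. The workload names and base capacities are invented for illustration only.

```python
# Hypothetical example: how per-workload overprovisioning buffers inflate
# total provisioned capacity in a siloed design. Workload sizes are invented;
# the 30% and 70% buffers come from the architect interviews cited above.

workloads = [
    # (name, required capacity in arbitrary units, overprovision buffer)
    ("ERP (mission-critical)", 100, 0.70),
    ("CRM (standard)", 60, 0.30),
    ("Analytics (standard)", 80, 0.30),
    ("Intranet (standard)", 40, 0.30),
]

required = sum(cap for _, cap, _ in workloads)
provisioned = sum(cap * (1 + buf) for _, cap, buf in workloads)

print(f"Required capacity:    {required}")                           # 280
print(f"Provisioned capacity: {provisioned:.0f}")                    # 404
print(f"Idle buffer share:    {1 - required / provisioned:.0%}")     # ~31%
```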

Adding to data center complexity and cost is the wide variety of proprietary infrastructure components associated with each application stack. The bill of materials includes database appliances, storage solutions, servers, and networking solutions — all procured on a per-application, per-department basis. Heterogeneous environments like these invariably require more people to administer, adding to OPEX.

Virtualization and Software-Defined Infrastructure

To address these inefficiencies, the traditional data center is evolving. Server virtualization technology has allowed system administrators to have more flexibility in provisioning infrastructure resources. According to Forrester Research, 77% of x86 servers in enterprise data centers have been virtualized.4 Analysts estimate that virtualization has lifted x86 server utilization from 5–10% to over 30%.5

The success of server virtualization has opened the door to virtualizing other data center components, such as network and storage, leading to the next wave of enterprise data center optimization — software-defined infrastructure (SDI) — which seeks to virtualize the entire data center infrastructure. Accelerating the spread of this new architecture is an industry-standard technology blueprint developed by Intel called Intel® Rack Scale Architecture (RSA).

Intel RSA is a logical architecture that disaggregates compute, storage, and network resources and introduces the ability to pool these resources for more efficient utilization of assets. It simplifies resource management and provides the ability to dynamically compose resources based on workload-specific demands.6

[Figure 1 depicts separate application workloads for each business unit, each carrying its own required capacity, overprovisioned capacity, and high-availability buffer. The long procure-to-pay process per workload creates overprovisioning, which increases hardware capital expenses, proportionally increases operating expenses, and reduces resource utilization.]

SOFTWARE-DEFINED INFRASTRUCTURE (SDI)

Software-defined infrastructure is technical computing infrastructure placed entirely under the control of software. It operates independently of any hardware-specific dependencies and is programmatically extensible.


Hyperscale Pioneers

Technology giants Google, Amazon, and Facebook were among the first to exploit the new architectural paradigm, developing what are called hyperscale data centers — highly scalable platforms built to handle the massively complex workloads of the Networked Society.

These pioneering hyperscale platforms have set the modern standard for performance, efficiency, and manageability. In fact, our research found that these custom-built hyperscale data centers, when compared to traditional data centers, deliver four times the level of CPU utilization, increase network utilization by as much as 26%, and dramatically lift productivity by enabling a single IT administrator to manage thousands of servers versus just hundreds, as shown in Figure 2.

These efficiencies have enabled large public cloud providers to drive down data center infrastructure and management costs, making it economically feasible to accommodate the massive computing and storage demands of the Networked Society. The next step in the evolution of the data center is to make hyperscale cloud architectures commercially available to enterprises worldwide.

Figure 2. Hyperscale data center efficiencies

ERICSSON HYPERSCALE DATACENTER SYSTEM 8000: HYPERSCALE DATA CENTER ARCHITECTURE FOR THE ENTERPRISE

Recognizing the game-changing potential of Intel Rack Scale Architecture and software-defined infrastructure, technology leader Ericsson developed a data center solution — Ericsson Hyperscale Datacenter System 8000 — that brings the benefits of hyperscale computing to enterprises in a commercially available solution. For the first time, enterprises can deliver cloud computing performance and manageability on par with hyperscale leaders Google, Facebook and Amazon.

As discussed above, traditional data centers typically build dedicated infrastructures for each workload. Server virtualization has helped break down some of these silos, but in practice this technique can only be applied to a portion of enterprise workloads. Many enterprise applications, for example, are too costly to virtualize, while other legacy applications can present performance and tuning challenges in virtualized environments.

GOING HYPERSCALE

Hyperscale is the ability of an architecture to scale by seamlessly adding compute, memory, networking, and storage resources to accommodate increasing volumes of data and workloads. Hyperscale computing is often associated with cloud computing and the very large data centers owned by Facebook, Google and Amazon.

[Figure 2 compares traditional and hyperscale data centers on three measures: network utilization (traditional vs. SDN, indexed), 1.0 vs. 1.26, a 26% improvement;1 CPU utilization (traditional vs. Google/Facebook), 15% vs. 65%, a 4x improvement (2x vs. virtualized workloads);2 and servers per administrator (traditional vs. Google/Facebook), 300 vs. 5,000, a 10x+ improvement in resource scaling.3 Source: Mainstay Partners. Figure sources: (1) Juniper Networks & Wakefield Research survey of 400 U.S.-based IT decision makers, 2014; (2) traditional DC benchmark based on the 2014 Data Center Efficiency Assessment, NRDC; Google/Facebook benchmark based on extensive Mainstay and Ericsson research (see endnote 7 for a full list of sources); (3) Delfina Eberly, Director of Datacenter Operations, Facebook.]


In addition, the performance of certain business-critical workloads — particularly high-speed, revenue-generating, and customer-facing applications — can suffer from the overhead latency effects of virtualization, and such workloads are often excluded from virtualization for that reason.

Ericsson Hyperscale Datacenter System 8000 overcomes these limitations by creating a pooled set of physical layer infrastructure components to support the virtualized and application layers, enabling businesses to flexibly support the dynamic demands of heterogeneous workloads. Administrators use management software — the HDS Command Center portal, or APIs connected to an existing enterprise management system — to define the optimal infrastructure components for each workload. Figure 3 compares a traditional data center architecture with the Ericsson platform, showing the advantages of deploying a centralized, homogeneous bare-metal layer of resources managed by a centralized software control system (HDS Command Center) across all workloads.

Figure 3. Traditional data center vs. Ericsson Hyperscale Datacenter System 8000 higher utilization

Higher Utilization

The ability to rapidly match workloads with hardware components in a common pool of bare-metal infrastructure enables data centers to increase the average utilization of components. Our research indicates that system administrators could routinely maintain server utilization rates of approximately 60% across most workloads.7

[Figure 3 contrasts the traditional data center infrastructure architecture, in which legacy, specialized, virtualized, and business-critical workloads each run on their own stack of OS, network, servers, and disk (siloed workload infrastructure, heterogeneous bare-metal complexity, virtualization applicable only to select workload segments), with the Ericsson Hyperscale Datacenter System 8000 architecture, in which all workloads draw on pooled hardware components (CPU, memory, disk, network) managed through the Command Center portal and an IaaS layer (e.g., OpenStack, VMware): pooled workload infrastructure, homogeneous bare-metal components, and utilization improved across workloads.]


We found three critical drivers of utilization improvements. By replacing the “procure-to-provision” planning process with a streamlined, software-defined “pool-to-provision” planning process (see next section), system administrators can better match infrastructure to workload demands, removing the overprovisioning allowance that is prevalent in current workload planning decisions. In our discussions with data center experts, we found this driver could improve utilization by up to 60%. Another utilization booster is the ability to dynamically allocate CPU resources across servers and racks, allowing administrators to quickly migrate resources to address shifting demand. We found this can drive 100–300% greater utilization for virtualized workloads and 200–600% for bare-metal workloads.

Finally, HDS Command Center’s open platform approach allows data center operators to manage their entire data center infrastructure (other vendor solutions as well as Ericsson) through a single portal. This enables system administrators to identify opportunities to allocate infrastructure to multiple workloads, improving utilization by up to 50%. For example, infrastructure used to support Workload A, which experiences peak demand during the day, could be shifted to Workload B, which experiences peak demand during the night. The visibility and reporting capabilities of HDS Command Center also allow operators to audit all infrastructure components, enabling them to locate under-utilized “ghost servers” (those not procured or managed by IT) and bring these resources under IT management. By leveraging the management and reporting capabilities of HDS Command Center, data center operators could improve utilization by up to 10%, our research shows. Figure 4 summarizes the range of utilization improvements that could be achieved by implementing the Ericsson Hyperscale Datacenter System 8000 platform.

Figure 4. Summary of Utilization Improvements

• “Pool-to-Provision” Planning Process: up to 60% utilization improvement (modeled as a 0.25X improvement for virtualized workloads and 0.5X for bare metal). Traditional “procure-to-provision” processes, combined with high-availability and other performance considerations, drive overprovisioning of infrastructure. The Ericsson system streamlines this process, creating a “pool-to-provision” capability, including auto-provisioning, that better matches infrastructure requirements with workload demands.

• CPU Re-provisioning and Infrastructure Tuning: 1X–3X improvement for virtualized servers and 2X–6X for bare-metal (non-virtualized) servers (modeled as a 1.5X CPU utilization improvement for virtualized workloads and 3X for bare metal). HDS Command Center software-defined infrastructure management enables simple CPU re-assignment, maximizing the utilization of available resources across workloads (e.g., Workload A is not meeting planned demand metrics, so infrastructure is moved to Workload B).

• CPU Multi-Provisioning (up to 50%) and Ghost Server Discovery and Avoidance (up to 10%): together modeled as a 0.25X utilization improvement for virtualized workloads and 0.5X for bare metal. With the Ericsson system’s greater automation, monitoring, and resource discovery capabilities, ghost servers can be eliminated. The hyperscale platform also allows resources to be re-allocated across multiple workloads based on need (e.g., Workload A during peak daytime hours, Workload B during peak nighttime hours).

AVERAGE: 2X (virtualized), 4X (non-virtualized). Based on the drivers above, we modeled the median of each factor, which yields approximately a 2X CPU utilization improvement for virtualized workloads and a 4X improvement for non-virtualized workloads. Note that all of these factors will vary based on the unique operating practices of the enterprise data center.
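The sketch below is a minimal illustration, assuming the baselines given later in the TCO model (30% utilization for virtualized workloads, 15% for bare metal, 60% on the pooled platform, and a 75% virtualization share). It shows how the 2X and 4X improvements combine into the 2.5X weighted average cited in the appendix, and why the appendix’s 900 legacy servers shrink to roughly 394.

```python
# Sketch of the utilization roll-up used in this paper's TCO model.
# Assumptions taken from the text: 75% of workloads virtualized, baseline CPU
# utilization of 30% (virtualized) and 15% (bare metal), and 60% utilization
# on the pooled Ericsson platform; 900 legacy servers per the appendix.

TOTAL_SERVERS = 900
VIRT_SHARE = 0.75
UTIL = {"virtualized": 0.30, "bare_metal": 0.15}
TARGET_UTIL = 0.60

# Per-class improvement factors (2x and 4x) and their simple weighted average.
factors = {k: TARGET_UTIL / u for k, u in UTIL.items()}
weighted_avg = VIRT_SHARE * factors["virtualized"] + (1 - VIRT_SHARE) * factors["bare_metal"]
print(f"Weighted-average improvement: {weighted_avg:.1f}x")          # 2.5x

# Server count after pooling: the capacity actually in use stays constant, so
# new_servers = old_servers * old_utilization / new_utilization per class.
servers = {"virtualized": TOTAL_SERVERS * VIRT_SHARE,
           "bare_metal": TOTAL_SERVERS * (1 - VIRT_SHARE)}
new_servers = sum(n * UTIL[k] / TARGET_UTIL for k, n in servers.items())
print(f"Servers required after pooling: {new_servers:.0f}")          # ~394
print(f"Effective reduction factor:     {TOTAL_SERVERS / new_servers:.2f}x")  # ~2.29x
```

Note that the server count falls by about 2.3x rather than 2.5x because the reduction is driven by the capacity actually in use in each class, not by the simple average of the improvement factors.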


By contrast, server utilization rates for traditional data center workloads typically range between 6% and 15%, with hardware virtualization benchmarks averaging closer to 30%.8 Whether applications are virtualized or running on bare metal, the ability to pool infrastructure at the component level using the Ericsson Hyperscale Datacenter System 8000 platform creates a homogeneous data center infrastructure across all workloads.

Optimized Data Center Infrastructure Management: From “Procure-to-Provision” to “Pool-to-Provision”

Ericsson’s software-defined data center platform will modernize the way operators manage infrastructure planning and procurement. The focus of planning cycles, for example, will move from procuring infrastructure for individual workloads (“procure-to-provision”) to managing the disposition and deployment of the data center’s entire pool of hardware (“pool-to-provision”).

This shift to more holistic planning will eliminate traditional over-provisioning practices and enable rapid reallocations based on actual usage patterns. To effectively manage the new planning approach, operators may find they need to modify their organizational structures somewhat, relying less heavily on individual component solution specialists (server, storage, and networking experts) and more heavily on cross-component designers and SDI experts.

INSIDE THE ERICSSON HYPERSCALE DATACENTER SYSTEM 8000 PLATFORM

Ericsson’s new platform introduces unique capabilities to enable companies to build a software-defined hyperscale data center.

• Resource pooling. Leveraging RSA, Ericsson’s platform reinvents the way system administrators provision, manage and monitor data center infrastructure assets through the pooling of infrastructure components. Resource pooling results in significant improvements in asset utilization.

• HDS Command Center. This system management portal gives IT administrators full visibility into all data center infrastructure elements (including non-Ericsson system elements via APIs) and enables composing a system using pooled resources that include compute, network and storage based on workload requirements. Command Center also provides auto-provisioning, real-time performance tracking, and triggers to adapt to workload shifts.

• HDS Optical Backplane. Ericsson’s state-of-the-art optical interconnect removes the traditional distance and capacity limitations of electrical connections, enabling maximum pooling of resources and flexibility in physical placement of data center assets, while increasing the lifespan of racking chassis and cabling by 3-4x.


BUSINESS IMPACT AND TCO ANALYSIS

What specific benefits and savings can businesses expect to achieve if they implement the Ericsson Hyperscale Datacenter System 8000 platform? Our analysis, based on extensive discussions with data center engineers and industry experts,9 estimated a range of impacts spanning three key business-value categories, as summarized in Figure 5:

Figure 5. Ericsson Hyperscale Datacenter System 8000 Economic Value Tree: Impact Categories

[Figure 5 groups the platform’s impacts into three categories:
• Operating Expense (OPEX) Savings: system administration savings via Command Center workload lifecycle management; power savings from improved asset utilization, density, and PDU architecture; optical backplane wiring and lifecycle cost savings.
• Capital Expense (CAPEX) Savings: server/CPU consolidation and improved utilization reduce component costs; an SDN architecture plus the Ericsson hyperscale networking solution reduces cost per port; an SDS architecture plus the Ericsson hyperscale storage solution reduces total capacity required; facility infrastructure density improvements; SDI agility to migrate infrastructure reduces total capacity requirements.
• Revenue and Other Strategic Impacts: enable revenue-generating services; deliver Networked Society strategic opportunities; create “infrastructure on tap” or “AWS-like” asset nimbleness; improve “green” data center measures.]

Estimates of TCO and business impact will vary depending on the existing maturity and operating practices of the enterprise data center being studied. To simplify our analysis, we used third-party benchmarks to estimate the current operating practices of data centers today. For example, virtualization has helped enterprise data centers drive additional cost savings and higher efficiencies, but our research found that the extent of virtualization can vary from 25% to 75% of data center workloads.

Operating Expense (OPEX) Savings

Our analysis showed that the Ericsson Hyperscale Datacenter System 8000 solution offers clear opportunities for reducing operating costs. These drivers of OPEX savings can be grouped into five categories, as shown in Figure 6 and summarized below.

Figure 6. Ericsson Hyperscale Datacenter System 8000 economic value tree — OPEX savings

[Figure 6 maps the OPEX benefit categories to their business impacts:
• Simplified System Administration Processes: SDI scales key management activities across all components.
• Deployment Administrative Savings: the Optical Backplane simplifies deployment and increases rack lifespan by 3x+.
• Reduced Software License Support Costs: workload efficiencies reduce software license support costs.
• Reduced Cooling/Power Consumption: increased utilization and improved data center density maximize PUE.
• Reduced Data Center Footprint OPEX: higher density and capacity utilization reduce facility costs.]


• Physical deployment savings. Ericsson’s innovative racking system, combined with the efficiency improvements delivered by HDS, can significantly reduce annual physical deployment costs. Our analysis showed that these savings could reach as high as 74% of traditional annual physical deployment costs, as shown in Figure 7. The savings are driven by the platform’s advanced optical backplane technology and greater CPU utilization, which together dramatically reduce the hardware and labor required to build and maintain racks and increase the average lifespan of racks from about 3 years to 10 years or more. Further, Ericsson’s high-utilization, component-based architecture changes the traditional approach of re-racking and re-cabling after each systems-replacement cycle (every 3 to 5 years). The Ericsson Hyperscale Datacenter System 8000 platform allows data center operators to keep the existing optical backplane rack and cabling and simply replace component sleds within that racking system, reducing effort and minimizing costs.

• Simplified system administration. Companies, especially large enterprise-scale data center operators, will realize significant labor productivity savings by using Ericsson’s HDS Command Center software to streamline the management of data centers. Common system administration tasks such as monitoring, notifications, quota management and resource allocation, ghost server discovery, and provisioning across workloads are now automated and orchestrated by HDS Command Center. Further, because the data center is now managed as a homogeneous infrastructure, fewer component and vendor experts are required. As a result, our research indicates that large data center operators can expect to streamline system asset management tasks and reduce FTE requirements by as much as 94% for managing storage, server and networking environments, as shown in Figure 8.10

• Power and cooling savings. Our research shows that Ericsson’s high-utilization platform will enable data centers to operate with substantially less physical infrastructure, as detailed in the section on CAPEX savings. Based on these estimates, the average data center could reduce overall power and cooling costs up to 72%.11

• Reduced software licensing and support costs. As organizations boost component utilization and cut back on data center hardware — particularly CPUs — the cost of software licensing would fall proportionally. Based on our earlier estimates, companies could realize as much as a 30% reduction in ongoing virtualization software license costs.12 However, these savings are offset by software license investments in HDS Command Center Software. Our analysis found that this TCO category would be a net investment for operators.

• Lower real estate costs. In most cases, real estate cost savings will take the form of cost avoidance as companies find they can accommodate growth more efficiently and thus use existing data center space longer. Ericsson’s optical backplane also facilitates better space utilization, allowing for more flexible placement of racks. Our analysis estimates that by switching to Ericsson’s hyperscale infrastructure, the typical enterprise could reduce its data center footprint by up to 50%.

Figure 7. Ericsson Hyperscale Datacenter System 8000: Physical deployment cost savings

[Figure 7 indexes annual physical deployment costs for a 60-rack data center: traditional = 1.00 vs. Ericsson hyperscale = 0.26. Assumes a 60-rack data center with an electrical, standard rack architecture; savings derive from (1) the increased lifespan of the optical backplane, (2) a reduced footprint from improved capacity utilization, (3) a reduced footprint from less over-provisioning, and (4) simplified cabling requirements. Source: Mainstay Partners]

Figure 8. System administration management savings

[Figure 8 compares the number of infrastructure system administrators at a 60-rack data center: traditional = 35 vs. Ericsson hyperscale = 2. Assumes Ericsson system best practices, unified infrastructure simplification, and SDI benefits are achieved. Source: Mainstay Partners]
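The 35-versus-2 comparison in Figure 8 can be reproduced from the staffing ratios given in the appendix (Figure 15). The sketch below is an illustrative reconstruction, assuming the model’s 900 physical servers, 75% virtualization share, 15 virtual machines per virtualized host, and (our assumption) that each bare-metal host counts as one managed instance.

```python
# Sketch reproducing the Figure 8 staffing comparison from the appendix ratios:
# 1 FTE per 300 managed (virtual) servers today vs. 1 FTE per 5,000 with HDS.
# Assumes 900 physical servers, 75% virtualized, 15 VMs per virtualized host,
# and (an assumption) that each bare-metal host counts as one managed instance.

PHYSICAL_SERVERS = 900
VIRT_SHARE = 0.75
VMS_PER_HOST = 15

virtual_servers = PHYSICAL_SERVERS * VIRT_SHARE * VMS_PER_HOST    # 10,125
bare_metal_hosts = PHYSICAL_SERVERS * (1 - VIRT_SHARE)            # 225
managed_instances = virtual_servers + bare_metal_hosts            # 10,350

admins_traditional = managed_instances / 300     # ~34.5 -> Figure 8 shows 35
admins_hds = managed_instances / 5000            # ~2.1  -> Figure 8 shows 2

print(f"Managed instances:  {managed_instances:.0f}")
print(f"Traditional admins: {admins_traditional:.1f}")
print(f"HDS admins:         {admins_hds:.1f}")
print(f"Reduction:          {1 - admins_hds / admins_traditional:.0%}")  # 94%
```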

Capital Expenditure (CAPEX) Savings

Ericsson’s hyperscale cloud solution can help companies significantly reduce or avoid data center capital expenditures as each component of the data center becomes more efficient and easily scalable. Figure 9 details the range of capital savings companies can potentially realize.

Figure 9. Ericsson Hyperscale Datacenter System 8000 economic value tree — CAPEX savings

[Figure 9 maps the CAPEX benefit categories to their business impacts:
• Increased Capacity Utilization of Server/CPU Infrastructure: increased utilization of CPUs through pooled components and advanced capacity management practices.
• Reduced Operating System Licenses: fewer CPUs per workload lowers operating system license purchase requirements.
• Reduced Virtualization Licenses (e.g., Hypervisor): fewer CPUs per workload lowers virtualization software license purchase requirements.
• Increased Storage Efficiency of Unstructured Data: storage efficiency gained from an erasure-coding storage dispersal management approach.
• Reduced Network Costs via a Fabric-Based Approach: reduced spend by moving to a fabric/SDN solution that increases utilization with commodity components.]

• Server CAPEX savings. As enterprises shift workloads to Ericsson’s software-defined infrastructure, they can expect to reduce capital expenditures on servers (CPUs) in line with our earlier 2-4x utilization improvement estimate. In estimating the total savings potential for an enterprise data center with 75% virtualized workloads, we calculated a 65% TCO savings for server-related CAPEX spending. Companies can also economize on CPU costs by optimizing how they are deployed in the hardware pool. For example, as a CPU ages and newer components are brought online, administrators can match older components to less critical workloads, extending the average life of a CPU from three to five years, according to data center experts we interviewed. Many companies will see additional savings in lower software license costs tied to CPU cores or sockets.

• Storage savings. By employing an advanced erasure coding storage algorithm, Ericsson Hyperscale Datacenter System 8000 storage minimizes the replication of unstructured data across the data center, enabling cost reductions in disk storage by as much as 35% (Figure 10). In addition to lowering physical disk CAPEX, the enterprise should also see reductions in storage-related software license costs.

• Network CAPEX savings. Ericsson’s software-defined networking topology, combined with the physical advantages driven by its optical backplane racking solution, will increase the overall efficiency and effectiveness of the data center’s physical switch infrastructure, enabling capital cost reductions in this area. In addition, our analysis shows that network-layer savings will increase exponentially as the scale and complexity of the data center increases.13

Figure 10. Traditional vs. Ericsson hyperscale platform’s unstructured data storage

[Figure 10 compares the amount of storage required for similar workloads: 3.6 petabytes in a traditional data center vs. 1.7 petabytes on the Ericsson hyperscale platform. Source: Mainstay Partners]

• Power and cooling equipment savings. A side benefit of migrating to Ericsson’s streamlined infrastructure is a corresponding reduction in the amount of power backup and cooling equipment required. Since these savings are likely to represent only a small portion of overall cost savings, they are documented here for completeness but not included in the TCO calculations that follow.
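The storage savings bullet above combines two inputs that appear in the appendix: roughly 70% of the storage environment is unstructured data, and the erasure-coded dispersal approach halves the replication of that unstructured portion. A minimal sketch of that arithmetic, using the appendix’s 10 PB baseline:

```python
# Sketch of the storage CAPEX saving cited above: the appendix assumes 70% of
# capacity is unstructured data and that HDS software-defined storage halves
# the replication of that unstructured portion.

def storage_savings(total_pb: float, unstructured_share: float = 0.70,
                    replication_reduction: float = 0.50) -> float:
    """Capacity (in PB) no longer needed after erasure-coded dispersal."""
    return total_pb * unstructured_share * replication_reduction

total_capacity_pb = 10.0                       # appendix baseline: 10 PB
saved = storage_savings(total_capacity_pb)
print(f"Capacity saved: {saved:.1f} PB ({saved / total_capacity_pb:.0%})")  # 3.5 PB (35%)
```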

Revenue and Other Strategic Impacts

Although the specific purpose of this study was to detail the potential TCO savings of the Ericsson hyperscale platform, our research and discussions with data center and industry experts also uncovered a range of strategic and revenue-generating opportunities that enterprises could capture when they are equipped with hyperscale data center capabilities. Although the sheer diversity and unpredictability of these opportunities makes it impossible to quantify for the average company, they may well represent the greatest potential source of value for many enterprises, especially those seeking business transformation through digital initiatives.

For example, a hyperscale platform like Ericsson’s is ideally suited for enabling IoT and Big Data initiatives, both of which require massively scalable compute, networking and storage capabilities. McKinsey estimates that the IoT could have a total economic impact of $11.1 trillion per year by 2025,14 the equivalent of 11 percent of the world economy. However, to exploit IoT opportunities, data centers will need to bring new capabilities into play, such as enabling interoperability and the ability to analyze huge volumes of data. Ericsson Hyperscale Datacenter System 8000 makes all of this possible and at a lower cost basis than the competition. For “hyperscale-enabled” companies, billion-dollar opportunities become addressable markets.

Other capabilities enabled by the Ericsson hyperscale platform, such as rapid workload provisioning, optimal hardware utilization, and better infrastructure visibility and manageability, will enable enterprise data centers to provide services comparable to what’s available from public cloud providers such as AWS. Furthermore, by increasing data center density, conserving space, and improving power usage effectiveness, or PUE, the Ericsson hyperscale platform will help companies meet environmental sustainability goals, an important strategic priority for many companies.

5-YEAR TCO IMPACT FOR ENTERPRISE DATA CENTERS

What is the total economic benefit enterprises could gain from implementing Ericsson Hyperscale Datacenter System 8000 in a typical data center? To estimate the impact, we created a total cost of ownership (TCO) model, factoring in the range of value drivers described in the previous section. The model analyzes operating scenarios for three sizes of data centers: small (1 rack), medium (6 racks), and enterprise-scale (60 racks), recognizing that infrastructure components can vary greatly in each case depending on industry, geographic coverage, the types of applications being run, and other factors.


TCO and ROI Summary

The results of our TCO analysis are clear: Enterprise data centers that adopt the Ericsson Hyperscale Datacenter System 8000 platform can expect to capture significant savings — as much as 55% in CAPEX savings and 75% in OPEX savings for a 60-rack data center (see below for more details on our TCO modeling), with an estimated overall TCO savings of 63%. This translates into an estimated ROI of 138% over five years.

One of the key drivers of TCO savings is greater CPU utilization, which allows for the deployment of fewer servers and CPUs, less network hardware, and less storage disk. The Ericsson platform’s ability to transform the “procure to provision” process into a vastly more efficient “pool to provision” process enables more frequent, collaborative planning efforts that help avoid over-investment in infrastructure. For these utilization and labor efficiencies to be fully realized, however, organizational changes may be required to facilitate collaborative planning across departments and optimize compute, storage and network capacity.

Ericsson’s Optical Backplane solution enables more efficient cabling, resulting in fewer cables procured and easier physical deployment, which lowers OPEX spending. The automation and orchestration provided by HDS Command Center scales up the amount of infrastructure each system administrator can manage. Less infrastructure also reduces power and cooling, software license, and hardware and software support costs.

Figure 11 summarizes the estimated magnitude of cost savings and ROI that could be achieved at different scales of deployment. Figure 12 shows a more detailed breakdown of savings by data center cost category.

Figure 11. Ericsson Hyperscale Datacenter System 8000: Summary of estimated savings and ROI (five years)16

Financial Category (60 Racks / 6 Racks / 1 Rack)
CAPEX Savings: 55% / 53% / 50%
OPEX Savings: 75% / 62% / 0.4%
Total Cost of Ownership (TCO) Savings: 63% / 57% / 26%
Return on Investment: 138% / 109% / 29%

Figure 12. Full comparison of TCO cost elements

Cost Element (60 Racks / 6 Racks / 1 Rack)
Hardware CAPEX: 59% / 57% / 53%
Software CAPEX: 30% / 29% / 29%
Hardware OPEX: 44% / 41% / 35%
Software OPEX*: 30% / 29% / 29%
System Administration OPEX: 94% / 75% / 0%
Power, Cooling and Space OPEX: 62% / 62% / 36%

*Software OPEX does not include investment in licensing costs for HDS Command Center software.


Figure 13 shows the 5-year economic impact of moving to Ericsson Hyperscale Datacenter System 8000 at a hypothetical 60-rack data center currently operating in a non-hyperscale environment. Figure 14 depicts the relative percentages of each type of savings. See Figure 15 in the Appendix for additional notes and calculations on each category of savings and investment cost.

Figure 13. 5-Year TCO Impact Summary (60-rack data center)

Figure 14. Ericsson Hyperscale Datacenter System 8000: Savings by cost category

TCO Model Approach and Assumptions

To establish realistic assumptions and benchmarks used in the model, we worked closely with data center and industry experts, including operators at Fortune 500 companies, and took a conservative view of the potential cost savings of the Ericsson hyperscale platform. For example, wherever possible, we have assumed a conservative basis for calculations, such as using discounted “market” prices for hardware and other components.

MORE CAPACITY…

Assuming equivalent hardware, Ericsson Hyperscale Datacenter System 8000 can deliver double the compute capacity of a traditional data center.

AND LESS TCO…

A 25% improvement in data center asset utilization can translate into a 10% TCO savings.

[Figure 13 data: the current (non-HDS) TCO of $63,565,634 is reduced by hardware capital cost savings of $19,878,060, software capital cost savings of $1,326,353, hardware support cost savings of $476,176, software support savings of $84,887, system administration effort savings of $16,500,000, and power, cooling and space cost savings of $3,953,774, and offset by an HDS investment of $2,205,000, yielding an Ericsson platform TCO of $23,551,384. Note: the Ericsson Hyperscale Datacenter System 8000 is abbreviated as “HDS” in this figure.]

[Figure 14 data: savings by cost category are hardware capital 47%, system administration 39%, power, cooling and space 10%, software capital 3%, hardware support 1%, and software support <1%.]
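As a consistency check, the dollar figures reported in Figure 13 can be re-added to confirm the resulting Ericsson platform TCO, the roughly 63% TCO savings quoted earlier, and the category shares shown in Figure 14. The sketch below uses only the published figures.

```python
# Cross-check of Figures 13 and 14 using the dollar figures published above.

current_tco = 63_565_634  # 5-year TCO of the non-HDS, 60-rack data center

savings = {
    "Hardware capital": 19_878_060,
    "Software capital": 1_326_353,
    "Hardware support": 476_176,
    "Software support": 84_887,
    "System administration": 16_500_000,
    "Power, cooling and space": 3_953_774,
}
hds_investment = 2_205_000

total_savings = sum(savings.values())
ericsson_tco = current_tco - total_savings + hds_investment
tco_savings_pct = (current_tco - ericsson_tco) / current_tco

print(f"Total savings:         ${total_savings:,}")       # $42,219,250
print(f"Ericsson platform TCO: ${ericsson_tco:,}")        # $23,551,384
print(f"TCO savings:           {tco_savings_pct:.0%}")    # ~63%

# Figure 14's category shares are each category's slice of total savings.
for name, value in savings.items():
    print(f"{name}: {value / total_savings:.0%}")
```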


For the same reason, we chose not to include the potential revenue or strategic benefits that might result from providing the enterprise with greater agility, speed or environmental sustainability, as outlined in the previous section. Although these benefits could be substantial, even exceeding the other categories of benefits, we judged them to be too variable and unpredictable to include in the TCO model.

The relative percentage of virtualized vs. non-virtualized (or bare metal) workloads can vary on a case by case basis. We chose to present a conservative view of the impact by assuming that 75% of the workloads had already been virtualized, thus using a relatively efficient baseline for our analysis. Based on our research, we set the average CPU utilization in non-virtualized environments at 15%, and virtualized environments at 30%. Because of the Ericsson platform’s superior resource pooling and disaggregation capabilities, its utilization rate was set at 60%, which is consistent with performance levels achieved at leading hyperscale data centers.

The TCO analysis covers a 5-year period and factors in the depreciated costs of the infrastructure. The following CAPEX elements were considered: server hardware, storage hardware, network hardware, cable hardware, physical deployment effort, virtualization software, and management software. The following OPEX elements were considered: hardware support, software support, HDS Command Center software support, system administration, power and cooling, and space utilization.
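For readers who want to adapt the model to their own environment, the skeleton below mirrors the CAPEX and OPEX elements listed above. The element names come from this section, while the zero placeholder values and the simple straight-line treatment of capital costs are our own simplifying assumptions rather than the paper’s exact method.

```python
# Minimal skeleton of the 5-year TCO structure described above. The element
# names mirror this section; all dollar values are zero placeholders, and the
# "sum CAPEX once, sum OPEX per year" treatment is our simplification.

from dataclasses import dataclass, field

YEARS = 5

def _capex_elements() -> dict:
    return {
        "server_hardware": 0.0, "storage_hardware": 0.0, "network_hardware": 0.0,
        "cable_hardware": 0.0, "physical_deployment": 0.0,
        "virtualization_software": 0.0, "management_software": 0.0,
    }

def _opex_elements_per_year() -> dict:
    return {
        "hardware_support": 0.0, "software_support": 0.0,
        "command_center_support": 0.0, "system_administration": 0.0,
        "power_and_cooling": 0.0, "space": 0.0,
    }

@dataclass
class TcoScenario:
    capex: dict = field(default_factory=_capex_elements)
    opex_per_year: dict = field(default_factory=_opex_elements_per_year)

    def five_year_tco(self) -> float:
        return sum(self.capex.values()) + YEARS * sum(self.opex_per_year.values())

# Usage: fill in one TcoScenario for the current environment and one for the
# HDS scenario, then compare their five_year_tco() results.
```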

Our analysis assumed a greenfield approach to implementing the Ericsson platform. In our model, the compute-storage-network environment is completely converted to the Ericsson hyperscale platform on Day One and compared to a traditional, non-hyperscale operating model. Since the infrastructure is assumed to be fully-depreciated, there is no time-lag or phasing of impacts. A phased approach model would need to be created for cases that are not greenfield implementations. Our recommendation is to use the framework, benchmarks and research provided in this paper as a “starter kit” to develop a custom ROI analysis that takes into account the unique features and opportunities of your individual data center.

THE JOURNEY TO HYPERSCALE

For most enterprise IT organizations, moving to an Ericsson hyperscale platform won’t happen overnight. In reality, it will be more of a journey than a flip of the switch. Enterprise data center management will be tasked with not only transforming the physical architecture of the data center, but also changing work processes and job descriptions in order to maximize TCO savings — a process that will likely take months, if not years.

EARLY RESULTS: GOING HYPERSCALE AT ERICSSON’S STATE-OF-THE-ART DATA CENTER

Ericsson is committed to being the first and best customer for its own hyperscale data center platform. As part of its 2016 IT planning efforts, Ericsson has prioritized its engineering systems infrastructure as the best place to start. In our interviews, IT executives explained that the Ericsson Hyperscale Datacenter System 8000 solution was particularly well suited to the engineering systems environment given the very high CPU demand of its application workloads.

Executives said that the platform’s HDS Command Center will help the IT team more effectively manage the infrastructure and meet the unique requirements of its engineering applications. With Ericsson’s hyperscale platform, the IT team is expected to achieve an estimated 400% improvement in system utilization across more than 30,000 servers.

The team expects to launch the hyperscale project in 2016 and then expand the platform’s capacity over several years. Results from the project will be analyzed and shared widely within the company to help improve the product for internal users as well as Ericsson’s customers.


In most cases, enterprises will be best served by following an incremental approach to hyperscale transformation. That could mean starting with a pilot on a selected workload in order to demonstrate the benefits of pooling resources and delivering “infrastructure on tap.” Another initial step, which Ericsson itself took internally, is to leverage the audit capabilities of the HDS Command Center to gain full visibility into the utilization of the existing infrastructure and find dormant or shadow infrastructure that may be unknown or poorly leveraged. This activity can provide an early return by eliminating dormant or underproductive assets.

After successfully proving the platform’s capabilities in the pilot, most organizations will be ready to plan for a wider rollout. Ericsson, for example, decided to leverage the visibility gained from its HDS Command Center audit to identify areas of the data center that could benefit most from the hyperscale platform and then start the deployment there (See callout: “Early Results: Going Hyperscale at Ericsson’s State-of-the-Art Data Center”).

Capital depreciation schedules will likely influence the pace of change, since most companies will want to avoid write-offs of existing equipment and prefer to phase in the hyperscale platform only when legacy hardware has been fully depreciated. This financial constraint would suggest a three- to five-year journey for most enterprises.

Considering the comprehensiveness of Ericsson’s hyperscale platform and its potential for transforming the business, companies should take advantage of Ericsson’s experts in their planning efforts. This could include the development of a multi-year business case, a phased implementation approach, and a technical design evaluation that explores the range of opportunities for leveraging this next-generation data center architecture.

About This Study

Research and analysis for this study was conducted by Mainstay, the leading IT management consulting firm focused on quantifying and communicating the business value of technology. This study was based on reviews of primary research and interviews with business and IT executives at Ericsson and numerous industry leaders, including experts from one of the world’s largest systems integrators; a data center architect with experience in server and storage virtualization, provisioning and networking, and storage management; and an executive from a leading gaming and entertainment company with experience in data center virtualization and performance. TCO and ROI calculations use industry standard assumptions regarding the time value of money. For more information about the study’s methodology and model assumptions, see Figure 15 in the Appendix. Information contained in the publication has been obtained from sources considered reliable, but is not warranted by Mainstay.

About Mainstay

Research and analysis for this study was conducted by Mainstay, a leader in quantifying and communicating the value of business and technology investments. Mainstay serves as an independent advisor to eight of the top ten global high-tech companies.


Appendix

Figure 15. Comparison of non-HDS and HDS environments and cost calculation methodologies

(Note: the Ericsson Hyperscale Datacenter System 8000 is abbreviated as “HDS” in this table.)

Non-HDS Servers HDS Servers

Non-HDS environment has 60 racks, which have 900RU/servers (and 3600 ports). Price per server is multiplied by total number of servers and depreciated. During refresh cycle, 20% of the server hardware can be reused, therefore 80% of new server cost is applied to year 5.

CPU utilization improves by 2x for virtualized server workloads, from 30% to 60% utilization. For non-virtualized workloads the improvement is 4X, going from 15% to 60% utilization. The weighted average improvement is 2.5X for the 75% scenario modeled in this analysis. The 900 legacy servers are reduced to 394RU.

For physical deployment efforts (racking/stacking), we considered 2 FTE x 10 hrs per server + installation + other hardware and software deployment activities (e.g., testing: 2 FTE x 70 hours per physical server). These costs are depreciated in the model.

Physical deployment of HDS requires 2 FTE x 5 hrs per server. Installation plus other hardware and software deployment including testing is estimated at 2 FTE x 40 hours per physical server.

Non-HDS Storage HDS Storage

Centralized storage, 7.2k HDD (72 TB / 1 RU), 250W
Centralized storage, 10k HDD (24 TB / 1 RU), 250W
Centralized storage, SSD (40 TB / 1 RU), 250W
Total of 230 RU make up 10 PB (i.e., 130 RU of SSD + 50 RU of 10k + 50 RU of 7.2k)

Assuming that 70% of this storage environment is unstructured data, HDS software-defined storage provides 50% reduction in replication of unstructured storage, resulting in a 35% savings.

Non-HDS Network HDS Network

96 port fabric interconnect switches, 950W. Total of 80 switches, with an additional 8 routers for inter-rack routing.

44 Ericsson leaf switches, 800W per switch plus EAM.

Non-HDS Cables HDS Cables

Standard 10G electrical copper cables:
Compute-Network: assumes 4 10G ports are connected; 90U x 4 ports x price per cable
Storage-Network: 23U x 2 ports x price per cable
Intra-Network: 8U x 2 ports x price per cable

Ericsson optical cabling:
CSU-NSU: per-server price per cable, assuming 4 10G ports are connected; 394U x 4 ports x price per cable
SSU-NSU: 150U x 2 ports x price per cable
NSU-NSU: 44U x 1.5 ports x price per cable

Virtualization Software: Non-HDS Virtualization Software: HDS

Virtualization/Hypervisor Software (75% of the environment is virtualized x 900 physical servers) priced per CPU. 75% of environment x 900 servers x 2 CPUs.

Number of servers reduced to 394RU; this expense is reduced by fewer CPUs.

Management and Orchestration: Non-HDS Management and Orchestration: HDS

Assuming 15 virtual servers per physical server. Calculated at price per virtual machine.

Expenses do not change (enterprise needs for the number of virtual servers does not change). Note: HDS Command Center costs are listed under OPEX.

Software and Hardware Support Costs: Non-HDS Software and Hardware Support Costs: HDS

Assumed at 8% per year (first year $0) for each hardware cost above (server, storage, network).

Same calculation as non-HDS. Additionally, HDS Command Center costs are considered at: price per box x (394U + 150U + 44U) per year.

Systems Administration: Non-HDS Systems Administration: HDS

1 FTE per 300 virtual servers. This represents server, storage and network administration composite resource estimation.

Ratio is increased to 5000 virtual servers per FTE.

Power, Cooling & Space: Non-HDS Power, Cooling & Space: HDS

Standard cost of power of 10 cents per kWh; 8,736 total available hours per year x total wattage of servers, storage and network. For space, each rack occupies 30 sq. ft. Data center space cost is $310 per sq. ft. Assumes 60 racks are occupied.

Same calculation method; less server hardware and a lower total number of racks reduce OPEX expenses for power, cooling and space.

Sources for benchmarks: Recent research from Mainstay and Ericsson data center analysts and research publications from Gartner, VMWare, Uptime Institute and IDC.
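The power, cooling and space row of Figure 15 translates into the short calculation below. The unit costs come from the table, while the per-server wattage (and the decision to omit router power) are placeholder assumptions, so the output illustrates the method rather than reproducing the paper’s totals.

```python
# Sketch of the power, cooling and space OPEX method from Figure 15.
# Unit costs come from the table ($0.10 per kWh, 8,736 hours per year,
# 30 sq. ft. per rack at $310 per sq. ft.); the per-server wattage below is
# an assumed placeholder, not a figure from the paper.

POWER_COST_PER_KWH = 0.10
HOURS_PER_YEAR = 8_736
SQFT_PER_RACK = 30
COST_PER_SQFT = 310

def annual_power_cost(total_watts: float) -> float:
    """Energy cost of running the listed equipment for a full year."""
    return total_watts / 1_000.0 * HOURS_PER_YEAR * POWER_COST_PER_KWH

def space_cost(racks: int) -> float:
    """Facility space cost at the table's per-square-foot rate."""
    return racks * SQFT_PER_RACK * COST_PER_SQFT

# Non-HDS example: 900 servers at an assumed 350 W each, 230 RU of storage at
# 250 W (from the table), and 80 switches at 950 W (from the table).
non_hds_watts = 900 * 350 + 230 * 250 + 80 * 950
print(f"Non-HDS power cost: ${annual_power_cost(non_hds_watts):,.0f} per year")
print(f"Non-HDS space cost: ${space_cost(60):,.0f}")
# The HDS side uses the same formulas with 394 servers, 44 switches at 800 W,
# reduced storage RU, and fewer occupied racks.
```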


Endnotes

1 Worldwide mobile data traffic is projected to grow at a compound annual rate of 57% for the next four years, reaching a throughput of 24.3 exabytes per month by 2019. (Source: Statistica, “Global data center IP traffic in exabytes per year,” 2015.)

2 “Ericsson Mobility Report: On the pulse of the Networked Society,” Ericsson, November 2015.

3 “The enterprise IT infrastructure agenda for 2014,” McKinsey & Co., 2014.

4 “Strategic Benchmarks 2013: IT Infrastructure,” Forrester Research, May 2013.

5 Multiple sources: “The Economics of Virtualization: Moving Toward an Application-Based Cost Model,” IDC, November 2009; “Chapter 20: Data Center IT Efficiency Measures,” National Renewable Energy Laboratory, US Department of Energy, January 2015.

6 Intel Rack Scale Architecture is a set of architecture guidelines and principles that are being implemented by Ericsson and other vendors.

7 Based on interviews with leading SI data center experts, enterprise data center management, and publicly available Google and Facebook benchmarks (see list of sources below). Administrators may choose to maintain some critical workloads at lower averages to manage spikes, while maintaining higher rates for non-critical workloads. (Our research found that >65% utilization is possible for certain workloads.)

http://www.epanorama.net/newepa/2011/04/10/facebook-datacenter-secrets/comment-page-1/

http://www.datacenterdynamics.com/content-tracks/servers-storage/how-facebook-deals-with-constant-change/83715.fullarticle

http://www.singjupost.com/facebook-fb-presents-credit-suisse-2014-annual-technology-conference-transcript/

http://www.vmware.com/solutions/consolidation.html

http://www.slideshare.net/dellenterprise/forrester-the-economic-impact-of-server-virtualization

http://ieeexplore.ieee.org/document/6529276/?reload=true

http://www.slideshare.net/sgurnam73/server-virtualization-by-vmware

https://www.linkedin.com/pulse/grace-hopper-2015-jade-chu?articleId=6062981612944056320

https://turbonomic.com/blog/on-turbonomic/optimal-cpu-utilization-depends/

8 Based on the diversity of workloads in data centers, we are assuming (based on available research) that a best-in-class data center may have as high as 60–75% of its workloads virtualized today. To be conservative, the analysis assumed 75% of the workloads are virtualized, leading to a 69% improvement in server utilization across the data center.

9 Given that the solution has not been released commercially at the time of this paper, all quantifications of impact are estimates based on available research and assumptions made in concert with the experts interviewed.

10 These savings are dependent on enterprise data center operators transforming the people, process and technology practices as described in this piece.

11 Assumes $0.10 per kWh at 8,736 available hours per year; wattage per component (storage, network, server) estimated based on available benchmarks; assumes each rack occupies 30 sq. ft. of data center space at $310 per sq. ft.

12 Data center infrastructure TCO includes software license costs and annual support costs for operating system software, virtualization software, disaster recovery software, and the like.

13 This is due to two major factors: (1) as the complexity and requirements of the network increase, the costs of the switching environment grow exponentially; (2) the fabric-based network environment, coupled with the fiber-based racking environment, allows for more flexibility in how network switches are physically deployed into the racking environment. Due to electrical connection limitations, typical top-of-rack switches are required to manage very dense environments. With the Ericsson hyperscale platform, these physical switches can be better utilized because of the distance and low-latency capabilities of optical connections.

14 McKinsey Global Institute, June 2015 survey.

15 Cost savings estimates are based on full adoption of the Ericsson Hyperscale Datacenter System 8000 solution as well as implementation of the new organizational best practices discussed earlier, such as holistic infrastructure planning that reduces hardware overprovisioning.

16 CAPEX includes: server, storage, network, cabling, and power and cooling equipment hardware costs, plus virtualization and orchestration software license costs.

17 OPEX includes: server, storage, and network hardware support costs; virtualization and orchestration software support costs; system administration labor costs; and power, cooling and space costs.

Mainstay
764 Valderrama Ct.
Castle Rock, CO 80108
www.mainstaycompany.com
p. 650.638.0575  f. 800.638.0526