
Proven Infrastructure Guide

EMC VSPEX PRIVATE CLOUD Microsoft Windows Server 2012 R2 with Hyper-V for up to 700 Virtual Machines Enabled by EMC XtremIO and EMC Data Protection

EMC VSPEX

Abstract

This Proven Infrastructure Guide describes the EMC® VSPEX® Proven Infrastructure solution for private cloud deployments with Microsoft Windows Server 2012 R2 with Hyper-V and EMC XtremIO™ all-flash array technology.

July 2015


Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.

Published July 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX Private Cloud: Microsoft Windows 2012 R2 with Hyper-V for up to 700 Virtual Machines Enabled by EMC XtremIO and EMC Data Protection Proven Infrastructure Guide

Part Number: H14157.1

Contents

Chapter 1 Executive Summary 10

Introduction ............................................................................................................. 11

Target audience ........................................................................................................ 11

Document purpose ................................................................................................... 11

Business benefits ..................................................................................................... 12

Chapter 2 Solution Overview 13

Introduction ............................................................................................................. 14

Virtualization ............................................................................................................ 14

Private cloud foundation ...................................................................................... 14

Compute .................................................................................................................. 14

Network .................................................................................................................... 15

Storage ..................................................................................................................... 15

Challenges ........................................................................................................... 15

Scalability ............................................................................................................ 16

Operational agility ............................................................................................... 16

Deduplication ...................................................................................................... 17

Thin provisioning ................................................................................................. 17

Data protection .................................................................................................... 17

Microsoft ODX support ......................................................................................... 17

EMC ViPR integration ........................................................................................... 17

API support .......................................................................................................... 18

Benefits of using XtremIO .................................................................................... 18

Chapter 3 Solution Technology Overview 19

Overview .................................................................................................................. 20

VSPEX Proven Infrastructures ................................................................................... 20

Key components ....................................................................................................... 22

Virtualization layer ................................................................................................... 22

Overview .............................................................................................................. 22

Microsoft Hyper-V ................................................................................................ 23

Virtual Fibre Channel ports ................................................................................... 23

Microsoft System Center Virtual Machine Manager .............................................. 23

High availability with Hyper-V Failover Clustering ................................................. 23

Hyper-V Replica ................................................................................................... 24


Cluster-Aware Updating ....................................................................................... 24

EMC Storage Integrator for Windows Suite ........................................................... 25

Compute layer .......................................................................................................... 25

Network layer ........................................................................................................... 27

Storage layer ............................................................................................................ 28

EMC XtremIO ........................................................................................................ 28

EMC Data Protection ................................................................................................. 30

Overview .............................................................................................................. 30

EMC Avamar deduplication .................................................................................. 31

EMC Data Domain deduplication storage systems ............................................... 31

EMC RecoverPoint ................................................................................................ 31

Other technologies ................................................................................................... 31

Overview .............................................................................................................. 31

EMC PowerPath .................................................................................................... 31

EMC ViPR Controller ............................................................................................. 32

Public-key infrastructure ...................................................................................... 32

Chapter 4 Solution Architecture Overview 33

Overview .................................................................................................................. 34

Solution architecture ................................................................................................ 34

Overview .............................................................................................................. 34

Logical architecture ............................................................................................. 34

Key components .................................................................................................. 35

Hardware resources ............................................................................................. 37

Software resources .............................................................................................. 38

Server configuration guidelines ................................................................................ 39

Overview .............................................................................................................. 39

Intel Ivy Bridge updates ....................................................................................... 39

Hyper-V memory virtualization ............................................................................. 40

Memory configuration guidelines ......................................................................... 42

Network configuration guidelines ............................................................................. 42

Overview .............................................................................................................. 42

VLANs .................................................................................................................. 43

Enable jumbo frames (for iSCSI) .......................................................................... 44

Storage configuration guidelines .............................................................................. 44

Overview .............................................................................................................. 44

XtremIO X-Brick scalability ................................................................................... 44

Hyper-V storage virtualization .............................................................................. 46

VSPEX storage building blocks ............................................................................. 47


High availability and failover ................................................................... 48

Overview .............................................................................................................. 48

Virtualization layer ............................................................................................... 48

Compute layer ..................................................................................................... 48

Network layer ....................................................................................................... 49

Storage layer ....................................................................................................... 50

XtremIO Data Protection....................................................................................... 50

Backup and recovery configuration guidelines.......................................................... 50

Chapter 5 Environment Sizing 51

Overview .................................................................................................................. 52

Reference workload .................................................................................................. 52

Overview .............................................................................................................. 52

Defining the reference workload .......................................................................... 52

Scaling out ............................................................................................................... 53

Reference workload application................................................................................ 53

Overview .............................................................................................................. 53

Example 1: Custom-built application ................................................................... 53

Example 2: Point-of-sale system .......................................................................... 54

Example 3: Web server ........................................................................................ 54

Example 4: Decision-support database ............................................................... 54

Summary of examples ......................................................................................... 55

Quick assessment .................................................................................................... 55

Overview .............................................................................................................. 55

CPU requirements ................................................................................................ 56

Memory requirements .......................................................................................... 56

Storage performance requirements ...................................................................... 56

IOPS .................................................................................................................... 56

I/O size ................................................................................................................ 57

I/O latency ........................................................................................................... 57

Unique data ......................................................................................................... 57

Storage capacity requirements ............................................................................ 58

Determining equivalent reference virtual machines ............................................. 58

Fine-tuning hardware resources ........................................................................... 61

EMC VSPEX Sizing Tool ........................................................................................ 63

Chapter 6 VSPEX Solution Implementation 64

Overview .................................................................................................................. 65

Pre-deployment tasks ............................................................................................... 65

Deployment resources checklist .......................................................................... 66


Customer configuration data ................................................................................ 67

Network implementation .......................................................................................... 67

Preparing the network switches ........................................................................... 67

Configuring the infrastructure network ................................................................. 67

Configuring VLANs ............................................................................................... 68

Configuring jumbo frames (iSCSI only) ................................................................. 69

Completing network cabling ................................................................................ 69

Microsoft Hyper-V hosts installation and configuration ............................................. 69

Overview .............................................................................................................. 69

Installing the Windows hosts ............................................................................... 69

Installing Hyper-V and configuring failover clustering ........................................... 70

Configuring Windows host networking ................................................................. 70

Installing and configuring Multipath software ...................................................... 70

Planning virtual machine memory allocations ...................................................... 70

Microsoft SQL Server database installation and configuration .................................. 71

Overview .............................................................................................................. 71

Creating a virtual machine for SQL Server ............................................................ 72

Installing Microsoft Windows on the virtual machine ........................................... 72

Installing SQL Server ............................................................................................ 72

Configuring SQL Server for SCVMM ...................................................................... 72

System Center Virtual Machine Manager server deployment ..................................... 73

Overview .............................................................................................................. 73

Creating a SCVMM host virtual machine ............................................................... 74

Installing the SCVMM guest OS ............................................................................ 74

Installing the SCVMM server ................................................................................ 74

Installing the SCVMM Admin Console .................................................................. 74

Installing the SCVMM agent locally on a host ....................................................... 74

Adding the Hyper-V cluster to SCVMM .................................................................. 74

Storage array preparation and configuration ............................................................. 74

Overview .............................................................................................................. 74

Configuring the XtremIO array .............................................................................. 75

Preparing the XtremIO array ................................................................................. 75

Setting up the initial XtremIO configuration ......................................................... 75

Creating the CSV disk .......................................................................................... 80

Creating a virtual machine in SCVMM................................................................... 80

Performing partition alignment ........................................................................... 80

Creating a template virtual machine..................................................................... 81

Deploying virtual machines from the template ..................................................... 81


Chapter 7 Solution Verification 82

Overview .................................................................................................................. 83

Post-installation checklist ........................................................................................ 84

Deploying and testing a single virtual machine ......................................................... 84

Verifying solution component redundancy ................................................................ 84

Chapter 8 System Monitoring 85

Overview .................................................................................................................. 86

Key areas to monitor ................................................................................................. 86

Performance baseline .......................................................................................... 86

Servers ................................................................................................................ 87

Networking .......................................................................................................... 87

Storage ................................................................................................................ 88

XtremIO resource monitoring guidelines ................................................................... 88

Monitoring the storage......................................................................................... 88

Monitoring the performance ................................................................................ 90

Monitoring the hardware elements ...................................................................... 91

Using advanced monitoring ................................................................................. 93

Appendix A Reference Documentation 95

EMC documentation ................................................................................................. 96

Other documentation ............................................................................................... 96

Appendix B Customer Configuration Worksheet 98

Customer configuration worksheet ........................................................................... 99

Appendix C Server Resource Component Worksheet 101

Server resources component worksheet ................................................................. 102

Figures

Figure 1. I/O randomization brought by server virtualization .............................. 15

Figure 2. VSPEX Proven Infrastructures .............................................................. 21

Figure 3. Compute layer flexibility examples ...................................................... 26

Figure 4. Example of highly available network design ........................................ 27

Figure 5. Logical architecture for the solution ..................................................... 35

Figure 6. Hypervisor memory consumption ........................................................ 41

Figure 7. Required networks for XtremIO storage ................................................ 43

Figure 8. Single X-Brick XtremIO storage ............................................................ 44

Figure 9. Cluster configuration as single and multiple X-Brick clusters ............... 45

Figure 10. Hyper-V virtual disk types .................................................................... 46


Figure 11. XtremIO Starter X-Brick building block for 300 virtual machines .......... 47

Figure 12. XtremIO single X-Brick building block for 700 virtual machines ........... 47

Figure 13. High availability at the virtualization layer ........................................... 48

Figure 14. Redundant power supplies .................................................................. 49

Figure 15. Network layer high availability ............................................................. 49

Figure 16. XtremIO high availability ..................................................................... 50

Figure 17. Resource pool flexibility ...................................................................... 55

Figure 18. Required resources from the RVM pool ................................................ 59

Figure 19. Aggregate resource requirements - Stage 2 .......................................... 61

Figure 20. Customizing server resources .............................................................. 62

Figure 21. Sample Ethernet network architecture ................................................. 68

Figure 22. XtremIO initiator group ........................................................................ 76

Figure 23. Adding volume .................................................................................... 77

Figure 24. Volume summary................................................................................. 78

Figure 25. Volumes and initiator group ................................................................ 79

Figure 26. Mapping volumes ................................................................................ 80

Figure 27. Monitoring the efficiency ..................................................................... 89

Figure 28. Volume capacity .................................................................................. 90

Figure 29. Physical capacity ................................................................................. 90

Figure 30. Monitoring the performance (IOPS)...................................................... 91

Figure 31. Data and management cable connectivity ........................................... 92

Figure 32. X-Brick properties ................................................................................ 92

Figure 33. Monitoring the SSDs ............................................................................ 93

Tables

Table 1. Solution hardware ............................................................................... 37

Table 2. Solution software ................................................................................ 38

Table 3. Hardware resources for the compute layer ........................................... 40

Table 4. XtremIO scalable scenarios with virtual machines ............................... 48

Table 5. VSPEX Private Cloud RVM workload ..................................................... 52

Table 6. Blank worksheet row ........................................................................... 56

Table 7. Reference virtual machine resources ................................................... 58

Table 8. Sample worksheet row ........................................................................ 59

Table 9. Example applications – Stage 1 ........................................................... 60

Table 10. Example applications – Stage 2 ............................................................ 60

Table 11. Server resource component totals ....................................................... 62

Table 12. Deployment process overview ............................................................. 65

Table 13. Pre-deployment tasks .......................................................................... 66


Table 14. Deployment resources checklist .......................................................... 66

Table 15. Tasks for switch and network configuration ......................................... 67

Table 16. Tasks for server installation ................................................................. 69

Table 17. Tasks for SQL Server database setup ................................................... 71

Table 18. Tasks for SCVMM configuration ........................................................... 73

Table 19. Tasks for XtremIO configuration ........................................................... 75

Table 20. Storage allocation for block data ......................................................... 79

Table 21. Testing the installation ........................................................................ 83

Table 22. Advanced monitor parameters ............................................................. 93

Table 23. Common server information ................................................................ 99

Table 24. Hyper-V server information ....................................................... 99

Table 25. X-Brick information .............................................................................. 99

Table 26. Network infrastructure information .................................................... 100

Table 27. VLAN information .............................................................................. 100

Table 28. Service accounts ............................................................................... 100

Table 30. Blank worksheet for server resource totals ........................................ 102


Chapter 1 Executive Summary

This chapter presents the following topics:

Introduction ............................................................................................................. 11

Target audience ....................................................................................................... 11

Document purpose ................................................................................................... 11

Business benefits..................................................................................................... 12


Introduction

Server virtualization has been a driving force in data center efficiency gains for the past decade. However, mixing multiple virtual machine workloads randomizes the I/O presented to the storage array, which stalls the virtualization of I/O-intensive workloads.

EMC® VSPEX® Proven Infrastructures are optimized for virtualizing business-critical applications. VSPEX provides modular solutions built with technologies that enable faster deployment, greater simplicity, greater choice, higher efficiency, and lower risk.

The VSPEX Private Cloud architecture provides your customers with a modern system capable of hosting many virtual machines at a consistent performance level. This solution runs on the Microsoft Windows Server 2012 R2 with Hyper-V virtualization layer backed by the highly available EMC XtremIO™ all-flash array family. The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of the virtual machine environment.

XtremIO effectively addresses the effects of virtualization on I/O-intensive workloads with impressive random I/O performance and consistent ultra-low latency. XtremIO also provides new levels of speed and provisioning agility to virtualized environments, with advanced data services that include space-efficient snapshots, inline data deduplication, and thin provisioning features.

Target audience

You must have the necessary training and background to install and configure Microsoft Hyper-V, the EMC XtremIO storage systems, and the associated infrastructure as required by this implementation. External references are provided where applicable, and you should be familiar with these documents.

You should also be familiar with the infrastructure and database security policies of the customer installation.

If you are a partner selling and sizing a Private Cloud for Microsoft Hyper-V infrastructure, you must pay particular attention to the first four chapters of this guide. After purchase, the implementers of the solution should focus on the configuration guidelines in Chapter 6, the solution verification in Chapter 7, and the appropriate references and appendices.

Document purpose

This guide includes an initial introduction to the VSPEX architecture, an explanation of how to modify the architecture for specific customer engagements, and instructions for effectively deploying and monitoring the system.

The EMC VSPEX Private Cloud for Microsoft Hyper-V solution for up to 700 virtual machines described in this guide is based on the XtremIO storage array and a defined reference workload. The guide describes the minimum server capacity required for CPU, memory, and network interfaces when sizing this solution. You can select server and networking hardware that meets or exceeds these minimum requirements.

A private cloud architecture is a complex system offering. This guide makes the solution setup easier by providing you with lists of prerequisite software and hardware materials, step-by-step sizing guidance and worksheets, and verified deployment steps. After all components have been installed and configured, verification tests and monitoring instructions ensure that the systems of your private cloud are operating properly. Follow the instructions in this guide to ensure an efficient and painless journey to the cloud.

Business benefits

VSPEX solutions are built with proven technologies to create complete virtualization solutions that enable you to make an informed decision about the hypervisor, server, network, and storage environment. The VSPEX Private Cloud for Microsoft Hyper-V reduces the complexity of configuring every component of a traditional deployment model.

The solution simplifies integration management while maintaining the application design and implementation options. It also provides unified administration while enabling adequate control and monitoring of process separation.

The business benefits of the VSPEX Private Cloud for Microsoft Hyper-V architecture include:

An end-to-end virtualization solution to effectively use the capabilities of the all-flash array infrastructure components

Efficient virtualization of up to 700 reference virtual machines (RVMs) for varied customer use cases

A reliable, flexible, and scalable reference design

Secure multitenancy services for both intra- and inter-company departments and organizations

Server consolidation from isolated resources to a shared, flexible resource model that further simplifies management

A single environment to run mixed workloads and tiered applications

An extendable platform to provide complete self-service portal functionality to users

Optional implementation of the Federation Enterprise Hybrid Cloud offering on this platform to provide full-service cloud functionality

Optional integration with DevOps and orchestration tools, such as Docker orchestration, to simplify management and maintenance of the cloud platform


Chapter 2 Solution Overview

This chapter presents the following topics:

Introduction ............................................................................................................. 14

Virtualization ........................................................................................................... 14

Compute .................................................................................................................. 14

Network ................................................................................................................... 15

Storage .................................................................................................................... 15


Introduction

The VSPEX Private Cloud for Microsoft Hyper-V solution provides a complete cloud-enabled system architecture capable of supporting up to 700 RVMs with redundant server and network topologies and highly available storage. The core components that make up this solution are virtualization, compute, network, and storage.

Virtualization

Microsoft Hyper-V is a key virtualization platform. It provides flexibility and cost savings by enabling you to consolidate large, inefficient siloed server farms into nimble, reliable cloud infrastructures.

Features such as Live Migration, which enables a virtual machine to move between different servers with no disruption to the guest operating system, and Dynamic Optimization, which performs live migrations automatically to balance loads, make Hyper-V a solid business choice.

With the release of Windows Server 2012 R2, a Microsoft virtualized environment can host virtual machines with up to 64 virtual CPUs and 1 TB of virtual random access memory (RAM).
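As a simple illustration (the virtual machine name is a placeholder, not part of the validated solution, and the virtual machine must be powered off when these values are changed), a large virtual machine could be configured near those limits with Windows PowerShell:

# Illustration only: configure a VM near the Windows Server 2012 R2 Hyper-V
# per-VM maximums. "SQLVM01" is a hypothetical virtual machine name.
Set-VMProcessor -VMName "SQLVM01" -Count 64
Set-VMMemory -VMName "SQLVM01" -DynamicMemoryEnabled $false -StartupBytes 1TB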

Private cloud foundation

Cloud computing is the next logical progression from virtualization and is becoming mainstream in a modern data center. Cloud computing provides a hardware and software platform that is flexible in how users perceive and operate within the environment.

This VSPEX reference architecture provides a validated way to deploy a private-cloud environment with a known level of performance and availability. In a private cloud environment, organizations manage their virtual-machine environment internally. Virtual machines can be moved seamlessly throughout the private cloud platform.

The platform can be extended to offer multitenancy by adding additional software components. Full self-service provisioning, complete with chargeback, cost control, and workflow automation, can also be layered on top.

The platform can be further extended to offer hybrid cloud services, which enable virtual machines to run locally in the private cloud or remotely in a service provider’s public-cloud environment. Virtual machines can be moved between the two physical platforms without interruption of services. The VSPEX reference architecture serves as the core pillar for all of these services.

Compute

VSPEX provides the flexibility to design and implement a customer’s choice of server components. The infrastructure must have sufficient:

CPU cores and memory to support the required number and types of virtual machines

Network connections to enable redundant connectivity to the network switches


Capacity to enable the environment to withstand a server failure and fail over workloads within the environment

Network

VSPEX provides the flexibility to design and implement a customer’s choice of network components. The infrastructure must provide:

Redundant network links for the hosts, switches, and storage

Traffic isolation based on industry-accepted best practices

Support for link aggregation

Network switches with a minimum non-blocking backplane capacity that is sufficient for the target number of virtual machines and their associated workloads. EMC recommends enterprise-class network switches with advanced features such as quality of service.
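As one hedged illustration of these requirements (adapter, switch, VM, and VLAN values are placeholders; the actual design is defined by the VSPEX partner), redundant links and traffic isolation can be implemented with native Windows Server 2012 R2 NIC teaming:

# Sketch: team two physical adapters for redundancy, bind a Hyper-V virtual
# switch to the team, and isolate a VM's traffic on a VLAN. Names are examples.
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
New-VMSwitch -Name "VM-Data" -NetAdapterName "VMTeam" -AllowManagementOS $false
Set-VMNetworkAdapterVlan -VMName "App01" -Access -VlanId 100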

Storage

Challenges

Virtualization

In highly virtualized environments, where a large number of virtual machines run on a cluster of servers that share a common storage pool, the I/O requests from all the disparate virtual machines reach the storage in a randomized stream, as shown in Figure 1.

Traditional storage architectures cannot handle these highly random I/O requests and introduce unacceptable application and virtual-machine latency. This effect is known as the “I/O blender”.

Figure 1. I/O randomization brought by server virtualization

Storage efficiency challenges

The challenge for all-flash arrays is that their high I/O performance alone can often be insufficient for virtualized environments; additional technologies that drive high storage efficiencies are also required. Storage efficiency is important because storage infrastructure acquisition and operations costs are among the top challenges of cloud-based virtual machine environments.

To achieve storage efficiency, customers must maximize the use of available storage capacity and processing resources, which often compete with each other. Storage efficiency is key to enabling the promise of elastic scalability, pay-as-you-grow efficiency, and a predictable cost structure, while increasing productivity and innovation. Technologies such as data compression and deduplication are key enablers of efficiency from a capacity standpoint, while simple, insightful management tools reduce management complexity. Resiliency and availability features, especially if enabled by default, further increase efficiency.

While storage efficiency is important, in a private cloud environment, many disparate virtual machines with vastly different performance profiles and criticality are typically consolidated. Customers need a storage platform that can fulfill the performance demands, enhance storage efficiencies by reducing the data footprint, and enable agile provisioning and management of service delivery.

Scalability

An agile, virtualized infrastructure must also scale in the multiple dimensions of performance, capacity, and operations. It must have the ability to scale efficiently, without sacrificing performance and resiliency, or requiring additional IT resources to manage the environment.

Operational agility

Agility is a major reason why organizations choose to virtualize their infrastructures. However, IT responsiveness often slows dramatically as virtual environments grow. IT teams are typically unable to deploy or service resources quickly enough to meet rapidly changing business requirements. Bottlenecks occur because organizations do not have the right tools to quickly determine the capacity and health of their physical and virtual resources.

While enterprise users want responsive deployment of business applications to meet changing business requirements, the enterprise is often unable to rapidly deploy or update virtual machines and storage on a large scale. Standard virtual machine provisioning and cloning methods, as commonly implemented in flash arrays, can be expensive, because full copies of virtual machines can require 50 GB or more of storage for each copy.

In a large-scale cloud data center, when shared storage may be cloning up to hundreds of virtual machines each hour while concurrently delivering I/O to active virtual machines, cloning can become a major bottleneck for data center performance and operational efficiency.

Most storage arrays are designed to be statically installed and run, yet virtualized application environments are naturally dynamic and variable. Change and growth of virtualized workloads cause organizations to actively redistribute workloads across storage-array resources for load balancing, to avoid running out of space or degrading performance. This ongoing load balancing is usually a manual, iterative task that is often costly and time-consuming. As a result, storage arrays that support large-scale virtualization environments require optimal, inherent data placement to ensure maximum utilization of both capacity and performance without any planning demands.

Deduplication

Storage arrays can accumulate duplicate data over time, which increases management and other costs. In particular, large-scale virtual machine environments create large amounts of duplicate data when virtual machines are deployed by cloning existing virtual machines, or when the same OS and applications are installed.

Deduplication eliminates duplicate data by replacing it with pointers to unique instances of the data. This deduplication process can be implemented after I/O has been de-staged to disks, or it can be done in real time, which actively reduces the amount of redundant data written to the array.
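As a rough host-side illustration of the concept only (XtremIO performs deduplication inline in the array, not on the host, and the file path is a placeholder), the following PowerShell sketch estimates the share of duplicate 4 KB blocks in a virtual disk by hashing each block:

# Illustration only: count unique 4 KB blocks in a file by content hash.
# Fewer unique blocks means more data a deduplicating array could eliminate.
$path = "C:\ClusterStorage\Volume1\Template.vhdx"   # hypothetical file
$sha = [System.Security.Cryptography.SHA1]::Create()
$seen = @{}
$total = 0
$stream = [System.IO.File]::OpenRead($path)
$buffer = New-Object byte[] 4096
while (($read = $stream.Read($buffer, 0, $buffer.Length)) -gt 0) {
    $hash = [System.BitConverter]::ToString($sha.ComputeHash($buffer, 0, $read))
    $seen[$hash] = $true
    $total++
}
$stream.Close()
"{0} unique blocks out of {1}" -f $seen.Count, $total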

Thin provisioning

Thin provisioning is a popular technique for improving storage utilization: storage capacity is consumed only when data is written, not when storage volumes are provisioned.

Thin provisioning removes the need for overprovisioning storage up front to meet anticipated future capacity demands and enables you to allocate storage on-demand from an available storage pool.
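The same on-demand principle applies at the hypervisor layer. A minimal sketch, with an example path and size:

# Sketch: a dynamically expanding VHDX consumes physical space only as data is
# written, mirroring the thin provisioning model described above.
New-VHD -Path "C:\ClusterStorage\Volume1\App01.vhdx" -SizeBytes 100GB -Dynamic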

Data protection

While storage arrays have traditionally supported several RAID data protection levels, the arrays required that storage administrators choose between data protection and performance for specific workloads. The challenge for large-scale virtual environments is the shared storage system that stores data for hundreds or thousands of virtual machines with different workloads.

Optimal data protection for virtualized environments requires arrays that support data protection schemes combining the best attributes of existing RAID levels while avoiding their drawbacks. Because flash endurance is a special consideration in an all-flash array, such a scheme must also maximize the service life of the array’s solid-state drives (SSDs) while complementing the high I/O performance of flash media.

Microsoft ODX support

XtremIO 4.0, in beta at the time of publication of this guide, supports Microsoft Offloaded Data Transfer (ODX) technology, which offloads intra-array data movement requests to the array itself. This frees up compute and network resources and reduces response times to data-transfer requests, which can drastically reduce virtual machine provisioning and snapshot creation times.

For additional information on ODX, refer to the Microsoft Windows Dev Center Library topic Offloaded data transfers.
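On a Windows Server 2012 R2 host, a quick way to confirm that ODX has not been disabled is to inspect the FilterSupportedFeaturesMode registry value, where 0 (the default) means ODX is enabled; a sketch:

# Sketch: check whether ODX is enabled on the host (0 = enabled, 1 = disabled).
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" |
    Select-Object FilterSupportedFeaturesMode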

EMC ViPR integration

EMC ViPR® integrates with Microsoft System Center Virtual Machine Manager (SCVMM) and Orchestrator APIs to simplify storage management and reduce the need for multiple management tools to address common management tasks. Using ViPR, storage provisioning and management can be done within SCVMM, and common tasks can be done within Orchestrator.


API support

RESTful API support exposes advanced functionality of the XtremIO 4.0 storage resources for customized workflows and for self-service portal development and integration without heavy coding effort. This API support gives orchestration architects and developers access to a wide range of features without having to develop cumbersome wrappers or one-off drivers.
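As an illustrative sketch of this style of integration, the following PowerShell fragment lists volumes through the XMS RESTful interface. The XMS host name and credentials are placeholders, and the endpoint path is an assumption based on the XtremIO 4.0 (v2) API layout; consult the EMC XtremIO RESTful API guide for the authoritative schema.

# Sketch: query XtremIO volumes through the XMS RESTful API.
# Host name and credentials are hypothetical; a trusted XMS certificate is assumed.
$cred = Get-Credential
Invoke-RestMethod -Uri "https://xms01.example.com/api/json/v2/types/volumes" `
    -Credential $cred -Method Get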

Benefits of using XtremIO

To meet the multiple demands of a large-scale virtualized data center, you need a storage solution that provides superb performance and capacity scale-out to accommodate infrastructure growth, along with:

Built-in data reduction features

Thin provisioning for capacity efficiency and cost mitigation

Flash-optimized data protection techniques

Near-instantaneous virtual machine provisioning and cloning

Automated load-balancing

Integration with key monitoring and orchestration tools

Consistent, predictable, highly random I/O performance

The XtremIO all-flash array is built to unlock the full performance potential of flash storage and to deliver array-based inline data services that make it an optimal storage solution for large-scale, agile, and dynamic virtual environments.


Chapter 3 Solution Technology Overview

This chapter presents the following topics:

Overview .................................................................................................................. 20

VSPEX Proven Infrastructures................................................................................... 20

Key components ...................................................................................................... 22

Virtualization layer ................................................................................................... 22

Compute layer .......................................................................................................... 25

Network layer ........................................................................................................... 27

Storage layer ........................................................................................................... 28

EMC Data Protection ................................................................................................ 30

Other technologies .................................................................................................. 31


Overview

This solution uses the XtremIO all-flash array and Microsoft Hyper-V to provide storage and server virtualization in a private cloud. The solution has been designed and proven by EMC to supply the virtualization, server, network, and storage resources that customers need to deploy up to 700 RVMs and the associated shared storage.

This guide also explains how to scale the solution infrastructure for larger environments or as the environment grows.

The following sections describe the components in more detail.

VSPEX Proven Infrastructures

EMC has joined forces with the providers of IT infrastructure to create a complete virtualization solution that accelerates the deployment of the private cloud. VSPEX enables customers to accelerate their IT transformation with faster deployment, greater simplicity and choice, higher efficiency, and lower risk.

VSPEX validation by EMC ensures predictable performance and enables customers to select technology that uses their existing or newly acquired IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a virtual infrastructure for customers who want the simplicity that is characteristic of truly converged infrastructures, with more choice in individual stack components.

VSPEX Proven Infrastructures, as shown in Figure 2, are modular, virtualized infrastructures validated by EMC and delivered by EMC VSPEX partners. These infrastructures include virtualization, server, network, and storage layers. Partners can choose the virtualization, server, and network technologies that best fit a customer’s environment, while XtremIO storage systems and technologies provide the storage layer.


Figure 2. VSPEX Proven Infrastructures


Key components

This section describes the following key components of this solution:

Virtualization layer: Decouples the physical implementation of resources from the applications that use the resources, so that the application view of the available resources is no longer directly tied to the hardware. This enables many key features in the private cloud concept. This solution uses Microsoft Hyper-V for the virtualization layer.

Compute layer: Provides memory and processing resources for the virtualization layer software and for the applications running in the private cloud. The VSPEX program defines the minimum amount of required compute layer resources, and implements the solution by using any server hardware that meets these requirements.

Network layer: Connects the users of the private cloud to the resources in the cloud and the storage layer to the compute layer. The VSPEX program defines the minimum number of required network ports, provides general guidance on network architecture, and enables you to implement the solution by using any network hardware that meets these requirements.

Storage layer: Critical for the implementation of the server virtualization. With multiple hosts accessing shared data, many use cases can be implemented. The XtremIO all-flash array used in this solution provides high performance, enables rapid service and virtual machine provisioning, and supports a number of capacity efficiency and data services capabilities.

Data protection: The components of the solution provide protection when the data in the primary system is deleted, damaged, or unusable. For more information, see EMC Data Protection.

Security layer: Optional solution component that provides customers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system. This solution uses RSA SecurID® to provide secure user authentication.

For more details about the reference architecture components, see Solution architecture.

Virtualization layer

Overview

The virtualization layer decouples the application resource requirements from the underlying physical resources that serve them. This enables greater flexibility in the application layer by eliminating hardware downtime for maintenance, and enables the system to physically change without affecting the hosted applications. In a server virtualization or private cloud use case, the virtualization layer enables multiple independent virtual machines to share the same physical hardware, rather than being directly implemented on dedicated hardware.


Microsoft Hyper-V

Microsoft Hyper-V is a Windows Server role that was introduced in Windows Server 2008. Hyper-V virtualizes computer hardware resources, such as CPU, memory, storage, and networking. This transformation creates fully functional virtual machines that run their own operating systems and applications, like physical computers.

Hyper-V works with Failover Clustering and Cluster Shared Volumes (CSVs) to provide high availability in a virtualized infrastructure. Live migration and live storage migration enable seamless movement of virtual machines or virtual machine files between Hyper-V servers or storage systems transparently and with minimal performance impact.

Virtual Fibre Channel ports

Windows Server 2012 R2 provides virtual Fibre Channel (FC) ports within a Hyper-V guest operating system. The virtual FC port uses the standard N-port ID virtualization (NPIV) process to address the virtual machine WWNs within the Hyper-V host’s physical host bus adapter (HBA). This provides virtual machines with direct access to external storage arrays over FC, enables clustering of guest operating systems over FC, and offers an important new storage option for the hosted servers in the virtual infrastructure. Virtual FC in Hyper-V guest operating systems also supports related features, such as virtual SANs, live migration, and multipath I/O (MPIO).

Prerequisites for virtual FC include:

One or more installations of Windows Server 2012 R2 with the Hyper-V role

One or more FC HBAs installed on the server, each with an appropriate HBA driver that supports virtual FC

NPIV-enabled SAN

Virtual machines using the virtual FC adapter must use one of the following guest operating systems: Windows Server 2008, Windows Server 2008 R2, Windows Server 2012, or Windows Server 2012 R2.
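
The following PowerShell sketch illustrates these steps with the Hyper-V module; the virtual SAN name, VM name, and WWNN/WWPN values are placeholders, not values from this solution:

# Define a virtual SAN backed by a physical FC HBA port (WWNN/WWPN are placeholders)
New-VMSan -Name "ProductionSAN" -WorldWideNodeName "C003FF0000FFFF00" -WorldWidePortName "C003FF5778E50002"
# Attach a virtual Fibre Channel adapter on that virtual SAN to an existing virtual machine
Add-VMFibreChannelHba -VMName "VM01" -SanName "ProductionSAN"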

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a centralized management platform for the virtualized data center. SCVMM enables administrators to configure and manage the virtualized host, networking, and storage resources, and to create and deploy virtual machines and services to private clouds. SCVMM simplifies provisioning, management, and monitoring in the Hyper-V environment.

High availability with Hyper-V Failover Clustering

The Windows Server 2012 R2 Failover Clustering feature provides high availability for Microsoft Hyper-V. Both planned and unplanned downtime affect availability, and Failover Clustering significantly increases the availability of virtual machines in either case.

Configure Windows Server 2012 R2 Failover Clustering on the Hyper-V host to monitor virtual machine health, and migrate virtual machines between cluster nodes. The advantages of this configuration are:

Enables migration of virtual machines to a different cluster node if the cluster node where they reside must be updated, changed, or rebooted.



Enables other members of the Windows Failover Cluster to take ownership of the virtual machines if the cluster node where they reside suffers a failure or significant degradation.

Minimizes downtime due to virtual machine failures. Windows Server Failover Cluster detects virtual machine failures and automatically takes steps to recover the failed virtual machine. This allows the virtual machine to be restarted on the same host server or migrated to a different host server.
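
For example, a minimal PowerShell sketch (host names, cluster name, IP address, and VM name are placeholders) forms a two-node cluster and makes an existing virtual machine highly available:

# Create a two-node failover cluster with a static management IP address
New-Cluster -Name "HVCluster01" -Node "HVHost01","HVHost02" -StaticAddress "192.168.10.50"
# Register an existing virtual machine as a clustered role so it can fail over between nodes
Add-ClusterVirtualMachineRole -Cluster "HVCluster01" -VMName "VM01"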

Hyper-V Replica

Hyper-V Replica, introduced in Windows Server 2012, provides asynchronous virtual machine replication over the network from one Hyper-V host at a primary site to another Hyper-V host at a replica site. Hyper-V Replica protects business applications in the Hyper-V environment from downtime associated with an outage at a single site.

Hyper-V Replica tracks the write operations on the primary virtual machine and replicates the changes to the replica server over the network using HTTP and HTTPS. The amount of network bandwidth required is based on the transfer schedule and data change rate.

If the primary Hyper-V host fails, you can manually fail over the production virtual machines to the Hyper-V hosts at the replica site. Manual failover brings the virtual machines back to a consistent point from which they can be accessed with minimal impact on the business. After recovery, the primary site can receive changes from the replica site. You can perform a planned failback to manually revert the virtual machines back to the Hyper-V host at the primary site.
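
A minimal sketch of this workflow in PowerShell, assuming Kerberos authentication and placeholder host, path, and VM names:

# On the replica host: accept inbound replication from authenticated primary servers
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\ReplicaStorage"
# On the primary host: enable replication for a virtual machine and start the initial copy
Enable-VMReplication -VMName "VM01" -ReplicaServerName "hv-replica.example.local" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "VM01"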

Cluster-Aware Updating

Cluster-Aware Updating, introduced in Windows Server 2012, provides a way to update cluster nodes with little or no disruption. Cluster-Aware Updating transparently performs the following tasks during the update process:

1. Puts one cluster node into maintenance mode and takes it offline (virtual machines are live-migrated to other cluster nodes)

2. Installs the updates

3. Performs a restart if necessary

4. Brings the node back online (migrated virtual machines are moved back to the original node)

5. Updates the next node in the cluster

The node managing the update process is called the Update Coordinator. The Update Coordinator operates in one of two modes:

Self-updating: Runs on the cluster node being updated

Remote-updating: Runs on a standalone Windows operating system and remotely manages the cluster update

Cluster-Aware Updating is integrated with Windows Server Update Services (WSUS). PowerShell enables automation of the Cluster-Aware Updating process.
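
For example, with the ClusterAwareUpdating PowerShell module (the cluster name and schedule below are placeholders), either mode can be invoked as follows:

# Remote-updating mode: one-off run that updates all nodes, one node at a time
Invoke-CauRun -ClusterName "HVCluster01" -MaxFailedNodes 0 -RequireAllNodesOnline -Force
# Self-updating mode: add the CAU clustered role so the cluster updates itself on a schedule
Add-CauClusterRole -ClusterName "HVCluster01" -DaysOfWeek Sunday -WeeksOfMonth 2 -Force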



EMC Storage Integrator for Windows Suite

EMC Storage Integrator (ESI) for Windows Suite is a software package with the essential components for storage administrators to provision business applications in less time, monitor storage health with an in-depth storage topology view, and automate storage management with rich scripting libraries.

Administrators can provision block and file storage for Microsoft Windows or Microsoft SharePoint sites by using wizards in ESI. ESI supports the following functions:

Provisioning, formatting, and presenting drives to Windows servers

Provisioning new cluster disks and automatically adding them to the cluster

Provisioning SharePoint storage, sites, and databases in a single wizard

Compute layer

The choice of a server platform for an EMC VSPEX infrastructure is based not only on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For this reason, EMC VSPEX solutions are designed to run on a wide variety of server platforms. Instead of requiring a specific number of servers with a specific set of requirements, VSPEX solutions have minimum requirements for the number of processor cores and the amount of RAM. The solution can be implemented with two or twenty servers, and still be considered the same VSPEX solution.

In the example shown in Figure 3, the compute layer requirements for a specific implementation are 25 processor cores and 200 GB of RAM. One customer might want to implement this with white-box servers containing 16 processor cores and 64 GB of RAM, while another customer might select a higher-end server with 20 processor cores and 144 GB of RAM.



Figure 3. Compute layer flexibility examples

The first customer needs four of the selected servers, while the other customer needs two.

Note: To enable high availability for the compute layer, each customer needs one additional server to ensure that the system has enough capability to maintain business operations when a server fails.
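
The arithmetic behind this example can be sketched in PowerShell (illustrative figures only):

# Requirements from the example: 25 processor cores and 200 GB of RAM
$requiredCores = 25; $requiredRamGB = 200
# First customer's server model: 16 cores and 64 GB of RAM per server
$servers = [math]::Max([math]::Ceiling($requiredCores / 16), [math]::Ceiling($requiredRamGB / 64))
$servers        # 4 servers; RAM is the constraining resource
$servers + 1    # 5 servers when one spare is added for high availability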

Use the following best practices in the compute layer:

Use several identical, or at least compatible, servers. VSPEX implements hypervisor-level high-availability technologies that may require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

If you implement high availability at the hypervisor layer, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Implement the available high-availability features in the virtualization layer, and ensure that the compute layer has sufficient resources to accommodate at least single server failures. This enables the implementation of minimal-downtime upgrades, and tolerance for single-unit failures.

Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX can be sufficiently flexible to meet your specific needs. Ensure that there are sufficient processor cores and enough RAM per core for the target environment.

Network layer

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This is a required configuration regardless of whether the network infrastructure for the solution already exists, or you are deploying it alongside other components of the solution. Figure 4 shows an example of this highly available network topology.

Figure 4. Example of highly available network design


This validated solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

XtremIO is a block-only storage platform that provides network high availability, or redundancy, by using two ports per storage controller. If a link to a storage controller I/O port is lost, the link fails over to another port. All network traffic is distributed across the active links.

Storage layer

The storage layer is a key component of any cloud infrastructure solution. It serves the data generated by applications and operating systems in the data center.

This VSPEX solution uses XtremIO storage arrays to provide virtualization at the storage layer. The XtremIO platform provides the required storage performance, increases storage efficiency and management flexibility, enhances operational agility, and reduces total cost of ownership.

EMC XtremIO

The EMC XtremIO all-flash array is a clean-sheet design with a revolutionary architecture. It brings together all the necessary and sufficient requirements to enable the agile data center: linear scale-out; inline, all-the-time data services; and rich data center services for the workloads.

The basic hardware building block for these scale-out arrays is the EMC XtremIO X-Brick. Each X-Brick has two active-active controller nodes and a disk array enclosure packaged together with no single point of failure. The EMC XtremIO Starter X-Brick with 13 SSDs can be expanded non-disruptively to a full X-Brick with 25 SSDs. The scale-out cluster can support up to six X-Bricks.

The XtremIO platform is designed to optimize the use of flash storage media. Key attributes of this platform are:

High levels of I/O performance, particularly for the random I/O workloads that are typical in virtualized environments

Consistently low (sub-millisecond) latency

Inline data services that include thin provisioning, deduplication, data compression, and copy data management

Scale-out architecture that scales capacity and I/O performance in tandem while ensuring consistently low sub-millisecond latency

A full suite of enterprise array capabilities, such as N-way active controllers, high availability, strong data protection, and thin provisioning

Integration with EMC solutions for data center services including business continuity, backup and data protection, and converged infrastructure deployments



Because the XtremIO array has a scale-out design, you can add additional performance and capacity in a building block approach, with all building blocks forming a single clustered system. XtremIO storage includes the following components:

Host adapter ports: Provide host connectivity through fabric into the array.

Storage controllers: The compute component of the storage array. Storage controllers handle all aspects of data moving into, out of, and between arrays.

Disk drives: SSDs that contain the host/application data and their enclosures.

InfiniBand switches: A switched, high-throughput, low-latency, and scalable network interconnect used in multi-X-Brick configurations, providing quality of service and failover capability. It carries intra-cluster communication and high-speed data movement.

EMC XtremIO Operating System

The XtremIO storage cluster is managed by the EMC XtremIO Operating System (XIOS). XIOS ensures that the system remains balanced and always delivers the highest levels of performance without any administrator intervention. XIOS:

Ensures that data is evenly distributed across all SSD and controller resources, providing the highest possible performance and endurance that stands up to demanding workloads for the entire life of the array.

Eliminates the need to perform the complex configuration and performance optimization steps required for traditional arrays. There is no need to set RAID levels, determine drive group sizes, set stripe widths, set caching policies, build aggregates, and so on.

Automatically and optimally configures every volume at all times. I/O performance on existing volumes and data sets automatically increases with large cluster sizes. Every volume is capable of receiving the full performance potential of the entire XtremIO system.

Standards-based enterprise storage system

The XtremIO system interfaces with Hyper-V hosts using standard FC and iSCSI block interfaces. The system supports complete high-availability features, including support for native Microsoft Multipath I/O, protection against failed SSDs, non-disruptive software and firmware upgrades, no single point of failure, and hot-swappable components.

Real-time, inline data reduction

The XtremIO storage system deduplicates and compresses incoming data in real time, enabling a massive amount of virtual machines and application data to reside in a small and economical amount of flash capacity. Due to the inline functionality, there is no post-processing of data, which helps to extend the endurance of the SSDs.

Furthermore, data reduction on the XtremIO array does not adversely affect input/output per second (IOPS) or latency performance; instead, it enhances the performance of the virtualized environment.


Scale-out architecture

Using a Starter X-Brick, Microsoft Hyper-V deployments can start small and grow to nearly any required scale by upgrading the Starter X-Brick to a full X-Brick, and then configuring a larger XtremIO cluster if required. The system expands capacity and performance linearly as building blocks are added, making virtualized environments simple to size and manage as demands grow over time.

Massive performance

The XtremIO array is designed to handle very high, sustained levels of small, random, mixed read-and-write I/O, which is typical in virtual environments. It does so with consistent predictable sub-millisecond latency.

Fast provisioning

XtremIO arrays deliver writable snapshot technology that is space-efficient for both data and metadata. XtremIO snapshots are free from limitations of performance, features, topology, or capacity reservations. With their unique in-memory metadata architecture, XtremIO arrays can rapidly clone virtual machine environments of any size.

Ease of use

The XtremIO storage system requires only a few basic setup steps that can be completed in minutes with absolutely no tuning or ongoing administration to achieve and maintain high performance levels. The XtremIO system can be deployment-ready in less than an hour after delivery.

Security with Data at Rest Encryption (D@RE)

XtremIO securely encrypts all data stored on the all-flash array, delivering protection for regulated use cases in sensitive industries, such as healthcare, finance, and government.

Data center economics

XtremIO provides breakthrough total cost of ownership in the virtualized workload environment through its exceptional performance, capacity savings from unique data reduction capabilities, linear predictable scaling with scale-out architecture, and ease of use.

EMC Data Protection

Overview

EMC Data Protection provides data protection by backing up data files or volumes on a defined schedule, and then restores data from backup for recovery after a disaster.

EMC Data Protection is a smart method of backup. It consists of optimal integrated storage protection and software designed to meet backup and recovery objectives now and in the future. With EMC storage protection, deep data source integration, and feature-rich data management services, you can deploy an open, modular storage protection architecture that enables you to scale resources while lowering cost and minimizing complexity.



EMC Avamar deduplication

EMC Avamar provides fast, efficient backup and recovery through a complete software and hardware solution. Equipped with integrated variable-length deduplication technology, Avamar facilitates fast, daily full backups for virtual environments, remote offices, enterprise applications, NAS servers, and desktops/laptops.

EMC Data Domain deduplication storage systems

EMC Data Domain® deduplication storage systems continue to revolutionize disk backup, archiving, and disaster recovery with high-speed, inline deduplication for backup and archive workloads.

EMC RecoverPoint

EMC RecoverPoint® is an enterprise-scale solution that protects application data on heterogeneous SAN-attached servers and storage arrays. EMC RecoverPoint runs on a dedicated appliance and combines continuous data protection technology with a bandwidth-efficient, no-data-loss replication technology. This technology enables dedicated appliances to protect data locally (continuous data protection (CDP)), remotely (continuous remote replication (CRR)), or both (continuous local and remote replication (CLR)), offering the following advantages:

EMC RecoverPoint CDP replicates data within the same site or to a local bunker site some distance away, and transfers the data via FC.

EMC RecoverPoint CRR uses either FC or an existing IP network to send the data snapshots to the remote site using techniques that preserve write-order.

EMC RecoverPoint CLR replicates to both a local and a remote site simultaneously.

EMC RecoverPoint uses lightweight splitting technology to mirror application writes to the EMC RecoverPoint cluster, and supports the following write splitter types:

Array-based

Intelligent fabric-based

Host-based

Other technologies

Overview

In addition to the required technical components for EMC VSPEX solutions, other items may provide additional value depending on the specific use case. These include, but are not limited to, the following technologies.

EMC PowerPath

EMC PowerPath® is a host-based software package that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage deployed in physical and virtual environments. It offers the following benefits for the VSPEX Proven Infrastructure:

Standardized data management across physical and virtual environments



Automated multipathing policies and load balancing to provide predictable and consistent application availability and performance across physical and virtual environments

Improved service-level agreements by eliminating application impact from I/O failures

Note: In this solution, we used PowerPath 6.0 for the management of I/O traffic.

EMC ViPR Controller

EMC ViPR Controller is storage automation software that centralizes, automates, and transforms storage into a simple and extensible platform. It abstracts and pools resources into a single storage platform to deliver automated, policy-driven storage services on demand via a self-service catalog. With vendor-neutral centralized storage management, your team can reduce costs, provide choice, and deliver a path to the cloud.

Public-key infrastructure

The ability to secure data and ensure the identity of devices and users is critical in today’s enterprise IT environment. This is particularly true in regulated sectors such as healthcare, finance, and government. VSPEX solutions can offer hardened computing platforms in many ways, most commonly by implementing a public-key infrastructure (PKI).

VSPEX solutions can be engineered with a PKI designed to meet the security criteria of your organization. The solution can be implemented via a modular process where layers of security are added as needed. The general process involves first implementing a PKI by replacing generic self-certified certificates with trusted certificates from a third-party certificate authority. Services that support PKI are then enabled using the trusted certificates to ensure a high degree of authentication and encryption, where supported.

Depending on the scope of PKI services needed, you may need to implement a PKI dedicated to those needs. There are many third-party tools that offer these services, including end-to-end solutions from RSA that can be deployed within a VSPEX environment.



Chapter 4 Solution Architecture Overview

This chapter presents the following topics:

Overview .................................................................................................................. 34

Solution architecture ............................................................................................... 34

Server configuration guidelines ............................................................................... 39

Network configuration guidelines ............................................................................ 42

Storage configuration guidelines ............................................................................. 44

High-availability and failover ................................................................................... 48

Backup and recovery configuration guidelines ......................................................... 50


Overview

This chapter is a comprehensive guide to the architecture and configuration of this solution. Server capacity is presented in generic terms for the required minimum CPU, memory, and network resources. Your server and networking hardware must meet the stated minimum requirements outlined in this chapter. EMC has validated the storage architecture to ensure that it delivers a high performance, highly available architecture.

Each Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines validated by EMC. In practice, each virtual machine has its own set of requirements that rarely fit a predefined idea of a virtual machine. In any discussion about virtual infrastructures, it is important that you first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that takes into account every possible combination of workload characteristics.

Solution architecture

Overview

This section details the VSPEX Private Cloud solution for Microsoft Hyper-V with XtremIO, configured for up to 700 reference virtual machines (RVMs).

Note: VSPEX uses a reference workload to describe and define a virtual machine. Therefore, one physical or virtual machine in an existing environment may not be equal to one virtual machine in a VSPEX solution. Evaluate your workload in terms of the reference to arrive at an appropriate point of scale. This process is described in Reference workload application.

Logical architecture

Figure 5 shows a validated XtremIO infrastructure, where an 8 Gb/s FC or 10 Gb/s iSCSI SAN carries storage traffic, and 10 GbE carries management and application traffic.



Figure 5. Logical architecture for the solution

Key components

This solution architecture includes the following key components:

Microsoft Hyper-V: Provides a common virtualization layer to host the server environment. Hyper-V provides a highly available infrastructure through features such as:

Live Migration: Provides live migration of virtual machines within a virtual infrastructure cluster, with no virtual machine downtime or service disruption

Live Storage Migration: Provides live migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption

Failover Clustering High Availability: Detects and provides rapid recovery for a failed virtual machine in a cluster

Dynamic Optimization: Provides load balancing of computing capacity in a cluster with support of SCVMM

Microsoft System Center Virtual Machine Manager: SCVMM is technically not required for this VSPEX solution, because the Hyper-V Management Tools in Windows Server 2012 R2 can be used to manage the Hyper-V environment. However, considering the large number of virtual machines the solution is capable of hosting, EMC recommends using SCVMM.



Microsoft SQL Server: Stores configuration and monitoring details for SCVMM, which requires a database service. This solution uses a Microsoft SQL Server 2012 database.

DNS server: Provides name resolution for the various solution components. This solution uses the Microsoft DNS Service running on Windows Server 2012 R2.

Active Directory server: Provides functionality to various solution components that require the Active Directory service. The Active Directory service runs on a Windows Server 2012 R2 system.

Shared infrastructure: Adds DNS and authentication/authorization services with existing infrastructure or sets them up as part of the new virtual infrastructure.

IP network: Carries user and management traffic. A standard Ethernet network carries all network traffic with redundant cabling and switching.

Storage network

The storage network is isolated to provide the hosts with access to the array. The following options are available:

Fibre Channel: Performs high-speed serial data transfer with a set of standard protocols. Fibre Channel (FC) provides a standard data transport frame among servers and shared storage devices.

10 Gb Ethernet (iSCSI): Enables the transport of SCSI blocks over a TCP/IP network. iSCSI works by encapsulating SCSI commands into TCP packets and sending the packets over the IP network.

XtremIO all-flash array

The XtremIO all-flash array includes the following components:

X-Brick: Represents the fundamental scaling unit of the array, a physical chassis that contains two active/active storage controllers and a disk array enclosure (DAE) of eMLC SSDs. When the XtremIO cluster scales, the array clusters together multiple X-Bricks with an InfiniBand back-end switch.

Storage controller: Represents a physical computer (1U in size) in the cluster that acts as a storage controller, providing block data access over the FC and iSCSI protocols. Storage controllers can access all SSDs in the same X-Brick.

Processor D: Represents one of two CPU sockets for each storage controller. Processor D is responsible for disk access.

Processor RC: Represents the other CPU socket, which is responsible for the router (hash writes and lookup) and controller (metadata) functions.

Battery backup unit: Provides enough power to each storage controller to ensure that any data in flight destages to disk in the event of a power failure. The first X-Brick has two battery backup units for redundancy. As clusters require additional X-Bricks, only a single battery backup unit is necessary for each additional X-Brick, which is 1U in size.

DAE: Houses the flash drives that the array uses and is 2U in size.


InfiniBand switches: Connect multiple X-Bricks together; each switch is 1U in size. Two separate switches are needed to ensure that the fabric tying the controllers together is highly available.

Hardware resources

Table 1 lists the hardware used in this solution.

Table 1. Solution hardware

Component Configuration

Hyper-V servers

CPU 1 vCPU per virtual machine

4 vCPUs per physical core

Note: For Intel Ivy Bridge or later processors, use six vCPUs per physical core.

For 700 virtual machines:

700 vCPUs

Minimum of 175 physical CPU cores (117 cores for Intel Ivy Bridge or later processors)

Memory 2 GB RAM per virtual machine

2 GB RAM reservation per Hyper-V host

For 700 virtual machines:

Minimum of 1,400 GB RAM

Add 2 GB for each physical server

Network 2 x 10 GbE network interface cards (NICs) per server

2 HBAs per server, or 2 x 10 GbE NICs per server, for data traffic

Note: You must add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V high availability functionality and to meet the listed minimums.

Network infrastructure

Minimum switching capacity

2 physical Ethernet switches

2 physical SAN switches, if implementing FC

2 x 10 GbE ports per Hyper-V server for management, user/application traffic, and Live Migration

2 ports per Hyper-V server for the storage network (FC or iSCSI)

2 ports per storage controller for storage data (FC or iSCSI)

EMC XtremIO all-flash array One X-Brick with 25 x 400 GB SSD drives



Shared infrastructure In most cases, the customer environment already has infrastructure services configured, such as Active Directory, DNS, and so on. The setup of these services is beyond the scope of this guide.

If implemented without the existing infrastructure, the new minimum requirements are:

2 physical servers

16 GB RAM per server

4 processor cores per server

2 x 1 GbE ports per server

Note: You can migrate the services into this solution post-deployment. However, the services must exist before the solution is deployed.

Note: EMC recommends that you use a 10 GbE network or an equivalent 1 GbE network infrastructure as long as the underlying requirements for bandwidth and redundancy are fulfilled.

Software resources

Table 2 lists the software used in this solution.

Table 2. Solution software

Software Configuration

Microsoft Windows Server with Hyper-V

Microsoft Windows Server Version 2012 R2 Datacenter Edition

Note: Datacenter Edition is necessary to support the number of virtual machines in this solution.

Microsoft System Center Virtual Machine Manager

Version 2012 R2 Datacenter Edition

Note: Datacenter Edition is necessary to support the number of operating system environments (servers and virtual machines) used in this solution.

Microsoft SQL Server

Version 2012 Standard Edition

Note: Any version of Microsoft SQL Server supported by SCVMM is acceptable.

EMC PowerPath Use latest version

XtremIO (for Hyper-V datastores)

EMC XtremIO Operating System Release 3.0

EMC Data Protection



EMC Avamar Refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

EMC Data Domain Operating System Refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.

Virtual machines (used for validation, but not required for deployment)

Microsoft Windows Base Operating System

Microsoft Windows Server 2012 R2 Datacenter Edition

Server configuration guidelines

Overview

When designing and ordering the compute layer of this VSPEX solution, several factors may impact the final purchase. From a virtualization perspective, if a system workload is well understood, features such as Dynamic Memory can reduce the aggregate memory requirement.

If the virtual machine pool does not have a high level of peak or concurrent usage, reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, increase the number of CPUs and memory purchased.

Intel Ivy Bridge updates

Testing on the Intel Ivy Bridge processor series has shown significant increases in virtual machine density from the server resource perspective. If your server deployment comprises Ivy Bridge processors, EMC recommends increasing the vCPU/physical CPU (pCPU) ratio from 4:1 to 6:1. This reduces the number of server cores required to host the RVMs.

Current VSPEX sizing guidelines require a maximum vCPU core to pCPU core ratio of 4:1, with a maximum 6:1 ratio for Ivy Bridge or later processors. This ratio is based on an average sampling of CPU technologies available at the time of testing. As CPU technologies advance, original equipment manufacturer (OEM) server vendors that are VSPEX partners may suggest different (normally higher) ratios. Follow the updated guidance supplied by the OEM server vendor.
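
As a worked example (illustrative only), the core counts quoted in this guide for 700 virtual machines follow directly from these ratios:

$vms = 700                  # one vCPU per reference virtual machine
[math]::Ceiling($vms / 4)   # 175 physical cores at the standard 4:1 vCPU:pCPU ratio
[math]::Ceiling($vms / 6)   # 117 physical cores at the 6:1 ratio for Ivy Bridge or later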



Table 3 lists the hardware resources used for the compute layer.

Table 3. Hardware resources for the compute layer

Component Configuration

Microsoft Hyper-V servers

CPU 1 vCPU per virtual machine

4 vCPUs per physical core

Note: For Intel Ivy Bridge or later processors, use six vCPUs per physical core.

For 700 virtual machines:

700 vCPUs

Minimum of 175 physical CPU cores (117 cores for Intel Ivy Bridge or later processors)

Memory 2 GB RAM per virtual machine

2 GB RAM reservation per Hyper-V host

For 700 virtual machines:

Minimum of 1,400 GB RAM

Add 2 GB for each physical server

Network

Block 2 x 10 GbE NICs per server

2 HBAs per server, or 2 x 10 GbE NICs per server, for the iSCSI connection

Note: Add at least one additional server to the infrastructure beyond the minimum requirements to implement Microsoft Hyper-V high availability functionality and to meet the listed minimums.

Note: EMC recommends using a 10 GbE network or an equivalent 1 GbE network infrastructure as long as the underlying requirements for bandwidth and redundancy are fulfilled.

Hyper-V memory virtualization

Microsoft Hyper-V has several advanced features that help maximize performance and overall resource use. The most important features relate to memory management. This section describes some of these features, and what to consider when using these features in a VSPEX environment.

Figure 6 shows how a single hypervisor consumes memory from a pool of resources. Hyper-V memory management features such as Dynamic Memory and Smart Paging can reduce total memory usage and increase consolidation ratios in the hypervisor.



Figure 6. Hypervisor memory consumption

Understanding the technologies in this section makes it easier to understand this basic concept.

Dynamic Memory

Dynamic Memory was introduced in Windows Server 2008 R2 SP1 to increase physical memory efficiency by treating memory as a shared resource, and dynamically allocating it to virtual machines. The amount of memory used by each virtual machine is adjustable at any time. Dynamic Memory reclaims unused memory from idle virtual machines, which allows more virtual machines to run at any given time. In Windows Server 2012 R2, Dynamic Memory enables administrators to dynamically increase the maximum memory available to virtual machines.
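
As an illustration, Dynamic Memory can be enabled with the Hyper-V module's Set-VMMemory cmdlet; the VM name and memory sizes below are placeholders:

# Enable Dynamic Memory: start at 1 GB, reclaim down to 512 MB, grow up to 4 GB
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB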

Smart Paging

Even with Dynamic Memory, Hyper-V enables more virtual machines than the available physical memory can support. In most cases, there is a memory gap between minimum memory and startup memory. Smart Paging is a memory management technique that uses disk resources as temporary memory replacement.


It swaps out less-used memory to disk storage, and swaps it in when needed. Performance degradation is a potential drawback of Smart Paging. Hyper-V continues to use the guest paging when the host memory is oversubscribed because it is more efficient than Smart Paging.

Non-Uniform Memory Access

Non-Uniform Memory Access (NUMA) is a multinode computer technology that enables a CPU to access remote-node memory. This type of memory access degrades performance, so Windows Server 2012 R2 employs a process known as processor affinity, which pins threads to a single CPU to avoid remote-node memory access. In previous versions of Windows, this feature was only available to the host. Windows Server 2012 R2 extends this functionality to the virtual machines, which provides improved performance in symmetric multiprocessor (SMP) environments.

Memory configuration guidelines

The memory configuration guidelines take into account the Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V memory overhead

Virtualized memory has some associated overhead, including the memory consumed by Hyper-V, the parent partition, and additional overhead for each virtual machine.

In this solution, leave at least 2 GB memory for the Hyper-V parent partition.

Virtual machine memory

In this solution, configure each virtual machine with 2 GB memory in fixed mode.
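
A one-line sketch of this setting, assuming a placeholder VM name:

# Disable Dynamic Memory and assign a fixed 2 GB, matching the reference virtual machine
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $false -StartupBytes 2GB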

Network configuration guidelines

Overview

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines consider VLANs and FC/iSCSI connections on XtremIO storage. For detailed network resource requirements, refer to Table 1 on page 37.



VLANs

Isolate network traffic so that the traffic between hosts and storage, hosts and clients, and management traffic all move over isolated networks. In some cases, physical isolation may be required for regulatory or policy compliance reasons; however, in many cases, logical isolation with VLANs is sufficient.

As a best practice, EMC recommends a minimum of three VLANs (four if iSCSI is implemented) for:

Customer data

Storage for iSCSI, if implemented

Live Migration and live storage migration

Management

Figure 7 shows the VLANs and the network connectivity requirements for the XtremIO array.

Figure 7. Required networks for XtremIO storage

The customer data network is for system users (or clients) to communicate with the infrastructure. The storage network provides communication between the compute layer and the storage layer. Administrators use the management network as a dedicated way to access the management connections on the storage array, network switches, and hosts.

Note: Some best practices need additional network isolation for cluster traffic, virtualization layer communication, and other features. Implement these additional networks if necessary.



Enable jumbo frames (for iSCSI)

EMC recommends setting the maximum transmission unit (MTU) to 9,000 (jumbo frames) for efficient storage and migration traffic. Refer to the switch vendor guidelines to enable jumbo frames on switch ports for storage and host ports on the switches.
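
The following sketch illustrates both settings in PowerShell; the adapter name, registry keyword (which varies by NIC driver), and VLAN ID are placeholders:

# Enable jumbo frames on a physical NIC used for iSCSI traffic
Set-NetAdapterAdvancedProperty -Name "iSCSI-NIC1" -RegistryKeyword "*JumboPacket" -RegistryValue "9014"
# Place a virtual machine network adapter in access mode on the storage VLAN
Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 20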

Storage configuration guidelines

Overview

This section provides guidelines for setting up the storage layer to provide high availability and the expected level of performance.

Microsoft Hyper-V allows more than one method of storage when hosting virtual machines. The tested solution uses different block protocols (FC/iSCSI), and the storage layout described in this section adheres to all current best practices. If required, you can make modifications to this solution based on your system usage and load requirements.

XtremIO X-Brick scalability

XtremIO storage clusters support a fully distributed, scale-out design that enables linear increases in both capacity and performance in order to provide infrastructure agility. XtremIO uses a building-block approach in which the array can be scaled using additional X-Bricks. With clusters of two or more X-Bricks, XtremIO uses a redundant 40 Gb/s quad data rate (QDR) InfiniBand network for back-end connectivity among the storage controllers. This ensures a highly available, ultra-low-latency network. Host access is provided by using two N-way active controllers for linear scaling of performance and capacity for simplified support of growing virtual environments. As a result, as capacity in the array grows, performance also grows as more storage controllers are added.

As shown in Figure 8, the single brick is the basic building block of an XtremIO array.

Figure 8. Single X-Brick XtremIO storage

Each X-Brick comprises:

One 2U DAE, containing:

25 eMLC SSDs (10 TB X-Brick) or 13 eMLC SSDs (5 TB Starter X-Brick)

Two redundant power supply units

Two redundant SAS interconnect modules



One battery backup unit

Two 1U storage controllers (redundant storage processors). Each storage controller includes:

Two redundant power supply units

Two 8 Gb/s FC ports

Two 10 GbE iSCSI ports

Two 40 Gb/s InfiniBand ports

One 1 Gb/s management/IPMI port

Note: For details on X-Brick racking and cabinet requirements, refer to the EMC XtremIO Storage Array Site Preparation Guide.

Figure 9 shows what the different cluster configurations look like as you scale up. You can start from one single X-Brick, and then, as you scale, you can add a second X-Brick, and then a third, and so on. The performance scales linearly as additional X-Bricks are added.

Figure 9. Cluster configuration as single and multiple X-Brick clusters

Note: A Starter X-Brick is physically similar to a single X-Brick cluster, except for the number of SSDs in the DAE (13 SSDs in a Starter X-Brick instead of 25 SSDs in a standard single X-Brick).


Hyper-V storage virtualization

Windows Server 2012 R2 Hyper-V and Failover Clustering use CSV and VHDX features to virtualize storage presented from an external shared storage system to host virtual machines. In Figure 10, the storage array presents block-based LUNs (as CSVs) to the Windows hosts to host virtual machines.

Figure 10. Hyper-V virtual disk types

CSV

A CSV is a shared disk containing an NTFS volume that is made accessible to all nodes of a Windows Failover Cluster. It can be deployed over any SCSI-based local or network storage.
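
For example, an available cluster disk can be promoted to a CSV with a single FailoverClusters cmdlet (the disk resource and cluster names are placeholders); the volume then appears under C:\ClusterStorage on every node:

# Convert an available cluster disk into a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 1" -Cluster "HVCluster01"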

Pass-through disks

Windows Server 2012 R2 also supports pass-through disks, which enable a virtual machine to access a physical disk mapped to a host that does not have a volume configured on it.

VHDX

Hyper-V in Windows Server 2012 R2 contains a VHD format update, VHDX, which has much greater capacity and built-in resiliency. The main features of the VHDX format are:

Support for virtual hard disk storage capacity of up to 64 TB

Additional protection against data corruption during power failures by logging updates to the VHDX metadata structures

Optimal structure alignment of the virtual hard disk format to suit large sector disks

The VHDX format has the following features:

Larger block size for dynamic and differential disks, which enables the disks to better meet the needs of the workload

A 4 KB logical-sector virtual disk that enables increased performance when used by applications and workloads that are designed for 4 KB sectors



The ability to store custom file metadata that the user might want to record, such as the operating system version or applied updates

Space reclamation features that can result in smaller file sizes and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware)
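
As a sketch, a VHDX sized to the 100 GB reference virtual machine can be created on a CSV as follows; the path is a placeholder, and the 4 KB logical sector size is optional and workload-dependent:

# Create a dynamically expanding 100 GB VHDX with 4 KB logical sectors
New-VHD -Path "C:\ClusterStorage\Volume1\VM01.vhdx" -SizeBytes 100GB -Dynamic -LogicalSectorSizeBytes 4096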

VSPEX storage building blocks

Sizing the storage system to meet virtual machine IOPS is a complicated process. Customers must consider various factors when planning and scaling their storage system to balance capacity, performance, and cost for their applications.

VSPEX uses a building block approach to reduce complexity. A building block is a set of disks that can support a certain number of virtual machines in the VSPEX architecture. Each building block combines several disks to create an XtremIO protection group that supports the needs of the private cloud environment.

Building block for Starter X-Brick

The Starter X-Brick building block can support up to 300 virtual machines with 13 SSDs in the XtremIO data protection group, as shown in Figure 11.

Figure 11. XtremIO Starter X-Brick building block for 300 virtual machines

In the Starter X-Brick configuration, the raw capacity is 5 TB. Detailed information about the test profile can be found in Chapter 5. You can expand the raw capacity of this building block to 10 TB by adding 12 more SSDs, which enables the configuration to support up to 700 virtual machines.

Building block for a single X-Brick

X-Bricks with 25 SSDs as shown in Figure 12 are available with 10 TB and 20 TB raw capacity.

Figure 12. XtremIO single X-Brick building block for 700 virtual machines

A single X-Brick with 10 TB raw capacity can support up to 700 virtual machines, while an X-Brick with 20 TB raw capacity can support up to 1,400 virtual machines.



Table 4 lists the different scales of virtual machines supported by different types and numbers of X-Bricks.

Table 4. XtremIO scalable scenarios with virtual machines

Scalable Virtual machines

Starter X-Brick (5 TB) 300

One X-Brick (10 TB) 700

One X-Brick (20 TB) 1,400

Two X-Brick cluster (40 TB) 2,800

Four X-Brick cluster (80 TB, XIOS v4.0) 5,600

Six X-Brick cluster (120 TB, XIOS v4.0) 8,400

Note: The number of supported virtual machines is based on a tested configuration using a value of 15 percent for unique data. This constitutes data that cannot be deduplicated. The XtremIO platform uses real-time deduplication to maximize the efficiency of its all-flash architecture. As a result, its logical capacity presented to users is greater than the physical capacity available in the system. When managing the system, monitor the current physical usage independently of the logical allocation so that out-of-space conditions can be avoided. EMC recommends keeping the physical allocation of the unit below 90 percent as a best practice.
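
To illustrate (approximate arithmetic only, not a sizing formula), the 15 percent unique-data assumption is roughly what allows a 10 TB X-Brick to back 700 reference virtual machines of 100 GB each, as defined in Chapter 5:

$vms = 700; $vmCapacityGB = 100; $uniqueDataRatio = 0.15
$logicalTB  = $vms * $vmCapacityGB / 1024     # ~68 TB of logical capacity presented
$physicalTB = $logicalTB * $uniqueDataRatio   # ~10 TB of unique data after reduction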

High-availability and failover

Overview

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When you implement the solution by following the instructions in this guide, business operations survive with little or no impact from single-unit failures.

Virtualization layer

Configure high availability in the virtualization layer, and enable the hypervisor to automatically restart failed virtual machines. Figure 13 shows the hypervisor layer responding to a failure in the compute layer.

Figure 13. High availability at the virtualization layer

By implementing high availability at the virtualization layer, even in the event of a hardware failure, the infrastructure attempts to keep as many services running as possible.

Compute layer

While the choice of servers to implement in the compute layer is flexible, we recommend that you use enterprise-class servers designed for the data center. This type of server has increased component redundancy, for example, redundant power supplies, as shown in Figure 14. Connect these servers to separate power distribution units (PDUs) following your server vendor’s best practices.

Figure 14. Redundant power supplies

To configure high availability in the virtualization layer, configure the compute layer with enough resources to meet the needs of the environment, even with a server failure, as demonstrated in Figure 13.

Network layer

The XtremIO advanced networking features provide protection against network connection failures at the array. Each Hyper-V host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 15. Spread these connections across multiple Ethernet switches to guard against component failure in the network.

Figure 15. Network layer high availability



Storage layer

The XtremIO family is designed for five-nines (99.999 percent) availability by using redundant components throughout the array, as shown in Figure 16. All of the array components are capable of continued operation in case of hardware failure. XtremIO Data Protection (XDP) delivers the protection of RAID 6 while exceeding the performance of RAID 1 and the capacity utilization of RAID 5, guarding against data loss due to drive failures.

Figure 16. XtremIO high availability

EMC storage arrays are designed to be highly available by default. Use the installation guides to ensure that there are no single unit failures that result in data loss or unavailability.

XtremIO Data Protection

Flash arrays that rely on standard disk-based RAID algorithms perform poorly, waste expensive flash capacity, and shorten the lifespan of the flash media. XtremIO instead developed a data protection scheme, XtremIO Data Protection (XDP), that exploits both the random-access nature of flash and the unique XtremIO dual-stage metadata engine. The result is flash-native data protection that delivers much lower capacity overhead, superior data protection, and much better flash endurance and performance than any standard RAID algorithm.

XDP delivers superior RAID 6 performance, while exceeding RAID 1 performance and RAID 5 capacity utilization. More importantly, XDP is optimized for long-term enterprise operating conditions, where overwriting existing data becomes dominant in the array. Unlike other flash arrays, XDP enables XtremIO to maintain its performance until completely full, giving you the most economical use of flash.

Backup and recovery configuration guidelines

For details regarding backup and recovery configuration for this VSPEX Private Cloud solution, refer to EMC Backup and Recovery Options for VSPEX Private Clouds Design and Implementation Guide.



Chapter 5 Environment Sizing

This chapter presents the following topics:

Overview

Reference workload

Scaling out

Reference workload application

Quick assessment


Overview

The following sections provide definitions of the reference workload used to size and implement the VSPEX architectures. The sections include instructions on how to correlate those reference workloads to customer workloads, and descriptions of how that may change the end delivery from the server and network perspective.

Modify the storage definition by adding drives for greater capacity and performance and by adding X-Bricks to improve the cluster performance. The cluster layouts provide support for the appropriate number of virtual machines at the defined performance level.

Reference workload

Defining the reference workload

When you move an existing server to a virtual infrastructure, you can gain efficiency by right-sizing the virtual hardware resources assigned to that system, and by improving resource utilization of the underlying hardware.

Each VSPEX Proven Infrastructure balances the storage, network, and compute resources needed for a set number of virtual machines, as validated by EMC. Each virtual machine has its own unique requirements. In any discussion about virtual infrastructures, you need to first define a reference workload. Not all servers perform the same tasks, and it is impractical to build a reference that considers every possible combination of workload characteristics.

To simplify this discussion, this section presents a representative customer reference workload. By comparing the actual customer usage to this reference workload, you can determine how to size the solution.

VSPEX Private Cloud solutions define a reference virtual machine (RVM) workload, which represents a common point of comparison. Because XtremIO has an inline deduplication feature, it is critical to determine the unique data percentage, as this parameter affects XtremIO physical capacity usage. In the validated solution, we set the unique data ratio to 15 percent. Table 5 describes the parameters.

Table 5. VSPEX Private Cloud RVM workload

Parameter                              Value
Virtual machine OS                     Windows Server 2012 R2
vCPUs                                  1
vCPUs per physical core (maximum)      4 (see note)
Memory per virtual machine             2 GB
IOPS per virtual machine               25
I/O size                               8 KB
I/O pattern                            Fully random; skew = 0.5
I/O read percentage                    67%
I/O write percentage                   33%
Virtual machine storage capacity       100 GB
Unique data                            15%

Note: The vCPUs-per-core maximum is based on testing with Intel Sandy Bridge processors. Newer processors can support six vCPUs per core or more. Follow the recommendations of your VSPEX server vendor.

This specification for a virtual machine represents a single common point of reference by which to measure other virtual machines.

Scaling out

XtremIO is designed to scale from a Starter X-Brick or single X-Brick to a cluster of multiple X-Bricks (up to six X-Bricks in the current code release). Unlike most traditional storage systems, as the number of X-Bricks grows, so do capacity, throughput, and IOPS; performance scales linearly with the size of the deployment. Whenever additional storage and compute resources (such as drives and servers) are needed, you can add them modularly. Storage and compute resources grow together so that the balance between them is maintained.

Reference workload application

The solution creates storage resources that are sufficient to host a target number of RVMs with the characteristics shown in Table 5. Actual virtual machines may not exactly match these specifications. In that case, define a single specific customer virtual machine as the equivalent of some number of RVMs, and assume these RVMs are in use in the pool. Continue to provision virtual machines from the pool until no resources remain.

Example 1: Custom-built application

A small custom-built application server must move into a virtual infrastructure. The physical hardware that supports the application is not fully used. A careful analysis of the existing application reveals that the application can use one processor and needs 3 GB of memory to run normally. The I/O workload ranges from four IOPS at idle to a peak of 15 IOPS when busy. The entire application consumes about 30 GB on direct-attached storage (DAS).

Based on these numbers, the application needs the following resources:

CPU of one RVM

Memory of two RVMs

Storage of one RVM

I/Os of one RVM


In this example, the corresponding virtual machine uses the resources of two RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 698 RVMs remain.

Example 2: Point-of-sale system

The database server for a customer's point-of-sale system must move into this virtual infrastructure. It is currently running on a physical system with four CPUs and 16 GB of memory. It uses 200 GB of storage and generates 200 IOPS during an average busy cycle.

The requirements to virtualize this application are:

CPUs of four RVMs

Memory of eight RVMs

Storage of two RVMs

I/Os of eight RVMs

In this case, the corresponding virtual machine uses the resources of eight RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 692 RVMs remain.

Example 3: Web server

The customer's web server must move into a virtual infrastructure. It is currently running on a physical system with two CPUs and 8 GB of memory. It uses 25 GB of storage and generates 50 IOPS during an average busy cycle.

The requirements to virtualize this application are:

CPUs of two RVMs

Memory of four RVMs

Storage of one RVM

I/Os of two RVMs

In this case, the corresponding virtual machine uses the resources of four RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 696 RVMs remain.

Example 4: Decision-support database

The database server for a customer's decision-support system must move into a virtual infrastructure. It is currently running on a physical system with ten CPUs and 64 GB of memory. It uses 5 TB of storage and generates 700 IOPS during an average busy cycle.

The requirements to virtualize this application are:

CPUs of 10 RVMs

Memory of 32 RVMs

Storage of 52 RVMs

I/Os of 28 RVMs


In this case, the corresponding virtual machine uses the resources of 52 RVMs. If implemented on a single 10 TB XtremIO X-Brick storage system, which can support up to 700 virtual machines, resources for 648 RVMs remain.

Summary of examples

These examples demonstrate the flexibility of the resource pool model. In all four examples, the workloads reduce the amount of available resources in the pool. With business growth, the customer must implement a much larger virtual environment to support one custom-built application, one point-of-sale system, two web servers, and ten decision-support databases. Using the same strategy, calculate the number of equivalent RVMs to get a total of 538 RVMs. All of these RVMs can be implemented on the same virtual infrastructure with an initial capacity of 700 RVMs, which a single 10 TB X-Brick supports.

The resources for 162 RVMs remain in the resource pool, as shown in Figure 17.

Figure 17. Resource pool flexibility

In this case, you must examine the change in resource balance and determine the new level of requirements. Add these virtual machines to the infrastructure with the method described in the examples.

In more advanced cases, tradeoffs might be necessary between memory and I/O or other resources, where increasing the amount of one resource decreases the need for another. In these cases, the interactions between resource allocations become highly complex and are beyond the scope of this guide.

Quick assessment

Performing a quick assessment of the customer's environment helps you determine the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations and help assess the customer environment.

First, summarize the applications planned for migration into the VSPEX private cloud. For each application, determine the number of vCPUs, the amount of memory, the required storage performance, the required storage capacity, and the number of RVMs required from the resource pool. The Reference workload section provides examples of this process.

Complete a worksheet row for each application, as shown in Table 6. Each row requires inputs for the following resources: CPU, memory, IOPS, and capacity.


Table 6. Blank worksheet row

Application          Row                                     CPU (vCPUs)  Memory (GB)  IOPS  Capacity (GB)  Equivalent RVMs
Example application  Resource requirements                                                                  NA
                     Equivalent reference virtual machines

CPU requirements

Optimizing CPU utilization is a significant goal for almost any virtualization project. A simple view of the virtualization operation suggests a one-to-one mapping between physical CPU cores and virtual CPU cores regardless of the physical CPU utilization. In reality, consider whether the target application can effectively use all CPUs presented.

Use a performance-monitoring tool, such as Perfmon in Microsoft Windows, to examine the CPU utilization counter for each CPU. If utilization is spread evenly across all CPUs, implement that number of virtual CPUs when moving into the virtual infrastructure. However, if some CPUs are heavily used and others are not, consider decreasing the number of virtual CPUs required.

In any operation that involves performance monitoring, collect data samples for a period of time that includes all operational use cases of the system. Use either the maximum or 95th percentile value of the resource requirements for planning purposes.

Memory requirements

Server memory plays a key role in ensuring application functionality and performance. Therefore, each server process has different targets for the acceptable amount of available memory. When moving an application into a virtual environment, consider the current memory available to the system and monitor the free memory by using a performance-monitoring tool, such as Perfmon, to determine memory efficiency.
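As an illustration, the following PowerShell sketch collects both the CPU and available-memory counters and reports a 95th percentile CPU value for planning. The counter paths are standard Perfmon counters; the sampling window (5-second samples for 10 minutes) is a hypothetical choice, so stretch it to cover all operational use cases:

# Sample CPU and available memory every 5 seconds for 10 minutes.
$samples = Get-Counter -Counter '\Processor(_Total)\% Processor Time',
                                '\Memory\Available MBytes' -SampleInterval 5 -MaxSamples 120

# Compute the 95th percentile of CPU utilization for planning purposes.
$cpu = $samples.CounterSamples |
    Where-Object Path -like '*% processor time*' |
    Select-Object -ExpandProperty CookedValue
$sorted = @($cpu | Sort-Object)
$p95 = $sorted[[math]::Ceiling($sorted.Count * 0.95) - 1]
'95th percentile CPU utilization: {0:N1}%' -f $p95

The same $samples variable holds the Available MBytes values, which you can summarize the same way to judge memory headroom.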

Storage performance requirements

Several components become important when discussing the I/O performance of a system:

The number of requests coming in, or IOPS.

The size of the request or I/O size. For example, a request for 4 KB of data is easier and faster to process than a request for 4 MB of data.

The average I/O response time, or I/O latency.

IOPS

The RVM calls for 25 IOPS. To monitor this on an existing system, use a performance-monitoring tool such as Perfmon. Perfmon provides several counters that can help. The most common are:

LogicalDisk or PhysicalDisk: Disk Transfers/sec

LogicalDisk or PhysicalDisk: Disk Reads/sec

LogicalDisk or PhysicalDisk: Disk Writes/sec

The RVM assumes a 2:1 read/write ratio. Use these counters to determine the total number of IOPS and the approximate ratio of reads to writes for the customer application.
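The following PowerShell sketch is one hedged way to collect these counters over a sampling window and report their averages. The interval and sample counts are illustrative; the last counter also captures the I/O latency discussed later in this section:

# Capture IOPS, the read/write mix, and latency over a 5-minute window.
$counters = '\LogicalDisk(_Total)\Disk Transfers/sec',
            '\LogicalDisk(_Total)\Disk Reads/sec',
            '\LogicalDisk(_Total)\Disk Writes/sec',
            '\LogicalDisk(_Total)\Avg. Disk sec/Transfer'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 60 |
    ForEach-Object { $_.CounterSamples } |
    Group-Object { ($_.Path -split '\\')[-1] } |
    ForEach-Object {
        $avg = ($_.Group.CookedValue | Measure-Object -Average).Average
        '{0}: average {1:N3}' -f $_.Name, $avg
    }

Dividing the average reads/sec by writes/sec gives the approximate read/write ratio to compare against the 2:1 RVM assumption.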

I/O size

The I/O size is important because smaller I/O requests are faster and easier to process than large I/O requests. The RVM assumes an average I/O request size of 8 KB, which is appropriate for a large range of applications. Most applications use I/O sizes that are powers of two: 4 KB, 8 KB, 16 KB, 32 KB, and so on are common. However, because the performance counter reports a simple average, it is common to see values such as 11 KB or 15 KB instead of even I/O sizes.

If the average customer I/O size is less than 8 KB, use the observed IOPS number. However, if the average I/O size is significantly higher, apply a scaling factor to account for the large I/O size. A safe estimate is to divide the I/O size by 8 KB and use that factor. For example, if the application uses mostly 32 KB I/O requests, use a factor of four (32 KB/8 KB = 4). If that application generates 100 IOPS at 32 KB, the factor indicates you should plan for 400 IOPS, since the RVM assumes 8 KB I/O sizes.
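As a quick illustration of this scaling factor, the following PowerShell fragment converts a hypothetical observed workload to RVM-equivalent IOPS:

# Hypothetical measurements: 100 IOPS at an average I/O size of 32 KB.
$observedIops  = 100
$avgIoSizeKB   = 32
$scalingFactor = [math]::Max(1, [math]::Ceiling($avgIoSizeKB / 8))
$planningIops  = $observedIops * $scalingFactor   # 100 * 4 = 400 RVM-equivalent IOPS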

I/O latency

You can use the average I/O response time, or I/O latency, to measure how quickly the storage system processes I/O requests. VSPEX solutions must meet a target average I/O latency of 20 ms. The XtremIO array easily achieved this with an average sub-millisecond response time.

The recommendations in this guide enable the system to continue to meet that 20 ms target, and at the same time, monitor the system and reevaluate the resource pool utilization if needed. To monitor I/O latency, use the Logical Disk\Avg. Disk sec/Transfer counter in Microsoft Windows Perfmon. If the I/O latency is continuously over the target, reevaluate the virtual machines in the environment to ensure that these machines do not use more resources than intended.

Unique data

XtremIO automatically and globally deduplicates data as it enters the system. Deduplication is performed in real time, not as a post-processing operation, which makes XtremIO an ideal capacity-saving storage array. The physical capacity consumed depends on the deduplication ratio of the dataset; in this solution's validation, it was based on the deduplication ratio configured in the testing tool.

Virtualization platforms typically have a high number of duplicate datasets. For example, the use of common OS builds and versions for virtual machines results in a relatively low percentage of truly unique data. The scaling numbers for this solution were based on a data uniqueness value of 15 percent. This translates into a deduplication ratio of approximately 7:1, which was validated by monitoring the XtremIO deduplication and compression metrics during testing.

If your datasets have a higher percentage of unique data, the amount of capacity consumed on the XtremIO array will increase, and the number of available storage resources for RVMs will decrease accordingly. This may lower the number of RVMs the configuration can hold unless additional capacity is added.


XtremIO offers tools to assess the deduplication potential of data in your existing environment. Use the tool to determine a likely deduplication ratio, and compare it to the ratio used for this testing to assess the impact on available capacity and the number of RVMs the configuration can support. For information about the XtremIO Data Reduction Estimator tool, read the "Everything Oracle at EMC" blog post EMC XtremIO Data Reduction Estimator.

Storage capacity requirements

Determine the disk space used, and add an appropriate factor to accommodate growth. For example, virtualizing a server that currently uses 40 GB of a 200 GB internal drive, with anticipated growth of approximately 20 percent over the next year, requires 48 GB. In addition, reserve space for regular maintenance patches and swap files. Some file systems, such as Microsoft NTFS, degrade in performance if they become too full.

Determining equivalent reference virtual machines

With all of the resources defined, determine an appropriate value for the Equivalent RVMs line by using the relationships in Table 7. Round all values up to the closest whole number.

Table 7. Reference virtual machine resources

Resource  Value for RVMs  Relationship between requirements and equivalent RVMs
CPU       1 vCPU          Equivalent RVMs = resource requirements
Memory    2 GB            Equivalent RVMs = (resource requirements)/2
IOPS      25 IOPS         Equivalent RVMs = (resource requirements)/25
Capacity  100 GB          Equivalent RVMs = (resource requirements x 0.15)/100

For example, the point-of-sale system database used in Example 2: Point-of-sale system requires four CPUs, 16 GB of memory, 200 IOPS, and 30 GB (15 percent unique data converted to physical capacity consumption is 200 * 0.15 = 30 GB) of physical capacity. This translates to four RVMs of CPU, eight RVMs of memory, eight RVMs of IOPS, and two RVMs of capacity. Table 8 shows how that fits into the worksheet row.


Table 8. Sample worksheet row

Application         Row                                     CPU (vCPUs)  Memory (GB)  IOPS  Capacity (GB)  Equivalent RVMs
Sample application  Resource requirements                   4            16           200   30             N/A
                    Equivalent reference virtual machines   4            8            8     1              8

Use the highest value in the row to fill in the Equivalent RVMs column. As shown in Figure 18, the sample requires eight RVMs.

Figure 18. Required resources from the RVM pool
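The Table 7 relationships are simple to automate. The following PowerShell sketch is a convenience helper, not part of the VSPEX tooling; the function name and the assumption that capacity is entered as logical capacity (before the 15 percent unique-data conversion) are ours:

function Get-EquivalentRvm {
    param(
        [int]$vCpus,
        [double]$MemoryGB,
        [double]$Iops,
        [double]$CapacityGB,            # logical capacity before data reduction
        [double]$UniqueDataRatio = 0.15
    )
    # Table 7 relationships; round every value up to the next whole number.
    $cpu      = [math]::Ceiling($vCpus)
    $memory   = [math]::Ceiling($MemoryGB / 2)
    $io       = [math]::Ceiling($Iops / 25)
    $capacity = [math]::Ceiling($CapacityGB * $UniqueDataRatio / 100)
    [pscustomobject]@{
        CPU        = $cpu
        Memory     = $memory
        IOPS       = $io
        Capacity   = $capacity
        Equivalent = ($cpu, $memory, $io, $capacity | Measure-Object -Maximum).Maximum
    }
}

# Point-of-sale example from Table 8: returns 4, 8, 8, 1 and an Equivalent of 8.
Get-EquivalentRvm -vCpus 4 -MemoryGB 16 -Iops 200 -CapacityGB 200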

Implementation example – Stage 1

A customer wants to build a virtual infrastructure to support one custom-built application, one point-of-sale system, and one web server. The customer computes the sum of the Equivalent RVMs column, as shown in Table 9, to calculate the total number of RVMs required. The table shows the result of the calculation, rounded up to the nearest whole number.


Table 9. Example applications – Stage 1

Application                                      Row                                     CPU (vCPUs)  Memory (GB)  IOPS  Capacity (GB)  RVMs
Example application 1: Custom-built application  Resource requirements                   1            3            15    5              N/A
                                                 Equivalent reference virtual machines   1            2            1     1              2
Example application 2: Point-of-sale system      Resource requirements                   4            16           200   60             N/A
                                                 Equivalent reference virtual machines   4            8            8     1              8
Example application 3: Web server                Resource requirements                   2            8            50    4              N/A
                                                 Equivalent reference virtual machines   2            4            2     1              4
Total equivalent reference virtual machines                                                                                             14

This example requires 14 RVMs. According to the sizing guidelines, a Starter X-Brick with 13 SSDs provides sufficient resources for current needs and room for growth, because it supports up to 300 RVMs.

Implementation example – Stage 2

The customer must add a decision-support database to the virtual infrastructure. Using the same strategy, you can calculate the number of RVMs required, as shown in Table 10.

Table 10. Example applications – Stage 2

Application                                       Row                                     CPU (vCPUs)  Memory (GB)  IOPS   Capacity (GB)  Equivalent RVMs
Example application 1: Custom-built application   Resource requirements                   1            3            15     5              N/A
                                                  Equivalent reference virtual machines   1            2            1      1              2
Example application 2: Point-of-sale system       Resource requirements                   4            16           200    30             N/A
                                                  Equivalent reference virtual machines   4            8            8      1              8
Example application 3: Web server                 Resource requirements                   2            8            50     4              N/A
                                                  Equivalent reference virtual machines   2            4            2      1              4
Example application 4: Decision-support database  Resource requirements                   20           128          1,400  1,500          N/A
                                                  Equivalent reference virtual machines   20           64           56     15             64
Total equivalent reference virtual machines                                                                                               78

This example requires 78 RVMs. According to the sizing guidelines, a Starter X-Brick with 13 SSDs, which supports up to 300 virtual machines, provides sufficient resources for current needs and room for growth.

Figure 19 shows that 222 RVMs are available after implementing one Starter X-Brick.

Figure 19. Aggregate resource requirements - Stage 2

Fine-tuning hardware resources

This process usually determines the recommended hardware size for servers and storage. However, in some cases, you might want to further customize the hardware resources available to the system. A complete description of system architecture is beyond the scope of this guide; however, you can perform additional customization at this point.

Server resources

For some workloads, the relationship between server needs and storage needs does not match what is outlined in the RVM. You should size the server and storage layers separately in this scenario, as shown in Figure 20.

Figure 20. Customizing server resources

To do this, first total the resource requirements for the server components, as shown in Table 11. In the Server and storage resource component totals row, add up the server resource requirements from the applications in the table.

Note: When customizing resources in this way, confirm that storage sizing is still appropriate. The Server and storage resource component totals row in Table 11 describes the required amount of storage.

Table 11. Server resource component totals

Application                                       Row                                     CPU (vCPUs)  Memory (GB)  IOPS  Capacity (GB)  RVMs
Example application 1: Custom-built application   Resource requirements                   1            3            15    5              N/A
                                                  Equivalent reference virtual machines   1            2            1     1              2
Example application 2: Point-of-sale system       Resource requirements                   4            16           200   30             N/A
                                                  Equivalent reference virtual machines   4            8            8     1              8
Example application 3: Web server                 Resource requirements                   2            8            50    4              N/A
                                                  Equivalent reference virtual machines   2            4            2     1              4
Example application 4: Decision-support database  Resource requirements                   10           64           700   768            N/A
                                                  Equivalent reference virtual machines   10           32           28    8              32
Total equivalent reference virtual machines                                                                                              46
Server and storage resource component totals                                              17           155

Note: Calculate the sum of the Resource requirements row for each application, not the Equivalent reference virtual machines, to get the Server and storage resource component totals.

In this example, the target architecture required 17 vCPUs and 155 GB of memory. If four vCPUs per physical processor core are allocated, and memory over-provisioning is not necessary, the architecture requires five physical processor cores and 155 GB of memory. With these numbers, the solution can be effectively implemented with fewer server resources.

Note: Keep high-availability requirements in mind when customizing the hardware resources.

EMC VSPEX Sizing Tool

To simplify the sizing of this solution, EMC has produced the VSPEX Sizing Tool. This tool uses the same sizing process described above and also incorporates sizing for other VSPEX solutions.

The VSPEX Sizing Tool enables you to input your resource requirements from the customer’s answers in the qualification worksheet. After you complete the inputs to the VSPEX Sizing Tool, the tool generates a series of recommendations, which allows you to validate your sizing assumptions and provides platform configuration information that meets those requirements.

You can access this tool at: EMC VSPEX Sizing Tool.


Chapter 6 VSPEX Solution Implementation

This chapter presents the following topics:

Overview

Pre-deployment tasks

Network implementation

Microsoft Hyper-V hosts installation and configuration

Microsoft SQL Server database installation and configuration

System Center Virtual Machine Manager server deployment

Storage array preparation and configuration


Overview

The deployment process consists of the stages listed in Table 12. After deployment, integrate the VSPEX infrastructure with the existing customer network and server infrastructure.

Table 12. Deployment process overview

Stage  Description                                                               Reference
1      Verify prerequisites                                                      Pre-deployment tasks
2      Obtain the deployment tools                                               Deployment resources checklist
3      Gather customer configuration data                                        Customer configuration data
4      Rack and cable the components                                             Vendor documentation
5      Configure the switches and networks; connect to the customer network      Network implementation
6      Install and configure the XtremIO array                                   Storage array preparation and configuration
7      Configure virtual machine storage                                         Storage array preparation and configuration
8      Install and configure the servers                                         Microsoft Hyper-V hosts installation and configuration
9      Set up Microsoft SQL Server (used by SCVMM)                               Microsoft SQL Server database installation and configuration
10     Install and configure the SCVMM server and virtual machine networking    System Center Virtual Machine Manager server deployment

Pre-deployment tasks

The pre-deployment tasks, shown in Table 13, include procedures that are not directly related to the environment installation and configuration but whose results are needed at the time of installation. Pre-deployment tasks include gathering hostnames, IP addresses, VLAN IDs, license keys, and installation media. Perform these tasks before the customer visit to decrease the time required on site.


Table 13. Pre-deployment tasks

Task                 Description
Gathering documents  Gather the related documents listed in Appendix A. These documents provide setup procedures and deployment best practices for the various components of the solution.
Gathering tools      Gather the required and optional tools for the deployment. Use Table 14 to confirm that all equipment, software, and appropriate licenses are available before starting the deployment process.
Gathering data       Collect the customer-specific configuration data for networking, naming, and required accounts. Enter this information into the Customer configuration worksheet for reference during the deployment process.

Deployment resources checklist

Table 14 lists the hardware, software, and licenses required to configure the solution. For more information, refer to Table 1 and Table 2.

Table 14. Deployment resources checklist

Requirement  Description
Hardware     Physical servers to host virtual machines: sufficient physical server capacity as determined by sizing for the deployment (see Chapter 5)
             Microsoft Hyper-V servers to host virtual infrastructure servers (the existing infrastructure may already meet this requirement)
             Switch port capacity and capabilities as required by the virtual machine infrastructure
             EMC XtremIO X-Bricks in the type and quantity determined by sizing for the deployment (see Chapter 5)
Software     Microsoft Windows Server 2012 R2 (or later) Datacenter Edition installation media
             Microsoft System Center Virtual Machine Manager 2012 R2 installation media
             Microsoft SQL Server 2012 or newer installation media (this requirement may be covered by the existing infrastructure)
Licenses     Microsoft System Center Virtual Machine Manager 2012 R2 license keys
             Microsoft Windows Server 2012 R2 Datacenter Edition license keys (an existing Microsoft Key Management Server (KMS) may cover this requirement)
             Microsoft SQL Server Standard Edition license key (the existing infrastructure may already meet this requirement)


Customer configuration data

Gather information such as IP addresses and hostnames as part of the planning process to reduce time on site.

The Customer configuration worksheet provides a set of tables to maintain a record of relevant customer information. Add, record, and modify information as needed during the deployment process.

Network implementation

This section describes the network infrastructure requirements needed to support this architecture. Table 15 provides a summary of the tasks for network configuration, and references for further information.

Table 15. Tasks for switch and network configuration

Task                                    Description                                                                                                    Reference
Configuring the infrastructure network  Configure the storage array and Hyper-V host infrastructure networking.                                       Configuring the infrastructure network
Configuring VLANs                       Configure private and public VLANs as required.                                                               Vendor switch configuration guide
Completing network cabling              Connect the switch interconnect ports, the XtremIO front-end ports, and the Microsoft Hyper-V server ports.   Completing network cabling

Preparing the network switches

For validated performance levels and high availability, this solution requires the switching capacity listed in Table 1. You do not need to use new hardware if the existing infrastructure meets the requirements.

Configuring the infrastructure network

To provide both redundancy and additional network bandwidth, the infrastructure network requires redundant network links for:

Each Hyper-V host

The storage array

The switch interconnect ports

The switch uplink ports

This configuration is required whether the network infrastructure for the solution already exists or is being deployed alongside other components of the solution.

Figure 21 shows a sample redundant infrastructure for this solution and the use of redundant switches and links to ensure that there are no single points of failure.


Converged switches provide customers with different protocol options (FC or iSCSI) for storage networks for block storage. While existing 8 Gb FC switches are acceptable for the FC protocol option, use 10 Gb Ethernet network switches for iSCSI.

Figure 21. Sample Ethernet network architecture

Configuring VLANs

Ensure that there are adequate network switch ports for the storage array and Windows hosts. EMC recommends that you configure the Windows hosts with three VLANs:

Customer data network: Virtual machine networking (these are customer-facing networks, which can be separated if needed).

Storage network: XtremIO data networking (private network).

Management network: Live Migration or Storage Migration networking (private network). These networks can also reside on separate VLANs for additional traffic isolation.


Configuring jumbo frames (iSCSI only)

Use jumbo frames for the iSCSI protocol. Set the maximum transmission unit (MTU) to 9,000 for the switch ports on the iSCSI storage network. To enable jumbo frames on the storage and host ports, refer to the switch vendor's guidelines.

Completing network cabling

Ensure that all solution servers, switch interconnects, and switch uplinks have redundant connections and are plugged into separate switching infrastructures. Ensure that there is a complete connection to the existing customer network.

Note: The new equipment is connected to the existing customer network. Ensure that unexpected interactions do not cause service issues on the customer network.

Microsoft Hyper-V hosts installation and configuration

This section provides the requirements for installing and configuring the Windows hosts and infrastructure servers required to support the architecture. Table 16 describes the tasks that must be completed.

Table 16. Tasks for server installation

Task                                                    Description                                                                                                       Reference
Installing the Windows hosts                            Install Windows Server 2012 R2 on the physical servers for the solution.                                         Installing the Windows hosts
Installing Hyper-V and configuring Failover Clustering  Add the Hyper-V server role, add the Failover Clustering feature, and create and configure the Hyper-V cluster.  Installing Hyper-V and configuring failover clustering
Configuring Microsoft Hyper-V networking                Configure Windows host networking, including NIC teaming and the virtual switch network.                         Configuring Windows host networking
Installing PowerPath on Windows servers                 Install and configure PowerPath to manage multipathing for XtremIO LUNs.                                         PowerPath and PowerPath/VE for Windows Installation and Administration Guide
Planning virtual machine memory allocations             Ensure that Microsoft Hyper-V guest memory-management features are configured properly for the environment.       Planning virtual machine memory allocations

Installing the Windows hosts

Follow Microsoft best practices to install Windows Server 2012 R2 on the physical servers for the solution. The installation requires hostnames, IP addresses, and an administrator password. The Customer configuration worksheet provides appropriate values.


Installing Hyper-V and configuring failover clustering

To install Hyper-V and configure Failover Clustering:

1. Install and patch Windows Server 2012 R2 on each Windows host.

2. Configure the Hyper-V role and the Failover Clustering feature.
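A minimal PowerShell sketch of these two steps follows; the hostnames, cluster name, and cluster IP address are placeholders to adapt from the Customer configuration worksheet:

# Add the required role and feature on each host (run elevated; hosts restart).
Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

# After all hosts are back up, validate and create the cluster from one host.
Test-Cluster -Node 'hyperv01', 'hyperv02'
New-Cluster -Name 'VSPEXCLU01' -Node 'hyperv01', 'hyperv02' -StaticAddress '10.10.10.50'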

Configuring Windows host networking

To ensure performance and availability, the following NICs are required:

At least one NIC for virtual machine networking and management (can be separated by network or VLAN if necessary)

At least two 10 GbE NICs for the storage network (iSCSI)

At least two 8 Gb FC HBAs for the storage network (FC)

At least one NIC for Live Migration

Note: Enable jumbo frames for NICs that transfer iSCSI data, and set the MTU to 9,000. For instructions, refer to the NIC configuration guide. A host-side example follows.
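The following PowerShell sketch illustrates one way to configure the host networking described above. All adapter, team, and switch names are hypothetical, and the jumbo-frame registry keyword and value vary by NIC vendor, so verify both against your NIC documentation:

# Team two NICs for virtual machine and management traffic.
New-NetLbfoTeam -Name 'VMTeam' -TeamMembers 'NIC1', 'NIC2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind a Hyper-V virtual switch to the team and keep a management vNIC.
New-VMSwitch -Name 'VMNetwork' -NetAdapterName 'VMTeam' -AllowManagementOS $true

# Enable jumbo frames on the iSCSI NICs. The registry keyword and value
# (9014 bytes here, including headers) are vendor-dependent assumptions.
Set-NetAdapterAdvancedProperty -Name 'iSCSI1', 'iSCSI2' `
    -RegistryKeyword '*JumboPacket' -RegistryValue 9014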

Installing and configuring multipath software

To improve the performance and capabilities of the XtremIO storage array, you can use either the Windows native multipathing (MPIO) feature or EMC PowerPath for Windows on the Microsoft Hyper-V hosts.

For detailed information and the configuration steps to install EMC PowerPath, refer to the PowerPath and PowerPath/VE for Windows Installation and Administration Guide.

Note: This solution uses PowerPath as the multipathing solution to manage XtremIO LUNs.

Planning virtual machine memory allocations

Server capacity

Server capacity in the solution is required for two purposes:

To support the new virtualized server infrastructure

To support required infrastructure services such as authentication and authorization, DNS, and databases

For information about the minimum infrastructure requirements, refer to Table 3. You do not need new hardware if the existing infrastructure meets the requirements.

Memory configuration

Take care to properly size and configure the server memory for this solution.

Memory virtualization techniques, such as Dynamic Memory, enable the hypervisor to abstract physical host resources to provide resource isolation across multiple virtual machines and avoid resource exhaustion. With advanced processors, such as Intel processors with Extended Page Table support, abstraction takes place within the CPU. Otherwise, abstraction takes place within the hypervisor itself.

Microsoft Hyper-V includes multiple techniques for maximizing the use of system resources such as memory. Do not substantially overcommit resources, because overcommitment can lead to poor system performance. The exact implications of memory overcommitment in a real-world environment are difficult to predict, but performance degradation due to resource exhaustion increases with the amount of memory overcommitted.
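As an illustration of the Dynamic Memory feature mentioned above, the following hedged PowerShell sketch configures a hypothetical virtual machine with bounds that follow the 2 GB RVM baseline rather than overcommitting the host:

# Enable Dynamic Memory with the 2 GB RVM baseline as the startup value and
# a bounded maximum so the host cannot be substantially overcommitted.
Set-VMMemory -VMName 'app-vm01' -DynamicMemoryEnabled $true `
    -MinimumBytes 1GB -StartupBytes 2GB -MaximumBytes 4GB -Buffer 20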

Microsoft SQL Server database installation and configuration

Most customers use a management tool to provision and manage their server virtualization solution, although a management tool is not required. The management tool requires a database back end. SCVMM uses SQL Server 2012 as the database platform.

Note: Do not use Microsoft SQL Server Express Edition for this solution.

Table 17 lists the tasks for installing and configuring a SQL Server database for the solution. The subsequent sections describe these tasks.

Table 17. Tasks for SQL Server database setup

Task                                                 Description                                                                                                                  Reference
Creating a virtual machine for SQL Server            Create a virtual machine to host SQL Server. Verify that the virtual machine meets the hardware and software requirements.   msdn.microsoft.com
Installing Microsoft Windows on the virtual machine  Install Microsoft Windows Server 2012 R2 on the virtual machine created to host SQL Server.                                  technet.microsoft.com
Installing Microsoft SQL Server                      Install Microsoft SQL Server on the designated virtual machine.                                                              technet.microsoft.com
Configuring SQL Server for SCVMM                     Configure a remote SQL Server instance for SCVMM.                                                                            technet.microsoft.com


Creating a virtual machine for SQL Server

On one of the Windows servers designated for infrastructure virtual machines, create a virtual machine with sufficient computing resources for SQL Server. Use the datastore designated for the shared infrastructure.

Note: EMC recommends 2 vCPUs and 6 GB of memory for the SQL Server virtual machine. If the customer environment already contains a SQL Server instance, refer to Configuring SQL Server for SCVMM.
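A minimal PowerShell sketch of this step follows; the virtual machine name, VHDX path, and switch name are placeholders for values from the Customer configuration worksheet:

# Create the SQL Server virtual machine with 2 vCPUs and 6 GB of memory.
New-VM -Name 'SQL01' -Generation 2 -MemoryStartupBytes 6GB `
    -NewVHDPath 'C:\ClusterStorage\Volume1\SQL01\SQL01.vhdx' -NewVHDSizeBytes 100GB `
    -SwitchName 'VMNetwork'
Set-VMProcessor -VMName 'SQL01' -Count 2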

Installing Microsoft Windows on the virtual machine

The SQL Server service must run on Microsoft Windows. Install the required Windows version on the virtual machine, and select the appropriate network, time, and authentication settings.

Installing SQL Server

Install SQL Server on the virtual machine from the SQL Server installation media. Microsoft SQL Server Management Studio is one of the components in the SQL Server installer. Install this component on the SQL Server instance directly, and on an administrator console.

In many implementations, you may want to store data files in locations other than the default path. To change the default path for storing data files:

1. Right-click the server object in SQL Server Management Studio and select Database Properties.

2. In the Properties window, change the default data and log directories for new databases created on the server.

Note: For high availability, install SQL Server on a Microsoft failover cluster.

Configuring SQL Server for SCVMM

To use SCVMM in this solution, configure the SQL Server instance for remote connections. Create individual login accounts for each service that accesses a database on the SQL Server instance.

For detailed requirements and instructions, refer to the Microsoft TechNet Library topic Configuring a Remote Instance of SQL Server for VMM.

For further information, refer to the list of documents in Reference Documentation.


System Center Virtual Machine Manager server deployment

This section provides information about configuring SCVMM for the solution. Table 18 outlines the tasks to be completed.

Table 18. Tasks for SCVMM configuration

Task                                                          Description                                                                                                                                    Reference
Creating the SCVMM host virtual machine                       Create a virtual machine for the SCVMM server.                                                                                                 Create a virtual machine
Installing the SCVMM guest OS                                 Install Windows Server 2012 R2 Datacenter Edition on the SCVMM host virtual machine.                                                           Install the guest operating system
Installing the SCVMM server                                   Install an SCVMM server.                                                                                                                       How to Install a VMM Management Server; Installing the VMM Server
Installing the SCVMM Admin Console                            Install an SCVMM Admin Console.                                                                                                                How to Install the VMM Console; Installing the VMM Administrator Console
Installing the SCVMM agent locally on the hosts               Install an SCVMM agent locally on the hosts that SCVMM manages.                                                                               Installing a VMM Agent Locally on a Host
Adding the Hyper-V cluster to SCVMM                           Add the Hyper-V cluster to SCVMM.                                                                                                              How to Add a Host Cluster to VMM
Creating a virtual machine in SCVMM                           Create a virtual machine in SCVMM.                                                                                                             Creating and Deploying Virtual Machines in VMM; How to Create a Virtual Machine with a Blank Virtual Hard Disk
Performing partition alignment                                Use diskpart.exe to perform partition alignment, assign drive letters, and assign the file allocation unit size of the virtual machine's disk drive.   Disk Partition Alignment Best Practices for SQL Server
Creating a template virtual machine                           Create a template virtual machine from the existing virtual machine. Create the hardware profile and guest OS profile at this time.           How to Create a Virtual Machine Template; How to Create a Template from a Virtual Machine
Deploying virtual machines from the template virtual machine  Deploy the virtual machines from the template virtual machine.                                                                                How to Create and Deploy a Virtual Machine from a Template; How to Deploy a Virtual Machine


Creating the SCVMM host virtual machine

To deploy an SCVMM server as a virtual machine on a Hyper-V server that is installed as part of this solution, connect directly to an infrastructure Hyper-V server by using Hyper-V Manager. Create a virtual machine on the Hyper-V server with the customer guest OS configuration, using infrastructure server storage presented from the storage array. The memory and processor requirements for the SCVMM server depend on the number of Hyper-V hosts and virtual machines that SCVMM must manage.

Installing the SCVMM guest OS

Install the guest OS on the SCVMM host virtual machine. Install the required Windows Server version on the virtual machine and select appropriate network, time, and authentication settings.

Installing the SCVMM server

Set up the SCVMM database and the default library server, and then install the SCVMM server.

To install the SCVMM server, refer to the Microsoft TechNet Library topic Installing the VMM Server.

Installing the SCVMM Admin Console

The SCVMM Admin Console is a client tool for managing the SCVMM server. Install the SCVMM Admin Console on the same computer as the VMM server.

To install the SCVMM Admin console, refer to the Microsoft TechNet Library topic Installing the VMM Administrator Console.

Installing the SCVMM agent locally on a host

If the hosts must be managed on a perimeter network, install an SCVMM agent locally on each host before adding the host to SCVMM. Optionally, install an SCVMM agent locally on a host in a domain before adding the host to SCVMM. In all other cases, agents are installed automatically.

To install a VMM agent locally on a host, refer to the Microsoft TechNet Library topic Installing a VMM Agent Locally on a Host.

Adding the Hyper-V cluster to SCVMM

SCVMM manages the Hyper-V cluster. Add the deployed Hyper-V cluster to SCVMM.

To add the Hyper-V cluster, refer to the Microsoft TechNet Library topic How to Add a Host Cluster to VMM.

Storage array preparation and configuration

This section provides information about creating volumes in XtremIO and mapping XtremIO volumes to the SCVMM environment.

Implementation instructions and best practices may vary depending on the storage network protocol selected for the solution. Follow these high-level steps in each case:

1. Configure the XtremIO array, including registering the host initiator group.

2. Provision storage and configure LUN masking for the Hyper-V hosts.


The following sections explain the options for each step separately, depending on whether the FC or iSCSI protocol is selected.

Configuring the XtremIO array

This section describes how to configure the XtremIO storage array for host access using a block-only protocol (FC or iSCSI). In this solution, XtremIO provides data storage for the Hyper-V hosts. Table 19 describes the XtremIO configuration tasks.

Table 19. Tasks for XtremIO configuration

Task                                              Description                                                                                       Reference
Preparing the XtremIO array                       Physically install the XtremIO hardware following the procedures in the product documentation.   XtremIO Storage Array Operation Guide; XtremIO Storage Array Site Preparation Guide version 3.0; XtremIO Storage Array User Guide version 3.0; vendor switch configuration guide
Setting up the initial XtremIO configuration      Configure the IP addresses and other key parameters on the XtremIO.
Provisioning storage for Microsoft Hyper-V hosts  Create the storage areas required for the solution.

Preparing the XtremIO array

The XtremIO Storage Array Operation Guide provides instructions to assemble, rack, cable, and power up the XtremIO. There are no specific setup steps for this solution.

Setting up the initial XtremIO configuration

After completing the initial XtremIO array setup, configure key information about the existing environment so that the storage array can communicate with other devices in the environment. Configure the following common items in accordance with your IT data center policies and existing infrastructure information:

DNS

NTP

Storage network interfaces

For data connections using the FC protocol

Ensure that one or more servers are connected to the XtremIO storage system through qualified FC switches. For detailed instructions, refer to the EMC Host Connectivity Guide for Windows.

For data connections using the iSCSI protocol

1. Connect one or more servers to the XtremIO storage system through qualified IP switches. For detailed instructions, refer to the EMC Host Connectivity Guide for Windows.

2. Additionally, configure the following items in accordance with your IT data center policies and existing infrastructure information:

a. Set up a storage network IP address.


Logically isolate the other networks in the solution, as described in Chapter 3. This ensures that other network traffic does not impact traffic between hosts and storage.

b. Enable jumbo frames on the XtremIO front-end iSCSI ports.

Use jumbo frames for iSCSI networks to permit greater network bandwidth. Apply the MTU size specified below across all network interfaces in the environment. To enable the jumbo frames option:

i. From the menu bar, click the Administration icon to display the Administration workspace.

ii. Click the Cluster tab and select iSCSI Ports Configuration from the left pane. The iSCSI Ports Configuration screen appears.

iii. In the Port Properties Configuration section, select the Enable Jumbo Frames option.

iv. Set the MTU value by using the up and down arrows.

v. Click Apply.

The reference documents listed in Appendix A provide more information on how to configure the XtremIO platform. The Storage configuration guidelines section provides more information on the disk layout.

Managing the initiator group

The XtremIO storage array uses "initiators" to refer to ports that can access a volume. The XtremIO storage array manages initiators by assigning them to an initiator group. You can do this either by editing an initiator group in the GUI, as shown in Figure 22, and adding the initiator's properties, or by using the relevant CLI command.

Figure 22. XtremIO initiator group


The initiators within an initiator group share access to one or more of the cluster's volumes. You can define which initiator groups have access to which volumes using LUN mapping. For detailed instructions, refer to the EMC XtremIO User Guide.

Managing the volumes

This section describes provisioning XtremIO volumes for Microsoft Hyper-V hosts. You can define various quantities of disk space as volumes in an active cluster. Volumes are defined as:

Volume size: The quantity of disk space reserved for the volume.

LB size: The logical block size in bytes.

Alignment-offset: A value for preventing unaligned access performance problems.

Note: In the GUI, selecting a predefined volume type defines the alignment-offset and LB size values. In the CLI, you can define the alignment-offset and LB size values separately.

This section explains how to manage volumes using the XtremIO storage array GUI. Complete the steps in the XtremIO GUI to configure LUNs to store virtual machines.

When XtremIO initializes during the installation process, the data protection domain is created automatically. Provision the LUNs based on the sizing information in Chapter 4.

This example uses the array recommended maximums described in Chapter 4.

1. Log in to the XtremIO GUI.

2. From the menu bar, click Configuration.

3. From the Volumes pane, click Add, as shown in Figure 23.

Figure 23. Adding volume

4. In the Add New Volumes window, as shown in Figure 24, define the following:

a. Name: The name of the volume.

b. Size: The amount of disk space allocated for this volume.


c. Volume Type: Select one of the following types that define the LB size and alignment-offset:

i. Normal (512 LBs)

ii. 4 KB LBs

iii. Legacy Windows (offset:63)

d. Small I/O Alerts: Enable if you want an alert to be sent when small I/Os (less than 4 KB) are detected.

e. Unaligned I/O Alerts: Enable if you want an alert to be sent when unaligned I/Os are detected.

f. VAAI TP Alerts: Enable if you want an alert to be sent when the storage capacity reaches the set limit.

Figure 24. Volume summary

5. Complete the volume creation:

a. If you do not want to add the new volumes to a folder, click Finish. The new volumes are created and appear in the root under Volumes in the Configuration window.

b. If you want to add the new volumes to a folder:

i. Click Next.

ii. Select the existing folder (or click New Folder to create a new one).


iii. Click Finish. The new volumes are created and appear in the selected folder under Volumes in the Configuration window.

Table 20 lists a single X-Brick storage allocation layout for 700 virtual machines in the solution.

Table 20. Storage allocation for block data

Configuration        Available physical capacity (TB)  Number of SSDs (400 GB) for single X-Brick  Number of LUNs for single X-Brick  Volume capacity (TB)
700 virtual servers  7.2                               25                                          1                                  50

Note: In this solution, each virtual machine occupies 102 GB, with 100 GB for the OS and user space and a 2 GB swap file.

Mapping volumes to an initiator group

This section describes how to map XtremIO volumes to an initiator group. To enable initiators within an initiator group to access a volume's disk space, you can map the volume to the initiator group. A LUN is automatically assigned when this is done. This number appears under Selected Volumes in the Configuration window.

To map a volume to an initiator:

1. From the menu bar, click Configuration.

2. Under Volumes, select the volumes you want to map. To select multiple volumes, hold Shift and select the volumes. The volumes appear under Volumes in the Configuration window, as shown in Figure 25.

Figure 25. Volumes and initiator group

3. Under Initiator Groups, select the initiator group to which you want to map the volume. The initiator appears under Initiator Groups in the Configuration window.

4. Once you have selected the volumes and initiator groups you want to map, under LUN Mapping Configuration, click Map All.

5. Click Apply, as shown in Figure 26. The selected volumes are mapped to the initiator group.


Figure 26. Mapping volumes

The XtremIO volumes have now been created and mapped to an initiator group, and the disks are visible to the Windows hosts.
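If you scripted volume creation as sketched earlier, mapping can be scripted the same way through the lun-maps object type. Again, this is a hedged sketch; the volume and initiator-group names are illustrative, and the field names should be confirmed in the EMC XtremIO Storage Array RESTful API Guide.

# Map a volume to an initiator group through the XtremIO RESTful API.
# Reuses $xms and $cred from the previous sketch; names are illustrative.
$body = @{ "vol-id" = "VSPEX-CSV-01"; "ig-id" = "HyperV-Cluster-IG" } | ConvertTo-Json
Invoke-RestMethod -Uri "$xms/api/json/types/lun-maps" -Method Post `
    -Credential $cred -ContentType "application/json" -Body $body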

Creating the CSV disk

To create the CSV disk for the Failover Cluster:

1. On each Microsoft Hyper-V host, open Disk Management and select Action > Rescan Disks. After the rescan, all the XtremIO volumes appear in Disk Management on each Hyper-V host.

2. On one of the Hyper-V hosts, initialize each XtremIO volume and format it with the NTFS file system.

3. Under Failover Cluster Manager, expand the name of the cluster, and then expand Storage. Right-click Disks, and then click Add Disk. Select the disks and click OK.

4. To add the disks to the CSV, select all the cluster disks, right-click, and then click Add to Cluster Shared Volumes.

Note: EMC recommends that you format the Windows C drive and CSV volumes with the Allocation Unit Size set to 8,192 (8 KB). To format the boot volume to 8,192, refer to EMC best practices.

To create the CSV disks, refer to the Microsoft TechNet Library topic Use Cluster Shared Volumes in a Failover Cluster.
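The same preparation can also be scripted with the built-in Storage and FailoverClusters PowerShell modules. The following is a minimal sketch for one volume; the disk number, volume label, and cluster disk name are illustrative and must match your environment.

# Rescan for the newly mapped XtremIO volumes (equivalent to Rescan Disks).
Update-HostStorageCache
# Initialize and format one volume with the recommended 8 KB allocation unit.
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 8192 -NewFileSystemLabel "CSV01"
# Add the disk to the cluster, then promote it to a Cluster Shared Volume.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"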

Creating a virtual machine in SCVMM

Create a virtual machine in SCVMM to use as a virtual machine template. Install the guest OS and required software on the virtual machine, and then configure the Windows and application settings.

To create a virtual machine, refer to the Microsoft TechNet Library topic How to Create and Deploy a Virtual Machine from a Blank Virtual Hard Disk.

Performing partition alignment

Perform disk partition alignment only for virtual machines running Windows Server 2003 R2 or earlier.

EMC recommends implementing disk partition alignment with an offset of 1,024 KB, and formatting the disk drive with a file allocation unit (cluster) size of 8 KB.



To perform partition alignment, assign drive letters, and assign file allocation unit size using diskpart.exe, refer to the Microsoft TechNet topic Disk Partition Alignment Best Practices for SQL Server.
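For guests where the newer Storage cmdlets are unavailable, diskpart.exe can apply the recommended settings. The following PowerShell wrapper is a hypothetical sketch; the disk number and drive letter are illustrative, and on Windows Server 2003 guests you can run the same diskpart script directly with diskpart /s.

# Align a data disk at a 1,024 KB offset and format it with an 8 KB
# allocation unit by driving diskpart.exe with a script file.
$script = @"
select disk 1
create partition primary align=1024
assign letter=E
format fs=ntfs unit=8192 quick
"@
$file = Join-Path $env:TEMP "align.txt"
Set-Content -Path $file -Value $script
diskpart.exe /s $file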

Creating a template virtual machine

Create a template virtual machine from the existing virtual machine in SCVMM. Create a hardware profile and a guest OS profile when creating the template, and use these profiles to deploy the virtual machines.

Converting a virtual machine into a template destroys the source virtual machine. Consequently, you should back up the virtual machine before converting it.

To create a template from a virtual machine, refer to the Microsoft TechNet topic How to Create a Template from a Virtual Machine.

Deploying virtual machines from the template

The virtual machine deployment wizard in the SCVMM Admin Console enables you to save the PowerShell scripts that perform the deployment and reuse them to deploy other virtual machines with the same configuration.

To deploy a virtual machine from a template, refer to the Microsoft TechNet topic How to Deploy a Virtual Machine.
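As an outline of what such a saved script can look like, the following sketch uses the VMM PowerShell module. The template, host, VM, and path names are illustrative assumptions, and parameter sets vary between VMM versions, so treat this as a starting point rather than a definitive procedure.

# Deploy a virtual machine from an existing SCVMM template (sketch).
Import-Module virtualmachinemanager
$template = Get-SCVMTemplate -Name "VSPEX-Win2012R2-Template"   # illustrative name
$vmHost   = Get-SCVMHost -ComputerName "HyperV-Host-01"         # illustrative host
New-SCVirtualMachine -Name "VM-001" -VMTemplate $template `
    -VMHost $vmHost -Path "C:\ClusterStorage\Volume1"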



Chapter 7 Solution Verification

This chapter presents the following topics:

Overview

Post-installation checklist

Deploying and testing a single virtual machine

Verifying solution component redundancy


Overview

This chapter provides a list of items to review and tasks to perform after configuring the solution. To verify the configuration and functionality of specific aspects of the solution, and to ensure that the configuration meets the customer’s core availability requirements, complete the tasks listed in Table 21.

Table 21. Testing the installation

Task | Description | Reference
Post-installation checklist | Verify that sufficient virtual ports exist on each Hyper-V host virtual switch. | Hyper-V: How many network cards do I need?
Post-installation checklist | Verify that the VLAN for virtual machine networking is configured correctly on each Hyper-V host. | Network Recommendations for a Hyper-V Cluster in Windows Server 2012 R2
Post-installation checklist | Verify that each Hyper-V host has access to the required Cluster Shared Volumes. | Hyper-V: Using Hyper-V and Failover Clustering
Post-installation checklist | Verify that the live migration interfaces are configured correctly on all Hyper-V hosts. | Virtual Machine Live Migration Overview
Deploying and testing a single virtual machine | Deploy a single virtual machine by using the System Center Virtual Machine Manager (SCVMM) interface. | Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager
Verifying solution component redundancy | Restart each storage controller in turn and ensure that storage connectivity is maintained. | Vendor documentation
Verifying solution component redundancy | Disable each of the redundant switches in turn and verify that Hyper-V host, virtual machine, and storage array connectivity remains intact. | Vendor documentation
Verifying solution component redundancy | On a Hyper-V host that contains at least one virtual machine, restart the host and verify that the virtual machine can successfully migrate to an alternate host. | Creating a Hyper-V Host Cluster in VMM Overview


Post-installation checklist

Before moving to production, on each Windows Server, verify the following critical items:

The VLAN for virtual machine networking is configured correctly.

The storage networking is configured correctly.

Each server can access the required CSVs.

A network interface is configured correctly for Live Migration.

Deploying and testing a single virtual machine

Deploy a virtual machine to verify that the solution functions as expected. Verify that the virtual machine is joined to the applicable domain, that it has access to the expected networks, and that you can log in to it.
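A few scripted spot checks can supplement the manual verification. In this sketch, run from a Hyper-V host, the virtual machine name is illustrative:

# Check reachability, power state, and domain membership of the new VM.
Test-Connection -ComputerName "VM-001" -Count 2
Get-VM -Name "VM-001" | Select-Object Name, State, Uptime
(Get-WmiObject -Class Win32_ComputerSystem -ComputerName "VM-001").Domain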

Verifying solution component redundancy

To ensure that the various components of the solution maintain availability requirements, test specific scenarios related to maintenance or hardware failures.

Complete the following steps to restart each XtremIO storage controller in turn and verify that connectivity to the Microsoft Hyper-V CSV file system is maintained throughout each restart:

1. Log in to the XtremIO XMS CLI console with administrator credentials.

2. Power off storage controller 1 using the following commands:

deactivate-storage-controller sc-id=1

power-off sc-id=1

3. Activate storage controller 1 using the following commands:

power-on sc-id=1

activate-storage-controller sc-id=1

4. When the cycle completes, repeat the same commands with sc-id=2 to verify the other storage controller.

5. On the host side, enable maintenance mode and verify that you can successfully migrate a virtual machine to an alternate host.
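Step 5 can also be performed from PowerShell with the FailoverClusters module; the node and virtual machine names in this sketch are illustrative:

# Drain a node (maintenance mode), live-migrate a VM, then resume the node.
Suspend-ClusterNode -Name "HyperV-Host-01" -Drain
Move-ClusterVirtualMachineRole -Name "VM-001" -Node "HyperV-Host-02" -MigrationType Live
Resume-ClusterNode -Name "HyperV-Host-01"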


Chapter 8 System Monitoring

This chapter presents the following topics:

Overview

Key areas to monitor

XtremIO resource monitoring guidelines


Overview

Monitoring a VSPEX environment is an essential component of administration, just as it is for any core IT system. However, monitoring a highly virtualized infrastructure such as a VSPEX environment is more complex than monitoring a purely physical one, because the interactions and interrelationships between components can be subtle and nuanced.

If you are experienced in administering virtualized environments, you should be familiar with the key concepts and focus areas. The key differentiators are monitoring at scale and the ability to monitor end-to-end systems and workflows.

Several business needs require proactive, consistent monitoring of the environment:

Stable, predictable performance

Sizing and capacity needs

Availability and accessibility

Elasticity—the dynamic addition, subtraction, and modification of workloads

Data protection

If self-service provisioning is enabled in the environment, monitoring becomes even more critical because clients can generate virtual machines and workloads dynamically, which can adversely affect the entire system.

This chapter provides the basic knowledge necessary to monitor the key components of a VSPEX Proven Infrastructure environment. Additional resources are included at the end of this chapter.

Key areas to monitor

VSPEX Proven Infrastructures provide end-to-end solutions and require system monitoring of three discrete, but highly interrelated areas:

Servers, virtual machines, and clusters

Networking

Storage

This chapter focuses primarily on monitoring key components of the storage infrastructure, the XtremIO array, but also briefly describes other components.

When a workload is added to a VSPEX deployment, server and networking resources are consumed. As more workloads are added, modified, or removed, resource availability and, more importantly, capabilities change, which affects all other workloads running on the platform. Customers should fully understand their workload characteristics on all key components before deploying them on a VSPEX platform; this is a requirement for correctly sizing resource utilization against the defined reference virtual machine (RVM) workload.



Performance baseline

Deploy the first workload, and then measure the end-to-end resource consumption along with platform performance. This removes the guesswork from sizing activities and validates the initial assumptions. As more workloads are deployed, reevaluate resource consumption and performance levels to determine the cumulative load and its impact on existing virtual machines and their application workloads. Adjust resource allocation accordingly to ensure that any oversubscription does not negatively affect overall system performance. Run these assessments consistently to ensure that the platform as a whole, and the virtual machines themselves, operate as expected.

The following components comprise the critical areas that affect overall system performance.

Servers

Networking

Storage

Servers

The key server resources to monitor include:

Processors

Memory

Disk (local and SAN)

Networking

Monitor these areas at both the physical host level (the hypervisor) and the virtual level (within the guest virtual machine). For a VSPEX deployment with Microsoft Hyper-V, you can use Windows Perfmon to monitor and log these metrics. Follow your vendors' guidance to determine performance thresholds for specific deployment scenarios, which can vary greatly depending on the application.

For detailed information about Perfmon, refer to the Microsoft TechNet Library topic Using Performance Monitor.
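For example, Perfmon counters can also be sampled from PowerShell with the built-in Get-Counter cmdlet. The counters below are standard Hyper-V and memory counters; the sampling interval and count are illustrative:

# Sample hypervisor CPU and host memory every 5 seconds, 12 times.
Get-Counter -Counter @(
    "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time",
    "\Memory\Available MBytes"
) -SampleInterval 5 -MaxSamples 12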

Each VSPEX Proven Infrastructure provides a guaranteed level of performance based on the number of RVMs deployed and their defined workload.

Networking

Ensure that there is adequate bandwidth for networking communications. This includes monitoring network loads at the server and virtual machine level, the fabric (switch) level, and the storage level. From the server and virtual machine level, the monitoring tools mentioned previously provide sufficient metrics to analyze flows into and out of the servers and guests. Key items to track include aggregate throughput or bandwidth, latency, IOPS, and I/O size. Capture additional data from network card or HBA utilities.

From the fabric perspective, tools that monitor switching infrastructure vary by vendor. Key items to monitor include port utilization, aggregate fabric utilization, processor utilization, queue depths, and inter-switch link (ISL) utilization. Networking storage protocols are discussed in the following section.



Storage

Monitoring the storage aspect of a VSPEX implementation is crucial to maintaining the overall health and performance of the system. The tools provided with the XtremIO series of storage arrays offer an easy yet powerful way to gain insight into how the underlying storage components are operating. There are several key areas to focus on, including:

Capacity

Hardware elements:

  X-Brick

  Storage controllers

  SSDs

Cluster elements:

  Clusters

  Volumes

  Initiator groups

Additional considerations (primarily from a tuning perspective) include:

I/O size

Workload characteristics

These factors are outside the scope of this document; however, storage tuning is an essential component of performance optimization. EMC offers additional guidance on the subject in the EMC XtremIO Storage Array User Guide.

XtremIO resource monitoring guidelines

To monitor XtremIO, use the XMS GUI console, which you can access by opening an HTTPS session to the XMS IP address. The XtremIO series is an all-flash array storage platform that provides block storage access through a single entity.

This section explains how to use the XtremIO GUI to monitor block storage resource usage for the elements listed previously. Performance counters can be displayed from the Dashboard.

Efficiency

You can monitor the cluster efficiency status under Storage > Overall Efficiency in the Dashboard, as shown in Figure 27.



Figure 27. Monitoring the efficiency

The Overall Efficiency section displays the following data:

Overall Efficiency: The disk space saved by the XtremIO storage array, calculated as:

Overall Efficiency = Total provisioned capacity / Unique data on SSD

Data Reduction Ratio: The inline data deduplication and compression ratio, calculated as:

Data Reduction Ratio = Data written to the array / Physical capacity used

Deduplication Ratio: The real-time inline data deduplication ratio, calculated as:

Deduplication Ratio = Data written to the array / Unique data on SSD

Compression Ratio: The real-time inline compression ratio, calculated as:

Compression Ratio = Unique data on SSD / Physical capacity used

Thin Provisioning Savings: Used disk space compared to allocated disk space.
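As an illustration (the figures are hypothetical, not measurements from this solution): if hosts have provisioned 40 TB of volumes and written 8 TB of data, and that data reduces to 2 TB of unique data occupying 1.6 TB of physical capacity, then the Deduplication Ratio is 4:1 (8 / 2), the Compression Ratio is 1.25:1 (2 / 1.6), the Data Reduction Ratio is 5:1 (8 / 1.6, the product of the two), and the Overall Efficiency is 20:1 (40 / 2).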

Volume capacity

You can monitor the volume capacity status under Storage > Volume Capacity in the Dashboard, as shown in Figure 28.


Figure 28. Volume capacity

Volume Capacity displays the following data:

Total disk space defined by the volumes

Physical space used

Logical space used

Physical capacity

You can monitor the physical capacity status under Storage > Physical Capacity in the Dashboard, as shown in Figure 29.

Figure 29. Physical capacity

Physical Capacity displays the following data:

Total physical capacity

Used physical capacity

Monitoring the performance

To monitor the cluster performance from the GUI:

1. From the menu bar, click the Dashboard icon to display the Dashboard.

2. Under Performance, select the desired parameters:

a. Select the measurement unit of the display by clicking one of the following:

i. Bandwidth: MB/s

ii. IOPS

iii. Latency: Microseconds (μs). Applies only to the activity history graph.

b. Select the item to be monitored from the Item Selector:

i. Block Size

ii. Initiator Groups

iii. Volumes



c. Set the Activity History timeframe by selecting one of the following periods from the Time Period Selector:

i. Last Hour

ii. Last 6 Hours

iii. Last 24 Hours

iv. Last 3 Days

v. Last Week

Figure 30 shows the Performance GUI.

Figure 30. Monitoring the performance (IOPS)

Note: You can also monitor the performance through the CLI. For more information, refer to the XtremIO Storage Array User Guide.
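Scripted monitoring through the RESTful API is another option. This sketch reuses the $xms and $cred variables from the earlier provisioning sketches; the property names shown are assumptions that vary by XMS version, so confirm them in the EMC XtremIO Storage Array RESTful API Guide.

# Retrieve cluster-level counters (property names vary by XMS version).
$cluster = Invoke-RestMethod -Uri "$xms/api/json/types/clusters/1" -Credential $cred
$cluster.content | Select-Object iops, "logical-space-in-use", "ud-ssd-space-in-use"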

Monitoring the hardware elements

Monitoring X-Bricks

You can quickly view the X-Brick name and any associated alerts by hovering the mouse pointer over the X-Brick in the Hardware pane of the Dashboard workspace.

In the Hardware workspace, hover the mouse pointer over different parts of the displayed X-Brick to view each component's parameters and associated alerts:

1. Click Show Front to view the X-Brick’s front end.

2. Click Show Back to view the X-Brick’s back end.



3. Click Show Cable Connectivity to view the X-Brick’s cable connections. Figure 31 shows the data and management cable connectivity.

Figure 31. Data and management cable connectivity

4. Click X-Brick Properties to display the dialog box, as shown in Figure 32.

Figure 32. X-Brick properties


Monitoring storage controllers

To view the storage controller information from the GUI:

1. From the menu bar, click the Hardware icon to display the Hardware workspace.

2. Select the X-Brick for the storage controller to be monitored.

3. Click X-Brick Properties to open the X-Brick Properties dialog box.

4. View the details of the selected X-Brick’s two storage controllers.

Monitoring SSDs

To view the SSD information from the GUI:

1. From the menu bar, click the Hardware icon to display the Hardware workspace.

2. Select the X-Brick containing the SSDs to be monitored.

3. Click X-Brick Properties to open the X-Brick Properties dialog box.

4. View the details of the selected X-Brick’s SSDs, as shown in Figure 33.

Figure 33. Monitoring the SSDs

Using advanced monitoring

In addition to the monitoring services provided by the XtremIO storage array, you can define monitors tailored to your cluster's needs. Table 22 lists the parameters that can be monitored, depending on the selected monitor type.

Table 22. Advanced monitor parameters

Parameter | Description
Read-IOPS | Read IOPS, by block size*
Write-IOPS | Write IOPS, by block size*
IOPS | Total read and write IOPS, by block size*
Read-BW (MB/s) | Read bandwidth, by block size*
Write-BW (MB/s) | Write bandwidth, by block size*
BW (MB/s) | Total bandwidth of read and write combined, by block size*
Write-Latency (μsec) | Write latency, by block size*
Read-Latency (μsec) | Read latency, by block size*
Average-Latency (μsec) | Average of read and write latency, by block size*
SSD-Space-In-Use | SSD space in use
Endurance-Remaining-% | Percentage of SSD endurance remaining
Memory-Usage-% | Percentage of memory in use
Memory-In-Use (MB) | Amount of memory in use, in MB
CPU (%) | Percentage of CPU used

*Block sizes: 512 B, 1 KB, 2 KB, 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, 512 KB, 1 MB, and greater than 1 MB (GT1 MB).

For detailed information on using the advanced monitoring feature, refer to the EMC XtremIO Storage Array User Guide.


Appendix A Reference Documentation

This appendix presents the following topics:

EMC documentation

Other documentation


EMC documentation

The following documents, available on EMC Online Support, provide additional and relevant information. If you do not have access to a document, contact your EMC representative.

EMC XtremIO Storage Array User Guide

EMC XtremIO Storage Array Operations Guide

EMC XtremIO Storage Array Site Preparation Guide

EMC XtremIO Storage Array Security Configuration Guide

EMC XtremIO Storage Array RESTful API Guide

EMC XtremIO Storage Array Release Notes

EMC XtremIO Simple Support Matrix

EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Fibre Channel over Ethernet Converged Network Adapters (CNAs) for the Linux Environment

EMC Host Connectivity with Emulex Fibre Channel and iSCSI HBAs and Converged Network Adapters (CNAs) for the Linux Environment

EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

EMC Host Connectivity with Emulex Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) in the Windows Environment

EMC Host Connectivity with QLogic Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Fibre Channel over Ethernet Converged Network Adapters (CNAs) for the Solaris Environment

EMC Host Connectivity with Emulex Fibre Channel and iSCSI Host Bus Adapters (HBAs) and Converged Network Adapters (CNAs) for the Solaris Environment

Other documentation

The following documents, located on the Microsoft website, provide additional and relevant information:

Adding Hyper-V Hosts and Host Clusters, and Scale-Out File Servers to VMM

Configuring a Remote Instance of SQL Server for VMM

Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager (video)

Hardware and Software Requirements for Installing SQL Server 2012

Hyper-V: How many network cards do I need?

How to Add a Host Cluster to VMM


How to Create a Virtual Machine Template

How to Create a Virtual Machine with a Blank Virtual Hard Disk

How to Deploy a Virtual Machine

How to Install a VMM Management Server

Hyper-V: Using Hyper-V and Failover Clustering

Install SQL Server 2012

Installing a VMM Agent Locally on a Host

Installing the VMM Administrator Console

Installing the VMM Server

Installing Virtual Machine Manager

Install and Deploy Windows Server 2012 R2 and Windows Server 2012

Use Cluster Shared Volumes in a Failover Cluster

Virtual Machine Live Migration Overview


Appendix B Customer Configuration Worksheet

This appendix presents the following topic:

Customer configuration worksheet


Customer configuration worksheet

Before you start the configuration, gather some customer-specific network and host configuration information. The following tables list the essential numbering, naming, and host address information required for assembling the network. This worksheet can also be used as a “leave behind” document for future reference.

Table 23. Common server information

Server name | Purpose | Primary IP address
____ | Domain Controller | ____
____ | DNS Primary | ____
____ | DNS Secondary | ____
____ | DHCP | ____
____ | NTP | ____
____ | SMTP | ____
____ | SNMP | ____
____ | System Center Virtual Machine Manager | ____
____ | SQL Server | ____

Table 24. Hyper-V server information

Server name | Purpose | Primary IP address | Private net (storage) addresses | Live migration IP address
Hyper-V Host 1 | ____ | ____ | ____ | ____
Hyper-V Host 2 | ____ | ____ | ____ | ____

Table 25. X-Brick information

Array name | ____
Admin account | ____
XtremIO Management Server IP | ____
Storage Controller 1 management IP | ____
Storage Controller 2 management IP | ____
SC1 IPMI IP | ____
SC2 IPMI IP | ____
CSV volume name | ____
Block FC WWPN | ____
iSCSI IQN | ____
iSCSI Server IP | ____

Table 26. Network infrastructure information

Name | Purpose | IP address | Subnet mask | Default gateway
Ethernet Switch 1 | ____ | ____ | ____ | ____
Ethernet Switch 2 | ____ | ____ | ____ | ____

Table 27. VLAN information

Name | Network purpose | VLAN ID | Allowed subnets
____ | Virtual machine networking | ____ | ____
____ | Windows Management | ____ | ____
____ | iSCSI storage network | ____ | ____
____ | Live Migration | ____ | ____
____ | Storage Migration | ____ | ____

Table 28. Service accounts

Account | Purpose | Password (optional, secure appropriately)
____ | Windows Server administrator | ____
____ | Array administrator | ____
____ | SCVMM administrator | ____
____ | SQL Server administrator | ____


Appendix C Server Resource Component Worksheet

This appendix presents the following topic:

Server resources component worksheet


Server resources component worksheet

Table 30 provides a blank worksheet to record the server resource totals.

Table 30. Blank worksheet for server resource totals

Application | | CPU (virtual CPUs) | Memory (GB) | IOPS | Capacity (GB) | Reference virtual machines
Application 1 | Resource requirements | ____ | ____ | ____ | ____ | N/A
Application 1 | Equivalent reference virtual machines | ____ | ____ | ____ | ____ | ____
Application 2 | Resource requirements | ____ | ____ | ____ | ____ | N/A
Application 2 | Equivalent reference virtual machines | ____ | ____ | ____ | ____ | ____
Application 3 | Resource requirements | ____ | ____ | ____ | ____ | N/A
Application 3 | Equivalent reference virtual machines | ____ | ____ | ____ | ____ | ____
Application 4 | Resource requirements | ____ | ____ | ____ | ____ | N/A
Application 4 | Equivalent reference virtual machines | ____ | ____ | ____ | ____ | ____
Total equivalent reference virtual machines | | | | | | ____
Server customization | Server component totals | ____ | ____ | N/A | N/A | 
Storage customization | Storage component totals | N/A | N/A | ____ | ____ | 
Storage customization | Storage component equivalent reference virtual machines | N/A | N/A | ____ | ____ | ____
Total equivalent reference virtual machines - storage | | | | | | ____

Note: CPU and Memory are server resources; IOPS and Capacity are storage resources.