
DESIGN GUIDE

EMC VSPEX END-USER COMPUTING: Citrix XenDesktop 7.6 and Microsoft Hyper-V with EMC XtremIO Enabled by EMC Isilon, EMC VNX, and EMC Data Protection

EMC VSPEX

Abstract

This Design Guide describes how to design an EMC® VSPEX® End-User-Computing solution for Citrix XenDesktop 7.6. EMC XtremIOTM, EMC Isilon®, EMC VNX®, and Microsoft Windows Server 2012 R2 with Hyper-V provide the storage and virtualization platforms for this solution.

July 2015


Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.

Published July 2015

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

EMC VSPEX End-User Computing Citrix XenDesktop 7.6 and Microsoft Hyper-V with EMC XtremIO Enabled by EMC Isilon, EMC VNX, and EMC Data Protection Design Guide

Part Number H14197.1


Contents

Chapter 1 Introduction 7

Purpose of this guide .................................................................................................. 8

Business value ........................................................................................................... 8

Scope ......................................................................................................................... 9

Audience .................................................................................................................... 9

Terminology.............................................................................................................. 10

Chapter 2 Before You Start 11

Deployment workflow ............................................................................................... 12

Essential reading ...................................................................................................... 12

Chapter 3 Solution Overview 13

Overview .................................................................................................................. 14

VSPEX Proven Infrastructures ................................................................................... 14

Solution architecture ................................................................................................ 15

High-level architecture ......................................................................................... 15

Logical architecture ............................................................................................. 17

Key components ....................................................................................................... 18

Desktop virtualization broker ................................................................................... 19

Overview .............................................................................................................. 19

Citrix XenDesktop 7.6 ........................................................................... 19

Machine Creation Services ................................................................................... 21

Citrix Provisioning Services .................................................................................. 21

Citrix Personal vDisk ............................................................................................ 21

Citrix Profile Management .................................................................................... 21

Virtualization layer ................................................................................................... 22

Microsoft Hyper-V ................................................................................................ 22

Microsoft System Center Virtual Machine Manager .............................................. 22

Microsoft Hyper-V high availability ....................................................................... 22

Compute layer .......................................................................................................... 23

Network layer ........................................................................................................... 23

Storage layer ............................................................................................................ 23

EMC XtremIO ........................................................................................................ 23

EMC Isilon............................................................................................................ 25


EMC VNX .............................................................................................................. 28

Virtualization management .................................................................................. 31

Data protection layer ................................................................................................ 31

Citrix ShareFile StorageZones solution ..................................................................... 32

Chapter 4 Sizing the Solution 34

Overview .................................................................................................................. 35

Reference workload .................................................................................................. 35

Login VSI ............................................................................................................. 36

VSPEX Private Cloud requirements............................................................................ 36

Private cloud storage layout ................................................................................. 37

VSPEX/XtremIO array configurations ......................................................................... 37

Validated XtremIO configurations ........................................................................ 37

XtremIO storage layout ........................................................................................ 37

Expanding existing VSPEX end-user computing environments ............................. 38

Isilon configuration .................................................................................................. 38

VNX array configurations .......................................................................................... 39

EMC FAST VP ........................................................................................................ 39

VNX shared file systems....................................................................................... 39

Choosing the appropriate reference architecture ...................................................... 40

Using the Customer Sizing Worksheet .................................................................. 40

Selecting a reference architecture ........................................................................ 42

Fine tuning hardware resources ........................................................................... 43

Summary ............................................................................................................. 44

Chapter 5 Solution Design Considerations and Best Practices 45

Overview .................................................................................................................. 46

Server design considerations ................................................................................... 46

Server best practices ........................................................................................... 47

Validated server hardware ................................................................................... 48

Hyper-V memory virtualization ............................................................................. 48

Memory configuration guidelines ......................................................................... 50

Network design considerations ................................................................................ 51

Validated network hardware ................................................................................ 52

Network configuration guidelines ........................................................................ 52

Storage design considerations ................................................................................. 56

Overview .............................................................................................................. 56

Validated storage hardware and configuration ..................................................... 56

Hyper-V storage virtualization .............................................................................. 57

High availability and failover .................................................................................... 58


Virtualization layer ............................................................................................... 58

Compute layer ..................................................................................................... 58

Network layer ....................................................................................................... 59

Storage layer ....................................................................................................... 59

Validation test profile ............................................................................................... 60

Profile characteristics .......................................................................................... 60

EMC Data Protection configuration guidelines .......................................................... 61

Data protection profile characteristics ................................................................. 61

Data protection layout ......................................................................................... 61

VSPEX for Citrix XenDesktop with ShareFile StorageZones solution ........................... 61

ShareFile StorageZones architecture .................................................................... 61

StorageZones ...................................................................................................... 62

Design considerations ......................................................................................... 63

VSPEX for ShareFile StorageZones architecture .................................................... 63

Chapter 6 Reference Documentation 65

EMC documentation ................................................................................................. 66

Other documentation ............................................................................................... 66

Appendix A Customer Sizing Worksheet 68

Customer Sizing Worksheet for end-user computing ................................................. 69

Figures

Figure 1. VSPEX Proven Infrastructures .............................................................. 15

Figure 2. Architecture of the validated solution .................................................. 16

Figure 3. Logical architecture ............................................................................. 17

Figure 4. XenDesktop 7.6 architecture components ........................................... 19

Figure 5. Isilon cluster components ................................................................... 26

Figure 6. EMC Isilon OneFS Operating System functionality ................................ 26

Figure 7. Isilon node classes .............................................................................. 28

Figure 8. EMC Unisphere Management Suite ...................................................... 30

Figure 9. Compute layer flexibility ...................................................................... 46

Figure 10. Hypervisor memory consumption ........................................................ 49

Figure 11. Highly available XtremIO FC network design example .......................... 53

Figure 12. Highly available VNX Ethernet network design example ....................... 54

Figure 13. Required networks .............................................................................. 55

Figure 14. Hyper-V virtual disk types .................................................................... 57

Figure 15. High availability at the virtualization layer ........................................... 58

Figure 16. Redundant power supplies .................................................................. 58


Figure 17. VNX Ethernet network layer high availability ........................................ 59

Figure 18. XtremIO series high availability ........................................................... 59

Figure 19. ShareFile high-level architecture.......................................................... 62

Figure 20. VSPEX for Citrix XenDesktop with ShareFile StorageZones: Logical architecture ......................................................................................... 63

Figure 21. Printable customer sizing worksheet ................................................... 70

Tables

Table 1. Terminology ......................................................................................... 10

Table 2. Deployment workflow .......................................................................... 12

Table 3. Solution components .......................................................................... 18

Table 4. VSPEX end-user computing: Design process ........................................ 35

Table 5. Reference virtual desktop characteristics ............................................ 35

Table 6. Infrastructure server minimum requirements ....................................... 36

Table 7. XtremIO storage layout ........................................................................ 38

Table 8. User data resource requirement on Isilon ............................................ 38

Table 9. User data resource requirement on VNX .............................................. 39

Table 10. Example Customer Sizing Worksheet ................................................... 40

Table 11. Reference virtual desktop resources .................................................... 41

Table 12. Server resource component totals ....................................................... 44

Table 13. Server hardware .................................................................................. 48

Table 14. Minimum switching capacity ............................................................... 52

Table 15. Tested configurations .......................................................................... 56

Table 17. Validated environment profile ............................................................. 60

Table 18. Data protection profile characteristics ................................................. 61

Table 19. Recommended VNX storage for ShareFile StorageZones CIFS share ..... 64

Table 20. Customer Sizing Worksheet ................................................................. 69


Chapter 1 Introduction

This chapter presents the following topics:

Purpose of this guide ................................................................................................. 8

Business value ........................................................................................................... 8

Scope ......................................................................................................................... 9

Audience .................................................................................................................... 9

Terminology ............................................................................................................. 10


Purpose of this guide

An end-user computing or virtual desktop infrastructure is a complex system offering. The EMC® VSPEX® End-User Computing Proven Infrastructure provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This solution for Citrix XenDesktop 7.6 runs on a Microsoft Windows Server 2012 R2 with Hyper-V virtualization layer backed by the highly available EMC XtremIO™ family, which provides the storage. In this solution, the desktop virtualization infrastructure components are layered on a VSPEX Private Cloud that uses a Microsoft Hyper-V Proven Infrastructure, while the desktops are hosted on dedicated resources.

The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual desktop environment. XtremIO storage systems provide storage for virtual desktops, EMC Isilon® or EMC VNX® systems provide storage for user data, and EMC Avamar® data protection solutions provide data protection for Citrix XenDesktop data.

This solution is validated for up to 3,500 virtual desktops. These validated configurations are based on a reference desktop workload and form the basis for creating cost-effective, custom solutions for individual customers.

XtremIO supports scale-out clusters of up to six X-Bricks. Each additional X-Brick increases performance and virtual desktop capacity linearly. XtremIO X-Bricks have been validated to support higher desktop counts in other contexts; the VSPEX-validated numbers apply to the solution described in this guide only.

This Design Guide describes how to design an end-user computing solution according to best practices for Citrix XenDesktop for Microsoft Hyper-V enabled by XtremIO, Isilon, VNX, and Data Protection.

Business value

Employees are more mobile than ever, and they expect access to business-critical data and applications from any location and any device. They want the flexibility to bring their own devices to work, which means IT departments are increasingly investigating and supporting Bring Your Own Device (BYOD) initiatives, adding layers of complexity to safeguarding sensitive information. Deploying a virtual desktop infrastructure is one way to provide this flexibility while keeping sensitive information safe.

Implementing large-scale virtual desktop environments presents many challenges, however. Administrators must rapidly roll out persistent or non-persistent desktops for all users—task workers, knowledge workers, and power users—while offering an outstanding user experience that outperforms physical desktops.

In addition to performance, a virtual desktop solution must be simple to deploy, manage, and scale, with substantial cost savings over physical desktops. Storage is also a critical component of an effective virtual desktop solution.

EMC VSPEX Proven Infrastructures are designed to help you address the most serious of IT challenges by creating solutions that are simple, efficient, and flexible, and designed to take advantage of the many possibilities that XtremIO’s flash technology offers.

The business benefits of this solution include:

An end-to-end virtualization solution to use the capabilities of the unified infrastructure components

Efficient virtualization for varied customer use cases, supporting up to 3,500 virtual desktops on an X-Brick and up to 1,750 virtual desktops on a Starter X-Brick

Reliable, flexible, and scalable reference architectures

Scope

This Design Guide describes how to plan a simple, effective, and flexible VSPEX end-user computing solution for Citrix XenDesktop 7.6. It provides a deployment example of virtual desktop storage on XtremIO and user data storage on an Isilon system or VNX storage array.

The desktop virtualization infrastructure components of the solution are layered on a VSPEX Private Cloud that uses a Microsoft Hyper-V Proven Infrastructure. This guide illustrates how to size XenDesktop on the VSPEX infrastructure, allocate resources following best practices, and use all of the benefits that VSPEX offers.

Audience

This guide is intended for internal EMC personnel and qualified EMC VSPEX Partners. The guide assumes that VSPEX partners who intend to deploy this VSPEX Proven Infrastructure for Citrix XenDesktop have the necessary training and background to install and configure an end-user computing solution based on Citrix XenDesktop with Microsoft Hyper-V as the hypervisor, XtremIO, Isilon, and VNX series storage systems, and associated infrastructure.

Readers should also be familiar with the infrastructure and database security policies of the customer installation.

This guide provides external references where applicable. EMC recommends that partners implementing this solution be familiar with these documents. For details, see Essential reading and Chapter 6: Reference Documentation.


Terminology

Table 1 lists the terminology used in this guide.

Table 1. Terminology

Term Definition

Data deduplication Data deduplication reduces physical storage utilization by eliminating redundant blocks of data.

Reference architecture

The validated architecture that supports this VSPEX end-user computing solution at particular points of scale—that is, an X-Brick capable of hosting up to 3,500 virtual desktops and a Starter X-Brick capable of hosting up to 1,750 virtual desktops.

Reference workload For VSPEX end-user computing solutions, the reference workload is defined as a single virtual desktop—the reference virtual desktop—with the workload characteristics indicated in Table 5. By comparing the customer’s actual usage to this reference workload, you can determine which reference architecture to choose as the basis for the customer’s VSPEX deployment.

Refer to Reference workload for details.

Storage processor (SP)

The compute component of the storage array. SPs are used for all aspects of moving data into, out of, and between arrays.

Storage controller (SC)

The compute component of the XtremIO storage array. SCs are used for all aspects of moving data into, out of, and between XtremIO arrays.

Virtual Desktop Infrastructure (VDI)

The VDI decouples the desktop from the physical machine. In a VDI environment, the desktop OS and applications reside inside a virtual machine running on a host computer, with data residing on shared storage. Users access their virtual desktop from any computer or mobile device over a private network or Internet connection.

XtremIO Starter X-Brick

A specialized configuration of the XtremIO All-Flash Array that includes 13 SSDs for this solution

XtremIO X-Brick A specialized configuration of the XtremIO All-Flash Array that includes 25 SSDs for this solution


Chapter 2 Before You Start

This chapter presents the following topics:

Deployment workflow .............................................................................................. 12

Essential reading ..................................................................................................... 12


Deployment workflow

To design and implement your end-user computing solution, refer to the process flow in Table 2.

Table 2. Deployment workflow

Step Action

1 Use the Customer Sizing Worksheet to collect customer requirements. Refer to Appendix A for more information.

2 Use the EMC VSPEX Sizing Tool to determine the recommended VSPEX reference architecture for your end-user computing solution, based on the customer requirements collected in Step 1.

For more information about the Sizing Tool, refer to the EMC VSPEX Sizing Tool portal.

Note: If the Sizing Tool is not available, you can manually size the application using the guidelines in Chapter 4.

3 Use this Design Guide to determine the final design for your VSPEX solution.

Note: Ensure that all resource requirements are considered and not just the requirements for end-user computing.

4 Select and order the right VSPEX reference architecture and Proven Infrastructure. Refer to the VSPEX Proven Infrastructure Guide in Essential reading for guidance on selecting a Private Cloud Proven Infrastructure.

5 Deploy and test your VSPEX solution. Refer to the VSPEX Implementation Guide in Essential reading for guidance.

Note: The solution was validated by EMC using the Login VSI tool, as described in Chapter 4. Please visit www.loginvsi.com for further information.
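If the Sizing Tool is not available, the manual sizing described in Chapter 4 amounts to converting the customer's user groups into reference-desktop equivalents and comparing the total with the validated configurations. The following Python sketch illustrates that tally; the per-desktop IOPS figure and the conversion rule are assumptions for illustration only, so substitute the reference virtual desktop characteristics from Table 5 and the validated capacities from Chapter 4.

# Illustrative manual sizing tally when the EMC VSPEX Sizing Tool is unavailable.
# The reference-desktop IOPS and capacities below are placeholders; use the values
# published in Table 5 and Chapter 4 of this guide.

REFERENCE_DESKTOP_IOPS = 10          # assumed IOPS per reference virtual desktop
VALIDATED_CAPACITY = {
    "Starter X-Brick": 1750,         # validated desktop count (this guide)
    "X-Brick": 3500,                 # validated desktop count (this guide)
}

def equivalent_reference_desktops(user_count, avg_iops_per_user):
    """Convert a customer user group into reference-desktop equivalents."""
    scale = max(1.0, avg_iops_per_user / REFERENCE_DESKTOP_IOPS)
    return int(round(user_count * scale))

def recommend_array(total_reference_desktops):
    """Pick the smallest validated configuration that covers the requirement."""
    for name, capacity in sorted(VALIDATED_CAPACITY.items(), key=lambda kv: kv[1]):
        if total_reference_desktops <= capacity:
            return name
    return "Requirement exceeds a single X-Brick; size a multi-X-Brick cluster"

# Example: 1,200 task workers at 8 IOPS plus 400 power users at 20 IOPS
total = (equivalent_reference_desktops(1200, 8) +
         equivalent_reference_desktops(400, 20))
print(total, recommend_array(total))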

Essential reading

EMC recommends that you read the following documents, available from the VSPEX space in the EMC Community Network, from EMC.com, or from the VSPEX Proven Infrastructure partner portal:

EMC VSPEX End User Computing Solution Overview

EMC VSPEX End-User Computing: Citrix XenDesktop 7.6 and Microsoft Hyper-V with EMC XtremIO Implementation Guide

EMC VSPEX Private Cloud: Microsoft Windows Server 2012 R2 with Hyper-V for up to 1,000 Virtual Machines Proven Infrastructure Guide


Chapter 3 Solution Overview

This chapter presents the following topics:

Overview .................................................................................................................. 14

VSPEX Proven Infrastructures................................................................................... 14

Solution architecture ............................................................................................... 15

Key components ...................................................................................................... 18

Desktop virtualization broker ................................................................................... 19

Virtualization layer ................................................................................................... 22

Compute layer .......................................................................................................... 23

Network layer ........................................................................................................... 23

Storage layer ........................................................................................................... 23

Data protection layer................................................................................................ 31

Citrix ShareFile StorageZones solution .................................................................... 32


Overview

This chapter provides an overview of the VSPEX End-User Computing solution and the key technologies used in the solution. The solution has been designed and proven by EMC to provide the desktop virtualization, server, network, storage, and data protection resources to support reference architectures of up to 3,500 virtual desktops for an X-Brick and up to 1,750 virtual desktops for a Starter X-Brick.

Although the desktop virtualization infrastructure components of the solution shown in Figure 3 are designed to be layered on a VSPEX Private Cloud solution, the reference architectures do not include configuration details for the underlying Proven Infrastructure. Refer to the VSPEX Proven Infrastructure Guide in Essential reading for information on configuring the required infrastructure components.

VSPEX Proven Infrastructures

EMC has joined forces with IT infrastructure providers to create a complete virtualization solution that accelerates the deployment of the private cloud and Citrix XenDesktop virtual desktops. VSPEX enables customers to accelerate their IT transformation with faster deployment, greater simplicity and choice, higher efficiency, and lower risk, compared to the challenges and complexity of building an IT infrastructure themselves.

VSPEX validation by EMC ensures predictable performance and enables customers to select technology that uses their existing or newly acquired IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a virtual infrastructure for customers who want the simplicity characteristic of truly converged infrastructures, with more choice in individual stack components.

VSPEX Proven Infrastructures, as shown in Figure 1, are modular, virtualized infrastructures validated by EMC and delivered by EMC VSPEX partners. They include virtualization, server, network, storage, and data protection layers. Partners can choose the virtualization, server, and network technologies that best fit a customer’s environment, while the highly available XtremIO, Isilon, and VNX storage systems and EMC Data Protection technologies provide the storage and data protection layers.


Figure 1. VSPEX Proven Infrastructures

Solution architecture

High-level architecture

The EMC VSPEX End-User Computing for Citrix XenDesktop solution provides a complete system architecture capable of supporting up to 3,500 virtual desktops for an X-Brick, or up to 1,750 virtual desktops for a Starter X-Brick. The solution supports block storage on XtremIO for virtual desktops and optional file storage on Isilon or VNX for user data.

Figure 2 shows the high-level architecture of the validated solution.



Figure 2. Architecture of the validated solution

The solution uses EMC XtremIO, Isilon, VNX, and Microsoft Hyper-V to provide the storage and virtualization platforms for a Citrix XenDesktop environment of Microsoft Windows 7 or Windows 8.1 virtual desktops provisioned by Citrix XenDesktop Machine Creation Services (MCS) or Citrix Provisioning Services (PVS).

For the solution, we deployed an XtremIO array in multiple X-Brick configurations to support up to 3,500 virtual desktops. Two different XtremIO X-Brick types were tested:

a Starter X-Brick capable of hosting up to 1,750 virtual desktops

an X-Brick capable of hosting up to 3,500 virtual desktops

We also deployed Isilon and VNX arrays for hosting user data.

The highly available XtremIO array provides the storage for the desktop virtualization components. The infrastructure services for the solution, as shown in Figure 2, can be provided by existing infrastructure at the customer site, by the VSPEX Private Cloud, or by deploying them as dedicated resources as part of the solution. The virtual desktops require dedicated end-user computing resources and are not intended to be layered on a VSPEX Private Cloud.

Note: In this guide, "we" refers to the EMC Solutions engineering team that validated the solution.


Planning and designing the storage infrastructure for a Citrix XenDesktop environment is critical because the shared storage must be able to absorb large bursts of I/O that occur during a day. These bursts can lead to periods of erratic and unpredictable virtual desktop performance. Users can adapt to slow performance, but unpredictable performance frustrates users and reduces efficiency.

To provide predictable performance for end-user computing solutions, the storage system must be able to handle the peak I/O load from the clients while keeping response time to a minimum. This solution uses the XtremIO array to provide the sub-millisecond response times that clients require, while the real-time, inline deduplication and inline compression features of the platform reduce the amount of physical storage needed.
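As a rough illustration of why peak handling matters, the aggregate desktop load can be estimated from a per-desktop figure. The steady-state IOPS and boot/login-storm multiplier in this Python sketch are assumptions for illustration, not validated values from this guide.

# Illustrative estimate of aggregate desktop I/O load (assumed per-desktop figures).
def aggregate_iops(desktops, steady_iops_per_desktop=10, storm_multiplier=5):
    """Return (steady-state IOPS, estimated peak IOPS during boot/login storms)."""
    steady = desktops * steady_iops_per_desktop
    return steady, steady * storm_multiplier

for count in (1750, 3500):           # Starter X-Brick and X-Brick validated counts
    steady, peak = aggregate_iops(count)
    print(f"{count} desktops: ~{steady:,} IOPS steady state, ~{peak:,} IOPS at peak")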

EMC Data Protection solutions enable user data protection and end-user recoverability. This Citrix XenDesktop solution uses Avamar and its desktop client to achieve this.

Logical architecture

The EMC VSPEX End-User Computing for Citrix XenDesktop solution supports block storage on XtremIO for the virtual desktops. Figure 3 shows the logical architecture of the solution.

Figure 3. Logical architecture

This solution uses two networks: one 8 Gb FC or 10 GbE iSCSI network for carrying virtual desktop and virtual server OS data, and one 10 Gb Ethernet network for carrying all other traffic.

Note: The solution also supports 1 Gb Ethernet if the bandwidth requirements are met.
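A quick way to sanity-check the storage network choice is to convert the desktop I/O profile into bandwidth and compare it with the link capacity. The per-desktop IOPS and I/O size in this sketch are assumptions for illustration, and raw line rates are used, ignoring protocol overhead.

# Illustrative bandwidth check for the storage network (assumed I/O profile).
def required_mbps(desktops, iops_per_desktop=10, io_size_kb=8):
    """Approximate storage throughput in megabits per second."""
    return desktops * iops_per_desktop * io_size_kb * 8 / 1000.0

LINKS_MBPS = {"8 Gb FC": 8000, "10 GbE iSCSI": 10000, "1 GbE": 1000}

demand = required_mbps(3500)          # X-Brick validated desktop count
for link, capacity in LINKS_MBPS.items():
    verdict = "fits" if demand < capacity else "needs multiple links"
    print(f"{link}: demand ~{demand:,.0f} Mb/s vs {capacity:,} Mb/s -> {verdict}")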



Key components

This section provides an overview of the key technologies used in this solution, as outlined in Table 3.

Table 3. Solution components

Component Description

Desktop virtualization broker

Manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software is critical to enable on-demand creation of desktop images, allow maintenance to the image without affecting user productivity, and prevent the environment from growing in an unconstrained way.

The desktop broker in this solution is Citrix XenDesktop 7.6.

Virtualization layer

This layer enables the physical implementation of resources to be decoupled from the applications that use them. In other words, the application’s view of the resources available is no longer directly tied to the hardware. This enables many key features in the end-user computing concept.

This solution uses Microsoft Hyper-V for the virtualization layer.

Compute layer

This layer provides memory and processing resources for the virtualization layer software and for the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resources required but enables the customer to implement the requirements using any server hardware that meets these requirements.

Network layer This layer connects the users of the environment to the resources they need and connects the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but enables the customer to implement the requirements using any network hardware that meets these requirements.

Storage layer The storage layer is a critical resource for the implementation of the end-user computing environment. This layer must be able to absorb large bursts of activity as they occur without unduly affecting the user experience.

This solution uses XtremIO, Isilon, and VNX arrays to efficiently handle this workload.

Data protection This is an optional solution component that provides data protection if data in the primary system is deleted, damaged, or otherwise unusable.

This solution uses Avamar for data protection.

Citrix ShareFile StorageZones solution

This component provides optional support for Citrix ShareFile StorageZones deployments


Desktop virtualization broker

Overview

Desktop virtualization encapsulates and hosts desktop services on centralized computing resources at remote data centers. This enables end users to connect to their virtual desktops from different types of devices across a network connection. Devices can include desktops, laptops, thin clients, zero (ultra-thin) clients, smart phones, and tablets.

In this solution, we used Citrix XenDesktop to provision, manage, broker, and monitor the desktop virtualization environment.

Citrix XenDesktop 7.6

XenDesktop is the desktop virtualization solution from Citrix that enables virtual desktops to run on the Hyper-V virtualization environment. Citrix XenDesktop 7.6 integrates Citrix XenApp application delivery technologies and XenDesktop desktop virtualization technologies into a single architecture and management experience. This architecture unifies both management and delivery components to enable a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as secure mobile services to users anywhere on any device.

Figure 4 shows the XenDesktop 7.6 architecture components.

Figure 4. XenDesktop 7.6 architecture components



The XenDesktop 7.6 architecture includes the following components:

Citrix Director—A web-based tool that enables IT support and help desk teams to monitor an environment, troubleshoot issues before they become system-critical, and perform support tasks for end users.

Citrix Receiver—Installed on user devices, Citrix Receiver provides users with quick, secure, self-service access to documents, applications, and desktops from any of their devices including smart phones, tablets, and computers. Receiver provides on-demand access to Windows, web, and software as a service (SaaS) applications.

Citrix StoreFront—Provides authentication and resource delivery services for Citrix Receiver. It enables centralized control of resources and provides users with on-demand, self-service access to their desktops and applications.

Citrix Studio—Lets you configure and manage the deployment, eliminating the need for separate consoles to manage delivery of applications and desktops. Studio provides wizards to guide you through the process of setting up your environment, creating your workloads to host applications and desktops, and assigning applications and desktops to users.

Delivery Controller—Installed on servers in the data center, Delivery Controller consists of services that communicate with the hypervisor to:

Distribute applications and desktops

Authenticate and manage user access

Broker connections between users and their virtual desktops and applications

Delivery Controller manages the state of the desktops, starting and stopping them based on demand and administrative configuration. In some editions, the controller enables you to install profile management to manage user personalization settings in virtualized or physical Windows environments.

License Server—Assigns user or device licenses to the XenDesktop environment. License Server can be installed along with other Citrix XenDesktop components or on a separate virtual or physical machine.

Virtual Delivery Agent (VDA)—Installed on server or workstation operating systems, the VDA enables connections for desktops and applications. For remote computer access, you install the VDA on your office computer.

Server OS machines—Virtual machines or physical machines, based on the Windows Server OS, used for delivering applications or hosted-shared desktops (HSDs) to users.

Desktop OS machines—Virtual machines or physical machines, based on a Windows desktop OS, used for delivering personalized desktops to users, or applications from desktop operating systems.

Remote PC Access—Enables users to access resources on their office computers remotely, from any device running Citrix Receiver.


Machine Creation Services

Machine Creation Services (MCS) is a provisioning mechanism that is integrated with the XenDesktop management interface, Citrix Studio, to provision, manage, and decommission desktops throughout the desktop lifecycle from a centralized point of management.

MCS enables several types of desktop experience to be managed within a catalog in Citrix Studio. An end user logs in to the same desktop each time for a static desktop experience, or to a new desktop each time for a random desktop experience. Desktop customization persists for a static desktop, which uses the Personal vDisk (PvDisk or PvD) feature or the desktop's local hard drive to save changes, whereas a random desktop discards changes and refreshes the desktop when the user logs off.

Citrix Provisioning Services

Citrix Provisioning Services (PVS) takes a different approach from traditional desktop imaging solutions by fundamentally changing the relationship between hardware and the software that runs on it. By streaming a single shared disk image (vDisk) instead of copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage. As the number of machines continues to grow, PVS provides the efficiency of centralized management with the benefits of distributed processing.

Because machines stream disk data dynamically in real time from a single shared image, machine image consistency is ensured. In addition, large pools of machines can completely change their configuration, applications, and even OS during a reboot operation.

Citrix Personal vDisk

The Citrix PvD feature enables users to preserve customization settings and user-installed applications in a pooled desktop by redirecting the changes from the user’s pooled virtual machine to a separate PvD. During runtime, the content of the PvD is blended with the content from the base virtual machine to provide a unified experience to the end user. The PvD data is preserved during reboot and refresh operations.

Citrix Profile Management

Citrix Profile Management preserves user profiles and dynamically synchronizes them with a remote profile repository. Profile Management downloads a user’s remote profile dynamically when the user logs in to XenDesktop, and applies personal settings to desktops and applications regardless of the user’s login location or client device.

The combination of Profile Management and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization.



Virtualization layer

Microsoft Hyper-V

Microsoft Hyper-V provides a complete virtualization platform that provides flexibility and cost savings by enabling the consolidation of large, inefficient server farms into nimble and reliable cloud infrastructures. The core Microsoft virtualization components are the Microsoft Hyper-V hypervisor and the Microsoft System Center Virtual Machine Manager for system management.

The Hyper-V hypervisor transforms a computer’s physical resources by virtualizing the CPU, memory, storage, and network. This transformation creates fully functional virtual machines that run isolated and encapsulated operating systems and applications, just like physical computers do.

Hyper-V runs on a dedicated server and enables multiple operating systems to execute simultaneously on the system as virtual machines. Microsoft clustered services enable multiple Hyper-V servers to operate in a clustered configuration. The Hyper-V cluster configuration is managed as a larger resource pool through the Microsoft System Center Virtual Machine Manager. This enables dynamic allocation of CPU, memory, and storage across the cluster.

Microsoft System Center Virtual Machine Manager

Microsoft System Center Virtual Machine Manager (SCVMM) is a scalable, extensible, centralized management platform for the Hyper-V infrastructure. It provides administrators with a single interface that they can access from multiple devices for all aspects of monitoring, managing, and maintaining the virtual infrastructure.

Microsoft Hyper-V high availability

Microsoft Hyper-V’s high-availability features—such as Failover Clustering, Live Migration, and Storage Migration—enable seamless migration of virtual machines and stored files from one Hyper-V server to another with minimal or no performance impact.

Hyper-V Failover Clustering enables the virtualization layer to automatically restart virtual machines in various failure conditions. If the physical hardware has an error, the impacted virtual machines can be restarted automatically on other servers in the cluster. You can configure policies to determine which machines are restarted automatically and under what conditions these operations are performed.

Note: For Hyper-V Failover Clustering to restart virtual machines on different hardware, those servers must have resources available. The Server design considerations section provides specific recommendations to enable this functionality.
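The following sketch illustrates the spare capacity implied by the note above: reserving at least one host's worth of resources so that virtual machines can restart elsewhere after a host failure. The desktops-per-host figure is an assumption for illustration; use the validated server hardware guidance in Chapter 5.

# Illustrative N+1 host count for Hyper-V Failover Clustering (assumed figures).
import math

def hosts_required(desktops, desktops_per_host=175, reserved_hosts=1):
    """Hosts needed so the cluster still has capacity after 'reserved_hosts' failures."""
    active = math.ceil(desktops / desktops_per_host)
    return active + reserved_hosts

for count in (1750, 3500):
    print(f"{count} desktops -> {hosts_required(count)} hosts (N+1)")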

Live Migration provides migration of virtual machines within clustered and non-clustered servers with no virtual machine downtime or service disruption.

Storage Migration provides migration of virtual machine disk files within and across storage arrays with no virtual machine downtime or service disruption.



Compute layer

VSPEX defines the minimum amount of compute layer resources required, but enables the customer to implement the requirements using any server hardware that meets these requirements. For details, refer to Chapter 5.

Network layer

VSPEX defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but enables the customer to implement the requirements using any network hardware that meets these requirements. For details, refer to Chapter 5.

Storage layer

The storage layer is a key component of any cloud infrastructure solution, serving the data generated by applications and operating systems in the data center. This VSPEX solution uses XtremIO storage arrays to provide virtualization at the storage layer. The XtremIO platform provides the required storage performance, increases storage efficiency and management flexibility, and reduces total cost of ownership. This solution also uses Isilon or VNX arrays to provide storage for user data.

EMC XtremIO

The EMC XtremIO All-Flash Array is an all-new design with a revolutionary architecture. It brings together all of the necessary requirements to enable the agile data center: linear scale-out, inline all-the-time data services, and rich data center services for the workloads.

The basic hardware building block for these scale-out arrays is the X-Brick. Each X-Brick consists of two active-active controller nodes and a disk array enclosure packaged together with no single point of failure. The Starter X-Brick with 13 SSDs can be non-disruptively expanded to a full X-Brick with 25 SSDs. Up to six X-Bricks can be combined in a single scale-out cluster to increase performance and capacity in a linear fashion.

The XtremIO platform is designed to maximize the use of flash storage media. Key attributes of this platform are:

Incredibly high levels of I/O performance, particularly for random I/O workloads that are typical in virtualized environments

Consistently low (sub-millisecond) latency

True inline data reduction—the ability to remove redundant information in the data path and write only unique data on the storage array, thus lowering the amount of capacity required

XtremIO storage systems include the following components:

Host adapter ports—Provide host connectivity through fabric into the array.

Storage controllers (SCs)—The compute component of the storage array. SCs handle all aspects of data moving into, out of, and between arrays.



Disk drives—Solid-state drives (SSDs) that contain the host/application data and their enclosures.

InfiniBand switches—A switched, high-throughput, low-latency, scalable network interconnect used in multi-X-Brick configurations, with quality-of-service and failover capabilities.

XtremIO Operating System (XIOS)

The XtremIO storage cluster is managed by the XtremIO Operating System (XIOS). XIOS ensures that the system remains balanced and always delivers the highest levels of performance without administrator intervention, as follows:

Ensures that all SSDs in the system are evenly loaded, providing both the highest possible performance and the endurance to stand up to demanding workloads for the entire life of the array.

Eliminates the need to perform the complex configuration steps found on traditional arrays. There is no need to set RAID levels, determine drive group sizes, set stripe widths, set caching policies, build aggregates, or set any other configuration parameters that require specialized storage skills.

Automatically and optimally configures every volume at all times. I/O performance on existing volumes and data sets automatically increases with large cluster sizes. Every volume is capable of receiving the full performance potential of the entire XtremIO system.

Standards-based enterprise storage system

The XtremIO system interfaces with hosts using standard FC and iSCSI block interfaces. The system supports complete high-availability features, including support for multipath I/O with EMC PowerPath or native Microsoft multipath I/O, protection against failed SSDs, non-disruptive software and firmware upgrades, no single point of failure (SPOF), and hot-swappable components.

Real-time, inline data reduction

The XtremIO storage system deduplicates and compresses data, including desktop images, in real time, allowing a massive number of virtual desktops to reside in a small and economical amount of flash capacity. Furthermore, data reduction on the XtremIO array does not adversely affect input/output operations per second (IOPS) or latency performance; rather, it enhances the performance of the end-user computing environment.
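A simple way to picture the effect of inline deduplication and compression is to estimate the physical flash consumed by a set of desktops. The per-desktop image size and the combined data reduction ratio in this sketch are assumptions for illustration, because achieved ratios vary with the desktop image and workload.

# Illustrative physical-capacity estimate with inline data reduction (assumed ratio).
def physical_tb(desktops, logical_gb_per_desktop=30, data_reduction_ratio=10.0):
    """Approximate physical flash consumed, in TB, after dedupe and compression."""
    logical_tb = desktops * logical_gb_per_desktop / 1024.0
    return logical_tb / data_reduction_ratio

for count in (1750, 3500):
    print(f"{count} desktops: ~{physical_tb(count):.1f} TB of physical flash (estimate)")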

Scale-out design

The X-Brick is the fundamental building block of a scale-out XtremIO clustered system. Using a Starter X-Brick, virtual desktop deployments can start small (up to 1,750 virtual desktops) and grow to nearly any required scale by upgrading the Starter X-Brick to an X-Brick and then, if needed, configuring a larger XtremIO cluster. The system expands capacity and performance linearly as building blocks are added, making EUC sizing and management of future growth extremely simple.


VAAI integration

The XtremIO array is fully integrated with vSphere through vStorage APIs for Array Integration (VAAI). All API commands are supported, including ATS, Clone Blocks/Full Copy/XCOPY, Zero Blocks/Write Same, Thin Provisioning, and Block Delete. This, in combination with the array’s inline data reduction and in-memory metadata management, enables nearly instantaneous virtual machine provisioning and cloning and makes it possible to use large volume sizes for management simplicity.

Massive performance

The XtremIO array is designed to handle very high, sustained levels of small, random, mixed read and write I/O, as is typical in virtual desktops, and to do so with consistent extraordinarily low latency.

Fast provisioning

XtremIO arrays deliver the industry's first writeable snapshot technology that is space-efficient for both data and metadata. XtremIO snapshots carry no performance, feature, topology, or capacity-reservation limitations. With their unique in-memory metadata architecture, XtremIO arrays can instantly clone desktop environments of any size.

Ease of use

The XtremIO storage system requires only a few basic setup steps, completed in minutes, and no tuning or ongoing administration to achieve and maintain high performance. The XtremIO system can be deployment ready in less than an hour after delivery.

Security with Data at Rest Encryption (D@RE)

XtremIO arrays securely encrypt all data stored on the all-flash array, delivering protection for regulated use cases in sensitive industries such as healthcare, finance, and government. This is especially valuable for persistent virtual desktops.

Data center economics

Up to 3,500 virtual desktops are supported on an X-Brick, requiring just a few rack units of space and approximately 750 W of power.

EMC Isilon

EMC Isilon scale-out network attached storage (NAS) is ideal for storing large amounts of user data and Windows profiles in a Citrix XenDesktop infrastructure. It provides a simple, scalable, and efficient platform for storing massive amounts of unstructured data, giving applications an accessible data repository without the overhead associated with traditional storage systems. Key attributes of the Isilon platform are:

Isilon is multi-protocol, supporting Network File System (NFS), Common Internet File System (CIFS), HTTP, FTP, Hadoop Distributed File System (HDFS) for Hadoop and data analytics, and Representational State Transfer (REST) for object and cloud computing requirements.

At the client/application layer, the Isilon NAS architecture supports a wide range of operating system environments, as shown in Figure 5.


At the Ethernet level, Isilon utilizes a 10 GbE network.

Isilon’s OneFS operating system is a single file system/single volume architecture, which makes it extremely easy to manage, regardless of the number of nodes in the storage cluster.

Isilon storage systems scale from a minimum of three nodes up to 144 nodes, all of which are connected by an InfiniBand communications layer.

Figure 5. Isilon cluster components

Isilon OneFS

The Isilon OneFS operating system provides the intelligence behind all Isilon scale-out storage systems. It combines the three layers of traditional storage architectures—file system, volume manager, and data protection—into one unified software layer, creating a single intelligent file system that spans all nodes within an Isilon cluster.

Figure 6. EMC Isilon OneFS Operating System functionality

OneFS provides a number of important advantages:

Simple to Manage as a result of Isilon’s single file system, single volume, global namespace architecture

Massive Scalability with the ability to scale to 20 PB in a single volume

Unmatched Efficiency with over 80% storage utilization, automated storage tiering, and Isilon SmartDedupe


Enterprise data protection, including efficient backup and disaster recovery, and N+1 through N+4 redundancy

Robust security and compliance options with:

Role-based access control

Secure Access Zones

SEC 17a-4 compliant WORM data security

Data at Rest Encryption (D@RE) with Self-Encrypting Drives (SEDs) option

Integrated File System Auditing support

Operational Flexibility with multi-protocol support including native HDFS support; Syncplicity® support for secure mobile computing; and support for object and cloud computing including OpenStack Swift

Isilon offers a full suite of data protection and management software to help you protect your data assets, control costs, and optimize storage resources and system performance for your Big Data environment.

Data protection

SnapshotIQ: to protect data efficiently and reliably with secure, near instantaneous snapshots while incurring little to no performance overhead, and speed recovery of critical data with near-immediate, on-demand snapshot restores

SyncIQ: to replicate and distribute large, mission-critical data sets to multiple shared storage systems in multiple sites for reliable disaster recovery capability

SmartConnect: to enable client connection load balancing and dynamic NFS failover and failback of client connections across storage nodes to optimize use of cluster resources

SmartLock: to protect your critical data against accidental, premature, or malicious alteration or deletion with Isilon’s software-based approach to write once-read many (WORM) and meet stringent compliance and governance needs such as SEC 17a-4 requirements

Data management

SmartPools: to implement a highly efficient, automated, tiered storage strategy to optimize storage performance and costs

SmartDedupe: for data deduplication to reduce storage capacity requirements and associated costs by up to 35% without impacting performance

SmartQuotas: to assign and manage quotas that seamlessly partition and thin provision storage into easily managed segments at the cluster, directory, sub-directory, user, and group levels

InsightIQ: to gain innovative performance monitoring and reporting tools that can help you maximize performance of your Isilon scale-out storage system

Isilon for vCenter: to manage Isilon storage functions from vCenter


Isilon Scale-out NAS Product Family

Isilon nodes are currently available in several classes, according to their functionality:

S-Series: IOPS-intensive applications

X-Series: High-concurrency and throughput-driven workflows

NL-Series: Near-primary accessibility, with near-tape value

Performance Accelerator: Independent scaling for ultimate performance

Backup Accelerator: High-speed and scalable backup and restore solution

Figure 7. Isilon node classes

EMC VNX

The EMC VNX flash-optimized unified storage platform is ideal for storing user data and Windows profiles in a Citrix XenDesktop infrastructure. It delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today's virtualized application environments.

VNX storage includes the following components:

Host adapter ports (for block)—Provide host connectivity through fabric into the array.

Data Movers (for file)—Front-end appliances that provide file services to hosts (optional if providing CIFS/SMB or NFS services).

Storage processors (SPs)—The compute component of the storage array. SPs handle all aspects of data moving into, out of, and between arrays.

Disk drives—Disk spindles and SSDs that contain the host/application data, and their enclosures.

Note: Data Mover refers to a VNX hardware component, which has a CPU, memory, and input/output (I/O) ports. It enables the CIFS (SMB) and NFS protocols on the VNX array.


EMC VNX series

VNX includes many features and enhancements designed and built on the first generation’s success, including:

More capacity and better optimization with the VNX MCx™ technology components: Multicore Cache, Multicore RAID, and Multicore Fully Automated Storage Tiering™ (FAST) Cache

Greater efficiency with a flash-optimized hybrid array

Better protection by increasing application availability with active/active storage processors

Easier administration and deployment with the new EMC Unisphere® Management Suite

VSPEX is built with VNX to deliver even greater efficiency, performance, and scale than ever before.

Flash-optimized hybrid array

VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks.

In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. Flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high performance scalability. EMC FAST Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives. It also boosts the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance.

Data generally is accessed most frequently at the time it is created; therefore, new data is first stored on flash drives to provide the best performance. As the data ages and becomes less active over time, FAST VP tiers the data from high-performance to high-capacity drives automatically, based on customer-defined policies. This functionality has been enhanced with four times better granularity and with new FAST VP SSDs based on enterprise multilevel cell (eMLC) technology to lower the cost per gigabyte.

FAST Cache uses flash drives as an expanded cache layer for the array to dynamically absorb unpredicted spikes in system workloads. Frequently accessed data is copied to the FAST Cache in 64 KB increments. Subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives, dramatically improving the response times for the active data and reducing data hot spots that can occur within the LUN.

All VSPEX use cases benefit from the increased efficiency provided by the FAST Suite. Furthermore, VNX provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.


Unisphere Management Suite

EMC Unisphere is the central management platform for the VNX series, providing a single, combined view of file and block systems, with all features and functions available through a common interface. Unisphere is optimized for virtual applications and provides Hyper-V integration, automatically discovering virtual machines and ESX servers, and providing end-to-end, virtual-to-physical mapping. Unisphere also simplifies configuration of FAST Cache and FAST VP on VNX platforms.

The Unisphere Management Suite extends the easy-to-use interface of Unisphere to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 8, the suite also includes Unisphere Remote for centrally managing thousands of VNX and VNXe systems with new support for EMC XtremCache™.

Figure 8. EMC Unisphere Management Suite

EMC VNX Virtual Provisioning

EMC VNX Virtual Provisioning™ enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures.

Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide predictable high performance for your applications. Both LUN types benefit from the ease-of-use features of pool-based provisioning.

Pools and pool LUNs are the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and user-capacity threshold setting.

VNX file shares

In many environments, it is important to have a common location in which to store files accessed by many users. CIFS or NFS file shares, which are available from a file server, provide this ability. VNX storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency improvement features. For more information about VNX file shares, refer to EMC VNX Series: Configuring and Managing CIFS on VNX.

EMC SnapSure

EMC SnapSure™ is a VNX software feature that lets you create and manage checkpoints that are point-in-time logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks; when a block within the PFS is modified, a copy containing the block's original contents is saved to a separate volume called SavVol.

Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks from the SavVol and the unchanged blocks that remain in the PFS, according to a bitmap and blockmap data-tracking structure. These blocks combine to provide a complete point-in-time image called a checkpoint.

A checkpoint reflects the state of a PFS at the time the checkpoint is created. SnapSure supports the following checkpoint types:

Read-only checkpoints—Read-only file systems created from a PFS

Writeable checkpoints—Read/write file systems created from a read-only checkpoint

SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.

Note: Each writeable checkpoint is associated with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint.

For more details, refer to Using VNX SnapSure.
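The copy-on-first-modify principle described above can be illustrated with a small conceptual sketch. This is not VNX code; the class, block representation, and method names are illustrative assumptions intended only to show why a checkpoint needs just the original copies of modified blocks plus the unchanged blocks still in the PFS.

```python
# Conceptual sketch of SnapSure's copy-on-first-modify principle.
# Illustration only; it does not represent VNX internals.

class ProductionFileSystem:
    def __init__(self, blocks):
        self.blocks = dict(blocks)      # current PFS block contents
        self.savvol = {}                # original contents of modified blocks
        self.checkpoint_taken = False

    def create_checkpoint(self):
        # A checkpoint only records which blocks change afterwards.
        self.savvol = {}
        self.checkpoint_taken = True

    def write(self, block_id, data):
        # Copy the original block into the SavVol on the FIRST modify only.
        if self.checkpoint_taken and block_id not in self.savvol:
            self.savvol[block_id] = self.blocks[block_id]
        self.blocks[block_id] = data

    def read_checkpoint(self, block_id):
        # The point-in-time image combines saved originals with
        # unchanged blocks still in the PFS.
        return self.savvol.get(block_id, self.blocks[block_id])

pfs = ProductionFileSystem({0: "A", 1: "B", 2: "C"})
pfs.create_checkpoint()
pfs.write(1, "B2")               # original "B" is copied to the SavVol
pfs.write(1, "B3")               # subsequent writes are not copied again
print(pfs.read_checkpoint(1))    # "B" - the state at checkpoint time
print(pfs.read_checkpoint(2))    # "C" - unchanged block read from the PFS
```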

EMC Storage Integrator for Windows

EMC Storage Integrator (ESI) for Windows is a management interface that lets you view, provision, and manage block and file storage for Windows environments. ESI simplifies the process of creating and provisioning storage to Hyper-V servers as a local disk or a mapped share. ESI also supports storage discovery and provisioning through PowerShell.

For more information, refer to the ESI for Windows documentation, available on EMC Online Support.

Data protection layer

Backup and recovery provides data protection by backing up data files or volumes using defined schedules and restoring data from the backup if recovery is needed after a disaster. Avamar delivers the protection confidence needed to accelerate deployment of VSPEX end-user computing solutions.

Avamar empowers administrators to centrally back up and manage policies and end-user computing infrastructure components, while enabling end users to efficiently recover their own files from a simple and intuitive web-based interface. By moving only new, unique sub-file data segments, Avamar delivers fast full backups daily, with up to 90 percent reduction in backup times, while reducing the required daily network bandwidth by up to 99 percent. All Avamar recoveries are single-step for simplicity.

With Avamar, you can choose to back up virtual desktops using either image-level or guest-based operations. Avamar runs the deduplication engine at the virtual machine disk level for image backup and at the file level for guest-based backups.

Image-level protection enables backup clients to make a copy of all the virtual disks and configuration files associated with the particular virtual desktop in the event of hardware failure, corruption, or accidental deletion. Avamar significantly reduces the backup and recovery time of the virtual desktop by using change block tracking (CBT) on both backup and recovery.

Guest-based protection runs like traditional backup solutions. Guest-based backup can be used on any virtual machine running an OS for which an Avamar backup client is available. It enables fine-grained control over the content and inclusion/exclusion patterns. This can be used to prevent data loss due to user errors, such as accidental file deletion. Installing the desktop/laptop agent on the system to be protected enables end-user, self-service recovery of data.

Citrix ShareFile StorageZones solution

Citrix ShareFile is a cloud-based file sharing and storage service for enterprise-class storage and security. ShareFile enables users to securely share documents with other users. ShareFile users include employees and users who are outside of the enterprise directory (referred to as clients).

ShareFile StorageZones enables businesses to share files across the organization while meeting compliance and regulatory concerns. StorageZones enables customers to keep their data on their own on-premises storage systems. It facilitates sharing of large files with full encryption and enables the synchronization of files with multiple devices.

By keeping data on-premises and closer to users than data residing on the public cloud, StorageZones can provide improved performance and security.

The main features available to ShareFile StorageZones users are:

Use of StorageZones with or instead of ShareFile-managed cloud storage

Ability to configure Citrix CloudGateway Enterprise to integrate ShareFile services with Citrix Receiver for user authentication and user provisioning

Automated reconciliation between the ShareFile cloud and an organization’s StorageZones deployment


Automated antivirus scans of uploaded files

File recovery from Storage Center backup (Storage Center is the server component of StorageZones). StorageZones enables you to browse the file records for a particular date and time and tag any files and folders to restore from Storage Center backup.

With additional infrastructure, the VSPEX End-User Computing for Citrix XenDesktop solution supports ShareFile StorageZones with Storage Center.


Chapter 4 Sizing the Solution

This chapter presents the following topics:

Overview

Reference workload

VSPEX Private Cloud requirements

VSPEX/XtremIO array configurations

Isilon configuration

VNX array configurations

Choosing the appropriate reference architecture


Overview

This chapter describes how to design a VSPEX End-User Computing for Citrix XenDesktop solution and how to size it to fit the customer’s needs. It introduces the concepts of a reference workload, building blocks, and validated end-user computing maximums, and describes how to use these to design your solution. Table 4 outlines the high-level steps you need to complete when sizing the solution.

Table 4. VSPEX end-user computing: Design process

Step | Action
1 | Use the Customer Sizing Worksheet in Appendix A to collect the customer requirements for the end-user computing environment.
2 | Use the EMC VSPEX Sizing Tool to determine the recommended VSPEX reference architecture for your end-user computing solution, based on the customer requirements collected in Step 1.

Note: If the Sizing Tool is not available, you can manually size the end-user computing solution using the guidelines in this chapter.

Reference workload

VSPEX defines a reference workload to represent a unit of measure for quantifying the resources in the solution reference architectures. By comparing the customer’s actual usage to this reference workload, you can determine which reference architecture to choose as the basis for the customer’s VSPEX deployment.

For VSPEX end-user computing solutions, the reference workload is defined as a single virtual desktop—the reference virtual desktop—with the workload characteristics listed in Table 5.

To determine the equivalent number of reference virtual desktops for a particular resource requirement, use the VSPEX Customer Sizing Worksheet to convert the total actual resources required for all desktops into the reference virtual desktop format.

Table 5. Reference virtual desktop characteristics

Characteristic | Value
Desktop OS (VDI) | Windows 7 Enterprise Edition (32-bit) or Windows 8.1 Enterprise Edition (32-bit)
Server OS (HSD) | Windows Server 2012 R2
Virtual processors per virtual desktop | 1
RAM per virtual desktop | 2 GB
Average IOPS per virtual desktop at steady state | 10
Applications | Internet Explorer 11 (10 for Windows 7), Office 2010, Adobe Reader XI, Adobe Flash Player 11 ActiveX, Doro PDF printer 1.8
Workload generator | Login VSI 4.1.2
Workload type | Office worker

Note: We recommend formatting Windows C: and Cluster Shared Volumes (CSV) with Allocation Unit Size set to 8192 (8 KB).

This desktop definition is based on user data that resides on shared storage. The I/O profile is defined by using a test framework that runs all desktops concurrently with a steady load generated by the constant use of office-based applications such as browsers and office productivity software.

Login VSI

This solution is verified with performance testing conducted using Login VSI (www.loginvsi.com), the industry-standard load testing solution for virtualized desktop environments.

Login VSI provides proactive performance management solutions for virtualized desktop and server environments. Enterprise IT departments use Login VSI products in all phases of their virtual desktop deployments, from planning to deployment to change management, for more predictable performance, higher availability, and a more consistent end-user experience. The world's leading virtualization vendors use the flagship product, Login VSI, to benchmark performance. With minimal configuration, Login VSI works with VMware Horizon View, Citrix XenDesktop and XenApp, Microsoft Remote Desktop Services (Terminal Services), and any other Windows-based virtual desktop solution.

For more information, download a trial at www.loginvsi.com.

VSPEX Private Cloud requirements

This VSPEX End-User Computing Proven Infrastructure requires multiple application servers. Unless otherwise specified, all servers use Microsoft Windows Server 2012 R2 as the base OS. Table 6 lists the minimum requirements for each infrastructure server.

Table 6. Infrastructure server minimum requirements

Server | CPU | RAM (GB) | IOPS | Storage capacity (GB)
Domain controllers (each) | 2 virtual CPUs (vCPUs) | 4 | 25 | 32
SQL Server | 2 vCPUs | 6 | 100 | 200
SCVMM server | 2 vCPUs | 4 | 100 | 60
Citrix XenDesktop Controllers (each) | 2 vCPUs | 8 | 50 | 32
Citrix PVS servers (each) | 4 vCPUs | 20 | 75 | 150

VSPEX for Citrix XenDesktop with ShareFile StorageZones solution provides the requirements for the optional Citrix ShareFile component.

Private cloud storage layout

This solution requires a 1.5 TB volume to host the infrastructure virtual machines, which can include the Microsoft SCVMM server, Citrix XenDesktop Controllers, Citrix PVS servers, optional Citrix ShareFile servers, Microsoft Active Directory Server, and Microsoft SQL Server.

VSPEX/XtremIO array configurations

Validated XtremIO configurations

We validated the VSPEX/XtremIO end-user computing configurations on the Starter X-Brick and X-Brick, which vary according to the number of SSDs they include and their total available capacity. For each array, EMC recommends a maximum VSPEX end-user computing configuration as outlined in this section.

The following XtremIO validated disk layouts provide support for a specified number of virtual desktops at a defined performance level. This VSPEX solution supports two XtremIO X-Brick configurations, which are selected based on the number of desktops being deployed:

XtremIO Starter X-Brick—Includes 13 SSDs and is validated to support up to 1,750 virtual desktops

XtremIO X-Brick—Includes 25 SSDs and is validated to support up to 3,500 virtual desktops

The XtremIO storage configuration required for this solution is in addition to the storage required by the VSPEX private cloud that supports the solution’s infrastructure services. For more information about the VSPEX private cloud storage pool, refer to the VSPEX Proven Infrastructure Guide in Essential reading.

XtremIO storage layout

Table 7 shows the number and size of the XtremIO volumes that are presented to the Hyper-V servers to host the virtual desktops. Two configurations are listed for each desktop type: one that includes the space required to use the Citrix Personal vDisk (PvD) feature, and one that does not, for solutions that will not use that component of Citrix XenDesktop. Note that when deploying Citrix desktops using PVS or PvD, the following values are configured by default:

PVS write cache disk = 6 GB

Citrix Personal vDisk (PvD) = 10 GB

If either of these values is changed from the default, the volume sizes must be adjusted accordingly; the sketch following Table 7 illustrates one way to estimate the adjustment.


Table 7. XtremIO storage layout

XtremIO configuration | Number of desktops | Number of volumes | Type of desktop | Volume size (GB)
Starter X-Brick | 1,750 | 7 | PVS streamed | 2,500
Starter X-Brick | 1,750 | 7 | PVS with PvD streamed | 5,000
Starter X-Brick | 1,750 | 14 | MCS | 750
Starter X-Brick | 1,750 | 14 | MCS with PvD | 2,000
X-Brick | 3,500 | 14 | PVS streamed | 2,500
X-Brick | 3,500 | 14 | PVS with PvD streamed | 5,000
X-Brick | 3,500 | 28 | MCS | 750
X-Brick | 3,500 | 28 | MCS with PvD | 2,000
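To make the note about non-default PVS write cache and PvD sizes concrete, the hedged sketch below scales the validated volume sizes from Table 7 in proportion to the per-desktop disk footprint. This proportional rule is an assumption for illustration only, not an EMC sizing formula, and the helper name is hypothetical; use the VSPEX Sizing Tool for actual volume sizing.

```python
# Rough, conservative heuristic (an assumption for illustration, not an EMC
# formula): scale the validated volume sizes from Table 7 in proportion to
# the per-desktop disk footprint when the PVS write cache or PvD size is
# changed from the 6 GB / 10 GB defaults.

DEFAULT_WRITE_CACHE_GB = 6
DEFAULT_PVD_GB = 10

def scaled_volume_gb(validated_volume_gb, write_cache_gb, pvd_gb=0, uses_pvd=False):
    default_footprint = DEFAULT_WRITE_CACHE_GB + (DEFAULT_PVD_GB if uses_pvd else 0)
    new_footprint = write_cache_gb + (pvd_gb if uses_pvd else 0)
    return validated_volume_gb * new_footprint / default_footprint

# PVS streamed volumes were validated at 2,500 GB with the 6 GB write cache;
# doubling the cache to 12 GB suggests roughly 5,000 GB volumes.
print(scaled_volume_gb(2500, write_cache_gb=12))                           # 5000.0
# PVS with PvD volumes were validated at 5,000 GB (6 GB cache + 10 GB PvD).
print(scaled_volume_gb(5000, write_cache_gb=6, pvd_gb=20, uses_pvd=True))  # 8125.0
```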

Expanding existing VSPEX end-user computing environments

This solution supports a flexible implementation model where it is easy to expand your environment as the needs of the business change.

To support future expansion, the XtremIO Starter X-Brick can be non-disruptively upgraded to an X-Brick by installing the XtremIO expansion kit, which adds twelve 400 GB SSDs. The resulting X-Brick supports up to 3,500 desktops.

To support more than 3,500 reference virtual desktops, XtremIO supports scaling out online by adding more X-Bricks. Each additional X-Brick increases performance and virtual desktop capacity linearly. Two X-Brick, four X-Brick, or six X-Brick XtremIO clusters are all valid configurations.

Isilon configuration

This solution uses the Isilon system for storing user data, home directories, and profiles. A three-node Isilon cluster is used to support 2,500 users’ data with the reference workload validated in this solution. Each node has 36 drives (two Enterprise Flash Drives (EFD) and 34 Serial ATA (SATA)) and two 10 GbE Ethernet ports. Table 8 provides detailed information.

Table 8. User data resource requirement on Isilon

Number of reference virtual desktops | Number of nodes | Node type | Max capacity/user (GB)
1~2,500 | 3 | X410 | 36
2,501~3,500 | 4 | X410 | 35
3,501~5,000 | 5 | X410 | 30


Table 8 shows the recommended Isilon configuration, with the total number of CIFS calls as the fulfillment baseline. Each X410 node used in this solution provides 30 TB of usable capacity. Additional nodes can be added if more capacity per user is needed. This solution can also use other Isilon node types. Refer to the VSPEX Sizing Tool or check with your EMC sales representative for more information.
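The per-user capacity column in Table 8 can be checked against the 30 TB of usable capacity per X410 node stated above. The short sketch below performs that arithmetic; the function name is illustrative and decimal units (1 TB = 1,000 GB) are assumed.

```python
# Quick check of the per-user capacity values in Table 8, assuming 30 TB of
# usable capacity per X410 node and decimal units (1 TB = 1,000 GB).

USABLE_TB_PER_X410_NODE = 30

def max_capacity_per_user_gb(nodes, users):
    return nodes * USABLE_TB_PER_X410_NODE * 1000 / users

print(max_capacity_per_user_gb(3, 2500))   # 36.0 GB per user
print(max_capacity_per_user_gb(4, 3500))   # ~34 GB per user (Table 8 lists 35)
print(max_capacity_per_user_gb(5, 5000))   # 30.0 GB per user
```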

VNX array configurations

This solution also supports using VNX series storage arrays for user data storage, with FAST Cache enabled for the related storage pools. The VNX5400™ can support up to 1,750 users and the VNX5600™ up to 3,500 users with the reference workload validated in this solution. Table 9 shows the detailed requirements for 1,250 to 3,500 users.

Table 9. User data resource requirement on VNX

Number of users | VNX model | SSDs for FAST Cache | Number of 2 TB NL-SAS drives | Max capacity/user (GB)
1,250 | VNX5400 | 2 | 16 | 15
1,750 | VNX5400 | 2 | 32 | 22
2,500 | VNX5600 | 4 | 40 | 19
3,500 | VNX5600 | 4 | 48 | 17

Table 9 shows the recommended VNX configuration, with the total number of CIFS calls as the fulfillment baseline. Each 6+2 RAID 6 group of 2 TB NL-SAS drives used in this solution provides 10 TB of usable capacity. You can add more 6+2 RAID 6 groups of 2 TB NL-SAS drives if more capacity per user is needed.

Refer to the VSPEX Sizing Tool or check with your EMC sales representative for more information about larger-scale configurations.
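A similar check applies to the NL-SAS drive counts in Table 9, using the 10 TB of usable capacity per 6+2 RAID 6 group stated above. The helper below is an illustrative sketch with assumed decimal units (1 TB = 1,000 GB), not an EMC sizing formula.

```python
# Sketch of the NL-SAS capacity math behind Table 9, assuming 10 TB of usable
# capacity per 6+2 RAID 6 group of 2 TB drives and decimal units.
import math

USABLE_TB_PER_RAID_GROUP = 10
DRIVES_PER_RAID_GROUP = 8        # 6 data drives + 2 parity drives

def nl_sas_drives_needed(users, gb_per_user):
    required_tb = users * gb_per_user / 1000
    groups = math.ceil(required_tb / USABLE_TB_PER_RAID_GROUP)
    return groups * DRIVES_PER_RAID_GROUP

print(nl_sas_drives_needed(1750, 22))   # 32 drives, matching Table 9
print(nl_sas_drives_needed(3500, 17))   # 48 drives, matching Table 9
```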

EMC FAST VP

If multiple drive types have been implemented, FAST VP can be enabled to automatically tier data to balance differences in performance and capacity.

Note: FAST VP can provide performance improvements when implemented for user data and roaming profiles.

VNX shared file systems

The virtual desktops use four shared file systems—two for the Citrix XenDesktop Profile Management repositories and two to redirect user storage that resides in home directories. In general, redirecting users’ data out of the base image to VNX for File enables centralized administration and data protection and makes the desktops more stateless. Each file system is exported to the environment through a CIFS share. Each Profile Management repository share and home directory share serves an equal number of users.


Choosing the appropriate reference architecture

To choose the appropriate reference architecture for a customer environment, you must determine the resource requirements of the environment and then translate these requirements to an equivalent number of reference virtual desktops that have the characteristics defined in Table 5. This section describes how to use the Customer Sizing Worksheet to simplify the sizing calculations, as well as additional factors to consider when deciding which architecture to deploy.

Using the Customer Sizing Worksheet

The Customer Sizing Worksheet helps you to assess the customer environment and calculate the sizing requirements of the environment.

Table 10 shows a completed worksheet for a sample customer environment. Appendix A provides a blank Customer Sizing Worksheet that you can print out and use to help size the solution for a customer.

Table 10. Example Customer Sizing Worksheet

User type | vCPUs | Memory | IOPS | Equivalent reference virtual desktops | No. of users | Total reference desktops
Heavy users: resource requirements | 2 | 8 GB | 12 | --- | --- | ---
Heavy users: equivalent reference virtual desktops | 2 | 4 | 2 | 4 | 200 | 800
Moderate users: resource requirements | 2 | 4 GB | 8 | --- | --- | ---
Moderate users: equivalent reference virtual desktops | 2 | 2 | 1 | 2 | 200 | 400
Typical users: resource requirements | 1 | 2 GB | 8 | --- | --- | ---
Typical users: equivalent reference virtual desktops | 1 | 1 | 1 | 1 | 1,200 | 1,200
Total | | | | | | 2,400

To complete the Customer Sizing Worksheet:

1. Identify the user types planned for migration into the VSPEX end-user computing environment and the number of users of each type.

2. For each user type, determine the compute resource requirements in terms of vCPUs, memory (GB), storage performance (IOPS), and storage capacity.

3. For each resource type and user type, determine the equivalent reference virtual desktops requirements—that is, the number of reference virtual desktops required to meet the specified resource requirements.

4. Determine the total number of reference desktops needed from the resource pool for the customer environment.


Determining the resource requirements

CPU

The reference virtual desktop outlined in Table 5 assumes that most desktop applications are optimized for a single CPU. If one type of user requires a desktop with multiple vCPUs, modify the proposed virtual desktop count to account for the additional resources. For example, if you virtualize 100 desktops, but 20 users require two CPUs instead of one, consider that your pool needs to provide 120 virtual desktops of capability.

Memory

Memory plays a key role in ensuring application functionality and performance. Each group of desktops will have different targets for the available memory that is considered acceptable. Like the CPU calculation, if a group of users requires additional memory resources, simply adjust the number of planned desktops to accommodate the additional resource requirements.

For example, if there are 200 desktops to be virtualized, but each one needs 4 GB of memory instead of the 2 GB that the reference virtual desktop provides, plan for 400 reference virtual desktops.

IOPS

The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications that should be representative of the majority of virtual desktop implementations.

Storage capacity

The storage capacity requirement for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops in this solution rely on additional shared storage for user profile data and user documents. This requirement is an optional component that can be met by the addition of specific storage hardware defined in the solution. It can also be met by using existing file shares in the environment.

Determining the equivalent reference virtual desktops

With all of the resources defined, you determine the number of equivalent reference virtual desktops by using the relationships indicated in Table 11. Round all values up to the closest whole number.

Table 11. Reference virtual desktop resources

Resource | Value for reference virtual desktop | Relationship between requirements and equivalent reference virtual desktops
CPU | 1 | Equivalent reference virtual desktops = resource requirements
Memory | 2 | Equivalent reference virtual desktops = (resource requirements)/2
IOPS | 10 | Equivalent reference virtual desktops = (resource requirements)/10

For example, the heavy user type in Table 10 requires 2 vCPUs, 12 IOPS, and 8 GB of memory for each desktop. This translates to two reference virtual desktops of CPU, four reference virtual desktops of memory, and two reference virtual desktops of IOPS.

The number of reference virtual desktops required for each user type then equals the maximum required for an individual resource. For example, the number of equivalent reference virtual desktops for the heavy user type in Table 10 is four, as this number meets all of the resource requirements—IOPS, vCPU, and memory.

To calculate the total number of reference desktops for a user type, you multiply the number of equivalent reference virtual desktops for that user type by the number of users.
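The worksheet arithmetic described above can be sketched in a few lines of code. The sketch below applies the Table 11 relationships and the maximum rule to the example user types from Table 10; the function and variable names are illustrative, and the VSPEX Sizing Tool remains the recommended way to perform this calculation.

```python
# Sketch of the Customer Sizing Worksheet arithmetic, applying the Table 11
# relationships (1 vCPU, 2 GB RAM, 10 IOPS per reference desktop) and the
# maximum rule to the example user types in Table 10.
import math

def equivalent_reference_desktops(vcpus, memory_gb, iops):
    # Round each resource up, then take the maximum across the three resources.
    return max(math.ceil(vcpus / 1),
               math.ceil(memory_gb / 2),
               math.ceil(iops / 10))

user_types = {
    # name: (vCPUs, memory in GB, IOPS, number of users)
    "Heavy users":    (2, 8, 12, 200),
    "Moderate users": (2, 4, 8, 200),
    "Typical users":  (1, 2, 8, 1200),
}

total = 0
for name, (vcpus, memory_gb, iops, users) in user_types.items():
    per_user = equivalent_reference_desktops(vcpus, memory_gb, iops)
    total += per_user * users
    print(name, per_user, "reference desktops per user")

print("Total reference desktops required:", total)   # 2400, as in Table 10
```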

Determining the total reference virtual desktops

After the worksheet is completed for each user type that the customer wants to migrate into the virtual infrastructure, you compute the total number of reference virtual desktops required in the resource pool by calculating the sum of the total reference virtual desktops for all user types. In the example in Table 10, the total is 2,400 virtual desktops.

Selecting a reference architecture

This VSPEX end-user computing reference architecture supports two separate points of scale: a Starter X-Brick capable of supporting up to 1,750 reference desktops, and an X-Brick capable of hosting up to 3,500 reference desktops. Use the total reference virtual desktops value from the completed Customer Sizing Worksheet to verify that this reference architecture is adequate for the customer requirements. In the example in Table 10, the customer requires 2,400 virtual desktops of capability from the pool. Therefore, this reference architecture provides sufficient resources for current needs as well as some room for growth.

However, there may be other factors to consider when verifying that this reference architecture will perform as intended. These factors can include concurrency and desktop workload.

Concurrency

The reference workload used to validate this solution assumes that all desktop users are active at all times. In other words, we tested this 3,500-desktop reference architecture with 3,500 desktops, all generating workload in parallel, all booted at the same time, and so on. If the customer expects to have 3,500 users, but only 50 percent of them are logged on at any given time due to time zone differences or alternate shifts, the reference architecture may be able to support additional desktops in this case.


Heavier desktop workloads

The reference workload is considered a typical office worker load. However, some customers’ users might have a more active profile.

If a company has 3,500 users and, due to custom corporate applications, each user generates 50 predominantly write IOPS as compared to the 10 IOPS used in the reference workload, this customer will need 175,000 IOPS (3,500 users x 50 IOPS per desktop). This configuration would be underpowered in this case because the proposed I/O load is greater than the array maximum of 100,000 write IOPS. This company would need to deploy an additional X-Brick, reduce their current I/O load, or reduce the total number of desktops to ensure that the storage array performs as required.
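A quick way to express this kind of headroom check is sketched below. The 100,000 write IOPS ceiling is the array maximum cited in the paragraph above; the function name is illustrative.

```python
# Simple headroom check for the heavier-workload example above.
ARRAY_MAX_WRITE_IOPS = 100_000   # X-Brick maximum cited in the text

def iops_headroom_ok(desktops, iops_per_desktop):
    required = desktops * iops_per_desktop
    return required, required <= ARRAY_MAX_WRITE_IOPS

print(iops_headroom_ok(3500, 10))   # (35000, True)   - reference workload
print(iops_headroom_ok(3500, 50))   # (175000, False) - exceeds a single X-Brick
```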

Fine tuning hardware resources

In most cases, the Customer Sizing Worksheet suggests a reference architecture adequate for the customer's needs. However, in some cases you may want to further customize the hardware resources available to the system. A complete description of the system architecture is beyond the scope of this document, but you can customize your solution further at this point.

Storage resources

The XtremIO array is deployed in one of two specialized configurations: a Starter X-Brick or an X-Brick. While more X-Bricks can be added to increase the capacity or performance capabilities of the XtremIO cluster, this solution is based on either a Starter X-Brick or a single X-Brick. The XtremIO array requires no tuning, and the number of SSDs available in the array is fixed. Use the VSPEX Sizing Tool or the Customer Sizing Worksheet to verify that the XtremIO array can provide the necessary levels of capacity and performance.

Server resources

For the server resources in the solution, it is possible to customize the hardware resources more effectively. To do this, first total the resource requirements for the server components, as shown in Table 12. We added Total CPU resources and Total memory resources columns to the worksheet.


Table 12. Server resource component totals

User type | vCPUs | Memory (GB) | Number of users | Total CPU resources | Total memory resources (GB)
Heavy users | 2 | 8 | 200 | 400 | 1,600
Moderate users | 2 | 4 | 200 | 400 | 800
Typical users | 1 | 2 | 1,200 | 1,200 | 2,400
Total | | | | 2,000 | 4,800

The example in Table 12 requires 2,000 vCPUs and 4,800 GB of memory. The reference architectures assume five desktops per physical processor core and no memory over-provisioning. This converts to 400 processor cores and 4,800 GB of memory for this example. Use these calculations to more accurately determine the total server resources required.
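The conversion described above can be expressed as a short calculation. The totals come from Table 12 and the five-desktops-per-core assumption stated in this section; the variable names are illustrative.

```python
# Sketch of the server-resource conversion described above, using the totals
# from Table 12 and the stated assumptions of five desktops per physical core
# and no memory over-provisioning.
import math

DESKTOPS_PER_CORE = 5        # 5:1 vCPU-to-physical-core ratio
TOTAL_VCPUS = 2000           # from Table 12
TOTAL_MEMORY_GB = 4800       # from Table 12

physical_cores = math.ceil(TOTAL_VCPUS / DESKTOPS_PER_CORE)
print(physical_cores, "physical processor cores")        # 400 for this example
print(TOTAL_MEMORY_GB, "GB of RAM, with no over-provisioning")
```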

Note: Keep high availability requirements in mind when customizing the resource pool hardware.

Summary

EMC considers the requirements stated in this solution to be the minimum set of resources needed to handle the workloads defined for a reference virtual desktop. In any customer implementation, the load of a system can vary over time as users interact with the system. If the number of customer virtual desktops differs significantly from the reference definition and varies within the same resource group, you might need to add more of that resource to the system.


Chapter 5 Solution Design Considerations and Best Practices

This chapter presents the following topics:

Overview

Server design considerations

Network design considerations

Storage design considerations

High availability and failover

Validation test profile

EMC Data Protection configuration guidelines

VSPEX for Citrix XenDesktop with ShareFile StorageZones solution


Overview

This chapter describes best practices and considerations for designing the VSPEX End-User Computing solution. For more information on deployment best practices of various components of the solution, refer to the vendor-specific documentation.

Server design considerations

VSPEX solutions are designed to run on a wide variety of server platforms. VSPEX defines the minimum CPU and memory resources required, but not a specific server type or configuration. The customer can use any server platform and configuration that meets or exceeds the minimum requirements.

For example, Figure 9 shows how a customer could implement the same server requirements by using either white-box servers or high-end servers. Both implementations achieve the required number of processor cores and amount of RAM, but the number and type of servers differ.

Figure 9. Compute layer flexibility


The choice of a server platform is not only based on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For example:

From a virtualization perspective, if a system’s workload is well understood, features such as memory ballooning and transparent page sharing can reduce the aggregate memory requirement.

If the virtual machine pool does not have a high level of peak or concurrent usage, you can reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, you might need to increase the number of CPUs and the amount of memory.

The server infrastructure must meet the following minimum requirements:

Sufficient CPU cores and memory to support the required number and types of virtual machines

Sufficient network connections to enable redundant connectivity to the system switches

Sufficient excess capacity to enable the environment to withstand a server failure and failover

Server best practices

For this solution, EMC recommends that you consider the following best practices for the server layer:

Identical server units—Use identical or at least compatible servers to ensure that they share similar hardware configurations. VSPEX implements hypervisor-level high-availability technologies that might require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.

Recent processor technologies—For new deployments, use recent revisions of common processor technologies. It is assumed that these will perform as well as, or better than, the systems used to validate the solution.

High availability—Implement the high-availability features available in the virtualization layer to ensure that the compute layer has sufficient resources to accommodate at least single server failures. This will also allow you to implement minimal-downtime upgrades. High availability and failover provides further details.

Note: When implementing hypervisor layer-high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.

Resource utilization—In any running system, monitor the utilization of resources and adapt as needed. For example, the reference virtual desktop and required hardware resources in this solution assume that there are no more than five vCPUs for each physical processor core (5:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual desktops, but this ratio may not be appropriate in all cases. EMC recommends monitoring CPU utilization at the hypervisor layer to determine if more resources are required, and adding them as needed.

Validated server hardware

Table 13 identifies the server hardware and the configurations validated in this solution.

Table 13. Server hardware

Servers for virtual desktops | Configuration
CPU | 1 vCPU per desktop (5 desktops per core); 350 cores across all servers for 1,750 virtual desktops; 700 cores across all servers for 3,500 virtual desktops
Memory | 2 GB RAM per virtual machine; 3.5 TB RAM across all servers for 1,750 virtual desktops; 7 TB RAM across all servers for 3,500 virtual desktops; 2 GB RAM reservation per Hyper-V host
Network | 3 x 10 GbE NICs per blade chassis or 6 x 1 GbE NICs per standalone server

Notes:

The 5:1 vCPU-to-physical-core ratio applies to the reference workload defined in this Design Guide. When deploying Avamar, add CPU and RAM as needed for components that are CPU or RAM intensive. Refer to the relevant product documentation for information on Avamar resource requirements.

No matter how many servers you deploy to meet the minimum requirements in Table 13, always add one more server to support Hyper-V HA. This server should have sufficient capacity to provide a failover platform in the event of a hardware outage.

Hyper-V memory virtualization

Microsoft Hyper-V has a number of advanced features that help optimize performance and overall use of resources. This section describes the key features for memory management and considerations for using them with your VSPEX solution.

Figure 10 illustrates how a single hypervisor consumes memory from a pool of resources. Hyper-V memory management features such as Dynamic Memory can reduce total memory usage and increase consolidation ratios in the hypervisor.


Figure 10. Hypervisor memory consumption

Memory virtualization techniques enable the Hyper-V hypervisor to abstract physical host resources, such as dynamic memory, to provide resource isolation across multiple virtual machines, while avoiding resource exhaustion. In cases where advanced processors (such as Intel processors with EPT support) are deployed, memory abstraction takes place within the CPU. Otherwise, it occurs within the hypervisor itself.

Hyper-V provides several memory management techniques such as Dynamic Memory, non-uniform memory access, and Smart Paging.

Dynamic Memory

Dynamic Memory increases physical memory efficiency by treating memory as a shared resource and allocating it to the virtual machines dynamically. The actual consumed memory of each virtual machine is adjusted on demand. Dynamic Memory enables more virtual machines to run by reclaiming unused memory from idle virtual machines. In Windows Server 2012, Dynamic Memory can increase the maximum memory available to virtual machines.

Page 50: EMC VSPEX End-User Computing: Citrix XenDesktop ... … · EMC VSPEX END-USER COMPUTING: Citrix XenDesktop 7.6 ... 8 EMC VSPEX End-User Computing Citrix ... Infrastructure for Citrix

Chapter 5: Solution Design Considerations and Best Practices

50 EMC VSPEX End-User Computing Citrix XenDesktop 7.6 and Microsoft Hyper-V with EMC XtremIO Design Guide

Non-uniform memory access

Non-uniform memory access (NUMA) is a multinode computer architecture in which a CPU can access memory on a remote node, at a cost in performance. To avoid remote-node memory access, Windows Server 2012 employs processor affinity that strives to keep threads pinned to a particular CPU. In previous versions of Windows, this capability was available only to the host; Windows Server 2012 extends it to virtual machines, which improves their performance.

Smart Paging

With Dynamic Memory, Hyper-V allows virtual machines to exceed the physical memory available. This means that when a virtual machine’s minimum memory is less than its start-up memory, Hyper-V might not always have additional memory available to meet the machine’s start-up requirements. Smart Paging bridges the gap between minimum memory and start-up memory and allows virtual machines to restart reliably by using disk resources as a temporary memory replacement. It swaps out less-used memory to disk and swaps it back in when needed. However, this can degrade performance. Hyper-V continues to use guest paging when the host memory is oversubscribed, because it is more efficient than Smart Paging.

Memory configuration guidelines

Proper sizing and configuration of the solution requires care when configuring server memory. This section provides guidelines for allocating memory to virtual machines and takes into account Hyper-V memory overhead and the virtual machine memory settings.

Hyper-V memory overhead

Virtualization of memory resources incurs associated overhead, including the memory consumed by the Hyper-V parent partition, and additional overhead for each virtual machine. For this solution, leave at least 2 GB of memory for the Hyper-V parent partition.

Allocating memory to virtual machines

Server capacity is required for two purposes in the solution:

To support required infrastructure services such as authentication and authorization, DNS, and database

For further details on the hosting requirements for these infrastructure services, refer to the VSPEX Private Cloud Proven Infrastructure Guide listed in Essential reading.

To support the virtualized desktop infrastructure

In this solution, each virtual desktop is assigned 2 GB of memory, as defined in Table 13 on page 48. We validated the solution with statically assigned memory and with no over-commitment of memory resources. If memory over-commit is used in a real-world environment, you should regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.
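As a minimal Python sketch of the per-host arithmetic implied by these guidelines (2 GB per desktop, statically assigned with no over-commitment, and at least 2 GB reserved for the Hyper-V parent partition), where the host RAM value is a hypothetical placeholder:

def desktops_per_host(host_ram_gb, vm_ram_gb=2, parent_reservation_gb=2):
    """Maximum statically assigned desktops per Hyper-V host, with no memory over-commitment."""
    return (host_ram_gb - parent_reservation_gb) // vm_ram_gb

print(desktops_per_host(256))  # a hypothetical 256 GB host accommodates 127 desktops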


Network design considerations

VSPEX solutions define minimum network requirements and provide general guidance on network architecture while enabling customers to choose any network hardware that meets the requirements. If additional bandwidth is needed, add it at both the storage array and the hypervisor host. The options for network connectivity on the server depend on the type of server.

For reference purposes in the validated environment, EMC assumes that each virtual desktop generates 10 IOPS with an average I/O size of 4 KB, which works out to at least 40 KB/s of storage-network traffic per desktop. For an environment rated for 1,750 virtual desktops, this means a minimum of approximately 70 MB/s, which is well within the bounds of modern networks (a quick calculation appears below). However, this figure does not account for other operations. For example, additional bandwidth is needed for the following operations:

User network traffic

Virtual desktop migration

Administrative and management traffic

The requirements for each of these operations depend on how the environment is used. It is not practical to provide concrete numbers in this context. However, the networks described for the reference architectures in this solution should be able to handle average workloads for these operations.
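As a minimal Python sketch of the steady-state arithmetic above (storage traffic only; user, migration, and management traffic are excluded):

def steady_state_storage_mb_per_s(desktops, iops_per_desktop=10, io_size_kb=4):
    """Approximate steady-state storage throughput generated by the desktops."""
    return desktops * iops_per_desktop * io_size_kb / 1000  # ~MB/s

print(steady_state_storage_mb_per_s(1750))  # ~70 MB/s for 1,750 desktops
print(steady_state_storage_mb_per_s(3500))  # ~140 MB/s for 3,500 desktops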

Regardless of the network traffic requirements, always provide at least two physical network connections for each logical network so that a single link failure does not affect system availability. Design the network so that, if a failure occurs, the aggregate bandwidth is still sufficient to accommodate the full workload.

The network infrastructure must meet the following minimum requirements:

Redundant network links for the hosts, switches, and storage

Support for link aggregation

Traffic isolation based on industry best practices


Validated network hardware

Table 14 lists the hardware resources for the network infrastructure validated in this solution.

Table 14. Minimum switching capacity

XtremIO (block storage for virtual desktops):
2 physical switches
2 x FC/FCoE or 2 x 10 GbE ports per Hyper-V server for the storage network (FC or iSCSI, and live migration)
2 x FC or 2 x 10 GbE ports per storage controller for desktop data

VNX (optional, for user data storage):
2 physical switches
2 x 10 GbE ports per Hyper-V server
1 x 1 GbE port per Control Station for management
2 x 10 GbE ports per Data Mover for data

Isilon (optional, for user data storage):
2 physical switches
2 x 10 GbE ports per Hyper-V server
1 x 1 GbE port per node for management
2 x 10 GbE ports per node for data

Notes:

The solution may use a 1 GbE network infrastructure as long as the underlying requirements for bandwidth and redundancy are fulfilled.

This configuration assumes that the VSPEX implementation is using rack-mounted servers; for blade server implementations, ensure that similar bandwidth and high-availability capabilities are provided.
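As a rough, hypothetical illustration of how these per-device counts translate into switch-port planning (data-path ports only; management ports and switch uplinks are excluded, and the host and controller counts below are placeholders), a short Python sketch:

def data_path_ports(hyperv_servers, array_controllers=2, ports_per_server=2, ports_per_controller=2):
    """Estimate host- and array-facing data-path ports for an XtremIO block-only configuration."""
    return hyperv_servers * ports_per_server + array_controllers * ports_per_controller

# Hypothetical example: 18 Hyper-V hosts and one X-Brick (2 storage controllers)
print(data_path_ports(18))  # 40 data-path ports, spread across the 2 physical switches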

Network configuration guidelines

This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines take into account network redundancy, link aggregation, traffic isolation, and jumbo frames.

The configuration examples are for IP-based networks, but similar best practices and design principles apply for the FC storage network option.

Network redundancy

The infrastructure network requires redundant network links for each Hyper-V host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. The configuration is also required regardless of whether the network infrastructure for the solution already exists, or is deployed with other solution components.


Figure 11 provides an example of a highly available XtremIO FC network topology.

Figure 11. Highly available XtremIO FC network design example


Figure 12 shows a highly available network setup example for user data with a VNX family storage array. The same high-availability principle applies to an Isilon configuration; in both scenarios, each node has two links to the switches.

Figure 12. Highly available VNX Ethernet network design example

Link aggregation

VNX and Isilon arrays provide network high availability or redundancy using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address and, potentially, multiple IP addresses.2

In this solution, we configured the Link Aggregation Control Protocol (LACP) on the VNX or Isilon array to combine multiple Ethernet ports into a single virtual device. If one of the aggregated ports loses its link, traffic fails over to another port in the aggregation. We distributed all network traffic across the active links.

2 A link aggregation resembles an Ethernet channel but uses the LACP IEEE 802.3ad standard. This standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex.


Traffic isolation

This solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

VLANs segregate network traffic to enable traffic of different types to move over isolated networks. In some cases, physical isolation is required for regulatory or policy compliance reasons, but in most cases logical isolation using VLANs is sufficient.

This solution requires a minimum of two VLANs: one for client access and one for management. Figure 13 shows the design of these VLANs with VNX. An Isilon-based configuration shares the same design principles.

Figure 13. Required networks

The client access network is for users of the system (clients) to communicate with the infrastructure, including the virtual machines and the CIFS shares hosted by the VNX or Isilon array. The management network provides administrators with dedicated access to the management connections on the storage array, network switches, and hosts.

Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. These additional networks can be implemented, but they are not required.


Storage design considerations

Overview

XtremIO offers inline deduplication, inline compression, data-at-rest encryption, and native thin provisioning. Storage planning simply requires that you determine:

Volume size

Number of volumes

Performance requirements

Each volume must be greater than the logical space required by the server. An XtremIO cluster can fulfill the solution’s performance requirements.

Validated storage hardware and configuration

Hyper-V supports more than one method of using storage when hosting virtual machines. We tested the configurations described in Table 15 using FC, and the storage layouts described adhere to all current best practices. If required, a customer or architect with the necessary training and background can make modifications based on their understanding of the system’s usage and load.

Table 15. Tested configurations

XtremIO shared storage:
Common: 2 x FC and 2 x 10 GbE interfaces per storage controller; 1 x 1 GbE interface per storage controller for management
For 1,750 virtual desktops: Starter X-Brick configuration with 13 x 400 GB flash drives
For 3,500 virtual desktops: X-Brick configuration with 25 x 400 GB flash drives

Optional Isilon shared storage disk capacity (required only if deploying an Isilon cluster to host user data):
4 x X410 nodes
2 x 800 GB EFDs per node
34 x 1 TB SATA drives per node

Optional VNX shared storage disk capacity:
For 1,750 virtual desktops: 34 x 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks; 3 x 100 GB, 3.5-inch flash drives
For 3,500 virtual desktops: 50 x 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks; 5 x 100 GB, 3.5-inch flash drives


Note: For VNX arrays, EMC recommends configuring at least one hot spare for every 30 drives of a given type. The recommendations in Table 15 include hot spares.

Hyper-V storage virtualization

Windows Server 2012 Hyper-V and Failover Clustering use the Cluster Shared Volumes (CSV) v2 and Virtual Hard Disk Format (VHDX) features to virtualize storage presented from external shared storage systems to host virtual machines. A CSV is a shared disk that contains an NTFS volume made accessible by all nodes of a Windows failover cluster. It can be deployed over any SCSI-based local or network storage. We recommend formatting the NTFS volume with an allocation unit size of 8192 bytes (8 KB).
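As a hedged convenience only (not part of the validated procedure), the following Python sketch calls the Win32 GetDiskFreeSpaceW API to confirm that a mounted volume was formatted with the recommended 8 KB allocation unit; the drive letter is a placeholder.

import ctypes

def allocation_unit_bytes(root_path="C:\\"):
    """Return the allocation unit (cluster) size, in bytes, of a mounted volume."""
    sectors_per_cluster = ctypes.c_ulong(0)
    bytes_per_sector = ctypes.c_ulong(0)
    free_clusters = ctypes.c_ulong(0)
    total_clusters = ctypes.c_ulong(0)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root_path),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )
    if not ok:
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

print(allocation_unit_bytes("C:\\"))  # expect 8192 on volumes formatted per this guide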

Figure 14 shows an example of a storage array presenting block-based LUNs (as CSVs) to the Windows hosts to host virtual machines. An additional option, pass-through disks, allows virtual machines to access a physical disk mapped to a Hyper-V host without a configured volume.

This solution uses CSVs for the infrastructure servers and the virtual desktops.

Figure 14. Hyper-V virtual disk types

New Virtual Hard Disk format

Hyper-V in Windows Server 2012 introduces an update to the VHD format called VHDX, which has a much larger capacity and built-in resiliency. The main features of the VHDX format are:

Support for virtual hard disk storage with the capacity of up to 64 TB

Additional protection against data corruption during power failures by logging updates to the VHDX metadata structures

Optimal structure alignment of the virtual hard disk format to suit large sector disks

The VHDX format has the following features:

Larger block sizes for dynamic and differencing disks, which enables the disks to better meet the needs of the workload

A 4 KB logical sector size, which increases performance for applications and workloads that are designed for 4 KB sectors


The ability to store custom metadata about the files that the user might want to record, such as the operating system version or applied updates

Space reclamation features that can result in smaller file size and enable the underlying physical storage device to reclaim unused space (for example, TRIM requires direct-attached storage or SCSI disks and TRIM-compatible hardware)

High availability and failover

This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with minimal impact to business operations. This section describes the high availability features of the solution.

Virtualization layer

EMC recommends that you configure high availability in the virtualization layer and automatically allow the hypervisor to restart virtual machines that fail. Figure 15 illustrates the hypervisor layer responding to a failure in the compute layer.

Figure 15. High availability at the virtualization layer

By implementing high availability at the virtualization layer, the infrastructure attempts to keep as many services running as possible, even in the event of a hardware failure.

Compute layer

While the choice of servers to implement in the compute layer is flexible, it is best to use enterprise-class servers designed for data centers. This type of server has redundant power supplies, as shown in Figure 16. Connect these power supplies to separate power distribution units (PDUs) in accordance with your server vendor’s best practices.

Figure 16. Redundant power supplies


Network layer

Both Isilon and VNX family storage arrays provide protection against network connection failures at the array. Each Hyper-V host has multiple connections to the user and storage Ethernet networks to guard against link failures, as shown in Figure 17. You should spread these connections across multiple Ethernet switches to guard against component failure in the network.

Figure 17. VNX Ethernet network layer high availability

There are no single points of failure in the network layer, which ensures that the compute layer will be able to access storage and communicate with users even if a component fails.

Storage layer

XtremIO is designed for “five nines” (99.999 percent) availability, with redundant components throughout the array, as shown in Figure 18. All of the array components can continue operating after a hardware failure. XtremIO Data Protection (XDP) delivers the superior protection of RAID 6 while exceeding the performance of RAID 1 and the capacity utilization of RAID 5, protecting against data loss due to drive failures.

Figure 18. XtremIO series high availability
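For context, a brief Python sketch of what a 99.999 percent availability target implies in allowable downtime (simple arithmetic only, not an EMC service-level commitment):

MINUTES_PER_YEAR = 365.25 * 24 * 60

def max_downtime_minutes_per_year(availability=0.99999):
    """Allowable downtime per year at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability)

print(round(max_downtime_minutes_per_year(), 1))  # ~5.3 minutes per year at five nines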

EMC storage arrays, VNX or Isilon, are also designed to be highly available by default. Use the appropriate installation guides to ensure that any single-unit failures do not result in data loss or unavailability.


Validation test profile

Profile characteristics

Table 17 shows the desktop definition and storage configuration parameters validated with the environment profile.

Table 17. Validated environment profile

EMC XtremIO: 3.0.2

Hypervisor: Windows Server 2012 R2 with Hyper-V

Desktop OS type (VDI): Windows 7 Enterprise Edition (32-bit) or Windows 8.1 Enterprise Edition (32-bit)

Server OS type (HSD): Windows Server 2012 R2

vCPUs per virtual desktop: 1

Number of virtual desktops per CPU core: 5

RAM per virtual desktop: 2 GB

Desktop provisioning method: MCS or PVS

Average IOPS per virtual desktop at steady state: 10 IOPS

Applications: Internet Explorer 11 (10 for Windows 7), Microsoft Office 2010, Adobe Reader XI, Adobe Flash Player 11 ActiveX, Doro PDF printer 1.8

Workload generator: Login VSI

Workload type: Office worker

Number of CSVs to store virtual desktops: 14 for 1,750 virtual desktops; 28 for 3,500 virtual desktops

Number of virtual desktops per CSV: 125

Disk and RAID type for XtremIO virtual desktop CSV volumes: 400 GB eMLC SSDs, protected by XtremIO proprietary data protection (XDP), which delivers RAID 6-like data protection with better performance than RAID 10

Note: We recommend formatting the Windows C: drive and the CSV volumes with an allocation unit size of 8192 bytes (8 KB). Refer to EMC Best Practices for the boot volume settings during OS installation.


EMC Data Protection configuration guidelines

Data protection profile characteristics

Table 18 shows the data protection environment profile validated for the solution.

Table 18. Data protection profile characteristics

User data: 17.5 TB for 1,750 virtual desktops; 35 TB for 3,500 virtual desktops (based on 10 GB per desktop)

Daily change rate for user data: 2%

Retention policy: 30 daily, 4 weekly, and 1 monthly backups

The solution outlines the backup storage (initial and growth) and retention needs of the system. Gather additional information to further size Avamar, including tape-out needs, recovery point objective (RPO) and recovery time objective (RTO) specifics, and multisite environment replication needs.
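As a rough, hedged Python sketch of how the Table 18 figures combine before deduplication (illustrative only; actual Avamar capacity depends on deduplication rates and the additional factors listed above):

def logical_backup_capacity_tb(user_data_tb, daily_change=0.02, daily=30, weekly=4, monthly=1):
    """Very rough pre-deduplication estimate: one full copy plus one change-rate increment per retained backup."""
    retained_backups = daily + weekly + monthly
    return user_data_tb * (1 + daily_change * retained_backups)

print(round(logical_backup_capacity_tb(17.5), 1))  # ~29.8 TB logical for 1,750 desktops
print(round(logical_backup_capacity_tb(35.0), 1))  # ~59.5 TB logical for 3,500 desktops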

Data protection layout

Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this case, the solution is deployed with an Avamar data store, which enables unstructured user data to be backed up directly to the Avamar system for simple file-level recovery. This data protection solution unifies the backup process with deduplication software and storage to achieve high levels of performance and efficiency.

VSPEX for Citrix XenDesktop with ShareFile StorageZones solution

With some added infrastructure, the VSPEX End-User Computing for Citrix XenDesktop solution supports Citrix StorageZones with Storage Center.

ShareFile StorageZones architecture

Figure 19 shows the high-level architecture of a ShareFile StorageZones deployment.


Figure 19. ShareFile high-level architecture

The architecture consists of the following components:

Client—Accesses the ShareFile service through one of the native tools, such as a browser or Citrix Receiver, or directly through the ShareFile API.

Control Plane—Stores file, folder, and account information, performs access control and reporting, and handles various other brokering functions. The Control Plane resides in multiple Citrix data centers located worldwide.

StorageZones—Defines the locations where data is stored.

StorageZones

ShareFile Storage Center extends the ShareFile software-as-a-service (SaaS) cloud storage by providing on-premises private storage—that is, StorageZones. ShareFile on-premises storage differs from cloud storage as follows:

ShareFile-managed cloud storage is a public multitenant storage system maintained by Citrix. By default, ShareFile stores data in cloud storage.

A ShareFile Storage Center is a private single-tenant storage system maintained by the customer and accessible only by approved customer accounts. Storage Center enables you to configure private, on-premises StorageZones, which define the locations where data is stored and enable performance optimization by locating data storage close to users.

You can use StorageZones with or instead of the ShareFile-managed cloud storage.

Storage Center is a web service that handles all HTTPS operations from end users and the ShareFile control subsystem. The ShareFile control subsystem handles all operations not related to file content, such as authentication, authorization, file browsing, configuration, metadata, sending and requesting files, and load balancing. The control subsystem also performs Storage Center health checks and prevents offline servers from sending requests. The ShareFile control subsystem is maintained in Citrix online data centers.

Design considerations

Based on an organization’s performance and compliance requirements, consider the number of StorageZones and where best to locate them. For example, if users are in Europe, storing files in a Storage Center in Europe provides both performance and compliance benefits. In general, assigning users to the StorageZones location that is closest to them geographically is the best practice for optimizing performance.

For a production deployment of ShareFile, the best practice is to use at least two servers with Storage Center installed for high availability. When you install Storage Center, you create a StorageZone. You can then install Storage Center on another server and join it to the same StorageZone. Storage Centers that belong to the same StorageZone must use the same file share for storage.

VSPEX for ShareFile StorageZones architecture

Figure 20 shows the logical architecture of the VSPEX for ShareFile StorageZones solution. You can select any server and networking hardware that meets or exceeds the minimum requirements, while the recommended storage delivers a highly available architecture for a ShareFile StorageZones deployment.

Figure 20. VSPEX for Citrix XenDesktop with ShareFile StorageZones: Logical architecture


Server requirements

A high-availability production environment requires a minimum of two servers (virtual machines) with Storage Center installed. The minimum requirements to implement Citrix ShareFile StorageZones with Storage Center are:

2 CPU (cores)

4 GB memory

For more information, refer to the Storage Center system requirements on the Citrix eDocs website.

Network requirements

The networking components can be implemented using 1 Gb or 10 Gb IP networks, provided that bandwidth and redundancy are sufficient to meet the minimum requirements of the solution. Provide sufficient network ports to support the two additional Storage Center servers.

Storage requirements

ShareFile StorageZones requires a CIFS share to provide private data storage for Storage Center. Table 19 details the recommended VNX storage for the StorageZones CIFS share.

Table 19. Recommended VNX storage for ShareFile StorageZones CIFS share

1,750 users: 24 x 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks (6+2 RAID 6)

3,500 users: 48 x 2 TB, 7,200 rpm, 3.5-inch NL-SAS disks (6+2 RAID 6)

Note: The configuration assumes that each user has 10 GB of private storage space.

A three-node Isilon X410 cluster can support the ShareFile storage requirements for up to 3,500 users.
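As a rough, hedged Python cross-check of the disk counts in Table 19 (ignoring hot spares, file-system overhead, and drive-capacity rounding, so the numbers are illustrative only):

def sharefile_capacity_check(users, disks, disk_tb=2.0, raid_group=(6, 2), per_user_gb=10):
    """Compare required user capacity with the approximate usable capacity of the RAID 6 groups."""
    data_disks, parity_disks = raid_group
    groups = disks // (data_disks + parity_disks)
    usable_tb = groups * data_disks * disk_tb
    required_tb = users * per_user_gb / 1000
    return required_tb, usable_tb

print(sharefile_capacity_check(1750, 24))  # ~17.5 TB required vs. ~36.0 TB usable
print(sharefile_capacity_check(3500, 48))  # ~35.0 TB required vs. ~72.0 TB usable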


Chapter 6 Reference Documentation

This chapter presents the following topics:

EMC documentation ................................................................................................. 66

Other documentation ............................................................................................... 66


EMC documentation

The following documents, located on EMC Online Support, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative.

Avamar Client for Windows on Citrix XenDesktop Technical Notes

Deploying Microsoft Windows 8 Virtual Desktops—Applied Best Practices White Paper

EMC Avamar 7 Administrator Guide

EMC Avamar 7 Operational Best Practices

EMC PowerPath Viewer Installation and Administration Guide

EMC Storage Integrator for Windows Suite Release Notes

EMC VNX Unified Best Practices for Performance—Applied Best Practices White Paper

EMC VNX5400 Unified Installation Guide

EMC XtremIO Storage Array Hardware Installation and Upgrade Guide

EMC XtremIO Storage Array Operations Guide

EMC XtremIO Storage Array Pre-Installation Checklist

EMC XtremIO Storage Array Security Configuration Guide

EMC XtremIO Storage Array Site Preparation Guide

EMC XtremIO Storage Array Software Installation and Upgrade Guide

EMC XtremIO Storage Array User Guide

VNX FAST Cache: A Detailed Review White Paper

VNX Installation Assistant for File/Unified Worksheet

Other documentation

Refer to the following topics on the Microsoft MSDN website:

Installing Windows Server 2012 R2

SQL Server Installation (SQL Server 2012 SP1)

Refer to the following topics on the Microsoft TechNet website:

Note: The links provided were working correctly at the time of publication.

Create VM from Template

Creating a Hyper-V Host Cluster in VMM Overview

Creating and Deploying Virtual Machines in VMM


Deploying Hyper-V Hosts Using Microsoft System Center 2012 Virtual Machine Manager

Failover Clustering Overview

How to Add a Node to a Hyper-V Host Cluster in VMM

How to Add Windows File Server Shares in VMM

How to Create a Virtual Machine Template

How to Create and Deploy a Virtual Machine from a Template

Hyper-V: How many network cards do I need?

Hyper-V Network Virtualization Overview

Hyper-V Overview

Install the Hyper-V Role and Configure a Virtual Machine

Installation for SQL Server 2012

Installing a VMM Agent Locally

Installing a VMM Management Server

Installing and Opening the VMM Console

Install and Deploy Windows Server 2012 R2 and Windows Server 2012

Windows Server 2012 Hyper-V Network Virtualization Survival Guide

The following documents, available on the Citrix website, provide additional and relevant information:

Definitive Guide to XenApp 7.6 and XenDesktop 7.6

Windows 7 Optimization Guide for Desktop Virtualization

Windows 8 and 8.1 Virtual Desktop Optimization Guide

The following documents, available on the Microsoft website, provide additional and relevant information:

Installing Windows Server 2012 R2

SQL Server Installation (SQL Server 2012)


Appendix A Customer Sizing Worksheet

This appendix presents the following topic:

Customer Sizing Worksheet for end-user computing ............................................... 69


Customer Sizing Worksheet for end-user computing

Before selecting a reference architecture on which to base a customer solution, use the Customer Sizing Worksheet to gather information about the customer’s business requirements and to calculate the required resources.

Table 20 shows a blank worksheet. To enable you to easily print a copy, a standalone copy of the worksheet is attached to this Design Guide in Microsoft Office Word format.

Table 20. Customer Sizing Worksheet

Worksheet columns: User type; vCPUs; Memory (GB); IOPS; Equivalent reference virtual desktops; No. of users; Total reference desktops

For each user type, the worksheet provides a Resource requirements row, in which you record the vCPU, memory, and IOPS requirements for that user type, and an Equivalent reference virtual desktops row, in which you convert those requirements into reference desktops and multiply by the number of users to obtain the total reference desktops for that type. The final row totals the reference desktops across all user types.
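The following Python sketch shows one hedged interpretation of how the worksheet rows can be combined. It assumes the reference desktop defined in Table 17 (1 vCPU, 2 GB RAM, 10 IOPS) and assumes that a user type's equivalent reference desktops is driven by its most demanding resource dimension; confirm the exact conversion rules against the sizing guidance earlier in this guide. The user types in the example are hypothetical.

import math

REFERENCE = {"vcpus": 1, "memory_gb": 2, "iops": 10}  # reference virtual desktop (Table 17)

def equivalent_reference_desktops(vcpus, memory_gb, iops):
    """Equivalent reference desktops for ONE user of a given type (most constrained resource wins)."""
    return max(
        math.ceil(vcpus / REFERENCE["vcpus"]),
        math.ceil(memory_gb / REFERENCE["memory_gb"]),
        math.ceil(iops / REFERENCE["iops"]),
    )

def total_reference_desktops(user_types):
    """user_types: list of dicts with vcpus, memory_gb, iops, and users keys."""
    return sum(
        equivalent_reference_desktops(u["vcpus"], u["memory_gb"], u["iops"]) * u["users"]
        for u in user_types
    )

# Hypothetical example: 1,000 standard users and 200 heavier "power" users
worksheet = [
    {"vcpus": 1, "memory_gb": 2, "iops": 10, "users": 1000},
    {"vcpus": 2, "memory_gb": 4, "iops": 20, "users": 200},
]
print(total_reference_desktops(worksheet))  # 1000*1 + 200*2 = 1400 reference desktops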


To view and print the worksheet:

1. In Adobe Reader, open the Attachments panel in either of the following ways:

Select View > Show/Hide > Navigation Panes > Attachments.

Click the Attachments icon, as shown in Figure 21.

Figure 21. Printable customer sizing worksheet

2. Under Attachments, double-click the attached file to open and print the worksheet.