


Proven Solution Guide

EMC Solutions

Abstract

This Proven Solution Guide describes the tests performed to validate an EMC infrastructure for VMware Horizon View 5.2 by using the EMC® XtremIO™ all-flash array and VMware vSphere 5.1. This document focuses on sizing, scalability, design, and configuration, and highlights new features in the enabling technologies.

November 2013

EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1

• Simplify management and decrease total cost of ownership

• Guarantee a superior desktop experience

• Ensure a successful virtual desktop deployment



EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide


Copyright © 2013 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.


Part Number H12412


Contents

Chapter 1 Executive Summary 12

Business case .......................................................................................................... 12

Solution overview ..................................................................................................... 14

Key results and conclusions ..................................................................................... 15

Chapter 2 Introduction 17

Introduction to the EMC XtremIO all-flash array ......................................................... 17

Document overview .................................................................................................. 19

Use-case definition .............................................................................................. 19

Purpose ............................................................................................................... 19

Scope .................................................................................................................. 20

Audience ............................................................................................................. 20

Terminology ......................................................................................................... 20

Reference architecture .............................................................................................. 21

Corresponding reference architecture .................................................................. 21

Reference architecture diagram ........................................................................... 22

Configuration ........................................................................................................... 23

Hardware resources ............................................................................................. 23

Software resources .............................................................................................. 24

Chapter 3 Solution Infrastructure 25

VMware Horizon View 5.2 ......................................................................................... 25

Introduction ......................................................................................................... 25

Required components .......................................................................................... 26

VMware vSphere 5.1 infrastructure ........................................................................... 27

vSphere 5.1 overview .......................................................................................... 27

Desktop vSphere clusters .................................................................................... 27

Infrastructure vSphere cluster .............................................................................. 27

Microsoft Windows infrastructure ............................................................................. 28

Introduction ......................................................................................................... 28

Active Directory .................................................................................................... 28

SQL Server ........................................................................................................... 28

DNS server ........................................................................................................... 28

DHCP server ......................................................................................................... 29



Chapter 4 Storage Design 30

EMC XtremIO storage architecture ............................................................................ 30

Introduction ......................................................................................................... 30

Storage layout ..................................................................................................... 30

EMC Virtual Storage Integrator for VMware vSphere ............................................. 31

XtremIO storage layout overview .......................................................................... 31

Chapter 5 Network Design 32

Considerations ......................................................................................................... 32

Storage network layout overview ......................................................................... 32

Logical design considerations.............................................................................. 33

EMC XtremIO Storage Controller configuration .......................................................... 34

Storage Controller interfaces................................................................................ 34

VMware vSphere network configuration .................................................................... 34

vSphere vSwitch configuration ............................................................................. 34

vSphere vSwitch virtual ports............................................................................... 35

vSphere disk settings .......................................................................................... 36

vSphere host bus adapter queue depth optimizations ......................................... 37

vSphere storage multipathing .............................................................................. 38

Cisco Nexus 5020 Ethernet configuration ................................................................. 39

Overview .............................................................................................................. 39

Cabling ................................................................................................................ 39

Cisco Nexus 5020 Fibre Channel configuration ......................................................... 39

Overview .............................................................................................................. 39

Cabling ................................................................................................................ 39

Fibre Channel uplinks .......................................................................................... 39

Chapter 6 Installation and Configuration 42

Installation overview ................................................................................................ 42

Provisioning EMC XtremIO storage ............................................................................ 42

XtremIO Initiator Group and LUN provisioning ...................................................... 42

Creating VMware Horizon View desktop pools .......................................................... 45

Installation and configuration prerequisites......................................................... 45

Horizon View desktop pool configuration ............................................................. 45

Chapter 7 Testing and Validation: Full Clone Desktops 51

Overview .................................................................................................................. 51

Validated environment profile .................................................................................. 52

Profile characteristics .......................................................................................... 52

Use cases ............................................................................................................ 54



Login VSI ............................................................................................................. 54

Login VSI launcher ............................................................................................... 55

Boot storm test ......................................................................................................... 55

Test methodology ................................................................................................ 55

Individual drive load ............................................................................................ 55

Full-clone desktop LUN load ................................................................................ 56

XtremIO array IOPS and bandwidth ...................................................................... 56

Storage Controller utilization ............................................................................... 57

vSphere CPU load ................................................................................................ 58

I/O latency ........................................................................................................... 58

Antivirus test ............................................................................................................ 58

Test methodology ................................................................................................ 58

Individual drive load ............................................................................................ 58

Full-clone desktop LUN load ................................................................................ 59

XtremIO array IOPS and bandwidth ...................................................................... 60

Storage Controller utilization ............................................................................... 61

vSphere CPU load ................................................................................................ 61

I/O latency ........................................................................................................... 62

Patch install test ....................................................................................................... 62

Test methodology ................................................................................................ 62

Individual drive load ............................................................................................ 62

Full-clone desktop LUN load ................................................................................ 63

XtremIO array IOPS and bandwidth ...................................................................... 64

Storage Controller utilization ............................................................................... 64

vSphere CPU load ................................................................................................ 65

vSphere datastore response time ........................................................................ 65

Login VSI test ........................................................................................................... 66

Test methodology ................................................................................................ 66

Desktop logon time .............................................................................................. 66

Individual drive load ............................................................................................ 66

Full-clone desktop LUN load ................................................................................ 67

XtremIO array IOPS and bandwidth ...................................................................... 68

Storage Controller utilization ............................................................................... 68

vSphere CPU load ................................................................................................ 69

vSphere datastore response time ........................................................................ 69

Chapter 8 Testing and Validation: Linked Clone Desktops 70

Overview .................................................................................................................. 70

Validated environment profile .................................................................................. 71



Profile characteristics .......................................................................................... 71

Use cases ............................................................................................................ 74

Login VSI ............................................................................................................. 74

Boot storm test ......................................................................................................... 74

Test methodology ................................................................................................ 74

Individual drive load ............................................................................................ 74

Linked-clone LUN load ......................................................................................... 75

Replica-disk LUN load .......................................................................................... 76

XtremIO array IOPS and bandwidth ...................................................................... 76

Storage Controller utilization ............................................................................... 77

vSphere CPU load ................................................................................................ 77

I/O latency ........................................................................................................... 77

Antivirus test ............................................................................................................ 78

Test methodology ................................................................................................ 78

Individual drive load ............................................................................................ 78

Linked-clone LUN load ......................................................................................... 78

Replica-disk LUN load .......................................................................................... 79

XtremIO array IOPS and bandwidth ...................................................................... 79

Storage Controller utilization ............................................................................... 80

vSphere CPU load ................................................................................................ 80

I/O latency ........................................................................................................... 80

Patch install test ....................................................................................................... 81

Test methodology ................................................................................................ 81

Individual drive load ............................................................................................ 81

Linked-clone LUN load ......................................................................................... 82

Replica-disk LUN load .......................................................................................... 82

XtremIO array IOPS and bandwidth ...................................................................... 83

Storage Controller utilization ............................................................................... 83

vSphere CPU load ................................................................................................ 84

vSphere datastore response time ........................................................................ 84

Login VSI test ........................................................................................................... 85

Test methodology ................................................................................................ 85

Desktop logon time .............................................................................................. 85

Individual drive load ............................................................................................ 86

Linked-clone LUN load ......................................................................................... 86

Replica-disk LUN load .......................................................................................... 87

XtremIO array IOPS and bandwidth ...................................................................... 87

Storage Controller utilization ............................................................................... 88



vSphere CPU load ................................................................................................ 88

vSphere datastore response time ........................................................................ 89

Recompose test ........................................................................................................ 89

Test methodology ................................................................................................ 89

Individual drive load ............................................................................................ 90

Linked-clone LUN load ......................................................................................... 90

Replica-disk LUN load .......................................................................................... 91

XtremIO array IOPS and bandwidth ...................................................................... 91

Storage Controller utilization ............................................................................... 92

vSphere CPU load ................................................................................................ 92

vSphere datastore response time ........................................................................ 93

Refresh test .............................................................................................................. 93

Test methodology ................................................................................................ 93

Individual drive load ............................................................................................ 93

Linked-clone LUN load ......................................................................................... 94

Replica-disk LUN load .......................................................................................... 95

XtremIO array IOPS and bandwidth ...................................................................... 95

Storage Controller utilization ............................................................................... 96

vSphere CPU load ................................................................................................ 96

vSphere datastore response time ........................................................................ 97

Chapter 9 Conclusion 98

Summary .................................................................................................................. 98

Findings ................................................................................................................... 99

References ............................................................................................................. 100

Supporting documents ...................................................................................... 100

VMware documents ........................................................................................... 100



Tables

Table 1. Terminology......................................................................................... 20

Table 2. Solution hardware ............................................................................... 23

Table 3. Solution software ................................................................................ 24

Table 4. Storage requirements .......................................................................... 31

Table 5. vSphere port groups in vSwitch0 and vSwitch1 ................................... 35

Table 6. Test results summary: Full clone desktops ........................................... 52

Table 7. Horizon View: Full-clone desktop environment profile ......................... 52

Table 8. Test results summary: Linked clone desktops ...................................... 71

Table 9. Horizon View: Linked-clone desktop environment profile..................... 71



Figures

Figure 1. vSphere datastore latency ................................................................... 16

Figure 2. Reference architecture ......................................................................... 22

Figure 3. Horizon View: Logical representation of linked clone and replica disk.......................................................................................... 27

Figure 4. Storage network layout overview ......................................................... 33

Figure 5. XtremIO Storage Controllers ................................................................ 34

Figure 6. vSphere vSwitch configuration ............................................................ 34

Figure 7. vSphere vSwitch virtual ports .............................................................. 35

Figure 8. vSphere server: Configuration tab........................................................ 36

Figure 9. vSphere server: Advanced disk settings .............................................. 37

Figure 10. vSphere server: Storage devices .......................................................... 38

Figure 11. vSphere server: Manage paths ............................................................ 39

Figure 12. Example of single initiator zoning ........................................................ 40

Figure 13. Create an XtremIO initiator group......................................................... 43

Figure 14. Create an XtremIO volume ................................................................... 43

Figure 15. Map the XtremIO volume to an initiator group...................................... 44

Figure 16. XtremIO LUN configuration and zoning ................................................ 44

Figure 17. Horizon View: Select Automated Pool .................................................. 46

Figure 18. Horizon View: Select View Composer linked clones ............................. 46

Figure 19. Horizon View: Select Provision Settings ............................................... 47

Figure 20. Horizon View: vCenter Settings ............................................................ 48

Figure 21. Horizon View: Select Linked Clone Datastores ..................................... 48

Figure 22. Horizon View: Select Replica Disk Datastores ...................................... 49

Figure 23. Horizon View: Guest Customization ..................................................... 50

Figure 24. Storage capacity utilization: 2,500 full clone desktops ........................ 53

Figure 25. Boot storm: IOPS for a single eMLC drive ............................................. 55

Figure 26. Boot storm: IOPS for a full-clone desktop LUN ..................................... 56

Figure 27. Boot storm: XtremIO array total IOPS and bandwidth ........................... 56

Figure 28. Boot storm: Storage Controller utilization ............................................ 57

Figure 29. Boot storm: vSphere CPU load ............................................................. 58

Figure 30. Antivirus: IOPS for a single eMLC drive ................................................ 59

Figure 31. Antivirus: IOPS for a full-clone desktop LUN ......................................... 59

Figure 32. Antivirus: XtremIO array total IOPS and bandwidth .............................. 60

Figure 33. Antivirus: Storage Controller utilization ............................................... 61



Figure 34. Antivirus: vSphere CPU load ................................................................ 61

Figure 35. Patch install: IOPS for a single eMLC drive ........................................... 63

Figure 36. Patch install: IOPS for a full-clone desktop LUN .................................. 63

Figure 37. Patch install: XtremIO array total IOPS and bandwidth ......................... 64

Figure 38. Patch install: Storage Controller utilization .......................................... 64

Figure 39. Patch install: vSphere CPU load ........................................................... 65

Figure 40. Patch install: Average Guest Millisecond/Command counter ............... 65

Figure 41. Login VSI: Desktop login time .............................................................. 66

Figure 42. Login VSI: IOPS for a single eMLC drive ................................................ 67

Figure 43. Login VSI: IOPS for a full-clone desktop LUN ........................................ 67

Figure 44. Login VSI: XtremIO array total IOPS and bandwidth ............................. 68

Figure 45. Login VSI: Storage Controller utilization ............................................... 68

Figure 46. Login VSI: vSphere CPU load................................................................ 69

Figure 47. Login VSI: Average Guest Millisecond/Command counter .................... 69

Figure 48. Storage capacity utilization: 2,500 linked clone desktops ................... 73

Figure 49. Boot storm: IOPS for a single eMLC drive ............................................. 75

Figure 50. Boot storm: IOPS for a linked clone LUN .............................................. 75

Figure 51. Boot storm: IOPS for a replica disk LUN ............................................... 76

Figure 52. Boot storm: XtremIO array total IOPS and bandwidth ........................... 76

Figure 53. Boot storm: Storage Controller utilization ............................................ 77

Figure 54. Boot storm: vSphere CPU load ............................................................. 77

Figure 55. Antivirus: IOPS for a single eMLC drive ................................................ 78

Figure 56. Antivirus: IOPS for a linked clone LUN .................................................. 78

Figure 57. Antivirus: IOPS for a replica disk LUN ................................................... 79

Figure 58. Antivirus: XtremIO array total IOPS and bandwidth .............................. 79

Figure 59. Antivirus: Storage Controller utilization ............................................... 80

Figure 60. Antivirus: vSphere CPU load ................................................................ 80

Figure 61. Patch install: IOPS for a single eMLC drive ........................................... 81

Figure 62. Patch install: IOPS for a linked clone LUN ............................................ 82

Figure 63. Patch install: IOPS for a replica disk LUN ............................................. 82

Figure 64. Patch install: XtremIO array total IOPS and bandwidth ......................... 83

Figure 65. Patch install: Storage Controller utilization .......................................... 83

Figure 66. Patch install: vSphere CPU load ........................................................... 84

Figure 67. Patch install: Average Guest Millisecond/Command counter ............... 84

Figure 68. Login VSI: Desktop login time .............................................................. 85

Figure 69. Login VSI: IOPS for a single eMLC drive ................................................ 86

Figure 70. Login VSI: IOPS for a linked clone LUN ................................................. 86

Figure 71. Login VSI: IOPS for a replica disk LUN .................................................. 87

Figure 72. Login VSI: XtremIO array total IOPS and bandwidth ............................. 87


Figure 73. Login VSI: Storage Controller utilization ............................................... 88

Figure 74. Login VSI: vSphere CPU load................................................................ 88

Figure 75. Login VSI: Average Guest Millisecond/Command counter .................... 89

Figure 76. Recompose: IOPS for a single eMLC drive ............................................ 90

Figure 77. Recompose: IOPS for a linked clone LUN ............................................. 90

Figure 78. Recompose: IOPS for a replica disk LUN .............................................. 91

Figure 79. Recompose: XtremIO array total IOPS and bandwidth .......................... 91

Figure 80. Recompose: Storage Controller utilization ........................................... 92

Figure 81. Recompose: vSphere CPU load ............................................................ 92

Figure 82. Recompose: Average Guest Millisecond/Command counter ................ 93

Figure 83. Refresh: IOPS for a single eMLC drive .................................................. 94

Figure 84. Refresh: IOPS for a linked clone LUN .................................................... 94

Figure 85. Refresh: IOPS for a replica disk LUN ..................................................... 95

Figure 86. Refresh: XtremIO array total IOPS and bandwidth ................................ 95

Figure 87. Refresh: Storage Controller utilization ................................................. 96

Figure 88. Refresh: vSphere CPU load .................................................................. 96

Figure 89. Refresh: Average Guest Millisecond/Command counter ...................... 97


Chapter 1 Executive Summary

This chapter summarizes the proven solution described in this document. It includes the following sections:

• Business case

• Solution overview

• Key results and conclusions

Business case

Virtual desktop responsiveness is critical to successful end-user computing (EUC) project rollouts. Today, user experience expectations are increasingly being set based on devices such as ultrabooks and tablets that use flash memory. For example, the rapid application response time of a modern ultrabook is due in large part to the use of an SSD.

Knowledge workers accustomed to working with an ultrabook that easily peaks over 2,000 IOPS may experience unacceptably slow performance using a virtual desktop that delivers only between 7 and 25 IOPS (the common planning assumption range in traditional EUC reference architectures). A modern EUC deployment must deliver a better-than-local desktop user experience and a better cost per desktop relative to a physical machine, and it must enable IT to continue using existing desktop management tools and applications.

EUC exacerbates the need for higher desktop IOPS by centrally serving potentially tens of thousands of virtual operating systems and applications running concurrently. EUC also introduces its own unique challenges such as boot storms and login storms, which have peak IOPS requirements that often exceed the typical operational parameters of storage arrays. All of these challenges combined with the desire to build an economical solution have led to sub-par EUC infrastructures, such as those that under-size storage and downgrade desktop functionality by disabling various software components, resulting in a user experience that is less than desirable.
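To make the IOPS gap concrete, a quick sizing calculation helps. The 7 to 25 IOPS planning range, the roughly 2,000 IOPS ultrabook figure, and the desktop count used below come from this guide; the helper function itself is only an illustrative sketch.

```python
# Rough steady-state IOPS sizing for an EUC deployment (illustrative sketch).
# The 7-25 IOPS-per-desktop planning range is the traditional assumption
# discussed above; the desktop count matches this solution's test scale.

def aggregate_iops(desktops: int, iops_per_desktop: float) -> float:
    """Aggregate steady-state IOPS the shared storage must service."""
    return desktops * iops_per_desktop

desktops = 2500
print(aggregate_iops(desktops, 7))    # low end of the planning range
print(aggregate_iops(desktops, 25))   # high end of the planning range
# Even the high end provisions each desktop far below the ~2,000 IOPS
# bursts an SSD-equipped ultrabook can deliver locally.
```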


Using the EMC® XtremIO™ all-flash array as the foundation for EUC deployments provides several unique advantages that cannot be achieved with any other EUC deployment architecture:

• Complete flexibility in EUC deployments

Administrators can use persistent desktops or non-persistent desktops, deployed as either full clones or linked clones, or any combination thereof, without regard to underlying I/O performance or excessive capacity consumption. The XtremIO platform allows administrators the flexibility to simply do what is right for their business because either deployment method or any combination of deployment methods presents no inherent advantage or disadvantage in performance or cost. Full clones no longer have a cost disadvantage when compared to linked clones. More importantly, and defying conventional wisdom, a storage capacity penalty is no longer associated with deploying full clones over linked clones.

• Superior EUC user experience

Every desktop in an XtremIO deployment gets an all-SSD experience with reliable and massive I/O potential both in sustained IOPS and the ability to burst to much higher levels as dictated by demanding applications such as Microsoft Outlook, desktop search, and antivirus scanning. Users can run on fully functional desktops rather than de-featured ones. During our scale testing every simulated application operation completed in half or less of the acceptable user experience boundaries. This performance was superior by a wide margin to any previously tested shared storage array and has led to broadening the scope of EUC to now even include desktops used for engineering applications or game development.

• Lowest cost per virtual desktop

XtremIO EUC deployments are surprisingly affordable. Because of XtremIO’s in-line data reduction and massive performance density, the cost per desktop is lower than with other EUC solutions, allowing virtual desktops to be deployed at better economics than their physical desktop counterparts.

• Rapid provisioning and rollout

XtremIO is simple to set up, requires no tuning, and eliminates complex planning; any EUC deployment model can be chosen at will. EUC deployments can be designed and rolled out quickly with assured success.

• No need for third-party tools

XtremIO solves all I/O-related EUC deployment challenges. Additional caching or host-based deduplication schemes, or any other point solutions that increase expense and complexity, are not required.

• No change to desktop administration

Whatever methods administrators are using to manage their existing physical desktops can be directly applied to the EUC deployment when XtremIO is used. No changes to software updates, operating system patching, antivirus scanning, or other procedures are needed to lighten the I/O load on shared storage. Rather, administrators can confidently rely on XtremIO’s high performance levels to deliver.

• No change to desktop features

Virtual desktop best practices currently dictate dozens of changes to the desktop image to reduce the I/O load on the shared storage. XtremIO requires none of these changes, allowing the desktop to remain fully functional while providing a strong user experience.

• No nights and weekends

Administrators no longer need to plan outages over nights and weekends for routine but I/O-intensive desktop maintenance operations such as patching, upgrading, scanning, and refreshing desktops. They can rely on XtremIO to deliver during peak regular business hours. Large numbers of desktops can remain fully operational on XtremIO while select desktops undergo maintenance.

Solution overview

This solution aids in the design and successful deployment of virtual desktops on VMware Horizon View 5.2. This solution ensures the ultimate desktop performance, while at the same time delivering a highly attractive cost per desktop—not just for storage, but for the infrastructure overall.

Desktop virtualization enables organizations to exploit additional benefits such as:

• Increased security by centralizing business-critical information

• Increased compliance as information is moved from endpoints into the data center

• Simplified and centralized management of desktops

Customers deploying XtremIO will realize:

• A user experience that is superior to that of a physical desktop equipped with a dedicated SSD

• Increased control and security of their global, mobile desktop environment, typically their most at-risk environment

• Better end-user productivity with a more consistent environment

• Simplified management of desktop content confined to their data center

• Better support of service-level agreements and compliance initiatives

• Lower operational and maintenance costs


Key results and conclusions

The results from the testing of this solution revealed the following conclusions:

• The EMC XtremIO array delivers an outstanding user experience to each virtual desktop user by servicing a high number of I/Os at sub-millisecond latency for thousands of virtual desktops per X-Brick™ across a wide variety of desktop workloads. These desktops can be linked clones, full clones, or a combination of both. Based on utilization statistics recorded during testing, if more desktops were needed, a four X-Brick XtremIO cluster could scale up to 14,000 linked clone desktops (3,500 per X-Brick) or 10,000 full clone desktops (2,500 per X-Brick).

• As the IOPS read/write ratio changes, the responsiveness of the XtremIO array remains virtually unchanged. The array does not require any system-level post-process garbage collection and does not exclusively lock SSDs being written to—practices that are commonly implemented in other all-flash arrays. As a result, XtremIO can provide consistent performance for any mix of read/write IOPS.

• The user experience does not degrade over time as the virtual desktops consume additional physical storage. VMware Horizon View stakeholders (including end users, storage administrators, virtualization administrators, and desktop administrators) benefit from XtremIO’s predictable, consistent performance over time, so user support tickets complaining of a sudden loss of desktop responsiveness are unlikely.

• While 2,500 virtual desktops are running, each X-Brick can easily support additional concurrent workloads or I/O-intensive routine maintenance operations because the aggregate demand from the virtual desktops is well below each X-Brick’s rated capacity of 150,000 mixed (50-percent read and 50-percent write) 4K random IOPS.

• As a result of this testing exercise, we can conclude that XtremIO storage will no longer be the bottleneck in VDI deployments. While deployments may still encounter bottlenecks and sub-par user experience, those issues are now more likely a result of under-sizing either the CPU or memory resources.
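The scaling figures above (3,500 linked clones or 2,500 full clones per X-Brick) lend themselves to a simple cluster-sizing sketch. Only the per-X-Brick densities come from the test results; the function and names are ours.

```python
import math

# Desktops per X-Brick, as validated in this solution's testing.
DESKTOPS_PER_XBRICK = {"linked_clone": 3500, "full_clone": 2500}

def xbricks_needed(desktops: int, clone_type: str) -> int:
    """Number of X-Bricks required for a desktop count, rounded up."""
    return math.ceil(desktops / DESKTOPS_PER_XBRICK[clone_type])

print(xbricks_needed(14000, "linked_clone"))  # four X-Bricks, as stated above
print(xbricks_needed(10000, "full_clone"))    # likewise four X-Bricks
print(xbricks_needed(2500, "full_clone"))     # the single X-Brick tested here
```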

Figure 1 shows the average vSphere datastore latency observed during the steady-state portion of the EUC workload simulation.


Figure 1. vSphere datastore latency

We used Login VSI to perform the simulation and the vdbench I/O-generation utility to generate supplemental I/O within each desktop session. We used vdbench to stress-test the performance of the XtremIO all-flash array at much higher per-desktop IOPS levels than are possible with Login VSI alone, a practice that would have required significant changes to the disk layout of traditional spinning-disk arrays.

The test results show that even as the per-desktop IOPS increase, while maintaining a 2:1 write-to-read ratio, the XtremIO array continues to provide sub-millisecond latency to the vSphere datastores that host the virtual desktops.
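The supplemental per-desktop load can be described in a vdbench parameter file. The sketch below is illustrative only: the target path, per-desktop I/O rate, and run length are hypothetical placeholders rather than the values used in the validated tests. `rdpct=33` requests roughly one read for every two writes, matching the 2:1 write-to-read ratio.

```python
# Write an illustrative vdbench parameter file for per-desktop supplemental
# I/O: 4 KB fully random blocks at roughly a 2:1 write-to-read ratio
# (rdpct=33). The lun path, iorate, and elapsed time are hypothetical
# placeholders, not the values used in the validated tests.

params = """
sd=sd1,lun=/tmp/vdbench_desktop.img,size=2g
wd=wd1,sd=sd1,xfersize=4k,rdpct=33,seekpct=100
rd=rd1,wd=wd1,iorate=25,elapsed=3600,interval=10
"""

with open("desktop_io.vdb", "w") as f:
    f.write(params.strip() + "\n")
```

Each desktop session would then run vdbench against its own copy of such a parameter file to add a sustained, consistent I/O load on top of the Login VSI workload.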


Chapter 2 Introduction

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with an insight into the challenges currently faced by its customers.

This Proven Solution Guide summarizes a series of best practices that were discovered or validated during testing of the EMC infrastructure for VMware Horizon View 5.2 solution by using the following products:

• EMC XtremIO all-flash array

• VMware Horizon View Manager 5.2

• VMware Horizon View Composer 5.2

• VMware vSphere 5.1

This chapter introduces the solution and its components. It includes the following sections:

• Introduction to the EMC XtremIO all-flash array

• Document overview

• Reference architecture

• Configuration

Introduction to the EMC XtremIO all-flash array

The EMC XtremIO all-flash array is custom-designed for flash storage media. Furthermore, the XtremIO array scales out by design. Additional performance and capacity are added in a building block approach, with all building blocks forming a single clustered system. The following are some of the benefits of the EMC XtremIO platform:

• Incredibly high levels of I/O performance

The XtremIO storage system delivers high IOPS at a low (sub-millisecond) latency, particularly for random I/O workloads that are typical in virtualized environments.

• Enterprise array capabilities

The XtremIO storage system is inherently load- and capacity-balanced at all times and features N-way active controllers, high availability, strong data protection, and thin provisioning.


• Standards-based enterprise storage system

The XtremIO system interfaces with vSphere hosts using standard 8 Gb/s Fibre Channel (FC) and 10 GbE iSCSI block interfaces. The system supports complete high-availability features, including support for native VMware multipath I/O, protection against failed solid state disks (SSDs), nondisruptive software and firmware upgrades, no single point of failure (SPOF), and hot-swappable components.

• Real-time, in-line data reduction

The XtremIO storage system deduplicates desktop images in-line, allowing a massive number of virtual desktops to reside in a small and economical amount of flash capacity. Every bit of data is deduplicated up front, before being written to flash. Data reduction on the XtremIO array does not adversely affect IOPS or latency; rather, it enhances the performance of the EUC environment. The more common the data, the faster XtremIO performs, because in-line deduplication is purely an in-memory metadata operation rather than actual I/O to the SSDs.

• Scale-out design

A single X-Brick is the fundamental building block of a scaled-out XtremIO clustered system. You can start with a small deployment of about 1,000 virtual desktops and grow it to nearly any required scale by simply configuring a larger XtremIO cluster. As you add building blocks, the system expands capacity and performance linearly, making sizing EUC and managing future growth extremely simple.

• VAAI integration with in-memory metadata and in-line data reduction

The XtremIO array is fully integrated with vSphere through vStorage APIs for Array Integration (VAAI). It supports the following API primitives: ATS, Clone Blocks/Full Copy/XCOPY, Zero Blocks/Write Same, Thin Provisioning, and Block Delete. In combination with the array’s in-line data reduction and in-memory metadata management, XtremIO’s unique VAAI implementation enables nearly instantaneous virtual machine provisioning and cloning and the ability to use large volume sizes for unprecedented management simplicity.

• Massive performance

The XtremIO array is designed to handle very high, sustained levels of small, random, blended read and write I/O as is typical in virtual desktops, and to do so with consistent extraordinarily low latency.

• Ease of use

The XtremIO storage system requires only a few basic setup steps that can be completed in minutes. It requires no tuning or ongoing administration to achieve and maintain high performance levels. In fact, you can take the XtremIO system from shipping box to deployment readiness in less than an hour.


• Data center economics

A single X-Brick easily supports 2,500 or more full clone desktops and 3,500 or more linked clone desktops, requiring just a few rack units of space and approximately 750 W of power.
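The in-line, content-based deduplication described above can be illustrated with a toy model: blocks are fingerprinted, identical blocks are stored once, and duplicate writes touch only in-memory metadata. This is a conceptual sketch, not XtremIO's actual implementation.

```python
import hashlib

class DedupStore:
    """Toy content-addressed block store: identical blocks are stored once."""
    def __init__(self):
        self.blocks = {}     # fingerprint -> block data (stand-in for flash)
        self.metadata = {}   # (volume, lba) -> fingerprint (in-memory map)

    def write(self, volume: str, lba: int, block: bytes) -> None:
        fp = hashlib.sha256(block).hexdigest()
        if fp not in self.blocks:          # only unique data reaches "flash"
            self.blocks[fp] = block
        self.metadata[(volume, lba)] = fp  # duplicate writes are metadata-only

    def read(self, volume: str, lba: int) -> bytes:
        return self.blocks[self.metadata[(volume, lba)]]

store = DedupStore()
os_block = b"\x90" * 4096               # common OS data shared by every clone
for desktop in range(100):
    store.write(f"desktop{desktop}", 0, os_block)
print(len(store.blocks))   # 1 -- 100 logical copies, one physical block
```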

Document overview

This solution examines the following use cases:

• Boot storm

• Antivirus scan

• Microsoft security patch install

• User workload simulated with Login VSI 3.7 tool from Login Consultants

• Login storm (as part of the Login VSI user workload simulation)

• View recompose (linked clone desktops only)

• View refresh (linked clone desktops only)

These use cases all share a common thread—they are some of the most I/O-intensive operations on virtual desktops and were specifically chosen to showcase the consistent high performance of XtremIO, ensuring a great user experience under high load. For the first time, XtremIO gives customers the flexibility to deploy full clone desktops without changing their operational model for administering desktops. I/O-intensive operations such as antivirus scans and patch installations can easily run on full-clone virtual desktops without affecting the user experience.

Chapter 7, Testing and Validation: Full Clone Desktops, and Chapter 8, Testing and Validation: Linked Clone Desktops, contain the test definitions and results for each use case.

The purpose of this solution is to provide a virtualized infrastructure for virtual desktops powered by VMware Horizon View 5.2, VMware vSphere 5.1, View Composer 5.2, and the EMC XtremIO all-flash array.

This solution includes all the components required to run this environment including the infrastructure hardware, software platforms including Microsoft Active Directory, and the required VMware Horizon View configuration.

Information in this document can be used as the basis for a solution build, white paper, best practices document, or training.


This Proven Solution Guide contains the results observed from testing the EMC Infrastructure for VMware Horizon View 5.2 solution. The objectives of this testing are to establish:

• A reference architecture of validated hardware and software that permits easy and repeatable deployment of the solution.

• The best practices for storage configuration that provides optimal performance, scalability, and protection in the context of the enterprise virtual desktop market.

Implementation instructions, including how to install and configure the VMware Horizon View 5.2 components, vSphere 5.1, and the required EMC products, are beyond the scope of this document. References to supporting documentation for these products are provided where applicable.

The intended audience for this Proven Solution Guide is:

• Internal EMC personnel

• EMC partners

• Customers

It is assumed that the reader has a general knowledge of the following products:

• VMware vSphere 5.1

• VMware Horizon View 5.2

• EMC XtremIO all-flash array

• Cisco Nexus switches

Table 1 lists the terms that are frequently used in this document.

Table 1. Terminology

Term Definition

Full clone A fully independent virtual desktop that is an exact replica of a source master desktop. Each full clone requires the same storage capacity as the master desktop.

Linked clone A virtual desktop that shares a replica disk with many other virtual machines, while using a small delta disk to write any changes that are required. Each linked clone requires only that storage required for the delta disk.

Login VSI A third-party benchmarking tool developed by Login Consultants that simulates real-world EUC workloads. Login VSI uses an AutoIT script and determines the maximum system capacity based on the response time of the users.


Replica A read-only copy of a master image that is used to deploy linked clones.

Vdbench An open-source workload generator that can be used to generate a sustained and consistent I/O rate based on the parameters that you provide. For additional information about vdbench, visit http://sourceforge.net/projects/vdbench.

Reference architecture

This Proven Solution Guide has a corresponding reference architecture document that is available on EMC Online Support and EMC.com. EMC Infrastructure for VMware Horizon View 5.2: EMC XtremIO All-Flash Array, VMware vSphere 5.1, VMware Horizon View 5.2, and VMware Horizon View Composer 5.2—Reference Architecture provides more details.

If you do not have access to this document, contact your EMC representative.

The reference architecture and the results in this Proven Solution Guide are valid for 2,500 Windows 7 full-clone virtual desktops and 3,500 Windows 7 linked-clone virtual desktops per X-Brick, conforming to the workload described in the following sections:

• For full clone desktops: Validated environment profile on page 52

• For linked clone desktops: Validated environment profile on page 71


Figure 2 shows the reference architecture of the solution.

Figure 2. Reference architecture


Configuration

Table 2 lists the hardware used to validate the solution.

Table 2. Solution hardware

EMC XtremIO (quantity: 1)

• A single managed system of 1 X-Brick

• 25 x 400 GB eMLC SSD drives per X-Brick

• Notes: shared storage for virtual desktops and infrastructure servers

Intel-based servers (quantity: 20)

• Memory: 144 GB of RAM

• CPU: 2 x Intel Xeon E7-2870 2.40 GHz deca-core processors

• Internal storage: 1 x 146 GB internal SAS disk

• External storage: XtremIO (FC)

• NIC: dual-port 10 GbE adapter

• FC HBA: dual-port 8 Gb/s adapter

• Notes: 18 servers for vSphere desktop clusters 1 and 2; 2 servers for the vSphere cluster that hosts infrastructure virtual machines

Cisco Nexus 5020 (quantity: 2)

• 40 x 10 Gb ports

• 2 Ethernet ports per server

• 2 FC ports per server

• Notes: redundant FC and LAN A/B configuration


Table 3 lists the software used to validate the solution.

Table 3. Solution software

EMC XtremIO (FC-connected shared storage for vSphere datastores)

Cisco Nexus

• Cisco Nexus 5020: Version 4.2(1)N1(1)

VMware vSphere servers

• vSphere: 5.1.0 (1123961)

VMware vCenter Server

• OS: Windows 2008 R2 SP1

VMware Horizon View

• View Connection Server: 5.2

• View Composer: 5.2

Microsoft software platforms

• Active Directory, including DNS and DHCP: Windows Server 2012

• SQL Server: SQL Server 2012

• System Center: System Center Operations Manager 2012

Virtual desktops (this software is used to generate the test load)

• OS: MS Windows 7 Enterprise SP1 (32-bit)

• VMware Tools: 9.0.5 build-1065307

• Microsoft Office: Office Enterprise 2007 (Version 12.0.6562.5003)

• Microsoft Internet Explorer: 9.0.8112.316421

• Adobe Reader: 9.1.0

• McAfee Virus Scan: 8.7 Enterprise

• Adobe Flash Player: 11

• Bullzip PDF Printer: 6.0.0.865

• Login VSI (EUC workload generator): 3.7 Professional Edition

• Vdbench (I/O workload generator): 5.03


Chapter 3 Solution Infrastructure

This chapter describes the specific components used during the development of this solution. It includes the following sections:

• VMware Horizon View 5.2

• VMware vSphere 5.1 infrastructure

• Microsoft Windows infrastructure

VMware Horizon View 5.2

VMware Horizon View delivers rich and personalized virtual desktops as a managed service from a virtualization platform built to deliver the entire desktop. With VMware Horizon View 5.2, administrators can virtualize the operating system, applications, and user data, and deliver modern desktops to end users. Horizon View 5.2 provides the following:

• Centralized, automated management of virtual desktops with increased control and cost savings

• Improved business agility as well as a flexible high-performance desktop experience for end users across a variety of network conditions

VMware Horizon View 5.2 integrates effectively with vSphere 5.1 to provide the following:

• Performance optimization—Optimizes storage utilization and performance using View Composer 5.2 to reduce the footprint of virtual desktops.

• Thin provisioning support—Enables efficient allocation of storage resources when virtual desktops are provisioned. This results in better utilization of storage infrastructure and reduced capital expenditure (CAPEX) and operating expenditure (OPEX).


This solution uses three VMware Horizon View Manager Server instances, each capable of scaling up to 2,000 virtual desktops.

This VMware Horizon View 5.2 implementation comprises the following core components:

• View Manager Server

• View Composer 5.2

• View Composer linked clone desktops

• View full clone desktops

Additionally, the following components are required to provide the infrastructure for a VMware Horizon View 5.2 deployment:

• Microsoft Active Directory

• Microsoft SQL Server

• DNS server

• Dynamic Host Configuration Protocol (DHCP) server

View Manager Server

The View Manager Server is the central management location for virtual desktops and performs the following key functions:

• Brokers connections between the users and the virtual desktops

• Controls the creation and retirement of virtual desktop images

• Assigns users to desktops

• Controls the state of the virtual desktops

• Controls access to the virtual desktops

View Composer 5.2

View Composer 5.2 works directly with vCenter Server to deploy, customize, and maintain the state of the virtual desktops when you are using linked clones. Desktops provisioned as linked clones share a common base image within a desktop pool and have a minimal storage footprint. The base image is shared among a large number of desktops.

This solution uses a standalone View Composer 5.2 server to deploy 2,500 dedicated virtual desktops running Windows 7 as linked clones. A standalone View Composer server was used to minimize the impact of virtual desktop provisioning and maintenance operations on the vCenter server.

View Composer linked clone desktops

VMware Horizon View with View Composer uses the concept of linked clones to quickly provision virtual desktops. With linked clone desktops, the operating system reads all the common data from the read-only replica, while the unique data created by the operating system or user is stored on the linked clone. Figure 3 shows a logical representation of this relationship.

Figure 3. Horizon View: Logical representation of linked clone and replica disk

View full clone desktops

VMware Horizon View 5.2 supports the use of full clone desktops for virtual desktop deployments. Horizon View uses traditional vSphere customization specifications and the Microsoft Sysprep utility to customize each desktop after it is cloned from a master desktop template.

VMware vSphere 5.1 infrastructure

vSphere 5.1 overview

VMware vSphere 5.1 is the market-leading virtualization hypervisor, used across thousands of IT environments around the world. VMware vSphere 5.1 virtualizes computer hardware resources, including CPUs, RAM, hard disks, and network controllers, to create fully functional virtual machines that run their own operating systems and applications just like physical computers.

The high-availability features in VMware vSphere 5.1, along with VMware Distributed Resource Scheduler (DRS) and VMware vSphere Storage vMotion, enable seamless migration of virtual desktops from one vSphere server to another with minimal or no disruption to users.

Desktop vSphere clusters

This solution deploys two vSphere clusters to host virtual desktops. We chose these server types based on availability; you can achieve similar results with a variety of server configurations as long as the ratios of server RAM per desktop and desktops per CPU core are maintained.

Each cluster consists of nine dual deca-core vSphere 5.1 servers supporting 1,250 desktops, or approximately 139 virtual machines per vSphere server. Each cluster has access to 10 FC datastores.
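As a rough check of these ratios, the cluster arithmetic above can be sketched as follows. This is an illustrative helper, not part of the validated solution; the figures mirror the clusters described above (nine dual deca-core hosts, 1,250 desktops per cluster):

```python
import math

def cluster_ratios(desktops: int, hosts: int, cores_per_host: int):
    """Desktops per host and per CPU core for a desktop cluster."""
    per_host = math.ceil(desktops / hosts)          # load on each host
    per_core = desktops / (hosts * cores_per_host)  # desktop density per core
    return per_host, per_core

# Nine dual deca-core (2 x 10 = 20 cores) hosts, 1,250 desktops per cluster
per_host, per_core = cluster_ratios(desktops=1250, hosts=9, cores_per_host=20)
print(per_host, round(per_core, 1))  # 139 desktops/host, ~6.9 desktops/core
```

A similar calculation can be reused when substituting different server configurations, as noted above.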

Infrastructure vSphere cluster

One vSphere cluster is deployed in this solution to host the infrastructure servers.

Note: This cluster is not required if the resources needed to host the infrastructure servers are already present in the host environment.

The infrastructure vSphere 5.1 cluster consists of two dual quad-core vSphere 5.1 servers. The cluster has access to a single datastore that stores the infrastructure server virtual machines.


The infrastructure cluster hosts the following virtual machines:

• Two Windows Server 2012 R2 domain controllers—Provide DNS, Active Directory, and DHCP services.

• One VMware vCenter 5.1 Server running on Windows Server 2008 R2 SP1—Provides management services for the VMware clusters and View Composer. This server also runs vSphere Update Manager.

• Three VMware Horizon View Manager 5.2 Servers, each running on Windows Server 2008 R2 SP1—Provide services to manage the virtual desktops.

• SQL Server 2012 on Windows Server 2008 R2 SP1—Hosts databases for the VMware vCenter Server, VMware Horizon View Composer, and the VMware Horizon View Manager server event log.

• Windows 7 Key Management Service (KMS)—Provides a method to activate Windows 7 full clone desktops.

Microsoft Windows infrastructure

Microsoft Windows provides the infrastructure that is used to support the virtual desktops and includes the following components:

• Microsoft Active Directory

• Microsoft SQL Server

• DNS server

• DHCP server

Active Directory

The Windows domain controllers run the Active Directory service, which provides the framework to manage and support the virtual desktop environment. Active Directory performs the following functions:

• Manages the identities of users and their information

• Applies group policy objects

• Deploys software and updates

SQL Server

Microsoft SQL Server is a relational database management system (RDBMS). A dedicated SQL Server 2012 instance provides the required databases for vCenter Server and View Composer.

DNS server

DNS is the backbone of Active Directory and provides the primary name resolution mechanism for Windows servers and clients. In this solution, the DNS role is enabled on the domain controllers.


DHCP server

The DHCP server provides the IP address, DNS server name, gateway address, and other network information to the virtual desktops.

In this solution, the DHCP role is enabled on one of the domain controllers. The DHCP scope is configured with an IP range large enough to support 2,500 virtual desktops.
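For illustration, the minimum subnet size for such a scope can be sketched as follows. This calculation is not from the guide; it simply finds the smallest IPv4 prefix whose usable address count covers all 2,500 desktops:

```python
def smallest_prefix(hosts_needed: int) -> int:
    """Smallest IPv4 prefix length with enough usable host addresses."""
    prefix = 32
    while (2 ** (32 - prefix)) - 2 < hosts_needed:  # minus network/broadcast
        prefix -= 1
    return prefix

prefix = smallest_prefix(2500)
print(prefix, 2 ** (32 - prefix) - 2)  # a /20 offers 4094 usable addresses
```

In practice the scope is often split across more than one subnet, as the logical design section later notes.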


Chapter 4 Storage Design

This chapter describes the storage design that applies to the specific components of this solution.

EMC XtremIO storage architecture

The EMC XtremIO all-flash array scales out in both performance and capacity by design: additional capacity and performance can be configured to meet virtually any end-user computing (EUC) requirement. Each cluster building block is itself a highly available, high-performance, fully active/active storage system with no single point of failure. When multiple building blocks form a cluster, XtremIO inherently stays in balance, so all desktops benefit from the entire performance potential of the cluster at all times.

The XtremIO storage cluster is managed by the XtremIO Operating System (XIOS). XIOS ensures that the system remains balanced and always delivers the highest levels of performance without administrator intervention.

• XIOS ensures that all SSDs in the system are evenly loaded, providing both the highest possible performance as well as endurance that stands up to demanding workloads for the entire life of the array.

• XIOS eliminates the need to perform the complex configuration steps found on traditional arrays. There is no need to set RAID levels, determine drive group sizes, set stripe widths, set caching policies, build aggregates, or do any other such configuration.

• With XIOS, every volume is automatically and optimally configured at all times. I/O performance on existing volumes and data sets automatically increases with larger cluster sizes. Every volume can receive the full performance potential of the entire XtremIO system.

Storage layout

Once deployed, the EMC XtremIO all-flash array does not require any further configuration before you create LUNs. During deployment, the XtremIO array creates Data Protection Groups, a proprietary form of RAID group used to protect data in the event of a failed eMLC drive. This process is completely transparent to the administrator.


EMC Virtual Storage Integrator for VMware vSphere

EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere Client that provides a single management interface for managing EMC XtremIO storage within the vSphere environment. Features can be added to and removed from VSI independently, which provides the flexibility to customize VSI user environments; the features are managed by using the VSI Feature Manager. VSI provides a unified user experience that allows new features to be introduced rapidly in response to changing customer requirements.

We used the following Virtual Storage Integrator features during the validation testing:

• Storage Viewer—Extends the vSphere client to facilitate the discovery and identification of EMC XtremIO storage devices that are allocated to VMware vSphere hosts and virtual machines. Storage Viewer presents the underlying storage details to the virtual data center administrator, merging the data of several different storage mapping tools into a few seamless vSphere client views.

• Unified Storage Management—Simplifies storage administration of the EMC XtremIO platform. Unified Storage Management enables VMware administrators to provision new Virtual Machine File System (VMFS) datastores and raw device mapping (RDM) volumes seamlessly within the vSphere client.

The EMC Virtual Storage Integrator for VMware vSphere product guides, available on EMC Online Support, provide more information.

XtremIO storage layout overview

The EMC XtremIO array is configured with the following LUNs for desktop and infrastructure storage:

• Twenty LUNs for full-clone desktop storage, with each LUN storing 125 desktops. We used 4 TB LUNs for full-clone desktops and 375 GB LUNs for linked-clone desktops. XtremIO supports the VAAI ATS primitive, which enhances desktop performance.

• One 2 TB LUN for infrastructure server storage.

Table 4 lists the storage requirements for each of the virtual desktop types.

Table 4. Storage requirements

Item                                          Capacity         Number of items   Total capacity
Linked-clone virtual desktop                  3 GB (average)   2,500             7.5 TB
Linked-clone replica disk (one per desktop    20 GB            20                400 GB
datastore)
Full-clone virtual desktop                    20 GB            2,500             50 TB
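The totals in Table 4 can be verified with a quick arithmetic sketch. Note that the table's totals use decimal terabytes (1 TB = 1,000 GB); the figures below are taken directly from the table:

```python
TB = 1000  # Table 4 totals use decimal terabytes (1 TB = 1,000 GB)

linked_clone_gb = 3 * 2500    # 3 GB average x 2,500 linked clones
replica_gb      = 20 * 20     # 20 GB replica x 20 desktop datastores
full_clone_gb   = 20 * 2500   # 20 GB x 2,500 full clones

print(linked_clone_gb / TB)   # 7.5  (TB)
print(replica_gb)             # 400  (GB)
print(full_clone_gb / TB)     # 50.0 (TB)
```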


Chapter 5 Network Design

This chapter describes the network design used in this solution. It contains the following sections:

• Considerations

• EMC XtremIO Storage Controller configuration

• VMware vSphere network configuration

• Cisco Nexus 5020 Ethernet configuration

• Cisco Nexus 5020 Fibre Channel configuration

Considerations

Storage network layout overview

Figure 4 shows the 10 Gb Ethernet (10 GbE) and 8 Gb FC connectivity between the Cisco Nexus 5020 switches and the EMC XtremIO storage. Uplink Ethernet ports on the Nexus switches can connect to a 10 Gb or 1 Gb external LAN. In this solution, we used the 10 Gb LAN through the Cisco Nexus switches to extend Ethernet connectivity to the desktop clients, the VMware Horizon View components, and the Windows Server infrastructure.


Figure 4. Storage network layout overview

Logical design considerations

This solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.

The IP scheme for the virtual desktop network must be designed with enough IP addresses in one or more subnets for the DHCP server to assign them to each virtual desktop.


EMC XtremIO Storage Controller configuration

Storage Controller interfaces

Figure 5 shows the back of the XtremIO Storage Controllers for one X-Brick. The ports marked A1 and A2 are connected to one FC-enabled switch, while ports B1 and B2 are connected to a separate FC-enabled switch.

Figure 5. XtremIO Storage Controllers

VMware vSphere network configuration

vSphere vSwitch configuration

All network interfaces on the vSphere servers in this solution use 10 GbE connections, and all virtual desktops are assigned an IP address by a DHCP server. The Intel-based servers use two on-board Broadcom GbE controllers for all the network connections. Figure 6 shows the vSwitch configuration in vCenter Server.

Figure 6. vSphere vSwitch configuration

Virtual switch vSwitch0 uses two physical network interface cards (NICs). Table 5 lists the configured port groups in vSwitch0 and vSwitch1.


Table 5. vSphere port groups in vSwitch0 and vSwitch1

Virtual switch   Configured port groups   Used for
vSwitch0         ManagementNetwork        VMkernel port for vSphere host management
                 DesktopNetwork           Network connection for virtual desktops and LAN traffic

vSphere vSwitch virtual ports

By default, a vSwitch is configured with 120 virtual ports, which may not be sufficient in an EUC environment. On the vSphere servers that host the virtual desktops, each virtual desktop consumes one port. Set the number of ports based on the number of virtual desktops that will run on each vSphere server, as shown in Figure 7.

Note: Reboot the vSphere server for the changes to take effect.

Figure 7. vSphere vSwitch virtual ports

If a vSphere server fails or must be placed in maintenance mode, the other vSphere servers in the cluster must accommodate the additional virtual desktops migrated from the server that goes offline. Consider this worst-case scenario when determining the maximum number of virtual ports per vSwitch: if the number of virtual ports is insufficient, the virtual desktops will not be able to obtain an IP address from the DHCP server.
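That worst case can be sketched as follows. This is an illustrative helper using this solution's cluster figures (1,250 desktops across nine hosts), not a sizing tool from the guide:

```python
import math

def worst_case_ports(cluster_desktops: int, hosts: int) -> int:
    """Virtual ports each host needs if one host in the cluster goes offline."""
    # With one host down, the surviving hosts share the full desktop load.
    return math.ceil(cluster_desktops / (hosts - 1))

print(worst_case_ports(1250, 9))  # 157 ports per host in the worst case
```

The configured vSwitch port count should then be rounded up to the next value the vSphere client offers.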


vSphere disk settings

The following disk configuration changes were made to the vSphere hosts so that they can accept more concurrent storage requests from the guest virtual machines, enabling the hosts to make greater use of the performance capabilities of the XtremIO array. Update the disk settings using the following procedure:

1. From the Configuration tab of each vSphere host, click Advanced Settings as shown in Figure 8.

Figure 8. vSphere server: Configuration tab

2. Select Disk, and update the Disk.SchedNumReqOutstanding parameter to 256 and the Disk.SchedQuantum parameter to 64.

Figure 9 shows these changes.


Figure 9. vSphere server: Advanced disk settings

3. Click OK to apply the changes.

These changes can also be applied using other methods such as vSphere host profiles or PowerCLI.

vSphere host bus adapter queue depth optimizations

The following configuration changes were made to the host bus adapters (HBAs) on the vSphere hosts so that they can accept more concurrent storage requests from vSphere, enabling the HBAs to make greater use of the performance capabilities of the XtremIO array. Update the queue depth settings using the following procedure:

1. Connect to the vSphere host shell using the root account.

2. Verify which HBA module is currently loaded by entering one of the following commands:

For Qlogic: esxcli system module list | grep qla

For Emulex: esxcli system module list | grep lpfc

3. To adjust the HBA queue depth, run one of these two commands:

For Qlogic: esxcli system module parameters set -p ql2xmaxqdepth=256 -m qla2xxx

For Emulex: esxcli system module parameters set -p lpfc0_lun_queue_depth=256 -m lpfc820


4. Restart the vSphere host.

5. Connect to vSphere host shell using the root account.

6. Run the following command to confirm the queue depth adjustment: esxcli system module parameters list -m <driver> | grep ql2xmaxqdepth

Sample command for a Qlogic HBA with queue depth set to 256: esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth

7. Verify from the command output that the indicated queue depth is 256.

Sample command output: ql2xmaxqdepth int 256 Maximum queue depth to report for target devices

VMware KB article 1267 provides additional information about adjusting HBA queue depth.

vSphere storage multipathing

XtremIO supports the native multipathing technology that is part of the VMware vSphere suite. To ensure optimal performance, set the path selection policy to Round Robin (VMware) on the XtremIO volumes presented to vSphere. This ensures optimal distribution and availability of load among the I/O paths to the XtremIO storage. Update the multipathing settings using the following procedure:

1. From the Configuration tab of each vSphere host, click Devices, right-click one of the XtremIO LUNs, and then select Manage Paths, as shown in Figure 10.

Figure 10. vSphere server: Storage devices


2. In the Manage Paths window, change the Path Selection drop-down list to Round Robin (VMware) and click Change, as shown in Figure 11.

Figure 11. vSphere server: Manage paths

3. Repeat this process for each of the other XtremIO LUNs.

These changes can also be applied using PowerCLI.

Cisco Nexus 5020 Ethernet configuration

Overview

Two Cisco Nexus 5020 switches provide redundant high-performance, low-latency 10 GbE and 8 Gb FC networking. The Ethernet connections are delivered by a cut-through switching architecture for 10 GbE server access in next-generation data centers.

Cabling

In this solution, the cabling is spread across the two Nexus 5020 switches to provide redundancy and load balancing of the network traffic.

Cisco Nexus 5020 Fibre Channel configuration

Overview

Two Cisco Nexus 5020 switches provide redundant high-performance, low-latency 10 GbE and 8 Gb FC networking.

Cabling

In this solution, the FC and Data Mover cabling is evenly distributed across the two Nexus 5020 switches to provide redundancy and load balancing of the FC and network traffic.

Fibre Channel uplinks

The FC uplinks are configured using single initiator zoning to provide optimal security and minimize interference. Single initiator zoning requires four FC zones per vSphere host: each vSphere host FC port is zoned individually to each of the two XtremIO Storage Controller FC ports. Figure 12 provides a visual representation of single initiator zoning.


Figure 12. Example of single initiator zoning

The following is an example of the configuration required to create the necessary FC zones for one vSphere host on one of the four Nexus 5020 switches. In this example, we zone one of the two vSphere host FC ports to each of the two XtremIO Storage Controller ports. The remaining Nexus switches have a similar configuration, with the second vSphere host FC port zoned to each of the XtremIO Storage Controllers.

vsan database
  vsan 100
interface fc2/1
  no shutdown
interface fc2/2
  no shutdown
interface fc2/3
  no shutdown


fcalias name rtpxio99-sc1 vsan 100
  member pwwn 20:00:e8:b7:48:XX:XX:XX
fcalias name rtpxio99-sc2 vsan 100
  member pwwn 20:00:e8:b7:48:XX:XX:XX
fcalias name rtpucs1-port1 vsan 100
  member pwwn 20:00:e8:b7:48:XX:XX:XX
zone name rtpucs1-port1_rtpxio99-sc1 vsan 100
  member fcalias rtpucs1-port1
  member fcalias rtpxio99-sc1
zone name rtpucs1-port1_rtpxio99-sc2 vsan 100
  member fcalias rtpucs1-port1
  member fcalias rtpxio99-sc2
zoneset name rtplab-1 vsan 100
  member rtpucs1-port1_rtpxio99-sc1
  member rtpucs1-port1_rtpxio99-sc2
zoneset activate name rtplab-1 vsan 100
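The single initiator zone stanzas above follow a fixed pattern, so they can also be generated programmatically. The sketch below is illustrative only and reuses the aliases and VSAN number from the example; it is not a tool supplied with the solution:

```python
def single_initiator_zones(initiator: str, targets: list, vsan: int) -> str:
    """Emit NX-OS zone stanzas pairing one initiator alias with each target alias."""
    lines = []
    for target in targets:
        lines += [
            f"zone name {initiator}_{target} vsan {vsan}",
            f"  member fcalias {initiator}",
            f"  member fcalias {target}",
        ]
    return "\n".join(lines)

print(single_initiator_zones("rtpucs1-port1",
                             ["rtpxio99-sc1", "rtpxio99-sc2"], 100))
```

Generating the stanzas this way keeps each zone limited to exactly one initiator and one target, which is the point of single initiator zoning.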


Chapter 6 Installation and Configuration

This chapter describes how to install and configure this solution. It includes the following sections:

• Installation overview

• Provisioning EMC XtremIO storage

• Creating VMware Horizon View desktop pools

Installation overview

This chapter includes instructions for the following activities:

• Creating initiator groups and provisioning storage on the XtremIO array

• Creating desktop pools

See the VMware website for installation and configuration instructions for the following components:

• VMware Horizon View Manager Server 5.2

• VMware Horizon View Composer 5.2

• VMware vSphere 5.1

This document does not include installation and configuration instructions for the following components:

• Microsoft System Center Configuration Manager (SCCM) 2012

• Microsoft Active Directory, Group Policies, DNS, and DHCP

• Microsoft SQL Server 2012

Provisioning EMC XtremIO storage

XtremIO Initiator Group and LUN provisioning

You can easily configure the EMC XtremIO array, creating volumes and associating them with clients, in just three steps:

1. From the XtremIO Configuration page, click Add in the Initiator Groups column to create an initiator group and populate it with the clients that need access to the XtremIO array, as shown in Figure 13.


Figure 13. Create an XtremIO initiator group

2. From the XtremIO Configuration page, click Add in the Volumes column and create a volume of the required size, as shown in Figure 14.

Figure 14. Create an XtremIO volume


3. Follow the steps shown in Figure 15 from the XtremIO Configuration page: Select the volume (1) and initiator group (2), click Map All (3), and then click Apply (4). The volume is now available to the hosts in the selected initiator group.

Figure 15. Map the XtremIO volume to an initiator group

Figure 16 shows the LUN configuration in the EMC XtremIO user interface, as well as the LUN mapping for one of the two initiator groups. In the example, each group contains the World Wide Names (WWNs) of the hosts in the indicated vSphere cluster.

Figure 16. XtremIO LUN configuration and zoning


Creating VMware Horizon View desktop pools

Installation and configuration prerequisites

View Manager Server and View Composer

The VMware Horizon View Installation document, available on the VMware website, has detailed procedures on how to install View Manager Server and View Composer 5.2. This solution requires no special configuration instructions.

vCenter Server and vSphere

The vSphere Installation and Setup Guide, available on the VMware website, contains detailed procedures that describe how to install and configure vCenter Server and vSphere. As a result, these subjects are not covered in further detail in this paper. This solution requires no special configuration instructions.

Horizon View

Before deploying the desktop pools, ensure that the following activities, described in the VMware Horizon View Installation document, have been completed:

1. Prepare Active Directory.

2. Install View Composer 5.2 on the vCenter Server.

3. Install the View Manager Server.

4. Add the vCenter Server instance to View Manager and enable host caching for View.

Horizon View desktop pool configuration

In this solution, we used two persistent automated desktop pools to deploy the virtual desktops. To create one of the persistent automated desktop pools as configured for this solution, complete the following steps.

Note: The example provided in this section demonstrates the creation of a linked-clone desktop pool. Where applicable, the steps include information about creating a full-clone desktop pool.

1. Log in to the VMware Horizon View Administration page, which is located at https://server/admin, where server is the IP address or DNS name of the View Manager server.

2. Click Pools in the left pane.

3. Click Add under the Pools banner.

The Add Pool page appears.

4. Under Pool Definition, click Type.

The Type page appears.

5. Select Automated Pool as shown in Figure 17.


Figure 17. Horizon View: Select Automated Pool

6. Click Next.

The User Assignment page appears.

7. Select Dedicated and leave the Enable automatic assignment checkbox checked.

8. Click Next.

The vCenter Server page appears.

9. Select View Composer linked clones and select a vCenter Server that supports View Composer as shown in Figure 18. For full clone desktops, select Full virtual machines.

Figure 18. Horizon View: Select View Composer linked clones

10. Click Next.

The Pool Identification page appears.

11. Type the required information.

12. Click Next.

The Pool Settings page appears.

13. Make any required changes.

14. Click Next.

The Provisioning Settings page appears.


15. Complete the following steps, as shown in Figure 19:

a. Select Use a naming pattern.

b. In the Naming Pattern field, type the naming pattern.

c. In the Max number of desktops field, type the number of desktops to provision.

Figure 19. Horizon View: Select Provision Settings

16. Click Next.

If you are creating a linked clone pool, the View Composer Disks page appears.

17. Make any required changes.

18. Click Next.

If you are creating a linked clone pool, the Storage Optimization page appears.

19. Select the Select separate datastores for replica and OS disk checkbox.

20. Click Next.

The vCenter Settings page appears.

21. Complete the following steps:

a. Click Browse next to each of the following items to select a default image for the items.

Note: Figure 20 shows the vCenter Settings page for linked clones. It does not display options for full clones.

− Parent VM (linked clones only)—Parent virtual machine for the linked clones

− Snapshot (linked clones only)—Snapshot to use for the default image

− Template (full clones only)—Template to use for the full clones (not shown in Figure 20)


− VM folder location—Folder for the virtual machines

− Host or cluster—Cluster hosting the virtual desktops

− Resource pool—Resource pool to store the desktops

Figure 20. Horizon View: vCenter Settings

b. At Linked clone datastores, shown in Figure 20, click Browse.

The Select Linked Clone Datastores page appears.

c. If you are creating a full-clone desktop pool, click Browse under the line item Datastores, and select the datastores that you will use to host your full clone desktops.

This line item appears under the Resource Settings section, which is shown in Figure 20.

d. Select the checkboxes for the eight LUNs that were provisioned for linked clone storage, as shown in Figure 21, or for full clone storage, and click OK.

Figure 21. Horizon View: Select Linked Clone Datastores


e. If you are creating a linked-clone desktop pool, in the configuration line item for Replica disk datastores click Browse. In the Select Replica Disk Datastores page that appears, select the LUN that was provisioned for replica disk storage, as shown in Figure 22, and click OK.

Note: The configuration line item referenced in this step is not displayed if you are creating a full-clone desktop pool.

Figure 22. Horizon View: Select Replica Disk Datastores

22. Click Next.

The Advanced Storage Options page appears.

23. Make any required changes.

24. Click Next.

The Guest Customization page appears.


25. Complete the following steps, as shown in Figure 23, for a linked-clone desktop pool:

a. In the Domain list box select the domain.

b. In the AD container field click Browse and then select the AD container.

c. Select Use QuickPrep.

Figure 23. Horizon View: Guest Customization

26. If you are creating a full-clone desktop pool, select Use a customization specification (Sysprep) and select a vCenter customization specification to use to customize the virtual desktops.

27. Click Next.

The Ready to Complete page appears.

28. Verify the settings for the pool.

29. Click Finish.

The deployment of the virtual desktops starts.

30. Repeat this process as needed to provision additional desktop pools.
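View spreads new desktops across the datastores selected in the wizard. As a rough illustration of the even distribution you can expect across the eight linked clone LUNs, here is a sketch (the desktop and LUN names are hypothetical, and round-robin is a simplification of View's actual placement logic):

```python
def assign_datastores(num_desktops, datastores):
    """Round-robin assignment: a simplified model of how View spreads
    desktops across the datastores selected in the wizard."""
    assignment = {}
    for i in range(num_desktops):
        name = f"desktop-{i + 1:04d}"                 # hypothetical naming scheme
        assignment[name] = datastores[i % len(datastores)]
    return assignment

# Hypothetical names for the eight linked clone LUNs from the wizard step above
luns = [f"XtremIO-LC-{n}" for n in range(1, 9)]
result = assign_datastores(2500, luns)
per_lun = {d: sum(1 for v in result.values() if v == d) for d in luns}
print(per_lun["XtremIO-LC-1"], per_lun["XtremIO-LC-8"])  # 313 312
```

With 2,500 desktops over eight LUNs, each LUN ends up with 312 or 313 desktops, which is why no single datastore becomes a capacity or performance outlier.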


Chapter 7 Testing and Validation: Full Clone Desktops

This chapter includes the following sections:

• Overview

• Validated environment profile

• Boot storm test

• Antivirus test

• Patch install test

• Login VSI test

Overview

This chapter provides a description of the tests performed to validate the solution, and the performance of the solution and component subsystems under the following tests:

• Boot storm of all desktops

• McAfee antivirus full scan on all desktops

• Security patch install with Microsoft SCCM 2012 on all desktops

• User workload testing using Login VSI on all desktops

We performed the testing with an XtremIO cluster that contained a single X-Brick running 2,500 desktops. XtremIO performance scales linearly with each X-Brick added to the cluster, meaning that an XtremIO cluster with four X-Bricks that hosts 10,000 desktops will provide similar performance to a single X-Brick cluster running 2,500 desktops. All tests were run on a fully pre-conditioned XtremIO system.¹

¹ Refer to the IDC paper at http://idcdocserv.com/241856 for information on how to test an all-flash array.


Table 6 shows a summary of the test results.

Table 6. Test results summary: Full clone desktops

Operation           Peak IOPS from 2,500 full clone desktops running concurrently
Boot storm          65,213
Anti-virus scan     52,787
Patching            36,546
Login VSI           28,183

Average storage latency: sub-1-millisecond for all operations
Total IOPS capability of a 4 X-Brick cluster: 600K mixed (50%:50%) IOPS

As Table 6 demonstrates, a single X-Brick can easily sustain even the most I/O-intensive operations for 2,500 concurrent full clone desktops.

Validated environment profile

Table 7 provides the validated environment profile.

Table 7. Horizon View: Full-clone desktop environment profile

Number of virtual desktops: 2,500
Virtual desktop OS: Windows 7 Enterprise SP1 (32-bit)
CPU per virtual desktop: 1 vCPU
Number of virtual desktops per CPU core: 6.9
RAM per virtual desktop: 1 GB
Average storage available for each full clone desktop: 20 GB
Average storage used for each full clone desktop (used by Windows and applications): 12.74 GB
Average physical storage used for each full clone desktop on the XtremIO array (after dedupe): 197 MB
Dedupe ratio of full clone desktops (after provisioning): 81:1
Average IOPS per virtual desktop at steady state: Varied based on test configuration, ranging from 5 to 33 IOPS per desktop
Peak IOPS observed per virtual desktop during boot storm: 101.3
Average IOPS observed per virtual desktop throughout boot storm: 22.1
Time required to patch each desktop using SCCM: 5 minutes
Time required to complete full anti-virus scan on 2,500 desktops: 2 hours 51 minutes
Time required by View to deploy 2,500 desktops: 6 hours 45 minutes
Number of datastores used to store virtual desktops: 20
Number of virtual desktops per datastore: 125
Drive type for datastores: 400 GB eMLC SSD drives
Data protection: EMC XtremIO proprietary data protection (XDP), which delivers RAID 6-like data protection with better performance than RAID 10
Number of VMware clusters used for desktops: 2
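As a back-of-the-envelope check on the profile above, the per-desktop figures imply the following aggregate storage footprint (a sketch using only the values in Table 7):

```python
desktops = 2500
used_per_desktop_gb = 12.74        # logical data written by Windows and applications
physical_per_desktop_mb = 197      # physical space on XtremIO after inline dedupe

logical_tb = desktops * used_per_desktop_gb / 1024
physical_gb = desktops * physical_per_desktop_mb / 1024

print(f"Logical data: {logical_tb:.1f} TB")          # Logical data: 31.1 TB
print(f"Physical on XtremIO: {physical_gb:.1f} GB")  # Physical on XtremIO: 481.0 GB
```

Roughly 31 TB of logical desktop data reduces to under half a terabyte of physical flash, which is what makes a 2,500-desktop full clone deployment practical on a single X-Brick.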

Figure 24 shows the storage capacity utilization of the XtremIO array after the deployment of 2,500 full clone desktops.

Figure 24. Storage capacity utilization: 2,500 full clone desktops


We tested the following use cases to validate whether the solution performed as expected under heavy-load situations:

• Simultaneous boot of all desktops

• Full antivirus scan of all desktops

• Installation of a monthly release of security updates using Microsoft SCCM 2012 on all desktops

• Login and steady-state user load simulated using the Login VSI medium workload on all desktops

In each use case, we present a number of key metrics that show the overall performance of the solution.

Note: The results presented are those obtained in the EMC Solutions lab. Results may vary based on environmental conditions.

We used Login VSI version 3.7 to run a user load on the desktops. Login VSI provides guidance for gauging the maximum number of users that a desktop environment can support. Login VSI workloads are categorized as light, medium, heavy, multimedia, core, and random (also known as workload mashup). The medium workload selected for this testing had the following characteristics:

• The workload emulated a medium knowledge worker who used Microsoft Office Suite, Internet Explorer, Adobe Acrobat Reader, Bullzip PDF Printer, and 7-zip.

• After a session started, the medium workload repeated every 12 minutes.

• The response time was measured every 2 minutes during each loop.

• The medium workload opened up to five applications simultaneously.

• The typing rate was 160 ms per character.

• Approximately 2 minutes of idle time was included to simulate real-world users.

Each loop of the medium workload used the following applications:

• Microsoft Outlook 2007—Ten email messages were browsed.

• Microsoft Internet Explorer (IE)—On one instance of IE, the BBC.co.uk website was opened. Another instance browsed Wired.com and Lonelyplanet.com. Finally, another instance opened a flash-based 480p video file.

• Microsoft Word 2007—One instance of Microsoft Word 2007 was used to measure the response time, while another instance was used to edit a document.

• Bullzip PDF Printer and Adobe Acrobat Reader—The Word document was printed to PDF and reviewed.

• Microsoft Excel 2007—A very large Excel worksheet was opened and random operations were performed.


• Microsoft PowerPoint 2007—A presentation was reviewed and edited.

• 7-zip—Using the command line version, the output of the session was zipped.

A Login VSI launcher is a Windows system that launches desktop sessions on target virtual desktops. A launcher is either of two types—master or slave. A given test bed has only one master, but there can be several slave launchers as required.

The number of desktop sessions a launcher can run is typically limited by CPU or memory resources. By default, the graphics device interface (GDI) limit is not tuned. In such a case, Login Consultants recommends using a maximum of 45 sessions per launcher with two CPU cores (or two dedicated vCPUs) and 2 GB of RAM. When the GDI limit is tuned, this limit extends to 60 sessions per two-core machine.

In this validated testing, we launched 2,500 desktop sessions from 79 launchers, with approximately 32 sessions per launcher. PC over IP (PCoIP) was used for the View client connections. We allocated two vCPUs and 4 GB of RAM for each launcher. No bottlenecks were observed on the launchers during the Login VSI tests.
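The launcher sizing above can be sanity-checked with simple arithmetic (a sketch; the 32-sessions-per-launcher figure is the value used in this testing):

```python
import math

total_sessions = 2500
sessions_per_launcher = 32   # used in this testing; below the untuned-GDI guideline of 45

launchers = math.ceil(total_sessions / sessions_per_launcher)
print(launchers)  # 79
```

Keeping the per-launcher session count below the recommended ceiling is what ensured the launchers themselves never became the bottleneck during the tests.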

Boot storm test

We conducted this boot storm test by selecting all the desktops in vCenter Server and then selecting Power On. The overlays on the figures in this section show when the array IOPS achieved a steady state.

All 2,500 desktops were fully powered on within 4 minutes and achieved a steady state 10 minutes later. All desktops were available for login approximately 12 minutes after the test began. This section details the performance characteristics of various components of the View infrastructure during the boot storm test.

Figure 25 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.

Figure 25. Boot storm: IOPS for a single eMLC drive


During peak load, the drive serviced a maximum of 660.3 IOPS. Because of the random data placement and load balancing inherent in XtremIO’s architecture, all the SSDs share the load and capacity equally at all times, ensuring that the entire system has no hot spots. This ensures the best experience for end users.
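A toy model of content-based placement shows why such an architecture avoids hot spots (an illustrative sketch, not XtremIO's actual placement algorithm):

```python
import hashlib
from collections import Counter

NUM_SSDS = 25  # drive count in the tested X-Brick

def placement(fingerprint: str) -> int:
    """Choose an SSD from the block's content fingerprint."""
    return int(fingerprint, 16) % NUM_SSDS

counts = Counter()
for i in range(100_000):
    fp = hashlib.sha1(f"block-{i}".encode()).hexdigest()
    counts[placement(fp)] += 1

spread = max(counts.values()) - min(counts.values())
print(f"Busiest and quietest SSD differ by {spread} of 100,000 blocks")
```

Because the fingerprint is effectively random, every drive receives a near-identical share of the blocks, which is the behavior reflected in the per-drive IOPS figures throughout this chapter.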

Figure 26 shows the IOPS for one of the 20 LUNs used to store the full-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 26. Boot storm: IOPS for a full-clone desktop LUN

During peak load, the LUN serviced 3,611.5 IOPS. Because of XtremIO’s scale-out design and inherent load balancing, all the LUNs enjoy the processing power of the entire XtremIO cluster. There is no asymmetry. All LUNs perform equally well. In no case are certain LUNs preferred over others. LUNs do not need to be pinned down to certain controllers or caches, thereby greatly simplifying storage design and avoiding any unpredictable performance.

Figure 27 shows the total IOPS and bandwidth serviced by the XtremIO array during the test.

Figure 27. Boot storm: XtremIO array total IOPS and bandwidth


During peak load, the XtremIO array serviced 65,213.0 IOPS and 1,637.6 MB/s of bandwidth.

Note that the aggregate I/Os to all the LUNs and to the array overall far outnumber the aggregate I/Os to the eMLC SSDs as a result of in-line data reduction, which in XtremIO is always on under any load whatsoever. In addition, there is no system-level garbage collection on XtremIO causing unnecessary I/Os to flash. Minimizing I/Os to SSDs ensures that they enjoy the maximum longevity in XtremIO. Also note that the generated IOPS are a tiny fraction of the total I/O capability of the X-Brick.
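A toy content-addressed store illustrates why host-facing I/Os can far outnumber the writes that reach flash (an illustrative sketch, not XtremIO's actual implementation):

```python
import hashlib

def write_blocks(blocks, store):
    """Toy inline dedupe: only content not already in the store reaches flash."""
    flash_writes = 0
    for block in blocks:
        fingerprint = hashlib.sha1(block).hexdigest()
        if fingerprint not in store:
            store[fingerprint] = block   # unique content is written once
            flash_writes += 1
    return flash_writes

store = {}
# 1,000 host writes, but only 4 distinct 4 KB block contents (think cloned OS images)
blocks = [bytes([i % 4]) * 4096 for i in range(1000)]
print(write_blocks(blocks, store))  # 4
```

In a desktop environment, where thousands of clones share nearly identical OS images, the duplicate ratio is similarly extreme, so the SSDs see only a small fraction of the host I/O.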

Figure 28 shows the XtremIO Storage Controller utilization during the boot storm test.

Figure 28. Boot storm: Storage Controller utilization

The virtual desktops generated high levels of I/O during the peak load of the boot storm test. The peak Storage Controller utilization was 59 percent. The XtremIO array had sufficient scalability headroom for this workload. This further illustrates that XtremIO’s controllers are truly active/active and evenly balance load between them. I/Os on any port and on any controller enjoy the performance of the entire system and are distributed across the XtremIO system so efficiently that it is virtually impossible to create hot spots.


Figure 29 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 29. Boot storm: vSphere CPU load

The vSphere server achieved a peak CPU utilization of 52.8 percent. Hyper-threading was enabled to double the number of logical CPUs. By removing storage bottlenecks, XtremIO enables processors to run at their full potential at high utilization levels, maximizing the efficiency of the customer’s spend.

All I/Os completed with sub-millisecond latency on average.

Antivirus test

We conducted this antivirus test using McAfee VirusScan 8.7i by scheduling a full scan of all desktops using a custom script to initiate an on-demand scan. Scans for all 2,500 desktops completed within 2 hours and 51 minutes.

Figure 30 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.


Figure 30. Antivirus: IOPS for a single eMLC drive

During peak load, the drive serviced 1,108.0 IOPS. Once again, because of the random data placement and load balancing inherent in XtremIO’s architecture, all the SSDs share the load and capacity equally at all times, ensuring that the entire system has no hot spots and the best experience for end users. The results in the rest of this paper illustrate similar behavior.

Figure 31 shows the IOPS for one of the 20 LUNs used to store the full-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 31. Antivirus: IOPS for a full-clone desktop LUN

During peak load, the LUN serviced 2,920.0 IOPS. Once again, due to XtremIO’s scale-out design and inherent load balancing, all the LUNs enjoy the processing power of the entire XtremIO cluster. There is no asymmetry. All LUNs perform equally well. In no case are certain LUNs preferred over others. LUNs do not need to be pinned down to certain controllers or caches, thereby greatly simplifying storage design and avoiding any unpredictable performance. The results in the rest of this paper illustrate similar behavior.

Figure 32 shows the total IOPS and bandwidth serviced by the XtremIO array during the test.

Figure 32. Antivirus: XtremIO array total IOPS and bandwidth

During peak load, the XtremIO array serviced 52,786.5 IOPS and 1,584.1 MB/s of bandwidth. Once again, note that the aggregate I/Os to all the LUNs and to the array overall far outnumber the aggregate I/Os to the eMLC SSDs as a result of in-line data reduction, which on XtremIO is always on under any load. In addition, there is no system-level garbage collection on XtremIO causing unnecessary I/Os to flash. Minimizing I/Os to SSDs ensures that they enjoy maximum longevity. Also note that the generated IOPS are a tiny fraction of the total I/O capability of the X-Brick. The results in the rest of this paper illustrate similar behavior.


Figure 33 shows the Storage Controller utilization during the antivirus scan test.

Figure 33. Antivirus: Storage Controller utilization

During peak load, the antivirus scan operations caused moderate CPU utilization of 48.0 percent. The XtremIO array had sufficient scalability headroom for this workload. Once again, the virtually identical utilizations show that XtremIO’s controllers are truly active/active and evenly balance load between them. I/Os on any port and on any controller enjoy the performance of the entire system and are distributed so efficiently across the XtremIO system that it is virtually impossible to create hot spots. The results in the rest of this paper illustrate similar behavior.

Figure 34 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 34. Antivirus: vSphere CPU load


The vSphere server achieved a peak CPU utilization of 22.8 percent. Hyper-threading was enabled to double the number of logical CPUs. By removing storage bottlenecks, XtremIO enables processors to run at their full potential at high utilization levels, maximizing the efficiency of the customer’s spend. The results in the rest of this paper illustrate similar functionality.

All I/Os completed with sub-millisecond latency on average.

Note: Ideally, any virtual desktop solution should use an antivirus platform that is optimized for use in virtual environments. Products such as VMware vShield Endpoint operate at the hypervisor level, rather than the individual virtual machine level, and provide a much more efficient means of protecting virtual desktops. If a traditional client-based antivirus platform will be used, you should stagger any scheduled scan operations to limit the impact on the virtual desktop infrastructure.

Patch install test

We performed the patch install test by using Microsoft SCCM 2012 to push a monthly release of ten Microsoft security updates to all desktops. All 2,500 desktops were placed in a single collection within SCCM. We configured the collection to install updates within a 2-hour window that began 45 minutes after the patches were available for download. Each desktop installed the patches within approximately 5 minutes.

Note: While the array delivered high levels of performance throughout the 2-hour window, you should perform any large-scale Windows patching over a longer period of time because some patch installations might require significantly more infrastructure resources than others.
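The pressure a 2-hour window puts on the infrastructure can be quantified with simple arithmetic (a sketch using the figures from this test):

```python
import math

desktops = 2500
minutes_per_install = 5
window_minutes = 120

desktop_minutes = desktops * minutes_per_install              # 12,500 minutes of work
avg_concurrent = math.ceil(desktop_minutes / window_minutes)
print(avg_concurrent)  # 105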

Figure 35 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.

I/O latency

Test methodology

Individual drive load

Page 63: EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5 · Chapter 1 Executive Summary 12 ... VMware vSphere 5.1 infrastructure ... EMC Infrastructure for VMware Horizon View 5.2

Chapter 7: Testing and Validation: Full Clone Desktops

EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide

63

Figure 35. Patch install: IOPS for a single eMLC drive

During peak load, the drive serviced 1,128.0 IOPS.

Figure 36 shows the IOPS for one of the 20 LUNs used to store the full-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 36. Patch install: IOPS for a full-clone desktop LUN

During peak load, the LUN serviced 3,497.5 IOPS.

Full-clone desktop LUN load

Page 64: EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5 · Chapter 1 Executive Summary 12 ... VMware vSphere 5.1 infrastructure ... EMC Infrastructure for VMware Horizon View 5.2

Chapter 7: Testing and Validation: Full Clone Desktops

EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide

64

Figure 37 shows the total IOPS and bandwidth serviced by the XtremIO array during the test.

Figure 37. Patch install: XtremIO array total IOPS and bandwidth

During peak load, the XtremIO array serviced 36,546.0 IOPS and 1,704.9 MB of bandwidth.

Figure 38 shows the Storage Controller utilization during the test.

Figure 38. Patch install: Storage Controller utilization

The patch install operations caused moderate CPU utilization during peak load, reaching a maximum of 30.2 percent utilization. The XtremIO array had sufficient scalability headroom for this workload.

XtremIO array IOPS and bandwidth

Storage Controller utilization

Page 65: EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5 · Chapter 1 Executive Summary 12 ... VMware vSphere 5.1 infrastructure ... EMC Infrastructure for VMware Horizon View 5.2

Chapter 7: Testing and Validation: Full Clone Desktops

EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide

65

Figure 39 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 39. Patch install: vSphere CPU load

The vSphere server CPU load was well within the acceptable limits during the test, reaching a maximum of 28.9 percent utilization. Hyper-threading was enabled to double the number of logical CPUs.

Figure 40 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. Each server had similar results; therefore, the figure shows only the results from a single server. The value displayed is an average latency for the ten LUNs used to host the full-clone virtual desktops.

Figure 40. Patch install: Average Guest Millisecond/Command counter

vSphere CPU load

vSphere datastore response time

Page 66: EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5 · Chapter 1 Executive Summary 12 ... VMware vSphere 5.1 infrastructure ... EMC Infrastructure for VMware Horizon View 5.2

Chapter 7: Testing and Validation: Full Clone Desktops

EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide

66

The peak average GAVG of the virtual desktop LUNs was 1.8 ms. The latency spikes are co-related with spikes in server processing and attributed to delay in the server/hypervisor/application stacks. I/O latency was consistently sub-millisecond on an average on XtremIO.

Login VSI test

We conducted the Login VSI test by scheduling 2,500 users to connect through remote desktops in a 30-minute window and then starting the Login VSI medium-with-Flash workload. We ran the workload for 1 hour in a steady state to observe the load on the View infrastructure.

Figure 41 shows the time required for the desktops to complete the user login process.

Figure 41. Login VSI: Desktop login time

The time required to complete the login process reached a maximum of 3.41 seconds during peak load of the 2,500-desktop logon storm.

Figure 42 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.

Test methodology

Desktop logon time

Individual drive load

Page 67: EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5 · Chapter 1 Executive Summary 12 ... VMware vSphere 5.1 infrastructure ... EMC Infrastructure for VMware Horizon View 5.2

Chapter 7: Testing and Validation: Full Clone Desktops

EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide

67

Figure 42. Login VSI: IOPS for a single eMLC drive

During peak load, the drive serviced 725.7 IOPS.

Figure 43 shows the IOPS for one of the 20 LUNs used to store the full-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 43. Login VSI: IOPS for a full-clone desktop LUN

During peak load, the LUN serviced 2,041.3 IOPS.

Full-clone desktop LUN load

Page 68: EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5 · Chapter 1 Executive Summary 12 ... VMware vSphere 5.1 infrastructure ... EMC Infrastructure for VMware Horizon View 5.2

Chapter 7: Testing and Validation: Full Clone Desktops

EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide

68

Figure 44 shows the total IOPS and bandwidth serviced by the XtremIO array during the test.

Figure 44. Login VSI: XtremIO array total IOPS and bandwidth

During peak load, the XtremIO array serviced 28,183.8 IOPS and 563.6 MB of bandwidth.

Figure 45 shows the Storage Controller utilization during the test.

Figure 45. Login VSI: Storage Controller utilization

The Storage Controller peak utilization was 29.3 percent during the login storm. The XtremIO array had sufficient scalability headroom for this workload.

XtremIO array IOPS and bandwidth

Storage Controller utilization

Page 69: EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5 · Chapter 1 Executive Summary 12 ... VMware vSphere 5.1 infrastructure ... EMC Infrastructure for VMware Horizon View 5.2

Chapter 7: Testing and Validation: Full Clone Desktops

EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide

69

Figure 46 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 46. Login VSI: vSphere CPU load

The CPU load on the vSphere server reached a maximum of 40.8 percent utilization during peak load. Hyper-threading was enabled to double the number of logical CPUs.

Figure 47 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. Each server had similar results; therefore, the figure shows only the results from a single server. The value displayed is an average latency for the ten LUNs used to host the full-clone virtual desktops.

Figure 47. Login VSI: Average Guest Millisecond/Command counter

The peak average GAVG of the virtual desktop LUNs was 0.76 ms.

vSphere CPU load

vSphere datastore response time

Page 70: EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5 · Chapter 1 Executive Summary 12 ... VMware vSphere 5.1 infrastructure ... EMC Infrastructure for VMware Horizon View 5.2

Chapter 8: Testing and Validation: Linked Clone Desktops

EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide

70

Chapter 8 Testing and Validation: Linked Clone Desktops

This chapter includes the following sections:

• Overview

• Validated environment profile

• Boot storm test

• Antivirus test

• Patch install test

• Login VSI test

• Recompose test

• Refresh test

Overview

This chapter provides a description of the tests performed to validate the solution, and the performance of the solution and component subsystems, under the following tests:

• Boot storm of all desktops

• McAfee antivirus full scan on all desktops

• Security patch install with Microsoft SCCM 2012 on all desktops

• User workload testing using Login VSI on all desktops

• View recompose

• View refresh

Page 71: EMC INFRASTRUCTURE FOR VMWARE HORIZON VIEW 5 · Chapter 1 Executive Summary 12 ... VMware vSphere 5.1 infrastructure ... EMC Infrastructure for VMware Horizon View 5.2

Chapter 8: Testing and Validation: Linked Clone Desktops

EMC Infrastructure for VMware Horizon View 5.2 Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Proven Solution Guide

71

Table 8 shows a summary of the test results.

Table 8. Test results summary: Linked clone desktops

Operation          Peak IOPS from 2,500 linked clone desktops running concurrently
Boot storm         104,208
Anti-virus scan    74,297
Patching           59,205
Login VSI          31,278
View recompose     41,283
View refresh       64,125

Average storage latency across all operations: sub-millisecond

Total IOPS capability of a 4 X-Brick cluster: 600K mixed (50%:50%) IOPS

We performed the testing with an XtremIO cluster that contained a single X-Brick running 2,500 desktops. With the performance and capacity headroom remaining, each X-Brick can easily accommodate 3,500 linked clone desktops. XtremIO performance scales linearly with each X-Brick added to the cluster: an XtremIO cluster with four X-Bricks hosting 14,000 desktops provides similar performance to a single X-Brick cluster running 3,500 desktops. All tests were run on a fully preconditioned XtremIO system2.
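The linear scaling rule described above reduces to simple arithmetic. The helper below is an illustrative back-of-envelope sketch based on the figures in this guide (3,500 desktops and roughly 150K mixed IOPS per X-Brick), not an official EMC sizing tool:

```python
# Back-of-envelope sizing sketch for linked clone desktops on XtremIO.
# Values are taken from this guide's test results; they are illustrative
# assumptions, not an EMC sizing formula.
import math

DESKTOPS_PER_XBRICK = 3_500       # validated headroom per X-Brick
IOPS_PER_XBRICK = 600_000 / 4     # linear scaling: ~150K mixed IOPS each

def xbricks_needed(desktop_count: int) -> int:
    """X-Bricks required for a linked clone deployment of this size."""
    return math.ceil(desktop_count / DESKTOPS_PER_XBRICK)

def cluster_headroom(desktop_count: int, peak_iops_per_desktop: float) -> float:
    """Fraction of cluster IOPS consumed at the given per-desktop peak."""
    bricks = xbricks_needed(desktop_count)
    return (desktop_count * peak_iops_per_desktop) / (bricks * IOPS_PER_XBRICK)

# Example: 14,000 desktops at the ~41.7 IOPS/desktop boot-storm average
# implied by this testing (104,208 peak IOPS / 2,500 desktops).
print(xbricks_needed(14_000))                    # 4
print(round(cluster_headroom(14_000, 41.7), 2))  # 0.97
```

Even the boot storm, the heaviest workload measured in this chapter, stays within the rated IOPS capability under this rule.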

Validated environment profile

Table 9 provides the validated environment profile.

Table 9. Horizon View: Linked-clone desktop environment profile

Number of virtual desktops: 2,500
Virtual desktop OS: Windows 7 Enterprise SP1 (32-bit)
CPU per virtual desktop: 1 vCPU
Number of virtual desktops per CPU core: 6.9
RAM per virtual desktop: 1 GB
Average storage available for each linked clone desktop (not including the shared replica disk): 3 GB
Average storage used for each linked clone desktop (used by Windows and applications): 12.74 GB
Average physical storage used for each linked clone desktop on the XtremIO array (after dedupe): 57 MB
Dedupe ratio of linked clone desktops (after provisioning): 2.9:1
Average IOPS per virtual desktop at steady state: Varied based on test configuration, ranging from 5 to 33 IOPS per desktop
Peak IOPS observed per virtual desktop during boot storm: 89.5
Average IOPS observed per virtual desktop throughout boot storm: 28.9
Time required by View to deploy 2,500 desktops: 3 hours 20 minutes
Time required to ready 2,500 desktops for login: 12 minutes
Time required to complete anti-virus scan of 2,500 desktops: 2 hours 50 minutes
Time required to patch each desktop: 5 minutes
Time required to recompose 2,500 desktops: 5 hours
Time required to refresh 2,500 desktops: 1 hour 27 minutes
Number of datastores used to store virtual desktops: 20
Number of virtual desktops per datastore: 125
Drive and RAID type for datastores: 400 GB eMLC SSD drives with EMC XtremIO proprietary XDP data protection, which delivers RAID 6-like data protection with better-than-RAID 10 performance
Number of VMware clusters used for desktops: 2

2 Refer to the IDC paper at http://idcdocserv.com/241856 for information on how to test an all-flash array.

Note: For this testing we placed the linked-clone replica disks on dedicated vSphere datastores so that the I/O they service could be measured individually. Because of the deduplication capabilities of the XtremIO array, this configuration is not required; the default View setting of placing a replica disk on each linked clone datastore would be acceptable, because the XtremIO array deduplicates the multiple replica disks.
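The per-desktop capacity figures in Table 9 imply the following totals; this is illustrative back-of-envelope arithmetic only, not an EMC capacity-planning formula:

```python
# Rough capacity math implied by the validated environment profile.
# Values come from Table 9 of this guide; the calculation itself is
# an illustrative sketch.
logical_gb_per_desktop = 12.74    # used by Windows and applications
physical_mb_per_desktop = 57      # on-array footprint after dedupe
desktops = 2_500

total_logical_tb = desktops * logical_gb_per_desktop / 1024
total_physical_gb = desktops * physical_mb_per_desktop / 1024
print(round(total_logical_tb, 1))   # ~31.1 TB written by the guests
print(round(total_physical_gb, 1))  # ~139.2 GB consumed on the array
```

The gap between the logical and physical totals reflects the combined effect of linked cloning, thin provisioning, and inline deduplication.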


Figure 48 shows the storage capacity utilization of the XtremIO array after the deployment of 2,500 linked clone desktops.

Figure 48. Storage capacity utilization: 2,500 linked clone desktops


This solution tested linked clone desktops using the following use cases, which also were used for the full-clone desktop testing:

• Simultaneous boot of all desktops

• Full antivirus scan of all desktops

• Installation of a monthly release of security updates using Microsoft SCCM 2012 on all desktops

• Login and steady-state user load simulated using the Login VSI medium workload on all desktops

We also tested two additional use cases that apply to linked clone desktops only:

• Recompose of all desktops

• Refresh of all desktops

In each use case, we present a number of key metrics that show the overall performance of the solution.

Note: The results presented are those obtained in the EMC Solutions lab. Results may vary based on environmental conditions.

The linked clone testing also utilized Login VSI to simulate a user workload. Consult the sections Login VSI on page 54 and Login VSI launcher on page 55 for more information about the components that compose the Login VSI test suite.

Boot storm test

We conducted this boot storm test by selecting all the desktops in vCenter Server and then selecting Power On. The overlays in the figures in this section show when the array IOPS achieved a steady state.

All 2,500 desktops were fully powered on within 3 minutes and achieved a steady state 11 minutes later. All desktops were available for login approximately 12 minutes after the test began. This section details the performance characteristics of various components of the View infrastructure during the boot storm test.

Figure 49 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.


Figure 49. Boot storm: IOPS for a single eMLC drive

During peak load, the drive serviced a maximum of 800.6 IOPS.

Figure 50 shows the IOPS for one of the 20 LUNs used to store the linked-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 50. Boot storm: IOPS for a linked clone LUN

During peak load, the LUN serviced 2,548.5 IOPS.


Figure 51 shows the IOPS for one of the LUNs used to store the linked-clone replica disks. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 51. Boot storm: IOPS for a replica disk LUN

During peak load, the LUN serviced 37,830.5 IOPS.

Figure 52 shows the total IOPS and bandwidth serviced by the XtremIO array during the test.

Figure 52. Boot storm: XtremIO array total IOPS and bandwidth

During peak load, the XtremIO array serviced 104,207.8 IOPS and 1,632.6 MB/s of bandwidth.
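From the peak IOPS and bandwidth figures, the average I/O size during the boot storm can be inferred. The sketch below assumes the bandwidth figure is in MB/s and that the two peaks coincided:

```python
# Infer the average I/O size during the boot storm from the peak
# array-wide figures reported above (simple arithmetic; assumes the
# IOPS and bandwidth peaks occurred at the same moment).
peak_iops = 104_207.8
peak_mb_per_s = 1_632.6

avg_io_kb = peak_mb_per_s * 1024 / peak_iops
print(round(avg_io_kb, 1))  # ~16.0 KB per I/O
```

An average I/O size of roughly 16 KB is consistent with the mixed read pattern of Windows desktops booting from a shared replica.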


Figure 53 shows the XtremIO Storage Controller utilization during the boot storm test.

Figure 53. Boot storm: Storage Controller utilization

The virtual desktops generated high levels of I/O during the peak load of the boot storm test. The peak Storage Controller utilization was 73.5 percent. The XtremIO array had sufficient scalability headroom for this workload.

Figure 54 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 54. Boot storm: vSphere CPU load

The vSphere server achieved a peak CPU utilization of 52.8 percent. Hyper-threading was enabled to double the number of logical CPUs.

On average, all I/Os completed with sub-millisecond latency.


Antivirus test

We conducted this antivirus test using McAfee VirusScan 8.7i. We scheduled a full scan of all desktops, using a custom script to initiate an on-demand scan. The full scan of all desktops completed in 2 hours and 50 minutes.

Figure 55 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.

Figure 55. Antivirus: IOPS for a single eMLC drive

During peak load, the drive serviced 831.8 IOPS.

Figure 56 shows the IOPS for one of the 20 LUNs used to store the linked-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 56. Antivirus: IOPS for a linked clone LUN


During peak load, the LUN serviced 866.3 IOPS.

Figure 57 shows the IOPS for one of the LUNs used to store the linked-clone replica disks. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 57. Antivirus: IOPS for a replica disk LUN

During peak load, the LUN serviced 32,104.0 IOPS.

Figure 58 shows the total IOPS serviced by the XtremIO array during the test.

Figure 58. Antivirus: XtremIO array total IOPS and bandwidth

During peak load, the XtremIO array serviced 74,296.8 IOPS and 1,591.1 MB/s of bandwidth.


Figure 59 shows the Storage Controller utilization during the antivirus scan test.

Figure 59. Antivirus: Storage Controller utilization

During peak load, the antivirus scan operations caused moderate CPU utilization of 54.5 percent. The XtremIO array had sufficient scalability headroom for this workload.

Figure 60 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 60. Antivirus: vSphere CPU load

The vSphere server achieved a peak CPU utilization of 20.7 percent. Hyper-threading was enabled to double the number of logical CPUs.

On average, all I/Os completed with sub-millisecond latency.


Note: Ideally, any virtual desktop solution should use an antivirus platform that is optimized for use in virtual environments. Products such as VMware vShield Endpoint operate at the hypervisor level, rather than at the individual virtual machine level, and provide a much more efficient means of protecting virtual desktops. If a traditional client-based antivirus platform is used, stagger any scheduled scan operations to limit the impact on the virtual desktop infrastructure.

Patch install test

We performed the patch install by using Microsoft SCCM 2012 to push a monthly release of ten Microsoft security updates to all desktops. All 2,500 desktops were placed in a single collection within SCCM. We configured the collection to install updates within a 2-hour window that began 45 minutes after the patches were available for download. Each desktop installed the patches within approximately 5 minutes.

Note: While the array delivered high levels of performance throughout the 2-hour window, you should perform any large-scale Windows patching over a longer period of time because some patch installations might require significantly more infrastructure resources than others.
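One simple way to spread the load, as the note suggests, is to split the desktops into several SCCM collections with staggered deployment windows. The sketch below is illustrative only; the group size and window length are assumptions, not values tested in this solution:

```python
# Illustrative sketch of staggering patch deployments across groups of
# desktops. Group size and window spacing are assumed values, not part
# of the validated test configuration.
from datetime import datetime, timedelta

def stagger_windows(desktops: int, group_size: int, start: datetime,
                    window: timedelta):
    """Yield (group_number, first_desktop, last_desktop, window_start)."""
    groups = -(-desktops // group_size)  # ceiling division
    for g in range(groups):
        first = g * group_size + 1
        last = min((g + 1) * group_size, desktops)
        yield g + 1, first, last, start + g * window

# Example: 2,500 desktops in groups of 500, one 2-hour window per group.
for g, first, last, when in stagger_windows(2_500, 500,
        datetime(2013, 11, 1, 20, 0), timedelta(hours=2)):
    print(f"Group {g}: desktops {first}-{last} starting at {when:%H:%M}")
```

In SCCM terms, each group would correspond to its own collection with its own maintenance window.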

Figure 61 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.

Figure 61. Patch install: IOPS for a single eMLC drive

During peak load, the drive serviced 1,166.0 IOPS.


Figure 62 shows the IOPS for one of the 20 LUNs used to store the linked-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 62. Patch install: IOPS for a linked clone LUN

During peak load, the LUN serviced 3,171.0 IOPS.

Figure 63 shows the IOPS for one of the LUNs used to store the linked-clone replica disks. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 63. Patch install: IOPS for a replica disk LUN

During peak load, the LUN serviced 14,129.5 IOPS.


Figure 64 shows the total IOPS serviced by the XtremIO array during the test.

Figure 64. Patch install: XtremIO array total IOPS and bandwidth

During peak load, the XtremIO array serviced 59,205.3 IOPS and 797.3 MB/s of bandwidth.

Figure 65 shows the Storage Controller utilization during the test.

Figure 65. Patch install: Storage Controller utilization

The patch install operations caused moderate CPU utilization during peak load, reaching a maximum of 40.5 percent utilization. The XtremIO array had sufficient scalability headroom for this workload.


Figure 66 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 66. Patch install: vSphere CPU load

The vSphere server CPU load was well within the acceptable limits during the test, reaching a maximum of 22.9 percent utilization. Hyper-threading was enabled to double the number of logical CPUs. Figure 67 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. Each server had similar results; therefore, the figure shows only the results from a single server. The value displayed is an average latency for the 11 LUNs used to host the linked-clone virtual desktops and replica disks.

Figure 67. Patch install: Average Guest Millisecond/Command counter


The peak average GAVG of the virtual desktop LUNs was 3.2 ms. The latency spikes were caused by delays in the host/application/ESX stack; on average, all I/Os completed with sub-millisecond latency.

Login VSI test

We conducted the Login VSI test by scheduling 2,500 users to connect through remote desktops in a 30-minute window and then starting the Login VSI medium-with-Flash workload. We ran the workload for 1 hour in a steady state to observe the load on the View infrastructure.
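The arrival rate implied by this login window follows from simple arithmetic on the figures above:

```python
# Login-storm arrival rate implied by the Login VSI test setup:
# 2,500 users connecting within a 30-minute window (simple arithmetic
# on the figures reported in this guide).
users, window_minutes = 2_500, 30

logins_per_minute = users / window_minutes
logins_per_second = logins_per_minute / 60
print(round(logins_per_minute, 1))  # ~83.3 logins per minute
print(round(logins_per_second, 2))  # ~1.39 logins per second
```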

Figure 68 shows the time required for the desktops to complete the user login process.

Figure 68. Login VSI: Desktop login time

The time required to complete the login process reached a maximum of 4.44 seconds during peak load of the 2,500-desktop logon storm.


Figure 69 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.

Figure 69. Login VSI: IOPS for a single eMLC drive

During peak load, the drive serviced 860.3 IOPS.

Figure 70 shows the IOPS for one of the 20 LUNs used to store the linked-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 70. Login VSI: IOPS for a linked clone LUN

During peak load, the LUN serviced 1,037.3 IOPS.


Figure 71 shows the IOPS for one of the LUNs used to store the linked-clone replica disks. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 71. Login VSI: IOPS for a replica disk LUN

During peak load, the LUN serviced 13,885.0 IOPS.

Figure 72 shows the total IOPS serviced by the XtremIO array during the test.

Figure 72. Login VSI: XtremIO array total IOPS and bandwidth

During peak load, the XtremIO array serviced 31,278.3 IOPS and 590.4 MB/s of bandwidth.


Figure 73 shows the Storage Controller utilization during the test.

Figure 73. Login VSI: Storage Controller utilization

The Storage Controller peak utilization was 27.5 percent during the login storm. The XtremIO array had sufficient scalability headroom for this workload.

Figure 74 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 74. Login VSI: vSphere CPU load

The CPU load on the vSphere server reached a maximum of 41.5 percent utilization during peak load. Hyper-threading was enabled to double the number of logical CPUs.


Figure 75 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. Each server had similar results; therefore, the figure shows only the results from a single server. The value displayed is an average latency for the 11 LUNs used to host the linked-clone virtual desktops and replica disks.

Figure 75. Login VSI: Average Guest Millisecond/Command counter

The peak average GAVG of the virtual desktop LUNs was 0.74 ms.

Recompose test

We conducted this test by performing a VMware Horizon View desktop recompose operation of all desktop pools. We took a new virtual machine snapshot of the master virtual desktop image to serve as the target for the recompose operation. Additionally, we reconfigured VMware Horizon View to support the maximum number of concurrent recompose operations.

A recompose operation deletes the existing virtual desktops and creates new ones. To enhance the readability of the graphs in this section and to show the array behavior during high I/O periods, we performed only those tasks involved in creating new desktops. We initiated all desktop recompose operations simultaneously and the entire process took 300 minutes to complete.


Figure 76 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.

Figure 76. Recompose: IOPS for a single eMLC drive

During peak load, the drive serviced 440.8 IOPS.

Figure 77 shows the IOPS for one of the 20 LUNs used to store the linked-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 77. Recompose: IOPS for a linked clone LUN

During peak load, the LUN serviced 1,498.8 IOPS.


Figure 78 shows the IOPS for one of the LUNs used to store the linked-clone replica disks. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 78. Recompose: IOPS for a replica disk LUN

During peak load, the LUN serviced 31,761.0 IOPS.

Figure 79 shows the total IOPS serviced by the XtremIO array during the test.

Figure 79. Recompose: XtremIO array total IOPS and bandwidth

During peak load, the XtremIO array serviced 41,283.5 IOPS and 973.1 MB/s of bandwidth.


Figure 80 shows the Storage Controller utilization during the test.

Figure 80. Recompose: Storage Controller utilization

The Storage Controller utilization reached 32.5 percent during the recompose operation. The XtremIO array had sufficient scalability headroom for this workload.

Figure 81 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 81. Recompose: vSphere CPU load

The vSphere server reached a peak CPU load of 17.1 percent. Hyper-threading was enabled to double the number of logical CPUs.


Figure 82 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. Each server had similar results; therefore, the figure shows only the results from a single server. The value displayed is an average latency for the 11 LUNs used to host the linked-clone virtual desktops and replica disks.

Figure 82. Recompose: Average Guest Millisecond/Command counter

The peak average GAVG of the virtual desktop LUNs was 0.95 ms.

Refresh test

We conducted this test by selecting a refresh operation for all desktop pools from the View Manager administration console. We initiated the refresh operations for all pools at the same time by scheduling the refresh operation within the View Manager administration console. No users were logged in during the test. Additionally, we reconfigured VMware Horizon View to support the maximum number of concurrent refresh operations.

A refresh operation discards any changes that were made to the linked clone desktop since it was last deployed or recomposed, excluding the user persistent data disk (if present). The refresh operation took 87 minutes to complete.
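From the completion times reported in this chapter, the throughput of the two maintenance operations can be compared; this is simple arithmetic on the reported durations:

```python
# Throughput of the View maintenance operations measured in this chapter,
# derived from the reported completion times (simple rates, not a model
# of the underlying I/O).
ops = {
    "recompose": (2_500, 300),  # desktops, minutes (5 hours)
    "refresh":   (2_500, 87),   # desktops, minutes (1 hour 27 minutes)
}
for name, (desktops, minutes) in ops.items():
    print(f"{name}: {desktops / minutes:.1f} desktops per minute")
```

The refresh operation proceeds roughly 3.4 times faster than the recompose, which is expected: a refresh only reverts the delta disks, while a recompose deletes and recreates the desktops.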

Figure 83 shows the IOPS for one of the 25 eMLC drives in the XtremIO array. Each drive had similar results; therefore, the figure shows only the results from a single drive.


Figure 83. Refresh: IOPS for a single eMLC drive

During peak load, the drive serviced 808.8 IOPS.

Figure 84 shows the IOPS for one of the 20 LUNs used to store the linked-clone virtual desktops. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 84. Refresh: IOPS for a linked clone LUN

During peak load, the LUN serviced 1,806.0 IOPS.


Figure 85 shows the IOPS for one of the LUNs used to store the linked-clone replica disks. Each LUN had similar results; therefore, the figure shows only the results from a single LUN.

Figure 85. Refresh: IOPS for a replica disk LUN

During peak load, the LUN serviced 30,273.0 IOPS.

Figure 86 shows the total IOPS serviced by the XtremIO array during the test.

Figure 86. Refresh: XtremIO array total IOPS and bandwidth

During peak load, the XtremIO array serviced 64,125.3 IOPS and 1,484.9 MB/s of bandwidth.


Figure 87 shows the Storage Controller utilization during the test.

Figure 87. Refresh: Storage Controller utilization

The Storage Controller peak utilization was 47.5 percent during the refresh test. The XtremIO array had sufficient scalability headroom for this workload.

Figure 88 shows the CPU load from one of the vSphere servers in the VMware clusters. Each server had similar results; therefore, the figure shows only the results from a single server.

Figure 88. Refresh: vSphere CPU load

The vSphere server reached a peak CPU load of 17.9 percent. Hyper-threading was enabled to double the number of logical CPUs.


Figure 89 shows the Average Guest Millisecond/Command counter, which is shown as GAVG in ESXTOP. This counter represents the response time for I/O operations initiated to the storage array. Each server had similar results; therefore, the figure shows only the results from a single server. The value displayed is an average latency for the 11 LUNs used to host the linked-clone virtual desktops and replica disks.

Figure 89. Refresh: Average Guest Millisecond/Command counter

The peak average GAVG of the virtual desktop LUNs was 1.25 ms.
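The reported value is the per-interval average of the GAVG samples across the 11 desktop LUNs, with the peak taken over the sampling intervals. The aggregation can be sketched as follows; the sample values below are illustrative placeholders, not the measured data:

```python
# Sketch: average the per-LUN GAVG (guest-observed latency, ms) samples
# within each sampling interval, then take the peak of those averages.
# Rows = sampling intervals; columns = per-LUN GAVG in milliseconds.
# These values are illustrative only, not the measured test data.
samples = [
    [0.4, 0.5, 0.4, 0.6],
    [1.1, 1.3, 1.2, 1.4],   # interval with the heaviest refresh load
    [0.7, 0.8, 0.6, 0.9],
]

per_interval_avg = [sum(row) / len(row) for row in samples]
peak_avg_gavg = max(per_interval_avg)
print(f"peak average GAVG: {peak_avg_gavg:.2f} ms")  # 1.25 with these values
```

In practice, the raw GAVG samples can be captured with esxtop in batch mode and post-processed in this way.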



Chapter 9 Conclusion

This chapter includes the following sections:

• Summary

• Findings

• References

Summary

As shown in Chapter 7, Testing and Validation: Full Clone Desktops, and Chapter 8, Testing and Validation: Linked Clone Desktops, the features of the EMC XtremIO all-flash array enable VMware Horizon View environments to achieve high levels of performance, scale as needed, be administered more easily, and require fewer overall infrastructure resources.

The performance capabilities of the EMC XtremIO array enable virtual desktop application response times to match the SSD experience of modern all-flash devices such as ultrabooks, without stripping features from the desktop to minimize I/O, as most storage solutions require.

The performance capabilities of the EMC XtremIO array also enable virtual desktops to power on and off or suspend and resume much more quickly than is possible with non-all-flash arrays. This allows organizations to improve virtual desktop infrastructure resource utilization by powering off or suspending desktops when they are not needed.

The data reduction capabilities of the EMC XtremIO array further reduce the storage required for both full-clone and linked-clone virtual desktops, allowing View administrators to select whichever desktop type best suits their environment. This allows the storage cost per desktop to be very attractive, even though the storage is 100-percent flash.

The elegant engineering of the EMC XtremIO array brings unprecedented flexibility and speed to routine desktop, server, and storage administration tasks. Storage configuration is accomplished in a few clicks; there is no complicated storage sizing or RAID configuration. Administrators no longer need to schedule outages on nights and weekends for routine desktop maintenance operations; they can perform them while desktops are running. Fresh full-clone desktops roll out in just a few seconds each.


Findings

By using the XtremIO storage system as the foundation for VMware Horizon View deployments, you gain the following unique advantages that cannot be achieved with any other View deployment architecture:

• Superior View user experience—Every desktop in an XtremIO deployment gets reliable, massive I/O potential, both in sustained IOPS and in the ability to burst to the much higher levels required by demanding applications such as Microsoft Outlook, desktop search, and antivirus scanning. During the 2,500-desktop scale testing, every Login VSI simulated application operation completed well within the acceptable user-experience boundaries. This performance is superior by a wide margin to all other all-flash shared storage arrays.

• As a result of this testing exercise, we can conclude that storage is no longer the bottleneck in VDI deployments. Deployments may still encounter bottlenecks and subpar user experience, but these are now more likely the result of undersized CPU or memory resources.

• Though each X-Brick can easily accommodate 2,500 full clone desktops and 3,500 linked clone desktops, actual numbers deployed may be much higher depending on the configuration of the actual desktops themselves and their workload.

• Lowest cost per virtual desktop—Because of XtremIO’s in-line data reduction and performance density, the cost per desktop is lower than that of other View solutions. With the XtremIO array, you can deploy virtual desktops for less than their physical desktop counterparts.

• Rapid provisioning and rollout—Because XtremIO is simple to set up and requires no tuning, and because any View deployment model (full clone or linked clone, or any combination thereof) can be chosen at will, complex planning is eliminated. You can design and roll out View deployments quickly with assured success.

• No need for third-party tools—XtremIO solves all I/O-related View deployment challenges. Additional caching or host-based deduplication schemes, or any other point solutions that increase expense and complexity, are not needed.

• No change to desktop administration—Whatever methods administrators are using to manage their existing physical desktops can be directly applied to the View deployment when XtremIO is used. No changes to software updates, operating system patching, antivirus scanning or other procedures are needed to lighten the I/O load on shared storage. Rather, administrators can confidently rely on XtremIO’s high performance levels to deliver.

• No change to desktop setup—Horizon View best practices currently dictate dozens of changes to the desktop image in order to reduce the I/O load on shared storage. None of these changes are required with XtremIO, allowing the desktop to remain fully functional while maintaining a strong user experience.


References

The following documents provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative.

EMC website:

• Deploying Microsoft Windows 7 Virtual Desktops with VMware View—Applied Best Practices

• EMC Infrastructure for VMware View 5.2, Enabled by the EMC XtremIO All-Flash Array and VMware vSphere 5.1—Reference Architecture

• EMC PowerPath/VE Installation and Administration Guide

EMC Online Support:

• PowerPath Viewer Installation and Administration Guide

The following documents, located on the VMware website, also provide useful information:

• Anti-Virus Best Practices for VMware Horizon View 5.x

• VMware Horizon View Administration

• VMware Horizon View Architecture Planning

• VMware Horizon View Installation

• VMware Horizon View Integration

• VMware Horizon View User Profile Migration

• VMware Horizon View Security

• VMware Horizon View Upgrades

• VMware Horizon View Optimization Guide for Windows 7 and Windows 8

• VMware Horizon View 5.2—Performance and Best Practices

• VMware Horizon View—Large-Scale Reference Architecture

• vSphere Installation and Setup
