
Configure VMware Horizon View 6.0 with Emulex Gen 5 (16Gb) Fibre Channel Host Bus Adapters

CONNECT - Lab Guide

How to configure VMware Horizon View 6.0 and Gen 5 Fibre Channel adapter workloads and understand the characteristics of virtual desktops in a Fibre Channel infrastructure


Table of contents

Introduction

Purpose

Targeted audience

Hardware components

    x86 servers

    Emulex LPe16002B Gen 5 (16Gb) Fibre Channel Host Bus Adapters

        Native vs. legacy mode driver

        Device queue depth

    Gen 5 (16Gb) Fibre Channel switches

    Storage array

    Networking

Software components

    VMware vSphere 5.5

    VMware Horizon View 6.0

    VMware View Composer

    Login Virtual Session Indexer 4.1 benchmarking

Testing methods

    Design layout

Host configuration

Desktop image configuration

Test results

    Analyzing the results

        Task worker results

        Office worker results

        Knowledge worker results

        Comparison

        Boot storms

    Comparing I/O operations per second and throughput

Conclusion

References


Figure 1. A screen shot of the VMware Hardware Compatibility Guide showing ESXi compatibility for the Dell PowerEdge R720.

Introduction

Virtual desktop infrastructure (VDI) places heavy demands on hardware and software resources, as well as on networking and storage I/O. These demands make selecting the right compute and storage resources critical. The choice of server or storage should be based on testing and a proof of concept (POC) so that you understand your VDI requirements. There are many components to a successful VDI deployment, but the primary focus of this lab guide is to illustrate how to configure a VMware Horizon View 6.0 host on a Fibre Channel (FC) infrastructure using VMware best practices. There are many videos, webcasts and seminars available on VDI, so there is no shortage of information. We recommend, however, that you perform a POC of your own. The results from our POC documented in this lab guide confirm that FC is still the preferred storage protocol for server performance, scalability and availability.

Purpose

The purpose of this lab guide is to demonstrate VMware Horizon View 6.0 and Emulex LightPulse® Gen 5 (16Gb) FC Host Bus Adapter (HBA) workloads and to understand the characteristics of virtual desktops in a FC infrastructure. A single host connected to a single all-flash storage array was used to test the scalability of the host. A FC analyzer was used to capture I/O and to identify block sizes. In addition, we measured the impact of powering on all virtual desktops at the same time, the so-called boot storm.

In summary, this lab guide details the hardware and software components used, the best practices applied and the test results discovered in a POC on three virtual desktop workloads.

This lab guide is part of the Implementer’s Lab website, which hosts a series of technical resources for easier I/O implementation and best practices for deploying today’s leading storage and server solutions. Visit www.implementerslab.com.

Targeted audience

This document is intended for virtual desktop administrators, VMware View administrators, vCenter Server administrators and storage administrators.

Hardware components

x86 servers

Many of today’s x86 servers are capable of supporting a few hundred virtual machines (VMs) with new CPU virtualization features and support for larger memory and faster solid state disks (SSDs). For this POC, a Dell PowerEdge R720 server was deployed with 196GB of RAM and local storage for ESXi 5.5. Always check the VMware Hardware Compatibility Guide to make sure the server of choice has been tested and approved by VMware: www.vmware.com/resources/compatibility/search.php.

Best practice

- Always validate that the latest firmware is loaded on the server.

- Always enable Intel VT-d (or the AMD IOMMU on AMD platforms) for VMware vSphere 5.5.

- For best performance, use an available PCIe 3.0 x8 slot for Emulex Gen 5 FC HBAs.
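As a quick sanity check (a sketch added here for convenience; output will vary by system), the ESXi shell can confirm that the host enumerates the adapter and which driver claims it.

List PCI devices and filter for the Emulex adapter:

# lspci | grep -i emulex

List storage adapters with the driver bound to each vmhba:

# esxcfg-scsidevs -a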


Emulex LPe16002B Gen 5 Fibre Channel Host Bus Adapters

VMware vSphere 5.5 has full support for end-to-end 16Gb Fibre Channel (16GFC), including Emulex LightPulse Gen 5 FC HBAs and their OEM-branded versions. The Dell-branded Emulex LPe16002B-M6-D dual-port Gen 5 FC HBA was used in this POC.

Figure 2. A screen shot of the VMware Hardware Compatibility Guide showing ESXi support for Dell-branded Emulex Gen 5 FC HBAs.

Native vs. legacy mode driver

VMware ESXi 5.5 includes a legacy driver called lpfc820, which is the inbox driver for Emulex FC adapters. In ESXi 5.5, the legacy driver has been replaced by a native mode driver called lpfc, which is distributed as an out-of-box driver.

To install the driver:

1. Download the latest lpfc driver from www.vmware.com.

2. Enable Secure Shell (SSH) on the host.

3. Copy the downloaded vib file to a temporary directory on the ESXi host, such as /tmp.

4. Log in to the host with your preferred SSH client, such as putty.exe.

5. Remove the legacy lpfc820 driver.

6. Run the command esxcli software vib install -v /tmp/<name of the vib file>.vib.

7. Reboot the host.
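From the ESXi shell, the removal and installation steps look like the following sketch (the vib names are placeholders; confirm the exact package name on your host before removing it):

# esxcli software vib list | grep lpfc
# esxcli software vib remove -n <legacy-lpfc820-vib-name>
# esxcli software vib install -v /tmp/<lpfc-driver-file>.vib
# reboot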

Device queue depth

The device queue depth in ESXi 5.0 changed from 32 to 64 as part of storage I/O control improvements. Emulex adapters still default to 30 (two of the 32 commands are reserved), but there are other FC adapters whose queue depths have changed. Proceed with caution before making any changes to the adapter queue depth. We tested with queue depths of both 32 and 64, and the change did not make much of a difference in performance.

To adjust the queue depth for Emulex FC HBAs:

1. Ensure the HBA module is loaded:
# esxcli system module list | grep lpfc

2. Change the queue depth:
# esxcli system module parameters set -p lpfc_lun_queue_depth=64 -m lpfc

3. Reboot the host.

4. Confirm the Logical Unit Number (LUN) queue depth has changed:
# esxcli system module parameters list -m lpfc | grep lun_queue_depth
lpfc_lun_queue_depth int 64 Max number of FCP commands we can queue to a specific LUN
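To confirm that the new depth is also reflected at the device level, the per-device maximum can be read from the storage core device list (a sketch; the naa identifier is a placeholder for one of your LUNs):

# esxcli storage core device list -d <naa.id> | grep -i "queue depth"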

Best practice

- Remove the legacy lpfc820 FC driver and update to the latest native lpfc driver.

- Update the LUN queue depth if recommended by the storage array vendor.

- Install the latest Emulex Common Information Model (CIM) provider to use the Emulex OneCommand® Manager application.


Gen 5 (16Gb) Fibre Channel switches

Fibre Channel switches play an important role in a FC Storage Area Network (SAN) infrastructure. In our test infrastructure, we used a Brocade 6510 Gen 5 FC switch. Configuring Brocade Gen 5 FC switches with Emulex Gen 5 FC HBAs provides a scalable and robust 16GFC infrastructure. With VMware vSphere 5.5 now supporting end-to-end 16GFC connectivity, a simple FC implementation is possible for high-performing, low-latency VDI deployments.

Best practice

- Implement zoning in your SAN infrastructure to avoid LUN corruption (see the sketch below).
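On a Brocade switch such as the 6510 used in this POC, a minimal single-initiator zone might look like the following sketch (alias names and WWPNs are placeholders; adapt them to your fabric):

alicreate "esx01_hba0", "10:00:00:90:fa:xx:xx:xx"
alicreate "violin_fc0", "21:00:00:24:ff:xx:xx:xx"
zonecreate "z_esx01_violin", "esx01_hba0; violin_fc0"
cfgcreate "vdi_fabric_cfg", "z_esx01_violin"
cfgenable "vdi_fabric_cfg"

Single-initiator zoning keeps each host port in its own zone with the storage ports, which limits the reach of state change notifications and misbehaving initiators.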

Storage array

Storage configurations for VDI deployments are not easy to design, as there are many variables involved in trying to achieve a low-cost, robust, highly available and high-performing solution. The storage array selected for this test was the Violin Array 6000 from Violin Memory Systems, one of the fastest all-flash arrays we have tested so far.

The Violin Array 6000 was configured as a FC array with four dual-port 8GFC HBAs. The firmware running on the Violin Array 6000 supports the VMware vSphere API for Storage Awareness (VASA). Through the VMware vSphere API for Array Integration (VAAI), the Violin Array 6000 supports two key primitive features: full copy and block zeroing. Multipathing is also supported, which improves storage I/O performance and reliability. An additional tool that assists with LUN provisioning and array manageability is the Violin Memory Storage Management Plug-in (VSMP), a free VMware vCenter plug-in used to create and manage datastores on the array. The sample screen image in Figure 3 captures the capabilities of the plug-in within the vSphere client.

Figure 3. The Violin plug-in is monitoring a container named 41207F00148.

Best practice

- Use flash memory storage where possible, or at least store your desktop replicas on SSDs.

- Queue depth recommendations vary by storage array vendor. Use the vendor's recommended settings for VDI workloads.

- VMware provides a multipath I/O (MPIO) driver plug-in. Consult your storage vendor for the appropriate MPIO driver (a verification sketch follows below).
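From the ESXi host, VAAI primitive support and the multipathing configuration for a device can be verified as follows (a sketch; the naa identifier is a placeholder):

# esxcli storage core device vaai status get -d <naa.id>
# esxcli storage nmp device list -d <naa.id>

The first command reports the status of the VAAI primitives, including clone (full copy) and zero (block zeroing); the second shows the path selection policy in effect for the device.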


Networking

We used a standard network infrastructure for this POC with 1Gb Ethernet (1GbE) and 10GbE network switches. Two switches were used to separate the network traffic. The host used an Emulex OneConnect® OCe14102 10GbE Network Adapter for 10GbE connectivity and a 1GbE LAN on Motherboard (LOM) for management ports. The host was connected to a Cisco Catalyst 3560 for 1GbE management traffic and a Cisco Nexus 5548 for 10GbE traffic. Configuring firewalls or load balancing was out of scope for this test.

The hosts’ management ports were all configured on a standard vSwitch. Each host had dual 1GbE uplinks in a teamed configuration.

Figure 4. Teamed management networks are attached to a 1GbE port.

Figure 5. The vSphere distributed switch is listed for all 125 VMs.

A distributed virtual switch was used to manage VM traffic and vMotion. The host under stress used a dual-port OneConnect OCe14102 10GbE Network Adapter. The traffic was carefully separated by using two virtual Local Area Networks (VLANs).


Figure 6. A VMware View deployment.

Best practice

- Use two adapters when possible to provide resiliency across network adapters.

- Use VLANs to separate network traffic on 10GbE links (a configuration sketch follows below):

    - VLAN for vMotion

    - VLAN for VM traffic

    - VLAN for storage protocol traffic (if not using FC)
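In this POC the VM and vMotion networks lived on a vSphere distributed switch managed from vCenter, but the same VLAN separation can be illustrated on a standard vSwitch from the ESXi shell (a sketch; the port group names, vSwitch name and VLAN IDs are placeholders):

# esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch1
# esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=100
# esxcli network vswitch standard portgroup add --portgroup-name=VM-Traffic --vswitch-name=vSwitch1
# esxcli network vswitch standard portgroup set --portgroup-name=VM-Traffic --vlan-id=200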

Software components

VMware vSphere 5.5

VMware vSphere is the industry-leading virtualization platform for building cloud infrastructures. vSphere accelerates the shift to cloud computing for existing data centers and underpins compatible public cloud offerings.

For VMware vSphere 5.5 deployments, Dell provides an ISO image that contains vSphere ESXi 5.5 and all of the Dell CIM providers. To install ESXi 5.5, we mounted the ISO image as a virtual CD-ROM, which is the easiest approach for a single host. (Note: there are many other ways to install the operating system (OS) on the server.)

VMware Horizon View 6.0

VMware Horizon (with View) is available in three editions that deliver simple, cost-effective desktop and application virtualization solutions, fully optimized for the VMware management and software-defined data center (SDDC) stack. The editions are VMware Horizon View Standard, VMware Horizon Advanced and VMware Horizon Enterprise.

VMware Horizon View is a complex deployment as there are several components involved. Figure 6 shows an example of a VMware View deployment demonstrating all of the components.

Best practice

- Always enable Content-Based Read Cache (CBRC), which is on by default; it helps reduce boot storm and login I/O operations per second (IOPS). (See the verification sketch below.)
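CBRC is surfaced on each ESXi host as a set of advanced settings that View manages when the feature is enabled. To verify it from the host (a sketch; confirm the option path on your ESXi build):

# esxcli system settings advanced list -o /CBRC/Enable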


Figure 7. VMware View Administrator shows desktop pool size of 125 linked clone virtual desktops.

VMware View Composer

View Composer provides the ability to deploy multiple linked clone desktops from a single centralized base image. The advantage of using VMware View Composer for provisioning VMs is evident in this example: a full clone of 125 virtual desktops at 30GB each requires 3.75TB of storage space. (A full clone is an independent copy of a VM that shares nothing with the parent VM after the cloning.) A linked clone deployment, on the other hand, requires only 700GB of storage space. (A linked clone is a copy of a VM that shares virtual disks with the parent VM on an ongoing basis.) This saves disk space and allows multiple VMs to use the same software installation. The linked clone is made from a snapshot of the parent.


Login VSI 4.1 benchmarking

Login VSI (www.loginvsi.com) is the industry-standard benchmarking tool for testing the performance and scalability of centralized Windows desktop environments, and is used for testing and benchmarking by most major hardware and software vendors. Login VSI is vendor-independent and works with standardized user workloads; therefore, conclusions based on Login VSI test data are objective, verifiable and replicable.

Login VSI provides proactive performance management solutions for virtualized desktop and server environments. Enterprise IT departments use Login VSI products in all phases of their virtual desktop deployment, from planning to deployment to change management, for more predictable performance, higher availability and a more consistent end user experience. The world's leading virtualization vendors use the flagship product, Login VSI, to benchmark performance. With minimal configuration, Login VSI products work in VMware Horizon View, Citrix XenDesktop and XenApp, Microsoft Remote Desktop Services (Terminal Services) and any other Windows-based virtual desktop solution. For more information, download a trial at www.loginvsi.com.

Testing methods

A single Intel Xeon E5-2697 v2 x86-based server with an Emulex LPe16002B Gen 5 FC HBA connected to a single 4TB datastore was used to test three different workloads for 125 linked clones. See Figure 8.

For the purpose of the testing, Login VSI was used with three different workloads for the virtual desktops: Task, Office and Knowledge. The scenarios used for desktop deployment and utilization are defined as follows:

Scenario | Definition
Task worker (light workload) | A diverse workload using Outlook, Adobe Reader, and copy and zip functions.
Office worker (medium workload) | A new workload based on the knowledge worker workload, but using fewer resources.
Knowledge worker (heavy workload) | A medium workload without flash games.

Design layout

The design layout listed here is the configuration used in our lab to test 125 VMs on a single host for the three workloads. The breakdown is as follows:

Quantity | Description | Purpose
5 | Login VSI launchers (VMs) | 25 sessions per launcher to initiate the 125 VMs.
1 | Login VSI VSIshare (VM) | Maintains scripts for Login VSI users; keeps startup scripts and log files.
1 | Login VSI Analyzer (VM) | Processes the data collected during the VSI workload.
1 | VMware View Server 6.0 |
1 | VMware vCenter Server Appliance (VM) | Appliance used to manage the VMware vSphere 5.5 test environment.
1 | ESXi 5.5 host | Host providing the resources for the 125 VMs.
2 | Gen 5 FC SAN switches | SAN switches for 16Gb server-to-array connectivity.
1 | 10GbE network switch | Network switch used for VM and vMotion traffic.
1 | Memory array | Array used for the 125 VMDK files.


Figure 8. Design configuration used for server, storage, VMs and networking.

Host configuration

The Dell PowerEdge server was configured with 196GB of RAM and enough CPU to support all 125 virtual desktops. The configuration required depends on the workloads of the virtual desktops at the time of testing. During our tests, resource usage differed only slightly between the three Login VSI workloads. As the load from the virtual desktops increased, memory usage began to saturate, but never to the point of making the server unusable.

Name | Detail
Dell PowerEdge R720 | Dual 12-core Intel Xeon CPU E5-2697 v2 2.7GHz
RAM | 196GB
Network | 1GbE Network Adapters (4 each); Emulex OCe14102 10GbE Network Adapters (2 each)
HDD | 300GB SATA drives (2 each)
iDRAC | Management port (1 each)
HBA | Dell-branded Emulex dual-port LPe16002B-M6-D Gen 5 FC HBA


Desktop image configuration

Microsoft Windows 7 Professional was deployed as the virtual desktop image and included all of the applications and important Microsoft security updates. The single virtual desktop created was used as the golden master image. A snapshot of the golden image was saved as “Gold Image Snap Win 7” and from it, 125 virtual desktops were created using VMware View Composer.

The virtual hardware configuration is described in the table below:

Attribute Name | Description
Desktop OS | Windows 7 Professional
Hardware | Virtual machine hardware version 10
vCPU | 1
vMemory | 1500MB
vNICs | VMXNet3
Virtual SCSI controller | LSI Logic SAS
Virtual disk (VMDK) | 30GB
Virtual floppy drive | None
Virtual CD/DVD drive | None
VMware View Agent | 6.0
Desktop applications | MS Office 2010 (64-bit), Adobe Acrobat 11, Adobe Flash Player 12, Doro PDF 1.82, FreeMind, IE11

Figure 17. Desktop image configuration.

Test results

Analyzing the results

Login VSI was used to simulate a real-world user workload for each virtual desktop. The values represent the time it takes for an application or task to complete. The tests were set to run for all VMs once they were powered on and the workload was defined. Each test ran four segments for a total of 2,886 seconds.

For a complete listing and definition of each workload, refer to the Login VSI documentation: http://www.loginvsi.com/documentation/index.php?title=Changes_old_and_new_workloads

Evaluation was quantified using the following metrics:

Minimum response: The minimum response time for all the measurements taken when the indicated number of sessions on the X-axis was active.

Average response: The average response time for all the measurements taken when the indicated number of sessions on the X-axis was active.

Maximum response: The maximum response time for all the measurements taken when the indicated number of sessions on the X-axis was active.

VSImax v4 detailed: The individual measurements taken during a test in a combined graph. This graph shows the minimum, average and maximum response times for each individual measurement. There is also a total metric that combines all of the metrics into a single number; the minimum, average and maximum for this combined value are shown as well.

VSI index average: The average value as calculated by VSI. The VSI index average differs from the average response in that the average response is the pure average, while the VSI index average applies certain statistical rules to keep spikes from influencing the average too much.

VSImax v4: The number of sessions that can be active on a system before the system is saturated. The blue X shows the point where VSImax was reached. This number provides an indication of the scalability of the environment (higher is better).


VSIbase (line): The VSI index average for the environment when there is little to no load on the environment. This number is used as an indication of the base performance of the environment (lower is better). This number, in combination with the VSImax number, will tell you:

- How well an environment performs (VSIbase)

- How long the environment can maintain that performance and how scalable the VSIbase performance is (VSImax)

Task worker results

The threshold for 125 VMs was 1993, the point at which the system would be considered unresponsive. The task worker workload stayed below this threshold, confirming that the host, storage and HBAs have enough resources to support 125 VMs on a single host.

Figure 9. Task worker results.

Figure 10. Office worker results.

Office worker results

The threshold for 125 VMs was 1986, the point at which the system would be considered unresponsive. The office worker workload stayed below this threshold, showing that the host, storage and HBAs have enough resources to support 125 VMs on a single host.


Knowledge worker results

The threshold for 125 VMs was 1993, the point at which the system would be considered unresponsive. The knowledge worker workload stayed below this threshold, showing that the host, storage and HBAs have enough resources to support 125 VMs on a single host.

Figure 11. Knowledge worker results.

Figure 12. Workload comparisons.

Comparison

We compared the workloads of all three tests, as shown in Figure 12. The threshold and the baseline for each workload were linear. All workloads remained well below the threshold at which the host would be considered saturated.


Boot storms

Boot storms are always a topic of conversation with regard to virtual desktops, and they are especially critical in certain industries (such as healthcare) where desktops need to boot quickly in the event of a power outage. Boot storms can be addressed if proper planning and testing are performed. There are two ways to look at boot storms: powering on all virtual desktops at the same time, or logging in to all virtual desktops at the same time. In this POC, boot storms were tested through VMware View Administrator by resetting all 125 VMs at the same time. It took about two minutes for all 125 VMs to return and show a status of “available” in VMware View Administrator.
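We drove the reset from VMware View Administrator, but a comparable power-on storm can be scripted directly on the host with vim-cmd for testing purposes (a rough sketch; it powers on every registered VM, so use it only on a dedicated test host):

# for id in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do vim-cmd vmsvc/power.on $id; done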

Figure 13. CPU utilization is briefly high during a boot storm, as expected.

Figure 14. Host wattage is fairly high during a boot storm, as expected.


Figure 15. VDI_Datastore was measured by Violin Memory in real time, showing read and write bandwidth and throughput during a boot storm.

Comparing IOPS and throughput: Emulex Gen 5 FC adapters provide maximum scalability performance

We captured a trace of the block size and activity of 125 virtual desktops using a fabric analyzer. The results showed that 80 percent of the traffic was writes and 20 percent was reads. The block size was 4K on random traffic.

To measure throughput, we gathered the data sets for the knowledge worker and calculated the 95th percentile for I/O and throughput. Calculating the 95th percentile means this: in a single host with 125 VMs, 95 percent of the time the usage is below this amount.
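As an illustration of the method (not the guide's own tooling), a nearest-rank 95th percentile over a file of per-second IOPS samples, one value per line, can be computed with standard shell tools; the file name is hypothetical:

# sort -n iops_samples.txt | awk '{v[NR]=$1} END {i=int(NR*0.95); if (i < NR*0.95) i++; print v[i]}'

Sorting the samples and taking the value at rank ceil(0.95 * N) gives the level that 95 percent of the samples fall at or below.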

The results were compared against a physical system to measure maximum two-port IOPS and throughput, and confirmed that the traffic of 125 virtual desktops is not sufficient to saturate the bandwidth of the Emulex Gen 5 FC HBA. At no time did we exceed the maximum capabilities of the adapter. Emulex adapters easily supported the three different workloads generated by Login VSI during the course of the testing, providing maximum scalability performance.

Conclusion

The results of our test using VMware Horizon 6.0 (with View) demonstrate that a single host running ESXi 5.5 cannot come close to saturating a Gen 5 FC SAN; VDI with FC SANs is ideal for scalability and performance. Block sizes for VDI workloads are fairly small at 4K. The I/O for VDI workloads is low, ranging from 6 to 12 IOPS per desktop depending on the workload.


References

www.implementerslab.com

www.emulex.com

www.vmware.com

VMware Horizon View Documentation: https://www.vmware.com/support/pubs/view_pubs.html

VMware Horizon 6 Reference Architecture: https://www.vmware.com/resources/techresources/10432

www.vmem.com

www.dell.com

Setting the maximum outstanding disk requests for virtual machines (1268): http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1268

Changing the queue depth for QLogic, Emulex and Brocade HBAs (1267): http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1267

www.loginvsi.com

Login VSI bears no responsibility for this publication in any way and cannot be held liable for any damages following from or related to any information in this publication or any conclusions that may be drawn from it.

ELX15-2531 · 3/15

World Headquarters 3333 Susan Street, Costa Mesa, CA 92626 +1 714 662 5600
Bangalore, India +91 80 40156789 | Beijing, China +86 10 84400221
Dublin, Ireland +353 (0) 1 652 1700 | Munich, Germany +49 (0) 89 97007 177
Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5325 3261 | Singapore +65 6866 3768
Wokingham, United Kingdom +44 (0) 118 977 2929 | Brazil +55 11 3443 7735

©2015 Emulex, Inc. All rights reserved. This document refers to various companies and products by their trade names. In most cases, their respective companies claim these designations as trademarks or registered trademarks. This information is provided for reference only. Although this information is believed to be accurate and reliable at the time of publication, Emulex assumes no responsibility for errors or omissions. Emulex reserves the right to make changes or corrections without notice. This document is the property of Emulex and may not be duplicated without permission from the Company.

www.emulex.com