Managing the IOPS Challenge in Large-scale Desktop Virtualization Deployments using Microsoft Hyper-V and Quest HyperCache
Written by Matthew Evans and Terry Lewis, Quest Software
TECHNICAL BRIEF Managing the IOPS Challenge in Large Desktop Virtualization Deployments
Contents

Abstract
Introduction
About the Study
About Quest Workspace Desktop Virtualization
Summary of Results
Test Details
Infrastructure
Operations
Test 1: IOPS Provisioning 50 Windows 7 SP1 (VDI) Desktops
Test 2: IOPS Provisioning 75 Windows 7 SP1 (VDI) Desktops
Test 3: IOPS for RD Session Host Provisioning
Test 4: IOPS for RD Session Host with Simulated Load
Test 5: IOPS for VDI with Simulated Load
Final Words
About the Authors
Appendix A: Test Hardware Details
Appendix B: Quest Workspace Desktop Virtualization Tuning
Abstract

IOPS is the most challenging resource in hosted desktop environments. This study reports on tests that demonstrate how Quest HyperCache, a feature of Quest® Workspace™ Desktop Virtualization, can significantly reduce IOPS consumption and thereby reduce workload execution time, including provisioning time.
Introduction

About the Study
IOPS is the most challenging resource in hosted desktop environments. The goal of this study was to assess how Quest HyperCache, at scale, can help with IOPS consumption. HyperCache is a feature of Quest Workspace Desktop Virtualization (formerly vWorkspace) that is designed to drive down the IOPS requirement while delivering faster performance for both VDI and RD Session Host. HyperCache drives down both disk read and disk write operations, with more emphasis on reads.

Many studies look at the IOPS consumption of individual machines or of a single hypervisor. This paper, however, looks at the impact across 3,000 virtual machines. To monitor IOPS usage, we used the Disk Reads/sec and Disk Writes/sec counters in PerfMon.

The tests and simulations were carried out in the Microsoft Enterprise Engineering Center (EEC) in Redmond, Washington, in April 2012. The software tested was Quest Workspace Desktop Virtualization 7.6 (QWDV), Hyper-V 2008 R2 SP1 and SCVMM 2012.

The provisioning tests were done using parent-child clone technologies. Parent-child means that there is a parent virtual machine (VM), and any desktops subsequently provisioned have access to that parent and store only their differences in a child VM, which is therefore a fraction of the size of the parent. This approach reduces provisioning time and storage, but it does not address the challenge of IOPS.

This study would not have been possible without the support of everyone at the EEC; however, Loay Shbeilat of Microsoft deserves special recognition for going above and beyond.
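The counter post-processing used throughout this paper is simple to reproduce. The sketch below (the sample values are illustrative, not measurements from this study) computes the overall average of a series of PerfMon Disk Reads/sec samples, plus the 2-sample moving average that is applied to the graphed results later in the paper:

```python
# Post-processing of PerfMon counter samples as described in this paper:
# an overall average plus a 2-sample moving average for graphing.
# The sample values below are illustrative only.

def moving_average(samples, window=2):
    """Return the moving average of `samples` over `window` points."""
    return [sum(samples[i:i + window]) / window
            for i in range(len(samples) - window + 1)]

disk_reads_per_sec = [4.0, 6.0, 5.0, 7.0, 3.0]  # illustrative samples

smoothed = moving_average(disk_reads_per_sec, window=2)
average = sum(disk_reads_per_sec) / len(disk_reads_per_sec)

print(smoothed)  # [5.0, 5.5, 6.0, 5.0]
print(average)   # 5.0
```

In practice, such samples would come from a PerfMon log exported to CSV; the smoothing simply damps sample-to-sample noise before plotting.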
About Quest Workspace Desktop Virtualization
Quest Workspace Desktop Virtualization manages, secures and provisions virtual desktop environments and also brokers connections. It delivers virtual applications and desktops from multiple hypervisors, Remote Desktop Session Hosts (Terminal Servers) and blade PCs through a single user access point and management center, while increasing user adoption with an optimal user experience. For more information, see http://www.quest.com/desktop-virtualization/.
Summary of Results
This study yielded the following key results and conclusions:
• 3,000 Windows 7 SP1 virtual desktops were provisioned in 41 minutes using Quest HyperCache and Instant Provisioning, as compared to approximately seven hours using native SCVMM and Hyper-V provisioning methods.
• When provisioning 50 virtual desktops, we see a reduction in Read IOPS of 91% and Write IOPS of 61%.
• The HyperCache feature of Quest Workspace Desktop Virtualization also reduces IOPS during provisioning of Microsoft Windows RD Session Hosts.
• When executing a medium workload with LoginVSI on a RD Session Host on a single Hyper-V host, we see a saving of 80% for Read IOPS and 16% for Write IOPS.
• When executing a medium workload with LoginVSI in 150 Windows 7 virtual desktops on a single Hyper-V host, we see a saving of 62% for Read IOPS and 69% for Write IOPS.
Test Details

Infrastructure
The following high-level diagram illustrates the basic topology of the test environment:
Diagram 1. Basic topology of the test environment.
Full details of the hardware and software used in this study are provided in Appendix A. Specific configuration for Quest Workspace Desktop Virtualization can be found in Appendix B.
Operations
Provisioning and simulating a workload allows us to validate the HyperCache feature and prove that it lowers IOPS requirements, thus reducing storage requirements and helping to drive down the cost of hosted desktop solutions. The tests were all carried out on the same infrastructure: 40 Hyper-V hosts, two SCVMM servers and one Quest Workspace Desktop Virtualization server. For the PerfMon results provided in graph format in this document, a moving average of 2 was used.

Test 1: IOPS Provisioning 50 Windows 7 SP1 (VDI) Desktops
Provisioning virtual desktops is normally one of the most time-consuming operations. With HyperCache and Instant Provisioning (Quest's built-in Sysprep mechanism), we were able to reduce Windows 7 VDI provisioning time from 15–20 minutes per desktop (depending on hardware) to about two minutes (also depending on hardware).
In the first test, we looked at the impact on time and IOPS when creating 1,000 Windows 7 SP1 virtual desktops across 20 Hyper-V hosts. During the creation of the desktops, we used the built-in QWDV load balancing mechanism, so each host ended up with roughly 50 virtual desktops. In this test, all the Hyper-V hosts had direct attached storage (DAS).

Figure 1 shows specific disk counters from HyperV-33 provisioning 50 desktops with HyperCache. Notice that Disk Writes/sec (in red) never exceeds 300 and Disk Reads/sec (in purple) never exceeds 8; this is because HyperCache is serving the master template from memory-based cache. The averages were 5.12 Disk Reads/sec and 186.86 Disk Writes/sec.
Figure 1. Disk Reads/sec and Disk Writes/sec on HyperV-33 provisioning 50 desktops with HyperCache
Figure 2 shows a Hyper-V host with the same hardware specification provisioning 100 desktops without HyperCache. Notice that without HyperCache, the IOPS are significantly higher. (Test 2 presents the results of provisioning 3,000 desktops across 40 hosts, which is a better comparison in terms of the number of virtual machines created per host.)
Figure 2. Disk Reads/sec and Disk Writes/sec on HyperV-33 provisioning 100 desktops without HyperCache
Figure 3 shows CPU and disk usage on HyperV-33 provisioning 50 desktops. (HyperV-33 was one of the 20 Hyper-V hosts.) CPU and disk are both underutilized, and this operation took roughly 25 minutes to complete, which means we were provisioning only two desktops a minute.
Figure 3. CPU and disk usage on HyperV-33 provisioning 50 desktops
Figure 4 shows the CPU and disk usage on the Quest connection broker during the provisioning process. It reveals how under-utilized the broker is: provisioning 1,000 desktops in 30 minutes placed almost no demand on its CPU or disk.
Figure 4. CPU and disk usage with one broker provisioning 1,000 desktops
Test 2: IOPS Provisioning 75 Windows 7 SP1 (VDI) Desktops
In this test, we provisioned 3,000 desktops across 40 Hyper-V hosts, so each host contained approximately 75 virtual desktops. The total time to provision the 3,000 virtual desktops using Quest HyperCache was 41 minutes, compared to approximately seven hours using native SCVMM and Hyper-V provisioning methods.
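A quick back-of-the-envelope check of these figures, using only the numbers quoted above:

```python
# Provisioning-rate arithmetic for Test 2, using the figures reported
# in the text: 3,000 desktops, 40 hosts, 41 minutes with HyperCache,
# roughly 7 hours with native SCVMM/Hyper-V provisioning.
desktops = 3000
hosts = 40
hypercache_minutes = 41
native_minutes = 7 * 60  # "approximately seven hours"

overall_rate = desktops / hypercache_minutes   # desktops/min across all hosts
per_host_rate = overall_rate / hosts           # desktops/min per host
speedup = native_minutes / hypercache_minutes  # vs. native provisioning

print(f"{overall_rate:.0f} desktops/min overall")    # 73 desktops/min overall
print(f"{per_host_rate:.1f} desktops/min per host")  # 1.8 desktops/min per host
print(f"{speedup:.1f}x faster than native")          # 10.2x faster than native
```

The roughly 10x speedup is approximate, since the native figure was itself approximate.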
Figure 5 shows specific disk counters from HyperV-35 provisioning 75 desktops. Notice that Disk Writes/sec doesn't exceed 1,000 and Disk Reads/sec doesn't exceed 30; this is because HyperCache is serving the master template from memory-based cache. Even as we scaled up the number of desktops per host from Test 1 to Test 2, Disk Reads/sec remained extremely low. The average Disk Reads/sec was 5.8 and the average Disk Writes/sec was 156.4. When provisioning 100 desktops without HyperCache, the average Disk Reads/sec was 1,011 and the average Disk Writes/sec was 478.
Figure 5. Disk Reads/sec and Disk Writes/sec on HyperV-35 provisioning 75 desktops with HyperCache
Figure 6 shows CPU and disk usage on HyperV-35 provisioning 75 desktops. (HyperV-35 was one of the 40 Hyper-V hosts.) CPU and disk are both underutilized, and the operation took roughly 41 minutes to complete, which means we were provisioning approximately 73 desktops a minute across all hosts, or two per host.
Figure 6. CPU and disk usage on HyperV-35 provisioning 75 desktops
Figure 7 highlights the impact on CPU and disk of the QWDV broker when provisioning 3,000 virtual desktops. As you can see, we weren’t resource-constrained by the action.
Figure 7. CPU and disk usage for one broker provisioning 3,000 desktops
Test 3: IOPS for RD Session Host Provisioning
The Quest HyperCache feature is not limited to VDI. With Quest Workspace Desktop Virtualization 7.5 or above, you can use many of the techniques used for VDI, such as Microsoft differencing disks and Quest HyperCache, to provision RD Session Hosts. In Test 3, we investigated the performance of Microsoft RD Session Hosts in conjunction with Quest Workspace Desktop Virtualization and HyperCache. Provisioning 200 RD Session Hosts across 40 Hyper-V hosts took approximately 40 minutes to complete. The Hyper-V hosts typically ended up with four RD Session Hosts each, but because we were using two different server architectures, those with the higher specification ended up with five or six virtual machines.
Figure 8 shows the Disk Reads/sec and Disk Writes/sec for HyperV-39 when provisioning five RD Session Hosts. Even with the spike in Disk Reads/sec, we still maintained an average of 12.4; the average for Disk Writes/sec was 11.49.
Figure 8. Disk Reads/sec and Disk Writes/sec on HyperV-39 provisioning five RD Session Hosts with HyperCache
Figure 9 shows the CPU and disk usage of a Hyper-V host provisioning five Microsoft RD Session Hosts; the results are similar to those when provisioning VDI virtual machines (Figure 6). The main difference when provisioning VDI and RD Session Hosts is that Session Hosts yield a lower disk queue length; however, this is to be expected since we are provisioning five virtual machines as compared to 75 virtual machines per host in our VDI tests.
Figure 9. CPU and disk usage on HyperV-39 provisioning five RD Session Hosts with HyperCache
Figure 10 shows the CPU and disk usage of the Quest Workspace Desktop Virtualization Broker provisioning the RD Session Hosts. It reveals how under-utilized the broker is.
Figure 10. CPU and disk usage on one broker provisioning 200 RD Session Hosts with HyperCache
Test 4: IOPS for RD Session Host with Simulated Load
For the next test, we used LoginVSI to simulate a workload within the RD Session Host virtual machines, using the "medium" workload option. Unfortunately, we ran out of time in the EEC and therefore had to conduct this testing on a Dell T7500 after the engagement.

Figure 11 shows the Disk Reads/sec and Disk Writes/sec for the Hyper-V host with Quest HyperCache when executing a medium workload with LoginVSI on a single virtual RD Session Host. The average Disk Reads/sec was 20 and the average Disk Writes/sec was 49. For a Hyper-V host without HyperCache (see Figure 12), the average Disk Reads/sec was 102 and the average Disk Writes/sec was 58.
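The RDSH savings quoted in the summary follow directly from these per-host averages:

```python
# Percentage IOPS savings for the single-RDSH LoginVSI test, computed
# from the averages reported for Figures 11 and 12.
with_hc = {"reads": 20, "writes": 49}       # with HyperCache
without_hc = {"reads": 102, "writes": 58}   # without HyperCache

savings = {c: 1 - with_hc[c] / without_hc[c] for c in with_hc}
print(f"reads:  {savings['reads']:.0%} fewer IOPS")   # reads:  80% fewer IOPS
print(f"writes: {savings['writes']:.0%} fewer IOPS")  # writes: 16% fewer IOPS
```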
Bear in mind that these figures are for a single RD Session Host. Typically we see customers running four RD Session Hosts per Hyper-V host (depending on hardware). Assuming a linear increase, this equates to 80 Disk Reads/sec and 196 Disk Writes/sec with HyperCache, and 408 Disk Reads/sec and 232 Disk Writes/sec without HyperCache.
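That extrapolation can be written out explicitly. The per-RDSH averages come from Figures 11 and 12; four RD Session Hosts per Hyper-V host is the typical density mentioned above, and linear scaling is an assumption, not a measurement:

```python
# Linear extrapolation of single-RDSH IOPS averages to a typical
# four-RDSH-per-Hyper-V-host deployment, as described in the text.
measured = {
    "with HyperCache":    {"reads": 20,  "writes": 49},
    "without HyperCache": {"reads": 102, "writes": 58},
}
rdsh_per_host = 4  # typical density; linear scaling assumed

projected = {
    config: {counter: value * rdsh_per_host for counter, value in iops.items()}
    for config, iops in measured.items()
}
print(projected["with HyperCache"])     # {'reads': 80, 'writes': 196}
print(projected["without HyperCache"])  # {'reads': 408, 'writes': 232}
```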
Figure 11. Disk reads and disk writes on Hyper-V with HyperCache when executing LoginVSI
Figure 12. Disk reads and disk writes on Hyper-V when executing LoginVSI without HyperCache
Test 5: IOPS for VDI with Simulated Load
Figure 13 highlights the Disk Reads/sec and Disk Writes/sec when executing the LoginVSI medium workload on a Hyper-V host running 150 Windows 7 SP1 virtual desktops with Quest HyperCache enabled. The average Disk Reads/sec was 247 and the average Disk Writes/sec was 203.
Figure 13. Disk Reads/sec and Disk Writes/sec on Hyper-V0 when executing LoginVSI with HyperCache
Figure 14 shows the Disk Reads/sec and Disk Writes/sec when executing the LoginVSI medium workload on a Hyper-V host running 150 Windows 7 SP1 virtual desktops without HyperCache. The average Disk Reads/sec was 653 and the average Disk Writes/sec was 648.
Figure 14. Disk reads and disk writes on Hyper-V when executing LoginVSI without HyperCache
Note that the data used for Figure 13 and Figure 14 came from a previous round of LoginVSI and HyperCache testing in November 2011 (see Desktop Virtualization: A Cost and Performance Comparison). These tests were done by configuring HyperCache to perform write optimizations for all virtual desktops.
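The 62%/69% VDI savings quoted in the summary follow directly from the averages reported for Figures 13 and 14:

```python
# Percentage IOPS savings for the 150-desktop LoginVSI VDI test,
# computed from the averages reported for Figures 13 and 14.
with_hc = {"reads": 247, "writes": 203}      # Figure 13 (HyperCache)
without_hc = {"reads": 653, "writes": 648}   # Figure 14 (no HyperCache)

savings = {c: 1 - with_hc[c] / without_hc[c] for c in with_hc}
print(f"reads:  {savings['reads']:.0%} fewer IOPS")   # reads:  62% fewer IOPS
print(f"writes: {savings['writes']:.0%} fewer IOPS")  # writes: 69% fewer IOPS
```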
Final Words

The conclusions are at the start of this paper, so all that is left to say is thank you to Microsoft and the team at the Enterprise Engineering Center who worked so hard to help us run all these tests and achieve such a strong set of results.
About the Authors

Matthew Evans

Matthew Evans, a 17-year veteran of the IT industry, is a team lead for Quest's User Workspace Management (UWM) system consultants in Europe. He joined Quest in 2008 after spending 10 years at Wyse Technology. Matthew has participated in multiple large-scale VDI/RDSH projects and has served as the project manager delivering three engagements at the Microsoft Enterprise Engineering Center (EEC) in Redmond, Washington. He is a VMware Certified Professional.

Terry Lewis

Employed by Quest since 2000, Terry Lewis is a solutions architect for the company's User Workspace Management business. A Microsoft Certified Trainer, VMware Certified Professional, and former Citrix Certified Instructor, he brings years of experience as a system integrator, trainer and service provider for server-based computing, virtualization, security, and core enterprise technologies.
Appendix A: Test Hardware Details

Autoclient (Windows 7)
50 x Dell Optiplex
• Intel Dual Core 6600 @ 2.40 GHz
• 2 GB RAM
• Windows 7 Enterprise (7601) Service Pack 1

Microsoft Hyper-V Hosts (Windows 2008 R2 SP1)
HyperV-1–20: Dell R710
• 2 x Intel Xeon Hex Core X5650 @ 2.67 GHz
• 96 GB RAM
• 2 x SAS 15K 450 GB RAID1 (OS); DATA drive is SAN
• Windows 2008 R2 Enterprise (7601) Service Pack 1

HyperV-31–50: Dell R710
• 2 x Intel Xeon Hex Core X5670 @ 2.93 GHz
• 72 GB RAM
• 2 x SAS 10K 300 GB RAID1 (OS), 6 x SAS 10K 300 GB (DATA) RAID0
• Windows 2008 R2 Enterprise (7601) Service Pack 1

Dell T7500 (used for additional LoginVSI testing)
• 2 x Intel Xeon Hex Core X5650 @ 2.67 GHz
• 48 GB RAM
• 1 x SATA 250 GB C: (OS)
• 1 x SATA 1.8 TB H: (Hyper-V)
• Windows 2008 R2 DataCenter (7601) Service Pack 1

Microsoft SCVMM 2012 (Two Servers)
2 x Dell PowerEdge 2970
• 2 x dual-core AMD Opteron processor 2222 SE @ 3.00 GHz
• 32 GB RAM
• 2 x 10K 146 GB RAID1 (OS) and 6 x 10K 146 GB RAID10 (DATA)
• Windows 2008 R2 Enterprise (7601) Service Pack 1
• SCVMM 2012, version 3.0.6005.0, with hotfixes KB2663959 and KB2663960
SCVMM2 – Hyperv-1–20 (SAN)
SCVMM3 – Hyperv-31–50 (DAS)
Quest Workspace Desktop Virtualization (Two Servers)
2 x Dell PowerEdge 2970
• 2 x dual-core AMD Opteron processor 2222 SE @ 3.00 GHz
• 32 GB RAM
• 2 x 10K 146 GB RAID1 (OS) and 6 x 10K 146 GB RAID10 (DATA)
• Windows 2008 R2 Enterprise (7601) Service Pack 1
• Quest Workspace Desktop Virtualization 7.6

Microsoft SQL 2008 (Two Servers)
2 x Dell PowerEdge 2970 (for SCVMM and QWDV)
• 2 x dual-core AMD Opteron processor 2222 SE @ 3.00 GHz
• 32 GB RAM
• 2 x 10K 146 GB RAID1 (OS) and 6 x 10K 146 GB RAID10 (DATA)
• Windows 2008 R2 Enterprise (7601) Service Pack 1
• Microsoft SQL Server 2008 R2

Quest Foglight for Virtual Desktops 5.6.3 (Two Servers)
2 x Dell PowerEdge 2970
• 2 x dual-core AMD Opteron processor 2222 SE @ 3.00 GHz
• 32 GB RAM
• 2 x 10K 146 GB RAID1 (OS) and 6 x 10K 146 GB RAID10 (DATA)
• Windows 2008 R2 Enterprise (7601) Service Pack 1
• Microsoft SQL Server 2008 R2

Virtual Desktops
Windows 7 Enterprise Service Pack 1

Dynamic Memory
• Startup memory: 512 MB
• Maximum memory: 2048 MB
• Memory buffer: 20%

SAN
Compellent, with 120 disks and 12 GB cache:
• 12 x 200 GB SSD
• 48 x 1 TB SAS
• 60 x 600 GB SAS
• 8 x 8 Gb FC ports for host connections (currently using 4)
• 8 x 10 Gb FCoE
• 4 x 10 Gb iSCSI
Appendix B: Quest Workspace Desktop Virtualization Tuning

The number of concurrent actions was changed throughout the study to take timings; however, most timings were taken with a setting of 7.
Figure 15. Setting the maximum number of concurrent operations on a server
Due to the high number of virtual desktops per pool, it is recommended that you set Refresh to Off in the console and then manually click Refresh when it is required.
Figure 16. Setting Refresh to Off in the console
It is also recommended that you reduce the heartbeat interval from 1 minute to 10 minutes.
Figure 17. Changing the heartbeat interval from 1 minute to 10 minutes
© 2012 Quest Software, Inc. ALL RIGHTS RESERVED.
This document contains proprietary information protected by copyright. No part of this document may be reproduced or transmitted in any
form or by any means, electronic or mechanical, including photocopying and recording, for any purpose without the written permission of
Quest Software, Inc. (“Quest”).
The information in this document is provided in connection with Quest products. No license, express or implied, by estoppel or otherwise, to
any intellectual property right is granted by this document or in connection with the sale of Quest products. EXCEPT AS SET FORTH IN
QUEST'S TERMS AND CONDITIONS AS SPECIFIED IN THE LICENSE AGREEMENT FOR THIS PRODUCT, QUEST ASSUMES NO
LIABILITY WHATSOEVER AND DISCLAIMS ANY EXPRESS, IMPLIED OR STATUTORY WARRANTY RELATING TO ITS PRODUCTS
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE,
OR NON-INFRINGEMENT. IN NO EVENT SHALL QUEST BE LIABLE FOR ANY DIRECT, INDIRECT, CONSEQUENTIAL, PUNITIVE,
SPECIAL OR INCIDENTAL DAMAGES (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF PROFITS, BUSINESS
INTERRUPTION OR LOSS OF INFORMATION) ARISING OUT OF THE USE OR INABILITY TO USE THIS DOCUMENT, EVEN IF QUEST
HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Quest makes no representations or warranties with respect to the
accuracy or completeness of the contents of this document and reserves the right to make changes to specifications and product
descriptions at any time without notice. Quest does not make any commitment to update the information contained in this document.
If you have any questions regarding your potential use of this material, contact:
Quest Software World Headquarters LEGAL Dept
5 Polaris Way
Aliso Viejo, CA 92656
www.quest.com
email: [email protected]
Refer to our Web site for regional and international office information.
Trademarks
Quest, Quest Software, the Quest Software logo, AccessManager, ActiveRoles, Aelita, Akonix, AppAssure, Benchmark Factory, Big Brother,
BridgeAccess, BridgeAutoEscalate, BridgeSearch, BridgeTrak, BusinessInsight, ChangeAuditor, ChangeManager, Defender,
DeployDirector, Desktop Authority, DirectoryAnalyzer, DirectoryTroubleshooter, DS Analyzer, DS Expert, Foglight, GPOADmin, Help Desk
Authority, Imceda, IntelliProfile, InTrust, Invirtus, iToken, I/Watch, JClass, Jint, JProbe, LeccoTech, LiteSpeed, LiveReorg, LogADmin,
MessageStats, Monosphere, MultSess, NBSpool, NetBase, NetControl, Npulse, NetPro, PassGo, PerformaSure, Point,Click,Done!,
PowerGUI, Quest Central, Quest vToolkit, Quest vWorkSpace, ReportADmin, RestoreADmin, ScriptLogic, Security Lifecycle Map,
SelfServiceADmin, SharePlex, Sitraka, SmartAlarm, Spotlight, SQL Navigator, SQL Watch, SQLab, Stat, StealthCollect, Storage Horizon,
Tag and Follow, Toad, T.O.A.D., Toad World, vAutomator, vControl, vConverter, vFoglight, vOptimizer, vRanger, Vintela, Virtual DBA,
VizionCore, Vizioncore vAutomation Suite, Vizioncore vBackup, Vizioncore vEssentials, Vizioncore vMigrator, Vizioncore vReplicator,
WebDefender, Webthority, Xaffire, and XRT are trademarks and registered trademarks of Quest Software, Inc in the United States of
America and other countries. Other trademarks and registered trademarks used in this guide are property of their respective owners.
Updated—JUNE 2012
About Quest Software, Inc.
Established in 1987, Quest Software (Nasdaq: QSFT) provides simple and innovative IT management solutions that enable more than
100,000 global customers to save time and money across physical and virtual environments. Quest products solve complex IT challenges
ranging from database management, data protection, identity and access management, monitoring, and user workspace management to
Windows management. For more information, visit www.quest.com.
Contacting Quest Software
PHONE 800.306.9329 (United States and Canada)
If you are located outside North America, you can find your local office information on our Web site.
EMAIL [email protected]
MAIL Quest Software, Inc.
World Headquarters
5 Polaris Way
Aliso Viejo, CA 92656
USA
Contacting Quest Support
Quest Support is available to customers who have a trial version of a Quest product or who have purchased a commercial version and have
a valid maintenance contract.
Quest Support provides around-the-clock coverage with SupportLink, our Web self-service.
Visit SupportLink at https://support.quest.com.
SupportLink gives users of Quest Software products the ability to:
Search Quest’s online Knowledgebase
Download the latest releases, documentation and patches for Quest products
Log support cases
Manage existing support cases
View the Global Support Guide for a detailed explanation of support programs, online services, contact information and policies and
procedures.
TBV-ManagingIOPSChallenge-US-VG