Best Practices Guide for VMware vSphere - V5.x

Tegile Storage Array for VMware vSphere with Best Practices Recommendations

Last Update: 10/29/2013

Copyright © 2013 Tegile Systems. All Rights Reserved.

Contents

The VMware vSphere Story
Adhere to vendor configuration best practices
    Tegile Best Practices
Build the environment expecting failure to occur
    Tegile Zebi Storage Array High Availability
        Active/active dual-redundant controllers
        End-to-end data integrity checks
        RAID protection
        Space efficient snapshots
        Remote replication
Understand performance and failure points
    Tegile Zebi Storage Array Monitoring
Lower costs and leverage VAAI and SIOC whenever possible
    VAAI primitives
        Block Zero
        Full Copy
        Hardware Assisted Locking (also known as atomic test and set, or ATS)
        Thin Provisioning Space Reclaim/Stun
    vSphere 5 VAAI enhancements
    ATS-only volumes
    Storage I/O Resource Allocation (SIOC)
    Tegile Integration with VMware
Use thin provisioning where possible, but keep watch
    Tegile Best Practices
Be mindful of alignment issues
    Alignment vs. non-alignment
    Tegile Best Practices
Be Mindful of Snapshots
    Crash-Consistent snapshots
    Application-Consistent snapshots
    Tegile Best Practices
Don't stress over transport choice
    Tegile Best Practices
        Fibre Channel
        iSCSI
        NFS
Summary

The VMware vSphere Story
VMware has managed to create what many organizations consider to be “must have” software and, in the process, has led the charge in enabling customers to reinvent the data center around a “virtual first” approach to workload execution. This new data center paradigm, coupled with VMware’s leadership in the paradigm shift, has given rise to an entire ecosystem of software and hardware providers dedicated to helping organizations maximize the positive results that come from these solutions.

However, in order to truly maximize results, administrators must bear in mind a number of best practices that pertain to the operation of such environments. These practices are more than just vSphere configuration items; in addition to specific configuration recommendations, this paper covers design considerations and broader knowledge areas that, if ignored, can lead to suboptimal outcomes.

Adhere to vendor configuration best practices
Every storage array can be optimized. Only the array vendor has the complete spectrum of knowledge as to how best to optimize the array so that it provides a customer with maximum performance and minimal hassle. Perhaps the most important best practice of all for storage in a vSphere environment is, whenever possible, to adhere to the configuration best practices outlined by the vendor.

Tegile Best Practices
In this spirit, Tegile provides customers with comprehensive guidance as to what steps you need to take for the best outcomes when using the array in a vSphere environment. As is the case with every array vendor, Tegile provides a number of general and specific recommendations that are intended to help customers get the most from their storage investment. These general guidelines are outlined below:

- If possible, spread the I/O load across the two controllers in the Zebi storage array. This requires configuring at least two storage pools and distributing the pools across the two controllers. Note that multiple-pool configurations depend on the number of available SSDs.

- Storage pool configuration
  o Ensure that you configure a minimum of 3 SSDs per storage pool: 2 SSDs for mirrored metadata and 1 SSD for read/write secondary cache.
  o Configure a minimum of 1 hot spare HDD per pool.
  o Use RAID 10 when configuring data drives for best performance.

- LUN configuration for ESXi OS images using VMFS datastores
  o Use 32KB for block size.
  o Configure thin provisioned LUNs.
  o Enable deduplication (choose SHA 256 for checksum).
  o Enable compression (choose lzjb or lz4).
  o Choose “Virtualization” for the “Purpose” parameter.

- Fileshare configuration for ESXi OS images using NFS datastores
  o Use 32KB for block size.
  o Enable deduplication (select SHA 256 for checksum).
  o Enable compression (select lzjb or lz4).
  o Choose “Virtualization” for the “Purpose” parameter.
  o Ensure that Servers is set to 1024.
  o Ensure that Lockd Servers is set to 1024.
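
To make the checklist above easier to apply, here is a minimal sketch that encodes the stated minimums as a simple validation routine. The dictionary keys and the check_zebi_config helper are illustrative names invented for this example; they are not part of any Tegile or VMware tooling.

    # Minimal sketch: validate a planned Zebi pool/datastore layout against the
    # best-practice minimums listed above. All names here are hypothetical.

    RECOMMENDED_BLOCK_SIZE_KB = 32     # for VMFS LUNs and NFS shares backing ESXi OS images
    MIN_SSDS_PER_POOL = 3              # 2 mirrored metadata SSDs + 1 read/write cache SSD
    MIN_HOT_SPARES_PER_POOL = 1

    def check_zebi_config(pools, datastores):
        """Return a list of human-readable warnings for a planned configuration."""
        warnings = []
        if len(pools) < 2:
            warnings.append("Only one storage pool: I/O cannot be spread across both controllers.")
        for pool in pools:
            if pool["ssds"] < MIN_SSDS_PER_POOL:
                warnings.append(f"Pool {pool['name']}: fewer than {MIN_SSDS_PER_POOL} SSDs.")
            if pool["hot_spares"] < MIN_HOT_SPARES_PER_POOL:
                warnings.append(f"Pool {pool['name']}: no hot spare HDD.")
            if pool["raid"] != "RAID10":
                warnings.append(f"Pool {pool['name']}: RAID 10 is recommended for data drives.")
        for ds in datastores:
            if ds["block_size_kb"] != RECOMMENDED_BLOCK_SIZE_KB:
                warnings.append(f"Datastore {ds['name']}: block size should be 32KB.")
            if not (ds["dedup"] and ds["compression"]):
                warnings.append(f"Datastore {ds['name']}: enable deduplication and compression.")
        return warnings

    if __name__ == "__main__":
        pools = [{"name": "pool-a", "ssds": 3, "hot_spares": 1, "raid": "RAID10"},
                 {"name": "pool-b", "ssds": 2, "hot_spares": 1, "raid": "RAID10"}]
        datastores = [{"name": "vmfs-01", "block_size_kb": 32, "dedup": True, "compression": True}]
        for w in check_zebi_config(pools, datastores):
            print("WARNING:", w)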

Build the environment expecting failure to occur
No one wants failure to occur in their data center environment. Unfortunately, failures can and do happen, and data center architects need to bear this in mind. As the saying goes, “Those that fail to plan, plan to fail.” With this in mind, when designing a storage environment to support a vSphere implementation, there should be an expectation that failure will eventually occur at any point in the environment, even when using the best equipment.

In order to fully accommodate such issues, administrators should choose to use hypervisor features such as high availability or fault tolerance. These services inject software-driven availability mechanisms into the environment. However, they must be supported with adequate physical hardware in order to be truly useful. For example, when configuring virtual switches on vSphere hosts, use multiple physical network links to protect against failure of a link for any reason. This is also true for storage networking connectivity in a vSphere environment. The storage network must be robust. Enable multiple data pathways to provide maximum performance and tolerance for hardware failure.

In addition, consider features available in the storage environment for their ability to help organizations plan for the worst while hoping for the best. While the hypervisor will certainly have a lot of features that can help protect from failure, Tegile recommends leveraging all aspects of the environment.

Tegile Zebi Storage Array High Availability
Understanding that the unexpected will always occur, Tegile provides customers with a comprehensive set of features intended to protect them from eventual failure. These redundancy and availability features are outlined in the sections below.

Active/active dual-redundant controllers
Tegile’s storage arrays are highly reliable even with just a single controller, but with two controllers, downtime is practically a thing of the past. While some vendors choose to implement controller redundancy in an active/passive configuration, Tegile’s active/active approach delivers maximum performance by using both controllers simultaneously as often as possible. Besides providing additional reliability, this configuration helps organizations scale to larger workloads.

End-to-end data integrity checks
End-to-end integrity is a feature of the core software in Tegile’s Zebi line of storage arrays. This feature ensures that data is always consistent by comparing each block that is read against an independently stored checksum. Because it uses an independent checksum, rather than simply comparing a block read from disk into RAM with the same block on disk, the integrity feature can protect organizations from many different forms of data damage.
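
As a rough illustration of the independent-checksum idea (a conceptual sketch, not Tegile's actual on-disk format or implementation), the snippet below stores a SHA-256 digest separately from each block on write and verifies it on every read, so a block that is silently corrupted on the media is detected instead of being returned to the application.

    # Sketch of independent, end-to-end checksumming on read (illustrative only).
    import hashlib

    class ChecksummedStore:
        def __init__(self):
            self.blocks = {}      # block number -> bytes ("the disk")
            self.checksums = {}   # block number -> digest, kept independently of the block

        def write(self, block_no: int, data: bytes) -> None:
            self.blocks[block_no] = data
            self.checksums[block_no] = hashlib.sha256(data).hexdigest()

        def read(self, block_no: int) -> bytes:
            data = self.blocks[block_no]
            if hashlib.sha256(data).hexdigest() != self.checksums[block_no]:
                raise IOError(f"checksum mismatch on block {block_no}: data is corrupt")
            return data

    store = ChecksummedStore()
    store.write(7, b"important data")
    store.blocks[7] = b"important dat\x00"   # simulate silent corruption on the media
    try:
        store.read(7)
    except IOError as err:
        print(err)                            # corruption is detected, not returned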

RAID protection
Tegile’s storage arrays provide a rich set of RAID options, including mirroring, single-parity, dual-parity, and triple-parity RAID protection.

Space efficient snapshots
Disasters do not always take the form of a fire, hurricane, or tornado. Sometimes, a disaster can be the result of a simple wrong click of the mouse, which, for example, could result in an accidentally deleted database or virtual machine. With Tegile’s space efficient snapshots, recovery from such incidents is swift.

Remote replication
Natural disasters can strike fast and without warning. When disaster strikes, it can destroy an entire data center in the blink of an eye. With Tegile’s built-in remote replication capability, customers can add a powerful additional layer of data protection to existing mechanisms by replicating array data to a geographically disparate data center, ensuring that a natural disaster does not mean the end of the business.

Understand performance and failure points
In any complex virtual environment, there are a myriad of points at which performance issues can be introduced and negatively impact business workloads. At the same time, understanding failure points can help administrators head off small problems before they become larger ones.

Performance and failure issues can take place at the host, virtual machine, transport, array, or disk level, and at locations within each of those areas. For example, within the array itself, memory utilization may run too high. Administrators must make every effort to ensure that there is visibility into as many

potential performance or failure points as possible, with such information being easily retrievable and actionable.

Tegile Zebi Storage Array Monitoring
The Zebi storage arrays are easy to configure and monitor. The dashboard provides a bird’s-eye view into the health and overall status of the system. It allows an administrator to see a summary of CPU activity, memory utilization, space usage, network activity, disk activity, and the status of key services on the system.

This dashboard allows an administrator to quickly:

- View the overall space usage on the appliance to ensure there is enough capacity.
- Monitor network and disk activity to ensure that the system is not nearing network throughput limits and that disk activity is not uncommonly high.
- Monitor the basic processing and memory resources in the system to ensure that the system is not unusually busy or pressed for processing/memory resources.
- View the status of key services on the system (e.g. NFS, CIFS, iSCSI, FC) and enable/disable them if needed.

For NFS datastores, the dashboard displays VM-level storage metrics, allowing virtualization administrators to identify storage performance bottlenecks at an individual VM level of granularity. The VM-level metrics are also available from VMware vCenter through the Tegile vCenter client plugin.

In addition to the bird’s-eye view of the system, Zebi also enables the administrator to monitor the system in greater detail using the analytics dashboard and monitoring worksheets. You can perform this additional monitoring at a protocol level (e.g. look at NFS traffic versus iSCSI traffic) or at deeper granularity levels (e.g. view traffic per iSCSI target).

An administrator can customize a monitoring worksheet by simply choosing the widgets that represent the areas of interest. This view is commonly used when an administrator wants to detect potential bottlenecks in the system or to verify that all layers are functioning satisfactorily.

Additionally, the dashboard enables an administrator to understand how space is used over time, making it possible to estimate future capacity requirements based on current usage patterns.

Lower costs and leverage VAAI and SIOC whenever possible
vStorage APIs for Array Integration (VAAI) is a feature that was originally introduced in vSphere 4.1 and significantly improved in vSphere 5.0. VAAI is a set of storage APIs provided by VMware, which individual storage array vendors can integrate into their storage products to offload specific storage-related operations from the host to the storage array and achieve greater efficiency.

VAAI primitives
VAAI consists of a number of primitives, which are API stubs to which vendors connect their code. In ESX 4.1, VAAI is available to block storage devices only, and only the first three primitives are supported through vendor-specific plug-ins. The last primitive -- Thin Provisioning Space Reclaim -- is available in

vSphere 4.1, but it is undocumented and not well implemented in the storage ecosystem. It is fully supported in vSphere 5.

Block Zero
The Block Zero primitive enables the array to zero-out large numbers of blocks very quickly and with a single command. This is especially useful, for example, during the VM creation process. Without the Block Zero VAAI primitive, an administrator must wait while the hypervisor itself individually zeroes out storage during the creation of thick volumes. With the Block Zero VAAI primitive, the hypervisor can simply provide the array with a single instruction to perform the zeroing process.
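
A back-of-the-envelope comparison shows why the offload matters. Assuming, purely for illustration, that a host zeroes an eager-zeroed disk in 1 MiB writes and that the array accepts roughly one offloaded zeroing command per 1 GiB extent, the difference in work pushed over the storage network is dramatic:

    # Illustrative arithmetic only: zeroing a 40 GB eager-zeroed disk.
    GiB = 1024**3
    disk_size = 40 * GiB
    io_size = 1 * 1024**2                      # assumed 1 MiB host write size

    without_vaai_cmds = disk_size // io_size   # host issues every write itself
    without_vaai_bytes = disk_size             # all zeroes cross the storage network

    # With Block Zero, the host describes the zeroed range instead of shipping
    # the zeroes; assume roughly one offloaded command per 1 GiB extent.
    with_vaai_cmds = disk_size // GiB

    print(f"Without VAAI: {without_vaai_cmds} writes, {without_vaai_bytes / GiB:.0f} GiB on the wire")
    print(f"With VAAI Block Zero: ~{with_vaai_cmds} offloaded commands, zeroing done by the array")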

Full Copy
Full Copy enables the storage array to make full copies of virtual disks within the array rather than having to copy the data over the storage network to the hypervisor and then back to the storage array. This is especially useful during virtual machine cloning operations and can significantly reduce the performance impact of such operations on the storage fabric and the host.

But, that “simple” cloning operation, when performed in a traditional environment, is actually quite complex and resource intensive. Here’s a look at the process.

1. The vSphere host needs to contact the storage array and begin locating the contents that are to be cloned.

2. On a block-by-block basis, that content is transferred over the storage communications network to the vSphere host, where it undergoes some processing.

3. The vSphere host then writes each block back to the storage array.
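
Putting rough numbers on those three steps makes the benefit of the offload clear: in the host-mediated path every block crosses the storage network twice, while an offloaded Full Copy keeps the data movement inside the array. The figures below are illustrative assumptions, not measurements:

    # Illustrative arithmetic: network traffic to clone a 100 GB VM.
    GiB = 1024**3
    vm_size = 100 * GiB

    # Traditional clone: step 2 reads every block to the host, step 3 writes it back.
    host_mediated_traffic = 2 * vm_size

    # Offloaded clone: the array copies internally; only control commands cross the network.
    offloaded_traffic = 0

    print(f"Host-mediated clone moves ~{host_mediated_traffic / GiB:.0f} GiB over the storage network")
    print(f"VAAI Full Copy moves ~{offloaded_traffic} GiB; the copy happens inside the array")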

Hardware Assisted Locking (also known as atomic test and set, or ATS)
ATS provides a more effective mechanism for handling SCSI reservations to protect VMFS metadata during updates. In fact, in older editions of ESX/ESXi/vSphere, SCSI reservations were one of the issues affecting both performance and the ability to easily scale VMFS volumes. The more virtual machine VMDK files on a VMFS volume, the more likely that SCSI locking issues would affect other virtual machines.

Thin Provisioning Space Reclaim/Stun
This primitive allows the reclamation of unused space and protects the VM and application from out-of-space conditions on thin provisioned volumes.

vSphere 5 VAAI enhancements
There are a few enhancements to VAAI that came with ESX 5.0. First, VAAI is available for NFS storage, and VMware added a thin provisioning primitive. Also, VMware standardized all four primitives as T10 standard SCSI commands, which brings VAAI in line with industry standards. No new VAAI primitives were added in vSphere 5.1 or vSphere 5.5, but VMware improved existing VAAI primitives for efficiency and simplicity.

ATS-only volumes
Try to use ATS-only volumes whenever possible, but be careful! ATS volumes are not supported by every version of vSphere, so there may be a period of time during which interoperability between volumes becomes problematic as organizations shift to ATS volumes.

Storage I/O Resource Allocation (SIOC)
SIOC enables I/O prioritization for specific VMs in an environment where multiple VMs across multiple ESXi hosts access a shared storage pool. ESXi provides dynamic storage resource allocation mechanisms that allow critical workloads to maintain performance even during peak load periods when there is contention for I/O resources.
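
To illustrate the idea of share-based prioritization, the sketch below divides a datastore's available IOPS among contending VMs in proportion to their configured shares. This is a simplified conceptual model, not VMware's actual SIOC scheduler:

    # Simplified model of proportional, share-based I/O allocation under contention.
    def allocate_iops(datastore_iops: int, vm_shares: dict) -> dict:
        total_shares = sum(vm_shares.values())
        return {vm: datastore_iops * shares / total_shares for vm, shares in vm_shares.items()}

    # Example: a critical VM with 2000 shares and two normal VMs with 1000 shares each.
    print(allocate_iops(8000, {"sql-prod": 2000, "web-01": 1000, "web-02": 1000}))
    # -> {'sql-prod': 4000.0, 'web-01': 2000.0, 'web-02': 2000.0}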

Tegile Integration with VMware
Tegile Zebi storage arrays are tightly integrated with VMware, enabling customers to take advantage of VMware’s storage-centric features, including VAAI and SIOC.

Use thin provisioning where possible, but keep watch
Thin provisioning provides customers with a method to expand the return on their storage investment by gaining the ability to allocate more logical disk space to virtual machines than is actually available on the physical array.

At first, it may seem like a foolish idea to enable such a capability, but once placed into the context of the larger data center, thin provisioning makes more sense. That said, use of thin provisioning does demand that organizations maintain careful watch over actual capacity use to make sure that sufficient physical capacity remains available to meet workload demands.

So, what are some of the reasons that thin provisioning may make sense for an organization? In looking back at how servers used to be provisioned and comparing that with how virtual machines are provisioned, not a lot has changed. Administrators still allocate far more disk space than is really needed, leading to potentially wasted storage space.

Thin provisioning can make an array seem bigger than it actually is. As administrators overallocate storage to individual virtual machines, thin provisioning only supplies the actual physical space needed by a virtual machine. The remaining space is kept in the pool of allocable storage. So, if someone provisions a 100 GB virtual disk but only uses 20 GB, the other 80 GB can be used by other virtual machines.
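
The arithmetic generalizes easily: what must stay below the array's physical capacity is the space actually consumed, not the totals that have been provisioned. A small illustrative calculation (the VM names and sizes are made up):

    # Illustrative arithmetic for thin provisioning: provisioned vs. consumed capacity.
    vms = {
        "vm-app01": {"provisioned_gb": 100, "used_gb": 20},
        "vm-db01":  {"provisioned_gb": 500, "used_gb": 180},
        "vm-web01": {"provisioned_gb": 100, "used_gb": 35},
    }
    array_capacity_gb = 400

    provisioned = sum(v["provisioned_gb"] for v in vms.values())   # 700 GB promised
    consumed = sum(v["used_gb"] for v in vms.values())             # 235 GB actually written

    print(f"Provisioned: {provisioned} GB, consumed: {consumed} GB on a {array_capacity_gb} GB array")
    print(f"Overcommit ratio: {provisioned / array_capacity_gb:.2f}x")
    print(f"Physical headroom remaining: {array_capacity_gb - consumed} GB")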

Thin provisioning allows an organization to take a template-based approach to virtual machine provisioning, even if the capacity assignments in those templates aren’t that efficient. With thin provisioning, administrators don’t risk wasting valuable storage resources. This results in improved return on investment figures for the storage array.

There is often confusion as to on which device thin provisioning should occur. You can create thin provisioned disks on vSphere at the time you create a virtual disk, but most modern enterprise-class storage arrays also provide thin provisioning services. So, administrators are left with a decision as to where best to enable the service. For the best possible use of disk space, many advocate for a thin-on-thin configuration wherein an administrator creates thin provisioned virtual disks and stores them on

thin provisioned LUNs on a storage array. In other words, thin provisioning is enabled at both the hypervisor layer and the storage layer. Other admins create thick virtual hard disks and store them on thin provisioned arrays, which carries most of the same benefits.

The bottom line is this: thin provisioning makes sense, but it requires constant administrator attention to ensure that physical capacity remains sufficient to meet the needs of the virtual machines. Deploy monitoring and alerting tools to ensure that an array doesn’t run out of space.

Particularly astute readers may wonder what happens if the array actually runs out of physical space. After all, with thin provisioning, it’s entirely possible to allocate more storage to virtual machines than is physically available on the array. If an array runs out of capacity when a virtual machine requests additional space, VAAI’s STUN primitive – fully supported by Tegile – safely pauses the affected virtual machine to prevent catastrophic data loss.

Without VAAI support, a virtual machine would simply crash with a write error, or the application inside would fail. This could be particularly bad for databases, which would then generally require a rebuild/repair process.

Tegile Best Practices
Configure thin provisioned LUNs from the Zebi storage array and use eager zero provisioning for these LUNs on ESXi. Since Zebi storage arrays are VAAI-capable, ESXi offloads eager zero operations from the host to the Zebi storage array, increasing the performance of the eager zero operation.

Be mindful of alignment issues
Although less of an issue than it was in the past, misaligned storage can create performance problems. This happens when the hypervisor’s physical I/O block offset and allocation size are out of sync. In such cases, unnecessary physical I/O takes place at the storage side, negatively impacting the entire environment. In a virtual environment, misalignment is often magnified due to the hypervisor virtual disk abstraction taking place between the guest operating system and the storage device.

Alignment vs. non-alignment
When a host I/O request’s Logical Block Address (LBA) and I/O size are rounded to the boundary of the storage block size, it is called aligned, and it results in optimal I/O performance. When the I/O starting LBA or the I/O size is not rounded to the boundary of the storage block size, it is called misaligned, and it results in unnecessary I/O at the storage layer or hypervisor layer, hence impacting the performance of the overall system.

Misaligned guest and VMFS partitions are a major cause of degraded performance. For virtual machine datastores, ensure that the guest virtual disks and vSphere VMFS partitions are properly aligned to the storage array LUNs. Misalignment is mainly a concern with Windows XP and Windows 2003 guest VMs. It’s not an issue for Windows 2008 and Windows 2012, as these operating systems include additional intelligence to avoid this issue.
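
The definition above can be expressed directly as a check: an I/O is aligned when both its starting offset and its length fall on storage-block-size boundaries. A minimal sketch of that rule, with illustrative values:

    # Minimal sketch of the alignment rule described above (illustrative only).
    def is_aligned(start_offset_bytes: int, io_size_bytes: int, storage_block_bytes: int) -> bool:
        """An I/O is aligned when both its starting offset and its size are
        multiples of the storage block size."""
        return (start_offset_bytes % storage_block_bytes == 0 and
                io_size_bytes % storage_block_bytes == 0)

    block = 32 * 1024                                # a 32KB Zebi block, per the guidance above
    print(is_aligned(65536, 32768, block))           # True: starts and ends on 32KB boundaries
    print(is_aligned(65536 + 512, 32768, block))     # False: offset by 512 bytes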

Tegile Best Practices
The Tegile Zebi storage array supports a user-configurable block size at the per-LUN or per-NAS-share level. Therefore, you can fine-tune a Zebi to achieve optimal performance for multiple different applications running on the same Zebi array.

The supported block sizes are: 4KB, 8KB, 16KB, 32KB, 64KB, and 128KB. Zebi offers default LUN configurations based on the storage usage purpose: Virtualization, Database, File Sharing, and Backup. By selecting the suitable application type, Zebi automatically selects the LUN block size that is validated as the best practice for the application. The default block size configuration suffices for most of the cases. Furthermore, users can fine-tune the block size to best suit their application environment.

Below are the general best practices for configuring the block size for both LUNs and shares on a Zebi array, based on the usage purpose:

[Table: recommended block size (4K, 8K, 16K, 32K, 64K, or 128K) for each use case: Virtualization, Exchange DB, Exchange Log, SQL Server, Other DB, File Sharing, and Backup]

*Note: When deduplication is enabled on the LUN or Share, Tegile recommends that you use a block size of 32KB or more.

Be Mindful of Snapshots
One of the most useful features of virtualization is the ability to snapshot a virtual machine. That point-in-time picture of the virtual machine’s memory and virtual disk is useful should an app or OS upgrade become corrupted, or when testing configuration changes. Virtualization backup and replication applications also use snapshots to capture virtual disk changes without causing downtime. However, many people make the mistake of overusing snapshots, which not only takes up disk space but also causes poor performance for backup applications, vMotion operations, and other storage-related functions. Only use hypervisor-based snapshots temporarily, and then delete them.

It is also important to understand snapshots in general.

Crash-Consistent snapshots
Crash-consistent snapshots enable live backup and guarantee that the data stored is consistent for the point in time when the snapshot was created. However, when it comes to attempting to perform a full recovery, this kind of snapshot is not sufficient, as the application and the file system may not be aware of the snapshot or a current backup. The snapshot does not capture the additional writes from incomplete transactions, or the cached data in the file system or application. When recovering data from such crash-consistent snapshots, the file system and the application have to go through a crash recovery process. This does not guarantee the application can recover.

Application-Consistent snapshots
Application-consistent (also called transactional-consistent) snapshots ensure that the snapshot is application-aware. Before Zebi takes a snapshot, it coordinates with the applications and the file systems to quiesce the application and to flush the transactional and cached data onto storage. The application then resumes after the snapshot is taken. When restored from an application-consistent snapshot, the applications can recover more accurately and consistently. The snapshots are application and hypervisor-aware, and enable consistent application and virtual machine recovery.

In a Tegile environment, application consistency is accomplished through the use of Tegile Data Protection Services (TDPS) software installed on a Windows guest VM. It works with the VSS service on Windows, and communicates with the Zebi array to coordinate the creation of application and hypervisor consistent snapshots. TDPS consists of two sub-components:

- Tegile Data Protection Agent (TDPA)
- Tegile VSS Hardware Provider

Tegile Best Practices
Tegile’s storage arrays support VM-consistent snapshots for virtual machines using VMDK files – VMware’s virtual disk file format – in a VMware ESXi environment, through VMTools. To work in a seamless manner, Zebi communicates with the VMware host or VMware vCenter through the management port. As such, users need to make sure that they connect the ESXi hosts, vCenter, and the Zebi storage array to the same management network. To ensure VM-consistent data protection, install VMTools, and configure the VMware Snapshot Provider and Volume Shadow Copy services to automatically start on all Windows VMs.

Tegile storage arrays also support application-consistent snapshots in the case of Windows guest VMs with direct LUN access through a physical Raw Device Mapping (RDM) or VMDirectPath I/O to Zebi storage arrays. This is accomplished through Tegile Data Protection Services (TDPS) software installed on the Windows guest VMs.

For application-consistent snapshots, also verify the following items:

- There are no dynamic disks or IDE disks in use.
- There are plenty of empty SCSI slots available in the virtual machine.
- The vSphere version is 4.1 or higher.

Don’t stress over transport choice
Fibre Channel, iSCSI, NFS… there are certainly pros and cons to each, but don’t worry too much about making the perfect selection. In the real world, from a performance perspective, it is really hard to saturate the storage transport. Sure, the biggest organizations out there will certainly push even the fastest Fibre Channel to its limits, but the vast majority of businesses are just fine with any of the options. As such, make a transport choice based on other factors, such as ease of implementation or the ability to get detailed performance metrics.

Tegile Best Practices
Tegile’s storage arrays are unique in the hybrid storage space in that they provide the full range of connectivity options for customers, so they can meet any use case. For each transport choice, Tegile has

a set of specific recommendations that administrators should follow for best results. These best practices are outlined in the sections below.

Fibre Channel
- Balance active I/O across all available FC paths for the LUNs.
- Configure all the Zebi target FC ports into a single FC target group.
- Set the maximum number of outstanding disk requests from the ESXi host by setting the Disk.SchedNumReqOutstanding parameter in ESXi to 32, 64, or 128, based on the formula below, to prevent flooding the target port queue depth. The command queue depth for each FC port on the Zebi storage array is set to 2048. The formula is:
  o (Disk.SchedNumReqOutstanding) * (number of LUNs) * (number of ESXi hosts in the cluster) <= 2048 * (number of FC ports in the Zebi storage array)
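
The inequality above can be rearranged to find the largest safe Disk.SchedNumReqOutstanding value for a given environment and then rounded down to the nearest recommended setting (32, 64, or 128). The helper below is a simple calculation aid written for this guide, not a Tegile or VMware tool, and the same arithmetic applies to the iSCSI formula in the next section:

    # Solve the queue-depth formula above for Disk.SchedNumReqOutstanding:
    # (Disk.SchedNumReqOutstanding) * LUNs * hosts <= 2048 * array target ports
    def max_sched_num_req_outstanding(luns: int, esxi_hosts: int, array_ports: int) -> int:
        limit = (2048 * array_ports) // (luns * esxi_hosts)
        for setting in (128, 64, 32):      # recommended values, largest first
            if setting <= limit:
                return setting
        return limit                        # environment needs an even smaller value

    # Example: 20 LUNs, 8 hosts in the cluster, 4 FC target ports on the Zebi array.
    print(max_sched_num_req_outstanding(luns=20, esxi_hosts=8, array_ports=4))  # -> 32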

iSCSI
- If possible, configure jumbo frames. Ensure that all devices along the I/O path are configured for jumbo frames.
- On each controller, configure all Ethernet interfaces used for iSCSI into a single IPMP group.
- Use separate subnets for iSCSI and management networks.
- If possible, add multiple floating IP addresses on the iSCSI IPMP group to balance the load across the interfaces. The number of floating IP addresses should match the number of interfaces in the IPMP group.
- Use software iSCSI initiators instead of hardware iSCSI adapters to ensure that there are no compatibility issues.
- Set the maximum number of outstanding disk requests from the ESXi host by setting the Disk.SchedNumReqOutstanding parameter in ESXi to 32, 64, or 128, based on the same formula, to prevent flooding the target port queue depth. The command queue depth for each iSCSI port on the Zebi storage array is set to 2048. The formula is:
  o (Disk.SchedNumReqOutstanding) * (number of LUNs) * (number of ESXi hosts in the cluster) <= 2048 * (number of iSCSI ports in the Zebi storage array)

NFS
- Configure all Ethernet interfaces on each controller used for NFS into a single IPMP group.
- Use separate subnets for NFS and management networks.
- Add multiple floating IP addresses on the NFS IPMP group to balance the load across the interfaces, if possible. The number of floating IP addresses should match the number of interfaces in the IPMP group.
- On the ESXi hosts, distribute the access to NFS shares evenly across the floating IP addresses.
- Configure the ESXi NFS.MaxQueueDepth parameter to 32, 64, or 128 to avoid congestion on the storage arrays. By default, the value of NFS.MaxQueueDepth on ESXi 5.x hosts is set to a very high value (4294967295). Use the formula:
  o (NFS.MaxQueueDepth) * (number of Shares) * (number of ESXi hosts) <= 1024
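
The same arithmetic applies to the NFS formula, with a ceiling of 1024 and share count in place of LUN count. A minimal sketch:

    # Solve the NFS formula above for NFS.MaxQueueDepth:
    # (NFS.MaxQueueDepth) * shares * ESXi hosts <= 1024
    def max_nfs_queue_depth(shares: int, esxi_hosts: int) -> int:
        limit = 1024 // (shares * esxi_hosts)
        for setting in (128, 64, 32):
            if setting <= limit:
                return setting
        return limit

    # Example: 4 NFS shares accessed by 8 ESXi hosts.
    print(max_nfs_queue_depth(shares=4, esxi_hosts=8))  # 1024 // 32 = 32 -> 32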

Summary
Tegile Zebi storage arrays are built from the ground up with virtualization in mind, to address the unique IOPS requirements of virtualized workloads. Zebi storage arrays are also tightly integrated with VMware for better performance, availability, and management of VMware environments. The best practices listed above enable customers to take advantage of the Zebi feature set and the integration with VMware for the best functionality and optimal results.