EMC® SOLUTIONS FOR iSCSI MULTIPATHING AND NFS PATH REDUNDANCY WITH EMC VNXe™ SERIES AND VMWARE ESXi™ 4.1

Applied Best Practices Guide

Unified Storage Division

April, 2012

Abstract

This document describes best practices for configuring iSCSI multipathing and NFS path redundancy for VMware ESXi 4.1 on EMC® VNXe™ series storage arrays. It breaks down four use cases for multipathing and path redundancy with VNXe storage arrays and VMware ESXi™ 4.1 into actionable best practices that can easily be implemented.


Copyright © 2012 EMC Corporation. All rights reserved.

Published April, 2012

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided “as is”. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

VMware, ESX, VMware vCenter, vMotion, and VMware vSphere are registered trademarks or trademarks of VMware, Inc., in the United States and/or other jurisdictions. All other trademarks used herein are the property of their respective owners.

EMC Solutions for iSCSI Multipathing and NFS Path Redundancy with EMC VNXe Series and VMware ESXi 4.1

Applied Best Practices Guide

Part Number H10599


Contents

Chapter 1 Introduction ........................................................................ 13

Introduction to the VNXe Series .................................................................... 14

Software suites available ..................................................................................... 14

Software packs available ..................................................................................... 14

Path redundancy with link aggregation ................................................................ 15

Chapter 2 iSCSI without Link Aggregation ............................................ 17

Configure multipathing for iSCSI without link aggregation ............................ 18

Overview .............................................................................................................. 18

Configure jumbo frames ....................................................................................... 18

Create iSCSI servers ............................................................................................. 19

Create storage pools ............................................................................................ 20

Configure ESX server networking .......................................................................... 26

Provision datastores ............................................................................................ 31

Chapter 3 iSCSI with Link Aggregation ................................................. 39

Configure multipathing for iSCSI with link aggregation ................................. 40

Overview .............................................................................................................. 40

Configure the switches to support LACP ............................................................... 40

Configure jumbo frames and link aggregation groups .......................................... 41

Create iSCSI servers ............................................................................................. 43

Create storage pools ............................................................................................ 45

Configure switch ports for link aggregation .......................................................... 50

Configure ESX server networking .......................................................................... 51

Add the vCenter server to the Virtualization Hosts table. ...................................... 54

Provision datastores ............................................................................................ 55

Chapter 4 NFS without Link Aggregation .............................................. 61

Configure path redundancy for NFS without link aggregation ........................ 62

Overview .............................................................................................................. 62

Configure ESX server networking .......................................................................... 62

Create shared folder servers ................................................................................ 64

Create storage pools ............................................................................................ 66


Add the vCenter server to the Virtualization Hosts table. ...................................... 71

Provision datastores ............................................................................................ 71

Chapter 5 NFS with Link Aggregation ................................................... 77

Configure path redundancy for NFS with link aggregation ............................. 78

Overview .............................................................................................................. 78

Configure the switches to support LACP ............................................................... 78

Configure jumbo frames and link aggregation groups .......................................... 79

Configure switch ports for link aggregation .......................................................... 81

Configure ESX server networking .......................................................................... 82

Create shared folder servers ................................................................................ 85

Create storage pools ............................................................................................ 87

Add the vCenter server to the Virtualization Hosts table. ...................................... 92

Provision datastores ............................................................................................ 92


Figures

Figure 1. Set MTU size ....................................................................................... 18
Figure 2. Create iSCSI server ............................................................................. 19
Figure 3. iSCSI server details ............................................................................. 20
Figure 4. Disk Configuration Wizard – select configuration method ................... 21
Figure 5. Disk Configuration Wizard – pool name and description ..................... 22
Figure 6. Disk Configuration Wizard – specify drive type ................................... 23
Figure 7. Disk Configuration Wizard – select number of disks ........................... 24
Figure 8. Disk Configuration Wizard – add hot spares ....................................... 25
Figure 9. Disk Configuration Wizard – specify number of spares ....................... 26
Figure 10. Configure NIC teaming ........................................................................ 28
Figure 11. Vmkernel port names ......................................................................... 29
Figure 12. Vmhba name ...................................................................................... 29
Figure 13. VMware Storage Wizard – specify datastore name .............................. 32
Figure 14. VMware Storage Wizard – provision VMFS datastore .......................... 32
Figure 15. VMware Storage Wizard – select storage pool and iSCSI server .......... 33
Figure 16. VMware Storage Wizard – select protection ........................................ 34
Figure 17. VMware Storage Wizard – specify host access .................................... 35
Figure 18. Manage paths .................................................................................... 36
Figure 19. Specify path ....................................................................................... 36
Figure 20. Set eth2 MTU ...................................................................................... 42
Figure 21. Aggregate eth3 with eth2 ................................................................... 43
Figure 22. Create iSCSI server with link aggregation ............................................ 44
Figure 23. iSCSI server details ............................................................................ 45
Figure 24. Disk Configuration Wizard – select configuration method ................... 46
Figure 25. Disk Configuration Wizard – pool name and description ..................... 46
Figure 26. Disk Configuration Wizard – specify drive type ................................... 47
Figure 27. Disk Configuration Wizard – select number of disks ........................... 48
Figure 28. Disk Configuration Wizard – add hot spares ....................................... 49
Figure 29. Disk Configuration Wizard – specify number of spares ....................... 50
Figure 30. Select load balancing and active adapters ......................................... 53
Figure 31. Do not override vSwitch failover order ................................................ 54
Figure 32. VMware Storage Wizard – specify datastore name .............................. 55
Figure 33. VMware Storage Wizard – provision VMFS datastore .......................... 56
Figure 34. VMware Storage Wizard – select storage pool and iSCSI server .......... 56
Figure 35. VMware Storage Wizard – select protection ........................................ 57
Figure 36. VMware Storage Wizard – specify host access .................................... 58
Figure 37. Manage paths for the first AVE datastore ............................................ 59
Figure 38. Set active adapter .............................................................................. 64
Figure 39. Create shared folder server ................................................................. 65


Figure 40. Set shared folder for NFS .................................................................... 66
Figure 41. Disk Configuration Wizard – select configuration method ................... 67
Figure 42. Disk Configuration Wizard – pool name and description ..................... 67
Figure 43. Disk Configuration Wizard – specify drive type ................................... 68
Figure 44. Disk Configuration Wizard – select number of disks ........................... 69
Figure 45. Disk Configuration Wizard – add hot spares ....................................... 70
Figure 46. Disk Configuration Wizard – specify number of spares ....................... 71
Figure 47. VMware Storage Wizard – specify datastore name .............................. 72
Figure 48. VMware Storage Wizard – provision NFS datastore ............................. 73
Figure 49. VMware Storage Wizard – select storage pool and shared folder server ........................................................................... 74
Figure 50. VMware Storage Wizard – select protection ........................................ 75
Figure 51. VMware Storage Wizard – specify host access .................................... 76
Figure 52. Set eth2 MTU ...................................................................................... 80
Figure 53. Aggregate eth3 with eth2 ................................................................... 81
Figure 54. Select load balancing and active adapters ......................................... 84
Figure 55. Do not override vSwitch failover order ................................................ 85
Figure 56. Create shared folder server ................................................................. 86
Figure 57. Set shared folder for NFS .................................................................... 87
Figure 58. Disk Configuration Wizard – select configuration method ................... 88
Figure 59. Disk Configuration Wizard – pool name and description ..................... 88
Figure 60. Disk Configuration Wizard – specify drive type ................................... 89
Figure 61. Disk Configuration Wizard – select number of disks ........................... 90
Figure 62. Disk Configuration Wizard – add hot spares ....................................... 91
Figure 63. Disk Configuration Wizard – specify number of spares ....................... 92
Figure 64. VMware Storage Wizard – specify datastore name .............................. 93
Figure 65. VMware Storage Wizard – provision NFS datastore ............................. 94
Figure 66. VMware Storage Wizard – select storage pool and shared folder server ........................................................................... 95
Figure 67. VMware Storage Wizard – select protection ........................................ 96


Tables

Table 1. Network interface/runtime name relationship .................................... 37



Preface

About this document

This document discusses how to configure path redundancy for VMware NFS and VMFS datastores on the EMC VNXe Series. It describes the Unisphere for VNXe and VMware ESX interfaces, and explains how an IT generalist can easily complete storage- and network-related tasks to configure path redundancy.

Business case

In today’s world, businesses cannot afford downtime caused by failures in the network infrastructure. One way to protect against network failures is to configure path resiliency on the storage network.

Storage administrators are constantly looking for ways to simplify the configuration and management processes. This can be very difficult, because many storage-management operating environments assume an in-depth knowledge of storage concepts. For customers who manage different storage products from different vendors, this becomes a real challenge, especially when different vendors use different terminology. Furthermore, in most cases, navigation is based on storage concepts rather than management tasks, which makes it very hard for the IT generalist to manage storage.

To address these concerns, and make it easier to configure path resiliency, EMC uses Unisphere™ for VNXe™, a fundamentally new approach to storage management. Unisphere for VNXe allows you to manage storage within the context of an application, using easy-to-understand language instead of arcane storage terms. It also embeds best practices into the user interface for a faster, simpler experience when completing everyday administrative tasks.

Unisphere for VNXe is a graphical, application-oriented model with a “web-familiar” look and feel. Management of VNXe storage systems is simplified, allowing the utilization of advanced features such as thin provisioning, file deduplication, and compression, without requiring an in-depth understanding of these technologies. A support ecosystem provides access to learning materials and support resources, making storage management easier than ever. The result is immediate productivity and efficiency.

The sample configurations used to test the procedures described in this document use EMC Avamar, but the application wizards available in EMC Unisphere for EMC VNXe storage systems can be used to provision the necessary storage for other bandwidth-intensive application environments. The techniques described in this paper allow for higher possible bandwidth by segregating multiple sequential I/O streams onto separate physical NIC ports on both the VNXe and the ESX hosts, and onto separate sets of physical disks.

Link aggregation can provide additional benefits for the network performance of an application environment. When the highest level of network performance is required, link aggregation through a switch can sustain higher performance if the primary network interface fails: failover within the aggregation group on the switch performs better than failover through the VNXe native Fail-Safe Networking capability.

Link aggregation provides the most benefit when used in combination with NFS storage. Shared folder servers are accessed from only one IP address at a time per client. Link aggregation is required to achieve the best path redundancy for these use cases in the event of a link failure from the VNXe to the switch. Without link aggregation, a link failure will result in I/O being routed through a Fail-Safe Networking (FSN) connection to the corresponding port on the other storage processor (SP).

For iSCSI storage, link aggregation provides only a limited benefit because the same LUN can be accessed from more than one physical interface/IP address per client. The hosts recognize and use all the available paths.

Review the application and environment requirements carefully before implementing link aggregation. In some cases, the complexity of the implementation can outweigh the performance benefits.

Audience

This document is intended for EMC customers, partners, and employees who are considering the use of EMC® Unisphere for VNXe to configure path redundancy for VMware storage environments. It is assumed that the reader is at least an IT generalist who has experience as a system or network administrator.


Scope

This document describes how to configure path redundancy for VMware NFS and VMFS datastores on EMC VNXe storage, with or without the use of link aggregation. The document covers the following procedures:

Configure jumbo frames

Create iSCSI servers for VMFS datastores, or shared folder servers for NFS datastores

Create storage pools

Configure link aggregation, if applicable

Provision datastores

Configure I/O paths and SP access to the datastores

All other tasks, such as VNXe setup, and VMware ESX installation and configuration, are beyond the scope of this document.

Terminology

Unisphere for VNXe – The new management interface for managing EMC VNXe storage systems.

Common Internet File System (CIFS) – An access protocol that allows users to access files and folders from Windows hosts located on a network. User authentication is maintained through Active Directory and file access is determined by directory access controls.

iSCSI – The internet small computer system interface (iSCSI) protocol provides a mechanism for accessing raw block-level data storage over network connections. The iSCSI protocol is based on a network-standard client/server model with iSCSI initiators (hosts) acting as storage clients and iSCSI targets acting as storage servers. Once a connection is established between an iSCSI host and the iSCSI server, the host can request storage resources and services from the server.

Network File System (NFS) – An access protocol that allows users to access files and folders from Linux/UNIX hosts located on a network.

Storage processor (SP) – A hardware component that performs VNXe storage operations such as creating, managing, and monitoring storage resources.


Chapter 1 Introduction

This chapter presents the following topics:

Introduction to the VNXe Series .................................................................. 14

Software suites available .............................................................................. 14

Software packs available .............................................................................. 14

Path redundancy with link aggregation ......................................................... 15


Introduction to the VNXe Series

EMC VNXe series delivers exceptional flexibility for the small-to-medium business user, combining a unique, application-driven management environment with complete consolidation for all IP storage needs. Customers can benefit from new VNXe features such as:

Next-generation unified storage, optimized for virtualized applications.

Capacity optimization features including file deduplication and compression, thin provisioning, and application-consistent snapshots and replicas (only available for VNXe for file).

High availability, designed to deliver five 9s availability.

Multiprotocol support for file and block.

Simplified management with EMC Unisphere™ for a single management interface for all file, block, and replication needs.

The VNXe series includes four new software suites and two new software packs, making it easier and simpler to protect your data.

Software suites available

VNXe Local Protection Suite—Practice safe data protection and repurposing.

VNXe Remote Protection Suite—Protects data against localized failures, outages and disasters.

VNXe Application Protection Suite—Automates application copies and proves compliance.

VNXe Security and Compliance Suite—Keeps data safe from changes, deletions, and malicious activity.

Software packs available

VNXe Total Protection Pack—Includes local, remote and application protection suites.

VNXe Total Value Pack—Includes all three protection software suites and the Security and Compliance Suite (the VNXe3100 exclusively supports this package).


Path redundancy with link aggregation

Using link aggregation on the switches involves some trade-offs.

In a configuration with redundant switches connected by a stacking interconnect, the switches can present as a single logical switch, which allows the use of “cross-stack” link aggregation. In this type of configuration, a connection failure of one of the VNXe ports or a switch failure no longer requires traffic to be rerouted using the VNXe Fail-Safe Networking (FSN) mechanism; failover occurs between the members of the link aggregation group. Performance of the FSN mechanism is somewhat less than performance through a switch.

Load distribution within a link aggregation group is not as predictable as most users assume. Link aggregation limits the amount of control users have over which physical interface is used for a given NFS datastore.

Be aware of the following points about load-balancing through link aggregation:

Link aggregation load-balancing, in general, works better when the number of connections from the server to its clients is large. In a small IP storage configuration with VMware hosts and VNXe storage, the number of connections is usually small. This limits the utility of link aggregation load balancing.

This configuration uses a load-balancing policy of source/destination IP hash on the switch and the ESX host. On the VNXe, the policy is source/destination MAC hash, and it is not settable.

All of these policies are “transmit” policies: the sender of a packet decides which link in a group is used. With different transmit policies on the ESX host, the switch, and the VNXe, it is possible to see load distribution vary depending on data direction, that is, reads versus writes.

The algorithms used to determine which link is used for a given source/destination pair make an arbitrary assignment to a link based on the numerical value of the addresses. It is possible to have a set of source and destination addresses that all resolve to the same link. If this occurs, no load balancing will take place.

For reads from the VNXe, the VNXe determines which link in the group from the VNXe to the switch is used based on an XOR of the source MAC address of the VNXe interface and the destination MAC address of the vmkernel interface. The XOR result is divided by the number of links in the group, and the remainder of the division determines which link is used.

Each link aggregation group on the VNXe has the same MAC address. The MAC address is not settable. The vmkernel interfaces on the ESX hosts each have a unique MAC address that is also not settable.
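As a worked illustration (the MAC values here are hypothetical): in a two-link group, if the XOR of the VNXe group MAC and a vmkernel MAC ends in 0x15 (decimal 21), then 21 modulo 2 leaves 1 and the second link carries reads for that vmkernel interface; an XOR ending in 0x14 (decimal 20) leaves 0 and selects the first link. Because the outcome depends only on the address bits, several vmkernel interfaces can hash to the same link, leaving the other link idle for reads.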


For writes from the ESX hosts to the switch, the load balancing across the links in a group is determined by the ESX host. For writes from the switch to the VNXe the load balancing is determined by the switch. In this example, the algorithm is set to source/destination IP hash. This creates the opportunity for but does not guarantee better load balancing for write I/Os.

For NFS storage, the VMware ESX kernel makes only one TCP connection to the shared folder server for data traffic. There is only ever one logical path to a datastore, and I/O uses only one physical link at a time. Fail-Safe Networking (FSN) is automatically configured on VNXe systems with two storage processors and provides path redundancy in the event of a link failure; link aggregation provides an alternative form of this redundancy.

For iSCSI storage, the ESX kernel can recognize and configure multiple paths to a VMFS datastore. All the paths go to the same storage processor (SP). By default, ESX assigns VNXe devices a path selection policy of “Fixed” and uses only one of the configured paths for I/O until a path failure occurs. The active path for a given VMFS datastore can be set manually. For multiple datastores, use this mechanism to statically distribute the I/O load between the available paths.


Chapter 2 iSCSI without Link Aggregation

This chapter presents the following topics:

Configure multipathing for iSCSI without link aggregation ........................... 18

Overview ....................................................................................................... 18

Configure jumbo frames ................................................................................ 18

Create iSCSI servers ...................................................................................... 19

Create storage pools ..................................................................................... 20

Configure ESX server networking ................................................................... 26

Provision datastores ..................................................................................... 31


Configure multipathing for iSCSI without link aggregation

This section describes how to create a configuration that leverages the native multipathing features in ESX 4.1 and the VNXe storage system to provide path redundancy, and to distribute I/O load across the available resources on the VNXe, the switch, and the ESX server.

Overview

Avamar Virtual Edition (AVE) serves as a real-world example of an application with I/O requirements that necessitate the use of the best practices explained in this document. At the 2 TB license level, the current AVE product offering requires the presentation of 12 virtual disks to a single AVE virtual machine. The AVE application requires substantial concurrent sequential read and write throughput from the VNXe. To meet the necessary aggregate throughput levels, the 12 virtual disks must be presented from datastores that are distributed evenly across all of the disk, storage processor (SP), and port resources available on the VNXe.

For this scenario, run the AVE virtual machine on a two-node ESXi 4.1 cluster. In addition to the AVE instance, run two Windows Server 2008 R2 virtual machines, each presenting 540 GB of storage as CIFS shares. The servers will back up to the AVE instance.

Jumbo frames can provide a small increase in possible throughput.
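To see why the increase is small, consider the per-packet overhead (a rough estimate, not a measured result): roughly 40 bytes of TCP and IP headers represent about 2.7 percent of a standard 1,500-byte frame, but only about 0.4 percent of a 9,000-byte jumbo frame. The saving is real, but modest relative to total throughput.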

Configure jumbo frames

Complete the following steps to configure jumbo frames:

1. Configure the switches so that jumbo frames are enabled for the interfaces that are attached to the data ports on the VNXe, and to the vmkernel ports on the ESX server.

2. Configure the VNXe ports for jumbo frames:

a. Select Settings > More Configuration > Advanced Configuration.

b. Click eth 2.

c. In the MTU Size list box, select 9000.

d. Click Apply Changes.

3. Repeat steps 2b-2d for eth 3.

Figure 1. Set MTU size
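Once the jumbo-frame vmkernel ports are created later in this chapter, end-to-end jumbo support can be checked from the ESX console. The commands below are a hedged sketch: the address is one of the example iSCSI interfaces, 8972 bytes is the largest ICMP payload that fits a 9,000-byte MTU after the 28 bytes of IP and ICMP headers, and the -d (do not fragment) flag, where the installed vmkping supports it, prevents the test from silently succeeding via fragmentation:

esxcfg-vmknic -l

vmkping -d -s 8972 192.168.140.50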


Create iSCSI servers

Two iSCSI servers are required, one on each SP. Each iSCSI server should have two network interfaces. All of the network interfaces should be on separate physical ports. The first iSCSI server uses ports eth2 and eth3 on SP A, and the second iSCSI server uses ports eth2 and eth3 on SP B.

Note: Although the ports on SP A and SP B have the same logical names, they are different physical ports. This configuration creates two physical paths for I/O to each SP.

Complete the following steps to create one iSCSI server on each SP:

1. Create an iSCSI server on each SP:

a. Select Settings > iSCSI Server Settings.

b. Click Add iSCSI Server. The iSCSI Server window appears.

c. In the Server Name field, type a name for the iSCSI server.

Note: Include the SP where the server resides as part of the server name. This will help in later steps.

d. In the IP Address field, type an IP address for the iSCSI server.

e. In the Netmask field, type a netmask for the iSCSI server.

f. In the Gateway field, type a gateway for the iSCSI server.

g. Click Show advanced.

h. In the Storage Processor list box, select SP A.

i. In the Ethernet Port list box, select eth 2 (Link Up).

j. Click Next.

k. Click Finish.

Figure 2. Create iSCSI server


l. Select the new iSCSI server and then click Details. The iSCSI Server Details window appears.

m. In the IP Address field, type a second IP address.

n. Click Show advanced.

o. In the Ethernet Port list box, select a different Ethernet port than the one used for the first network interface.

p. Click Next.

q. Click Finish.

Figure 3. iSCSI server details

2. Repeat steps 1b-1q to create an iSCSI server on SP B.

Note: In step 1i, use the eth2 port on SP B. Although the logical name is the same as SP A, it is a different physical port.

Create storage pools

For this scenario, create custom pools that contain only one disk group. This allows for the creation of datastores segregated onto separate physical spindles. Four storage pools are required. For the AVE data disks, create pools with the 3+3 RAID 10 profile for the best possible throughput. Space requirements dictate that two pools are necessary for the AVE data disks, and one pool each is required to provide storage for the file server virtual machines and the OS disks for both the AVE instance and the W2k8R2 virtual machines. Create the pools for the file server shares and OS instances with the Balanced Perf/Capacity profile. This storage profile uses RAID 5.

Note: The first pool created with drives of the same type as those in slots 0-3 of the DPE will have less space available than pools created later. The system selects drives for pools starting with the lowest numbered drive of the selected disk type in the DPE. Drives 0-3 have approximately 48 GB of space reserved for internal use by the VNXe, and all drives in the first pool will have this amount of space reserved. Take this into account when planning the size of the datastores that will be configured on the first pool created.
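As a hypothetical illustration of the effect (assuming 300 GB drives; actual drive sizes vary): a first 4+1 RAID 5 pool built from drives 0-4 would yield roughly (300 − 48) × 4 ≈ 1008 GB of usable capacity, versus roughly 300 × 4 = 1200 GB for an otherwise identical pool created later.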


The storage pools will consume a total of 22 disks. Two more disks will be used for hot spares. When creating custom pools, disks must be manually added to the hot spare pool.
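As a check on the disk count: two 5-disk Balanced Perf/Capacity pools plus two 6-disk 3+3 RAID 10 pools consume 10 + 12 = 22 disks; with the two hot spares, 24 disks are used in total.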

Complete the following steps to create the storage pools:

1. Select System > Storage Pools.

2. Click Configure Disks.

3. Select Manually create a new pool.

4. In the list box, select by Disk Type.

5. Click Next.

Figure 4. Disk Configuration Wizard – select configuration method

6. In the Name field, type a distinctive name for the storage pool.

7. In the Description field, optionally type a description for the storage pool.

8. Click Next.


Figure 5. Disk Configuration Wizard – pool name and description

9. Select the following drives:

Disk Type: SAS

Storage Profile: Balanced Perf/Capacity

10. Click Next.


Figure 6. Disk Configuration Wizard – specify drive type

11. In the list box, select the option to use five disks.

12. Click Next.


Figure 7. Disk Configuration Wizard – select number of disks

13. Verify the information is correct, and then click Finish.

14. Repeat steps 1-13 to create a second pool with the same settings.

15. Select System > Storage Pools.

16. Click Configure Disks.

17. Select Manually create a new pool.

18. In the list box, select by Disk Type.

19. Click Next.

20. In the Name field, type a distinctive name for the storage pool.

21. In the Description field, optionally type a description for the storage pool.

22. Click Next.

23. Select the following drives:

Disk Type: SAS

Storage Profile: High Performance

24. Click Next.

25. In the list box, select the option to use six disks.

26. Click Next.

27. Verify the information is correct, and then click Finish.


28. Repeat steps 15-27 to create a second pool with the same settings.

Complete the following steps to create two hot spares:

1. Select System > Storage Pools.

2. Click Configure Disks.

3. Select Manually add disks to an existing pool.

4. In the list box, select Hot Spare Pool.

5. Click Next.

Figure 8. Disk Configuration Wizard – add hot spares

6. In the list box, select 2.

7. Click Next.


Figure 9. Disk Configuration Wizard – specify number of spares

8. Verify the information is correct, and then click Finish.

Configure ESX server networking

Configure ESX server networking to spread the I/O load as evenly as possible across all four connected ports on the VNXe, and all four ports on each ESX server. To do this, create two vSwitches. Each vSwitch will have two vmnics and two vmkernel interfaces associated with it. In ESX 4.x, the command line is the only way to create vmkernel interfaces that support jumbo frames.

Run the following commands to create the vSwitch, link the vmnics, add the interfaces, configure the switch to use jumbo frames, and add the IP interfaces that can accept jumbo frames:

Note: In vSphere 5, creation of vSwitches and vmkernel interfaces with jumbo frame support can be done from the GUI.

Note: These steps must be run against each ESX host in the cluster.

1. Run the following commands against the first ESX host:

esxcfg-vswitch.pl -a vSwitch1

esxcfg-vswitch.pl vSwitch1 -L vmnic0

esxcfg-vswitch.pl vSwitch1 -L vmnic2

esxcfg-vswitch.pl vSwitch1 -A Storage-iSCSI-54

esxcfg-vswitch.pl vSwitch1 -A Storage-iSCSI-55

esxcfg-vswitch.pl vSwitch1 -m 9000

esxcfg-vmknic.pl -a -i 192.168.140.54 -n 255.255.255.0 -m 9000 Storage-iSCSI-54


esxcfg-vmknic.pl -a -i 192.168.140.55 -n 255.255.255.0 -m 9000 Storage-iSCSI-55

esxcfg-vswitch.pl -a vSwitch2

esxcfg-vswitch.pl vSwitch2 -L vmnic3

esxcfg-vswitch.pl vSwitch2 -L vmnic4

esxcfg-vswitch.pl vSwitch2 -A Storage-iSCSI-56

esxcfg-vswitch.pl vSwitch2 -A Storage-iSCSI-57

esxcfg-vswitch.pl vSwitch2 -m 9000

esxcfg-vmknic.pl -a -i 192.168.140.56 -n 255.255.255.0 -m 9000 Storage-iSCSI-56

esxcfg-vmknic.pl -a -i 192.168.140.57 -n 255.255.255.0 -m 9000 Storage-iSCSI-57

2. Run the following commands against the other ESX host in the cluster:

esxcfg-vswitch.pl -a vSwitch1

esxcfg-vswitch.pl vSwitch1 -L vmnic0

esxcfg-vswitch.pl vSwitch1 -L vmnic2

esxcfg-vswitch.pl vSwitch1 -A Storage-iSCSI-58

esxcfg-vswitch.pl vSwitch1 -A Storage-iSCSI-59

esxcfg-vswitch.pl vSwitch1 -m 9000

esxcfg-vmknic.pl -a -i 192.168.140.58 -n 255.255.255.0 -m 9000 Storage-iSCSI-58

esxcfg-vmknic.pl -a -i 192.168.140.59 -n 255.255.255.0 -m 9000 Storage-iSCSI-59

esxcfg-vswitch.pl -a vSwitch2

esxcfg-vswitch.pl vSwitch2 -L vmnic3

esxcfg-vswitch.pl vSwitch2 -L vmnic4

esxcfg-vswitch.pl vSwitch2 -A Storage-iSCSI-60

esxcfg-vswitch.pl vSwitch2 -A Storage-iSCSI-61

esxcfg-vswitch.pl vSwitch2 -m 9000

esxcfg-vmknic.pl -a -i 192.168.140.60 -n 255.255.255.0 -m 9000 Storage-iSCSI-60

esxcfg-vmknic.pl -a -i 192.168.140.61 -n 255.255.255.0 -m 9000 Storage-iSCSI-61

Configure the vSwitches so that one vmnic is dedicated to each vmkernel interface. This is required for iSCSI multipathing when using vSwitches with more than one physical NIC.

3. In the vSphere GUI, select Configuration.

4. Click Networking.

5. Select the first vmkernel interface and then click Edit. The vmkernel properties window appears.

6. Click NIC Teaming.

7. Select Override vSwitch failover order.

8. Click Move Down to move one of the adapters to the Unused Adapters area.

9. Click OK.


Figure 10. Configure NIC teaming

10. Repeat steps 5-9 for the second vmkernel interface.

Note: Do not use the same adapter for both vmkernel interfaces.

11. Repeat steps 3-9 for the vSwitches on each of the other ESX hosts.

Create port bindings for each vmkernel interface to the software vmhba instance. In vSphere 4.x, this can only be done with the command line. In vSphere 5 it can also be done in the GUI.

The names used to refer to the vmkernel ports appear to the left of the IP addresses. In this example, the names are vmk1, vmk2, vmk3, and vmk4.


Figure 11. Vmkernel port names

The name of the vmhba instance can be found on the Storage Adapters link on the Configuration tab for each ESX server in the VMware GUI.

Figure 12. Vmhba name

The first ESX host has the software iSCSI device instance at vmhba35. The name that ESX assigns to the software initiator may vary depending on the hardware configuration of the ESX server.
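The adapter name can also be listed from the ESX console with esxcfg-scsidevs, a standard ESX 4.x command; the software iSCSI adapter should appear in the output with an iSCSI Software Adapter description. This is offered as a hedged alternative to the GUI:

esxcfg-scsidevs -a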

12. Run the following commands on the first ESX host:

esxcli swiscsi nic add -n vmk1 -d vmhba35

esxcli swiscsi nic add -n vmk2 -d vmhba35

esxcli swiscsi nic add -n vmk3 -d vmhba35

esxcli swiscsi nic add -n vmk4 -d vmhba35

The second ESX host has the software iSCSI instance at vmhba33.


13. Run the following commands on the second ESX host:

esxcli swiscsi nic add -n vmk1 -d vmhba33

esxcli swiscsi nic add -n vmk2 -d vmhba33

esxcli swiscsi nic add -n vmk3 -d vmhba33

esxcli swiscsi nic add -n vmk4 -d vmhba33
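After the bindings are added on a host, a rescan of the software iSCSI adapter is typically needed before new paths become visible. This is a hedged step; substitute the vmhba name reported on each host:

esxcfg-rescan vmhba33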

14. Run the following command to verify the port bindings. Check the output to make sure that the MTU and the IP are correct for each vmk.

esxcli swiscsi nic list -d vmhba35

vmk1

pNic name: vmnic2

ipv4 address: 192.168.140.54

ipv4 net mask: 255.255.255.0

ipv6 addresses:

mac address: 00:21:9b:9c:d2:eb

mtu: 9000

toe: false

tso: true

tcp checksum: false

vlan: true

vlanId: 0

ethernet speed: 1000

packets received: 25020

packets sent: 4426

NIC driver: bnx2

driver version: 2.0.7d-2vmw

firmware version: 5.0.11 NCSI 2.0.5

vmk2

pNic name: vmnic0

ipv4 address: 192.168.140.55

ipv4 net mask: 255.255.255.0

ipv6 addresses:

mac address: 00:21:9b:9c:d2:e7

mtu: 9000

toe: false

tso: true

tcp checksum: false

vlan: true

vlanId: 0

ethernet speed: 1000

packets received: 24006

packets sent: 3976

NIC driver: bnx2

driver version: 2.0.7d-2vmw

firmware version: 5.0.11 NCSI 2.0.5

vmk3

pNic name: vmnic4

ipv4 address: 192.168.140.56

ipv4 net mask: 255.255.255.0

ipv6 addresses:

mac address: 00:10:18:91:c3:3c

mtu: 9000


toe: false

tso: true

tcp checksum: false

vlan: true

vlanId: 0

ethernet speed: 1000

packets received: 116545

packets sent: 96174

NIC driver: bnx2

driver version: 2.0.7d-2vmw

firmware version: 5.0.6

vmk4

pNic name: vmnic3

ipv4 address: 192.168.140.57

ipv4 net mask: 255.255.255.0

ipv6 addresses:

mac address: 00:21:9b:9c:d2:ed

mtu: 9000

toe: false

tso: true

tcp checksum: false

vlan: true

vlanId: 0

ethernet speed: 1000

packets received: 21836

packets sent: 11

NIC driver: bnx2

driver version: 2.0.7d-2vmw

firmware version: 5.0.11 NCSI 2.0.5

Add the vCenter server to the Virtualization Hosts table.

15. Select Hosts > VMware.

16. Click Find ESX Hosts.

17. Click Next.

18. Specify the credentials for the vCenter server:

a. In the User Name field, type the username for the vCenter server.

b. In the Password field, type the password for the vCenter server.

19. Click OK.

20. Click Next.

21. Click Close.

Provision datastores

The AVE instance requires twelve 250 GB virtual disks. Four datastores are required in order to distribute the I/O load evenly across two paths to each SP. Create two datastores on each of the two High Performance RAID 10 pools, for a total of four datastores. Create three vdisks on each datastore.
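Four datastores with three vdisks each supply the 12 virtual disks that AVE requires, and placing two datastores on each SP's pool keeps the load divided evenly between the SPs.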

The two Balanced Perf/Capacity pools created earlier are not the same size. The first one created is smaller because it is created on the drives in the DPE that have part of their space reserved for internal use by the VNXe. There should be a roughly equal amount of space available to each of the W2k8R2 server instances. Provision the system drives for the AVE instance and the server instances from the larger pool. Each server instance will have one large additional vdisk provisioned to it for use as a CIFS share.

Provision a datastore on SP A from the first RAID 10 pool:

1. Select Storage > VMware.

2. Click Create.

3. In the Name field, type a name for the datastore.

4. In the Description field, optionally type a description for the datastore.

5. Click Next.

Figure 13. VMware Storage Wizard – specify datastore name

6. Select Virtual Disk (VMFS).

7. Click Next.

Figure 14. VMware Storage Wizard – provision VMFS datastore


8. Select the pool and server combination that corresponds to the larger RAID 10 storage pool and the iSCSI server on SP A.

9. In the Size field, type 764, and then select GB from the list box to the right.

764 GB allows enough space for three 250 GB vdisks while accounting for some VMFS overhead.
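The arithmetic: 3 × 250 GB = 750 GB, leaving 14 GB of the 764 GB for VMFS metadata and other per-datastore overhead.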

10. Click Next.

Figure 15. VMware Storage Wizard – select storage pool and iSCSI server

11. Select Do not configure protection storage for this storage resource.

All backup is handled by AVE so no protection storage is required.

12. Click Next.


Figure 16. VMware Storage Wizard – select protection

13. Click the host that will have access to the datastore.

14. In the Access list box for the software iSCSI initiator, select Datastore.

Compare the IQNs in the list with the IQNs on the Storage Adapters window in the vSphere Client to determine which IQN is the software iSCSI initiator. In this example, the host named npi-node-01 has a Broadcom NIC that has TCP and iSCSI offload capability. When a device of this type is installed, ESX 4.1 and later will create dependent hardware iSCSI adapter device instances for each NIC port. Each dependent hardware iSCSI adapter device instance has a distinct IQN. This example does not use the dependent device instances. Provide access to the IQN for the software iSCSI initiator only.

15. Click Next.


Figure 17. VMware Storage Wizard – specify host access

16. Review the details and then click Finish.

17. Repeat steps 1-16 to create a second datastore with the same settings as the first.

18. Follow steps 1-16 to create two datastores on the second RAID 10 High Performance pool and the iSCSI server on SP B.

VMware assigns a path selection policy of Fixed (VMware) to a VNXe iSCSI datastore. In the sample configuration, there are four vmkernel interfaces that VMware can use to connect to the two IP interfaces on each VNXe iSCSI server instance for any given datastore. The VMware software iSCSI initiator creates a path for each connection from each vmkernel interface to each target portal; in this case there are eight paths per datastore (four vmkernel interfaces × two target portals). With the Fixed policy, the selection of the preferred path is random and not usually optimal.

Set the preferred path for each datastore in order to meet the AVE I/O requirements:

1. Navigate to the storage page in the VMware GUI.

2. Highlight the first AVE datastore.

3. Click Properties.

4. Click Manage Paths.


Figure 18. Manage paths

The Paths window shows eight paths. Each path is assigned a Runtime Name in the form vmhbaww:Cx:Ty:Lz.

Figure 19. Specify path

The first part, vmhbaww, is the adapter name. For a device on the software iSCSI adapter, this is always the name of the adapter, in this case vmhba35. The index value in Cx is assigned in the order found.


The index assignment starts with the connection from the lowest numbered vmk interface to the lowest numbered target portal IP interface. In this example there are four interfaces, vmk2, vmk3, vmk4 and vmk5. Table 1 shows the relationship between the interfaces and the runtime names for this datastore.

Table 1. Network interface/runtime name relationship

Runtime Name       From vmk interface   To VNXe IP interface
vmhba35:C0:T0:L0   vmk2                 192.168.140.50
vmhba35:C1:T0:L0   vmk3                 192.168.140.50
vmhba35:C2:T0:L0   vmk4                 192.168.140.50
vmhba35:C3:T0:L0   vmk5                 192.168.140.50
vmhba35:C4:T0:L0   vmk2                 192.168.140.51
vmhba35:C5:T0:L0   vmk3                 192.168.140.51
vmhba35:C6:T0:L0   vmk4                 192.168.140.51
vmhba35:C7:T0:L0   vmk5                 192.168.140.51
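The same mapping can be confirmed from the ESX console. As a sketch, assuming the standard ESX 4.1 multipath utilities, the brief listing shows the runtime name of every path along with its device:

esxcfg-mpath -b

The long form, esxcfg-mpath -l, shows additional detail for each path, including the target IQN.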

Set the preferred paths of the four datastores:

1. In the Properties window for the first datastore, right-click vmhba35:C0:T0:L0 and select Preferred.

2. In the Properties window for the second datastore, right-click vmhba35:C5:T0:L1 and select Preferred.

3. In the Properties window for the third datastore, right-click vmhba35:C3:T1:L0 and select Preferred.

4. In the Properties window for the fourth datastore, right-click vmhba35:C7:T0:L0 and select Preferred.

Each AVE datastore resides on a separate path. The datastores are divided evenly between the SPs.
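These preferred-path assignments can also be made from the command line. A minimal sketch, assuming the ESX 4.1 esxcli NMP namespace and a placeholder device ID (substitute the actual naa identifier of each datastore's LUN):

esxcli nmp fixed setpreferred -d <naa.device_id> -p vmhba35:C0:T0:L0
esxcli nmp device list -d <naa.device_id>

The second command reports the path selection policy and the configured preferred path for the device.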

Create the remaining datastores for the CIFS shares and system disks on the W2k8r2 servers:

1. Follow the steps above to create the datastore for the CIFS share on the first Windows server from the first Balanced Perf/Capacity storage pool on SP A.

2. Follow the steps above to create the datastore for the CIFS share on the second Windows server from the second Balanced Perf/Capacity storage pool on SP B.

3. Follow the steps above to create the datastore for the system drives from the second Balanced Perf/Capacity storage pool on SP B.

4. Set the paths for the CIFS server and system disk datastores to match the paths specified for the AVE datastores.


The preferred path for each of the file server datastores should be set to a different path. Since the I/O requirements for the OS instance are relatively low, the path selection can be left at the default value.


Chapter 3 iSCSI with Link Aggregation

This chapter presents the following topics:

Configure multipathing for iSCSI with link aggregation

Overview

Configure the switches to support LACP

Configure jumbo frames and link aggregation groups

Create iSCSI servers

Create storage pools

Configure switch ports for link aggregation

Configure ESX server networking

Add the vCenter server to the Virtualization Hosts table

Provision datastores


Configure multipathing for iSCSI with link aggregation

This section describes how to create a configuration that leverages the native multipathing features in ESX 4.1 and the VNXe storage system, along with link aggregation, to provide path redundancy and to distribute the I/O load across the available resources on the VNXe, the switch, and the ESX server.

Overview

Avamar Virtual Edition (AVE) serves as a real-world example of an application with I/O requirements that necessitate the use of the best practices explained in this document. At the 2 TB license level, the current AVE product offering requires the presentation of 12 virtual disks to a single AVE virtual machine. The AVE application requires substantial concurrent sequential read and write throughput from the VNXe. To meet the necessary aggregate throughput levels, the 12 virtual disks must be presented from datastores that are distributed evenly across all of the disk, storage processor (SP), and port resources available on the VNXe.

For this scenario, run the virtual machine on a two-node ESXi 4.1 cluster. In addition to the AVE instance, deploy two Windows 2008 R2 server virtual machines, each presenting 540 GB of storage as CIFS shares. The servers will back up to the AVE instance.

Jumbo frames can provide a small increase in achievable throughput.

Configure the switches to support LACP

Link Aggregation Control Protocol (LACP) must be configured on the switches before link aggregation can be configured on the corresponding VNXe ports. For a dual-SP system, configure two link aggregation groups: one for ports eth2 and eth3 on SP A, and one for ports eth2 and eth3 on SP B.

In this example, eth2 and eth3 on SP A are connected to ports 10 and 17 on a Cisco switch. Ports eth2 and eth3 on SP B are connected to ports 4 and 6 on the switch.

Run the following commands to configure LACP:

interface Port-channel3
 switchport trunk encapsulation dot1q
 switchport mode trunk
interface Port-channel4
 switchport trunk encapsulation dot1q
 switchport mode trunk
interface GigabitEthernet0/10
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 3 mode active
interface GigabitEthernet0/17
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 3 mode active
interface GigabitEthernet0/4
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 4 mode active
interface GigabitEthernet0/6
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 4 mode active
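After the VNXe ports come up, the state of the aggregation can be confirmed on the switch. A sketch for Cisco IOS:

show etherchannel summary
show lacp neighbor

Both port-channels should show their member ports bundled (flag P) before proceeding.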

Configure jumbo frames and link aggregation groups

Creating a link aggregation group on ports eth2 and eth3 in the GUI creates two link aggregation groups on a dual-SP system. Two corresponding link aggregation groups must be created on the switch side: one for ports eth2 and eth3 on SP A, and one for ports eth2 and eth3 on SP B.

Complete the following steps to configure jumbo frames and link aggregation groups:

1. Configure the switches so that jumbo frames are enabled for the interfaces that are attached to the data ports on the VNXe, and to the vmkernel ports on the ESX server.

2. Configure the VNXe ports for jumbo frames:

a. Select Settings > More Configuration > Advanced Configuration.

b. Click eth2.

c. In the MTU Size list box, select 9000.

d. Click Apply Changes.

e. Click eth3.

f. In the MTU Size list box, select 9000.

g. Select Aggregate with eth2.

h. Click Apply Changes.


Figure 20. Set eth2 MTU


Figure 21. Aggregate eth3 with eth2

Note: Setting the MTU on one of the ports, eth2 for example, sets the MTU value for the eth2 port on both SP A and SP B. The MTU size on the VNXe ports, the physical switch ports, and the vSwitches and vmkernel interfaces on the ESX host must match.
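The switch-side jumbo frame setting from step 1 varies by platform; on a Cisco Catalyst 3750, for example, it is a global setting that takes effect only after a reload:

system mtu jumbo 9000

End-to-end jumbo frame support can then be checked from the ESX console with vmkping, using a payload of 8972 bytes (9000 bytes minus 20 for the IP header and 8 for ICMP) and the don't-fragment flag:

vmkping -d -s 8972 192.168.140.50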

Create iSCSI servers

Two iSCSI servers are required, one on each SP. Each iSCSI server should have two network interfaces, and all of the network interfaces should be on separate physical ports. The first iSCSI server uses ports eth2 and eth3 on SP A, and the second iSCSI server uses ports eth2 and eth3 on SP B.

Note: Although the ports on SP A and SP B have the same logical names, they are different physical ports. This configuration creates two physical paths for I/O to each SP.

Complete the following steps to create one iSCSI server on each SP:

1. Create an iSCSI server on each SP:

a. Select Settings > iSCSI Server Settings.

b. Click Add iSCSI Server. The iSCSI Server window appears.

c. In the Server Name field, type a name for the iSCSI server.


Note: Include the SP where the server resides as part of the server name. This will help in later steps.

d. In the IP Address field, type an IP address for the iSCSI server.

e. In the Netmask field, type a netmask for the iSCSI server.

f. In the Gateway field, type a gateway for the iSCSI server.

g. Click Show advanced.

h. In the Storage Processor list box, select SP A.

i. In the Ethernet Port list box, select eth2/eth3 (Link Up).

j. Click Next.

k. Click Finish.

Figure 22. Create iSCSI server with link aggregation

l. Select the new iSCSI server and then click Details. The iSCSI Server Details window appears.

m. In the IP Address field, type a second IP address.

n. Click Show advanced.

o. In the Ethernet Port list box, select the same Ethernet port.


p. Click Next.

q. Click Finish.

Figure 23. iSCSI server details

2. Repeat steps 1b-1q to create an iSCSI server on SP B.

Note: When repeating for SP B, select SP B in step 1h, and in step 1i use the same Ethernet port that was used for the iSCSI server on SP A.

Create storage pools

Create custom pools that contain only one disk group. This allows for the creation of datastores segregated onto separate physical spindles. Four storage pools are required. For the AVE data disks, create pools with the 3+3 RAID 10 profile for the best possible throughput. Space requirements dictate that two pools are necessary for the AVE data disks, and one pool each is required to provide storage for the file server virtual machines and for the OS disks of both the AVE instance and the W2k8R2 virtual machines.

The storage pools will consume a total of 22 disks. Two more disks will be used for hot spares. When creating custom pools, disks must be manually added to the hot spare pool.

Complete the following steps to create the storage pools:

1. Select System > Storage Pools.

2. Click Configure Disks.

3. Select Manually create a new pool.

4. In the list box, select by Disk Type.

5. Click Next.


Figure 24. Disk Configuration Wizard – select configuration method

6. In the Name field, type a distinctive name for the storage pool.

7. In the Description field, optionally type a description for the storage pool.

8. Click Next.

Figure 25. Disk Configuration Wizard – pool name and description

9. Select the following drives:

Disk Type: SAS


Storage Profile: Balanced Perf/Capacity

10. Click Next.

Figure 26. Disk Configuration Wizard – specify drive type

11. In the list box, select the option to use five disks.

12. Click Next.


Figure 27. Disk Configuration Wizard – select number of disks

13. Verify the information is correct, and then click Finish.

14. Repeat steps 1-13 to create a second pool with the same settings.

15. Select System > Storage Pools.

16. Click Configure Disks.

17. Select Manually create a new pool.

18. In the list box, select by Disk Type.

19. Click Next.

20. In the Name field, type a distinctive name for the storage pool.

21. In the Description field, optionally type a description for the storage pool.

22. Click Next.

23. Select the following drives:

Disk Type: SAS

Storage Profile: High Performance

24. Click Next.

25. In the list box, select the option to use six disks.

26. Click Next.

27. Verify the information is correct, and then click Finish.


28. Repeat steps 15-27 to create a second pool with the same settings.

Complete the following steps to create two hot spares:

1. Select System > Storage Pools.

2. Click Configure Disks.

3. Select Manually add disks to an existing pool.

4. In the list box, select Hot Spare Pool.

5. Click Next.

Figure 28. Disk Configuration Wizard – add hot spares

6. In the list box, select 2.

7. Click Next.


Figure 29. Disk Configuration Wizard – specify number of spares

8. Verify the information is correct, and then click Finish.

Configure switch ports for link aggregation

Link aggregation between the ESX hosts and the switch is also required. Each host has two vSwitches dedicated to storage access, and each vSwitch has two physical NICs associated with it. Create two link aggregation groups, one for the port pair on each of the two vSwitches dedicated to storage access. Note the difference in the channel-group command for ports connected to the ESX host: these use “mode on”, while ports connected to the VNXe should be configured for “mode active”.

For a Cisco 3750 switch, run the following commands to configure the switch ports connected to the ESX hosts for link aggregation:

interface Port-channel5
 switchport trunk encapsulation dot1q
 switchport mode trunk
interface Port-channel6
 switchport trunk encapsulation dot1q
 switchport mode trunk
interface GigabitEthernet0/3
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 5 mode on
interface GigabitEthernet0/9
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 5 mode on
interface GigabitEthernet0/13
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 6 mode on
interface GigabitEthernet0/14
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 6 mode on

Configure ESX server networking

Configure ESX server networking to spread the I/O load as evenly as possible across all four connected ports on the VNXe and all four ports on each ESX server. To do this, create two vSwitches, each with two vmnics and two vmkernel interfaces associated with it. In ESX 4.x, the command line is the only way to create vmkernel interfaces with jumbo frames enabled.

Run the following commands to create the vSwitch, link the vmnics, add the interfaces, configure the switch to use jumbo frames, and add the IP interfaces that can accept jumbo frames:

Note: In vSphere 5, this can be done from the GUI.

Note: These steps must be run against each ESX host in the cluster.

1. Run the following commands against the first ESX host:

esxcfg-vswitch.pl -a vSwitch1
esxcfg-vswitch.pl vSwitch1 -L vmnic0
esxcfg-vswitch.pl vSwitch1 -L vmnic2
esxcfg-vswitch.pl vSwitch1 -A Storage-iSCSI-54
esxcfg-vswitch.pl vSwitch1 -A Storage-iSCSI-55
esxcfg-vswitch.pl vSwitch1 -m 9000
esxcfg-vmknic.pl -a -i 192.168.140.54 -n 255.255.255.0 -m 9000 Storage-iSCSI-54
esxcfg-vmknic.pl -a -i 192.168.140.55 -n 255.255.255.0 -m 9000 Storage-iSCSI-55
esxcfg-vswitch.pl -a vSwitch2
esxcfg-vswitch.pl vSwitch2 -L vmnic3
esxcfg-vswitch.pl vSwitch2 -L vmnic4
esxcfg-vswitch.pl vSwitch2 -A Storage-iSCSI-56
esxcfg-vswitch.pl vSwitch2 -A Storage-iSCSI-57
esxcfg-vswitch.pl vSwitch2 -m 9000
esxcfg-vmknic.pl -a -i 192.168.140.56 -n 255.255.255.0 -m 9000 Storage-iSCSI-56
esxcfg-vmknic.pl -a -i 192.168.140.57 -n 255.255.255.0 -m 9000 Storage-iSCSI-57

2. Run the following commands against the other ESX hosts in the cluster:

esxcfg-vswitch.pl -a vSwitch1
esxcfg-vswitch.pl vSwitch1 -L vmnic0
esxcfg-vswitch.pl vSwitch1 -L vmnic2
esxcfg-vswitch.pl vSwitch1 -A Storage-iSCSI-58
esxcfg-vswitch.pl vSwitch1 -A Storage-iSCSI-59
esxcfg-vswitch.pl vSwitch1 -m 9000
esxcfg-vmknic.pl -a -i 192.168.140.58 -n 255.255.255.0 -m 9000 Storage-iSCSI-58
esxcfg-vmknic.pl -a -i 192.168.140.59 -n 255.255.255.0 -m 9000 Storage-iSCSI-59
esxcfg-vswitch.pl -a vSwitch2
esxcfg-vswitch.pl vSwitch2 -L vmnic3
esxcfg-vswitch.pl vSwitch2 -L vmnic4
esxcfg-vswitch.pl vSwitch2 -A Storage-iSCSI-60
esxcfg-vswitch.pl vSwitch2 -A Storage-iSCSI-61
esxcfg-vswitch.pl vSwitch2 -m 9000
esxcfg-vmknic.pl -a -i 192.168.140.60 -n 255.255.255.0 -m 9000 Storage-iSCSI-60
esxcfg-vmknic.pl -a -i 192.168.140.61 -n 255.255.255.0 -m 9000 Storage-iSCSI-61
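The resulting network configuration can be verified on each host before proceeding (a sketch, using the same vSphere CLI utilities):

esxcfg-vswitch.pl -l
esxcfg-vmknic.pl -l

The first command lists each vSwitch with its uplinks, port groups, and MTU; the second lists each vmkernel interface with its IP address and MTU.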

Configure the vSwitches for link aggregation.

3. In the vSphere GUI, select Configuration.

4. Click Networking.

5. Select the vSwitch and then click Edit. The vSwitch properties window appears.

6. Click NIC Teaming.

7. In the Load Balancing list box, select Route based on IP hash.

8. Verify that both vmnic adapters are listed in the Active Adapters area.

9. Click OK.


Figure 30. Select load balancing and active adapters

10. Select the first vmkernel interface and then click Edit. The vmkernel properties window appears.

11. Click NIC Teaming.

12. Verify that Override vSwitch failover order is not selected.

13. Click OK.


Figure 31. Do not override vSwitch failover order

14. Repeat steps 10-13 for the second vmkernel interface.

15. Repeat steps 3-14 for the vSwitches on each of the other ESX hosts.

Add the vCenter server to the Virtualization Hosts table

1. Select Hosts > VMware.

2. Click Find ESX Hosts.

3. Click Next.

4. Specify the credentials for the vCenter server:

a. In the User Name field, type the username for the vCenter server.

b. In the Password field, type the password for the vCenter server.

5. Click OK.

6. Click Next.

7. Click Close.


Provision datastores

The AVE instance requires twelve 250 GB virtual disks. Four datastores are required in order to distribute the I/O load evenly across two paths to each SP. Create two datastores on each of the two High Performance RAID 10 pools, for a total of four datastores. Create three vdisks on each datastore.

The two Balanced Perf/Capacity pools created earlier are not the same size; the first one is smaller because it is created on the drives in the DPE that have part of their space reserved for internal use by the VNXe. There should be a roughly equal amount of space available on each of the W2k8r2 server instances. Provision the system drives for the AVE instance and the server instances from the larger pool. Each server instance will have one large additional vdisk provisioned to it for use as a CIFS share.

Provision a datastore on SP A from the first RAID 10 pool:

1. Select Storage > VMware.

2. Click Create.

3. In the Name field, type a name for the datastore.

4. In the Description field, optionally type a description for the datastore.

5. Click Next.

Figure 32. VMware Storage Wizard – specify datastore name

6. Select Virtual Disk (VMFS).

7. Click Next.


Figure 33. VMware Storage Wizard – provision VMFS datastore

8. Select the pool and server combination that corresponds to the larger RAID 10 storage pool and the iSCSI server on SP A.

9. In the Size field, type 764, and then select GB from the list box to the right.

764 GB allows enough space for three 250 GB vdisks (3 × 250 GB = 750 GB), leaving roughly 14 GB for VMFS overhead.

10. Click Next.

Figure 34. VMware Storage Wizard – select storage pool and iSCSI server

11. Select Do not configure protection storage for this storage resource.

All backup is handled by AVE, so no protection storage is required.

12. Click Next.


Figure 35. VMware Storage Wizard – select protection

13. Click the host that will have access to the datastore.

14. In the Access list box for the software iSCSI initiator, select Datastore.

Compare the IQNs in the list with the IQNs on the Storage Adapters window in the vSphere Client to determine which IQN is the software iSCSI initiator. In this example, the host named npi-node-01 has a Broadcom NIC that has TCP and iSCSI offload capability. When a device of this type is installed, ESX 4.1 and later will create dependent hardware iSCSI adapter device instances for each NIC port. Each dependent hardware iSCSI adapter device instance has a distinct IQN. This example does not use the dependent device instances. Provide access to the IQN for the software iSCSI initiator only.

15. Click Next.


Figure 36. VMware Storage Wizard – specify host access

16. Review the details and then click Finish.

17. Repeat steps 1-16 to create a second datastore with the same settings as the first.

18. Follow steps 1-16 to create two datastores on the second RAID 10 High performance pool and the iSCSI server on SP B.

Link aggregation limits the amount of control the user has over which physical interfaces are used for a given iSCSI datastore.

VMware assigns a path selection policy of Fixed(VMware) to a VNXe iSCSI datastore. In the sample configuration, there are four vmkernel interfaces that VMware can use to connect to the two IP interfaces on each VNXe iSCSI server instance for any given datastore.

In this example, the connections from the ESX hosts to the switch use link aggregation. When configured this way, the vmkernel will discover fewer paths. When there is more than one active adapter for a vmkernel interface, port bindings cannot be configured. Without port bindings, if all vmkernel interfaces are on the same subnet, only one interface will be used. With two iSCSI interfaces on the VNXe, only two paths will be discovered even though there are four vmkernel interfaces.
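The reduced path count can be confirmed per device from the ESX console (a sketch):

esxcfg-mpath -b

With link aggregation in place, each VNXe iSCSI datastore should show two paths rather than eight.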

With the Fixed policy, the selection of the preferred path is random and not usually optimal.

In the VMware GUI, select <First_AVE_datastore> > Properties > Manage Paths.


Figure 37. Manage paths for the first AVE datastore

In this configuration, all of the datastores have the path with runtime name vmhba33:C1:T0:L0 assigned as the preferred path by default. With a Path Selection policy of “Fixed”, all I/O is sent to the preferred path.

The link aggregation Load Balancing policy for the vSwitch in this configuration is set to “Route based on IP hash”. With the default path selections all I/O will be sent from one IP address. In a link aggregation group with this policy, the possible load balancing across the available links is determined in part by the number of source and destination IP pairs.

In this example, the four datastores to be used as AVE data disks are the most performance critical. To get the best load balancing possible, set the preferred path to the other interface for two of the four datastores.

Right-click the path to use, click Preferred, and then verify that the desired path is marked Preferred.

Be aware of the following points about load-balancing through link aggregation:

Link aggregation load-balancing, in general, works better when the number of connections from the server to its clients is large. In a small IP storage configuration with VMware hosts and VNXe storage, the number of connections is usually small. This limits the utility of link aggregation load balancing.

This configuration uses load balancing policies on the switch and the ESX host of source/destination IP hash. On the VNXe, the policy is source/destination MAC hash and is not settable.

All of these policies are “transmit” policies: the sender of a packet decides which link in a group is used. With different transmit policies on the ESX host, switch, and VNXe, it is possible to see load distribution vary depending on data direction, that is, reads versus writes.

The algorithms used to determine which link is used for a given source/destination pair make an arbitrary assignment to a link based on the numerical value of the addresses. It is possible to have a set of source and destination addresses that all resolve to the same link. If this occurs, no load balancing will take place.

For reads from the VNXe, the VNXe determines which link in the group from the VNXe to the switch is used based on the result of an XOR of the source MAC of the VNXe interface and the destination MAC of the vmkernel interface. The result of the XOR is divided by the number of links in the group. The remainder of the division yields a value that determines which link is used.
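As a worked sketch of that arithmetic: a remainder modulo 2 depends only on the least significant bit, so for a two-link group the last octet of each MAC address is enough to illustrate the selection. Assuming hypothetical MAC addresses ending in 0x4C (the VNXe aggregation group) and 0xA7 (a vmkernel interface):

# hypothetical last octets: XOR them, then take the remainder modulo the link count (2)
echo $(( (0x4C ^ 0xA7) % 2 ))   # prints 1, so this source/destination pair uses the second link

Changing either MAC address changes the result, which is why adding source/destination pairs creates the opportunity for better distribution.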

Each link aggregation group on the VNXe has the same MAC address, which is not settable. The vmkernel interfaces on the ESX hosts each have a unique MAC address that is also not settable.

For writes from the ESX hosts to the switch, the load balancing across the links in a group is determined by the ESX host. For writes from the switch to the VNXe, the load balancing is determined by the switch. In this example, the algorithm is set to source/destination IP hash. Setting the preferred paths so that half of the critical datastores use one “controller” and half use the other causes an additional source/destination IP pair to be used for two of the connections. This creates the opportunity for, but does not guarantee, better load balancing for write I/Os.

Create the remaining datastores for the CIFS shares and system disks on the W2k8r2 servers:

1. Follow the steps above to create the datastore for the CIFS share on the first Windows server from the first Balanced Perf/Capacity storage pool on SP A.

2. Follow the steps above to create the datastore for the CIFS share on the second Windows server from the second Balanced Perf/Capacity storage pool on SP B.

3. Follow the steps above to create the datastore for the system drives from the second Balanced Perf/Capacity storage pool on SP B.

4. Set the paths for the CIFS server and system disk datastores to match the paths specified for the AVE datastores.

The preferred path for each of the file server datastores should be set to a different path. Since the I/O requirements for the OS instance are relatively low, the path selection can be left at the default value.


Chapter 4 NFS without Link Aggregation

This chapter presents the following topics:

Configure path redundancy for NFS without link aggregation

Overview

Configure ESX server networking

Create shared folder servers

Create storage pools

Add the vCenter server to the Virtualization Hosts table

Provision datastores


Configure path redundancy for NFS without link aggregation

This section describes how to create a configuration that leverages NIC teaming on the ESX hosts and Failsafe Networking functionality on the VNXe to provide path redundancy, and to distribute the I/O load across the available resources on the VNXe, the switch, and the ESX server.

Overview

Avamar Virtual Edition (AVE) serves as a real-world example of an application with I/O requirements that necessitate the use of the best practices explained in this document. At the 2 TB license level, the current AVE product offering requires the presentation of 12 virtual disks to a single AVE virtual machine. The AVE application requires substantial concurrent sequential read and write throughput from the VNXe. To meet the necessary aggregate throughput levels, the 12 virtual disks must be presented from datastores that are distributed evenly across all of the disk, storage processor (SP), and port resources available on the VNXe.

Run the virtual machine on a two-node ESXi 4.1 cluster. In addition to the AVE instance, deploy two Windows 2008 R2 server virtual machines, each presenting 540 GB of storage as CIFS shares. The servers will back up to the AVE instance.

Configure ESX server networking

Configure ESX server networking to spread the I/O load as evenly as possible across all four connected ports on the VNXe and all four ports on each ESX server. To do this, create two vSwitches, each with two vmnics and two vmkernel interfaces associated with it. In ESX 4.x, the command line is the only way to create vmkernel interfaces with jumbo frames enabled.

Run the following commands to create the vSwitch, link the vmnics, add the interfaces, configure the switch to use jumbo frames, and add the IP interfaces that can accept jumbo frames:

Note: In vSphere 5, this can be done from the GUI.

Note: These steps must be run against each ESX host in the cluster.

1. Run the following commands against the first ESX host:

esxcfg-vswitch.pl -a vSwitch1
esxcfg-vswitch.pl vSwitch1 -L vmnic0
esxcfg-vswitch.pl vSwitch1 -L vmnic2
esxcfg-vswitch.pl vSwitch1 -A Storage-NFS-140
esxcfg-vswitch.pl vSwitch1 -A Storage-NFS-141
esxcfg-vswitch.pl vSwitch1 -m 9000
esxcfg-vmknic.pl -a -i 192.168.140.55 -n 255.255.255.0 -m 9000 Storage-NFS-140
esxcfg-vmknic.pl -a -i 192.168.141.55 -n 255.255.255.0 -m 9000 Storage-NFS-141
esxcfg-vswitch.pl -a vSwitch2
esxcfg-vswitch.pl vSwitch2 -L vmnic3
esxcfg-vswitch.pl vSwitch2 -L vmnic4
esxcfg-vswitch.pl vSwitch2 -A Storage-NFS-142
esxcfg-vswitch.pl vSwitch2 -A Storage-NFS-143
esxcfg-vswitch.pl vSwitch2 -m 9000
esxcfg-vmknic.pl -a -i 192.168.142.55 -n 255.255.255.0 -m 9000 Storage-NFS-142
esxcfg-vmknic.pl -a -i 192.168.143.55 -n 255.255.255.0 -m 9000 Storage-NFS-143

2. Run the following commands against the second ESX host:

esxcfg-vswitch.pl -a vSwitch1
esxcfg-vswitch.pl vSwitch1 -L vmnic0
esxcfg-vswitch.pl vSwitch1 -L vmnic2
esxcfg-vswitch.pl vSwitch1 -A Storage-NFS-140
esxcfg-vswitch.pl vSwitch1 -A Storage-NFS-141
esxcfg-vswitch.pl vSwitch1 -m 9000
esxcfg-vmknic.pl -a -i 192.168.140.54 -n 255.255.255.0 -m 9000 Storage-NFS-140
esxcfg-vmknic.pl -a -i 192.168.141.54 -n 255.255.255.0 -m 9000 Storage-NFS-141
esxcfg-vswitch.pl -a vSwitch2
esxcfg-vswitch.pl vSwitch2 -L vmnic3
esxcfg-vswitch.pl vSwitch2 -L vmnic4
esxcfg-vswitch.pl vSwitch2 -A Storage-NFS-142
esxcfg-vswitch.pl vSwitch2 -A Storage-NFS-143
esxcfg-vswitch.pl vSwitch2 -m 9000
esxcfg-vmknic.pl -a -i 192.168.142.54 -n 255.255.255.0 -m 9000 Storage-NFS-142
esxcfg-vmknic.pl -a -i 192.168.143.54 -n 255.255.255.0 -m 9000 Storage-NFS-143

Configure the vSwitches so that one vmnic is dedicated to each vmkernel interface. This is required for NFS multipathing.

3. In the vSphere GUI, select Configuration.

4. Click Networking.

5. Select the first vmkernel interface and then click Edit. The vmkernel properties window appears.

6. Click NIC Teaming.

7. Select Override vSwitch failover order.

8. Click Move Down to move one of the adapters to the Standby Adapters area.

9. Click OK.


Figure 38. Set active adapter

10. Repeat steps 5-9 for the second vmkernel interface.

Note: Do not use the same adapter for both vmkernel interfaces.

11. Repeat steps 3-10 for the vSwitches on each of the other ESX hosts.

Create shared folder servers

Complete the following steps to create two shared folder servers on each SP:

1. Create two shared folder servers on each SP:

a. Select Settings > Shared Folder Server Settings.

b. Click Add Shared Folder Server. The Shared Folder Server window appears.

c. In the Server Name field, type a name for the shared folder server.

Note: Include the SP where the server resides as part of the server name. This will help in later steps.

d. In the IP Address field, type an IP address for the shared folder server.

e. In the Netmask field, type a netmask for the shared folder server.

f. In the Gateway field, type a gateway for the shared folder server.

g. Click Show advanced.

h. In the Storage Processor list box, select SP A.


i. In the Ethernet Port list box, select eth2 (Link Up).

j. Click Next.

Figure 39. Create shared folder server

k. Select Linux/Unix shares (NFS).

l. Click Next.


Figure 40. Set shared folder for NFS

m. Click Finish.

n. Repeat steps b-m to create a second shared folder server on SP A, using eth3.

o. Repeat steps b-n to create shared folder servers on SP B, using eth2 and eth3.

Create storage pools

Create custom pools that contain only one disk group. This allows for the creation of datastores segregated onto separate physical spindles. Four storage pools are required. For the AVE data disks, create pools with the 3+3 RAID 10 profile for the best possible throughput. Space requirements dictate that two pools are necessary for the AVE data disks, and one pool each is required to provide storage for the file server virtual machines and for the OS disks of both the AVE instance and the W2k8R2 virtual machines.

The storage pools will consume a total of 22 disks. Two more disks will be used for hot spares. When creating custom pools, disks must be manually added to the hot spare pool.

Complete the following steps to create the storage pools:

1. Select System > Storage Pools.

2. Click Configure Disks.

3. Select Manually create a new pool.

4. In the list box, select by Disk Type.


5. Click Next.

Figure 41. Disk Configuration Wizard – select configuration method

6. In the Name field, type a distinctive name for the storage pool.

7. In the Description field, optionally type a description for the storage pool.

8. Click Next.

Figure 42. Disk Configuration Wizard – pool name and description

9. Select the following drives:


Disk Type: SAS

Storage Profile: Balanced Perf/Capacity

10. Click Next.

Figure 43. Disk Configuration Wizard – specify drive type

11. In the list box, select the option to use five disks.

12. Click Next.


Figure 44. Disk Configuration Wizard – select number of disks

13. Verify the information is correct, and then click Finish.

14. Repeat steps 1-13 to create a second pool with the same settings.

15. Select System > Storage Pools.

16. Click Configure Disks.

17. Select Manually create a new pool.

18. In the list box, select by Disk Type.

19. Click Next.

20. In the Name field, type a distinctive name for the storage pool.

21. In the Description field, optionally type a description for the storage pool.

22. Click Next.

23. Select the following drives:

Disk Type: SAS

Storage Profile: High Performance

24. Click Next.

25. In the list box, select the option to use six disks.

26. Click Next.

27. Verify the information is correct, and then click Finish.


28. Repeat steps 15-27 to create a second pool with the same settings.

Complete the following steps to create two hot spares:

1. Select System > Storage Pools.

2. Click Configure Disks.

3. Select Manually add disks to an existing pool.

4. In the list box, select Hot Spare Pool.

5. Click Next.

Figure 45. Disk Configuration Wizard – add hot spares

6. In the list box, select 2.

7. Click Next.


Figure 46. Disk Configuration Wizard – specify number of spares

8. Verify the information is correct, and then click Finish.

Add the vCenter server to the Virtualization Hosts table

1. Select Hosts > VMware.

2. Click Find ESX Hosts.

3. Click Next.

4. Specify the credentials for the vCenter server:

a. In the User Name field, type the username for the vCenter server.

b. In the Password field, type the password for the vCenter server.

5. Click OK.

6. Click Next.

7. Click Close.

Provision datastores

The AVE instance requires twelve 250 GB virtual disks. Four datastores are required in order to distribute the I/O load evenly across two paths to each SP. Create two datastores on each of the two High Performance RAID 10 pools, for a total of four datastores. Create three vdisks on each datastore.

The two Balanced Perf/Capacity pools created earlier are not the same size; the first one is smaller because it is created on the drives in the DPE that have part of their space reserved for internal use by the VNXe. There should be a roughly equal amount of space available on each of the W2k8r2 server instances. Provision the system drives for the AVE instance and the server instances from the larger pool. Each server instance will have one large additional vdisk provisioned to it for use as a CIFS share.

Provision a datastore on SP A from the first RAID 10 pool:

1. Select Storage > VMware.

2. Click Create.

3. In the Name field, type a name for the datastore.

4. In the Description field, optionally type a description for the datastore.

5. Click Next.

Figure 47. VMware Storage Wizard – specify datastore name

6. Select Network File System.

7. Click Next.


Figure 48. VMware Storage Wizard – provision NFS datastore

8. Select the pool and server combination that corresponds to the larger RAID 10 storage pool and the first shared folder server on SP A.

9. In the Size field, type 764, and then select GB from the list box to the right.

764 GB allows enough space for three 250 GB vdisks (3 × 250 GB = 750 GB), leaving roughly 14 GB for file system overhead.

10. Click Next.


Figure 49. VMware Storage Wizard – select storage pool and shared folder server

11. Select Do not configure protection storage for this storage resource.

All backup is handled by AVE, so no protection storage is required.

12. Click Next.


Figure 50. VMware Storage Wizard – select protection

13. Click the host that will have access to the datastore.

14. In the Access list box for the shared folder server, select Read/Write, allow Root.

15. Click Next.


Figure 51. VMware Storage Wizard – specify host access

16. Review the details and then click Finish.

17. Repeat steps 1-16 to create a datastore with the same settings as the first on the second shared folder server on SP A.

18. Follow steps 1-16 to create a datastore on the second RAID 10 High performance pool and the first shared folder server on SP B.

19. Follow steps 1-16 to create a datastore on the second RAID 10 High performance pool and the second shared folder server on SP B.

Create the remaining datastores for the CIFS shares and system disks on the W2k8r2 servers:

1. Follow the steps above to create the datastore for the CIFS share on the first Windows server from the second Balanced Perf/Capacity storage pool on the first shared folder server on SP A.

2. Follow the steps above to create the datastore for the CIFS share on the second Windows server from the first Balanced Perf/Capacity storage pool on the first shared folder server on SP B.

3. Follow the steps above to create the datastore for the system drives from the second Balanced Perf/Capacity storage pool on the second shared folder server on SP A.

4. Set the access levels for the CIFS server and system disk datastores to match the access levels specified for the AVE datastores.

The access level must be set for each ESX host that is able to access the datastore.
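The VNXe wizard typically mounts the NFS datastores on the selected hosts automatically. If a datastore ever has to be remounted manually, the ESX console equivalent is a sketch like the following, with placeholder values for the shared folder server IP, export path, and datastore label:

esxcfg-nas -a -o <shared_folder_server_ip> -s <export_path> <datastore_label>
esxcfg-nas -l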


Chapter 5 NFS with Link Aggregation

This chapter presents the following topics:

Configure path redundancy for NFS with link aggregation

Overview

Configure the switches to support LACP

Configure jumbo frames and link aggregation groups

Configure switch ports for link aggregation

Configure ESX server networking

Create shared folder servers

Create storage pools

Add the vCenter server to the Virtualization Hosts table

Provision datastores


Configure path redundancy for NFS with link aggregation

This section describes how to create a configuration that leverages link aggregation on the ESX hosts and the VNXe to provide path redundancy, and to distribute the I/O load across the available resources on the VNXe, the switch, and the ESX server.

Overview

Avamar Virtual Edition (AVE) serves as a real-world example of an application with I/O requirements that necessitate the use of the best practices explained in this document. At the 2 TB license level, the current AVE product offering requires the presentation of 12 virtual disks to a single AVE virtual machine. The AVE application requires substantial concurrent sequential read and write throughput from the VNXe. To meet the necessary aggregate throughput levels, the 12 virtual disks must be presented from datastores that are distributed evenly across all of the disk, storage processor (SP), and port resources available on the VNXe.

Run the virtual machine on a two-node ESXi 4.1 cluster. In addition to the AVE instance, deploy two Windows 2008 R2 server virtual machines, each presenting 540 GB of storage as CIFS shares. The servers will back up to the AVE instance.

Configure the switches to support LACP

Link Aggregation Control Protocol (LACP) must be configured on the switches before link aggregation can be configured on the corresponding VNXe ports. For a dual-SP system, configure two link aggregation groups: one for ports eth2 and eth3 on SP A, and one for ports eth2 and eth3 on SP B.

In this example, eth2 and eth3 on SP A are connected to ports 10 and 17 on a Cisco switch. Ports eth2 and eth3 on SP B are connected to ports 4 and 6 on the switch.

Run the following commands to configure LACP:

interface Port-channel3
 switchport trunk encapsulation dot1q
 switchport mode trunk
interface Port-channel4
 switchport trunk encapsulation dot1q
 switchport mode trunk
interface GigabitEthernet0/10
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 3 mode active
interface GigabitEthernet0/17
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 3 mode active
interface GigabitEthernet0/4
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 4 mode active
interface GigabitEthernet0/6
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 4 mode active

Configure jumbo frames and link aggregation groups

Creating a link aggregation group on ports eth2 and eth3 in the GUI creates two link aggregation groups on a dual-SP system. Two corresponding link aggregation groups must be created on the switch side: one for ports eth2 and eth3 on SP A, and one for ports eth2 and eth3 on SP B.

Complete the following steps to configure jumbo frames and link aggregation groups:

1. Configure the switches so that jumbo frames are enabled for the interfaces that are attached to the data ports on the VNXe, and to the vmkernel ports on the ESX server.

2. Configure the VNXe ports for jumbo frames:

a. Select Settings > More Configuration > Advanced Configuration.

b. Click eth2.

c. In the MTU Size list box, select 9000.

d. Click Apply Changes.

e. Click eth3.

f. In the MTU Size list box, select 9000.

g. Select Aggregate with eth2.

h. Click Apply Changes.


Figure 52. Set eth2 MTU


Figure 53. Aggregate eth3 with eth2

Note: Setting the MTU on one of the ports, eth2 for example, sets the MTU value for the eth2 port on both SP A and SP B. The MTU size on the VNXe ports, the physical switch ports, and the vSwitches and vmkernel interfaces on the ESX host must match.
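The exact jumbo frame commands vary by switch model. On the Cisco Catalyst 3750 used in this example, jumbo frames are enabled globally rather than per interface; the following is a sketch assuming that platform, and the 3750 applies the new system MTU only after a reload:

configure terminal
system mtu jumbo 9000
end
reload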

Configure switch ports for link aggregation

Link aggregation between the ESX hosts and the switch is also required. Each host has two vSwitches dedicated to storage access, and each vSwitch has two physical NICs associated with it. Create two link aggregation groups, one for the port pair on each of the two vSwitches dedicated to storage access. Note the difference in the channel-group command for ports connected to the ESX hosts: these use "mode on" (static EtherChannel), because the ESX 4.1 standard vSwitch does not negotiate LACP. Ports connected to the VNXe should be configured for "mode active".

1. For a Cisco 3750 switch, run the following commands to configure the switch ports connected to the ESX hosts for link aggregation:

interface Port-channel5
 switchport trunk encapsulation dot1q
 switchport mode trunk
interface Port-channel6
 switchport trunk encapsulation dot1q
 switchport mode trunk
interface GigabitEthernet0/3
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 5 mode on
interface GigabitEthernet0/9
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 5 mode on
interface GigabitEthernet0/13
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 6 mode on
interface GigabitEthernet0/14
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 6 mode on

Configure ESX server networking

Configure ESX server networking to spread the I/O load as evenly as possible across all four connected ports on the VNXe and all four ports on each ESX server. To do this, create two vSwitches, each with two vmnics and two vmkernel interfaces associated with it. In ESX 4.x, the command line is the only way to create vmkernel interfaces with jumbo frames enabled.

The following commands create each vSwitch, link the vmnics, add the port groups, set the vSwitch MTU for jumbo frames, and add vmkernel interfaces that can accept jumbo frames.

Note: In vSphere 5, this can be done from the GUI.

Note: These steps must be run against each ESX host in the cluster.

1. Run the following commands against the first ESX host:

esxcfg-vswitch.pl -a vSwitch1
esxcfg-vswitch.pl vSwitch1 -L vmnic0
esxcfg-vswitch.pl vSwitch1 -L vmnic2
esxcfg-vswitch.pl vSwitch1 -A Storage-NFS-140
esxcfg-vswitch.pl vSwitch1 -A Storage-NFS-141
esxcfg-vswitch.pl vSwitch1 -m 9000
esxcfg-vmknic.pl -a -i 192.168.140.55 -n 255.255.255.0 -m 9000 Storage-NFS-140
esxcfg-vmknic.pl -a -i 192.168.141.55 -n 255.255.255.0 -m 9000 Storage-NFS-141
esxcfg-vswitch.pl -a vSwitch2
esxcfg-vswitch.pl vSwitch2 -L vmnic3
esxcfg-vswitch.pl vSwitch2 -L vmnic4
esxcfg-vswitch.pl vSwitch2 -A Storage-NFS-142
esxcfg-vswitch.pl vSwitch2 -A Storage-NFS-143
esxcfg-vswitch.pl vSwitch2 -m 9000
esxcfg-vmknic.pl -a -i 192.168.142.55 -n 255.255.255.0 -m 9000 Storage-NFS-142
esxcfg-vmknic.pl -a -i 192.168.143.55 -n 255.255.255.0 -m 9000 Storage-NFS-143

2. Run the following commands against the second ESX host:

esxcfg-vswitch.pl -a vSwitch1
esxcfg-vswitch.pl vSwitch1 -L vmnic0
esxcfg-vswitch.pl vSwitch1 -L vmnic2
esxcfg-vswitch.pl vSwitch1 -A Storage-NFS-140
esxcfg-vswitch.pl vSwitch1 -A Storage-NFS-141
esxcfg-vswitch.pl vSwitch1 -m 9000
esxcfg-vmknic.pl -a -i 192.168.140.54 -n 255.255.255.0 -m 9000 Storage-NFS-140
esxcfg-vmknic.pl -a -i 192.168.141.54 -n 255.255.255.0 -m 9000 Storage-NFS-141
esxcfg-vswitch.pl -a vSwitch2
esxcfg-vswitch.pl vSwitch2 -L vmnic3
esxcfg-vswitch.pl vSwitch2 -L vmnic4
esxcfg-vswitch.pl vSwitch2 -A Storage-NFS-142
esxcfg-vswitch.pl vSwitch2 -A Storage-NFS-143
esxcfg-vswitch.pl vSwitch2 -m 9000
esxcfg-vmknic.pl -a -i 192.168.142.54 -n 255.255.255.0 -m 9000 Storage-NFS-142
esxcfg-vmknic.pl -a -i 192.168.143.54 -n 255.255.255.0 -m 9000 Storage-NFS-143
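Before moving on, it is worth verifying the networking configuration end to end. The listing commands below follow the same command style used above; the vmkping command is a sketch run from the service console of the first host, targeting the corresponding vmkernel address on the second host (an 8972-byte payload fills a 9000-byte MTU once IP and ICMP headers are added):

esxcfg-vswitch.pl -l
esxcfg-vmknic.pl -l
vmkping -s 8972 192.168.140.54

Both listings should report an MTU of 9000 for vSwitch1, vSwitch2, and all four vmkernel interfaces. If the oversized ping fails while a default-size vmkping succeeds, a device in the path is not passing jumbo frames.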

Next, configure the vSwitches for link aggregation:

3. In the vSphere GUI, select Configuration.

4. Click Networking.

5. Select the vSwitch and then click Edit. The vSwitch properties window appears.

6. Click NIC Teaming.

7. In the Load Balancing list box, select Route based on IP hash. (A worked example of how IP hash selects an uplink follows these steps.)

8. Verify that both vmnic adapters are listed in the Active Adapters area.

9. Click OK.


Figure 54. Select load balancing and active adapters

10. Select the first vmkernel interface and then click Edit. The vmkernel properties window appears.

11. Click NIC Teaming.

12. Verify that Override vSwitch failover order is not selected.

13. Click OK.


Figure 55. Do not override vSwitch failover order

14. Repeat steps 10-13 for the second vmkernel interface.

15. Repeat steps 3-14 for the vSwitches on each of the other ESX hosts.
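For reference, Route based on IP hash selects an uplink by XORing the least significant byte of the source and destination IP addresses and taking the result modulo the number of active uplinks. As a worked example with addresses from this configuration and an assumed shared folder server at 192.168.140.40: traffic from vmkernel interface 192.168.140.55 hashes as 55 XOR 40 = 31, and 31 mod 2 = 1, so that session uses the second uplink. Because each datastore is mounted from a different shared folder server IP address, sessions spread across both uplinks in the team.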

Create shared folder servers

Complete the following steps to create two shared folder servers on each SP:

1. Create two shared folder servers on each SP:

a. Select Settings > Shared Folder Server Settings.

b. Click Add Shared Folder Server. The Shared Folder Server window appears.

c. In the Server Name field, type a name for the shared folder server.

Note: Include the SP where the server resides as part of the server name. This will help in later steps.

d. In the IP Address field, type an IP address for the shared folder server.

e. In the Netmask field, type a netmask for the shared folder server.

f. In the Gateway field, type a gateway for the shared folder server.


g. Click Show advanced.

h. In the Storage Processor list box, select SP A.

i. In the Ethernet Port list box, select eth2 (Link Up).

j. Click Next.

Figure 56. Create shared folder server

k. Select Linux/Unix shares (NFS).

l. Click Next.


Figure 57. Set shared folder for NFS

m. Click Finish.

n. Repeat steps b-m to create a second shared folder server on SP A.

o. Repeat steps b-n to create two shared folder servers on SP B.

Create storage pools

Create custom pools that contain only one disk group each. This allows the creation of datastores segregated onto separate physical spindles. Four storage pools are required. For the AVE data disks, create pools with the 3+3 RAID 10 profile for the best possible throughput. Space requirements dictate that two pools are necessary for the AVE data disks; one pool each is required to provide storage for the file server virtual machines and for the OS disks of both the AVE instance and the W2k8R2 virtual machines.

The storage pools consume a total of 22 disks (two five-disk Balanced Perf/Capacity pools and two six-disk RAID 10 pools). Two more disks are used as hot spares; when creating custom pools, disks must be added to the hot spare pool manually.

Complete the following steps to create the storage pools:

1. Select System > Storage Pools.

2. Click Configure Disks.

3. Select Manually create a new pool.

4. In the list box, select by Disk Type.

5. Click Next.


Figure 58. Disk Configuration Wizard – select configuration method

6. In the Name field, type a distinctive name for the storage pool.

7. In the Description field, optionally type a description for the storage pool.

8. Click Next.

Figure 59. Disk Configuration Wizard – pool name and description

9. Select the following drives:

Disk Type: SAS


Storage Profile: Balanced Perf/Capacity

10. Click Next.

Figure 60. Disk Configuration Wizard – specify drive type

11. In the list box, select the option to use five disks.

12. Click Next.


Figure 61. Disk Configuration Wizard – select number of disks

13. Verify the information is correct, and then click Finish.

14. Repeat steps 1-13 to create a second pool with the same settings.

15. Select System > Storage Pools.

16. Click Configure Disks.

17. Select Manually create a new pool.

18. In the list box, select by Disk Type.

19. Click Next.

20. In the Name field, type a distinctive name for the storage pool.

21. In the Description field, optionally type a description for the storage pool.

22. Click Next.

23. Select the following drives:

Disk Type: SAS

Storage Profile: High Performance

24. Click Next.

25. In the list box, select the option to use six disks.

26. Click Next.

27. Verify the information is correct, and then click Finish.


28. Repeat steps 15-27 to create a second pool with the same settings.

Complete the following steps to create two hot spares:

1. Select System > Storage Pools.

2. Click Configure Disks.

3. Select Manually add disks to an existing pool.

4. In the list box, select Hot Spare Pool.

5. Click Next.

Figure 62. Disk Configuration Wizard – add hot spares

6. In the list box, select 2.

7. Click Next.


Figure 63. Disk Configuration Wizard – specify number of spares

8. Verify the information is correct, and then click Finish.

Add the vCenter server to the Virtualization Hosts table

1. Select Hosts > VMware.

2. Click Find ESX Hosts.

3. Click Next.

4. Specify the credentials for the vCenter server:

a. In the User Name field, type the username for the vCenter server.

b. In the Password field, type the password for the vCenter server.

5. Click OK.

6. Click Next.

7. Click Close.

Provision datastores

The AVE instance requires twelve 250 GB virtual disks. Four datastores are required in order to distribute the I/O load evenly across two paths to each SP. Create two datastores on each of the two High Performance RAID 10 pools, for a total of four datastores. Create three vdisks on each datastore.

The two Balanced Perf/Capacity pools created earlier are not the same size. The first one created is smaller because it is created on the drives in the DPE, which have part of their space reserved for internal use by the VNXe. There should be a roughly equal amount of space available on each of the W2k8R2 server instances. Provision the system drives for the AVE instance and the server instances from the larger pool. Each server instance will have one large additional vdisk provisioned to it for use as a CIFS share.

Provision a datastore on SP A from the first RAID 10 pool:

1. Select Storage > VMware.

2. Click Create.

3. In the Name field, type a name for the datastore.

4. In the Description field, optionally type a description for the datastore.

5. Click Next.

Figure 64. VMware Storage Wizard – specify datastore name

6. Select Network File System.

7. Click Next.


Figure 65. VMware Storage Wizard – provision NFS datastore

8. Select the pool and server combination that corresponds to the larger RAID 10 storage pool and the first shared folder server on SP A.

9. In the Size field, type 764, and then select GB from the list box to the right.

764 GB allows enough space for three 250 GB vdisks while accounting for some file system overhead.

10. Click Next.


Figure 66. VMware Storage Wizard – select storage pool and shared folder server

11. Select Do not configure protection storage for this storage resource.

All backups are handled by AVE, so no protection storage is required.

12. Click Next.


Figure 67. VMware Storage Wizard – select protection

13. Click the host that will have access to the datastore.

14. In the Access list box for the shared folder server, select Read/Write, allow Root.

15. Click Next.


Figure 68. VMware Storage Wizard – specify host access

16. Review the details and then click Finish.

17. Repeat steps 1-16 to create a datastore with the same settings as the first on the second shared folder server on SP A.

18. Follow steps 1-16 to create a datastore on the second High Performance RAID 10 pool and the first shared folder server on SP B.

19. Follow steps 1-16 to create a datastore on the second High Performance RAID 10 pool and the second shared folder server on SP B.

Create the remaining datastores for the CIFS shares and system disks on the W2k8R2 servers:

1. Follow the steps above to create the datastore for the CIFS share on the first Windows server from the second Balanced Perf/Capacity storage pool on the first shared folder server on SP A.

2. Follow the steps above to create the datastore for the CIFS share on the second Windows server from the first Balanced Perf/Capacity storage pool on the first shared folder server on SP B.

3. Follow the steps above to create the datastore for the system drives from the second Balanced Perf/Capacity storage pool on the second shared folder server on SP A.

4. Set the access levels for the CIFS server and system disk datastores to match the access levels specified for the AVE datastores.

The access level must be set for each ESX host that is able to access the datastore.
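After provisioning, the NFS mounts can be confirmed from each ESX host. A minimal check, using the same command style as earlier in this section (datastore names will match whatever was entered in the wizard):

esxcfg-nas.pl -l

Each host should list every NFS datastore it has been granted access to, along with the shared folder server IP address and export path.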
