Best Practices Guide for Xiotech ISE Storage

Best practices for using Xiotech ISE storage products in your SAN


X-IO Technologies
9950 Federal Drive, Suite 100
Colorado Springs, CO 80921-3686

www.x-io.com

Main: 719.388.5500
Fax: 719.388.5300
Customer Support: 1.800.734.4716

IMPORTANT NOTICE

This manual is provided to you by Xiotech Corporation (“Xiotech”), and your receipt and use of this manual are subject to your agreement to the following conditions:

• This manual and the information it contains are the property of Xiotech and are confidential. You agree to keep this manual in a secure location and not disclose or make available its contents to any third party without Xiotech’s written permission.

• Reproduction or distribution of this document, in whole or in part, may only be made with Xiotech’s written permission.

• This manual is not an endorsement or recommendation of any of the products or services provided by third parties and referred to herein.

• Use of this manual does not assure legal or regulatory compliance.

• This document is not a warranty of any kind. The only warranties given in connection with this product are those contained in Xiotech’s standard terms and conditions.

• Xiotech, Magnitude, Magnitude 3D, and TimeScale are registered trademarks of Xiotech Corporation. Emprise, ISE, Dimensional Storage Cluster, DataScale, and GeoRAID are trademarks of Xiotech Corporation. Other trademarks or service marks contained herein are the property of their respective owners. Trademarks and service marks belonging to third parties appear in this document for the purposes of identification of the goods or services of that third party only. No reproduction of any trademark or service mark is authorized by this document. No right or title or other proprietary interest in any mark is transferred because of this document.

© 2012 Xiotech Corporation. All rights reserved.
Publication Number: 160347-000 Rev C
January 2013


Table of Contents

Introduction
    Intended Audience

Hardware Configuration
    ISE-2 Components
        MRC—Managed Reliability
        DataPacs—Types and Capacities
        End-to-End Solution Diagrams
        Sample Environments
    ISE Physical Connectivity
        Fibre Channel Cables
        Direct Attached
        ISE Host Limits
        FC Switched Fabric
    Switch Configuration Settings
        Brocade
    ISE Zoning
        Open Zoning
        Host-Based Zoning
        Single Initiator Zoning
        One-To-One Zoning
        Recommendations
    HBA Settings
        Connection Options (QLogic specific)
        Data Rate
        Adapter BIOS/Boot from SAN Settings
        LUNs per Target
        Execution Throttle / Queue-Depth
        Vendor-Specific Parameters

ISE Performance
    Thrashing
        Minimize Thrashing

Operating System Considerations
    ISE-2 Upgrade Types
        Online Upgrade
        Offline Upgrade
    Windows 2003 R2 SP2
        Multi-pathing
        Disabling FUA for ISE Volumes
        Required Hot Fixes
        ISE Firmware Upgrade Options
    Windows 2008 SP2 / Windows 2008 R2 SP1
        Multi-pathing
        Native MPIO Registry Key Changes
        Disabling FUA for ISE Volumes
        Required Hot Fixes
        ISE Firmware Upgrade Options
    Windows 2012
        Multi-pathing
    Linux
        Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES)
    Unix
        AIX
    VMware ESXi
        5.x

Using IBM SVC with ISE Storage
    Physical Topology
    SVC Logical Topology
    Zoning
        Single Fabric
        Redundant Fabric
    Zoning Limitations
    ISE Volumes
        Volume Size
        Volume Placement
        Host Queue Depth Settings

External References
    Storage and Networking Industry Association (SNIA) Dictionary
    IBM Fiber Cable Ratings
    VMware Queue Depth Considerations
    VMware Maximum Disk Requests per VM
    Red Hat Enterprise Linux (RHEL) MPIO Documentation
    SUSE Linux Enterprise Server (SLES) MPIO Documentation
    QLogic Red Hat Enterprise Linux 6.x Driver


Introduction

Storage solutions such as the ISE-2 can be configured in many different ways, each with its own set of risks and rewards. This document presents the optimal configurations for various purposes and describes the options available at each point in the path, from the ISE-2 to the HBA and operating system.

Intended Audience

This document is intended for IT Administrators, Desktop Specialists, Partners, and Field Services.


Hardware Configuration

ISE-2 Components

Refer to the Introduction of the ISE User Guide for a more complete description of each component. This is a quick reference only.

Figure 1. ISE-2 Major Components

1. DataPac(s)

2. System Status Module

3. Managed Reliability Controllers (MRCs)

4. Supercapacitors (Supercaps)

5. System Chassis

6. Power Supplies


MRC—Managed Reliability

Figure 2. Managed Reliability Controllers (MRCs)

The X-IO Managed Reliability Controller (two per ISE) uses built-in diagnostics and recovery tools to analyze, identify, and recover from a large number of common drive problems. These are the same diagnostics and tools that are used by the drive manufacturer to repair and refurbish a drive that has been sent back for repairs. This ability to take a drive offline and repair it is what X-IO calls “managed reliability,” and it helps drive the 5-year warranty on DataPacs.

Each MRC has the ability to send information back to X-IO headquarters and notify the services organization that something has gone wrong. This is disabled by default and should be enabled on every ISE upon installation unless the customer’s business does not allow it.

X-IO also makes an ISE Analyzer product that can be installed at the customer site to capture this telemetry data from multiple ISEs and display it to the customer. It is not necessary to have an ISE Analyzer at the customer site if the customer is willing to monitor their ISE products through other means because the telemetry can be sent to X-IO headquarters with or without an ISE Analyzer.

Each ISE also has SNMP capability built in, which allows external monitoring tools to be used to monitor and alert on changes in state of the ISE.

DataPacs—Types and Capacities

The ISE-2 has several different models of DataPacs that can be installed, and the details of each are summarized in the table below.

Table 1: DataPac Types and Capacities

DataPac Capacity   ISE Model     ISE Max Capacity   Drive Type
4.8TB              ISE-2 9.6     9.6TB              SAS HDD
7.2TB              ISE-2 14.4H   14.4TB             SAS HDD and SSD
9.6TB              ISE-2 19.2    19.2TB             SAS HDD
14.4TB             ISE-2 28.8    28.8TB             SAS HDD


Note that the ISE-2 14.4H also goes by the name Hyper-ISE (the “H” stands for “Hyper”) because it uses solid state disks (SSD) to provide faster access to the most frequently accessed data via X-IO’s proprietary CADP algorithm.

Each DataPac contains either 10 or 20 industry-standard hard drives or a mix of 10 hard drives and 10 solid state drives in the case of the Hyper DataPacs. These drives are grouped together into one or two pools. When the ISE is initialized, 20% of the drive capacity is reserved for data recovery operations (sparing). This spare capacity is equivalent to hot-spare drives in a more traditional storage solution.

End-to-End Solution Diagrams

Any storage solution is made of multiple components that must function together to serve the needs for which it was designed. A typical list of components, from the storage solution to the applications, is listed below in the order that they might appear:

1. Storage device (ISE-2 or similar)

2. Fibre Channel cables

3. Fibre Channel Switches—Optional if doing a direct-attached storage solution

4. Server HBA

5. Server Hardware

6. Server Operating System

7. Server Applications

Sample Environments

A typical environment is more than just a single server and more than just one application. In the environment below there are three ISE-2s, two Fibre Channel switches for redundant fabrics, five dual-port HBAs (one per server), five physical servers, and five different applications. Each switch represents a separate fabric, and the lines representing Fibre Channel cables are color-coded to represent which fabric they are on.



Figure 3. Sample End-to-End Environment

ISE Physical Connectivity

Note that although the diagrams and charts below call out specific ISE MRC ports, there is no hard requirement to use these specific ports. Any ISE MRC port is equivalent to any other port, and the diagrams and charts show just one for simplicity.

Each LUN on an ISE has a preferred MRC, which is assigned at the time of LUN creation and alternates between the two MRCs. Since the MRCs are active-active (any MRC can talk to any LUN), it is possible for a LUN to receive I/O requests from either MRC. When a LUN receives an I/O request from the non-preferred MRC, a small performance hit occurs per I/O because the non-preferred MRC forwards the request to the preferred MRC and then sends the appropriate response back to the server.

For Windows servers, X-IO provides the ISE Multipath Suite (ISE-MP). Version 6.6.0.5 and later includes performance improvements that detect which MRC is preferred for each LUN and send I/O to those paths connected to the preferred MRC. The non-preferred MRC receives I/O only if paths to the preferred MRC become unavailable.

Fibre Channel Cables

For cable lengths up to 35 meters (114 feet), an OM2 cable works for all speeds from 1 Gb/sec to 16 Gb/sec. A lower bit rate supports longer distances for the same cable. See the IBM FC cable details link in “External References” for exact bit rate to distance details.

Each ISE-2 MRC has four Fibre Channel (FC) ports available and ships with one SFP per MRC. Additional SFPs can be purchased through X-IO to increase the available ports for additional throughput or redundancy.


Direct Attached

When using one of the supported Operating System (OS) versions and supported HBAs (see the lists on X-IO’s Support Matrix), a host can connect directly to the MRC ports with a Fibre Channel cable. In a direct attach scenario, it is required to have at least two ports per server, with each port connected to a different MRC. This ensures that the host stays online in the event of an MRC failure or a normal failover as part of an ISE-2 firmware upgrade process.

X-IO strongly recommends setting the HBA ports and the ISE ports to the same data rate for direct attached configurations. Auto-speed negotiation does not always work reliably during failover/failback scenarios or certain failure conditions.

Figure 4. Direct Attached Cabling

ISE Host Limits

Each ISE-2 is limited to 256 host port WWNs logged into the ISE at any one time. If a customer environment has 25 physical servers, each with two HBA ports, and none are running virtual machines, the total number of host port WWNs used is 50. If these servers each run four virtual machines with two NPIV ports presented to the ISE-2, then the number of host port WWNs used is 250 (4 VMs * 2 ports * 25 hosts + 2 ports per server * 25 servers).

FC Switched Fabric

Single Switch—Non-Redundant

It is possible to connect both MRCs and all hosts to a single Fibre Channel switch. X-IO does not recommend the use of a single switch because this type of configuration creates a single point of failure.

Single Switch—Redundant Fabrics

The default configuration for an ISE-2 is to connect one MRC to one fabric, and the other MRC connects to the other fabric.


Figure 5. Single MRC per Fabric (Default Configuration)

The best practice for an ISE-2 is to connect at least one port from each MRC to each fabric. This provides at least one path to the preferred MRC for any given LUN on each fabric.

Figure 6. Both MRCs on Each Fabric–Best Practice for Performance and Redundancy

Multi-switch Fabrics

If there are two switches in a given fabric, cable one port per MRC to each switch and distribute the host ports across both switches in each fabric equally. Traffic between devices on the same fabric but on different switches must share the interswitch link (ISL). X-IO recommends having a minimum of two ISLs between each switch. If the switch supports high-speed ISLs, it is preferable to use them.

X-IO recommends using unique domain IDs for each switch in a solution, regardless of whether the switches are on the same fabric. This allows easier troubleshooting when viewing event logs and vendor support tools.


Figure 7. Dual Fabrics with Two Switches and Dual ISLs

Very Large Fabrics

Daisy-chaining multiple switches together in large fabrics is not advised, because traffic between devices on different switches must share the interswitch links and each additional hop reduces throughput. A mesh topology alleviates this condition; however, a mesh is generally considered cost-prohibitive at any scale beyond three or four switches. X-IO recommends using a Core / Edge model, which has a director class switch in the center of each fabric, with top-of-rack edge switches forming the edge.

X-IO recommends connecting the ISE MRCs directly to the director class switches. This minimizes the number of ISL links data has to cross to reach a given host. In the model below, a director class switch connects to multiple edge switches, which then connect to hosts. In this large fabric, X-IO recommends connecting each edge switch to at least two different blades on the director and connecting two ports on each ISE-2 MRC to different blades on the director as well. This reduces the impact of a blade failure and minimizes the number of hops data has to travel through a shared ISL.

Figure 8. Dual Fabric Core/Edge Model


If the servers that communicate with a given ISE are in the same rack, it is advised to connect the ISE MRC ports and the server ports to the same top-of-rack switches. This avoids any extra traffic on the core.

Switch Configuration Settings

X-IO recommends using unique domain IDs for each switch in a solution, regardless of whether the switches are on the same fabric. This allows easier troubleshooting when viewing event logs and vendor support tools.

Brocade

Port Settings

Brocade switches allow ports to be set to different types. X-IO recommends disabling the Loop option so that the switch does not attempt to bring the port up as an FL_Port. This speeds up the switch login time for an ISE.

From the CLI, type:

switch> portcfggport <port#> 1

From GUI, uncheck L Port.

Figure 9. Brocade Port Administration


Figure 10. Brocade Port Configuration Wizard
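The change can be confirmed from the CLI as well; a minimal sketch, assuming port 4 is the ISE-facing port (substitute the actual port number):

switch> portcfgshow 4

The output lists the configuration flags for that port; after the change, the port should show as locked to G_Port (the exact field label varies slightly between Fabric OS releases).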

Brocade Node WWN Checking Configuration

Brocade switches have a feature known as Node WWN checking, which can cause some unintended paths to storage LUNs to appear on certain ISE MRC ports. This happens because the ISE uses the port WWN of the first port on MRC A as the Node WWN for all ports. See the table below as an example:

Table 2: Node WWN, Port WWN, MRC and Port Number

Node WWN           Port WWN           MRC and Port Number
2000001F93103688   2000001F93103688   MRC A Port 1
2000001F93103688   2000001F93103689   MRC A Port 2
2000001F93103688   2000001F9310368A   MRC A Port 3
2000001F93103688   2000001F9310368B   MRC A Port 4
2000001F93103688   2000001F9310368C   MRC B Port 5
2000001F93103688   2000001F9310368D   MRC B Port 6
2000001F93103688   2000001F9310368E   MRC B Port 7
2000001F93103688   2000001F9310368F   MRC B Port 8

Consider the solution below, in which there are two ports from a server as well as the first port on each MRC connected to a Brocade switch. The zoning is configured to allow one port on the server to see MRC A and the other port to see MRC B. This should result in a total of two paths to any LUN presented from the ISE to the host. When Brocade’s Node WWN checking feature is enabled, the real number of paths found for each LUN is three.



Figure 11. Sample Brocade Fabric—Two Paths Expected

Assuming that the host WWNs are 21000024ff26415e and 21000024ff26415f, the zones would look similar to the following.

Table 3: Zone Name and Member Port WWNs

Zone Name             Member Port WWNs
MrcAPort1_HostPort1   21000024ff26415e; 2000001F93103688
MrcBPort5_HostPort2   21000024ff26415f; 2000001F9310368C

Brocade switches can build zones based on either Port WWNs or Node WWNs. When a zone is created that contains a member with identical Node and Port WWNs, the zone is assumed to be using Node WWN zoning; therefore, all other ports with the same Node WWN are added to this zone.

The Node WWN checking option impacts any zones containing the first port of the first MRC on any ISE. This can be avoided by either using a different port on the first MRC of an ISE-2 or by disabling the Node WWN checking on each Brocade switch in a given fabric.

Node WWN Checking

Brocade Fibre Channel (FC) switches have a configuration option that is on by default and that can cause some unexpected zoning behaviors with an ISE (both ISE-1 and ISE-2). The ISE best practice is to set zoning.check.nodeNameDisabled to 1, which disables checking the node WWN.

Each FC device has a minimum of two worldwide names (WWNs). One is assigned to the device and is the node WWN, and the other is assigned to the FC port. A two-port FC HBA has a common node WWN for the HBA, and each port has a unique port WWN. The ISE WWN assignment mechanism uses the same value for both the node WWN and the WWN of the first port on the first MRC.

Consider a Brocade fabric with one switch and Node WWN checking enabled (the default behavior) that has an ISE-2 connected and zoning set up per the following list:

• HBA Port 1 to MRC A Port 1 (Port WWN)• HBA Port 2 to MRC B Port 5 (Port WWN)

The expected number of paths to a LUN should be two; however, the observed number of paths is three. HBA Port 1 sees both MRC A Port 1 and MRC B Port 5 because the zoning is built by node WWN first and then by port WWNs. MRC B Port 5 has the same node WWN as MRC A Port 1’s port WWN and so is included in the zone.

Use the following procedure to disable and reconfigure the switch parameter.

WARNING: This procedure disrupts I/O through the switch until the configuration process is complete.

1. Disable the switch by typing the following line at the switch command prompt:



switchdisable

2. Configure the setting using the following steps.

3. At the switch command prompt, type: configure

4. Select Zoning Operation Parameters from the menu.

5. Change the Disable NodeName Zone Checking setting to 1.

6. Re-enable the switch by typing the following line at the switch command prompt:

switchenable

7. Check the setting by typing the following line at the switch command prompt:

configshow | grep zoning

8. Verify that the switch displays zoning.check.nodeNameDisabled:1.

Note. If your previous configuration utilized zones, review the configuration to ensure the change correctly represents your zoning preferences.

ISE Zoning

Think of Fibre Channel zones as virtual SCSI cables with two or more connectors (up to the maximum number of zone members supported by the switch). If a zone does not connect two devices explicitly, then the devices do not communicate unless the switch is configured for open zones. Open zoning allows all devices to connect and communicate with all other devices; this works because storage products (targets) are required to implement a LUN masking model, which selectively presents storage to individual hosts. If there are older storage devices that do not allow selective presentation, then zoning is required to provide the isolation necessary to make things work as expected.

There are several Fiber Channel zoning models, each of which has its own set of challenges and benefits. The most common zoning models are open-zoning, single-host, single-initiator, and one-to-one. One of the largest impacts the zoning model has is reducing the number of fabric-wide events that happen when a change is made. For example, rebooting a single host will cause each FC port to log out of the fabric, log back in to the fabric, and then connect to storage devices and begin I/O. Every time a port logs into or out of a fabric, a registered state change notification (RSCN) event occurs, which is sent to every device already connected to the fabric. If the zoning does not handle this to prevent the broadcasts, server performance and access to storage can be negatively impacted.

Figure 12. Sample Fabric


In this sample fabric, the host port WWNs are 21000024ff26415e and 21000024ff26415f, and the ISE port WWNs are 2000001F93103688 and 2000001F9310368C. It is assumed that under each zoning model described, each host port will see both MRC ports for redundancy.

Open Zoning

Open zoning is the simplest Fibre Channel fabric model to use, because under this model all nodes connected to the fabric can automatically see each other. Open zoning increases the number of devices that a host port must scan during boot or bus rescan operations, which can increase host boot times, especially when using boot from SAN. On the other hand, this model requires very little documentation to manage, and zone configurations do not need to be backed up. This model is most appropriate for fabrics consisting of a single FC switch due to the risk of RSCN broadcasts taking hosts offline.

Host-Based Zoning

Host-based zoning is the next simplest to implement. In this model, all target ports that a given host needs to access are included in the same zone with all initiators from the same host. This model limits RSCN broadcasts to only the zones containing the device that logged into or out of the fabric. However, if one host port malfunctions and floods the fabric with bad frames, the healthy host port can also be impacted because both host ports share the same zone. This model has the fewest possible number of zones to manage, back up, and document, and is most appropriate for smaller fabrics of just a few switches.

The host-based zone membership for the fabric in Figure 12 would look like the following table:

Figure 13. Host-based Zone Membership

Zone Name   Members
HostA_ISE   21000024ff26415e; 2000001F93103688; 21000024ff26415f; 2000001F9310368C

Single Initiator Zoning

Single initiator zoning would be identical to host-based zoning if the host has only one FC port. In cases where the host has two or more ports, there is one zone for each initiator port, containing all targets with which that initiator port should be able to communicate. This limits the impact that any host port can have on any other host port and also limits the RSCN broadcasts to the zones containing the port that is logging into or out of the fabric. This model has more zones to manage than host-based zoning, which can increase the documentation and configuration back-up requirements.

The single initiator-based zone membership for the fabric in Figure 12 should be similar to that listed in the following table:

Figure 14. Single initiator-based Zone Membership

Zone Name        Members
HostAPort1_ISE   21000024ff26415e; 2000001F93103688; 2000001F9310368C
HostAPort2_ISE   21000024ff26415f; 2000001F93103688; 2000001F9310368C



One-To-One Zoning

One-to-One zoning is the most controlled model. Under this model, each zone represents a single virtual SCSI cable with exactly two connectors. If a host has two initiator ports per fabric and there are three storage devices with two ports on each fabric, then there are 12 zones per fabric per server. Obviously, this can get to be a very large number of zones very quickly, but RSCN broadcasts are kept to an extremely narrow set of devices, so the impact of a failing FC port is very small.

The One-To-One based zone membership for the fabric in Figure 12 should be similar to that listed in the following table.

Figure 15. One-to-One-based Zone Membership

Zone Name              Members
HostAPort1_MrcAPort1   21000024ff26415e; 2000001F93103688
HostAPort1_MrcBPort5   21000024ff26415e; 2000001F9310368C
HostAPort2_MrcAPort1   21000024ff26415f; 2000001F93103688
HostAPort2_MrcBPort5   21000024ff26415f; 2000001F9310368C
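For reference, a one-to-one zone set such as the one above can also be built from the Brocade CLI. The sketch below uses the sample WWNs from this section and a hypothetical zoning configuration name (Fabric_A_cfg); adapt the names to the fabric's existing configuration:

switch> zonecreate "HostAPort1_MrcAPort1", "21:00:00:24:ff:26:41:5e; 20:00:00:1f:93:10:36:88"
switch> zonecreate "HostAPort1_MrcBPort5", "21:00:00:24:ff:26:41:5e; 20:00:00:1f:93:10:36:8c"
switch> zonecreate "HostAPort2_MrcAPort1", "21:00:00:24:ff:26:41:5f; 20:00:00:1f:93:10:36:88"
switch> zonecreate "HostAPort2_MrcBPort5", "21:00:00:24:ff:26:41:5f; 20:00:00:1f:93:10:36:8c"
switch> cfgcreate "Fabric_A_cfg", "HostAPort1_MrcAPort1; HostAPort1_MrcBPort5; HostAPort2_MrcAPort1; HostAPort2_MrcBPort5"
switch> cfgsave
switch> cfgenable "Fabric_A_cfg"

Note that cfgenable activates the configuration fabric-wide, so on a fabric with an existing enabled configuration the new zones would normally be added to it with cfgadd rather than replacing it.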

Recommendations

X-IO recommends the use of Single Initiator or One-To-One zoning for most applications to prevent RSCN broadcasts from negatively impacting an environment and to reduce the risk of accidentally presenting storage to the wrong server(s).

Note that this zoning covers ISE-2 to servers and does not apply to zoning requirements for clusters, storage virtualizers such as Datacore, IBM’s SVC, or ISE Mirroring. These types of zones may need to have multiple initiators in the same zone. Refer to the appropriate user guide for the cluster, storage virtualizer, or mirroring tool for the specific zoning configurations needed.

HBA Settings

A host bus adapter (HBA) is used to connect a server to a storage solution, such as an ISE. An HBA can have one, two, or four ports and can communicate at 2, 4, 8, or 16 Gb per second. An HBA has several configurable parameters that control how it behaves with storage solutions. Different HBA vendors may have different terms for these parameters; refer to the vendor documentation for exact details on how to set these values. The individual parameters that are commonly adjusted for an ISE-2 are described below.

Note. When storage products have conflicting HBA settings, we recommend that different HBAs be used. This may apply to storage from different vendors or to storage products from the same vendor with different requirements.

Connection Options (QLogic specific)

An HBA can be connected to multiple types of ports. QLogic offers “Loop Only,” “Point to Point Only,” and “Loop Preferred, Otherwise Point to Point.” Refer to the SNIA dictionary (http://www.snia.org/education/dictionary) for more information on arbitrated loop and point-to-point connections.

X-IO recommends that a QLogic HBA be set to “Loop Preferred, Otherwise Point to Point.” This option is required for direct attach solutions due to a known compatibility issue when using “Point to Point Only.” The ISE bypasses Loop and logs in with Point to Point.

QLogic defines the connection options as follows:



0—Loop Only

1—Point to Point Only

2—Loop Preferred, Otherwise Point to Point (default)

3—Point to Point, Otherwise Loop

Data Rate

An HBA port is normally set to auto-detect the maximum speed possible when connected to a switch or storage port. ISE-1 requires the MRC, switch, and HBA ports to be set to a fixed data rate. An ISE-2 can use auto-speed negotiation; however, X-IO recommends that the MRC and HBA ports be set to a fixed data rate when directly connected together (no switch between).

Adapter BIOS/Boot from SAN Settings

The Adapter BIOS is disabled by default. In order to boot from a SAN device, the adapter BIOS must be enabled and configured to boot from a specific storage device. Refer to the HBA vendor’s support documentation for instructions on doing this.

LUNs per Target

An HBA in a server is known as an initiator since it can initiate a SCSI read or write command. Most storage solutions (including the ISE-2) are targets that can respond only to requests started by an initiator. See the SNIA dictionary (http://www.snia.org/education/dictionary) for more on these terms.

An ISE is capable of handling up to 240 LUNs and up to a maximum of 240 host port WWNs for ISE-1 and 250 host port WWNs for ISE-2. The LUNs per target parameter should be raised to at least 240 so that any HBA in a server can see all possible LUNs.

Execution Throttle / Queue-Depth

The Execution Throttle value, also known as “Queue Depth,” controls the number of I/O requests an HBA can make per volume or target before it must wait for a previous request to complete. All storage solutions have a queue depth value, and if the combined I/O of all connected hosts exceeds that value, the storage returns a busy response.

An ISE-2 allows a maximum of 2,048 I/O requests to be queued per MRC, for a total of 4,096 queued requests. QLogic’s HBA parameters allow this to be set as high as 65,535, which exceeds the ISE-2 queue depth, assuming the servers and applications can generate sufficient I/O. The Emulex HBA tools allow a queue depth of 254 for Windows hosts and 128 for Linux. X-IO recommends setting the QLogic queue depth to 256 per LUN for Linux hosts.

As a best practice, identify which servers are generating heavy workload (databases, large file shares, digital imaging, etc.) and set those queue depths to a higher value.


Vendor-Specific Parameters

QLogic

Table 4: QLogic HBA Settings

Parameter Name       Default Value   Fabric Value                 Direct Attach Value
Connection Option    2 (Loop pref)   2                            2 (Required)
Data Rate            Auto            2/4/8 Gbps                   2/4/8 Gbps
HBA BIOS             Disabled        Enabled (if Boot from SAN)   Enabled (if Boot from SAN)
Execution Throttle   16 or 32*       256                          256
LUNs per Target      8               256                          256

*Depends on the HBA model, driver, and software being used.

Emulex

Table 5: Emulex HBA Settings

Parameter Name       Default Value   Fabric Value   Direct Attach Value
Link Timeout         30              30             ISE Not Supported
Node Timeout         30              30             ISE Not Supported
Queue Depth          32              254            ISE Not Supported
Queue Target / LUN   0               0              ISE Not Supported

Cisco UCS

To date there has been no need to alter the default HBA parameters for a Cisco UCS to work with an ISE-2. The adapter defaults for the HBA are shown in the figure below:



Figure 16. UCS M81KR HBA Parameter Defaults

The defaults apply to all HBA profiles, with the exception of VMware, which changes the highlighted Port Down Timeout (ms) from 30000 (30 seconds) to 10000 (10 seconds). All other values are identical between all profiles as of ISE-2 firmware 2.0.0.


ISE Performance

Thrashing

Any array of hard disks loses performance as the disks are sliced up and populated with data, because the heads require additional time to seek back and forth to the next requested block of data. This is commonly known as thrashing. The common root causes of thrashing are:

• Multiple applications accessing the same spinning media
• Frequently accessed data in LUNs at opposite ends of the storage pool
• Files at opposite ends of a file system being accessed (same LUN)
• The data access pattern of each application (sequential read / sequential write / random read / random write)

Note. The data access pattern is typically the single largest contributor to thrashing.

Minimize Thrashing

Follow the best practices below to minimize thrashing.

1. Put frequently accessed data on separate DataPacs if at all possible.

2. Separate the high traffic applications onto separate ISEs or DataPacs.

3. Use separate LUNs for frequently accessed data and create these LUNs one after the other so that they are close to each other on the disks to minimize seek time.

4. Avoid deleting LUNs and creating new ones or resizing LUNs as this may “fragment” the LUN across the disks and impact performance.

5. Run a file system defragmentation tool of some sort.
   a. Ideally this tool will put the most frequently accessed files together at the front of the file system to minimize seek time.
   b. At a minimum, files should be re-arranged so that all blocks of a file are next to the other blocks of the file.

6. Consider RAID-10 for read-intensive applications since the data can be accessed from two different sources at once.

All X-IO DataPacs ship with either 10 or 20 spinning drives, and the Hyper DataPac includes 10 solid state drives in addition to the 10 spinning drives. The ISE data placement algorithm distributes the data across these drives to maintain the performance of reading and writing to all drives in a DataPac at any point in time while maintaining the proper RAID level for each ISE volume (LUN).


Operating System Considerations

Performance and upgrade options for the ISE-2 vary depending on the operating system and applications in use. Performance can be impacted by caching mechanisms in each OS, and ISE-2 firmware upgrades may require downtime under certain conditions.

ISE-2 Upgrade Types

Online Upgrade

An online upgrade is when the ISE-2 firmware is upgraded while the servers connected to the ISE-2 are under load. This requires that the server and the applications be able to handle paths to a disk going away and coming back while the load transfers from one MRC to the other.

Offline Upgrade

An offline upgrade is when an ISE-2 firmware upgrade is performed while the connected servers are shut down or the applications that use the storage are stopped. When an HBA sees a path go away, it waits for some period of time (typically 30 seconds) to see if the path is restored and then moves I/O to an alternate path. This timeout period and the associated lack of I/O is HBA specific and not directly related to the ISE-2. If the operating system, file system, or application cannot survive without I/O for this period of time, an offline upgrade is required.

Windows 2003 R2 SP2

Multi-pathing

ISE Multi-pathing is required for running Windows Server 2003.

Disabling FUA for ISE Volumes

Windows Server operating systems have a mechanism that writes directly to the disk and instructs the disk to not use any write caching mechanisms it may have. This uses part of the SCSI block commands known as “Force Unit Access,” or FUA. This is done to protect against data loss on storage devices (USB hard drives, etc.) that do not have battery-backed caching capability. The ISE-1 and ISE-2 products support write caching and have a battery backup system to preserve that cache, so performance can be increased by disabling this mechanism. For details on this, see http://msdn.microsoft.com/en-us/library/windows/desktop/dd979523(v=vs.85).aspx.

1. Open Disk Manager.

2. Right-click the ISE LUN (ISE-1 or ISE-2).


3. Select Properties.

Figure 17. Selecting Disk Properties in Disk Manager

4. On the Disk Device Properties dialog, select the Policies tab.


Figure 18. Multi-Path Disk Device Properties

5. Ensure that the Enable write caching on the disk and Enable advanced performance check boxes are selected.

6. Click OK.

7. Repeat for each ISE volume.

Required Hot Fixes

• Microsoft Knowledge Base Article ID 950903

ISE Firmware Upgrade Options

When an ISE-2 is connected to one or more Windows Server 2003 R2 SP2 servers (the only supported versions of Windows Server 2003), a firmware upgrade is an offline process. Refer to the X-IO support matrix for specific operating system versions supported.

• All required hot-fixes should be installed.
• ISE Multi-Path Suite (ISE MP) 6.6.0.10 or later, which improves path stability during path failures or interruptions between the servers and the ISE.
• Any ISE-2 running firmware greater than 2.1.0 can be upgraded. Refer to the Release Notes of the new firmware version for any additional pre-requisites.

Windows 2008 SP2 / Windows 2008 R2 SP1

Multi-pathing

With the exception of Cisco UCS systems, X-IO does not currently support native multipath I/O for the ISE-2.


Native MPIO Registry Key Changes

For Cisco UCS systems, X-IO recommends the use of Native MPIO on Windows Server 2008 and newer operating systems. When using Active-Active Mirroring, X-IO ISE MP 6.6.0.10 or higher is required for Windows Server 2008 SP2 and Windows Server 2008 R2 SP1. X-IO ISE MP is not required for Windows Server 2012. In order to enable the Windows servers to handle an MRC failover/failback scenario or a firmware upgrade, the following registry keys need to be set as follows.

Table 6: HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters

Parameter Name                  Default Value   X-IO Value
PathVerifyEnabled               0               0
PathVerificationPeriod          30              30
PDORemovePeriod                 20              50
RetryCount                      3               10
RetryInterval                   1               1
UseCustomPathRecoveryInterval   0               1
PathRecoveryInterval            40              25
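These values can be applied from an elevated command prompt with reg.exe; a minimal sketch, assuming the keys live under the MPIO service Parameters key named in the table heading (MPIO timer changes generally require a reboot to take effect):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters" /v PDORemovePeriod /t REG_DWORD /d 50 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters" /v RetryCount /t REG_DWORD /d 10 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters" /v UseCustomPathRecoveryInterval /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters" /v PathRecoveryInterval /t REG_DWORD /d 25 /f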

Disabling FUA for ISE Volumes

Windows Server operating systems have a mechanism that writes directly to the disk and instructs the disk to not use any write caching mechanisms it may have. This uses part of the SCSI block commands known as “Force Unit Access,” or FUA. This is done to protect against data loss on storage devices (USB hard drives, etc.) that do not have a battery-backed caching capability. The ISE-1 and ISE-2 products support write caching and have a battery backup system to preserve that cache, so performance can be increased by disabling this mechanism. For details on this, see http://msdn.microsoft.com/en-us/library/windows/desktop/dd979523(v=vs.85).aspx.

1. Open Disk Manager.

2. Right-click the ISE LUN (ISE-1 or ISE-2).

3. Select Properties.



Figure 19. Selecting Disk Properties in Disk Manager

4. On the Disk Device Properties dialog, select the Policies tab.


Figure 20. Disk Properties Dialog

5. Ensure that the Enable write caching on the disk and Enable advanced performance check boxes are selected.

6. Click OK.

7. Repeat for each ISE volume.

Required Hot Fixes

Required Hot-Fixes for Windows 2008 R2 SP1

• Microsoft Knowledge Base Article ID 2718576
  • X-IO recommends the following values for the registry keys specified in this hotfix:
    • DiskPathCheckEnabled = 1
    • DiskPathCheckInterval = 25

Note. Microsoft Knowledge Base 2718576 and the registry key bullets below it apply to clusters only. There is no benefit to installing this on standalone Windows Servers.

• Microsoft Knowledge Base Article ID 2661794
• Microsoft Knowledge Base Article ID 2522766
• Microsoft Knowledge Base Article ID 2511962


• Microsoft Knowledge Base Article ID 2468345
• Microsoft Knowledge Base Article ID 2460971

Required Hot-Fixes for Windows 2008 SP2

• Microsoft Knowledge Base Article ID 968675
• Microsoft Knowledge Base Article ID 970525
• Microsoft Knowledge Base Article ID 972797
• Microsoft Knowledge Base Article ID 974878
• Microsoft Knowledge Base Article ID 976674
• Microsoft Knowledge Base Article ID 976748
• Microsoft Knowledge Base Article ID 977001
• Microsoft Knowledge Base Article ID 977153
• Microsoft Knowledge Base Article ID 977287
• Microsoft Knowledge Base Article ID 977675
• Microsoft Knowledge Base Article ID 977890
• Microsoft Knowledge Base Article ID 978157
• Microsoft Knowledge Base Article ID 979458
• Microsoft Knowledge Base Article ID 979743
• Microsoft Knowledge Base Article ID 979764
• Microsoft Knowledge Base Article ID 981357
• Microsoft Knowledge Base Article ID 2406705

ISE Firmware Upgrade Options

There are some differences between cluster nodes and standalone servers when it comes to upgrading the ISE-2. Check "Online Upgrade (clusters)" below to perform an online upgrade on cluster nodes.

Online Upgrade (standalone servers)

• ISE firmware 2.2.3 or later.
• All hot fixes listed must be installed.
• ISE Multi-Path Suite (ISE-MP) 6.6.0.10 or newer (improves path stability); exception for Cisco UCS.

Offline Upgrade

• ISE-2 Firmware 2.2.1 or older must be upgraded offline due to the amount of time required to perform certain parts of the upgrade.
• All hot fixes listed must be installed.
• ISE Multi-Path Suite (ISE-MP) 6.6.0.10 or newer (improves path stability); exception for Cisco UCS.

Online Upgrade (clusters)

• ISE firmware 2.2.3 or later.
• All hot fixes listed must be installed.
• ISE Multi-Path Suite (ISE-MP) 6.6.0.10 or newer (improves path stability); exception for Cisco UCS.
• No boot from SAN volumes—Some issues have been reported that seem to occur only on cluster nodes booting from SAN. Until the cause is determined and a workaround or a fix is available, X-IO recommends performing offline upgrades with cluster nodes configured to boot from SAN.
• “Cluster Service Resource Maximum Restart” count set to be equal to three times the number of ISE-2s + 1. As an example, if there are four ISEs that contribute storage to the cluster, the restart count would be set to 13 ((3 * 4) + 1).


• Required for the duration of the upgrade process—can be set to other values for normal operations.

• X-IO recommends using the cluster maintenance mode to perform upgrades.

Configuring Cluster Disk Resource Failover Policies

Follow the steps below to configure the cluster resource failover policies.

1. Open the Failover Cluster Manager GUI.

2. Select the Storage node in the left-hand navigation pane.

3. Select an ISE-2 clustered disk, right-click, and then select Properties from the drop-down menu.

Figure 21. Cluster Disk Properties

4. Select the Policies tab.


Figure 22. Cluster Disk Policies Tab

5. Change the Maximum restarts in the specified period value. This should be equal to 1 + three times the number of ISE-2s in the cluster. In Figure 22 the value is set to 4, which is appropriate for a single ISE-2.

6. Repeat for each ISE-2 disk in the cluster.
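As an alternative to the GUI steps above, the same value can be set with the Failover Clusters PowerShell module on Windows Server 2008 R2 and later; a minimal sketch for one disk resource, using a hypothetical resource name and the four-ISE example value of 13 (verify the result in the GUI afterwards):

Import-Module FailoverClusters

# "Maximum restarts in the specified period" is exposed as the RestartThreshold property
$disk = Get-ClusterResource "Cluster Disk 1"
$disk.RestartThreshold = 13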

Configuring Clustered Services and Applications

1. Expand the Services and Applications node in the left-hand navigation pane.

2. Select each service or application, right-click, and then select Properties from the drop-down.

3. Select the Failover tab.


Figure 23. Cluster Application Failover

4. Change the Maximum failures in the specified period. This value should be equal to 1 + three times the number of ISE-2s in the cluster.

5. In the main pane for each clustered application, set the failover properties for each resource.


Figure 24. Clustered Application Resource List

6. Right-click each resource in the resource list (see Figure 24), select Properties, and then select the Policies tab.

7. Change the Maximum restarts in the specified period to the same value used for the cluster disk resource.

8. Repeat these steps for each resource in each clustered service or application.

Setting Maintenance Mode During an Upgrade

X-IO strongly recommends placing clustered storage into maintenance mode, which prevents failover of cluster disks while running ISE firmware upgrades. The following process demonstrates how to enable and then disable maintenance mode.

1. Open Failover Cluster Manager.

2. Navigate to the Storage node in the left-hand navigation pane.

3. Right-click each ISE-2 volume, select More Actions, then select Turn On Maintenance Mode for this disk.

4. The GUI shows that the disk is in maintenance mode.

Figure 25. Disk Maintenance Mode

5. Once all ISE-2s are upgraded, turn off maintenance mode using the same process (select Turn Off Maintenance Mode for this disk).
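Maintenance mode can also be toggled from PowerShell on Windows Server 2008 R2 and later; a minimal sketch using a hypothetical disk resource name:

Import-Module FailoverClusters

# Turn maintenance mode on for an ISE-2 cluster disk before the firmware upgrade
Suspend-ClusterResource "Cluster Disk 1"

# ...upgrade the ISE firmware...

# Turn maintenance mode back off once all ISE-2s are upgraded
Resume-ClusterResource "Cluster Disk 1"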


Windows 2012

ISE-1 will require 1.8.0 firmware; ISE-2 will require 2.4.0.

Required Hot-Fixes for Windows 2012

Microsoft Knowledge Base Article 2779768

Multi-pathing

X-IO recommends the use of Windows Native MPIO for ISE storage. In order to improve the availability of the storage during common path loss scenarios, including firmware upgrades, the following MPIO DSM registry key changes are recommended:

Parameter Name                  Default Value   X-IO Value
PathVerifyEnabled               0               0
PathVerificationPeriod          30              30
PDORemovePeriod                 20              50
RetryCount                      3               10
RetryInterval                   1               1
UseCustomPathRecoveryInterval   0               1
PathRecoveryInterval            40              25

Claiming MPIO disks will cause the host to reboot. To configure native MPIO to claim the ISE-1 and ISE-2 volumes, use the command lines below. Note that this assumes that the MPIO feature is already installed, and that the items in the quotes are case sensitive.

For ISE-1:

mpclaim -r -i -d "XIOTECH ISE1400"

For ISE-2:

mpclaim -r -i -d "XIOTECH ISE2400"
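After the reboot, the claim can be checked from a command prompt; mpclaim can list the disks that MPIO currently manages:

mpclaim -s -d

Each ISE volume should appear once as an MPIO disk rather than as multiple separate disks, one per path.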

Linux

Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES)

The following are best-practice recommendations for RHEL and SLES.

HBA Configuration

For direct attached systems, the queue depth parameter ql2xmaxqdepth should be 256, and the login timeout ql2xlogintimeout should be 5. Fabric connected systems should use defaults.

In order for these parameters to persist across reboots, follow the instructions for rebuilding the RAMDISK image in QLogic's Linux driver documentation on QLogic's web site. See the "Deploying the Driver" section and find the related OS version to follow the instructions for "Automatically load the driver by rebuilding the RAM disk." This requires a reboot to take effect.
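A minimal sketch of the driver options for a direct attached host using the qla2xxx driver, assuming the options are placed in a file under /etc/modprobe.d (the file name itself is arbitrary):

# /etc/modprobe.d/qla2xxx.conf
options qla2xxx ql2xmaxqdepth=256 ql2xlogintimeout=5

After creating the file, rebuild the RAM disk as described in the QLogic documentation referenced above and reboot for the options to take effect.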



Multi-path Settings

Refer to the RHEL or SLES documentation for the chosen multi-path product for additional information on these parameters. Device-specific modifications must be made in the /etc/multipath.conf file. Verify that the polling interval is set to 10 in the defaults section of the /etc/multipath.conf file, as follows:

defaults {
    polling_interval 10
}

The following are examples of what the device entries would look like in the multipath.conf file:

Note. For ISE-1, use product ID ISE1400; for ISE-2, use product ID ISE2400.


RHEL/CentOS/Oracle EL 4.x

devices {
    device {
        vendor                "XIOTECH"
        product               "ISE1400"
        path_grouping_policy  multibus
        getuid_callout        "/sbin/scsi_id -g -u -s /block/%n"
        path_checker          tur
        prio_callout          "none"
        path_selector         "round-robin 0"
        failback              immediate
        no_path_retry         12
        user_friendly_names   yes
    }
}

RHEL/CentOS/Oracle EL 5.x

devices {
    device {
        vendor                "XIOTECH"
        product               "ISE1400"
        path_grouping_policy  multibus
        getuid_callout        "/sbin/scsi_id -g -u -s /block/%n"
        path_checker          tur
        prio_callout          "none"
        path_selector         "round-robin 0"
        failback              immediate
        no_path_retry         12
        user_friendly_names   yes
    }
}

RHEL/CentOS/Oracle EL 6.0/6.1

defaults {
    udev_dir              /dev
    polling_interval      10
    selector              "round-robin 0"
    path_grouping_policy  multibus
    getuid_callout        "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    # prio                alua
    path_checker          tur
    # rr_min_io           100
    # max_fds             8192
    # rr_weight           priorities
    # failback            immediate
    # no_path_retry       fail
    fast_io_fail_tmo      5
    dev_loss_tmo          600
    user_friendly_names   yes
}

devices {
    device {
        vendor                "XIOTECH"
        product               "ISE1400"
        path_grouping_policy  multibus
        prio_callout          "none"
        path_checker          tur
        rr_min_io             100
        rr_weight             priorities
        failback              immediate
        dev_loss_tmo          infinity
        no_path_retry         12
        user_friendly_names   yes
    }
}


RHEL/CentOS/Oracle EL 6.2

defaults {
    udev_dir              /dev
    polling_interval      10
    selector              "round-robin 0"
    path_grouping_policy  multibus
    getuid_callout        "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    # prio                alua
    path_checker          tur
    # rr_min_io           100
    # max_fds             8192
    # rr_weight           priorities
    # failback            immediate
    # no_path_retry       fail
    fast_io_fail_tmo      5
    dev_loss_tmo          infinity
    user_friendly_names   yes
}

devices {
    device {
        vendor                "XIOTECH"
        product               "ISE1400"
        path_grouping_policy  multibus
        prio_callout          "none"
        path_checker          tur
        rr_min_io             100
        rr_weight             priorities
        failback              immediate
        fast_io_fail_tmo      5
        dev_loss_tmo          infinity
        no_path_retry         12
        user_friendly_names   yes
    }
}

SLES 10 SPx

devices {
    device {
        vendor                "XIOTECH"
        product               "ISE1400"
        path_grouping_policy  multibus
        getuid_callout        "/sbin/scsi_id -g -u -s /block/%n"
        path_checker          tur
        prio_callout          "none"
        path_selector         "round-robin 0"
        failback              immediate
        no_path_retry         12
        user_friendly_names   no
    }
}

SLES 11

devices {
    device {
        vendor                "XIOTECH"
        product               "ISE1400"
        path_grouping_policy  multibus
        getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
        path_checker          tur
        prio_callout          "none"
        path_selector         "round-robin 0"
        failback              immediate
        no_path_retry         12
        user_friendly_names   no
    }
}

defaults {
    udev_dir              /dev
    polling_interval      10
    path_selector         "round-robin 0"
    path_grouping_policy  multibus
    getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
    path_checker          tur
    rr_min_io             100
    failback              immediate
    no_path_retry         12
    dev_loss_tmo          150
    user_friendly_names   no
}
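After editing /etc/multipath.conf, the changes can be applied and checked with commands along these lines (a sketch; exact service handling varies by distribution and release). Ask the running multipathd to re-read its configuration:

# multipathd -k"reconfigure"

Then verify that the ISE paths are grouped and active as expected:

# multipath -ll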


Unix

AIX

X-IO does not support online upgrade of ISE-2s and Hyper ISEs on AIX operating systems.

VMware ESXi

Some general guidelines for both ISE-1 and ISE-2 with VMware ESXi are as follows:

• X-IO recommends using fixed (preferred path) for the native multi-path settings for ISE LUNs.
• X-IO also supports the use of Round-Robin MPIO for ESXi.
• For Windows VMs, use the preferred allocation size settings.
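Either policy can also be applied from the ESXi 5.x command line through the native multipathing plug-in. The following is a sketch only; the naa identifier is a placeholder for an actual ISE LUN. To set the fixed (preferred path) policy on a LUN:

# esxcli storage nmp device set --device naa.6001f931005c800001b6000200000000 --psp VMW_PSP_FIXED

Or, where round-robin is preferred:

# esxcli storage nmp device set --device naa.6001f931005c800001b6000200000000 --psp VMW_PSP_RR

To confirm the policy in effect:

# esxcli storage nmp device list --device naa.6001f931005c800001b6000200000000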

Refer to the VMware Knowledge Base for more information about Boot From SAN using ESX systems.

Note. This requires QLogic driver qla2xxx-934.5.4.0-855364 or the inbox driver from Emulex.

Note. Round-robin load balancing is currently required for this configuration.

ESXi 5.x

Refer to "HBA Configuration" in the Linux section above to ensure that the HBA parameters are set correctly.

A reboot of the ESXi host will most likely be required when changing these parameters. If you are using a VMware HA configuration, cluster nodes can be rebooted one at a time. The assumption is that virtual machines are migrated to other nodes while doing maintenance on a given cluster node. Verify the changes have been made and then migrate virtual machines back to their preferred nodes if needed after the reboot.

Direct Attach Login Timeout Values for QLogic HBAs

To facilitate faster logins following link down or link up events in Direct Attach configurations with ISE-2, the following ql2xlogintimeout parameter for QLogic HBAs is recommended for use with ESXi 5.

# esxcli system module parameters set -p ql2xlogintimeout=5 -m qla2xxx

Following the host reboot, use the following guidelines to check the ql2xlogintimeout parameter and ensure that it is set properly:

# esxcli system module parameters list -m qla2xxx |grep ql2xlogintimeout

ql2xlogintimeout int 5 Login timeout value in seconds.

HBA Queue Depth Considerations

Review the VMware Knowledge Base references (see "External References") on setting HBA queue depth and configuring VM disk request limits, whether the goal is to increase the queue depth or to throttle it down to prevent overloading the storage.



Changing the Queue Depth for QLogic and Emulex HBAs

For example, to set the ql2xmaxqdepth parameter to 256, use the following command on each ESXi node:

# esxcli system module parameters set -p ql2xmaxqdepth=256 -m qla2xxx

Following the host reboot, check the ql2xmaxqdepth parameter to see if it is set properly:

# esxcli system module parameters list -m qla2xxx | grep qdepth

ql2xmaxqdepth int 256 Maximum queue depth to report for target devices.

Note. Please use this parameter with caution as it CAN affect other storage used by the same adapters.

Change Maximum Outstanding Disk Requests for Virtual Machines

If you adjusted the LUN queue depth, change the Disk.SchedNumReqOutstanding parameter so that its value matches the queue depth. The parameter controls the maximum number of outstanding requests that all virtual machines can issue to the LUN. Change this parameter only when you have multiple virtual machines active on a LUN. The parameter does not apply when only one virtual machine is active. In that case, the bandwidth is controlled by the queue depth of the storage adapter.

Procedure

1. In the vSphere Client, select the host in the inventory panel.

2. Click the Configuration tab and click Advanced Settings under Software.

3. Click Disk in the left panel and scroll down to Disk.SchedNumReqOutstanding.

4. Change the parameter value to the number of your choice and click OK.

This change can impact disk bandwidth scheduling, but experiments have shown improvements for disk-intensive workloads.
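On ESXi 5.0 and 5.1 the same setting can also be changed from the command line; the following sketch assumes the LUN queue depth was raised to 256. (In ESXi 5.5 and later this became a per-device setting, so refer to the VMware documentation for those releases.)

# esxcli system settings advanced set --option /Disk/SchedNumReqOutstanding --int-value 256

Verify the value:

# esxcli system settings advanced list --option /Disk/SchedNumReqOutstanding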

Raw Device Maps (RDMs)

When using RDMs with VMware ESXi 5.0, the SCSI timeouts on RDMs should be adjusted on Linux Guest OS versions. The default SCSI timeouts for RDMs on Linux Guest OS versions are set to 30 seconds. X-IO recommends setting this value to at least 120 seconds to cover certain conditions, such as controller firmware upgrades for the ISE-2 platform.

To change these values, the administrator can create a udev rules file on the Linux Guest OS under the /etc/udev/rules.d directory. In the example below, the file is named 99-scsi-disk.rules. This does not affect virtual disk timeout values, which are set to 180 seconds by default.

# cat 99-scsi-disk.rules

# Addition by USER to test RDM scsi timeout settings.
ACTION=="add", SUBSYSTEMS=="scsi", SYSFS{type}=="0|7|14", RUN+="/bin/sh -c 'echo 120 >/sys/$DEVPATH/timeout'"

A Linux Guest OS with existing RDMs may need to be rebooted for this to take effect.

RDMs look like a normal physical disk when viewed from within the Guest OS with the /usr/bin/lsscsi command:

# lsscsi


[1:0:0:0]  cd/dvd  NECVMWar VMware IDE CDR10  1.00  /dev/sr0
[2:0:0:0]  disk    VMware   Virtual disk      1.0   /dev/sda
[2:0:1:0]  disk    XIOTECH  ISE2400           A     /dev/sdb
[2:0:2:0]  disk    XIOTECH  ISE2400           A     /dev/sdc

Once the parameter is properly set, it can be verified in the /sys/class/scsi_disk directory structure. For example:

# cat /sys/class/scsi_disk/2:0:1:0/device/timeout
120

Mirror Special Considerations

ESXi 5.1 Cluster Active-Active

1. ESXi 5.1 only (no DAS).

2. Up to a 6-node cluster is supported.

Note. Windows VMs are not supported at this time for this configuration.

3. Before breaking an active-active mirror, switch to using a fixed path on each LUN that will participate in the break.

4. The preferred fixed path must be a path on the mirror master ISE.

5. The preferred path must be set on each cluster node.

For example (selected below is a preferred path to the mirror master for the VMirror), from "show vmirror":

VMirror - tc02esxclu_shared_DS8:
Status : Operational (None)
Type : High Availability (long lived)
GUID : 6001F931005C8000024F000400000000
Created : Wed Dec 19 22:02:43 2012
Synch Progress: 100%
SameId : yes
VDisk Member Count: 2
Master VDisk Member ISE ID: 2000001F931005C8
Master VDisk Member GUID: 6001F931005C800001B6000200000000
Member VDisks:
Member 1: ISE ID: 2000001F931005C8, VDisk GUID: 6001F931005C800001B6000200000000, LDID: 23, Status: Operational (None)
Member 2: ISE ID: 2000001F93100150, VDisk GUID: 6001F931001500000441000200000000, LDID: 143, Status: Operational (None)

--------------------------------------------------------------------------

Matching LUN from within vCenter Server:


Figure 26. Matching LUN
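When setting the preferred fixed path from the command line rather than the vSphere Client, a sketch along the following lines can be used on each cluster node; the device and path names are placeholders, and the chosen path must terminate on the mirror master ISE. Switch the mirrored LUN to the fixed path selection policy:

# esxcli storage nmp device set --device naa.6001f931005c8000024f000400000000 --psp VMW_PSP_FIXED

Set the preferred path to a path on the mirror master ISE:

# esxcli storage nmp psp fixed deviceconfig set --device naa.6001f931005c8000024f000400000000 --path vmhba2:C0:T1:L8

Confirm the preferred path:

# esxcli storage nmp psp fixed deviceconfig get --device naa.6001f931005c8000024f000400000000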

ESXi 5.1 BFS Cluster Upgrades with Active-Active Mirrors

Requires:

1. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033696

2. QLogic driver: 934.5.4.0-855364

3. Emulex driver is inbox

4. Must use Round Robin

ESXi 5.1 Cluster Active-Active Mirror Breaks

1. Fixed paths must be used when breaking mirrors.


Using IBM SVC with ISE Storage

Physical Topology

An IBM SVC typically connects to one or more storage products via Fibre Channel interfaces and presents slices of that storage to hosts as logical disks.

Figure 27. Physical Topology


SVC Logical Topology

Each LUN or volume presented from an ISE to the SVC is called an MDISK by the SVC. One or more MDISKs are placed in an MDISK POOL, which is where SVC virtual disks (also called volumes) are created and presented to one or more hosts. SVC volumes will stripe across the underlying MDISKs in the MDISK POOL. The stripe width depends on the options set by the SVC administrator.

Figure 28. Logical Topology

If any MDISK goes offline, the entire MDISK POOL goes offline. If the MDISK POOL goes offline, all SVC volumes hosted by the MDISK POOL are also offline and hosts will lose access.
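As a rough illustration of this layering (the names, sizes, and extent values below are hypothetical, and exact syntax varies with SVC code level), the SVC CLI steps look roughly like the following. List the ISE volumes the SVC has discovered as MDISKs:

svcinfo lsmdisk

Create an MDISK POOL containing only MDISKs from a single ISE DataPac:

svctask mkmdiskgrp -name ISE2_DataPac1 -ext 256 -mdisk mdisk0:mdisk1

Create a striped SVC volume in that pool and map it to a host:

svctask mkvdisk -mdiskgrp ISE2_DataPac1 -iogrp 0 -size 500 -unit gb -vtype striped -name vol01
svctask mkvdiskhostmap -host host01 vol01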

Zoning

Single Fabric

Figure 29. Single Fabric

Zone Purpose: Storage Access and Inter-Controller Communication
Zone Members:
• All SVC ports
• At least one port from each MRC on each ISE-2
• Both MRC ports on each ISE-1

Zone Purpose: Host access to SVC Volumes
Zone Members:
• All SVC ports
• At least one HBA port


Redundant Fabric

Zone Purpose: Storage Access and Inter-Controller Communication
Fabric A Members:
• SVC ports 1 and 2
• At least one port from each MRC on each ISE-2
• Both MRC ports on each ISE-1
Fabric B Members:
• SVC ports 3 and 4
• At least one port from each MRC on each ISE-2
• Both MRC ports on each ISE-1

Zone Purpose: Host access to SVC Volumes
Fabric A Members:
• SVC ports 1 and 2
• At least one host HBA port
Fabric B Members:
• SVC ports 3 and 4
• At least one host HBA port

Zoning Limitations

• In order to prevent single points of failure, it is recommended to zone all connected ISE MRC ports to the SVC controller ports.

• It is possible to have the ISE shared with an SVC and other systems on the same fabric. This will not cause a problem for the SVC or the ISE as long as the LUNs are presented to ONLY one system. If a volume from an ISE is presented to both the SVC as an MDISK and to a host as a LUN, data corruption will occur.

• To avoid a multi-pathing conflict, it is recommended to have a host view storage from either the SVC or from the ISE, but not from both at the same time.

ISE Volumes

Volume Size

There are no hard and fast rules on the size of ISE volumes to use. To maintain consistent performance characteristics of the MDISK POOL, it is recommended to use ISE volumes of equal size within the same MDISK POOL. Different MDISK POOLs can have different LUN sizes behind them with no performance impact.

Volume Placement

While it is technically possible to create a single large MDISK POOL containing all LUNs from all ISEs behind an SVC controller pair, doing so is risky because the failure of a single ISE or a single DataPac will take the entire configuration offline.

The recommended practice is to create multiple MDISK POOLs, each of which contains only ISE volumes from a single ISE DataPac.

Host Queue Depth Settings

Each SVC controller pair can handle a maximum number of concurrent (queued) I/O commands, which varies by model. The sum of the queue depth settings for all hosts connected to the SVC should not exceed this number; for example, using purely hypothetical figures, an SVC pair rated for 10,000 concurrent commands with 20 attached hosts would allow a per-host queue depth of no more than 500. No matter how many ISEs are placed behind the SVC, performance will be limited to what the SVC can handle.



External References

Storage and Networking Industry Association (SNIA) Dictionary
http://www.snia.org/education/dictionary

IBM Fiber Cable Ratings
https://www.ibm.com/developerworks/mydeveloperworks/blogs/anthonyv/entry/don_t_say_green_say_aqua1?lang=en

VMware Queue Depth Considerations
http://kb.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1267

VMware Maximum Disk Requests per VM
http://kb.VMware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1268

Red Hat Enterprise Linux (RHEL) MPIO Documentation
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/index.html

SUSE Linux Enterprise Server (SLES) MPIO Documentation
http://www.novell.com/documentation

QLogic Red Hat Enterprise Linux 6.x Driver
http://filedownloads.qlogic.com/files/Driver/81208/readme_FC-FCoE_Inbox_driver_update.txt
