
Dell/EMC AX4-5 Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters

Hardware Installation and Troubleshooting Guide


Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

____________________

Information in this document is subject to change without notice. © 2008-2010 Dell Inc. All rights reserved.

Reproduction of these materials in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.

Trademarks used in this text: Dell, the DELL logo, PowerEdge, and PowerVault are trademarks of Dell Inc.; Active Directory, Microsoft, Windows, Windows Server, and Windows NT are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries; EMC, Navisphere, and PowerPath are registered trademarks and MirrorView, SAN Copy, and SnapView are trademarks of EMC Corporation.

Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

January 2010 Rev A01


Contents

1 Introduction . . . 7
    Cluster Solution . . . 8
    Cluster Hardware Requirements . . . 8
        Cluster Nodes . . . 9
        Cluster Storage . . . 10
    Supported Cluster Configurations . . . 11
        Direct-Attached Cluster . . . 11
        SAN-Attached Cluster . . . 12
    Other Documents You May Need . . . 13

2 Cabling Your Cluster Hardware . . . 15
    Cabling the Mouse, Keyboard, and Monitor . . . 15
    Cabling the Power Supplies . . . 15
    Cabling Your Cluster for Public and Private Networks . . . 17
        Cabling the Public Network . . . 19
        Cabling the Private Network . . . 19
        NIC Teaming . . . 20
    Cabling the Storage Systems . . . 20
        Cabling Storage for Your Direct-Attached Cluster . . . 20
        Cabling Storage for Your SAN-Attached Cluster . . . 25
        Cabling a SAN-Attached Cluster to an AX4-5F Storage System . . . 27

3 Preparing Your Systems for Clustering . . . 35
    Cluster Configuration Overview . . . 35
    Installation Overview . . . 37
    Installing the Fibre Channel HBAs . . . 38
        Installing the Fibre Channel HBA Drivers . . . 38
    Installing EMC PowerPath . . . 38
    Implementing Zoning on a Fibre Channel Switched Fabric . . . 39
        Using Worldwide Port Name Zoning . . . 39
    Installing and Configuring the Shared Storage System . . . 41
        Installing Navisphere Storage System Initialization Utility . . . 41
        Installing the Expansion Pack Using Navisphere Express . . . 42
        Installing Navisphere Server Utility . . . 43
        Registering a Server With a Storage System . . . 43
        Assigning the Virtual Disks to Cluster Nodes . . . 44
        Advanced or Optional Storage Features . . . 44
    Installing and Configuring a Failover Cluster . . . 46

A Troubleshooting . . . 47

B Cluster Data Form . . . 55

C Zoning Configuration Form . . . 57

Index . . . 59

Introduction

A failover cluster combines specific hardware and software components to provide enhanced availability for applications and services that are run on the cluster. A failover cluster is designed to reduce the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable. It is recommended that you use redundant components in your cluster, such as server and storage power supplies, connections between the nodes and the storage array(s), and connections to client systems or to other servers in a multi-tier enterprise application architecture.

This document provides information and specific configuration tasks that enable you to configure your Microsoft® Windows Server® failover cluster with Dell/EMC AX4-5F (2 Fibre Channel ports per Storage Processor) or Dell/EMC AX4-5FX (4 Fibre Channel ports per Storage Processor) storage array(s).

NOTE: Throughout this document, Dell/EMC AX4-5 refers to Dell/EMC AX4-5F and Dell/EMC AX4-5FX storage arrays.

NOTE: Throughout this document, Windows Server 2008 refers to Windows Server 2008 and Windows Server 2008 R2 operating systems.

For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell™ Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com/manuals. For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com/manuals.

For a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha.


Cluster Solution

Your cluster implements a minimum of two nodes and a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008), and provides the following features:

• 8-Gbps and 4-Gbps Fibre Channel technologies

• High availability of resources to network clients

• Redundant paths to the shared storage

• Failure recovery for applications and services

• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline

Implementing Fibre Channel technology in a cluster provides the following advantages:

• Flexibility — Fibre Channel allows a distance of up to 10 km between switches without degrading the signal.

• Availability — Fibre Channel components use redundant connections, providing multiple data paths and greater availability for clients.

• Connectivity — Fibre Channel allows more device connections than SCSI. Because Fibre Channel devices are hot-swappable, you can add or remove devices from the nodes without bringing down the cluster.

Cluster Hardware Requirements

Your cluster requires the following hardware components:

• Cluster nodes

• Cluster storage


Cluster Nodes

Table 1-1 lists the hardware requirements for the cluster nodes.

NOTE: For more information about supported systems, HBAs and operating system variants, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha.

Table 1-1. Cluster Node Requirements

Cluster nodes: A minimum of two identical Dell™ PowerEdge™ servers are required. The maximum number of nodes that is supported depends on the variant of the Windows Server operating system used in your cluster, and on the physical topology in which the storage system and nodes are interconnected.

RAM: The variant of the Windows Server operating system that is installed on your cluster nodes determines the minimum required amount of system RAM.

HBA ports: Two Fibre Channel HBAs per node, unless the server employs an integrated or supported dual-port Fibre Channel HBA. Where possible, place the HBAs on separate PCI buses to improve availability and performance.

NICs (public and private networks): At least two NICs: one NIC for the public network and another NIC for the private network.
NOTE: It is recommended that the NICs on each public network are identical, and that the NICs on each private network are identical.

Internal disk controller: One controller connected to at least two internal hard drives for each node. Use any supported RAID controller or disk controller. Two hard drives are required for mirroring (RAID 1) and at least three are required for disk striping with parity (RAID 5).
NOTE: It is strongly recommended that you use hardware-based RAID or software-based disk-fault tolerance for the internal drives.


Cluster Storage

Cluster nodes can share access to external storage systems. However, only one of the nodes can own any redundant array of independent disks (RAID) volume in the external storage system at any time. Microsoft Cluster Services (MSCS) controls which node has access to each RAID volume in the shared storage system.

Table 1-2 lists supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.

The storage systems work together with the following hardware components:

• Disk processor enclosure (DPE)—Configured with storage processors that control the RAID arrays in the storage system and provide storage functionalities such as snapshots, LUN masking, and remote mirroring.

• Disk array enclosure (DAE)—Provides additional storage and is attached to the disk processor enclosure.

• Standby power supply (SPS)—Provides backup power to protect the integrity of the disk processor write cache. The SPS is connected to the disk processor enclosure.

Table 1-2. Cluster Storage Requirements

Supported storage systems: One to four supported Dell/EMC storage systems. For specific storage system requirements, see Table 1-3.

Cluster nodes: All nodes must be directly attached to a single storage system or attached to one or more storage systems through a SAN.

Multiple clusters and stand-alone systems: Can share one or more supported storage systems using optional software that is available for your storage system. See "Installing and Configuring the Shared Storage System" on page 41.


Table 1-3 lists the hardware requirements for the disk processor enclosure (DPE), disk array enclosures (DAEs), and SPS.

NOTE: Ensure that the core software version running on the storage system is supported by Dell. For specific version requirements, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Cluster website at dell.com/ha.

Supported Cluster Configurations

Direct-Attached Cluster

In a direct-attached cluster, all nodes of the cluster are directly attached to a single storage system. In this configuration, the RAID controllers (or storage processors) on the storage systems are connected by cables directly to the Fibre Channel HBA ports in the nodes.

Figure 1-1 shows a basic direct-attached, single-cluster configuration.

Table 1-3. Dell/EMC Storage System Requirements

Storage system: AX4-5
Minimum required storage: 1 DPE with at least 4 and up to 12 hard drives
Possible storage expansion: Up to 3 DAEs with a maximum of 12 hard drives each
SPS: The first SPS is required; the second SPS is optional


Figure 1-1. Direct-Attached, Single-Cluster Configuration

EMC® PowerPath® Limitations in a Direct-Attached Cluster

EMC PowerPath provides failover capabilities and multiple path detection as well as dynamic load balancing between multiple ports on the same storage processor. However, direct-attached clusters supported by Dell connect to a single port on each storage processor in the storage system. Because of the single port limitation, PowerPath can provide only failover protection, not load balancing, in a direct-attached configuration.
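NOTE: The following Python sketch is illustrative only; the path data and the selection policy are simplified assumptions, not PowerPath code. It shows why a single visible port per storage processor permits failover but not load balancing, while multiple visible ports on the owning SP allow I/O to rotate across them.

```python
from itertools import cycle

def build_path_selector(paths_by_sp, owning_sp):
    """Return an iterator over usable paths. Illustrative only: with one
    healthy port on the owning SP there is nothing to balance across; the
    alternate SP is used only when every owning-SP path has failed."""
    active = [p for p in paths_by_sp[owning_sp] if p["healthy"]]
    if not active:
        peer = "SP-B" if owning_sp == "SP-A" else "SP-A"
        active = [p for p in paths_by_sp[peer] if p["healthy"]]
    if not active:
        raise RuntimeError("no healthy path to the storage system")
    return cycle(active)  # round-robin when more than one port is usable

# Direct-attached example: one port per SP, so only failover is possible.
direct = {"SP-A": [{"port": "A0", "healthy": True}],
          "SP-B": [{"port": "B0", "healthy": True}]}
print(next(build_path_selector(direct, "SP-A"))["port"])   # always A0

# SAN-attached example: two ports on the owning SP, so I/O can rotate.
san = {"SP-A": [{"port": "A0", "healthy": True}, {"port": "A1", "healthy": True}],
       "SP-B": [{"port": "B0", "healthy": True}]}
balanced = build_path_selector(san, "SP-A")
print(next(balanced)["port"], next(balanced)["port"])       # A0 A1
```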

SAN-Attached Cluster

In a SAN-attached cluster, all of the nodes are attached to a single storage system or to multiple storage systems through a SAN using redundant switch fabrics. SAN-attached clusters are superior to direct-attached clusters in configuration flexibility, expandability, and performance.

Figure 1-2 shows a SAN-attached cluster.


Figure 1-2. SAN-Attached Cluster

Other Documents You May Need

CAUTION: The safety information that is shipped with your system provides important safety and regulatory information. Warranty information may be included within this document or as a separate document.

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.

• The Rack Installation Guide included with your rack solution describes how to install your system into a rack.

• The Getting Started Guide provides an overview of initially setting up your system.

• The Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2003 operating system.

• The Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2008 operating system.


• The Dell Cluster Configuration Support Matrices provides a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster.

• The HBA documentation provides installation instructions for the HBAs.

• Systems management software documentation describes the features, requirements, installation, and basic operation of the software.

• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.

• The Dell PowerVault™ tape library documentation provides information for installing, troubleshooting, and upgrading the tape library.

• The EMC PowerPath documentation that came with your HBA kit(s) and the Dell/EMC Storage Enclosure User’s Guides.

• Updates are sometimes included with the system to describe changes to the system, software, and/or documentation.

NOTE: Always read the updates first because they often supersede information in other documents.

• Release notes or readme files may be included to provide last-minute updates to the system or documentation, or advanced technical reference material intended for experienced users or technicians.


Cabling Your Cluster Hardware

NOTE: To configure Dell blade server modules in a Dell™ PowerEdge™ cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.

Cabling the Mouse, Keyboard, and Monitor

When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes. See the documentation included with your rack for instructions on cabling each node’s connections to the switch box.

Cabling the Power Supplies

Refer to the documentation for each component in your cluster solution to ensure that the specific power requirements are satisfied.

The following guidelines are recommended to protect your cluster solution from power-related failures:

• For nodes with multiple power supplies, plug each power supply into a separate AC circuit.

• Use uninterruptible power supplies (UPS).

• For some environments, consider having backup generators and power from separate electrical substations.

Figure 2-1 and Figure 2-2 illustrate recommended methods for power cabling for a cluster solution consisting of two PowerEdge systems and two storage systems. To ensure redundancy, the primary power supplies of all the components are grouped onto one or two circuits and the redundant power supplies are grouped onto a different circuit.
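NOTE: The following Python sketch is illustrative only; the component names and supply counts are hypothetical. It applies the guideline above by placing the first (primary) supply of each component on one AC circuit and any redundant supplies on a second circuit.

```python
def plan_power_circuits(components):
    """Assign the primary supply of each component to circuit 1 and any
    redundant supplies to circuit 2. Illustrative only."""
    plan = {"AC circuit 1 (primary)": [], "AC circuit 2 (redundant)": []}
    for name, supply_count in components.items():
        for index in range(supply_count):
            circuit = "AC circuit 1 (primary)" if index == 0 else "AC circuit 2 (redundant)"
            plan[circuit].append(f"{name} power supply {index + 1}")
    return plan

# Hypothetical cluster: two PowerEdge nodes with two supplies each and
# one AX4-5 storage system with two SPS units.
for circuit, supplies in plan_power_circuits(
        {"node 1": 2, "node 2": 2, "AX4-5 SPS": 2}).items():
    print(circuit, "->", ", ".join(supplies))
```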


Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems and One Standby Power Supply (SPS) in an AX4-5 Storage System

(Figure 2-1 callouts: primary power supplies on one AC power strip or AC PDU [not shown]; redundant power supplies on a second AC power strip or AC PDU [not shown]; SPS.)

NOTE: This illustration is intended only to demonstrate the power distribution of the components.


Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems and Two SPSs in an AX4-5 Storage System

Cabling Your Cluster for Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node, as described in Table 2-1.

NOTE: To configure Dell blade server modules in a Dell PowerEdge cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com/manuals.


Figure 2-3 shows an example of cabling in which dedicated network adapters in each node are connected to each other (for the private network) and the remaining network adapters are connected to the public network.

Figure 2-3. Example of Network Cabling Connection

Table 2-1. Network Connections

Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.

Private network: A dedicated connection for sharing cluster health and status information only.


Cabling the Public Network

Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.

Cabling the Private Network

The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes two possible private network configurations.

NOTE: For more information on the supported cable types, see your system or NIC documentation.

Table 2-2. Private Network Hardware Components and Connections

Method: Network switch
Hardware components: Gigabit or 10 Gigabit Ethernet network adapters and switches
Connection: Depending on the hardware, connect the CAT5e or CAT6 cables, the multimode optical cables with LC (Local Connector) connectors, or the twin-ax cables from the network adapters in the nodes to a switch.

Method: Point-to-Point (two-node clusters only)
Hardware components: Gigabit or 10 Gigabit Ethernet network adapters with RJ-45 connectors
Connection: Connect a standard CAT5e or CAT6 Ethernet cable between the network adapters in both nodes.

Hardware components: 10 Gigabit Ethernet network adapters with SFP+ connectors
Connection: Connect a twin-ax cable between the network adapters in both nodes.

Hardware components: Optical Gigabit or 10 Gigabit Ethernet network adapters with LC connectors
Connection: Connect a multi-mode optical cable between the network adapters in both nodes.


Using Dual-Port Network Adapters

You can configure your cluster to use the public network as a failover for private network communications. If dual-port network adapters are used, do not use both ports simultaneously to support both the public and private networks.

NIC Teaming

NIC teaming combines two or more NICs to provide load balancing and fault tolerance. Your cluster supports NIC teaming, but only in a public network; NIC teaming is not supported in a private network.

You should use the same brand of NICs in a team, and you cannot mix brands of teaming drivers.

Cabling the Storage Systems

This section provides information for connecting your cluster to a storage system in a direct-attached configuration, or to one or more storage systems in a SAN-attached configuration.

Cabling Storage for Your Direct-Attached Cluster

NOTE: Ensure that the management port on each storage processor is connected, using an Ethernet network cable, to the network used by the management station.

A direct-attached cluster configuration consists of redundant Fibre Channel host bus adapter (HBA) ports cabled directly to a Dell/EMC storage system. Direct-attached configurations are self-contained and do not share any physical resources with other server or storage systems outside of the cluster.

Figure 2-4 shows an example of a direct-attached, single cluster configuration with redundant HBA ports installed in each cluster node.


Figure 2-4. Direct-Attached Cluster Configuration

Each cluster node attaches to the storage system using two multi-mode optical cables with LC connectors that attach to the HBA ports in the cluster nodes and the storage processor (SP) ports in the Dell/EMC storage system. These connectors consist of two individual Fibre optic connectors with indexed tabs that must be aligned properly into the HBA ports and SP ports.

WARNING: Do not remove the connector covers until you are ready to insert the connectors into the HBA port, SP port, or tape library port.

NOTE: The connections listed in this section are representative of one proven method of ensuring redundancy in the connections between the cluster nodes and the storage system. Other methods that achieve the same type of redundant connectivity may be acceptable.

Cabling a Two-Node Cluster to an AX4-5F Storage System

1 Connect cluster node 1 to the storage system.

a Install a cable from cluster node 1 HBA port 0 to SP-A Fibre port 0 (first fibre port).

b Install a cable from cluster node 1 HBA port 1 to SP-B Fibre port 0 (first fibre port).

2 Connect cluster node 2 to the storage system.


a Install a cable from cluster node 2 HBA port 0 to SP-A Fibre port 1 (second fibre port).

b Install a cable from cluster node 2 HBA port 1 to SP-B Fibre port 1 (second fibre port).

Figure 2-5 and Figure 2-6 illustrate the method of cabling a two-node direct-attached cluster to an AX4-5F and AX4-5FX storage system respectively.

NOTE: The cables are connected to the storage processor ports in sequential order for illustrative purposes. While the available ports in your storage system may vary, HBA port 0 and HBA port 1 must be connected to SP-A and SP-B, respectively.
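NOTE: The port assignments in the steps above follow a simple pattern: each node's HBA port 0 cables to SP-A and HBA port 1 cables to SP-B, and the node number selects the SP front-end port. The following Python sketch is illustrative only; it generates that cable map and extends naturally to the four-node AX4-5FX procedure later in this section.

```python
def direct_attach_cable_map(node_count):
    """Map HBA ports to SP front-end ports for a direct-attached cluster:
    HBA port 0 -> SP-A, HBA port 1 -> SP-B, using SP port (node - 1).
    Mirrors the cabling steps above; illustrative only."""
    cables = []
    for node in range(1, node_count + 1):
        sp_port = node - 1
        cables.append((f"node {node} HBA port 0", f"SP-A Fibre port {sp_port}"))
        cables.append((f"node {node} HBA port 1", f"SP-B Fibre port {sp_port}"))
    return cables

for hba, sp_port in direct_attach_cable_map(2):   # two-node AX4-5F example
    print(f"{hba}  ->  {sp_port}")
```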

Figure 2-5. Cabling a Two-Node Cluster to an AX4-5F Storage System


Figure 2-6. Cabling a Two-node Cluster to an AX4-5FX Storage System

Cabling a Four-Node Cluster to a Dell/EMC AX4-5FX Storage System

You can configure a 4-node cluster in a direct-attached configuration using a Dell/EMC AX4-5FX storage system:

1 Connect cluster node 1 to the storage system:

a Install a cable from cluster node 1 HBA port 0 to the first front-end fibre channel port on SP-A.

b Install a cable from cluster node 1 HBA port 1 to the first front-end fibre channel port on SP-B.

2 Connect cluster node 2 to the storage system:

a Install a cable from cluster node 2 HBA port 0 to the second front-end fibre channel port on SP-A.

b Install a cable from cluster node 2 HBA port 1 to the second front-end fibre channel port on SP-B.

3 Connect cluster node 3 to the storage system:

a Install a cable from cluster node 3 HBA port 0 to the third front-end fibre channel port on SP-A.

b Install a cable from cluster node 3 HBA port 1 to the third front-end fibre channel port on SP-B.

4 Connect cluster node 4 to the storage system:


a Install a cable from cluster node 4 HBA port 0 to the fourth front-end fibre channel port on SP-A.

b Install a cable from cluster node 4 HBA port 1 to the fourth front-end fibre channel port on SP-B.

Cabling Two Two-Node Clusters to a Dell/EMC AX4-5FX Storage System

The following steps are an example of how to cable two two-node clusters to a Dell/EMC AX4-5FX storage system:

1 In the first cluster, connect cluster node 1 to the storage system:

a Install a cable from cluster node 1 HBA port 0 to the first front-end fibre channel port on SP-A.

b Install a cable from cluster node 1 HBA port 1 to the first front-end fibre channel port on SP-B.

2 In the first cluster, connect cluster node 2 to the storage system:

a Install a cable from cluster node 2 HBA port 0 to the second front-end fibre channel port on SP-A.

b Install a cable from cluster node 2 HBA port 1 to the second front-end fibre channel port on SP-B.

3 In the second cluster, connect cluster node 1 to the storage system:

a Install a cable from cluster node 1 HBA port 0 to the third front-end fibre channel port on SP-A.

b Install a cable from cluster node 1 HBA port 1 to the third front-end fibre channel port on SP-B.

4 In the second cluster, connect cluster node 2 to the storage system:

a Install a cable from cluster node 2 HBA port 0 to the fourth front-end fibre channel port on SP-A.

b Install a cable from cluster node 2 HBA port 1 to the fourth front-end fibre channel port on SP-B.


Cabling Storage for Your SAN-Attached Cluster

A SAN-attached cluster is a cluster configuration where all cluster nodes are attached to a single storage system or to multiple storage systems through a SAN using a redundant switch fabric.

SAN-attached cluster configurations provide more flexibility, expandability, and performance than direct-attached configurations.

Figure 2-7 shows an example of a two-node, SAN-attached cluster.

Figure 2-8 shows an example of an eight-node, SAN-attached cluster.

Similar cabling concepts can be applied to clusters that contain a different number of nodes.

NOTE: The connections listed in this section are representative of one proven method of ensuring redundancy in the connections between the cluster nodes and the storage system. Other methods that achieve the same type of redundant connectivity may be acceptable.

Figure 2-7. Two-Node SAN-Attached Cluster


Figure 2-8. Eight-Node SAN-Attached Cluster

Each HBA port is cabled to a port on a Fibre Channel switch. One or more cables connect from the outgoing ports on a switch to a storage processor on a Dell/EMC storage system.


Cabling a SAN-Attached Cluster to an AX4-5F Storage System

1 Connect cluster node 1 to the SAN.

a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).

b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).

2 Repeat step 1 for each cluster node.

3 Connect the storage system to the SAN.

a Connect a cable from Fibre Channel switch 0 (sw0) to SP-A Fibre port 0 (first fibre port).

b Connect a cable from Fibre Channel switch 0 (sw0) to SP-B Fibre port 1 (second fibre port).

c Connect a cable from Fibre Channel switch 1 (sw1) to SP-A Fibre port 1 (second fibre port).

d Connect a cable from Fibre Channel switch 1 (sw1) to SP-B Fibre port 0 (first fibre port).


Figure 2-9. Cabling a SAN-Attached Cluster to an AX4-5F Storage System
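NOTE: The following Python sketch is illustrative only; the connection data simply restates the steps above. It checks the property this cabling is designed to provide: each switch, and therefore each HBA port, reaches both storage processors, so no single switch, HBA port, or SP port failure isolates a node from the storage system.

```python
# Fabric connections taken from the AX4-5F steps above; illustrative only.
node_to_switch = {("node1", "HBA0"): "sw0", ("node1", "HBA1"): "sw1",
                  ("node2", "HBA0"): "sw0", ("node2", "HBA1"): "sw1"}
switch_to_sp_ports = {"sw0": {"SP-A Fibre 0", "SP-B Fibre 1"},
                      "sw1": {"SP-A Fibre 1", "SP-B Fibre 0"}}

def reachable_sps(node):
    """Return, per HBA port, the set of storage processors the node reaches."""
    result = {}
    for (name, hba), switch in node_to_switch.items():
        if name == node:
            result[hba] = {port.split()[0] for port in switch_to_sp_ports[switch]}
    return result

for node in ("node1", "node2"):
    paths = reachable_sps(node)
    print(node, paths)
    # Every HBA port must see both SP-A and SP-B through its switch.
    assert all(sps == {"SP-A", "SP-B"} for sps in paths.values())
```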

Cabling a SAN-Attached Cluster to an AX4-5FX Storage System

1 Connect cluster node 1 to the SAN.

a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).

b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).

2 Repeat step 1 for each cluster node.

3 Connect the storage system to the SAN.

a Connect a cable from Fibre Channel switch 0 (sw0) to SP-A Fibre port 0 (first fibre port).

b Connect a cable from Fibre Channel switch 0 (sw0) to SP-A Fibre port 1 (second fibre port).

c Connect a cable from Fibre Channel switch 0 (sw0) to SP-B Fibre port 2 (third fibre port).


d Connect a cable from Fibre Channel switch 0 (sw0) to SP-B Fibre port 3 (fourth fibre port).

e Connect a cable from Fibre Channel switch 1 (sw1) to SP-A Fibre port 2 (third fibre port).

f Connect a cable from Fibre Channel switch 1 (sw1) to SP-A Fibre port 3 (fourth fibre port).

g Connect a cable from Fibre Channel switch 1 (sw1) to SP-B Fibre port 0 (first fibre port).

h Connect a cable from Fibre Channel switch 1 (sw1) to SP-B Fibre port 1 (second fibre port).

Figure 2-10. Cabling a SAN-Attached Cluster to an AX4-5FX Storage System


Cabling Multiple SAN-Attached Clusters to a Dell/EMC Storage System

To cable multiple clusters to the storage system, connect the cluster nodes to the appropriate Fibre Channel switches and then connect the Fibre Channel switches to the appropriate storage processors on the processor enclosure.

See the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha for rules and guidelines for SAN-attached clusters.

NOTE: The following procedure uses Figure 2-9 as an example for cabling additional clusters.

Cabling Multiple SAN-Attached Clusters to the AX4-5 Storage System

1 In the first cluster, connect cluster node 1 to the SAN.

a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).

b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).

2 In the first cluster, repeat step 1 for each node.

3 For each additional cluster, repeat step 1 and step 2.

4 Connect the storage system to the SAN.

a Connect a cable from Fibre Channel switch 0 (sw0) to SP-A Fibre port 0 (first fibre port).

b Connect a cable from Fibre Channel switch 0 (sw0) to SP-B Fibre port 1 (second fibre port).

c Connect a cable from Fibre Channel switch 1 (sw1) to SP-A Fibre port 1 (second fibre port).

d Connect a cable from Fibre Channel switch 1 (sw1) to SP-B Fibre port 0 (first fibre port).

Connecting a PowerEdge Cluster to Multiple Storage Systems

You can increase your cluster storage capacity by attaching multiple storage systems to your cluster using a redundant switch fabric. PowerEdge cluster systems can support configurations with multiple storage units attached to clustered servers. In this scenario, the MSCS software can fail over disk drives in any cluster-attached shared storage array between the cluster nodes. When attaching multiple storage systems with your cluster, the following rules apply:

• There is a maximum of four storage systems per cluster.


• The shared storage systems and firmware must be identical. Using dissimilar storage systems and firmware for your shared storage is not supported.

• MSCS is limited to 22 drive letters. Because drive letters A through D are reserved for local disks, a maximum of 22 drive letters (E to Z) can be used for your storage system disks.

• Windows Server 2003 and 2008 support mount points, allowing greater than 22 drives per cluster.

For more information, see the "Assigning Drive Letters and Mount Points" section of the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide on the Dell Support website at support.dell.com/manuals.
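NOTE: As a quick check of the drive-letter arithmetic above, the letters available for shared storage are E through Z, which is 22 letters. The following Python snippet is illustrative only.

```python
import string

# Drive letters A through D are reserved for local disks; E through Z remain.
shared_letters = [letter for letter in string.ascii_uppercase if letter >= "E"]
print(shared_letters)       # ['E', 'F', ..., 'Z']
print(len(shared_letters))  # 22
```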

Figure 2-11 provides an example of cabling the cluster nodes to four Dell/EMC storage systems.

Figure 2-11. PowerEdge Cluster Nodes Cabled to Four Storage Systems


Connecting a PowerEdge Cluster to a Tape Library

To provide additional backup for your cluster, you can add tape backup devices to your cluster configuration. The Dell PowerVault™ tape libraries may contain an integrated Fibre Channel bridge, or Storage Network Controller (SNC), that connects directly to your Dell/EMC Fibre Channel switch.

Figure 2-12 shows a supported PowerEdge cluster configuration using redundant Fibre Channel switches and a tape library. In this configuration, each of the cluster nodes can access the tape library to provide backup for your local disk resources, as well as your cluster disk resources. Using this configuration allows you to add more servers and storage systems in the future, if needed.

NOTE: While tape libraries can be connected to multiple fabrics, they do not provide path failover.

Figure 2-12. Cabling a Storage System and a Tape Library

Obtaining More Information

See the storage and tape backup documentation for more information on configuring these components.


Configuring Your Cluster With SAN Backup

You can provide centralized backup for your clusters by sharing your SAN with multiple clusters, storage systems, and a tape library.

Figure 2-13 provides an example of cabling the cluster nodes to your storage systems and SAN backup with a tape library.

Figure 2-13. Cluster Configuration Using SAN-Based Backup


Preparing Your Systems for Clustering

CAUTION: Only trained service technicians are authorized to remove and access any of the components inside the system. See the safety information that shipped with your system for complete information about safety precautions, working inside the computer, and protecting against electrostatic discharge.

Cluster Configuration Overview

1 Ensure that your site can handle the cluster’s power requirements.

Contact your sales representative for information about your region's power requirements.

2 Install the systems, the shared storage array(s), and the interconnect switches (example: in an equipment rack), and ensure that all these components are powered on.

NOTE: For more information on step 3 to step 7 and step 10 to step 13, see the "Preparing your systems for clustering" section of Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com/manuals.

3 Deploy the operating system (including any relevant service pack and hotfixes), network adapter drivers, and storage adapter drivers (including Multipath I/O [MPIO] drivers) on each of the systems that will become cluster nodes. Depending on the deployment method that is used, it may be necessary to provide a network connection to successfully complete this step.

NOTE: To help in planning and deployment of your cluster, record the relevant cluster configuration information in the "Cluster Data Form" on page 55 and zoning information in the "Zoning Configuration Form" on page 57.


4 Establish the physical network topology and the TCP/IP settings for network adapters on each server node to provide access to the cluster public and private networks.

5 Configure each cluster node as a member in the same Microsoft® Windows Active Directory® Domain.

NOTE: You can configure the cluster nodes as Domain Controllers. For more information, see the “Selecting a Domain Model” section of Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com/manuals.

6 Establish the physical storage topology and any required storage network settings to provide connectivity between the storage array and the servers that will be configured as cluster nodes. Configure the storage system(s) as described in your storage system documentation.

7 Use storage array management tools to create at least one logical unit number (LUN). The LUN is used as a cluster Quorum disk for Windows Server 2003 Failover cluster and as a Witness disk for Windows Server 2008 Failover cluster. Ensure that this LUN is presented to the servers that will be configured as cluster nodes.

NOTE: For security reasons, it is recommended that you configure the LUN on a single node as mentioned in step 8 when you are setting up the cluster. Later, you can configure the LUN as mentioned in step 9 so that other cluster nodes can access it.

8 Select one of the systems and form a new failover cluster by configuring the cluster name, cluster management IP, and quorum resource. For more information, see "Preparing Your Systems for Clustering" on page 35.

NOTE: For Windows Server® 2008 Failover Clusters, run the Cluster Validation Wizard to ensure that your system is ready to form the cluster.

9 Join the remaining node(s) to the failover cluster. For more information, see "Preparing Your Systems for Clustering" on page 35.

10 Configure roles for cluster networks. Take any network interfaces that are used for iSCSI storage (or for other purposes outside of the cluster) out of the control of the cluster.

11 Test the failover capabilities of your new cluster.


NOTE: For Windows Server 2008 Failover Clusters, you can also use the Cluster Validation Wizard.

12 Configure highly-available applications and services on your failover cluster. Depending on your configuration, this may also require providing additional LUNs to the cluster or creating new cluster resource groups. Test the failover capabilities of the new resources.

13 Configure client systems to access the highly available applications and services that are hosted on your failover cluster.

Installation Overview

Each node in your Dell Windows Server failover cluster must have the same release, edition, service pack, and processor architecture of the Windows Server operating system installed. For example, all nodes in your cluster may be configured with Windows Server 2003 R2, Enterprise x64 Edition. If the operating system varies among nodes, it is not possible to configure a failover cluster successfully. It is recommended that you establish server roles prior to configuring a failover cluster, depending on the operating system configured on your cluster.
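NOTE: The following Python sketch is illustrative only; the node data is hypothetical and this is not a Dell or Microsoft tool. It performs the homogeneity check described above: every node must report the same release, edition, service pack, and processor architecture.

```python
def validate_cluster_os(nodes):
    """Return the names of nodes whose operating system configuration does
    not match the first node. An empty list means the nodes match."""
    keys = ("release", "edition", "service_pack", "architecture")
    baseline = {key: nodes[0][key] for key in keys}
    return [node["name"] for node in nodes[1:]
            if any(node[key] != baseline[key] for key in keys)]

# Hypothetical node inventory.
nodes = [
    {"name": "node1", "release": "Windows Server 2003 R2",
     "edition": "Enterprise x64 Edition", "service_pack": "SP2",
     "architecture": "x64"},
    {"name": "node2", "release": "Windows Server 2003 R2",
     "edition": "Enterprise x64 Edition", "service_pack": "SP2",
     "architecture": "x64"},
]
print(validate_cluster_os(nodes))  # [] -> configurations match
```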

For a list of Dell PowerEdge Servers, Fibre Channel HBAs and switches, and the recommended operating system variants and specific driver and firmware revisions, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha.

For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide. For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide.

The following sub-sections describe steps that must be taken to enable communication between the cluster nodes and your shared Dell/EMC AX4-5 Fibre Channel storage array, and to present disks from the storage array to the cluster. Install and configure the following components on each node:

1 The Fibre Channel HBA(s) and driver on each cluster node

2 EMC PowerPath on each cluster node

3 Zoning, if applicable


4 The shared storage system

5 A failover cluster

Installing the Fibre Channel HBAs

For dual-HBA configurations, it is recommended that you install the Fibre Channel HBAs on separate peripheral component interconnect (PCI) buses. Placing the adapters on separate buses improves availability and performance. See the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha for more information about your system's PCI bus configuration and supported HBAs.

Installing the Fibre Channel HBA Drivers

See the EMC documentation that is included with your HBA kit for more information.

See the Emulex support website located at emulex.com or the Dell Support website at support.dell.com for information about installing and configuring Emulex HBAs and EMC-approved drivers.

See the QLogic support website at qlogic.com or the Dell Support website at support.dell.com for information about installing and configuring QLogic HBAs and EMC-approved drivers.

See the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at dell.com/ha for information about supported HBA controllers and drivers.

Installing EMC PowerPath

EMC® PowerPath® detects a failed storage path and automatically re-routes I/Os through an alternate path. PowerPath also provides load balancing of data from the server to the storage system. To install PowerPath:

1 Insert the PowerPath installation media in the CD/DVD drive.

2 On the Getting Started screen, go to the Installation section, and click the appropriate link for the operating system that is running on the node.

3 Select Run this program from its current location and click OK.

4 In the Choose Language Setup screen, select the required language, and click OK.


5 In the Welcome window of the setup wizard, click Next.

6 In the CLARiiON AX-Series window, select and click Next. Follow the onscreen instructions to complete the installation.

7 Click Yes to reboot the system.

Implementing Zoning on a Fibre Channel Switched Fabric

A Fibre Channel switched fabric consists of one or more Fibre Channel switches that provide high-speed connections between servers and storage devices. The switches in a Fibre Channel fabric provide a connection through inbound and outbound points from one device (sender) to another device or switch (receiver) on the network. If the data is sent to another switch, the process repeats itself until a connection is established between the sender and the receiver.

Fibre Channel switches provide you with the ability to set up barriers between different devices and operating environments. These barriers create logical fabric subsets with minimal software and hardware intervention. Similar to subnets in the client/server network, logical fabric subsets divide a fabric into similar groups of components, regardless of their proximity to one another. The logical subsets that form these barriers are called zones.

Zoning automatically and transparently enforces access of information to the zone devices. More than one PowerEdge cluster configuration can share Dell/EMC storage system(s) in a switched fabric using Fibre Channel switch zoning. By using Fibre Channel switches to implement zoning, you can segment the SANs to isolate heterogeneous servers and storage systems from each other.

Using Worldwide Port Name Zoning

PowerEdge cluster configurations support worldwide port name zoning.

A worldwide name (WWN) is a unique numeric identifier assigned to Fibre Channel interfaces, such as HBA ports, storage processor (SP) ports, and Fibre Channel to SCSI bridges or storage network controllers (SNCs).


A WWN consists of an 8-byte hexadecimal number with each byte separated by a colon. For example, 10:00:00:60:69:00:00:8a is a valid WWN. Using WWN port name zoning allows you to move cables between switch ports within the fabric without having to update the zones.
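NOTE: The following Python snippet is illustrative only; it simply checks the WWN format described above (eight colon-separated hexadecimal bytes).

```python
import re

# Eight hexadecimal bytes separated by colons, for example 10:00:00:60:69:00:00:8a.
WWN_PATTERN = re.compile(r"^([0-9a-fA-F]{2}:){7}[0-9a-fA-F]{2}$")

def is_valid_wwn(wwn):
    """Return True if the string matches the 8-byte colon-separated format."""
    return bool(WWN_PATTERN.match(wwn))

print(is_valid_wwn("10:00:00:60:69:00:00:8a"))  # True
print(is_valid_wwn("10:00:00:60:69:00:8a"))     # False: only seven bytes
```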

Table 3-1 provides a list of WWN identifiers that you can find in the Dell/EMC cluster environment.

Table 3-1. Port Worldwide Names in a SAN Environment

xx:xx:00:60:69:xx:xx:xx - Dell/EMC or Brocade switch
xx:xx:xx:00:88:xx:xx:xx - McData switch
50:06:01:6x:xx:xx:xx:xx - Dell/EMC storage processor
xx:xx:00:00:C9:xx:xx:xx - Emulex HBA ports
xx:xx:00:E0:8B:xx:xx:xx - QLogic HBA ports (non-embedded)
xx:xx:00:0F:1F:xx:xx:xx - Dell 2362M HBA port
xx:xx:xx:60:45:xx:xx:xx - PowerVault 132T and 136T tape libraries
xx:xx:xx:E0:02:xx:xx:xx - PowerVault 128T tape autoloader
xx:xx:xx:C0:01:xx:xx:xx - PowerVault 160T tape library and Fibre Channel tape drives
xx:xx:xx:C0:97:xx:xx:xx - PowerVault ML6000 Fibre Channel tape drives

WARNING: When you replace a switch module, a storage controller, or a Fibre Channel HBA in a PowerEdge server, reconfigure your zones to provide continuous client data access.

Single Initiator Zoning

Each host HBA port in a SAN must be configured in a separate zone on the switch with the appropriate storage ports. This zoning configuration, known as single initiator zoning, prevents different hosts from communicating with each other, thereby ensuring that Fibre Channel communications between the HBAs and their target storage systems do not affect each other.

When you create your single-initiator zones, follow these guidelines (a short scripted sketch of this zoning pattern follows the list):


• Create a zone for each HBA port and its target storage devices.

• Each AX4-5 storage processor port can be connected to a maximum of 64 HBA ports in a SAN-attached environment.

• Each host can be connected to a maximum of four storage systems.

• The integrated bridge/SNC or fibre-channel interface on a tape library can be added to any zone.
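NOTE: The following Python sketch is illustrative only; the WWNs are hypothetical and the actual zone set is created with your switch's management tools. It shows the single-initiator pattern described above: one zone per HBA port, each containing that HBA port and its target storage ports.

```python
def single_initiator_zones(hba_ports, storage_ports):
    """Create one zone per HBA port, pairing it with all target SP ports.
    Hypothetical WWNs; illustrative only."""
    return {f"zone_{name}": [wwn] + list(storage_ports.values())
            for name, wwn in hba_ports.items()}

hba_ports = {"node1_hba0": "10:00:00:00:C9:AA:AA:01",
             "node1_hba1": "10:00:00:00:C9:AA:AA:02"}
storage_ports = {"SP-A_port0": "50:06:01:60:00:00:00:01",
                 "SP-B_port0": "50:06:01:68:00:00:00:01"}

for zone, members in single_initiator_zones(hba_ports, storage_ports).items():
    print(zone, members)
```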

Installing and Configuring the Shared Storage System

NOTE: You must configure the network settings and create a user account to manage the AX4-5 storage system from the network.

To install and configure the Dell/EMC storage system in your cluster:

1 Install and use Navisphere® Storage System Initialization Utility from a node or management station to initialize your AX4-5 storage system.

2 Install the expansion pack using Navisphere Express (optional).

3 Install the Navisphere Server Utility on each cluster node.

4 Register the cluster node with the storage system.

5 Assign the virtual disks to the cluster nodes.

Installing Navisphere Storage System Initialization Utility

The Navisphere Storage System Initialization Utility provides a user interface to initialize your AX4-5 storage system. Using the utility, you can configure the IP address, subnet mask, and default gateway address for the storage system’s SPs, and assign user names and passwords for storage system access.

1 To install the software from the support media that is shipped with the storage system:

a Insert the installation media in the CD/DVD drive.

b In the Choose Language Setup screen, select the appropriate language and click OK.

c On the Server screen, click Install Products.

d In the Install Products menu, click Navisphere Storage System Initialization Utility.


e Follow the on-screen instructions to complete the installation.

2 To initialize the storage system:

a Select Start→ Programs→ EMC→ Navisphere→ Navisphere Storage System Initialization.

b Read the license agreement, click I accept and then click Next.

c From the Uninitialized Systems list, select the storage system to be initialized, and click Next.

d Follow the on-screen instructions to complete the initialization.

Installing the Expansion Pack Using Navisphere Express

Each storage system in the cluster is centrally managed by one host system (also called a management station) running EMC Navisphere Express—a centralized storage management application used to configure Dell/EMC storage systems.

If you have an expansion pack option for the storage system and it has not been installed, install it now by following the steps listed below:

1 From the management host, open an Internet browser.

2 Enter the IP address of an SP in the storage system.

3 Log in to Navisphere Express with the username and password that you specified during the storage system initialization.

4 In the Navisphere Express navigation pane under System, click Software.

5 In the System Software screen, click Upgrade Software.

6 Insert the expansion pack media into the CD/DVD drive on the host from which you are running Navisphere Express.

7 Browse for the expansion tier enabler software file (.ena file), and click Upgrade.

You can use Navisphere Express to perform tasks such as creating disk pools, binding the virtual disks, and downloading the firmware. Additionally, you can use Snapshot Management to capture point-in-time images of a virtual disk for backups or testing without affecting the contents of the source virtual disk. You can also use the SAN Copy™ feature in Navisphere Express to move data from the virtual disks on one storage system to the virtual disks on another storage system without using the host CPU cycles.


Installing Navisphere Server Utility

The Navisphere Server Utility registers the cluster node HBAs with the storage systems, allowing the nodes to access the cluster storage data. The tool is also used for the following cluster node maintenance procedures:

• Updating the cluster node host name and/or IP address on the storage array

• Updating file system information

• Adding, removing, or replacing an HBA

• Starting and stopping a snapshot

To install Navisphere Server Utility:

1 Log in to the Windows server using an account with administrative privileges.

2 Insert the installation media in the CD/DVD drive.

3 In the Choose Language Setup screen, select the appropriate language and click OK.

4 In the main menu, click Install Products on Server.

5 In the Install Products menu, click Navisphere Server Utility.

6 Follow the on-screen instructions to complete the installation.

Registering a Server With a Storage System

To register the server with the storage system:

1 Start the Navisphere Server Utility by clicking Start→ Programs→ EMC→ Navisphere→ Navisphere Server Utility.

2 In the Choose Language Setup screen, select the appropriate language and click OK.

3 In the Navisphere Server Utility dialog window, select Register this server to all connected storage systems. The utility automatically scans for all connected storage systems and lists them under Connected Storage Systems.

4 Select the WWN of the HBA you just installed. The HBA should appear once for every SP port it is connected to.

5 Click Next to register the server with the storage system.

6 Click Finish to exit the utility.


Assigning the Virtual Disks to Cluster Nodes

NOTE: For best practice, have at least one virtual disk for each application. If multiple NTFS partitions are created on a single LUN or virtual disk, these partitions will not be able to fail over individually from node-to-node.

To perform data I/O to the virtual disks, assign the virtual disks to a cluster node by performing the following steps:

1 Open a Web browser.

2 Type the storage system IP address in the Address field.

The Navisphere Express console is displayed.

3 Log in with the user name and password that were created during the storage initialization.

4 In the Manage menu,

a Click Disk Pools. You can create multiple disk pools.

b Click Virtual Disks. You can create multiple virtual disks for each disk pool.

c In the Virtual Disks screen, select the virtual disks that you want to assign to the cluster node and click Assign Server.

5 In the Assign Server screen, select the cluster nodes that you want to assign to the virtual disk and click Apply.

6 Repeat step 4 and step 5 for each virtual disk.

7 Close the Navisphere Express window.

8 Verify that PowerPath on the cluster nodes can access all paths to the virtual disks.

Advanced or Optional Storage Features

Your Dell/EMC AX4-5 storage array may be configured to provide optional features that can be used with your cluster. These features include Snapshot Management, SAN Copy, Navisphere Manager, and MirrorView™.


Snapshot Management

Snapshot Management captures point-in-time images of a virtual disk and retains each image independently of subsequent changes to the source virtual disk. It creates copies of either virtual disks or snapshots. A snapshot is a virtual copy that records the state of the source virtual disk at the moment the snapshot is created and is retained independently of subsequent changes to that disk. You can use snapshots to facilitate backups or to allow multiple hosts to access data without affecting the contents of the source virtual disk.

WARNING: To avoid data corruption, do not access a snapshot from the same node as its source.

SAN Copy

SAN Copy allows you to move data between storage systems without using host processor cycles or LAN bandwidth. It can be used in conjunction with SnapView™ or MirrorView and is managed from within Navisphere Manager.

Navisphere Manager

Optionally, you can also upgrade Navisphere Express to EMC Navisphere® Manager—a centralized storage management application used to configure Dell/EMC storage systems.

EMC Navisphere Manager adds support for EMC MirrorView, optional software that enables synchronous or asynchronous mirroring between two storage systems.

MirrorView

MirrorView automatically duplicates primary storage system data from a cluster or stand-alone system to a secondary storage system. It can be used in conjunction with SnapView and is managed from within Navisphere Manager.


Installing and Configuring a Failover Cluster

You can configure the operating system services on your Dell Windows Server failover cluster after you have established the private and public networks and have assigned the shared disks from the storage array to the cluster nodes. The procedures for configuring the failover cluster differ depending on the Windows Server operating system you use.

For more information on deploying your cluster with Windows Server 2003 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide. For more information on deploying your cluster with Windows Server 2008 operating systems, see the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide.
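After the failover cluster is configured as described in those guides, you can confirm from a command prompt that the Cluster Service (ClusSvc) is running and that every node has joined. The following is a convenience sketch, assuming Python is available and that the cluster.exe command-line tool is installed with the cluster administration tools; it is not a substitute for the validation steps in the guides above.

```python
# Convenience sketch: show the Cluster Service state on this node and the
# membership state of every node in the cluster. Assumes the Cluster
# Service (ClusSvc) and the cluster.exe tool are installed, per the
# deployment guides referenced above.
import subprocess

def show_cluster_status():
    # Confirm the Cluster Service is installed and running on this node.
    print(subprocess.check_output(["sc", "query", "clussvc"],
                                  universal_newlines=True))
    # List the state (Up, Down, or Joining) of every node in the cluster.
    print(subprocess.check_output(["cluster", "node", "/status"],
                                  universal_newlines=True))

if __name__ == "__main__":
    show_cluster_status()
```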


Troubleshooting

This section provides troubleshooting information for your cluster configuration.

Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.

Table A-1. General Cluster Troubleshooting

Problem: The nodes cannot access the storage system, or the cluster software is not functioning with the storage system.
Probable cause: The storage system is not cabled properly to the nodes, or the cabling between the storage components is incorrect.
Corrective action: Ensure that the cables are connected properly from the node to the storage system. See "Cabling the Storage Systems" for more information.
Probable cause: The length of the interface cables exceeds the maximum allowable length.
Corrective action: Ensure that the fibre optic cables do not exceed 300 m (multimode) or 10 km (single-mode, switch-to-switch connections only).
Probable cause: One of the cables is faulty.
Corrective action: Replace the faulty cable.
Probable cause: A switch zone is not configured correctly.
Corrective action: Verify that all switch zones are configured correctly.
Probable cause: LUNs are not assigned to the hosts.
Corrective action: Verify that all LUNs are assigned to the hosts.
Probable cause: The cluster is in a SAN, and one or more zones are not configured correctly.
Corrective action: Verify the following:
• Each zone contains only one initiator (Fibre Channel daughter card).
• Each zone contains the correct initiator and the correct storage port(s).

Problem: One of the nodes takes a long time to join the cluster, or one of the nodes fails to join the cluster.
Probable cause: The node-to-node network has failed due to a cabling or hardware failure, or long delays in node-to-node communications may be normal.
Corrective action: Check the network cabling. Ensure that the node-to-node interconnection and the public network are connected to the correct NICs. Verify that the nodes can communicate with each other by running the ping command from each node to the other node. Try both the host name and the IP address when using the ping command.
Probable cause: One or more nodes may have the Internet Connection Firewall enabled, blocking Remote Procedure Call (RPC) communications between the nodes.
Corrective action: Configure the Internet Connection Firewall to allow the communications that are required by the Microsoft® Cluster Service (MSCS) and the clustered applications or services. See Microsoft Knowledge Base article KB883398 at the Microsoft Support website at support.microsoft.com for more information.

Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable cause: The Cluster Service has not been started, a cluster has not been formed on the system, or the system has just been booted and services are still starting.
Corrective action: Verify that the Cluster Service is running and that a cluster has been formed. Use the Event Viewer and look for the following events logged by the Cluster Service: "Microsoft Cluster Service successfully formed a cluster on this node." or "Microsoft Cluster Service successfully joined the cluster." If these events do not appear in Event Viewer, see the Microsoft Cluster Service Administrator's Guide for instructions on setting up the cluster on your system and starting the Cluster Service.
Probable cause: The cluster network name is not responding on the network because the Internet Connection Firewall is enabled on one or more nodes.
Corrective action: Configure the Internet Connection Firewall to allow the communications that are required by MSCS and the clustered applications or services. See Microsoft Knowledge Base article KB883398 at the Microsoft Support website at support.microsoft.com for more information.

Problem: You are prompted to configure one network instead of two during MSCS installation.
Probable cause: The TCP/IP configuration is incorrect.
Corrective action: The node-to-node network and the public network must be assigned static IP addresses on different subnets. For information about assigning the network IPs with a specific variant of the Windows Server operating system (for example, Windows Server 2003 or Windows Server 2008), see the Dell Failover Clusters with Microsoft Windows Server Installation and Troubleshooting Guide.
Probable cause: The private (point-to-point) network is disconnected.
Corrective action: Ensure that all systems are powered on so that the NICs in the private network are available.

Problem: Using Microsoft Windows NT® 4.0 to remotely administer a Windows Server 2003 cluster generates error messages.
Probable cause: Normal. Some resources in Windows Server 2003 are not supported in Windows NT 4.0.
Corrective action: Dell strongly recommends that you use Windows XP Professional or Windows Server 2003 for remote administration of a cluster running Windows Server 2003.

Problem: Unable to add a node to the cluster.
Probable cause: The new node cannot access the shared disks, or the shared disks are enumerated by the operating system differently on the cluster nodes.
Corrective action: Ensure that the new cluster node can enumerate the cluster disks using Windows Disk Administration. If the disks do not appear in Disk Administration, check the following:
• Check all cable connections.
• Check all zone configurations.
• Check the LUN settings. Use the "Advanced" with "Minimum" option.
Probable cause: One or more nodes may have the Internet Connection Firewall enabled, blocking RPC communications between the nodes.
Corrective action: Configure the Internet Connection Firewall to allow the communications that are required by MSCS and the clustered applications or services. See Microsoft Knowledge Base article KB883398 at the Microsoft Support website at support.microsoft.com for more information.

Problem: The disks on the shared cluster storage appear unreadable or uninitialized in Windows Disk Administration.
Probable cause: This situation is normal if you stopped the Cluster Service. If you are running Windows Server 2003, this situation is normal if the cluster node does not own the cluster disk.
Corrective action: No action required.

Problem: Cluster Services does not operate correctly on a cluster running Windows Server 2003 with the Internet Connection Firewall enabled.
Probable cause: The Windows Internet Connection Firewall is enabled, which may conflict with Cluster Services.
Corrective action: Perform the following steps:
1 On the Windows desktop, right-click My Computer and click Manage.
2 In the Computer Management window, double-click Services.
3 In the Services window, double-click Cluster Services.
4 In the Cluster Services window, click the Recovery tab.
5 Click the First Failure drop-down arrow and select Restart the Service.
6 Click the Second Failure drop-down arrow and select Restart the Service.
7 Click OK.
For information on how to configure your cluster with the Windows Internet Connection Firewall enabled, see Microsoft Knowledge Base (KB) articles 258469 and 883398 at the Microsoft Support website at support.microsoft.com and the Microsoft Windows Server 2003 TechNet website at www.microsoft.com/technet.

Problem: Public network clients cannot access the applications or services that are provided by the cluster.
Probable cause: One or more nodes may have the Internet Connection Firewall enabled, blocking RPC communications between the nodes.
Corrective action: Configure the Internet Connection Firewall to allow the communications that are required by MSCS and the clustered applications or services. See Microsoft Knowledge Base article KB883398 at the Microsoft Support website at support.microsoft.com for more information.
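Several corrective actions in Table A-1 ask you to verify node-to-node communication by running the ping command against both the host name and the IP address of the peer node. The following convenience sketch automates that check; it assumes Python is available on the node, and the peer name and address shown are placeholders that you must replace with your own values. The firewall port requirements mentioned in the table should be taken from KB883398.

```python
# Convenience sketch: ping the peer cluster node by host name and by its
# private IP address, as recommended in Table A-1.
import subprocess

# Placeholders: replace with the peer node's host name and private IP.
PEERS = ["node2", "10.0.0.2"]

def ping(target):
    # "-n 2" sends two echo requests; a zero return code means replies
    # were received.
    result = subprocess.call(["ping", "-n", "2", target])
    print("%s: %s" % (target, "reachable" if result == 0 else "NOT reachable"))

if __name__ == "__main__":
    for peer in PEERS:
        ping(peer)
```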


Cluster Data Form

You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster. Use the form when you call for technical support.

Table B-1.

Cluster Information            Cluster Solution
Cluster name and IP address    ____________________
Server type                    ____________________
Installer                      ____________________
Date installed                 ____________________
Applications                   ____________________
Location                       ____________________
Notes                          ____________________

Table B-2.

Node Name             Service Tag Number    Public IP Address    Private IP Address
____________________  ____________________  ___________________  ___________________
____________________  ____________________  ___________________  ___________________


Additional Networks

Table B-3.

Array    Array xPE Type        Array Service Tag Number or        Number of Attached DAEs
                               World Wide Name Seed
1        ____________________  _________________________________  _______________________
2        ____________________  _________________________________  _______________________
3        ____________________  _________________________________  _______________________
4        ____________________  _________________________________  _______________________
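Much of the node information requested in Table B-2, such as the node name and Service Tag, can be collected at each node from a command prompt. The following convenience sketch assumes Python and the standard Windows hostname, wmic, and ipconfig commands are available; output formatting varies by system.

```python
# Convenience sketch for filling in the cluster data form: prints this
# node's host name, its Dell Service Tag (reported as the BIOS serial
# number), and its IP configuration.
import subprocess

def run(command):
    return subprocess.check_output(command, universal_newlines=True).strip()

if __name__ == "__main__":
    print("Node name:   " + run(["hostname"]))
    print("Service tag: " + run(["wmic", "bios", "get", "serialnumber"]))
    print(run(["ipconfig", "/all"]))
```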


Zoning Configuration Form

Node     HBA WWPNs or          Storage WWPNs or      Zone Name      Zone Set for
         Alias Names           Alias Names                          Configuration Name
_______  ____________________  ____________________  _____________  ____________________
_______  ____________________  ____________________  _____________  ____________________


Index

C

cable configurations
  cluster interconnect, 19
  for client networks, 19
  for mouse, keyboard, and monitor, 15
  for power supplies, 15
cluster configurations
  connecting to multiple shared storage systems, 30
  connecting to one shared storage system, 11
  direct-attached, 11, 20
  SAN-attached, 12
cluster storage
  requirements, 10
clustering
  overview, 7

D

Dell|EMC CX3-40
  cabling a two-node cluster, 21
Dell|EMC CX3-20
  cabling a two-node cluster, 21
Dell|EMC CX3-80
  configuring, 41
  installing, 41
direct-attached cluster
  about, 20
drivers
  installing and configuring Emulex, 38

E

Emulex HBAs
  installing and configuring, 38
  installing and configuring drivers, 38

H

HBA drivers
  installing and configuring, 38
host bus adapter
  configuring the Fibre Channel HBA, 38

K

keyboard
  cabling, 15

M

monitor
  cabling, 15
mouse
  cabling, 15
MSCS
  installing and configuring, 46

N

Navisphere Manager
  about, 45
  hardware view, 45
  storage view, 45
network adapters
  cabling the private network, 18-19
  cabling the public network, 19

O

operating system
  Windows Server 2003, Enterprise Edition
    installing, 37

P

power supplies
  cabling, 15
private network
  cabling, 17, 19
  hardware components, 19
  hardware components and connections, 19
public network
  cabling, 17

S

SAN
  configuring SAN backup in your cluster, 33
SAN-attached cluster
  about, 25
  configurations, 11
single initiator zoning
  about, 40

T

tape library
  connecting to a PowerEdge cluster, 32
troubleshooting
  connecting to a cluster, 49
  shared storage subsystem, 47

W

warranty, 13
worldwide port name zoning, 39

Z

zones
  implementing on a Fibre Channel switched fabric, 39
  using worldwide port names, 39
