Dell Storage Center SC4020 Storage System Deployment Guide




Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Copyright © 2014 Dell Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Dell™ and the Dell logo are trademarks of Dell Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

2014 – 08

Rev. C


Contents

About this Guide.......................................................................................................7

Revision History..................................................................................................................................... 7

Audience................................................................................................................................................ 7

Related Publications.............................................................................................................................. 7

1 About the SC4020 Storage Controller..............................................9

Storage Center Hardware Components.............................................................................................. 9

SC4020 Storage Controller.............................................................................................................9

Switches...........................................................................................................................................9

Expansion Enclosures......................................................................................................................9

Storage Center Architecture Options.................................................................................................10

Storage Center Communication......................................................................................................... 11

SC4020 Storage Controller with Fibre Channel HBAs................................................................. 11

SC4020 Storage Controller with iSCSI HBAs............................................................................... 12

Front-End Connectivity................................................................................................................. 12

Back-End Connectivity..................................................................................................................13

System Administration...................................................................................................................13

SC4020 Storage Controller Hardware................................................................................................13

SC4020 Front Panel Features and Indicators............................................................................... 13

SC4020 Back-Panel Features and Indicators...............................................................................14

SC4020 Storage Controller Module Features and Indicators .....................................................16

SC4020 Drives............................................................................................................................... 18

SC4020 Drive Numbering............................................................................................................. 18

SC200/SC220 Expansion Enclosure Hardware................................................................................. 18

SC200/SC220 Front Panel Features and Indicators.....................................................................19

SC200/SC220 Back Panel Features and Indicators.....................................................................20

SC200/SC220 EMM Features and Indicators...............................................................................20

SC200/SC220 Drives..................................................................................................................... 21

SC200/SC220 Drive Numbering...................................................................................................22

2 Install the Storage Center Hardware.............................................. 23

Unpack and Inventory the Storage Center Equipment..................................................................... 23

Prepare the Installation Environment.................................................................................................23

Gather Required Materials.................................................................................................................. 23

Safety Precautions...............................................................................................................................24

Electrical Safety Precautions.........................................................................................................24

Electrostatic Discharge Precautions.............................................................................................25


General Safety Precautions...........................................................................................................25

Mount the Hardware in a Rack........................................................................................................... 25

Install Enterprise Plus Drives in an SC4020 Storage Controller........................................................26

Install Enterprise Plus Drives in SC200/SC220 Expansion Enclosures..............................................27

3 Connect the Back End....................................................................... 29

SAS Expansion Enclosure Cabling Guidelines....................................................................................29

SAS Redundancy............................................................................................................................29

SAS Port Types...............................................................................................................................29

Back-end Connections for an SC4020 without Expansion Enclosures...........................................30

Interconnect the Storage Controller Modules.............................................................................30

Back-end Connections for an SC4020 with SC200/SC220 Expansion Enclosures........................ 31

SC4020 and One SC200/SC220 Expansion Enclosure...............................................................32

SC4020 and Two or More SC200/SC220 Expansion Enclosures...............................................33

Label the Back-End Cables.................................................................................................................34

4 Connect the Front End...................................................................... 35

Types of Redundancy for Front-End Connections........................................................................... 35

Multipath IO.........................................................................................................................................35

MPIO Behavior...............................................................................................................................35

MPIO Configuration Instructions for Servers...............................................................................36

Front-End Connectivity Modes.......................................................................................................... 36

Failover Behavior for Legacy Mode and Virtual Port Mode.........................................................36

Virtual Port Mode...........................................................................................................................37

Legacy Mode................................................................................................................................. 39

Fibre Channel Zoning..........................................................................................................................41

Port Zoning Guidelines..................................................................................................................41

WWN Zoning Guidelines...............................................................................................................42

Connect the Front End for a Storage Center.....................................................................................42

Connect Fibre Channel Servers....................................................................................................42

Connect iSCSI Servers...................................................................................................................47

Label the Front-End Cables................................................................................................................ 51

5 Set up the Storage Center Software................................................52

Prerequisites........................................................................................................................................ 52

Hardware Configuration............................................................................................................... 52

Required Materials.........................................................................................................................52

Required Documents.................................................................................................................... 53

Required Software Versions..........................................................................................................53

Connect the Ethernet Management Interface...................................................................................53

Label the Ethernet Management Cables......................................................................................54

Turn on the Storage Controller.......................................................................................................... 55


Configure the Storage Controller Modules....................................................................................... 56

Establish a Serial Connection to a Storage Controller Module...................................................56

Configure the Top Storage Controller Module............................................................................57

Configure the Bottom Storage Controller Module..................................................................... 60

Launch the Storage Center Startup Wizard....................................................................................... 63

Complete the Startup Wizard............................................................................................................. 65

License Agreement Page.............................................................................................................. 65

Load License Page........................................................................................................................ 66

Create Disk Folder Page................................................................................................................67

Add Controller Page......................................................................................................................70

Time Settings Page........................................................................................................................74

System Setup Page........................................................................................................................ 75

Configure SMTP Page....................................................................................................................77

Update Setup Page........................................................................................................................78

User Setup Page............................................................................................................................ 79

Configure IO Cards Page..............................................................................................................80

Configure Ports Page.................................................................................................................... 81

Generate SSL Cert Page................................................................................................................ 91

6 Perform Post-Setup Tasks................................................................ 94

Update the Storage Center Software.................................................................................................94

Perform a Phone Home......................................................................................................................94

Configure a Phone Home Proxy...................................................................................................95

Check for Storage Center Updates ............................................................................................. 96

Verify Connectivity and Failover.........................................................................................................96

Create Test Volumes.....................................................................................................................96

Test Basic Connectivity................................................................................................................. 97

Test Storage Controller Module Failover......................................................................................97

Test MPIO...................................................................................................................................... 97

Clean up Test Volumes.................................................................................................................98

Label SC200/SC220 Expansion Enclosures.......................................................................................98

Next Steps........................................................................................................................................... 99

A Adding or Removing an Expansion Enclosure............................ 101

Adding Expansion Enclosures to an SC4020 Deployed without Expansion Enclosures............... 101

Adding an Expansion Enclosure to a Chain Currently in Service....................................................103

Identify the Leader Storage Controller Module......................................................................... 104

Check Current Disk Count before Adding an Expansion Enclosure.........................................104

Add an SC200/SC220 Expansion Enclosure to the A-side Chain.............................................104

Add an SC200/SC220 Expansion Enclosure to the B-side Chain............................................ 106

Label the Back-End Cables.........................................................................................................108


Removing an Expansion Enclosure from a Chain Currently in Service.......................................... 109

Release Disks before Removing an Expansion Enclosure.........................................................109

Disconnect the A-side Chain from the SC200/SC220 Expansion Enclosure...........................110

Disconnect the B-side Chain from the SC200/SC220 Expansion Enclosure........................... 111

B Troubleshooting Storage Center...................................................115

Troubleshooting the Serial Connection to a Storage Controller Module.......................................115

Troubleshooting Expansion Enclosures........................................................................................... 115

Troubleshooting Storage Center Licenses....................................................................................... 116

Troubleshooting the Storage Center Startup Wizard...................................................................... 116

C Getting Help...................................................................................... 117

Locating Your System Service Tag.................................................................................................... 117

Contacting Dell.................................................................................................................................. 117

Documentation feedback..................................................................................................................117


Preface

About this Guide

This guide describes how to install and configure an SC4020 storage controller.

Revision History

Document Number: 690-052-001

Revision Date Description

A May 2014 Initial release

B June 2014 Updated the storage controller module configuration instructions and removed a reference to an internal document

C August 2014 Added information about iSCSI front-end connectivity support

Audience

The information provided in this Deployment Guide is intended for use by Dell installers and Dell business partners.

Related Publications

The following documentation is available for the Dell Storage Center SC4020 Storage Controller.

• Dell SC4020 Storage Controller Getting Started Guide

Provides information about setting up the SC4020 storage controller and its technical specifications. This document is shipped with your system.

• Dell Storage Center SC4020 Storage System Owner’s Manual

Provides information about SC4020 storage controller system features, front-end cabling, and technical specifications.

• Dell Storage Center SC4020 Storage System Service Guide

Provides information about SC4020 storage controller service and maintenance.

• Storage Center Operating System Release Notes

Contains information about features and open and resolved issues for a particular Storage Center software version.

• Storage Center System Manager Administrator's Guide

Describes the Storage Center System Manager software that manages an individual Storage Center.

• Storage Center Software Update Guide

Describes how to upgrade Storage Center software from an earlier version to the current version.

• Storage Center Maintenance CD Instructions

Describes how to install Storage Center software on a storage controller. Installing Storage Center software using the Storage Center Maintenance CD is intended for use only by sites that cannot update Storage Center using the standard update options available through the Storage Center System Manager.

• Storage Center Command Utility Reference Guide

Provides instructions for using the Storage Center Command Utility. The Command Utility provides a command‐line interface (CLI) to enable management of Storage Center functionality on Windows, Linux, Solaris, and AIX platforms.

• Storage Center Command Set for Windows PowerShell

Provides instructions for getting started with Windows PowerShell cmdlets and scripting objects that interact with the Storage Center via the PowerShell interactive shell, scripts, and PowerShell hosting applications. Help for individual cmdlets is available online.

• Dell TechCenter

Provides technical white papers, best practice guides, and frequently asked questions about Dell Storage products. Go to: http://en.community.dell.com/techcenter/storage/ and select Dell Compellent in the Table of Contents.


1 About the SC4020 Storage Controller

The SC4020 storage controller provides the central processing capabilities for the Storage Center Operating System (OS), application software (Storage Center System Manager), and management of RAID storage.

Storage Center Hardware Components

The Storage Center described in this document consists of an SC4020 storage controller, enterprise-class switches, and expansion enclosures.

The SC4020 storage controller supports SC200 and SC220 expansion enclosures.

SC4020 Storage Controller

The SC4020 is a 2U storage controller that supports up to 24 2.5–inch hot-swappable SAS hard drives installed vertically side-by-side.

The SC4020 consists of two storage controller modules with multiple IO ports that provide communication with servers and expansion enclosures.

Switches

Dell offers enterprise-class switches as part of the total Storage Center solution.

The SC4020 supports Fibre Channel (FC) and Ethernet switches, which provide robust connectivity to servers and allow for the use of redundant transport paths. Fibre Channel (FC) or Ethernet switches can provide connectivity to a remote Storage Center to allow for replication of data. In addition, Ethernet switches provide connectivity to a management network to allow configuration, administration, and management of the Storage Center.

NOTE: The cabling between the storage controller and switches (and servers) is referred to as front‐end connectivity.

Expansion Enclosures

Expansion enclosures allow the data storage capabilities of the SC4020 to be expanded beyond the 24 internal drives in the storage controller chassis.

Storage Center supports a total of 120 drives per system. This total includes the drives in the SC4020 storage controller and the drives in the SC200/SC220 expansion enclosures.

The SC4020 supports up to eight SC200 expansion enclosures, up to four SC220 expansion enclosures, or any combination of SC200/SC220 expansion enclosures as long as the total drive count of the system does not exceed 120.
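
The enclosure limits described above can be checked arithmetically before ordering or cabling. The following Python sketch is only an illustration based on the counts stated in this section (24 internal drive bays, 12 drives per SC200, 24 drives per SC220, and a 120-drive system limit); it is not a Dell tool, and the function name is made up for this example.

    # Drive-count limits stated in this section; adjust if your documentation differs.
    SC4020_INTERNAL_BAYS = 24   # 2.5-inch bays in the storage controller chassis
    SC200_BAYS = 12             # 3.5-inch bays per SC200 expansion enclosure
    SC220_BAYS = 24             # 2.5-inch bays per SC220 expansion enclosure
    SYSTEM_DRIVE_LIMIT = 120    # maximum drives per Storage Center system


    def validate_enclosure_mix(sc200_count, sc220_count):
        """Return the total drive bays for a proposed SC4020 configuration and
        raise an error if the mix would exceed the 120-drive system limit."""
        total_bays = (SC4020_INTERNAL_BAYS
                      + sc200_count * SC200_BAYS
                      + sc220_count * SC220_BAYS)
        if total_bays > SYSTEM_DRIVE_LIMIT:
            raise ValueError(
                f"{sc200_count} x SC200 + {sc220_count} x SC220 provides "
                f"{total_bays} drive bays, over the {SYSTEM_DRIVE_LIMIT}-drive limit")
        return total_bays


    print(validate_enclosure_mix(8, 0))   # 120: largest all-SC200 configuration
    print(validate_enclosure_mix(0, 4))   # 120: largest all-SC220 configuration
    print(validate_enclosure_mix(2, 2))   # 96: a mixed configuration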


Storage Center Architecture Options

A Storage Center that contains an SC4020 storage controller can be deployed in two configurations:

• An SC4020 storage controller without SC200/SC220 expansion enclosures.

Figure 1. SC4020 without Expansion Enclosures

• An SC4020 storage controller with one or more SC200/SC220 expansion enclosures.

Figure 2. SC4020 with Two Expansion Enclosures

Storage Center sites can be co-located or remotely connected and continue to share and replicate data between sites. Replication duplicates volume data to support a disaster recovery plan or to provide local access to a remote data volume. Typically, data is replicated remotely to safeguard against data threats as part of an overall disaster recovery plan.


Storage Center Communication

A Storage Center uses multiple types of communication for both data transfer and administrative functions. Storage Center communication is classified into three types: front end, back end, and system administration.

SC4020 Storage Controller with Fibre Channel HBAs

Figure 3. SC4020 Storage Controller Fibre Channel Front-End Communication

Item | Description | Speed | Communication Type
1 | Server with Fibre Channel Host Bus Adapters (HBAs) | 8 Gbps or 16 Gbps | Front End
2 | Fibre Channel switch | 8 Gbps or 16 Gbps | Front End
3 | SC4020 storage controller | IO port dependent | N/A
4 | SC200/SC220 expansion enclosures | 6 Gbps per channel | Back End
5 | Remote Storage Center connected via iSCSI for replication | 1 Gbps or 10 Gbps | Front End
6 | Ethernet switch | 1 Gbps or 10 Gbps | Front End
7 | Ethernet connection from a computer to the SC4020 through the Ethernet switch | 10 Mbps, 100 Mbps, or 1 Gbps | System Administration
8 | Serial connection from a computer to the SC4020 | 115,200 bps | System Administration (service and installation only)


SC4020 Storage Controller with iSCSI HBAs

Figure 4. SC4020 Storage Controller iSCSI Front-End Communication

Item | Description | Speed | Communication Type
1 | Server with iSCSI Host Bus Adapters (HBAs) | 1 Gbps or 10 Gbps | Front End
2 | Ethernet switch | 1 Gbps or 10 Gbps | Front End
3 | SC4020 storage controller | IO port dependent | N/A
4 | SC200/SC220 expansion enclosures | 6 Gbps per channel | Back End
5 | Remote Storage Center connected via iSCSI for replication | 1 Gbps or 10 Gbps | Front End
6 | Ethernet connection from a computer to the SC4020 through the Ethernet switch | 10 Mbps, 100 Mbps, or 1 Gbps | System Administration
7 | Serial connection from a computer to the SC4020 | 115,200 bps | System Administration (service and installation only)

Front-End Connectivity

Front-end connections provide IO paths from servers to storage controllers and replication paths from one Storage Center to another Storage Center. The SC4020 provides two types of front-end ports:

• Front-end Fibre Channel ports: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by connecting to the storage controller Fibre Channel ports through one or more Fibre Channel switches. The Fibre Channel ports are located on the back of the storage controller, but are designated as front-end ports in Storage Center.

• Front-end iSCSI ports: Hosts, servers, or Network Attached Storage (NAS) appliances access storage by connecting to the storage controller iSCSI ports through one or more Ethernet switches. The SC4020 also uses iSCSI ports to replicate data to a remote Storage Center. The iSCSI ports are located on the back of the storage controller, but are designated as front-end ports in Storage Center.

CAUTION: The embedded iSCSI ports are only used for replication to another Storage Center. The embedded iSCSI ports cannot be used for front‐end connectivity to servers.

Back-End Connectivity

Back-end connectivity is strictly between the storage controller and expansion enclosures, which hold the physical drives that provide back-end expansion storage.

The SC4020 supports SAS connections to SC200/SC220 expansion enclosures. SAS provides a point-to-point topology that transfers data on four lanes simultaneously. Each lane can perform concurrent IO transactions at 6 Gbps, for an aggregate of 24 Gbps per SAS connection. The SAS ports are located on the back of the storage controller, but are designated as back-end ports in Storage Center.

System Administration

To perform system administration, the Storage Center communicates with computers using Ethernet and serial ports.

• Ethernet port: Used for configuration, administration, and management of Storage Center.

NOTE: The baseboard management controller (BMC) does not have a physical port on the SC4020. The BMC is accessed through the same Ethernet port that is used for Storage Center configuration, administration, and management.

• Serial port: Used for initial configuration of the storage controller modules. In addition, it is used to perform support only functions when instructed by Dell Technical Support Services.
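
Any terminal emulator can be used for the serial connection described above, provided it is set to the 115,200 bps rate listed in this chapter. As a minimal sketch, the following Python snippet opens such a console with the pyserial package; the device path and the 8-N-1 framing shown here are assumptions to confirm against the serial-connection settings given later in this guide.

    import serial  # pyserial package (pip install pyserial)

    # /dev/ttyUSB0 and 8-N-1 framing are assumptions; the 115,200 bps rate is
    # the speed listed for the SC4020 serial connection in this guide.
    console = serial.Serial(
        port="/dev/ttyUSB0",
        baudrate=115200,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=1,
    )

    console.write(b"\r")   # wake the console prompt
    print(console.read(256).decode(errors="replace"))
    console.close()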

SC4020 Storage Controller Hardware

The SC4020 storage controller ships with two power supply/cooling fan modules and two redundant storage controller modules.

Each storage controller module contains the communication ports of the storage controller.

SC4020 Front Panel Features and Indicators

The front panel of the SC4020 contains power and status indicators, a system identification button, and a seven-segment display.

In addition, the hard drives are installed and removed through the front of the storage controller chassis.

Figure 5. SC4020 Front View


Item Name Icon Description

1 Power indicator Lights when the storage controller power is on.

• Off: No power

• On steady green: At least one power supply is providing power to the storage controller

2 Status indicator Lights when at least one power supply is supplying power to the storage controller.

• Off: No power

• On steady blue: Power is on and firmware is running

• Blinking blue: Storage controller is busy booting or updating

• On steady amber: Hardware detected fault

• Blinking amber: Software detected fault

3 Identification button

Lights when the storage controller identification is enabled.

• Off: Normal status

• Blinking blue: Storage controller identification enabled

4 7–segment display — Displays the storage controller ID (01) either through firmware or by holding the identification button

5 Hard drives — Up to 24 2.5–inch SAS hard drives

SC4020 Back-Panel Features and Indicators

The back panel of the SC4020 shows the storage controller module and power supply indicators.

Figure 6. SC4020 Back View


Item Name Icon Description

1 Power supply/cooling fan module (PSU) (2)

— Contains a 580 W power supply and fans that provide cooling for the storage controller.

2 Battery backup unit (BBU) (2)

— Allows the storage controller module to shut down gracefully when a loss of AC power is detected.

3 Storage controller module (2)

— Each storage controller module contains:

• Two 6 Gbps SAS ports

• Four 8 Gbps Fibre Channel ports or two 10 Gbps iSCSI ports

• One embedded Ethernet port, which is used for system management

• One embedded iSCSI port, which is only used for replication to another Storage Center

• One serial port, which is used for initial configuration and support only functions

4 Cooling fan fault indicator (2)

• Off: Normal operation

• Steady amber: Fan fault or there is a problem communicating with the PSU

• Blinking amber: PSU is in programming mode

5 AC power fault indicator (2)

• Off: Normal operation

• Steady Amber: A PSU has been removed or there is a problem communicating with the PSU

• Blinking amber: PSU is in programming mode

6 AC power status indicator (2)

• Off: AC power is off, the power is on but the power supply module is not installed in a storage controller, or there is a hardware fault

• Steady green: AC power is on

• Blinking green: AC power is on and the PSU is in standby mode

7 DC power fault indicator (2)

• Off: Normal operation

• Steady amber: A PSU has been removed, or there is a DC or other hardware fault, or there is a problem communicating with the PSU

• Blinking amber: PSU is in programming mode

8 Power socket (2) — Accepts a standard computer power cord.

9 Power switch (2) — Controls power for the storage controller. There is one switch for each power supply/cooling fan module.


SC4020 Storage Controller Module Features and Indicators

The SC4020 includes two storage controller modules in two interface slots.

Figure 7. SC4020 Storage Controller Module with Fibre Channel Ports

Figure 8. SC4020 Storage Controller Module with iSCSI Ports

Item Control/Feature Icon Description

1 Battery status indicator • Blinking green (on 0.5 sec. / off 1.5 sec.): Battery heartbeat

• Fast blinking green (on 0.5 sec. / off 0.5 sec.): Battery is charging

• Steady green: Battery is ready

2 Battery fault indicator • Off: No faults

• Blinking amber: Correctable fault detected

• Steady amber: Uncorrectable fault detected; replace battery

3 MGMT port 10 Mbps, 100 Mbps, or 1 Gbps Ethernet port used for storage controller management and access to the BMC

4 iSCSI port 10 Mbps, 100 Mbps, or 1 Gbps Ethernet port used only for replication to another Storage Center

5 SAS activity indicators • Off: Port is off

• Steady green: Port is on, but without activity

• Blinking green: Port is on and there is activity


Item Control/Feature Icon Description

6 Storage controller module status

On: Storage controller module completed POST

7 Recessed power off button

Powers down the storage controller module if held for more than five seconds

8 Storage controller module fault

• Off: No faults

• Steady amber: Firmware has detected an error

• Blinking amber: Storage controller module is performing POST

9 Recessed reset button Reboots the storage controller module forcing it to restart at the POST process

10 Identification LED • Off: Identification disabled

• Blinking blue (for 15 sec.): Identification is enabled

• Blinking blue (continuously): Storage controller module shut down to the Advanced Configuration and Power Interface (ACPI) S5 state

11 USB port One USB 3.0 connector

NOTE: For engineering use only

12 Diagnostic LEDs (8) • Green LEDs 0–3: Low byte hex POST code

• Green LEDs 4–7: High byte hex POST code

13 Serial port (3.5 mm mini jack)

Used with a serial to mini adapter to perform initial storage controller module configurations. In addition, it is used to perform support only functions when instructed by Dell Technical Support Services.

14 Fibre Channel Ports (4) with three LEDS per port or iSCSI Ports (2) with one LED per port

Fibre Channel Ports

• All off: No power

• All on: Booting up

• Blinking amber: 2 Gbps activity

• Blinking green: 4 Gbps activity

• Blinking yellow: 8 Gbps activity

• Blinking amber and yellow: Beacon

• All blinking (simultaneous): Firmware initialized

• All blinking (alternating): Firmware fault

iSCSI Ports

• Off: No power

• Steady Amber: Link

• Blinking Green: Activity

15 Mini-SAS port B Back-end expansion port B

16 Mini-SAS port A Back-end expansion port A


SC4020 Drives

Dell Enterprise Plus Drives are the only drives that can be installed in an SC4020 storage controller. If a non-Dell Enterprise Plus Drive is installed, Storage Center prevents the drive from being managed.

The indicators on the drives provide status and activity information.

Figure 9. SC4020 Hard Drive

Item Name Indicator Code

1 Hard drive activity indicator

• Blinking green: Drive activity

• Steady green: Drive is detected and there are no faults

2 Hard drive status indicator

• Off: Normal operation

• Blinking amber (on 1 sec. / off 1 sec.): Drive identification is enabled

• Blinking amber (on 2 sec. / off 1 sec.): Drive failed

• Steady amber: Drive has been manually put into a failed state

SC4020 Drive Numbering

In an SC4020 storage controller, drives are numbered from left to right starting from 0.

Figure 10. SC4020 Drive Numbering

SC200/SC220 Expansion Enclosure Hardware

SC200/SC220 expansion enclosures hold drives for data storage and connect directly to the SAS ports on the back of the SC4020 storage controller.

The SC200 is a 2U expansion enclosure that supports up to 12 3.5‐inch hard drives installed in a four‐column, three-row configuration. The SC220 is a 2U expansion enclosure that supports up to 24 2.5‐inch hard drives installed vertically side‐by‐side. The SC200/SC220 expansion enclosures ship with two power supply/cooling fan modules and two redundant enclosure management modules (EMMs).


SC200/SC220 Front Panel Features and Indicators

The SC200/SC220 front panel shows the expansion enclosure status and power supply status.

Figure 11. SC200 Expansion Enclosure Front View

Figure 12. SC220 Expansion Enclosure Front View

Item Name Icon Description

1 Expansion enclosure status indicator

Lights when the expansion enclosure power is on.

• Off: No power

• On steady blue: Normal operation

• Blinking blue: Indicates that Storage Center is identifying the enclosure

• On steady amber: Expansion enclosure is turning on or was reset

• Blinking amber: Expansion enclosure is in the fault state.

2 Power supply status indicator

Lights when at least one power supply is supplying power to the expansion enclosure.

• Off: Both power supplies are off.

• On steady green: At least one power supply is providing power to the expansion enclosure

3 Hard drives — Dell Enterprise Plus Drives

• SC200: Up to 12 3.5-inch hard drives

• SC220: Up to 24 2.5-inch hard drives


SC200/SC220 Back Panel Features and Indicators

The SC200/SC220 back panel provides controls to power up and reset the expansion enclosure, indicators to show the expansion enclosure status, and connections for back-end cabling.

Figure 13. SC200/SC220 Expansion Enclosure Back View

Item Name Icon Description

1 DC power indicator • Green: Normal operation. The power supply module is supplying DC power to the expansion enclosure

• Off: Power switch is off, the power supply is not connected to AC power, or there is a fault condition

2 Power supply/cooling fan indicator

• Amber: Power supply/cooling fan fault is detected

• Off: Normal operation

3 AC power indicator • Green: Power supply module is connected to a source of AC power, whether or not the power switch is on

• Off: Power supply module is disconnected from a source of AC power

4 Power switches (2) — Controls power for the expansion enclosure. There is one switch for each power supply/cooling fan module.

5 Power supply/cooling fan modules (2)

— Contains a 700 W power supply and fans that provide cooling for the expansion enclosure.

6 Enclosure Management Modules (2)

— EMMs provide the data path and management functions for the expansion enclosure.

SC200/SC220 EMM Features and Indicators

SC200/SC220 expansion enclosures include two Enclosure Management Modules (EMMs) in two interface slots.

Figure 14. SC200/SC220 EMM Features and Indicators


Item Name Icon Description

1 System status indicator

Not used on SC200/SC220 expansion enclosures.

2 Serial port For engineering use only.

3 SAS port A (in) Connects to the storage controller module or to other expansion enclosures. SAS ports A and B can be used for either input or output. However, for cabling consistency, use port A as an input port.

4 Port A link status

• Green: All the links to the port are connected

• Amber: One or more links are not connected

• Off: Expansion Enclosure is not connected

5 SAS port B (out)

Connects to the storage controller module or to other expansion enclosures. SAS ports A and B can be used for either input or output. However, for cabling consistency, use port B as an output port.

6 Port B link status

• Green: All the links to the port are connected

• Amber: One or more links are not connected

• Off: Expansion Enclosure is not connected

7 EMM status indicator

• On steady green: Normal operation

• Amber: Expansion Enclosure did not boot or is not properly configured

• Blinking green: Automatic update in process

• Blinking amber (two times per sequence): Expansion Enclosure is unable to communicate with other expansion enclosures

• Blinking amber (four times per sequence): Firmware update failed

• Blinking amber (five times per sequence): Firmware versions are different between the two EMMs

SC200/SC220 Drives

Dell Enterprise Plus Drives are the only drives that can be installed in SC200/SC220 expansion enclosures. The drives are installed either horizontally (SC200) or vertically (SC220) in the enclosure. The indicators on the drives provide status and activity information.

If a non-Dell Enterprise Plus Drive is installed, Storage Center prevents the drive from being managed.

Figure 15. SC200/SC220 Hard Drive Indicators


Item Name Indicator Code

1 Hard drive activity indicator

• Blinking green: Indicates drive activity

• Steady green: Indicates no drive activity

2 Hard drive status indicator

• Steady green: Normal operation

• Blinking green (on 1 sec. / off 1 sec.): Drive identification is enabled

• Off: No power to the drive

SC200/SC220 Drive Numbering

In SC200/SC220 expansion enclosures, drives are numbered from left to right starting from 0.

Storage Center System Manager identifies drives as Disk‑XX-YY, where XX is the number of the expansion enclosure that physically contains the drive, and YY is the drive position inside the expansion enclosure.

• SC200: Drives are numbered in rows starting from 0 at the top-left drive.

Figure 16. SC200 Expansion Enclosure Drive Numbering

• SC220: Drives are numbered starting from 0 at the left-most drive.

Figure 17. SC220 Expansion Enclosure Drive Numbering
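
The numbering rules above can also be expressed as a small helper. This Python sketch builds a Disk XX-YY style name from an enclosure number and drive position and, for an SC200, converts a position into its row and column; the exact display formatting used by Storage Center System Manager (for example, zero padding) is an assumption here.

    def disk_name(enclosure, position):
        """Build a 'Disk XX-YY' style name: XX is the enclosure that physically
        contains the drive, YY is the drive position inside that enclosure."""
        return f"Disk {enclosure:02d}-{position:02d}"   # zero padding is assumed


    def sc200_row_column(position):
        """SC200 drives sit in a four-column, three-row grid and are numbered
        in rows starting from 0 at the top-left drive."""
        if not 0 <= position <= 11:
            raise ValueError("SC200 positions run from 0 to 11")
        return divmod(position, 4)    # (row, column), both starting at 0


    print(disk_name(2, 5))            # Disk 02-05
    print(sc200_row_column(5))        # (1, 1): second row, second column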


2 Install the Storage Center Hardware

Verify that the ordered equipment arrived safely, prepare for the installation, mount the equipment in a rack, and install the disks.

Unpack and Inventory the Storage Center Equipment

Make sure the equipment that was ordered arrived safely on site:

• Storage controller chassis, storage controller modules, and drives

• Expansion enclosures and drives, if applicable

• Front-end cables

• Back-end cables

• Switches

Prepare the Installation Environment

Make sure that the environment is ready for Storage Center installation.

• Rack Space: There must be sufficient space in the rack to accommodate the storage controller chassis, expansion enclosures, and switches.

• Power: Power must be available in the rack, and the power delivery system must meet the power requirements for the Storage Center.

• Connectivity: The rack must be wired for connectivity to the management network and any networks that carry front-end IO from the Storage Center to servers.

Gather Required Materials

Make sure that you have the following items before you begin the Storage Center installation.

Table 1. Required Materials

Required Item | Description
Storage Center license file | Makes Storage Center settings available depending on what features were purchased.
RS-232/3.5 mm mini jack serial cable and computer | Used to run commands and view console messages during storage controller module configuration.
Computer connected to the same network as the Storage Center | Used to connect to the Storage Center System Manager (through a web browser) to complete the Storage Center configuration.
Hardware Serial Number (HSN) | Used to configure the storage controller modules.


Storage Architect or Business Partner supplied pre-installation documents | (Optional) Provide site-specific settings used during deployment:

• List of hardware needed to support storage requirements

• Optional connectivity diagrams to illustrate cabling between the storage controller, expansion enclosures, switches, and servers

• Network information, such as IP addresses, subnet masks, and gateways

Safety Precautions

Always follow these safety precautions to avoid injury and damage to Storage Center equipment.

If equipment described in the document is used in a manner not specified by Dell, the protection provided by the equipment may be impaired. For your safety and protection, observe the rules described in the following sections.

NOTE: See the safety and regulatory information that shipped with each Storage Center component. Warranty information may be included within this document or as a separate document.

Electrical Safety Precautions

Always follow electrical safety precautions to avoid injury and damage to Storage Center equipment.

• Provide a suitable power source with electrical overload protection. All Storage Center components must be grounded before applying power. Make sure that there is a safe electrical earth connection to power supply cords. Check the grounding before applying power.

• The plugs on the power supply cords are used as the main disconnect device. Make sure that the socket outlets are located near the equipment and are easily accessible.

• Know the locations of the equipment power switches and the room's emergency power-off switch, disconnection switch, or electrical outlet.

• Do not work alone when working with high-voltage components.

• Do not use mats designed to decrease electrostatic discharge as protection from electrical shock. Instead, use rubber mats that have been specifically designed as electrical insulators.

• Do not remove covers from the power supply (PSU). Disconnect the power connection before removing a PSU from the storage controller chassis.

• Do not remove a faulty PSU unless you have a replacement model of the correct type ready for insertion. A faulty PSU must be replaced with a fully operational module within 24 hours.

• Permanently unplug the storage controller chassis before you move it or if you think it has become damaged in any way. When powered by multiple AC sources, disconnect all supply power for complete isolation.


Electrostatic Discharge Precautions

Always follow electrostatic discharge (ESD) precautions to avoid injury and damage to Storage Center equipment.

Electrostatic discharge (ESD) is generated by two objects with different electrical charges coming into contact with each other. The resulting electrical discharge can damage electronic components and printed circuit boards. Follow these guidelines to protect your equipment from ESD:

• Dell recommends that you always use a static mat and static strap while working on components in the interior of the storage controller chassis.

• Observe all conventional ESD precautions when handling plug-in modules and components.

• Use a suitable ESD wrist or ankle strap.

• Avoid contact with backplane components and module connectors.

• Keep all components and printed circuit boards (PCBs) in their antistatic bags until ready for use.

General Safety Precautions

Always follow general safety precautions to avoid injury and damage to Storage Center equipment.

• Keep the area around the storage controller chassis clean and free of clutter.

• Place any system components that have been removed away from the storage controller chassis or on a table so that they are not in the way of foot traffic.

• While working on the storage controller chassis, do not wear loose clothing such as neckties and unbuttoned shirt sleeves, which can come into contact with electrical circuits or be pulled into a cooling fan.

• Remove any jewelry or metal objects from your body because they are excellent metal conductors that can create short circuits and harm you if they come into contact with printed circuit boards or areas where power is present.

• Do not lift a storage controller chassis by the handles of the power supply units (PSUs). They are not designed to hold the weight of the entire chassis, and the chassis cover may become bent.

• Before moving a storage controller chassis, remove the PSUs to minimize weight.

• Do not remove drives until you are ready to replace them.

Mount the Hardware in a Rack

Dell recommends mounting the hardware in a rack to ensure rack safety and to allow for expansion.

About this task

Mount the storage controller chassis and expansion enclosures in a manner that allows for expansion in the rack and prevents the rack from becoming top‐heavy. Dell recommends mounting the storage controller chassis in the middle of the rack.

Steps

1. Determine where to mount the storage controller chassis in the rack and mark the location.

2. Install the rack rails at the marked location using the top mounting holes of the bottom U.

3. Mount the storage controller chassis on the rails and secure the chassis.

a. Insert the top locking pin in the middle mounting hole of the top U.

b. Insert the bottom locking pin in the bottom mounting hole of the bottom U.


Figure 18. Mount the SC4020 Storage Controller Chassis

4. Secure the storage controller chassis to the rack using the mounting bolts.

a. Lift the latch on each chassis ear to access the mounting bolts.

b. Screw the bolts into the rack.

c. Close the latch on each chassis ear.

5. If the Storage Center system includes expansion enclosures, mount expansion enclosures below the storage controller. See the instructions included with the expansion enclosure rail kit for detailed steps.

Figure 19. Mount Expansion Enclosures Below the SC4020 Storage Controller Chassis

6. Secure the expansion enclosures to the rack using the bolts in the mounting ears.

Install Enterprise Plus Drives in an SC4020 Storage Controller

When an SC4020 is shipped, the drives are not installed in the storage controller chassis.

About this task

Install the drives and/or drive blanks in the storage controller chassis after it is mounted in a rack. Start on the left side of the chassis with slot 0 and install drives from left to right.


NOTE: Dell Enterprise Plus drives and Enterprise Solid-State Drives (eSSDs) are the only drives that can be installed in the storage controller chassis. If non-Dell Enterprise Plus drives are installed, Storage Center prevents management of the drives.

Figure 20. Installing an Enterprise Plus Drive in an SC4020

Steps

1. Open the hard drive carrier handle and insert the hard drive carrier into the hard drive bay.

2. Slide the drive into the bay until the hard drive carrier contacts the backplane.

3. Close the hard drive carrier handle to lock the hard drive in place.

4. Continue to push firmly until you hear a click and the hard drive carrier handle fully engages.

5. Insert drive blanks into any open slots in the chassis.

All of the drive slots in the chassis must be filled with a drive or drive blank.

Related Links

SC4020 Drive Numbering

Install Enterprise Plus Drives in SC200/SC220 Expansion Enclosures

SC200/SC220 expansion enclosures are shipped with the drives installed and empty drive blanks installed in unused slots.

About this task

The following instructions show the installation of the Dell Enterprise Plus drive for reference only.


NOTE: Dell Enterprise Plus drives and Enterprise Solid-State Drives (eSSDs) are the only drives that can be installed in SC200/SC220 expansion enclosures. If non-Dell Enterprise Plus drives are installed, Storage Center prevents management of the drives.

Figure 21. Installing an Enterprise Plus Drive in an Expansion Enclosure

Steps

1. Insert the hard drive carrier into the hard drive bay.

Start on the left side of the expansion enclosure with slot 0 and install drives from left to right.

2. Slide the drive into the bay until the hard drive carrier contacts the backplane.

3. Close the hard drive carrier handle to lock the hard drive in place.

4. Continue to push firmly until you hear a click and the hard drive carrier handle fully engages.

5. Insert drive blanks into any open slots in the expansion enclosure.

All of the drive slots in the expansion enclosure must be filled with a drive or drive blank.

Related Links

SC200/SC220 Drive Numbering


3 Connect the Back End

An SC4020 storage controller can be deployed with or without expansion enclosures.

When an SC4020 is deployed with expansion enclosures, the expansion enclosures connect to the SAS ports on the storage controller modules.

When an SC4020 is deployed without expansion enclosures, the storage controller modules must be interconnected using SAS cables. This connection enables SAS path redundancy between the storage controller modules.

SAS Expansion Enclosure Cabling Guidelines

Multiple SAS expansion enclosures can be connected to the storage controller by cabling the expansion enclosures in series. A series of interconnected expansion enclosures is referred to as a SAS chain.

A SAS chain begins at an initiator port on a storage controller module and connects to the first expansion enclosure. Subsequent expansion enclosures are connected in series.

SAS Redundancy

Redundant SAS cabling ensures that an IO port failure does not cause a Storage Center outage.

The IO continues to flow through the remaining functioning paths. Redundant cabling also provides protection from a storage controller module failure. Each SAS chain is made up of two paths that are referred to as the A side and B side.

SAS Port Types

There are two types of SAS ports on each storage controller module: initiator only and initiator/target.

The ports labeled A (red) are initiator only ports and the ports labeled B (blue) are initiator/target ports.

Figure 22. SC4020 SAS Ports

1. Storage controller module 1 2. Storage controller module 2

3. Initiator only ports (Ports A) 4. Initiator/target ports (Ports B)


Back-end Connections for an SC4020 without Expansion Enclosures

When you configure an SC4020 without expansion enclosures, you must interconnect the storage controller modules using SAS cables.

This connection enables SAS path redundancy between the storage controller modules.

Interconnect the Storage Controller Modules

The storage controller modules of an SC4020 without expansion enclosures must be interconnected using SAS cables.

Prerequisites

Locate the SAS cables and labels that shipped with the SC4020.

About this task

Connect the SAS cables between the initiator and target ports of each storage controller module and label the cables.

Steps

1. Connect a SAS cable between storage controller module 1: port A and storage controller module 2: port B.

2. Connect a SAS cable between storage controller module 1: port B and storage controller module 2: port A.

Figure 23. SAS Connections

1. Storage controller module 1 2. Storage controller module 2

3. SAS cable 4. SAS cable

3. Label both ends of each SAS cable with the supplied labels.

a. Near the connector, align the label perpendicular to the cable and affix it starting with the top edge of the label.

b. Wrap the label around the cable until it fully encircles the cable. The bottom of each pre‐made label is clear so that it does not obscure the text.

c. Apply a matching label to the other end of the cable.


Back-end Connections for an SC4020 with SC200/SC220 Expansion Enclosures

This section shows common cabling between the SC4020 storage controller and SC200/SC220 expansion enclosures. Locate the scenario that most closely matches the Storage Center you are configuring and follow the instructions, modifying them as necessary.

The SC4020 supports up to eight SC200 expansion enclosures, up to four SC220 expansion enclosures, or any combination of SC200/SC220 expansion enclosures as long as the total drive count in Storage Center does not exceed 120.

SC200/SC220 expansion enclosure chains are cabled as follows.

• Side A (Red): Expansion enclosures are connected from port B to port A, using the top EMMs.

• Side B (Blue): Expansion enclosures are connected from port B to port A, using the bottom EMMs.

Table 2. High-Level Expansion Enclosure Connection Steps

Path High-Level Steps

Chain 1: Side A (Red)
1. Connect storage controller module 1: port A to the first expansion enclosure: top EMM, port A.
2. Connect the remaining expansion enclosures in series from port B to port A using the top EMMs.
3. Connect the last expansion enclosure: top EMM, port B to storage controller module 2: port B.

Chain 1: Side B (Blue)
1. Connect storage controller module 2: port A to the first expansion enclosure: bottom EMM, port A.
2. Connect the remaining expansion enclosures in series from port B to port A using the bottom EMMs.
3. Connect the last expansion enclosure: bottom EMM, port B to storage controller module 1: port B.
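
The pattern in the table above, together with the detailed examples that follow, can be written out as a short sketch. The Python function below lists the A-side and B-side connections for a chain of N expansion enclosures, and for N = 0 it falls back to the direct storage controller module interconnect described earlier in this chapter; the port and EMM labels are simply the names used in this guide, and the function itself is illustrative only.

    def sas_chain_plan(n_enclosures):
        """List the (from, to) SAS connections for one chain of SC200/SC220
        expansion enclosures, following the A-side/B-side pattern in this chapter."""
        if n_enclosures == 0:
            # No enclosures: interconnect the storage controller modules directly.
            return [("module 1: port A", "module 2: port B"),
                    ("module 1: port B", "module 2: port A")]

        a_side = [("module 1: port A", "enclosure 1: top EMM, port A")]
        b_side = [("module 2: port A", "enclosure 1: bottom EMM, port A")]
        for n in range(1, n_enclosures):
            a_side.append((f"enclosure {n}: top EMM, port B",
                           f"enclosure {n + 1}: top EMM, port A"))
            b_side.append((f"enclosure {n}: bottom EMM, port B",
                           f"enclosure {n + 1}: bottom EMM, port A"))
        a_side.append((f"enclosure {n_enclosures}: top EMM, port B", "module 2: port B"))
        b_side.append((f"enclosure {n_enclosures}: bottom EMM, port B", "module 1: port B"))
        return a_side + b_side


    for cable in sas_chain_plan(2):
        print(" -> ".join(cable))

Running the sketch with two enclosures reproduces the connections listed in Table 4 later in this chapter.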


SC4020 and One SC200/SC220 Expansion Enclosure

This figure shows an SC4020 cabled to one expansion enclosure forming a single chain.

Figure 24. SC4020 and One Expansion Enclosure

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1

Table 3. Connected to One Expansion Enclosure

Path Connections

Chain 1: A Side (Red)
1. Storage controller module 1: port A → expansion enclosure 1: top EMM, port A.
2. Expansion enclosure 1: top EMM, port B → storage controller module 2: port B.

Chain 1: B Side (Blue)
1. Storage controller module 2: port A → expansion enclosure 1: bottom EMM, port A.
2. Expansion enclosure 1: bottom EMM, port B → storage controller module 1: port B.


SC4020 and Two or More SC200/SC220 Expansion Enclosures

This figure shows an SC4020 cabled to two expansion enclosures forming a single chain.

Figure 25. SC4020 and Two Expansion Enclosures

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1 4. Expansion enclosure 2

To connect additional expansion enclosures, cable the expansion enclosures in series from port B to port A using the top EMMs and from port A to port B using the bottom EMMs.

Table 4. Connected to Two Expansion Enclosures

Chain 1: A Side (Red)
1. Storage controller module 1: port A → expansion enclosure 1: top EMM, port A.
2. Expansion enclosure 1: top EMM, port B → expansion enclosure 2: top EMM, port A.
3. Expansion enclosure 2: top EMM, port B → storage controller module 2: port B.

Chain 1: B Side (Blue)
1. Storage controller module 2: port A → expansion enclosure 1: bottom EMM, port A.
2. Expansion enclosure 1: bottom EMM, port B → expansion enclosure 2: bottom EMM, port A.
3. Expansion enclosure 2: bottom EMM, port B → storage controller module 1: port B.
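
To extend the same pattern to longer chains, the following Python sketch, offered only as an illustration, generates the connection list for a single chain of N expansion enclosures; with two enclosures it reproduces Table 4.

# Sketch: generate side A (top EMMs) and side B (bottom EMMs) connections
# for one chain of N SC200/SC220 expansion enclosures.
def chain_connections(num_enclosures):
    """Return (side A, side B) connection lists for one chain."""
    side_a = [("SC module 1: port A", "enclosure 1: top EMM, port A")]
    side_b = [("SC module 2: port A", "enclosure 1: bottom EMM, port A")]
    for n in range(1, num_enclosures):
        side_a.append((f"enclosure {n}: top EMM, port B",
                       f"enclosure {n + 1}: top EMM, port A"))
        side_b.append((f"enclosure {n}: bottom EMM, port B",
                       f"enclosure {n + 1}: bottom EMM, port A"))
    side_a.append((f"enclosure {num_enclosures}: top EMM, port B",
                   "SC module 2: port B"))
    side_b.append((f"enclosure {num_enclosures}: bottom EMM, port B",
                   "SC module 1: port B"))
    return side_a, side_b

side_a, side_b = chain_connections(2)   # reproduces Table 4
for connection in side_a + side_b:
    print(" -> ".join(connection))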


Label the Back-End Cables

Label the back-end cables that connect storage controller modules to expansion enclosures to indicate the chain number and side (A or B).

Prerequisites
Locate the pre-made cable labels provided with the SC200/SC220 expansion enclosures.

About this task
Apply cable labels to both ends of each SAS cable that connects a storage controller module to an expansion enclosure.

Steps

1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 26. Attach Cable Label

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

3. Apply a matching label to the other end of the cable.


4 Connect the Front End

Front-end cabling refers to the connections between the storage controller and servers.

Front‐end connections can be made using Fibre Channel or iSCSI interfaces. Dell recommends connecting servers to the storage controller using the most redundant options available.

Types of Redundancy for Front-End Connections

Front-end redundancy is achieved by eliminating single points of failure that could cause a server to lose connectivity to Storage Center.

Depending on how Storage Center is configured, the following types of redundancy are available.

• Path redundancy: When multiple paths are available from a server to a storage controller, a server configured for multipath IO (MPIO) can use multiple paths for IO. If a path fails, the server continues to use the remaining active paths.

• Storage controller module redundancy: If one storage controller module fails, the front-end ports fail over to the functioning storage controller module. Both front-end connectivity modes (legacy mode and virtual port mode) provide storage controller module redundancy.

• Port redundancy: In virtual port mode, if a port goes down because it is disconnected or there is a hardware failure, the port can fail over to another functioning port in the same fault domain. Port redundancy is available only for Storage Centers operating in virtual port mode.

Related Links

Multipath IO

Legacy Mode

Virtual Port Mode

Multipath IO

Multipath IO (MPIO) allows a server to use multiple paths for IO if they are available.

MPIO software offers redundancy at the path level. MPIO loads as a driver on the server, and typically operates in a round-robin manner by sending packets first down one path and then the other. If a path fails, MPIO software continues to send packets down the functioning path. MPIO is operating-system specific.

MPIO Behavior

When MPIO is configured, a server can send IO to both storage controller modules.

However, a single volume is owned by one storage controller module at a time. Even if there are active paths from a server to both storage controller modules, IO for a single volume is always processed by one storage controller module. This limitation only applies to individual volumes. If a server has two or more volumes owned by different storage controller modules, the server can send IO to both storage controller modules.
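
As an illustration only, the following Python sketch models the round-robin path selection and failover behavior just described; the path names are hypothetical, and real MPIO is provided by the server's multipathing driver rather than application code.

# Sketch: round-robin over healthy paths, skipping any path marked as failed.
from itertools import cycle

class RoundRobinPaths:
    def __init__(self, paths):
        self.paths = list(paths)
        self.failed = set()
        self._cycle = cycle(self.paths)

    def next_path(self):
        """Return the next healthy path, skipping failed ones."""
        for _ in range(len(self.paths)):
            path = next(self._cycle)
            if path not in self.failed:
                return path
        raise RuntimeError("No healthy paths remain")

    def fail(self, path):
        self.failed.add(path)

mpio = RoundRobinPaths(["path_A", "path_B"])
print(mpio.next_path())   # path_A
print(mpio.next_path())   # path_B
mpio.fail("path_B")
print(mpio.next_path())   # path_A, IO continues on the surviving path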


MPIO Configuration Instructions for Servers

To use MPIO, configure MPIO on the server prior to connecting the Storage Center front end.

To configure MPIO on a server, see the Dell document that corresponds to the server operating system. Depending on the operating system, you may need to install MPIO software or configure server options.

Table 5. MPIO Configuration Documents

IBM AIX: Dell Compellent AIX Best Practices

Linux:
• Dell Compellent Storage Center Linux Best Practices
• Dell Compellent Best Practices: Storage Center with SUSE Linux Enterprise Server 11

VMware vSphere 5.x: Dell Compellent Storage Center Best Practices with vSphere 5.x

Windows Server 2008, 2008 R2, and 2012: Dell Compellent Storage Center Microsoft Multipath IO (MPIO) Best Practices Guide

Front-End Connectivity Modes

Storage Center uses either virtual port mode or legacy mode to transport data to servers that use SAN storage.

In virtual port mode, all ports are active, and if one port fails the load is distributed between the remaining ports within the same fault domain. In legacy mode, front-end IO ports are configured in pairs of primary and reserved ports.

Failover Behavior for Legacy Mode and Virtual Port Mode

Legacy mode and virtual port mode behave differently during failure conditions because they use different mechanisms to provide fault tolerance.

Table 6. Differences between Legacy Mode and Virtual Port Mode Failover Behavior

Normal operating conditions
• Virtual port mode: All ports are active and pass IO.
• Legacy mode: Primary ports pass IO. Reserved ports remain in a standby mode until a storage controller module failure.

A storage controller module fails
• Virtual port mode: Virtual ports on the failed storage controller module move to physical ports on the functioning storage controller module.
• Legacy mode: Primary ports on the failed storage controller module fail over to reserved ports on the functioning storage controller module.

A single port fails
• Virtual port mode: An individual port fails over to another port in the fault domain.
• Legacy mode: The port does not fail over because there was no storage controller module failure. If a second path is available, MPIO software on the server provides fault tolerance.


Virtual Port Mode

Virtual port mode provides port and storage controller module redundancy by connecting multiple active ports to each Fibre Channel or Ethernet switch.

In virtual port mode, each physical port has a WWN (World Wide Name) and a virtual WWN. Servers target only the virtual WWNs. During normal conditions, all ports process IO. If a port or storage controller module failure occurs, a virtual WWN moves to another physical WWN in the same fault domain. When the failure is resolved and ports are rebalanced, the virtual port returns to the preferred physical port.
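
The following Python sketch is only an illustration of this failover behavior; the port names and fault domain layout are hypothetical, and none of this is part of a configuration procedure.

# Sketch: a virtual port has a preferred physical port; on failure it moves to
# another healthy physical port in the same fault domain, and rebalancing
# returns it to the preferred port once the failure is resolved.
fault_domains = {
    "domain1": ["phys_1_1", "phys_2_1"],
    "domain2": ["phys_1_2", "phys_2_2"],
}
preferred = {"virt_A": "phys_1_1", "virt_B": "phys_2_1"}
location = dict(preferred)
failed_ports = set()

def fail_port(port):
    """Mark a physical port as failed and relocate its virtual ports within the domain."""
    failed_ports.add(port)
    for ports in fault_domains.values():
        if port in ports:
            healthy = [p for p in ports if p not in failed_ports]
            for vport, phys in location.items():
                if phys == port and healthy:
                    location[vport] = healthy[0]

def rebalance():
    """Return each virtual port to its preferred physical port after repair."""
    for vport, phys in preferred.items():
        if phys not in failed_ports:
            location[vport] = phys

fail_port("phys_1_1")
print(location["virt_A"])   # phys_2_1, another port in the same fault domain
failed_ports.clear()
rebalance()
print(location["virt_A"])   # phys_1_1, back on the preferred physical port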

Virtual port mode provides the following advantages over legacy mode:

• Increased connectivity: Because all ports are active, additional front-end bandwidth is available without sacrificing redundancy.

• Improved redundancy

– Fibre Channel: A Fibre Channel port can fail over to another Fibre Channel port in the same fault domain on the storage controller module.

– iSCSI: In a single fault domain configuration, an iSCSI port can fail over to the other iSCSI port on the storage controller module. In a two fault domain configuration, an iSCSI port cannot fail over to the other iSCSI port on the storage controller module.

• Simplified iSCSI configuration: Each fault domain has an iSCSI control port that coordinates discovery of the iSCSI ports in the domain. When a server targets the iSCSI port IP address, it automatically discovers all ports in the fault domain.

Fault Domains in Virtual Port Mode

Fault domains group front-end ports that are connected to the same Fibre Channel fabric or Ethernet network. Ports that belong to the same fault domain can fail over to each other because they have the same connectivity.

The following requirements apply to fault domains in virtual port mode:

• A separate fault domain must be created for each front-end Fibre Channel fabric or Ethernet network.

• A fault domain must contain a single type of transport media (FC or iSCSI, but not both).

• Dell recommends configuring at least two connections from each storage controller module to each Fibre Channel network (fault domain) or Ethernet network (fault domain).

Requirements for Virtual Port Mode

The following requirements must be met to configure a storage controller in virtual port mode.

Table 7. Virtual Port Mode Requirements

License: Storage Center must be licensed for virtual port mode.

Switches: Front-end ports must be connected to Fibre Channel or Ethernet switches; servers cannot be directly connected to storage controller front-end ports.

Multipathing: If multiple active paths are available to a server, the server must be configured for MPIO to use more than one path simultaneously.

iSCSI networks:
• NAT must be disabled for iSCSI replication traffic.
• CHAP authentication must be disabled.


Fibre Channel fabrics:
• The FC topology must be switched fabric. Point-to-point and arbitrated loop topologies are not supported.

• FC switches must be zoned to meet the virtual port mode zoning requirements.

• FC switches must support N_Port ID Virtualization (NPIV).

• Persistent FCID must be disabled on FC switches.

NOTE: AIX servers are not supported.

Related Links

Fibre Channel Zoning

Example Virtual Port Mode Configuration

The following figure shows a Storage Center in virtual port mode connected to servers with Fibre Channel HBAs and two fault domains.

Figure 27. Virtual Port Mode Example with FC

1. Server 1 2. Server 2

3. FC switch 1 4. FC switch 2

5. Storage controller module 1 6. Storage controller module 2

NOTE: To use multiple primary paths simultaneously, the server must be configured to use MPIO.

The following table summarizes the failover behaviors for this configuration.


Table 8. Virtual Port Mode Failover Scenarios

Normal operating conditions: All ports are active and pass IO.

Storage controller module 1 fails: Virtual ports on storage controller module 1 fail over by moving to physical ports on storage controller module 2.

Storage controller module 2 fails: Virtual ports on storage controller module 2 fail over by moving to physical ports on storage controller module 1.

A single port fails: The virtual port associated with the failed physical port moves to another physical port in the fault domain.

Legacy Mode

Legacy mode provides storage controller module redundancy for a Storage Center by connecting multiple primary and reserved ports to each Fibre Channel or Ethernet switch.

In legacy mode, each primary port on a storage controller module is paired with a corresponding reserved port on the other storage controller module. During normal conditions, the primary ports process IO and the reserved ports are in standby mode. If a storage controller module fails, the primary ports fail over to the corresponding reserved ports on the other storage controller module. This approach ensures that servers connected to the switch do not lose connectivity if one of the storage controller modules fails. For optimal performance, the primary ports should be evenly distributed across both storage controller modules.

Fault Domains in Legacy Mode

Each pair of primary and reserved ports is grouped into a fault domain in the Storage Center software. The fault domain determines which ports are allowed to fail over to each other.

The following requirements apply to fault domains in legacy mode on a Storage Center:

• A fault domain must contain one type of transport media (FC or iSCSI, but not both).

• A fault domain must contain one primary port and one reserved port.

• The reserved port must be on a different storage controller module than the primary port.

Requirements for Legacy Mode

The following requirements must be met to configure a storage controller in legacy mode.

Table 9. Legacy Mode Requirements

Storage controller module front-end ports:

On an SC4020 with FC front-end ports, each storage controller module must have two FC front-end ports to connect two paths to each Fibre Channel switch.

On an SC4020 with iSCSI front-end ports, each storage controller module must have two iSCSI front-end ports to connect two paths to each Ethernet switch.

Multipathing: If multiple active paths are available to a server, the server must be configured for MPIO to use more than one path simultaneously.


Fibre Channel zoning: Fibre Channel switches must be zoned to meet the legacy mode zoning requirements.

Related Links

Fibre Channel Zoning

Example Legacy Mode Configuration

The following figure shows a Storage Center in legacy mode connected to servers with Fibre Channel HBAs and four fault domains.

• Fault domain 1 (shown in orange) is comprised of primary port P1 on storage controller module 1 and reserved port R1 on storage controller module 2.

• Fault domain 2 (shown in blue) is comprised of primary port P2 on storage controller module 2 and reserved port R2 on storage controller module 1.

• Fault domain 3 (shown in green) is comprised of primary port P3 on storage controller module 1 and reserved port R3 on storage controller module 2.

• Fault domain 4 (shown in purple) is comprised of primary port P4 on storage controller module 2 and reserved port R4 on storage controller module 1.

Figure 28. Legacy Mode Example with Fibre Channel

1. Server 1 2. Server 2

3. FC switch 1 4. FC switch 2

5. Storage controller module 1 6. Storage controller module 2

NOTE: To use multiple paths simultaneously, the server must be configured to use MPIO.

The following table summarizes the failover behaviors for this configuration.


Table 10. Failover Scenario Behaviors

Normal operating conditions
• Primary ports P1, P2, P3, and P4 pass IO.
• Reserved ports R1, R2, R3, and R4 remain in a standby mode until a storage controller module failure.

Storage controller module 1 fails
• In fault domain 1, primary port P1 fails over to reserved port R1.
• In fault domain 3, primary port P3 fails over to reserved port R3.

Storage controller module 2 fails
• In fault domain 2, primary port P2 fails over to reserved port R2.
• In fault domain 4, primary port P4 fails over to reserved port R4.

A single port fails
The port does not fail over because there was no controller failure. MPIO software on the server provides fault tolerance by sending IO to a functioning port in a different fault domain.

Fibre Channel Zoning

When using Fibre Channel for front-end connectivity, zones must be established to ensure that storage is visible to the servers. Plan the front-end connectivity using zoning concepts discussed in this section before starting to cable the storage controller.

Zoning can be applied to either the ports on switches or to the World Wide Names (WWNs) of the end devices. Zones should be created using a single initiator (using physical WWNs) and multiple targets (using virtual WWNs). The virtual WWNs from each storage controller module must be included in the zone with each host’s WWN.

NOTE: WWN zoning is recommended for virtual port mode.
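
As an illustration of the single-initiator, multiple-target guideline, the following Python sketch builds one zone per server HBA from hypothetical WWN values. Zones are actually created with the Fibre Channel switch vendor's own tools; this only shows the intended zone membership.

# Sketch: one zone per server HBA, containing that HBA's physical WWN plus the
# Storage Center virtual WWNs. All WWN values are hypothetical examples.
storage_virtual_wwns = ["5000d31000aabb05", "5000d31000aabb06"]
server_hbas = {
    "server1": "2100001b32aa0001",
    "server2": "2100001b32aa0002",
}

def build_server_zones(hbas, targets):
    """Return a dict of zone name to member WWNs (single initiator, multiple targets)."""
    return {f"zone_{name}": [wwn] + list(targets) for name, wwn in hbas.items()}

for zone, members in build_server_zones(server_hbas, storage_virtual_wwns).items():
    print(zone, members)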

Port Zoning Guidelines

When port zoning is configured, only specific switch ports are visible. If a storage device is moved to a different port that is not part of the zone, it is no longer visible to the other ports in the zone.

Table 11. Guidelines for Port Zoning

Connectivity Type Guidelines

Virtual port mode or legacy mode:

• Include all Storage Center front-end ports.

• Create server zones which contain all Storage Center front-end ports and a single server port.

Fibre Channel replication:

Include all Storage Center front-end ports from Storage Center system A and Storage Center system B in a single zone.


WWN Zoning Guidelines

When WWN zoning is configured, a device may reside on any port or change physical ports and still be visible because the switch is seeking a WWN.

Table 12. Guidelines for WWN Zoning

Connectivity Type Guidelines

Virtual port mode:
• Include all Storage Center virtual WWNs in a single zone.

• Include all Storage Center physical WWNs in a single zone.

• Create server zones that contain a single initiator and multiple targets (Storage Center virtual WWNs), and which include the server HBA WWN.

Legacy mode:
• Include all Storage Center front-end WWNs or ports in a single zone.

• Create server zones that contain a single initiator and multiple targets, and which include the server HBA WWN.

Fibre Channel replication:

• Include all Storage Center physical WWNs from Storage Center system A and Storage Center system B in a single zone.

• Include all Storage Center physical WWNs of Storage Center system A and the virtual WWNs of Storage Center system B on the particular fabric.

• Include all Storage Center physical WWNs of Storage Center system B and the virtual WWNs of Storage Center system A on the particular fabric.

NOTE: Some ports may not be used or dedicated for replication, however ports that are used must be in these zones.

Connect the Front End for a Storage Center

Use Fibre Channel or iSCSI to connect servers to the storage controller. Choose a front-end connectivity mode (virtual port mode or legacy mode), and then connect the servers in a way that meets the requirements of the chosen connectivity mode.

Connect Fibre Channel Servers

Choose the Fibre Channel connectivity option that best suits the front‐end redundancy requirements and network infrastructure.

Virtual Port Mode with Dual-Fabric Fibre Channel

Using two fabrics with virtual port mode prevents an FC switch failure or storage controller module failure from causing an outage.

Prerequisites

• Each storage controller module must have two available FC front-end ports for each FC fabric.

• The requirements for virtual port mode must be met.

About this task
In this configuration, two fault domains are spread across both storage controller modules. Each storage controller module is connected to both fabrics by two FC connections. If a single port fails, its virtual port fails over to another port in the same fault domain. If a storage controller module fails, the virtual ports on the failed storage controller module move to ports on the other storage controller module in the same fault domain.


Steps

1. Connect each server to both FC fabrics.

2. Connect fault domain 1 (shown in orange) to fabric 1.

• Storage controller module 1: port 1 → fabric 1

• Storage controller module 1: port 2 → fabric 1

• Storage controller module 2: port 1 → fabric 1

• Storage controller module 2: port 2 → fabric 1

3. Connect fault domain 2 (shown in blue) to fabric 2.

• Storage controller module 1: port 3 → fabric 2

• Storage controller module 1: port 4 → fabric 2

• Storage controller module 2: port 3 → fabric 2

• Storage controller module 2: port 4 → fabric 2

4. Configure FC zoning to meet the virtual port mode zoning requirements.

Figure 29. Dual-Fabric FC in Virtual Port Mode

1. Server 1 2. Server 2

3. Fabric 1 (FC switch 1) 4. Fabric 2 (FC switch 2)

5. Storage controller module 1 6. Storage controller module 2

Related Links

Fibre Channel Zoning

Virtual Port Mode with Single-Fabric Fibre Channel

Use virtual port mode with one fabric to protect against port or storage controller module failure.

Prerequisites

• Each storage controller module must have two available FC front-end ports.
• The requirements for virtual port mode must be met.

About this task
In this configuration, there is one fault domain because there is a single FC fabric. Each storage controller module is connected to the fabric by at least two FC connections to provide port redundancy. If a single port fails, its virtual port fails over to another port. If a storage controller module fails, the virtual ports on the failed storage controller module move to ports on the other storage controller module. This configuration is vulnerable to switch failure.

Steps

1. Connect each server to the FC fabric.

2. Connect fault domain 1 (shown in orange) to fabric 1.

• Storage controller module 1: port 1 → fabric 1

• Storage controller module 1: port 2 → fabric 1

• Storage controller module 2: port 1 → fabric 1

• Storage controller module 2: port 2 → fabric 1

3. Configure FC zoning to meet the virtual port mode zoning requirements.

Figure 30. Single-Fabric FC in Virtual Port Mode

1. Server 1 2. Server 2

3. Fabric 1 (FC switch) 4. Storage controller module 1

5. Storage controller module 2

Related Links

Fibre Channel Zoning

Legacy Mode with Dual-Fabric Fibre Channel

Using two fabrics with legacy mode prevents an FC switch failure or storage controller module failure from causing an outage.

Prerequisites
Each storage controller module must have two available FC front-end ports for each FC fabric.

About this task
In this configuration, four fault domains are spread across both storage controller modules. Each fault domain contains sets of primary and reserve paths (P1-R1, P2-R2, P3-R3, and P4-R4). To provide redundancy, the primary port and the corresponding reserve port in a fault domain must connect to the same fabric. When MPIO is configured on the FC servers, the primary paths provide redundancy for a single port or cable failure. The reserved paths provide redundancy for a single storage controller module failure.

Steps

1. Connect each FC server to both FC fabrics.

2. Connect fault domain 1 (shown in orange) to fabric 1.

• Primary port P1: Storage controller module 1: port 1 → fabric 1

• Reserved port R1: Storage controller module 2: port 1 → fabric 1

3. Connect fault domain 2 (shown in blue) to fabric 1.

• Primary port P2: Storage controller module 2: port 2 → fabric 1

• Reserved port R2: Storage controller module 1: port 2 → fabric 1

4. Connect fault domain 3 (shown in green) to fabric 2.

• Primary port P3: Storage controller module 1: port 3 → fabric 2

• Reserved port R3: Storage controller module 2: port 3 → fabric 2

5. Connect fault domain 4 (shown in purple) to fabric 2.

• Primary port P4: Storage controller module 2: port 4 → fabric 2

• Reserved port R4: Storage controller module 1: port 4 → fabric 2

6. Configure FC zoning to meet the legacy mode zoning requirements.

Figure 31. Dual-Fabric FC in Legacy Mode

1. Server 1 2. Server 2

3. Fabric 1 (FC switch 1) 4. Fabric 2 (FC switch 2)

5. Storage controller module 1 6. Storage controller module 2

Related Links

Fibre Channel Zoning

Legacy Mode with Single-Fabric Fibre Channel

Use legacy mode with one fabric to prevent a storage controller module failure from causing an outage.

Prerequisites
Each storage controller module must have two available FC front-end ports.


About this task
In this configuration, two fault domains are spread across both storage controller modules. Each fault domain contains sets of primary and reserve paths (P1-R1 and P2-R2). The reserved paths provide redundancy for a single storage controller module failure. This configuration is vulnerable to switch failure.

Steps

1. Connect each FC server to the FC fabric.

2. Connect fault domain 1 (shown in orange) to fabric 1.

• Primary port P1: Storage controller module 1: port 1 → fabric 1

• Reserved port R1: Storage controller module 2: port 1 → fabric 1

3. Connect fault domain 2 (shown in blue) to fabric 1.

• Primary port P2: Storage controller module 1: port 2 → fabric 1

• Reserved port R2: Storage controller module 2: port 2 → fabric 1

4. Configure FC zoning to meet the legacy mode zoning requirements.

Figure 32. Single-Fabric FC in Legacy Mode

1. Server 1 2. Server 2

3. Fabric 1 (FC switch) 4. Storage controller module 1

5. Storage controller module 2

Related Links

Fibre Channel Zoning


Connect iSCSI Servers

Choose the iSCSI connectivity option that best suits the front‐end redundancy requirements and network infrastructure.

Virtual Port Mode with Dual-Network iSCSI

Using two networks with virtual port mode prevents an Ethernet switch failure or storage controller module failure from causing an outage.

Prerequisites

• Each storage controller module must have two available iSCSI front-end ports.

• The requirements for virtual port mode must be met.

About this task
In this configuration, two fault domains are spread across both storage controller modules. Each storage controller module is connected to both networks by two iSCSI connections. If a storage controller module fails, the virtual ports on the failed storage controller module move to ports on the other storage controller module.

Steps

1. Connect each server to both iSCSI networks.

2. Connect fault domain 1 (shown in orange) to iSCSI network 1.

• Storage controller module 1: port 1 → iSCSI network 1

• Storage controller module 2: port 1 → iSCSI network 1

3. Connect fault domain 2 (shown in blue) to iSCSI network 2.

• Storage controller module 1: port 2 → iSCSI network 2

• Storage controller module 2: port 2 → iSCSI network 2

Example

Figure 33. Dual-Network iSCSI in Virtual Port Mode

1. Server 1 2. Server 2

3. iSCSI network 1 (Ethernet switch 1) 4. iSCSI network 2 (Ethernet switch 2)


5. Storage controller module 1 6. Storage controller module 2

Virtual Port Mode with Single-Network iSCSI

Using a single network with virtual port mode prevents an iSCSI port failure or storage controller module failure from causing an outage.

Prerequisites

• Each storage controller module must have two available iSCSI front-end ports.

• The requirements for virtual port mode must be met.

About this task
In this configuration, one fault domain is spread across both storage controller modules. Each storage controller module is connected to the network by two iSCSI connections to provide port redundancy. If a single port fails, its virtual port fails over to another port. If a storage controller module fails, the virtual ports on the failed controller move to ports on the other storage controller module. This configuration is vulnerable to switch failure.

Steps

1. Connect each server to the iSCSI network.

2. Connect fault domain 1 (shown in orange) to iSCSI network 1.

• Storage controller module 1: port 1 → iSCSI network 1

• Storage controller module 1: port 2 → iSCSI network 1

• Storage controller module 2: port 1 → iSCSI network 1

• Storage controller module 2: port 2 → iSCSI network 1

Example

Figure 34. Single-Network iSCSI in Virtual Port Mode

1. Server 1 2. Server 2

3. iSCSI network 1 (Ethernet switch) 4. Storage controller module 1

5. Storage controller module 2


Legacy Mode with Dual-Network iSCSI

Using two networks with legacy mode prevents an Ethernet switch failure or storage controller module failure from causing an outage.

Prerequisites
Each storage controller module must have two available iSCSI front-end ports.

About this task
In this configuration, two fault domains are spread across both storage controller modules. Each storage controller module is connected to both networks by two iSCSI connections. In addition, each fault domain contains sets of primary and reserve paths (P1-R1 and P2-R2). To provide redundancy, the primary port and the corresponding reserve port in a fault domain must connect to the same network. When MPIO is configured on the iSCSI servers, the primary paths provide redundancy for a single port or cable failure. The reserved paths provide redundancy for a single storage controller module failure.

Steps

1. Connect each server to both iSCSI networks.

2. Connect fault domain 1 (shown in orange) to iSCSI network 1.

• Primary port P1: Storage controller module 1: port 1 → iSCSI network 1

• Reserved port R1: Storage controller module 2: port 1 → iSCSI network 1

3. Connect fault domain 2 (shown in blue) to iSCSI network 2.

• Primary port P2: Storage controller module 2: port 2 → iSCSI network 2

• Reserved port R2: Storage controller module 1: port 2 → iSCSI network 2

Example

Figure 35. Dual-Network iSCSI in Legacy Mode

1. Server 1 2. Server 2

3. iSCSI network 1 (Ethernet switch 1) 4. iSCSI network 2 (Ethernet switch 2)

5. Storage controller module 1 6. Storage controller module 2


Legacy Mode with Single-Network iSCSI

Using a single network with legacy mode prevents a storage controller module failure from causing an outage.

Prerequisites
Each storage controller module must have two available iSCSI front-end ports.

About this task
In this configuration, two fault domains are spread across both storage controller modules. Each fault domain contains sets of primary and reserve paths (P1-R1 and P2-R2). The reserved paths provide redundancy for a single storage controller module failure. This configuration is vulnerable to switch failure.

Steps

1. Connect each server to the iSCSI network.

2. Connect fault domain 1 (shown in orange) to iSCSI network 1.

• Primary port P1: Storage controller module 1: port 1 → switch 1

• Reserved port R1: Storage controller module 2: port 1 → switch 1

3. Connect fault domain 2 (shown in blue) to iSCSI network 1.

• Primary port P2: Storage controller module 1: port 2 → switch 1

• Reserved port R2: Storage controller module 2: port 2 → switch 1

Example

Figure 36. Single-Network iSCSI in Legacy Mode

1. Server 1 2. Server 2

3. iSCSI network 1 (Ethernet switch) 4. Storage controller module 1

5. Storage controller module 2


Label the Front-End Cables

Label the front-end cables to indicate the storage controller module and port to which they are connected.

Prerequisites
Locate the pre-made front-end cable labels that shipped with the storage controller.

About this task
Apply cable labels to both ends of each Fibre Channel or Ethernet cable that connects a storage controller module to a front-end fabric or network.

Steps

1. Starting with the top edge of the label, attach the label to the cable near the connector.

Figure 37. Attach Cable Label

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

3. Apply a matching label to the other end of the cable.


5 Set up the Storage Center Software

Connect the storage controller modules to the management network, configure the storage controller modules, and configure the Storage Center software.

Connect the MGMT port on each storage controller module to the Ethernet management network. Then, connect to the serial port on each storage controller module and configure the storage controller modules with prejoin information. After the storage controller modules are configured, launch the Storage Center Startup Wizard and configure the Storage Center software.

Prerequisites
Before the Storage Center software can be configured, the hardware must be connected, and you must have the required materials and documents.

Hardware Configuration

All hardware must be installed and cabled before beginning the software setup process.

If server connectivity is through Fibre Channel (FC), the FC switches must be configured and zoned before the storage controller modules are configured.

Required Materials

Serial and network connections to the storage controller are required during different stages of configuration.

Make sure that you have the items listed in the following table.

Table 13. Required Materials for Storage Controller Configuration

Required Item Purpose

RS-232/3.5 mm mini jack serial cable and computer

Used to run commands and view console messages during storage controller module configuration.

Computer connected to the same network as the Storage Center

Used to connect to the Storage Center System Manager (through a web browser) to complete the system configuration.

Storage Center license file: Used to activate purchased features.

License files use the naming convention snxxx_35_date.lic, where:

• snxxx is the serial number of the top storage controller module.

• 35 indicates that the system runs software higher than version 3.5.

• date shows the license generation date in YYMMDD format.

• .lic is the file extension.
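
The naming convention above can also be checked programmatically before the file is uploaded in the Startup Wizard. The following Python sketch parses a license file name; the example file name is hypothetical.

# Sketch: parse a license file name of the form snxxx_35_date.lic.
import re
from datetime import datetime

LICENSE_NAME = re.compile(r"^sn(?P<serial>\d+)_35_(?P<date>\d{6})\.lic$")

def parse_license_name(filename):
    """Return (serial number, generation date) or raise ValueError."""
    match = LICENSE_NAME.match(filename)
    if not match:
        raise ValueError(f"{filename} does not follow the snxxx_35_date.lic convention")
    serial = match.group("serial")
    generated = datetime.strptime(match.group("date"), "%y%m%d").date()
    return serial, generated

print(parse_license_name("sn10001_35_140815.lic"))  # ('10001', datetime.date(2014, 8, 15))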


Required Documents

A Storage Architect or Business Partner designed the components of Storage Center for the specific site. Pre-installation documentation contains detailed information from the design that is used during the setup process.

Pre-installation documents include:

• List of hardware needed to support storage requirements

• Hardware Serial Number (HSN) used to configure the storage controller modules

• Network information, such as IP addresses, subnet masks, and gateways

• Optional connectivity diagrams to illustrate cabling between the controllers, enclosures, switches, and servers

Refer to these documents for information about site-specific settings used during Storage Center configuration.

Required Software Versions

Storage Center hardware has minimum software version requirements.

Table 14. Minimum Software Version Requirements

Hardware Minimum Software Version

SC4020 storage controller: 6.5.2

SC200/SC220 expansion enclosures: 6.5.2

Connect the Ethernet Management Interface

The Ethernet management (MGMT) interface of each storage controller module must be connected to a management network. The Ethernet management port (eth0) provides access to the Storage Center System Manager software and is used to send emails, alerts, SNMP traps, and Phone Home data. The Ethernet management (MGMT) interface also provides access to the baseboard management controller (BMC) software.

Prerequisites
Locate the Ethernet management cables that shipped with the SC4020 storage controller.

Steps

1. Connect the Ethernet management port on storage controller module 1 to the Ethernet switch or VLAN.

2. Connect the Ethernet management port on storage controller module 2 to the Ethernet switch or VLAN.


Figure 38. Ethernet Management Interfaces

1. Ethernet switch 2. Storage controller module 1

3. Storage controller module 2

Label the Ethernet Management Cables

Label the Ethernet management cables that connect the storage controller modules to an Ethernet switch or VLAN.

Prerequisites
Locate the pre-made Ethernet management cable labels that shipped with the SC4020 storage controller.

About this task
Apply cable labels to both ends of each Ethernet management cable.

Steps

1. Starting with the top edge of the label, attach the label to the cable near the connector.


Figure 39. Attach Cable Label

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

3. Apply a matching label to the other end of the cable.

Turn on the Storage Controller

Connect the power cables and turn on the storage controller.

1. Ensure that the power switches are in the OFF position before connecting the power cables.

2. Connect the power cables to the power supply units (PSUs) in the storage controller chassis.

Figure 40. Connect the Power Cables

3. Plug the other end of the power cables into a grounded electrical outlet or a separate power source such as an uninterruptible power supply (UPS) or a power distribution unit (PDU).

4. Secure the power cables firmly to the bracket using the strap provided.


Figure 41. Secure the Power Cables

5. Simultaneously press both power switches on the rear of the storage system enclosure to turn on the storage controller.

Figure 42. Turn on the Storage Controller

NOTE: When the SC4020 storage controller is powered on, there is a one-minute delay while the storage controller prepares to boot. During this time, the LEDs on the storage controller modules are the only indication that the storage controller is powered on. After the one-minute delay, the fans and LEDs turn on as an indication that the storage controller is starting up.

Configure the Storage Controller Modules

After the SC4020 is fully powered on, connect to the storage controller modules using serial connections and set the initial configurations.

Establish a Serial Connection to a Storage Controller Module

Create a serial connection from a computer to a storage controller module.

1. Use the supplied serial cable (RS-232 to 3.5 mm mini connector) to connect a computer to the storage controller module. To connect from a USB port on a computer, use a USB to RS-232 converter.

2. Open a terminal emulator or a command-line interface on the computer.


3. Configure the connection as shown in the following table.

Table 15. Serial Connection Settings

Setting Value

Emulation VT220

Column Mode 132

Line Wrapping Off

Connection Serial Port

Connection Type Direct

Baud Rate 115,200

Parity None

Data Bits 8

Stop Bits 1

Flow Control XON/XOFF

NOTE: To facilitate troubleshooting, enable logging for the session.
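
Most deployments use a terminal emulator for this connection (the VT220 emulation and 132-column settings in Table 15 are terminal settings), but the serial port parameters can also be applied programmatically. The following Python sketch assumes the third-party pyserial package (pip install pyserial) and an example device name for a USB-to-RS-232 converter.

# Sketch: open the serial console with the port settings from Table 15.
import serial  # third-party pyserial package

port = serial.Serial(
    "/dev/ttyUSB0",                    # example device name; use a COM port on Windows
    baudrate=115200,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    xonxoff=True,                      # XON/XOFF software flow control
    timeout=1,
)
port.write(b"\r\n")                    # press Enter to initiate the connection
print(port.read(256).decode(errors="replace"))
port.close()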

Related Links

Troubleshooting the Serial Connection to a Storage Controller Module

Configure the Top Storage Controller Module

Configure the top storage controller module (storage controller module 1) with the information provided in the pre-installation documents.

Prerequisites
Create a serial connection from a computer to the storage controller module, as described in Establish a Serial Connection to a Storage Controller Module.

About this task
Configure the HSN of the storage controller module, the IP addresses for the Ethernet management interface (eth0) and baseboard management controller (BMC), one or more DNS servers, and the domain name.

Figure 43. Storage Controller Modules

1. Top storage controller module (storage controller module 1)

2. Bottom storage controller module (storage controller module 2)


Steps

1. Press the Enter key several times to initiate the connection. The terminal displays the prompt <SafeMode> sn0> to indicate that the storage controller module is in Safe Mode.

2. Reseat the battery in the storage controller module before continuing with the next step.

a. Press the release tab on the battery and slide it halfway out of the storage controller module.

Figure 44. Reseat the Storage Controller Module Battery

1. Battery 2. Release tab

3. Storage controller module

b. Wait approximately two seconds.

c. Slide the battery back into the storage controller module until the release tab clicks into place.

3. Run the following command to enter the shellaccess developer mode.

shellaccess developer

4. Run the following command to display the prejoin configuration information that must be specified.

platform prejoin show

Example:

<SafeMode> sn0> platform prejoin show
****************************************************************
Prejoin configuration:

System (License) Serial Number: 0

Item            Value
------------    --------------------------------
Canister        0
HSN             0
eth0 IPv4       0.0.0.0/32
eth0 IPv6       ::/128
eth1 IPv4       169.254.1.101/30
eth1 IPv6       ::/128
gateway IPv4    0.0.0.0
gateway IPv6    ::
Domain
DNS Server 0    0.0.0.0
DNS Server 1    0.0.0.0
****************************************************************

HSN not configured.
eth0 IPv4 address not configured.

When all issues identified above are corrected, type: '-sm -go'
to continue normal processing.
****************************************************************

5. Configure storage controller module 1 with the information required to prejoin the storage controller modules.

a. Run the following command to set the HSN using the lower serial number provided in the pre-installation documents.

platform init hsn set <lower HSN>

Example:

platform init hsn set 10001

b. Run the following command to specify an IP address and subnet mask for the management interface (eth0).

ifconfig inet eth0 <IP address>/<prefix length>

Example:

ifconfig inet eth0 198.51.100.99/24

c. Run the following command to specify the gateway address.

ifconfig inet gateway <gateway address>

Example:

ifconfig inet gateway 198.51.100.1

d. Run the following command to specify one or more DNS servers.

net dns init <DNS server 1> <DNS server 2>

Example:

net dns init 203.50.113.55 203.50.113.90

e. Run the following command to specify the domain name.

ifconfig domainname <domain name>

Example:

ifconfig domainname corp.com

f. Run the following command to verify that the prejoin configuration information is accurate and complete.

platform prejoin show

6. To prevent unauthorized access to the BMC, make sure that the BMC IP address is set to a private non-routable IP address.

a. Run the following command to view the BMC IP address.

platform bmc show

b. If necessary, run the following commands to set the BMC IP address, netmask, and gateway.

platform bmc set ip <IP address>
platform bmc set netmask <netmask>
platform bmc set gateway <gateway address>


Example:

platform bmc set ip 192.168.1.1
platform bmc set netmask 255.255.255.0
platform bmc set gateway 192.168.1.0

7. Run the following command to exit safe mode and boot up the storage controller module.

-sm -go

CAUTION: Wait for the top storage controller module to finish booting before configuring the bottom storage controller module. If you configure the bottom storage controller module before the top storage controller module has finished booting, the bottom storage controller module may not configure correctly.

NOTE: The storage controller module may take several minutes to completely boot up — this is normal.

8. Wait for the storage controller module battery status indicator to change to steady green, which means the battery is ready.

Figure 45. Battery Indicators

1. Battery status indicator 2. Battery fault indicator

If the battery status indicator does not change to steady green, contact Dell Technical Support Services for help.
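
The prejoin values come from the pre-installation documents. As a planning aid only, the following Python sketch formats the command sequence shown in the steps above from those values; the commands are still entered manually over the serial connection, and the example values are the ones used in this guide.

# Sketch: print the prejoin command sequence for one storage controller module.
# This is a planning aid only; it does not communicate with the storage controller.
def prejoin_commands(hsn, eth0_cidr, gateway=None, dns=None, domain=None):
    """Return the ordered list of prejoin commands for one storage controller module."""
    commands = [
        "shellaccess developer",
        f"platform init hsn set {hsn}",
        f"ifconfig inet eth0 {eth0_cidr}",
    ]
    if gateway:
        commands.append(f"ifconfig inet gateway {gateway}")
    if dns:
        commands.append("net dns init " + " ".join(dns))
    if domain:
        commands.append(f"ifconfig domainname {domain}")
    commands.append("platform prejoin show")
    return commands

# Top module (lower HSN); the bottom module uses only the HSN and eth0 values,
# as described in the following section.
for command in prejoin_commands("10001", "198.51.100.99/24", "198.51.100.1",
                                ["203.50.113.55", "203.50.113.90"], "corp.com"):
    print(command)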

Configure the Bottom Storage Controller Module

Configure the bottom storage controller module (storage controller module 2) with the information provided in the pre-installation documents.

Prerequisites
Create a serial connection from a computer to the storage controller module, as described in Establish a Serial Connection to a Storage Controller Module.

About this task
Configure the HSN of the storage controller module and IP addresses for the Ethernet management interface (eth0) and baseboard management controller (BMC).


Figure 46. Storage Controller Modules

1. Top storage controller module (storage controller module 1)

2. Bottom storage controller module (storage controller module 2)

CAUTION: Wait for the top storage controller module to finish booting before configuring the bottom storage controller module. If you configure the bottom storage controller module before the top storage controller module has finished booting, the bottom storage controller module may not configure correctly.

Steps

1. Press the Enter key several times to initiate the connection. The terminal displays the prompt <SafeMode> sn0> to indicate that the storage controller module is in Safe Mode.

2. Reseat the battery in the storage controller module before continuing with the next step.

a. Press the release tab on the battery and slide it halfway out of the storage controller module.

Figure 47. Reseat the Storage Controller Module Battery

1. Storage controller module 2. Release tab

3. Battery

b. Wait approximately two seconds.

c. Slide the battery back into the storage controller module until the release tab clicks into place.

3. Run the following command to enter the shellaccess developer mode.

shellaccess developer


4. Run the following command to display the prejoin configuration information that must be specified.

platform prejoin show

Example:

<SafeMode> sn0> platform prejoin show
****************************************************************
Prejoin configuration:

System (License) Serial Number: 0

Item            Value
------------    --------------------------------
Canister        1
HSN             0
eth0 IPv4       0.0.0.0/32
eth0 IPv6       ::/128
eth1 IPv4       169.254.1.102/30
eth1 IPv6       ::/128
gateway IPv4    0.0.0.0
gateway IPv6    ::
Domain
DNS Server 0    0.0.0.0
DNS Server 1    0.0.0.0
****************************************************************

HSN not configured.
eth0 IPv4 address not configured.

When all issues identified above are corrected, type: '-sm -go'
to continue normal processing.
****************************************************************

5. Configure storage controller module 2 with the information required to prejoin the storage controller modules.

a. Run the following commands to set the HSN using the higher serial number provided in the pre-installation documents.

platform init hsn set <higher HSN>

Example:

platform init hsn set 10002

b. Run the following command to specify an IP address and subnet mask for the management interface (eth0).

ifconfig inet eth0 <IP address>/<prefix length>

Example:

ifconfig inet eth0 198.51.100.100/24

c. Run the following command to verify that the prejoin configuration information is accurate and complete.

platform prejoin show

6. To prevent unauthorized access to the BMC, make sure that the BMC IP address is set to a private non-routable IP address.

a. Run the following command to view the BMC IP address.

platform bmc show


b. If necessary, run the following commands to set the BMC IP address, netmask, and gateway.

platform bmc set ip <IP address>
platform bmc set netmask <netmask>
platform bmc set gateway <gateway address>

Example:

platform bmc set ip 192.168.1.2
platform bmc set netmask 255.255.255.0
platform bmc set gateway 192.168.1.0

NOTE: Dell recommends setting the BMC IP addresses of the storage controller modules in sequential order. Using the BMC IP address of the top storage controller module, set the BMC IP address of the bottom storage controller module to the next IP address in sequence.

7. Run the following command to exit safe mode and boot up the storage controller module.

-sm -go

NOTE: The storage controller module may take several minutes to completely boot up — this is normal.

8. Wait for the storage controller module battery status indicator to change to steady green, which means the battery is ready.

Figure 48. Battery Indicators

1. Battery status indicator 2. Battery fault indicator

If the battery status indicator does not change to steady green, contact Dell Technical Support Services for help.
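
Step 6 of both procedures requires the BMC address to be private and non-routable. As a quick check before entering the address with the platform bmc set ip command, the following Python sketch uses the standard ipaddress module; the second address is simply a public example.

# Sketch: verify that a planned BMC address falls in a private IPv4 range.
import ipaddress

def is_private_ipv4(address: str) -> bool:
    """Return True if the address is an IPv4 address in a private range."""
    ip = ipaddress.ip_address(address)
    return ip.version == 4 and ip.is_private

print(is_private_ipv4("192.168.1.1"))  # True, example BMC address from this guide
print(is_private_ipv4("8.8.8.8"))      # False, a public address would be rejected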

Launch the Storage Center Startup Wizard

Use a web browser to log in to Storage Center System Manager and launch the Startup Wizard.

Prerequisites
You must have a computer that has connectivity to the Storage Center management network. The computer must have the following software installed:

• One of the following supported web browsers (other browsers and versions may have limited compatibility):

– Microsoft Windows Internet Explorer Versions 7, 8, and 9

– Microsoft Windows Internet Explorer 10 and 11 on the desktop (Internet Explorer 10 and 11 Modern version is not supported)

– Mozilla Firefox on Microsoft Windows

• The 32-bit (x86) Java Runtime Environment must be installed to run Storage Center System Manager in any 32-bit browser.


Steps

1. Use a supported web browser to connect to the eth0 IP address or host name of storage controller module 1.

2. Acknowledge any security messages.

NOTE: The security messages that appear may differ depending on the browser used.

The Storage Center Login page appears.

Figure 49. Storage Center Login Page

3. Enter the default user name and password:

• User: Admin
• Password: mmm

NOTE: The user name and password are case-sensitive.

4. Click Login. The password expired page appears.


Figure 50. Change Admin Password

5. Enter a new password for the Admin user.

a. Enter the old password in the Old Password field.

b. Enter a new password in the New Password and Confirm Password fields.

c. Click Login. The password is changed and you are logged in to the Storage Center System Manager.

6. If prompted, click Yes to acknowledge certificate messages.

The Startup Wizard appears and displays the Software End User License Agreement (EULA).

Complete the Startup Wizard

Configure the Storage Center using the Startup Wizard.

License Agreement Page

Use the License Agreement page to read and accept the Software End User License Agreement (EULA).

1. Enter information for the Approving Customer Name and Approving Customer Title fields. The customer information and the approval date are recorded and sent to Dell Technical Support Services using Phone Home.

NOTE: The End User License Agreement is also displayed the first time any new user logs on to Storage Center. When displayed for a new user, the EULA does not require a customer name or title.

2. After reading the EULA, click Accept to continue with setup. The Load License page appears.


Load License Page

Use the Load License page to upload the Storage Center license file. The license file makes settings available depending on what features were purchased.

1. Browse to the location of the license file.

2. Verify that the serial number of the license file matches the serial number of the storage controller module. Compare the name of the license file to the title bar of the Startup Wizard.

License files use the naming convention snxxx_35_date.lic, where:

• snxxx is the serial number of the top storage controller module.

• 35 indicates that the system runs software higher than version 3.5.

• date shows the license generation date in YYMMDD format.

• .lic is the file extension.

Figure 51. Load License Page

3. Select the license file, then click Load License. The Startup Wizard displays a message when the license is successfully loaded.

• If the license loads successfully, the message The license submission has completed successfully appears.

• If the license does not load successfully, the message Error: The license file is not valid appears.

4. Click Continue. The Create Disk Folder page appears.

Related Links

Troubleshooting Storage Center Licenses


Create Disk Folder Page

Use the Create Disk Folder page to assign disks to a folder in order to create a single pool of storage for volumes.

1. Select the disks to include in the disk folder. The Startup Wizard selects all available disks by default.

Figure 52. Create Disk Folder Page

a. Scroll through the list to verify that all the expected disks and expansion enclosures are listed.

b. (Optional) From the list of disks, select disks to include in the disk folder. By default, all disks are selected.

• To exclude individual disks from the disk folder, clear the corresponding check boxes.

• To include individual disks in the disk folder, click Unselect All and then select the individual disks to include.

• To select all disks, click Select All.

NOTE: Dell recommends selecting all available disks to maximize the benefits of Data Progression.

c. Click Continue. The Startup Wizard displays a prompt to select disks to designate as hot spares.

2. Review the hot spare disks that the Startup Wizard automatically selected.


Figure 53. Disk Folder Hot Spares

A hot spare disk is held in reserve until a disk fails. When a disk fails, the hot spare replaces the failed disk. The hot spare disk must be as large or larger than the largest disk of its type in the disk folder. For redundancy, there must be at least one hot spare for each expansion enclosure. In general, the best practice is to designate one spare disk for every disk class (10K, 7.5K, and so on).

a. (Optional) Change the selection by selecting or clearing disks to be used as hot spares.
b. Click Continue, and if prompted, click OK to confirm. The Startup Wizard displays a summary of the disk folder.

3. (Optional) Modify the default folder name and enter a description in the Notes field.

Figure 54. Disk Folder Attributes

4. (Optional) Click Advanced to configure advanced disk folder options. The wizard displays options for redundancy and datapage size.


Figure 55. Advanced Disk Folder Options

NOTE: The default managed disk folder settings are appropriate for most sites. If you are considering changing the default settings, contact Dell Technical Support Services for advice.

a. Select Prepare Disk Folder for redundant storage.

b. Configure the Tier Redundancy for each tier.

• RAID 10 (each disk is mirrored)

• RAID 5-5 (four data segments / one parity segment per stripe)

• RAID 5-9 (eight data segments / one parity segment per stripe)

• RAID 10 Dual Mirror (data is written simultaneously to three separate disks)

• RAID 6-6 (four data segments / two parity segments per stripe)

• RAID 6-10 (eight data segments / two parity segments per stripe)
c. From the Datapage Size drop-down menu, choose a datapage size.

• 2 MB: (Default) Recommended for most application needs.

• 512 KB: Appropriate for applications with high performance needs (such as certain databases) or environments in which Replays are taken frequently under heavy IO. Selecting this size reduces the amount of space the System Manager can present to servers.

• 4 MB: Appropriate for systems that use a large amount of disk space with infrequent Replays (such as video streaming).

CAUTION: When considering using either the 512 KB or 4 MB datapage settings, contact Dell Technical Support Services for advice on balancing resources and to understand the impact on performance.

d. To configure the disk folder to use RAID 0, select the Prepare for Non-Redundant Storage check box. Non-redundant storage does not protect data in the event of a disk failure. Select this option only for data that is backed up some other way.

e. Click Continue.

5. Click Create Now, and if prompted, click OK to confirm. The Add Controller page appears.


Add Controller Page

If the storage controller modules are configured with the correct pre‐join information, the Storage Center Startup Wizard automatically joins the storage controller modules.

1. Storage Center joins the bottom storage controller module to the top storage controller module. It then restarts the bottom storage controller module.

Figure 56. Waiting for the Bottom Storage Controller Module to Restart

2. Storage Center loads configuration information onto the bottom storage controller module.

Figure 57. Loading Configuration Information on the Bottom Storage Controller Module

3. Storage Center rebalances the local ports on both the top and bottom storage controller modules.


Figure 58. Rebalancing Local Ports on the Top and Bottom Storage Controller Module

When the join process completes, the Time Settings page appears.

If the join process fails to complete, a prejoin error page appears.

Figure 59. Joining Process Failed

To manually add the bottom storage controller module, click Manually Add Controller.

Related Links

Joining Storage Controller Modules Manually

Joining Storage Controller Modules Manually

If the storage controller modules fail to join automatically, use the Add Controller page to join the bottom storage controller module to the top storage controller module.

1. If the IPv6 address settings for the top storage controller module are not configured, a page appears on which you can configure the IPv6 addresses.


Figure 60. Top Storage Controller Module IPv6 Address Values

If the management network uses IPv6, configure the following IPv6 addresses:

a. Enter the IPv6 IP address in the IPv6 Address field.

b. Enter the IPv6 prefix length in IPv6 Prefix Length field.

c. Enter the IPv6 gateway address in the IPv6 Gateway field.

d. Click Continue.

If the management network does not use IPv6, click Skip IPv6 Configuration.

2. Click Add Controller. The wizard prompts you for information about the bottom storage controller module.

NOTE: Information displayed in the following figure is for illustration only. The values displayed are unique to each storage controller module.


Figure 61. Add Controller Page

3. In the Ether 0 (MGMT) Interface area, enter the IPv4 Address and IPv6 Address of the bottom storage controller module.

• The Storage Center Setup Wizard uses the HSN value in the license file as the Controller ID for the bottom storage controller module. The HSN value cannot be changed.

• The Storage Center Setup Wizard uses a predefined IP address and subnet mask for the Ether 1 (IPC) Interface. The IP address values of the Ether 1 (IPC) Interface cannot be changed.

NOTE: In the SC4020 storage controller, the IPC connection between the storage controller modules is internal. The IPC connection does not require external cabling.

4. Click Continue.

The Startup Wizard displays a message that data and configuration information on the bottom storage controller module will be lost and asks for confirmation.


Figure 62. Add Storage Controller Module Confirmation

5. Click Join Now. Wait for the process to complete and for the storage controller module to reboot. The storage controller module takes several minutes to reboot. When complete, the Time Settings page appears.

Time Settings Page

Use the Time Settings page to set the system time for the Storage Center.

1. Set the time zone.

a. From the Region drop-down menu, select the geographical region in which Storage Center is located.

b. From the Time Zone drop-down menu, select the time zone in which Storage Center is located.


Figure 63. Time Settings Page

NOTE: For locations in the United States, either select US as the region and select a time zone name, or select America as the region and select a city within the same time zone.

2. Set the system time using one of the following methods.

• To configure time manually, select Configure Time Manually, then enter the date and time.

• To configure time using a Network Time Protocol (NTP) server, select Use NTP Time Server, then enter the fully qualified domain name (FQDN) or IP address of an NTP server.

NOTE: Accurate time synchronization is critical for replications. Dell recommends using NTP to set the system time. For more information, see: http://support.ntp.org/bin/view/Support/WebHome. An optional reachability check is shown after this procedure.

3. When the system time has been set, click Continue. The System Setup page appears.
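Before entering an NTP server in the wizard, you can optionally confirm that it is reachable from a host on the management network. The following check is a sketch only and assumes a Linux or UNIX host with the ntpdate utility installed; the server name is a placeholder:

ntpdate -q ntp.example.com

The -q option queries the server without setting the local clock; a reply that reports an offset confirms that the server is reachable.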

System Setup Page

Use the System Setup page to specify the system name and management IP address for Storage Center.

About this task

Refer to the pre-installation documents to find the Storage Center management IP address.

Steps

1. In the System Name field, enter a name for the Storage Center. The system name is typically the serial number of storage controller module 1.


Figure 64. System Setup Page

2. Enter the Storage Center management IPv4 address in the Management IP Address (IPv4) field.

3. If the management network uses IPv6, enter the Storage Center management IPv6 address in the Management IP Address (IPv6) field.

4. Click Continue. The wizard prompts you to enable or disable the read and write cache.

Figure 65. Enable Read and Write Cache

5. Select or clear the check boxes to enable or disable read and write cache.

NOTE: Disable cache only if there will never be volumes that use cache. If cache is left enabled on this page, it can be disabled later for individual volumes. For more information, see the Storage Center System Manager Administrator’s Guide.


6. Click Continue. The Configure SMTP page appears.

Configure SMTP Page

Use the Configure SMTP page to configure the SMTP mail server and the sender email address.

About this task

The SMTP server enables alert message emails to be sent to Storage Center users.

Steps

1. In the SMTP Mail Server field, enter the IP address or fully qualified domain name of the SMTP email server. Click Test server to verify connectivity to the SMTP server.

2. In the Sender E-mail Address field, enter the email address of the sender. Most SMTP servers require this address and it is used as the MAIL FROM address of email messages.

Figure 66. Configure SMTP Page

3. Click Advanced to configure additional SMTP settings for sites that use an advanced SMTP configuration. The page for advanced options appears.


Figure 67. Enable Read and Write Cache

a. Verify that the Enable SMTP E-mail check box is selected.

b. In the SMTP Mail Server field, verify the IP address or fully qualified domain name of the SMTP mail server. Modify this field if necessary.

c. In the Backup SMTP Mail Server field, enter the IP address or fully qualified domain name of the backup SMTP mail server.

d. Click Test server to test the connections.

e. In the Sender E-mail Address (MAIL FROM) field, verify the email address of the sender. Modify this field if necessary.

f. In the Common Subject Line field, enter a common subject line for all emails from Storage Center.

g. (Optional) To configure the use of extended hello for mail system compatibility, select Send Extended Hello (EHLO).

Instead of beginning the session with the HELO command, the sending host issues the EHLO command. If the receiving host accepts this command, it replies with a list of the SMTP extensions it supports, and the sending host then knows which SMTP extensions it can use to communicate with the receiving host. Implementing Extended SMTP (ESMTP) requires no modification of the SMTP configuration for either the client or the mail server. An illustration of the exchange follows this procedure.

h. (Optional) If the email system requires the use of an authorized login, select Use Authorized Login (AUTH LOGIN) and enter the Login ID and Password.

4. Click Continue. The Update Setup page appears.
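The following is an illustration only of the extended hello exchange described above; the host names are placeholders and the extension list depends on the mail server. It can be reproduced manually from any host with a telnet client:

telnet smtp.example.com 25
220 smtp.example.com ESMTP ready
EHLO storagecenter.example.com
250-smtp.example.com Hello
250-SIZE 36700160
250-AUTH LOGIN PLAIN
250 STARTTLS
QUIT
221 Bye

The 250- lines list the SMTP extensions that the receiving host supports. If AUTH LOGIN appears in the list, the Use Authorized Login (AUTH LOGIN) option can be used.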

Update Setup Page

Use the Update Setup page to configure how Storage Center handles software updates.

1. Select a Storage Center update option from the drop-down menu.


Figure 68. Update Setup Page

The Storage Center update options are:

• Do not automatically check for software updates.: Select this option to disable automatic checking for updates. When this option is selected, manually check for updates by selecting Storage Management → System → Update → Check for Update. For more information, see the Storage Center System Manager Administrator’s Guide.

• Notify me of a software update but do not download automatically.: Select this option to automatically check for updates and receive notification when an update is available. Updates are not downloaded until you explicitly download the update. This option is the recommended Storage Center update option.

• Download software updates automatically and notify me.: Select this option to automatically download updates and receive notification when the download is complete.

• Never check for software updates (Phone Home not available).: Select this option to prevent the system from ever checking for updates. This option is for secure sites at which Phone Home is not available.

2. Click Continue. The User Setup page appears.

To change the update options later, select Storage Management → System → Update → Configure Automatic Updates from the Storage Center System Manager. For more information, see the Storage Center System Manager Administrator’s Guide.

User Setup Page

Use the User Setup page to set the session timeout and email addresses for the Admin account.

1. From the Session Timeout drop-down list, select the session timeout.

2. Enter email addresses in the Email, Email 2, and Email 3 fields. The Storage Center sends system alerts to the specified email addresses.

3. Click Send test e-mail.

NOTE: Make sure that an administrator receives the test emails.


Figure 69. User Setup Page

4. Click Continue. The Configure IO Cards page appears.

Configure IO Cards Page

Use the Configure IO Cards page to configure network settings for iSCSI ports.

About this task

For an SC4020 with front-end iSCSI ports, use the Configure IO Cards page to configure network settings for the iSCSI ports that are used for front-end connectivity to servers and replication to another Storage Center.

For an SC4020 with front-end Fibre Channel ports, use the Configure IO Cards page to configure network settings for the embedded iSCSI ports if you plan on performing iSCSI replication to another Storage Center. If you do not plan on performing iSCSI replication on an SC4020 with front-end Fibre Channel ports, click Skip iSCSI IO Card Configuration.

CAUTION: The embedded iSCSI ports are only used for replication to another Storage Center. The embedded iSCSI ports cannot be used for front‐end connectivity to servers.

Steps

1. Enter the IP address for each iSCSI port in the IP Address field.


Figure 70. Configure IO Cards Page

Uninitialized iSCSI IO ports have an IP Address of 0.0.0.0.

2. Enter the subnet mask for each iSCSI port in the Subnet Mask field.

3. Enter the gateway address for each iSCSI port in the Gateway field.

4. When you are finished, click Continue.

• If no messages are generated, iSCSI IO port configuration is saved and the Configure Ports page appears.

• If there are issues, a message appears.

Figure 71. Card Not Initialized Dialog Box

– Click No to go back and correct the IP Address, Subnet Mask, and/or Gateway address for cards that generated an error.

– Click Yes to ignore the message and continue. The Configure Ports page appears.

Configure Ports Page

Use the Configure Ports page to configure the local ports for the Storage Center.

Select the connectivity modes and configure the front-end and back-end ports.


Select the Front-End Connectivity Modes

Select the front-end connectivity mode for the FC and iSCSI transport types.

1. Select the operational mode for the FC and iSCSI transport types.

For an SC4020 with front-end iSCSI ports, the Startup Wizard only displays iSCSI operational modes.

CAUTION: The embedded iSCSI ports are only used for replication to another Storage Center. The embedded iSCSI ports cannot be used for front‐end connectivity to servers.

Figure 72. Configure Ports Page

2. Click Continue to configure the selected operational modes. The wizard informs you that the selected operational modes have been configured.


Figure 73. Selected Operational Modes Configured

3. Click Continue to begin port initialization in the selected operational modes.

The wizard verifies the configuration and converts selected transports to the selected mode. The wizard also displays progress pages and presents a confirmation page when operational modes for transport types have been configured and initialized.

Figure 74. Port Configuration Generated

4. Click Continue. The page that appears depends on whether there are any iSCSI transports in virtual port mode.

5. If there are iSCSI transports in virtual port mode, a page appears asking for IP address attributes for the control port of the new iSCSI fault domain.


Figure 75. Configure the Control Port for the iSCSI Fault Domain

a. Enter the IP address, subnet mask, gateway, and port for the control port of the new iSCSI fault domain. Check the pre‐installation documentation for this address.

b. Click Continue. The Startup Wizard generates the new iSCSI fault domain and the initial port configuration. After the initial port configuration has been generated, it is automatically validated.

• If there are issues, an error message page appears.

• If the validation is successful, a confirmation page appears.

If there are iSCSI transports in legacy mode, the Startup Wizard generates the initial port configuration.

After the initial port configuration has been generated, it is automatically validated.

• If there are issues, an error message page appears.

• If the validation is successful, a confirmation page appears.

6. Click Configure Local Ports. The wizard displays a tab for each transport type (FC, iSCSI, and SAS).


Figure 76. Configure Local Ports

a. Configure the Purpose, Fault Domain, and User Alias for all transport types (FC, iSCSI, SAS).

b. Click Assign Now. The wizard informs you that the port configuration has been generated.

7. Click Continue. The Generate SSL Cert page appears.

Related Links

Configure Local Ports

Configure Local Ports

Configure the Purpose, Fault Domain, and User Alias for all transport types.

The following table shows the port purposes for each transport type in virtual port mode.

Table 16. Port Purposes in Virtual Port Mode

Port Purpose Transport Type Description

Unknown FC, iSCSI, and SAS One of the following:

• Port purpose is not yet defined.

• The port is unused.

Front End FC and iSCSI • FC port is connected to servers and is used for the server IO path or the FC port is connected to another Storage Center and used for replication.

• iSCSI port is connected to servers and is used for the server IO path or the iSCSI port is connected to another Storage Center and used for replication.

Back End SAS Port is connected to disk enclosures.

The following table shows the port purposes for each transport type in legacy mode.

Table 17. Port Purposes in Legacy Mode

Port Purpose Transport Type Description

Unknown FC, iSCSI, and SAS One of the following:

• Port purpose is not yet defined.

• The port is unused.

Front-End Primary FC and iSCSI • FC port is connected to servers and is used for the server IO path or the FC port is connected to another Storage Center and used for replication.

• iSCSI port is connected to servers and is used for the server IO path or the iSCSI port is connected to another Storage Center and used for replication.

Front-End Reserved FC and iSCSI • FC port is connected to servers and is used for the server IO path or the FC port is connected to another Storage Center and used for replication.

• iSCSI port is connected to servers and is used for the server IO path or the iSCSI port is connected to another Storage Center and used for replication.

Back End SAS Port is connected to disk enclosures.

Direct Connect FC and iSCSI This port purpose is not used. Do not select it for any ports.

Configure FC Ports — Virtual Port Mode

Use the FC tab to configure the Fibre Channel ports in virtual port mode.

About this task

The Startup Wizard does not display the FC tab for an SC4020 with front-end iSCSI ports.


Steps

1. Click the FC tab.

Figure 77. FC Ports Tab — Virtual Port Mode

2. Create a fault domain for each FC fabric.

a. Click Edit Fault Domains. The wizard displays a list of the currently defined fault domains.

b. Click Create Fault Domain. A dialog box appears.

c. In the Name field, type a name for the fault domain.

d. From the Type drop-down menu, select FC.

e. (Optional) In the Notes field, type a description of the fault domain.

f. Click Continue. The dialog box displays a summary.

g. Click Create Now to create the fault domain.

h. Repeat the previous steps to create additional fault domains as needed.

i. When you are finished creating fault domains, click Return. The FC tab appears.

3. Configure each front-end FC port.

a. Set the Purpose field to Front End.

b. Set the Fault Domain field to the appropriate fault domain that you created.

c. (Optional) Type a descriptive name in the User Alias field.

4. (Optional) Change the preferred physical port for one or more virtual ports.

a. Click Edit Virtual Ports. The wizard displays a list of virtual ports.

b. For each virtual port that you want to modify, select the preferred physical port in the Preferred Physical Port column.

c. When you are finished, click Apply Changes. The FC tab appears.

5. Configure each port that is unused.

a. Set the Purpose field to Unknown.

b. Confirm that the Fault Domain field is set to <none>.

c. (Optional) Type a descriptive name in the User Alias field.


Configure FC Ports — Legacy Mode

Use the FC tab to configure the Fibre Channel ports in legacy mode.

About this task

The Startup Wizard does not display the FC tab for an SC4020 with front-end iSCSI ports.

Steps

1. Click the FC tab.

Figure 78. FC Ports Tab — Legacy Mode

2. Create a fault domain for each pair of redundant FC ports.

a. Click Edit Fault Domains. The wizard displays a list of the currently defined fault domains.

b. Click Create Fault Domain. A dialog box appears.

c. In the Name field, type a name for the fault domain.

d. From the Type drop-down menu, select FC.

e. (Optional) In the Notes field, type a description of the fault domain.

f. Click Continue. The dialog box displays a summary.

g. Click Create Now to create the fault domain.

h. Repeat the previous steps to create additional fault domains as needed.

i. When you are finished creating fault domains, click Return. The FC tab appears.

3. Configure each front-end FC port.

a. Set the Purpose field to Front End Primary or Front End Reserved as appropriate.

b. Set the Fault Domain field to the appropriate fault domain that you created.

c. (Optional) Type a descriptive name in the User Alias field.

4. Configure each port that is unused.

a. Set the Purpose field to Unknown.

b. Confirm that the Fault Domain field is set to <none>.

c. (Optional) Type a descriptive name in the User Alias field.


Configure iSCSI Ports — Virtual Port Mode

Use the iSCSI tab to configure the iSCSI ports in virtual port mode.

About this task

The Startup Wizard only displays the iSCSI and SAS tabs for an SC4020 with front-end iSCSI ports.

CAUTION: The embedded iSCSI ports are only used for replication to another Storage Center. The embedded iSCSI ports cannot be used for front‐end connectivity to servers.

Steps

1. Click the iSCSI tab.

Figure 79. iSCSI Ports Tab — Virtual Ports Mode

2. Create a fault domain for each iSCSI network.

a. Click Edit Fault Domains. The wizard displays a list of the currently defined fault domains.

b. Click Create Fault Domain. A dialog box appears.

c. In the Name field, type a name for the fault domain.

d. From the Type drop-down menu, select iSCSI.

e. (Optional) In the Notes field, type a description of the fault domain.

f. Click Continue. The dialog box displays a summary.

g. Click Create Now.

h. Repeat the previous steps to create additional fault domains as needed.

i. When you are finished creating fault domains, click Return. The iSCSI tab appears.

3. If you are using an iSCSI port, configure the iSCSI port as a front-end port.

a. Set the Purpose field to Front End.

b. Set the Fault Domain field to the appropriate fault domain that you created.

c. (Optional) Type a descriptive name in the User Alias field.

4. (Optional) Change the preferred physical port for one or more virtual ports.

a. Click Edit Virtual Ports. The wizard displays a list of virtual ports.

b. For each virtual port that you want to modify, select the preferred physical port in the Preferred Physical Port column.

c. When you are finished, click Apply Changes. The iSCSI tab appears.


5. If you are not using an iSCSI port, confirm that the unused iSCSI port is not configured.

a. Set the Purpose field to Unknown.

b. Confirm that the Fault Domain field is set to <none>.

c. (Optional) Type a descriptive name in the User Alias field.

Configure iSCSI Ports — Legacy Mode

Use the iSCSI tab to configure the iSCSI ports in legacy mode.

About this task

The Startup Wizard only displays the iSCSI and SAS tabs for an SC4020 with front-end iSCSI ports.

CAUTION: The embedded iSCSI ports are only used for replication to another Storage Center. The embedded iSCSI ports cannot be used for front‐end connectivity to servers.

Steps

1. Click the iSCSI tab.

Figure 80. iSCSI Ports Tab — Legacy Mode

2. Create a fault domain for each pair of redundant iSCSI ports.

a. Click Edit Fault Domains. The wizard displays a list of the currently defined fault domains.

b. Click Create Fault Domain. A dialog box appears.

c. In the Name field, type a name for the fault domain.

d. From the Type drop-down menu, select iSCSI.

e. (Optional) In the Notes field, type a description of the fault domain.

f. Click Continue. The dialog box displays a summary.

g. Click Create Now.

h. Repeat the previous steps to create additional fault domains as needed.

i. When you are finished creating fault domains, click Return. The iSCSI tab appears.

3. If you are using an iSCSI port, configure the iSCSI port as a front-end port.

a. Set the Purpose field to Front End Primary or Front End Reserved as appropriate.

b. Set the Fault Domain field to the appropriate fault domain that you created.

c. (Optional) Type a descriptive name in the User Alias field.

4. If you are not using an iSCSI port, confirm that the unused iSCSI port is not configured.


a. Set the Purpose field to Unknown.

b. Confirm that the Fault Domain field is set to <none>.

c. (Optional) Type a descriptive name in the User Alias field.

Configure SAS Ports

Use the SAS tab to configure the SAS ports.

1. Click the SAS tab.

Figure 81. SAS Ports Tab

2. Set the Purpose field to Back End.

3. (Optional) Type a descriptive name in the User Alias field.

Generate SSL Cert Page

Use the Generate SSL Cert page to generate a new Secure Socket Layer (SSL) certificate or import an existing certificate for the Storage Center. The SSL certificate must match the IP Address or DNS host name of the Storage Center.

The initial certificate shipped with Storage Center may not match the IP address or DNS host name assigned to the Storage Center. If this is the case, when connecting to Storage Center a message is displayed that identifies a mismatch between the certificate and the IP address or DNS host name of the Storage Center. To correct this issue, import or generate an SSL certificate.

CAUTION: Do not click Skip to bypass this page. Clicking Skip can result in connection disruptions to Storage Center System Manager.


Figure 82. Generate SSL Cert Page

Import a Certificate

If an SSL certificate has already been generated, import the certificate.

Prerequisites

The public key file must be in x.509 format.

Steps

1. Click Import. A file browser appears.

Figure 83. Import Certificate

2. Browse to the location of the public key (*.pem) file and select it.


3. Click Next.

4. Browse to the location of the private key file (*.pem).

5. Click Next. A Summary page appears that identifies the key files selected.

6. Click Save to import the certificates.
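If a matching certificate and private key in PEM format are not already available, one way to create a self-signed pair for import is with OpenSSL on a workstation. This is a sketch only, not a Dell-documented procedure; the host name and IP address are placeholders, and the -addext option requires OpenSSL 1.1.1 or later:

openssl req -x509 -newkey rsa:2048 -nodes -days 730 -keyout privatekey.pem -out publickey.pem -subj "/CN=sc4020.example.com" -addext "subjectAltName=DNS:sc4020.example.com,IP:192.168.1.50"

Import publickey.pem as the public key file and privatekey.pem as the private key file. The subjectAltName values must match the DNS host name and management IP address of the Storage Center, or browsers will continue to report a certificate mismatch.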

Create a New Certificate

Generate a new certificate if you do not have a certificate that contains the Storage Center host name or IP address.

1. Click Generate. The wizard prompts you to enter the IP addresses and hostnames for the Storage Center.

Figure 84. Create New SSL Cert

2. In the text field, enter all DNS host names, IPv4 addresses, and IPv6 addresses for the Storage Center and separate the entries by commas.

The wizard prepopulates the field with network information that it knows. However, it is likely that more site-specific host names and addresses must be entered.

CAUTION: The host names and addresses must match the Storage Center or you will be unable to reconnect to the Storage Center.

3. Click Generate Now. A new certificate is generated and the Storage Center System Manager session ends.

To continue, refresh the browser and log on to the Storage Center again.


6 Perform Post-Setup Tasks

Update the Storage Center software, then perform connectivity and failover tests to make sure that the deployment was successful.

Update the Storage Center Software

After the Storage Center is set up, update the Storage Center to the latest version of the software using one of the following methods:

• If the Storage Center can access the Internet, use Phone Home to update the Storage Center software. For more information, see the Storage Center Software Update Guide.

• If the Storage Center cannot access the Internet, use a Storage Center ISO image to update the Storage Center software. For more information, see the Storage Center Maintenance CD Instructions.

Perform a Phone Home

If the Storage Center can access the Internet, complete the Storage Center setup by performing a Phone Home.

Prerequisites

• TCP ports 22 and 443 must be allowed outbound to the Internet.

• If the network requires hosts to use a proxy server to reach the Internet, the Storage Center must be configured to use a Phone Home proxy.

About this task

Phone Home sends a copy of the Storage Center configuration to Dell Technical Support Services to enable them to support the Storage Center.

Steps

1. Select Storage Management → System → Phone Home → Phone Home. The Phone Home wizard starts, listing any previous Phone Home events.


Figure 85. Phone Home Wizard

2. Click Phone Home Now. The Phone Home In Progress dialog box is displayed.

3. Click OK.

4. When the State column lists all items with Success, click Close.

5. If the Storage Center System Manager displays a message about available software updates, choose to install the updates. For more information, see the Storage Center Software Update Guide.

Configure a Phone Home Proxy

If a proxy server is required to reach the Internet, configure Phone Home to use the proxy server before sending data with Phone Home or checking for Storage Center updates.

1. Select Storage Management → System → Phone Home → Configure Phone Home Proxy. The Configure Phone Home Proxy wizard starts.

Figure 86. Configure Phone Home Proxy

2. Select Use Phone Home Proxy Server and enter the following:

• Proxy Server Address: IP address of the proxy server.

• Port: TCP port number of the proxy server.


• Proxy User Name: Username for the proxy server.

• Proxy Password: Password for the proxy server.

• Confirm Password: Password for the proxy server.

3. Click OK.

Check for Storage Center Updates

If the Storage Center System Manager did not display a message about available updates after completing the Storage Center setup, manually check for available Storage Center updates.

1. Select Storage Management → System → Update → Update Status.

2. Click Check Now. As Storage Center checks for updates, the status appears on the Update Status page.

3. If Storage Center updates are available, choose to install the updates. For more information, see the Storage Center Software Update Guide.

Verify Connectivity and Failover

This section describes the general steps needed to verify that Storage Center is set up properly and performs failover correctly.

The process includes connecting a server, copying data to verify connectivity, and shutting down a storage controller module to verify failover and MPIO functionality. For more information, see the Storage Center System Manager Administrator’s Guide.

Create Test Volumes

Connect a server to the Storage Center, create one or more test volumes, and map them to the server to prepare for connectivity and failover testing.

1. Connect a server to the Storage Center using Fibre Channel or iSCSI.

2. (iSCSI only) Configure the iSCSI initiator on the server to target the Storage Center (an example is shown after this procedure).

3. Use a web browser to connect to the Storage Center System Manager.

4. Define the server in System Manager.

a. In the System Tree, select Servers.

b. Click Create Server. The Create Server wizard appears.

c. Select the server HBAs in the table, then click Continue.

d. In the Name field, type a name for the server.

e. From the Operating System drop-down menu, select the operating system that the server is running.

f. Click Continue. The wizard displays a summary page.

g. Click Create Now. The wizard displays a list of additional actions.

h. Click Close.

5. Create a 25GB test volume called TestVol1.

a. In the System Tree, select Volumes.

b. In the Create Server wizard, click Create Volume. The Create Volume wizard appears.

c. Set the volume size to 25GB, then click Continue.

d. Click Continue to apply the default Replay Profile.

e. Type TestVol1 in the Name field, then click Continue. The wizard displays a summary.

f. Click Create Now. The wizard displays a list of additional actions.

g. Click Close.


6. Repeat the previous steps to create a second test volume named TestVol2.

7. Map TestVol1 to storage controller module 1.

a. In the System Tree, expand the Volumes node.

b. Select TestVol1.

c. Click Map Volume to Server. The Map Volume to Server wizard appears.

d. Select the server, then click Continue. The wizard displays a summary.

e. Click Advanced. The wizard displays advanced mapping options.

f. In the Restrict Mapping Paths area, select the Map to controller check box, then select the controller from the drop-down menu.

g. Click Continue. The wizard displays a summary.

h. Click Create Now. The mapping is created and the wizard closes.

8. Repeat the previous steps to map TestVol2 to storage controller module 2.

9. Partition and format the test volumes on the server.
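The following sketch shows what steps 2 and 9 might look like on a Linux server that uses the open-iscsi initiator; it is an example only, and the control port IP address, device name, and mount point are placeholders. On Windows servers, use the iSCSI Initiator control panel and Disk Management instead.

iscsiadm -m discovery -t sendtargets -p 192.168.10.100:3260
iscsiadm -m node --login
parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt/testvol1

The first two commands discover the Storage Center targets through the iSCSI control port and log in to them; the remaining commands partition, format, and mount the block device presented by the mapped volume.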

Test Basic Connectivity

Verify connectivity by copying some data to the test volumes.

Test Storage Controller Module Failover

Test the Storage Center to make sure that a storage controller module failover does not interrupt IO.

1. Create a Test folder on the server and copy at least 2GB of data into it.

2. Restart storage controller module 1 while copying data to verify that the failover event does not interrupt IO.

a. Copy the Test folder to the TestVol1 volume.

b. During the copy process, restart storage controller module 1 (the storage controller module through which TestVol1 is mapped) by selecting it in the System Tree and clicking Shutdown/Restart Controller.

c. Verify that the copy process continues while the storage controller module restarts.

d. Wait several minutes and verify that the storage controller module has finished starting.

3. Restart storage controller module 2 while copying data to verify that the failover event does not interrupt IO.

a. Copy the Test folder to the TestVol2 volume.

b. During the copy process, restart storage controller module 2 (the storage controller module through which the TestVol2 is mapped) by selecting it in the System Tree and clicking Shutdown/Restart Controller.

c. Verify that the copy process continues while the storage controller module restarts.

d. Wait several minutes and verify that the storage controller module has finished starting.

e. Rebalance the local ports.

Test MPIO

If the network environment and servers are configured for MPIO, perform tests to make sure that failed paths do not interrupt IO.

1. Create a Test folder on the server and copy at least 2GB of data into it.

2. Make sure that the Fibre Channel or iSCSI server is configured to use load balancing MPIO (round-robin). An example configuration is shown after this procedure.

3. Manually disconnect a path while copying data to TestVol1 to verify that MPIO is functioning correctly.


a. Copy the Test folder to the TestVol1 volume.

b. During the copy process, disconnect one of the paths and verify that the copy process continues.

c. Reconnect the path.

4. Repeat the previous steps as necessary to test additional paths.

5. Restart the storage controller module that contains the active path while IO is being transferred.

6. If the Storage Center is not in a production environment, restart the switch that contains the active path while IO is being transferred.
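The following is one way to request round-robin load balancing on a Linux server that uses dm-multipath; it is a sketch only, and the recommended multipath settings for your operating system should be verified against the Dell best-practices documentation. Add the settings to /etc/multipath.conf:

defaults {
    path_selector         "round-robin 0"
    path_grouping_policy  multibus
    user_friendly_names   yes
}

Reload the configuration with multipath -r and confirm that all paths are active with multipath -ll. On Windows servers, the native MPIO feature provides an equivalent Round Robin load-balance policy.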

Clean up Test Volumes

After testing is complete, delete the volumes used for testing.

Label SC200/SC220 Expansion Enclosures

SC200/SC220 expansion enclosures do not have displays to indicate the expansion enclosure ID assigned by Storage Center. To facilitate easy identification in the rack, use Storage Center System Manager to match each expansion enclosure ID to a service tag. Locate the service tag on the back of each expansion enclosure and then label it with the correct expansion enclosure ID.

About this task

NOTE: If the expansion enclosure is deleted from System Manager and then added back in, the expansion enclosure is assigned a new index number, requiring a label change.

Steps

1. Use the Storage Center System Manager to map each expansion enclosure ID to a service tag.

a. From the System Tree, expand the Enclosures node.

b. Select an expansion enclosure.

c. On the General tab, locate and record the Index and Service Tag.


Figure 87. Expansion Enclosure General Tab

2. Create a label for each SC200/SC220 expansion enclosure with the expansion enclosure ID number.

3. Apply an ID label to the left-front of each expansion enclosure.

Next Steps

After installation is complete, perform some basic tasks to configure Storage Center for your environment. These tasks are configuration-dependent, so some might not apply to your site.

• Manage unmanaged disks

• Add Storage Center users, including configuring Lightweight Directory Access Protocol (LDAP)

• Configure password rules for local users

• Create servers

• Configure user volume defaults

• Add Storage Center volumes

• Add a Storage Center login message

For more information, see the Storage Center System Manager Administrator’s Guide.


A Adding or Removing an Expansion Enclosure

This chapter describes how to add an expansion enclosure to a SAS chain and how to remove an expansion enclosure from a SAS chain.

Adding Expansion Enclosures to an SC4020 Deployed without Expansion Enclosures

Use caution when adding expansion enclosures to a live Storage Center system to preserve the integrity of the existing data.

Prerequisites

Identify the leader storage controller module by checking storage controller summary information in Storage Center System Manager.

About this task

New expansion enclosures must be added to the leader storage controller module first. These cabling instructions assume that the top storage controller module (storage controller module 1) is the leader controller.

Steps

1. Cable the expansion enclosures together, but do not connect the chain to the storage controller. Connect the new expansion enclosures together in the following order:

a. New expansion enclosure (1): top, port B → new expansion enclosure (2): top, port A.

b. New expansion enclosure (1): bottom, port B → new expansion enclosure (2): bottom, port A.

Figure 88. Cable the Expansion Enclosures

1. New expansion enclosure (1) 2. New expansion enclosure (2)

3. Add SAS cable between new expansion enclosure (1): top, port B and new expansion enclosure (2): top, port A

4. Add SAS cable between new expansion enclosure (1): bottom, port B and new expansion enclosure (2): bottom, port A

2. Turn on the expansion enclosures being added. When the drives spin up, make sure that the front panel and power status LEDs show normal operation.


3. Connect the expansion enclosures to the A-side of the chain.

a. Disconnect the SAS cable between storage controller module 1: port A and storage controller module 2: port B.

b. Connect storage controller module 1: port A to expansion enclosure 1: top EMM, port A.

Figure 89. Connect Expansion Enclosures to A-Side of Chain

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1 4. Expansion enclosure 2

5. Connect storage controller module 1: port A to expansion enclosure 1: top EMM, port A.

6. Connect storage controller module 2: port B to expansion enclosure 2: top EMM, port B.

c. In the Storage Center System Manager, select Storage Management → System → Setup → Configure Local Ports.

d. Make sure that port A on storage controller module 1 is Up, the purpose is set to Back End, and the Target Count equals the number of added drives.

e. In the Storage Center System Tree, confirm that there is only one expansion enclosure.

f. Connect storage controller module 2: port B to the last expansion enclosure in the chain, top EMM, port B.

g. In the Storage Center System Tree, confirm that there are two expansion enclosures.

4. Connect the expansion enclosures to the B-side of the chain.

a. Disconnect the SAS cable between storage controller module 1: port B and storage controller module 2: port A.

b. Connect storage controller module 1: port B to expansion enclosure 2: bottom EMM, port B.

c. Connect storage controller module 2: port A to expansion enclosure 1: bottom EMM, port A.


Figure 90. Connect Expansion Enclosures to B-Side of Chain

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1 4. Expansion enclosure 2

5. Connect storage controller module 1: port B to expansion enclosure 2: bottom EMM, port B.

6. Connect storage controller module 2: port A to expansion enclosure 1: bottom EMM, port A.

5. The Storage Center System Manager informs you that you have new, unassigned disks. Select Storage Management → Disk → Manage Unassigned Disks to move the unassigned disks to the managed disk folder and add the space to the disk pool. For more information, see the Storage Center System Manager Administrator’s Guide.

6. To redistribute data across all drives, select Storage Management → Disk → Rebalance RAID.

Adding an Expansion Enclosure to a Chain Currently in Service

To add an SC200/SC220 expansion enclosure to an existing chain, connect the expansion enclosure to one side of the chain at a time.

During this process, one side of the SAS chain is disconnected, and the Storage Center fails over to the other side of the chain, which remains connected. Use caution when adding an enclosure to a live Storage Center system to preserve the integrity of the existing data.

CAUTION: Before adding an enclosure to an existing chain, make sure that your data is backed up. For maximum protection, add enclosures during a service outage.


Identify the Leader Storage Controller Module

New expansion enclosures must be added to the leader storage controller module first. The cabling instructions in this section assume that the top storage controller module (storage controller module 1) is the leader controller.

About this task

Identify the leader storage controller module by checking controller summary information in Storage Center System Manager.

Steps

1. In the Storage Center System Tree, select the Controllers node.

2. In the storage controller module summary pane note which storage controller module is identified as the leader.

Check Current Disk Count before Adding an Expansion Enclosure

Check the current number of drives accessible by Storage Center to use as a comparison after an expansion enclosure is added.

1. Access Storage Center System Manager from a computer on the same management network as the Storage Center.

2. In the Storage Center System Manager, select Storage Management → System → Setup → Configure Local Ports.

3. Maximize the Configure Local Port dialog box so that you can see the Target Count column.

4. Record the number of drives currently viewable by the controllers. This number increases when expansion enclosures are added to Storage Center.

NOTE: When adding expansion enclosures to an existing Storage Center, add the expansion enclosures to the end of the SAS chain.

Add an SC200/SC220 Expansion Enclosure to the A-side Chain

Connect an expansion enclosure to the chain one side at a time to maintain drive availability.

1. Disconnect the A-side cables (shown in red) from the storage controller modules.

• Storage controller module 1, port A

• Storage controller module 2, port B

IO continues through the B-side cables.

2. Turn on the expansion enclosure being added. When the drives spin up, make sure that the front panel and power status LEDs show normal operation.

3. Move the cable from expansion enclosure 1: top, port B to the new expansion enclosure (2): top, port B.

4. Use a new SAS cable to connect expansion enclosure 1: top, port B to the new expansion enclosure (2): top, port A.


Figure 91. Connect an SC200/SC220 Expansion Enclosure to the A side

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1 4. New expansion enclosure (2)

5. Disconnect the SAS cable from storage controller module 1: port A

6. Disconnect the SAS cable from storage controller module 2: port B

7. Add a SAS cable between expansion enclosure 1 and the new expansion enclosure (2)

8. Disconnect the SAS cable from expansion enclosure 1: top, port B

9. Connect the SAS cable to new expansion enclosure (2): top, port B

5. Reconnect the A-side cables to the storage controller modules:

a. Connect the new expansion enclosure (2): top, port B to storage controller module 2: port B.

b. Connect expansion enclosure 1: top, port A to storage controller module 1: port A.


Figure 92. Reconnect A-side Cables to Storage Controller Modules

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1 4. New expansion enclosure (2)

5. Reconnect A-side cable to storage controller module 1: port A

6. Reconnect A-side cable to storage controller module 2: port B

6. In the Storage Center System Manager, select Storage Management → System → Setup → Configure Local Ports.

7. Make sure that the SAS ports are up and the purpose is set to Back End.

8. Make sure that the Target Count equals the number of added drives, which indicates that the storage controller recognized the new expansion enclosure.

Add an SC200/SC220 Expansion Enclosure to the B-side Chain

Connect an expansion enclosure to the chain one side at a time to maintain drive availability.

1. Disconnect the B-side cables (shown in blue) from the storage controller modules.

• Storage controller module 1: port B.

• Storage controller module 2, port A.

IO continues through the A-side cables.

2. Move the cable from expansion enclosure 1: bottom, port B to the new expansion enclosure (2): bottom, port B.

3. Use a new SAS cable to connect expansion enclosure 1: bottom, port B to the new expansion enclosure (2): bottom, port A.


Figure 93. Connect an SC200/SC220 Expansion Enclosure to the B Side

1. New expansion enclosure (2) 2. Expansion enclosure 1

3. Storage controller module 1 4. Storage controller module 2

5. Disconnect the SAS cable from storage controller module 1: port B

6. Disconnect the SAS cable from storage controller module 2: port A

7. Add a SAS cable between expansion enclosure 1 and new expansion enclosure (2)

8. Disconnect the SAS cable from expansion enclosure 1: bottom, port B

9. Connect the SAS cable to new expansion enclosure (2): bottom, port B

4. Reconnect the B-side cables to the storage controller modules:

a. Connect the new expansion enclosure (2): bottom, port B to storage controller module 1: port B.

b. Connect expansion enclosure 1: bottom, port A to storage controller module 2: port A.


Figure 94. Reconnect B-Side Cables to Storage Controller Modules

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1 4. New expansion enclosure (2)

5. Reconnect B-side cable to storage controller module 1: port B

6. Reconnect B-side cable to storage controller module 2: port A

The Storage Center System Manager informs you that you have new, unassigned disks.

5. In the Storage Center System Manager, select Storage Management → Disk → Manage Unassigned Disks.

6. Move the unassigned disks to the managed disk folder and add the space to the disk pool. For more information, see the Storage Center System Manager Administrator’s Guide.

7. In the Storage Center System Tree, view the list of enclosures. The Storage Center System Manager assigns an index number to all managed drives.

8. To redistribute data across all drives, select Storage Management → Disk → Rebalance RAID.

Label the Back-End Cables

Label the back-end cables that connect storage controller modules to expansion enclosures to indicate the chain number and side (A or B).

Prerequisites

Locate the pre-made cable labels provided with the SC200/SC220 expansion enclosures.

About this task

Apply cable labels to both ends of each SAS cable that connects a storage controller module to an expansion enclosure.

Steps

1. Starting with the top edge of the label, attach the label to the cable near the connector.


Figure 95. Attach Cable Label

2. Wrap the label around the cable until it fully encircles the cable. The bottom of each label is clear so that it does not obscure the text.

3. Apply a matching label to the other end of the cable.

Removing an Expansion Enclosure from a Chain Currently in Service

These instructions describe how to remove expansion enclosure 1 from an SC200/SC220 expansion enclosure chain.

Use Storage Center System Manager to release the disks in the expansion enclosure before removing the expansion enclosure. For more information, see the Storage Center System Manager Administrator’s Guide.

CAUTION: Make sure that your data is backed up and that all disks in the expansion enclosure are empty of data before removing the expansion enclosure.

Release Disks before Removing an Expansion Enclosure

Release disks to remove them from the pool in preparation for removing the expansion enclosure.

1. In the Storage Center tree, expand the Disks node to view the disks in the expansion enclosure.

2. Select all the disks in the expansion enclosure to remove.

3. From the shortcut menu, select Release Disk. The Release Disk wizard starts.

4. Click Yes. The disks are immediately released. A dialog box opens asking if you wish to rebalance RAID devices.

5. Click Yes. Wait for the RAID rebalance operation to finish.

When all of the drives in the expansion enclosure are in the unassigned disk folder, the expansion enclosure is safe to remove.


Disconnect the A-side Chain from the SC200/SC220 Expansion Enclosure

Disconnect the A-side chain from the SC200/SC220 expansion enclosure that you want to remove.

About this task

CAUTION: To disconnect the A-side chain without a system outage, make sure that you first disconnect the A-side cable from storage controller module 1. Disconnecting any other cable first disrupts IO to the expansion enclosure, resulting in a system outage.

Steps

1. Disconnect the A-side cable (shown in red) from storage controller module 1: port A. The B side continues to carry IO while the A side is disconnected.

2. In the Storage Center Configure Local Ports wizard, make sure that the B-side ports on both storage controller modules are Up and the Target Count recognizes the drives through the B side.

3. Remove the A-side cable between expansion enclosure 1: top, port B and expansion enclosure 2: top, port A.

Figure 96. Disconnecting the Expansion Enclosure from the A-side Chain

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1 4. Expansion enclosure 2

5. Disconnect the cable from storage controller module 1: port A

6. Remove the cable between expansion enclosure 1: top, port B and expansion enclosure 2: top, port A

4. Move the A-side cable from expansion enclosure 1: top, port A to expansion enclosure 2: top, port A.

5. Reconnect the top A-side cable to storage controller module 1: port A.


Figure 97. Reconnecting the A-side Chain

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1 4. Expansion enclosure 2

5. Disconnect the SAS cable from expansion enclosure 1: top, port A

6. Connect the SAS cable to expansion enclosure 2: top, port A

7. Reconnect the SAS cable to storage controller module 1: port A

Disconnect the B-side Chain from the SC200/SC220 Expansion Enclosure

Disconnect the B-side chain from the SC200/SC220 expansion enclosure that you want to remove.

About this task

CAUTION: To disconnect the B-side chain without a system outage, make sure that you first disconnect the B-side cable from storage controller module 2. Disconnecting any other cable first disrupts IO to the expansion enclosure, resulting in a system outage.

Steps

1. Disconnect the B-side cable (shown in blue) from storage controller module 2: port A. The A side continues to carry IO while the B side is disconnected.

2. In the Storage Center Configure Local Ports wizard, make sure that the A-side ports on both storage controller modules are Up and the Target Count recognizes the drives through the A side.

3. Remove the B-side cable between expansion enclosure 1: bottom, port B and expansion enclosure 2: bottom, port A.

4. Move the B-side cable from expansion enclosure 1: bottom, port A to expansion enclosure 2: bottom, port A.


Figure 98. Disconnecting the Expansion Enclosure from the B-side Chain

1. Storage controller module 1 2. Storage controller module 2

3. Expansion enclosure 1 4. Expansion enclosure 2

5. Disconnect the SAS cable from storage controller module 2: port A

6. Remove the SAS cable between expansion enclosure 1: bottom, port B and expansion enclosure 2: bottom, port A

7. Disconnect the SAS cable from expansion enclosure 1: bottom, port A

8. Connect the SAS cable to expansion enclosure 2: bottom, port A

5. Reconnect the B-side cable to storage controller module 2: port A.

The expansion enclosure is now disconnected and can be removed.


Figure 99. Expansion Enclosure Disconnected

1. Expansion enclosure 1 2. Disconnected expansion enclosure

3. Storage controller module 1 4. Storage controller module 2

5. Reconnect the SAS cable to storage controller module 2: port A


B Troubleshooting Storage Center

This appendix contains troubleshooting steps for common Storage Center issues.

Troubleshooting the Serial Connection to a Storage Controller Module

Table 18. Troubleshooting Communication with a Storage Controller Module

Issue Possible Reason Solution

Unable to connect to a storage controller module with a serial connection.

Communication problems. • Disconnect and reconnect the serial cable.

• Verify the connection settings.

• Replace the serial cable.

Troubleshooting Expansion Enclosures

Table 19. Troubleshooting Issues with Expansion Enclosures

Issue Possible Reason Solution

No Disks Found by Startup Wizard:

No disks could be found attached to this Storage Center.

Need to update software and/or firmware to use the attached expansion enclosure.

1. Click Quit.

2. Check for Storage Center updates and install the updates.

3. Complete the Storage Center Startup Wizard.

Expansion enclosure firmware update fails

Back-end cabling is not redundant.

Check the back-end cabling and ensure that redundant connections are used.

Expected disks and/or expansion enclosure are missing

Need to update software and/or firmware to use attached expansion enclosure and/or drives.

1. Check the back-end cabling and the expansion enclosure power state.

2. Check for Storage Center updates and install the updates.

Related Links

Check for Storage Center Updates

Complete the Startup Wizard

SAS Redundancy


Troubleshooting Storage Center Licenses

Table 20. Troubleshooting Licenses

Issue Possible Reason Solution

Error: The license file is not valid (could be old license file). Contact Dell Technical Support Services for a new license file.

This issue may occur if you attempt to apply the license file before the storage controller modules are joined, or if the license file is applied to the storage controller module with the higher serial number.

Make sure that all the Storage Center setup steps are followed in order and try again.

• Make sure to connect to the IP address of the storage controller module with the lower serial number.

• Make sure the serial number in the name of the license file matches the storage controller module.

Related Links

Set up the Storage Center Software

Troubleshooting the Storage Center Startup Wizard

Table 21. Trouble with the Storage Center Startup Wizard and Updates

Issue Possible Reason Solution

Cannot complete the Storage Center Startup Wizard and cannot check for updates.

The management network may use a proxy server, which blocks communication with the Phone Home server.

Use the Command Line Interface (CLI) to configure a proxy server for Phone Home operations.

1. Establish a serial connection to the top storage controller module.

2. Run the following commands:

mc values set UsePhHomeProxy true
mc values set PhHomeProxyString [proxy server IP address]
mc values set PhHomeProxyUserName [proxy server username]
mc values set PhHomeProxyPassword [proxy server password]

3. Complete the Storage Center Startup Wizard.

4. Check for Storage Center updates.

Related Links

Complete the Startup Wizard

Check for Storage Center Updates


C Getting Help

This appendix describes how to get help from Dell Technical Support Services.

Locating Your System Service Tag

Your system is identified by a unique Express Service Code and Service Tag number. The Express Service Code and Service Tag are found on the front of the system by pulling out the information tag. This information is used by Dell to route support calls to the appropriate personnel.

Contacting Dell

Dell provides several online and telephone-based support and service options. Availability varies by country and product, and some services may not be available in your area.

To contact Dell for technical support, go to http://support.dell.com/compellent/.

Documentation feedback

If you have feedback for this document, write to [email protected]. Alternatively, you can click on the Feedback link in any of the Dell documentation pages, fill out the form, and click Submit to send your feedback.
