
Windows Host Utilities 5.2 Installation and Setup Guide

NetApp, Inc. 495 East Java Drive, Sunnyvale, CA 94089 U.S.A. Telephone: +1 (408) 822-6000 Fax: +1 (408) 822-4501 Support telephone: +1 (888) 4-NETAPP Documentation comments: [email protected] Information Web: http://www.netapp.com

Part number 215-04803_A0, July 2009


Contents

Copyright information...................................................................................9
Trademark information...............................................................................11
About this guide............................................................................................13

Audience......................................................................................................................13

Terminology.................................................................................................................14

Command conventions................................................................................................14

Keyboard and formatting conventions.........................................................................15

Special messages.........................................................................................................16

How to send your comments.......................................................................................16

New features in this Host Utilities release..................................................17
Introduction to Host Utilities.......................................................................19

What the Host Utilities are..........................................................................................19

Tasks required for installing and configuring the Host Utilities..................................20

What the Host Utilities Contain...................................................................................20

Windows configurations supported by the Host Utilities............................................21

Protocols supported by the Host Utilities....................................................................22

Data ONTAP and Fibre Channel.....................................................................22

Data ONTAP and Fibre Channel over Ethernet...............................................23

Data ONTAP and iSCSI..................................................................................24

Dynamic disk support..................................................................................................25

Multipathing options supported by the Host Utilities.................................................26

What is Hyper-V..........................................................................................................27

Methods for using storage with Hyper-V........................................................27

Methods for clustering Windows hosts with Hyper-V....................................28

Recommended LUN layout with Hyper-V......................................................28

Virtual Server 2005 overview......................................................................................28

About using virtual hard disks.........................................................................29

Virtual machines and iSCSI initiators..............................................................29

About SAN booting.....................................................................................................29

Support for non-English operating system versions....................................................30

Where to find more information..................................................................................30

Installing and Configuring Host Utilities...................................................33


Installing and configuring the Host Utilities (high level)............................................33

Verifying your host and storage system configuration................................................34

Confirming your storage system configuration...........................................................35

Configuring FC HBAs and switches...........................................................................35

Checking the media type of FC ports..........................................................................36

Configuring iSCSI initiators and HBAs......................................................................37

iSCSI software initiator options.......................................................................37

Downloading the iSCSI software initiator.......................................................38

Installing the iSCSI Initiator software.............................................................39

Installing the iSCSI HBA................................................................................39

Options for iSCSI sessions and error recovery levels......................................40

Options for using CHAP with iSCSI Initiators...............................................41

Installing multipath I/O software.................................................................................41

How to have a DSM multipath solution..........................................................42

Disabling ALUA for Data ONTAP DSM........................................................42

Enabling ALUA for FC with msdsm...............................................................43

About configuring Hyper-V systems...........................................................................44

Adding virtual machines to a failover cluster..................................................44

Configuring SUSE Linux guests for Hyper-V.................................................44

About Veritas Storage Foundation...............................................................................45

Using Veritas Storage Foundation 5.1 for Windows........................................46

Installation process overview.......................................................................................46

Installing the Host Utilities interactively.........................................................47

Installing the Host Utilities from a command line..........................................47

About SnapDrive for Windows....................................................................................48

Repairing and removing Windows Host Utilities........................................................49

Repairing or removing Windows Host Utilities interactively..........................49

Repairing or removing Windows Host Utilities from a command line.........................................49

Removing Windows Host Utilities affects DSM.............................................50

Host configuration settings..........................................................................51
What are FC and iSCSI identifiers..............................................................51

Recording the WWPN.....................................................................................51

Recording the iSCSI initiator node name........................................................53

Overview of settings used by the Host Utilities..........................................................53

Summary of registry values set by Windows Host Utilities............................54


Summary of FC HBA values set by Windows Host Utilities..........................56

ManageDisksOnSystemBuses setting used by the Host Utilities....................57

ClusSvcHangTimeout setting..........................................................................57

TimeOutValue setting used by the Host Utilities ............................................57

PathVerifyEnabled setting used by the Host Utilities .....................................58

PDORemovePeriod setting used by the Host Utilities ...................................58

RetryCount setting used by the Host Utilities.................................................58

RetryInterval setting used by the Host Utilities...............................................59

DsmMaximumStateTransitionTime and DsmMaximumRetryTimeDuringStateTransition settings used by the Host Utilities.............................59
MPIOSupportedDeviceList setting used by the Host Utilities......................................................60

DsmSupportedDeviceList setting used by Host Utilities................................60

IPSecConfigTimeout setting used by the Host Utilities..................................60

LinkDownTime setting used by the Host Utilities..........................................60

MaxRequestHoldTime setting used by the Host Utilities...............................61

FC HBA parameters set by the Host Utilities..................................................61

Setting up LUNs............................................................................................63
LUN overview...............................................................................................63

LUN types to use for hosts and guest operating systems................................63

Overview of creating LUNs.............................................................................64

Initiator group overview..............................................................................................65

Mapping LUNs to igroups...............................................................................65

About mapping LUNs for Windows clusters...................................................66

About FC targets..........................................................................................................66

About proxy paths in FC configurations.........................................................66

Adding iSCSI targets...................................................................................................68

About dependent services on the Native Stack and iSCSI..............................69

About dependent services on Veritas and iSCSI..............................................70

Accessing LUNs on hosts that use Veritas Storage Foundation..................................70

Accessing LUNs on hosts that use the native OS stack...............................................71

Overview of initializing and partitioning the disk.......................................................71

Troubleshooting............................................................................................73
Areas to check for possible problems..........................................................73

Updating the HBA software driver..............................................................................74


Understanding the Host Utilities changes to FC HBA driver settings.........................75

Verifying the Emulex HBA driver settings on FC systems.............................75

Verifying the QLogic HBA driver settings on FC systems..............................76

Enabling logging on the Emulex HBA .......................................................................76

Enabling logging on the QLogic HBA........................................................................77

About the diagnostic programs....................................................................................77

Running the diagnostic programs (general steps)............................................78

Collecting storage system information............................................................79

Collecting Windows host information.............................................................79

Collecting FC switch information...................................................................80

About collecting information on Veritas Storage Foundation.........................80

SAN Booting..................................................................................................81
What SAN booting is.....................................................................................81

How SnapDrive supports SAN booting...........................................................82

General requirements for SAN booting...........................................................82

About queue depths used with FC SAN booting.............................................82

Configuring FC SAN booting .....................................................................................83

About BootBIOS and SAN booting................................................................84

Enabling Emulex BootBIOS using HBAnyware.............................................84

Enabling Emulex BootBIOS using LP6DUTIL..............................................85

Enabling QLogic BootBIOS............................................................................85

WWPN for the HBA required.........................................................................86

Configuring a single path between the host and storage system.....................86

Creating the boot LUN................................................................................................89

Configuring Emulex BootBIOS..................................................................................89

Configuring QLogic BootBIOS...................................................................................90

About configuring BIOS to allow booting from a LUN..............................................91

Configuring a Dell BIOS Revision A10..........................................................91

Configuring an IBM BIOS..............................................................................92

Configuring a Phoenix BIOS 4 Release 6.......................................................92

Configuring QLogic iSCSI HBA boot BIOS settings.................................................92

Configuration process requires limited paths to HBA.....................................93

Setting iSCSI HBA parameters in boot BIOS.................................................93

Getting the correct driver for the boot LUN................................................................93

Installing Windows on the boot LUN..............................................................94


Configuring a VCS or MSCS cluster with Veritas in a SAN booted environment.........................95

Index...............................................................................................................97


Copyright information

Copyright © 1994–2009 NetApp, Inc. All rights reserved. Printed in the U.S.A.

No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.

Software derived from copyrighted NetApp material is subject to the following license and disclaimer:

THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.

The product described in this manual may be protected by one or more U.S.A. patents, foreign patents, or pending applications.

RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).


Trademark information

NetApp, the Network Appliance logo, the bolt design, NetApp-the Network Appliance Company, Cryptainer, Cryptoshred, DataFabric, DataFort, Data ONTAP, Decru, FAServer, FilerView, FlexClone, FlexVol, Manage ONTAP, MultiStore, NearStore, NetCache, NOW NetApp on the Web, SANscreen, SecureShare, SnapDrive, SnapLock, SnapManager, SnapMirror, SnapMover, SnapRestore, SnapValidator, SnapVault, Spinnaker Networks, SpinCluster, SpinFS, SpinHA, SpinMove, SpinServer, StoreVault, SyncMirror, Topio, VFM, and WAFL are registered trademarks of NetApp, Inc. in the U.S.A. and/or other countries. gFiler, Network Appliance, SnapCopy, Snapshot, and The evolution of storage are trademarks of NetApp, Inc. in the U.S.A. and/or other countries and registered trademarks in some other countries. The NetApp arch logo; the StoreVault logo; ApplianceWatch; BareMetal; Camera-to-Viewer; ComplianceClock; ComplianceJournal; ContentDirector; ContentFabric; EdgeFiler; FlexShare; FPolicy; Go Further, Faster; HyperSAN; InfoFabric; Lifetime Key Management, LockVault; NOW; ONTAPI; OpenKey, RAID-DP; ReplicatorX; RoboCache; RoboFiler; SecureAdmin; Serving Data by Design; Shadow Tape; SharedStorage; Simplicore; Simulate ONTAP; Smart SAN; SnapCache; SnapDirector; SnapFilter; SnapMigrator; SnapSuite; SohoFiler; SpinMirror; SpinRestore; SpinShot; SpinStor; vFiler; VFM Virtual File Manager; VPolicy; and Web Filer are trademarks of NetApp, Inc. in the U.S.A. and other countries. NetApp Availability Assurance and NetApp ProTech Expert are service marks of NetApp, Inc. in the U.S.A.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. A complete and current list of other IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml.

Apple is a registered trademark and QuickTime is a trademark of Apple, Inc. in the U.S.A. and/or other countries. Microsoft is a registered trademark and Windows Media is a trademark of Microsoft Corporation in the U.S.A. and/or other countries. RealAudio, RealNetworks, RealPlayer, RealSystem, RealText, and RealVideo are registered trademarks and RealMedia, RealProxy, and SureStream are trademarks of RealNetworks, Inc. in the U.S.A. and/or other countries.

All other brands or products are trademarks or registered trademarks of their respective holders and should be treated as such.

NetApp, Inc. is a licensee of the CompactFlash and CF Logo trademarks.

NetCache is certified RealSystem compatible.


About this guide

This document helps you to use the product that it describes. You can do that more effectively when you understand the intended audience and the conventions that this document uses to present information.

This guide describes how to install Windows Host Utilities and how to enable Microsoft Windows hosts to access data on storage systems that run Data ONTAP software.

This guide describes how to use both the iSCSI and Fibre Channel protocols. It also describes using both protocols to connect to the same LUN. Starting with Windows Host Utilities 5.0, the previous iSCSI Windows Host Utilities and FCP Windows Host Utilities are merged into a single product.

See the NetApp Interoperability Matrix for information on the specific Windows versions supported.

For currently supported guest operating systems on Hyper-V virtual machines, see the Windows Host Utilities Release Notes.

Next topics

Audience on page 13

Terminology on page 14

Command conventions on page 14

Keyboard and formatting conventions on page 15

Special messages on page 16

How to send your comments on page 16

Related information

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/

Audience

This document is written with certain assumptions about your technical knowledge and experience.

This guide is intended for system installers and administrators who are familiar with the Microsoft Windows operating system and configuring and administering NetApp storage systems.

You should be familiar with the specifics of your configuration, including the following items.

• Multipathing software, such as Data ONTAP DSM (device-specific module) for Windows MPIO (multipath I/O) or Veritas Storage Foundation for Windows.

• The iSCSI or Fibre Channel protocols.

• Creating virtual hard disks and virtual networks if you are using Virtual Server 2005 or the Hyper-V role of Windows Server 2008 to create virtual machines.


Terminology

To understand the concepts in this document, you might need to know how certain terms are used.

Storage terms

• Storage system refers to the hardware device running Data ONTAP that receives data from and sends data to native disk shelves, third-party storage, or both. Storage systems that run Data ONTAP are sometimes referred to as filers, appliances, storage appliances, V-Series systems, or systems. When a storage system is in a high-availability configuration, it is usually referred to as a node. The terms used in this document reflect one of these common usages.

• Storage controller refers to the component of a storage system that runs the Data ONTAP operating system and controls its disk subsystem. Storage controllers are also sometimes called controllers, storage appliances, appliances, storage engines, heads, CPU modules, or controller modules.

Cluster and high-availability terms

• In Data ONTAP 7.2 and 7.3 release families, an HA pair is referred to as an active/active configuration or active/active pair.

Command conventions

You can use your product more effectively when you understand how this document uses command conventions to present information.

You can perform common administrator tasks in one or more of the following ways:

• You can enter commands on the system console, or from any client computer that can obtain access to the storage system using a Telnet or Secure Socket Shell (SSH) session. In examples that illustrate command execution, the command syntax and output might differ, depending on your version of the operating system. (See the example after this list.)

• You can use the FilerView graphical user interface. For information about accessing your system with FilerView and about FilerView Help, which explains Data ONTAP features and how to work with them in FilerView, see the Data ONTAP System Administration Guide.

• You can enter Windows, ESX, HP-UX, AIX, Linux, and Solaris commands on a client console. Your product documentation provides specific command options you can use.

• You can use the client graphical user interface. Your product documentation provides details about how to use the graphical user interface.

• You can enter commands on the switch console or from any client that can obtain access to the switch using a Telnet session. In examples that illustrate command execution, the command syntax and output might differ, depending on your version of the operating system.
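For example, the first option above might look like the following when run from a client that has an SSH program installed. This is only a sketch; the storage system name filer01 is a placeholder, and the output varies with your Data ONTAP version:

   ssh root@filer01 version     (displays the Data ONTAP release running on the storage system)
   ssh root@filer01 lun show    (lists the LUNs configured on the storage system)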

Keyboard and formatting conventions

You can use your product more effectively when you understand how this document uses keyboard and formatting conventions to present information.

Keyboard conventions

The NOW site: Refers to NetApp On the Web at http://now.netapp.com/.

Enter, enter:
• Used to refer to the key that generates a carriage return; the key is named Return on some keyboards.
• Used to mean pressing one or more keys on the keyboard and then pressing the Enter key, or clicking in a field in a graphical interface and then typing information into the field.

hyphen (-): Used to separate individual keys. For example, Ctrl-D means holding down the Ctrl key while pressing the D key.

type: Used to mean pressing one or more keys on the keyboard.

Formatting conventions

Italic font:
• Words or characters that require special attention.
• Placeholders for information you must supply. For example, if the guide says to enter the arp -d hostname command, you enter the characters "arp -d" followed by the actual name of the host.
• Book titles in cross-references.

Monospaced font:
• Command names, option names, keywords, and daemon names.
• Information displayed on the system console or other computer monitors.
• Contents of files.
• File, path, and directory names.

Bold monospaced font: Words or characters you type. What you type is always shown in lowercase letters, unless your program is case-sensitive and uppercase letters are necessary for it to work properly.

Special messages

This document might contain the following types of messages to alert you to conditions that you need to be aware of.

Note: A note contains important information that helps you install or operate the system efficiently.

Attention: An attention notice contains instructions that you must follow to avoid a system crash, loss of data, or damage to the equipment.

How to send your comments

You can help us to improve the quality of our documentation by sending us your feedback.

Your feedback is important in helping us to provide the most accurate and high-quality information. If you have suggestions for improving this document, send us your comments by e-mail to [email protected]. To help us direct your comments to the correct division, include in the subject line the name of your product and the applicable operating system. For example, FAS6070—Data ONTAP 8.0, or Host Utilities—Solaris, or Operations Manager 3.8—Windows.


New features in this Host Utilities release

Windows Host Utilities 5.2 includes several new features and support for additional configurations.

Host Utilities 5.2 includes the following changes from 5.1:

• Support for Windows Server 2008 R2.

Note: If you are upgrading your host to Windows Server 2008 R2, upgrade to Windows Host Utilities 5.2 and Data ONTAP DSM 3.3.1 for Windows MPIO before upgrading to Server 2008 R2.

• A new timeout parameter for Windows Server 2008 R2 cluster configurations: ClusSvcHangTimeout.

Note: If you add a Windows Server 2008 R2 host to a Windows failover cluster after installing Windows Host Utilities, run the Repair option of the Host Utilities to set the required value for this parameter.

• An updated installation dialog with the LUN Protocol Support selection removed. The PROTOCOLS option is also removed from the command line installation. For MPIO configurations, a single DSM manages all LUNs on storage systems running Data ONTAP software regardless of the protocol used to access the LUNs.

• An updated installation dialog with the Enable Microsoft DSM (MSDSM) Support option removed. The MSDSMSUPPORT option is also removed from the command line installation. Veritas Storage Foundation for Windows (SFW) can now coexist with the MSDSM. If you are using the optional SFW software, SFW 5.1 with the Device Driver Installation Package 1 (DDI-1) is required.

Related concepts

ClusSvcHangTimeout setting on page 57

Related information

Veritas DDI-1 package - seer.entsupport.symantec.com/docs/317825.htm


Introduction to Host Utilities

This section introduces the Host Utilities and what they contain.

Next topics

What the Host Utilities are on page 19

Tasks required for installing and configuring the Host Utilities on page 20

What the Host Utilities Contain on page 20

Windows configurations supported by the Host Utilities on page 21

Protocols supported by the Host Utilities on page 22

Dynamic disk support on page 25

Multipathing options supported by the Host Utilities on page 26

What is Hyper-V on page 27

Virtual Server 2005 overview on page 28

About SAN booting on page 29

Support for non-English operating system versions on page 30

Where to find more information on page 30

What the Host Utilities are

The Host Utilities are a set of software programs and documentation that enable you to connect host computers to NetApp storage systems.

The Host Utilities include the following components:

• An installation program that sets required parameters on the host computer and host bus adapters (HBAs).

• Diagnostic programs for displaying important information about the host, HBAs, Fibre Channel switches, and storage systems in your network.

• Programs to detect and correct LUN alignment problems.

• This documentation, which describes how to install the Host Utilities and troubleshoot common problems.


Tasks required for installing and configuring the Host Utilities

Installing and configuring the Host Utilities involves performing a number of tasks on the host and the storage system.

The required tasks are as follows.

1. Install the Host Utilities and other required and optional software.
2. Record the FC and iSCSI initiator identifiers.
3. Create LUNs and make them available as disks on the host computer.

The following tasks are optional, depending on your configuration.

• Change the Fibre Channel cfmode setting of the storage system to single_image (see the sketch after this list).

• Configure SAN booting of the host.
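The optional cfmode change is made on the storage system console. The following is a hedged sketch only; check the current setting first, and see the cfmode document listed under Related information below before making any change:

   fcp show cfmode                  (displays the current cfmode setting)
   fcp set cfmode single_image      (changes the cfmode; the FCP service may need to be stopped first)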

Related concepts

Host configuration settings on page 51

Setting up LUNs on page 63

SAN Booting on page 81

Related tasks

Installing and Configuring Host Utilities on page 33

Related information

Changing the Cluster cfmode Setting in Fibre Channel SAN Configurations - http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/QuickRef/SSICFMODE_1205.pdf

What the Host Utilities Contain

The Host Utilities include a set of diagnostic programs and an installation program. When you install the Host Utilities software, the installer extracts the programs and places them on your host. The installer also sets required Windows registry and HBA parameters.

The toolkit contains the following programs:

brocade_info.exe: Collects information about Brocade Fibre Channel switches the host is connected to in a file that can be submitted to technical support for problem analysis.

cisco_info.exe: Collects information about Cisco Fibre Channel switches the host is connected to in a file that can be submitted to technical support for problem analysis.

controller_info.exe: Collects information about a storage system in a file that can be submitted to technical support for problem analysis. This program was named filer_info.exe in previous Host Utilities versions.

hba_info.exe: Displays detailed information about Fibre Channel host bus adapters (HBAs) in the host.

mcdata_info.exe: Collects information about McData Fibre Channel switches the host is connected to in a file that can be submitted to technical support for problem analysis.

msiscsi_info.exe: Displays information about the iSCSI initiator running on the Windows host.

qlogic_info.exe: Collects information about QLogic Fibre Channel switches the host is connected to in a file that can be submitted to technical support for problem analysis.

san_version.exe: Displays the version of the diagnostic tools and Fibre Channel HBAs.

vm_info.exe: Displays detailed information about the virtual machines, including the storage resources they use. Installed only on Windows Server 2008 and 2008 R2 x64 systems. Runs on a Hyper-V parent only.

windows_info.exe: Collects information about the Windows host in a file that can be submitted to technical support for problem analysis.
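As an illustration only, these programs are typically run from a Windows command prompt in the directory where the Host Utilities are installed. The installation path shown here and the assumption that the programs run without arguments are not taken from this guide; check each program's built-in help for the actual syntax:

   cd "C:\Program Files\NetApp\Windows Host Utilities"     (hypothetical installation path)
   windows_info.exe
   hba_info.exe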

Windows configurations supported by the Host Utilities

The Host Utilities support a number of Windows host configurations.

Depending on your specific environment, the Host Utilities support the following:

• iSCSI paths to the storage system

• Fibre Channel paths to the storage system

• Multiple paths to the storage system when a multipathing solution is installed

• Virtual machines using Hyper-V (Windows Server 2008) or Virtual Server 2005 (Windows Server 2003), both parent and guest

• Veritas Storage Foundation


• SAN booting

Use the Interoperability Matrix to find a supported combination of host and storage system components and software and firmware versions.

Related information

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/

Protocols supported by the Host Utilities

The Host Utilities provide support for Fibre Channel, Fibre Channel over Ethernet, iSCSI, and NFS connections to the storage system.

Next topics

Data ONTAP and Fibre Channel on page 22

Data ONTAP and Fibre Channel over Ethernet on page 23

Data ONTAP and iSCSI on page 24

Data ONTAP and Fibre Channel

The Fibre Channel (FC) protocol for SCSI is one method for enabling the host to access data on storage systems that run supported versions of Data ONTAP software.

Fibre Channel connections require one or more supported host bus adapters (HBAs) in the host.

The storage system is an FC target device. The Fibre Channel service must be licensed and running on the storage system.

Each HBA port is an initiator that uses FC to access logical units of storage (LUNs) on a storage system to store and retrieve data.

On the host, a worldwide port name (WWPN) identifies each port on an HBA. The host WWPNs are used as identifiers when creating initiator groups on a storage system. An initiator group permits host access to specific LUNs on a storage system.
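For example, on the storage system console, an FC initiator group is created from the host WWPNs and a LUN is mapped to it with commands like the following. This is a sketch; the igroup name, WWPN, and LUN path are placeholders for your own values:

   igroup create -f -t windows win_host1 10:00:00:00:c9:6b:76:49     (creates an FC igroup for a Windows host)
   lun map /vol/vol1/lun0 win_host1                                  (maps an existing LUN to that igroup)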

Supported FC configurations

The Host Utilities support fabric-attached SAN network configurations and direct-attached configurations.

The following configurations are supported:

• Fabric-attached storage area network (SAN). Two variations of fabric-attached SANs are supported:

• A single-host FC connection from the HBA to the storage system through a single switch. A host is cabled to a single FC switch that is connected by cable to redundant FC ports on an active/active storage system configuration. A fabric-attached single-path host has one HBA.


• Two (or more) FC connections from the HBA to the storage system through dual switches or a zoned switch. In this configuration, the host has at least one dual-port HBA or two single-port HBAs. The redundant configuration avoids the single point of failure of a single-switch configuration. This variation requires a supported multipathing solution and device-specific module (DSM), such as the Data ONTAP DSM for Windows MPIO or the Veritas DMP DSM.

• Direct-attached. A single host with a direct FC connection from the HBA to stand-alone or active/active storage systems.

ALUA (asymmetric logical unit access) must be enabled on igroups when using the Microsoft DSM (msdsm), and must be disabled when using the Data ONTAP DSM for Windows MPIO or the Veritas DMP DSM. ALUA requires a supported version of Data ONTAP software; see the Interoperability Matrix.
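For example, ALUA is set per igroup on the storage system console; the igroup name here is a placeholder:

   igroup set win_host1 alua yes     (enables ALUA, required for the Microsoft msdsm)
   igroup set win_host1 alua no      (disables ALUA, required for the Data ONTAP DSM or Veritas DMP DSM)
   igroup show -v win_host1          (displays the igroup, including its current ALUA setting)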

Note: Use redundant configurations with two FC switches for high availability in production environments. However, direct FC connections and switched configurations using a single zoned switch might be appropriate for less critical business applications.

For more detailed information about the supported Fibre Channel topologies, including diagrams, see the Fibre Channel and iSCSI Configuration Guide for your version of Data ONTAP.

For more information about using Fibre Channel on your storage system, see the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.

Related information

Fibre Channel and iSCSI Configuration Guide - http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/

Data ONTAP and Fibre Channel over Ethernet

The Fibre Channel over Ethernet (FCoE) protocol for SCSI is one method for enabling the host to access data on storage systems that run supported versions of Data ONTAP software.

Fibre Channel over Ethernet (FCoE) is a new model for connecting hosts to storage systems. FCoE is very similar to traditional Fibre Channel (FC), as it maintains existing FC management and controls, but the hardware transport is a lossless 10-gigabit Ethernet network.

Setting up an FCoE connection requires one or more supported converged network adapters (CNAs) in the host, connected to a supported data center bridging (DCB) Ethernet switch. The CNA is a consolidation point and effectively serves as both an HBA and an Ethernet adapter.

As an HBA, the presentation to the host is FC targets, and all FC traffic is sent out as FC frames mapped into Ethernet packets (FC over Ethernet). The 10-gigabit Ethernet adapter is used for IP traffic, such as iSCSI, NFS, and HTTP. Both FCoE and IP communications through the CNA run over the same 10-gigabit Ethernet port, which connects to the DCB switch.

In general, you configure and use FCoE connections just like traditional FC connections.


Supported FCoE configurations

The Host Utilities support fabric-attached SAN network configurations, but direct-attached configurations are not supported.

FCoE adapter configuration

The converged network adapters (CNAs) must be directly cabled to a supported data center bridging (DCB) switch. No intermediate Ethernet switches may be connected between the CNA end point and the DCB switch. The CNA is a consolidation point and effectively serves as both an HBA and an Ethernet adapter.

Updating the drivers and firmware for a CNA is just like updating them for a traditional FC HBA. Check the Interoperability Matrix for the supported firmware versions.

The CNA uses the same timeout parameters as a traditional FC HBA. The Windows Host Utilities installation program detects the FC HBA portion of the CNA and sets the required parameters. If you install a CNA after installing the Host Utilities, run the Repair option of the installation program to configure the CNA parameters.

FCoE cabling configuration

FCoE cabling information and diagrams are included in the Fibre Channel and iSCSI Configuration Guide for your version of Data ONTAP software.

FCoE switch configuration

The DCB switch requires special setup steps for FCoE. See the documentation supplied by the switch manufacturer. For example, the steps for a Cisco 5020 are included in the Cisco Nexus 5000 Series Switch Fabric Manager Software Configuration Guide.

You zone the DCB switch for FCoE just like you zone a traditional FC switch.

Related information

NetApp Interoperability Matrix - now.netapp.com/NOW/products/interoperability

Cisco Nexus 5000 Series Switch Fabric Manager Software Configuration Guide - www.cisco.com/en/US/products/ps9670/tsd_products_support_series_home.html

Data ONTAP and iSCSI

The iSCSI protocol is one method for enabling the host to access data on storage systems that run supported versions of Data ONTAP software.

iSCSI connections can use a software initiator over the host's standard Ethernet interfaces, or one or more supported host bus adapters (HBAs).


The iSCSI protocol is a licensed service on the NetApp storage system that enables you to transfer block data to hosts using the SCSI protocol over TCP/IP. The iSCSI protocol standard is defined by RFC 3720 (www.ietf.org).

The storage system is an iSCSI target device. A host running the iSCSI Initiator software or an iSCSI HBA uses the iSCSI protocol to access LUNs on a storage system.

The connection between the initiator and target uses a standard TCP/IP network. No special network configuration is needed to support iSCSI traffic. The network can be a dedicated TCP/IP network, or your regular public network. The storage system listens for iSCSI connections on TCP port 3260. For more information on using iSCSI on your storage system, see the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.
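For example, once the Microsoft iSCSI software initiator is installed on the host, its iscsicli command-line interface can register the storage system as a target portal and list the targets it presents. The IP address is a placeholder for an iSCSI-enabled interface on your storage system:

   iscsicli AddTargetPortal 192.168.2.10 3260     (registers the storage system target portal on TCP port 3260)
   iscsicli ListTargets                           (lists the discovered iSCSI target node names)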

About iSCSI on IPv6

You can use iSCSI connections over networks running Internet Protocol version 6 (IPv6) if all components of the network have IPv6 support enabled.

IPv6 uses a longer IP address than the previous IPv4. This enables a larger address space. IPv6 addresses look like fe80:0000:0000:0000:ad7a:41a0:62d0:dd0b; IPv4 addresses look like 192.168.2.3.

Windows Host Utilities supports IPv6 starting with version 5.1. A version of Data ONTAP software that supports IPv6 is required on the storage system.

There are no IPv6-specific tasks for Windows Host Utilities. See the Microsoft Windows documentation for information about enabling IPv6 on the Windows host. See your Ethernet switch documentation for information on enabling IPv6 on your switches if needed. See the Data ONTAP Network Management Guide for your version of Data ONTAP for information on enabling IPv6 on the storage system.
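For example, on a Windows Server 2008 host you can confirm that IPv6 is enabled and see the addresses assigned to each interface with:

   netsh interface ipv6 show addresses     (lists the IPv6 addresses configured on the host)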

Note: The diagnostic programs included with Windows Host Utilities do not currently support IPv6 addresses. See the Host Utilities Release Notes for information about collecting diagnostic information in an IPv6 configuration and any updates to the diagnostic programs.

Dynamic disk support

Windows dynamic disks are supported with specific configuration requirements.

When using the native Windows storage stack, all LUNs composing the dynamic volume must be located on the same storage system controller.

When using Veritas Storage Foundation for Windows, the LUNs composing the dynamic volume can span storage controllers in active/active configurations.

Dynamic disks are not supported by SnapDrive for Windows.


Multipathing options supported by the Host Utilities

The Host Utilities support multiple FC (Fibre Channel) paths, multiple iSCSI paths, or a combination of FC and iSCSI paths.

Configure multiple paths to ensure a highly available connection between the Windows host and storage system.

Multipath I/O (MPIO) software is required any time a Windows host has more than one path to the storage system. The MPIO software presents a single disk to the operating system for all paths, and a device-specific module (DSM) manages path failover. Without MPIO software, the operating system could see each path as a separate disk, which can lead to data corruption.

On a Windows system, there are two main components to any MPIO configuration: the Windows MPIO components and a DSM. MPIO is supported for Windows Server 2003 and Windows Server 2008 systems. MPIO is not supported for Windows XP and Windows Vista running in a Hyper-V virtual machine.

When you select MPIO support during Host Utilities installation, the Host Utilities installer installs the Microsoft MPIO components on Windows Server 2003 or enables the included MPIO feature of Windows Server 2008.

See the NetApp Interoperability Matrix for the multipathing software currently supported.

The Microsoft Windows multipathing software uses a DSM to communicate with storage devices such as NetApp storage systems. You must use only one DSM for a given storage vendor. More precisely, you can have only one DSM that claims the LUNs for a given vendor ID, product ID (VID/PID) pair. If you are using Windows Server 2008, you must enable the optional Windows multipathing feature before installing a DSM.
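For example, on Windows Server 2008 the multipathing feature can be enabled from a command line before the DSM is installed. This is a sketch, assuming the Server Manager command-line tool is available on your system:

   ServerManagerCmd -install Multipath-IO     (enables the Windows Server 2008 Multipath I/O feature)
   mpclaim -s -d                              (after the feature and a DSM are installed, lists the MPIO-managed disks)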

A supported DSM is required for multipathing. The following DSMs are available for Windows hosts.

Data ONTAP DSM for Windows MPIO: This multipathing software supports active/active and active/passive policies on Windows Server 2003 and Windows Server 2008. If installed on Windows Server 2008, the Data ONTAP DSM claims NetApp LUNs and the Microsoft msdsm is not used. ALUA must be disabled on the storage system igroup when using this DSM.

Veritas DMP DSM: For MPIO using Veritas DMP, only the Veritas DMP DSM is supported; the Veritas DMP Array Support Library (ASL) is not supported. See the Interoperability Matrix for details on supported load balance policies with FC and iSCSI protocols.

Note: If you are using Veritas Storage Foundation for Windows, configure either Fibre Channel paths or iSCSI paths depending on how you want to connect to the storage system. There is no support for both Fibre Channel and iSCSI protocols on the same host with Veritas Storage Foundation.

Windows Server 2008 msdsm: This is the native DSM provided with Microsoft Windows Server 2008. It offers active/active and active/passive load balance policies for both the FC and iSCSI protocols. ALUA must be enabled on the storage system igroup for FC. See the Interoperability Matrix to be sure you have a version of Data ONTAP software that is supported with this DSM.

Microsoft iSCSI DSM: This is the DSM provided with the Microsoft iSCSI initiator. You can use this DSM for iSCSI paths on Windows Server 2003 systems.

Related information

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/

What is Hyper-V

Hyper-V is a Windows technology that enables you to create multiple virtual machines on a single physical x64 computer running Microsoft Windows Server 2008.

Hyper-V is a "role" available in Microsoft Windows Server 2008. Each virtual machine runs its own operating system and applications. For a list of currently supported operating systems on Hyper-V virtual machines, see the Windows Host Utilities Release Notes.

Next topics

Methods for using storage with Hyper-V on page 27

Methods for clustering Windows hosts with Hyper-V on page 28

Recommended LUN layout with Hyper-V on page 28

Methods for using storage with Hyper-V

Hyper-V enables you to provision storage using a virtual hard disk, an unformatted (raw) LUN, or an iSCSI LUN.

Virtual machines use storage on a storage system in the following ways:

• A virtual hard disk (IDE or SCSI) formatted as NTFS. The virtual hard disk is stored on a LUN mapped to the Hyper-V parent system. The guest OS must boot from an IDE virtual hard disk.

• An unformatted (raw) LUN mapped to the Hyper-V parent system and provided to the virtual machine as a physical disk mapped through either the SCSI or IDE virtual adapter.

Note: Do not enable multipathing in Windows Host Utilities installed on a guest OS if you are using raw (passthru) disks. The raw disks do not show up in the guest OS.

• An iSCSI LUN accessed by an iSCSI initiator running on the guest OS.

• For Windows Vista, use the built-in iSCSI initiator; multipathing is not supported.

• For Windows XP, use Microsoft iSCSI initiator 2.07; multipathing is not supported.


• For Windows Server 2003 and Windows Server 2008, use an iSCSI initiator and multipathing solution that is supported by NetApp for use on a standard host platform. The guest OS supports the same iSCSI configurations as if it was not running on a virtual machine.

• For SUSE Linux Enterprise Server, use a supported iSCSI initiator and multipathing solution. The guest OS supports the same iSCSI configurations as if it was not running on a virtual machine.

The parent Hyper-V system can connect to storage system LUNs just like any other Windows Server 2008 host.

Methods for clustering Windows hosts with Hyper-V

Hyper-V provides two ways to let you cluster Windows hosts.

• You can cluster the parent Hyper-V system with other parent systems using Windows failover clustering.

• You can cluster guest systems running in virtual machines with other guest systems using the clustering solution supported on the operating system. You must use an iSCSI software initiator on the guest system to access the quorum and shared disks.

Recommended LUN layout with Hyper-V

You can put one or more virtual hard disks (VHDs) on a single LUN for use with Hyper-V.

The recommended LUN layout with Hyper-V is to put up to 10 VHDs on a single LUN. If you need less than ten VHDs, put each VHD on its own LUN. If you need more than ten VHDs for a Windows host, spread the VHDs evenly across about ten LUNs.

When you create virtual machines, store the virtual machine and the VHD it boots from in the same LUN.

For Windows failover clusters, the layout is different.

• For Windows Server 2008 R2 with cluster shared volumes (CSVs), you can have VHDs for multiple guests on the same LUN.

• For failover clusters without CSV, use a separate LUN for each guest's VHDs.

Virtual Server 2005 overview

Virtual Server 2005 R2 enables you to run Windows and Linux virtual machines on Windows Server 2003.

NetApp storage systems support Microsoft Virtual Server 2005 R2 using iSCSI and FC. Virtual Server 2005 R2 is supported on any supported Windows 2003 configuration.

The virtual machines can use storage system LUNs in one of two ways:


• Connect to LUNs from the underlying Windows Server 2003 system. Then create virtual hard disks (VHDs) on those LUNs for the virtual machines. You can use either iSCSI or FC to connect to the LUNs.

• Connect to LUNs using a software iSCSI initiator running on the virtual machine.

For specific configurations that support Virtual Server 2005 R2, see the Interoperability Matrix.

Next topics

About using virtual hard disks on page 29

Virtual machines and iSCSI initiators on page 29

Related information

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/

About using virtual hard disks

Run the Windows Host Utilities installer to set required parameters on Virtual Server 2005 virtual machines.

Install Windows Host Utilities on each virtual machine running a Windows operating system that uses virtual hard disks (VHDs) based on LUNs. This is in addition to the Windows Host Utilities you install on the physical system that runs Virtual Server 2005.

No changes are required for Virtual Server 2005 virtual machines running a Linux operating system.

Virtual machines and iSCSI initiators

To use an iSCSI initiator on a Virtual Server 2005 virtual machine, you must set the required time-out values.

Run the Windows Host Utilities installer (Windows) or follow the instructions in the Host Utilities documentation (Linux) to set required timeout values on the virtual machine. An iSCSI initiator on a virtual machine is installed and works just like the iSCSI initiator on a physical system.

About SAN booting

SAN booting is the general term for booting a Windows host from a storage system LUN instead of an internal hard disk. The host might or might not have any hard drives installed.

SAN booting offers many advantages. Because the system (C:\) drive is located on the storage system, all of the reliability and backup features of the storage system are available to the system drive. You can also clone system drives to simplify deploying many Windows hosts and to reduce the total storage needed. SAN booting is especially useful for blade servers.


The downside of SAN booting is that loss of connectivity between the host and storage system can prevent the host from booting. Be sure to use a reliable connection to the storage system.

There are three options for SAN booting a Windows host:

Fibre Channel HBA: Requires one or more supported adapters. These same adapters can also be used for data LUNs. The Windows Host Utilities installer automatically configures required HBA settings.

iSCSI HBA: Requires one or more supported adapters. These same adapters can also be used for data LUNs, or you can use an iSCSI software initiator for data. You must manually configure the HBA settings.

iSCSI software boot: Requires a supported network interface card (NIC) and a special version of the Microsoft iSCSI software initiator.

For information on iSCSI software boot, see the vendor (emBoot, Intel, or IBM) documentation for the iSCSI boot solution you choose. Also, see Technical Report 3644.

Related information

Technical Report 3644 - http://media.netapp.com/documents/tr-3644.pdf

Support for non-English operating system versions

Windows Host Utilities are supported on all Language Editions of Windows Server 2003 and Server 2008. All product interfaces and messages are displayed in English. However, all variables accept Unicode characters as input.

Where to find more information

For additional information about host and storage system requirements, supported configurations, your operating system, and troubleshooting, see the documents listed below.

• Known issues, system requirements, and last minute updates: the latest Host Utilities Release Notes

• The latest supported configurations: the Interoperability Matrix and the System Configuration Guide

• Configuring the storage system: the Data ONTAP Software Setup Guide and the Data ONTAP Block Access Management Guide for iSCSI and FC

• Supported SAN topologies: the FC and iSCSI Configuration Guide for your version of Data ONTAP software

• Installing and configuring the HBA in your host: your HBA vendor documentation

• Installing and configuring MPIO using the Data ONTAP DSM: the Installation and Administration Guide for that version of Data ONTAP DSM for Windows MPIO

• Installing and configuring Veritas Storage Foundation for Windows and the Veritas DMP DSM: the Veritas Storage Foundation and High Availability Solutions Installation and Upgrade Guide, and the Veritas Storage Foundation Administrator's Guide

• Configuring Veritas Cluster Server and Microsoft Clustering in a Storage Foundation environment: the Veritas Cluster Server Administrator's Guide and the Veritas Storage Foundation Administrator's Guide

• Installing and configuring a supported version of SnapDrive® for Windows software: the Installation and Administration Guide for that version of SnapDrive for Windows

• Managing SAN storage on the storage system: Data ONTAP Commands: Manual Page Reference, Volume 1 and Data ONTAP Commands: Manual Page Reference, Volume 2; the Data ONTAP Block Access Management Guide for iSCSI and FC; and the FilerView online Help

Related information

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/

FC and iSCSI Configuration Guide - http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/


Installing and Configuring Host Utilities

This section describes how to install and configure the Host Utilities and how to perform related tasks.

Installing and configuring the Host Utilities (high level) on page 33
Verifying your host and storage system configuration on page 34
Confirming your storage system configuration on page 35
Configuring FC HBAs and switches on page 35
Checking the media type of FC ports on page 36
Configuring iSCSI initiators and HBAs on page 37
Installing multipath I/O software on page 41
About configuring Hyper-V systems on page 44
About Veritas Storage Foundation on page 45
Installation process overview on page 46
About SnapDrive for Windows on page 48
Repairing and removing Windows Host Utilities on page 49

Installing and configuring the Host Utilities (high level)
The following steps provide a high-level overview of what is involved in installing the Host Utilities and configuring your system to work with that software.

About this task

This section is for people familiar with this operating system and storage systems. If you need more information, see the detailed instructions for the steps.

Note: If you are upgrading a host running the Data ONTAP DSM from Windows Server 2008 to Server 2008 R2, you must upgrade to Data ONTAP DSM 3.3.1 before installing Server 2008 R2. You must also upgrade to Windows Host Utilities 5.2 before upgrading to Windows Server 2008 R2.

Steps

1. Verify your host and storage system configuration.
2. Confirm your storage system is set up.
3. Configure FC HBAs and switches.
4. Check the media type setting of FC target ports.
5. Install an iSCSI software initiator or HBA.


6. Configure iSCSI options and security.
7. Configure a multipathing solution.
8. Install Veritas Storage Foundation.
9. Install the Host Utilities.
10. Install SnapDrive for Windows.

After you finish

If you add a Windows 2008 R2 host to a failover cluster after installing the Host Utilities, run the Repair option of the Host Utilities installation program to set the required ClusSvcHangTimeout parameter.

Verifying your host and storage system configuration
Before you install the Host Utilities, verify that the Host Utilities version supports your host and storage system configuration.

About this task

The Interoperability Matrix lists all supported configurations. Individual computer models are not listed; Windows hosts are qualified based on their CPU chips. The following configuration items must be verified:

• Windows host CPU architecture

• Windows operating system version, service pack level, and required hotfixes

Note: If you are upgrading a host running the Data ONTAP DSM from Windows Server 2008 to Server 2008 R2, you must upgrade to Data ONTAP DSM 3.3.1 before installing Server 2008 R2. You must also upgrade to Windows Host Utilities 5.2 before upgrading to Windows Server 2008 R2.

• HBA model and firmware version

• Fibre Channel switch model and firmware version

• iSCSI initiator

• Multipathing software

• Veritas Storage Foundation for Windows software

• Data ONTAP version and cfmode setting

• Optional software such as SnapDrive for Windows

Steps

1. Verify that your entire configuration is listed in the matrix.
2. Verify that guest operating systems on Hyper-V virtual machines are shown as supported in the Windows Host Utilities Release Notes.


Related information

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/

Confirming your storage system configuration
You must make sure your storage system is properly cabled and the FC and iSCSI services are licensed and started.

About this task

This topic describes the high-level tasks you must complete to configure your storage system for use with Fibre Channel and iSCSI hosts. See the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP for detailed instructions.

Steps

1. Add the iSCSI or FCP license and start the target service. The Fibre Channel and iSCSI protocols are licensed features of Data ONTAP software. If you need to purchase a license, contact your NetApp or sales partner representative.

2. Verify your cabling. See the FC and iSCSI Configuration Guide for detailed cabling and configuration information.
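As a quick reference for step 1, the licensing and service commands on a Data ONTAP 7-mode console typically look like the following sketch; the license code is a placeholder, and you run only the commands for the protocol you are using:

license add XXXXXXX
fcp start
iscsi start
fcp status
iscsi status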

Related information

FC and iSCSI Configuration Guide - http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/

Configuring FC HBAs and switches
Install and configure one or more supported Fibre Channel host bus adapters (HBAs) for Fibre Channel connections to the storage system.

About this task

The Windows Host Utilities installer sets the required Fibre Channel HBA settings.

Note: Do not change HBA settings manually.

Steps

1. Install one or more supported Fibre Channel host bus adapters (HBAs) according to the instructions provided by the HBA vendor.


2. Obtain the supported HBA drivers and management utilities and install them according to the instructions provided by the HBA vendor.

The supported HBA drivers and utilities are available from the following locations:

Emulex HBAs
Emulex support page for NetApp.

QLogic HBAs
QLogic support page for NetApp.

3. Connect the HBAs to your Fibre Channel switches or directly to the storage system.
4. Create zones on the Fibre Channel switch according to your Fibre Channel switch documentation.

Related information

FC and iSCSI Configuration Guide - http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/

Emulex support page for NetApp - http://www.emulex.com/ts/index.html

QLogic support page for NetApp - http://support.qlogic.com/support/drivers_software.asp

Checking the media type of FC ports
The media type of the storage system FC target ports must be configured for the type of connection between the host and storage system.

About this task

The default media type setting of “auto” is for fabric (switched) connections. If you are connecting the host’s HBA ports directly to the storage system, you must change the media setting of the target ports to “loop”.

Steps

1. To display the current setting of the storage system’s target ports, enter the following command at a storage system command prompt:

fcp show adapter -v

The current media type setting is displayed.

2. To change the setting of a target port to “loop” for direct connections, enter the following commands at a storage system command prompt:

fcp config adapter down

fcp config adapter mediatype loop

fcp config adapter up

adapter is the storage system adapter directly connected to the host.
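For example, if the target port that is cabled directly to the host were adapter 0a (a hypothetical adapter name), the sequence would be:

fcp config 0a down
fcp config 0a mediatype loop
fcp config 0a up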


For more information, see the fcp man page or Data ONTAP Commands: Manual Page Reference, Volume 1 for your version of Data ONTAP.

Configuring iSCSI initiators and HBAs
For configurations using iSCSI, you must download and install an iSCSI software initiator, install an iSCSI HBA, or both.

An iSCSI software initiator uses the Windows host CPU for most processing and Ethernet network interface cards (NICs) or TCP/IP offload engine (TOE) cards for network connectivity. An iSCSI HBA offloads most iSCSI processing to the HBA card, which also provides network connectivity.

The iSCSI software initiator typically provides excellent performance. In fact, an iSCSI software initiator provides better performance than an iSCSI HBA in most configurations. The iSCSI initiator software for Windows is available from Microsoft at no charge. In some cases, you can even SAN boot a host with an iSCSI software initiator and a supported NIC.

iSCSI HBAs are best used for SAN booting. An iSCSI HBA implements SAN booting just like an FC HBA. When booting from an iSCSI HBA, it is recommended that you use an iSCSI software initiator to access your data LUNs.

Next topics

iSCSI software initiator options on page 37

Downloading the iSCSI software initiator on page 38

Installing the iSCSI Initiator software on page 39

Installing the iSCSI HBA on page 39

Options for iSCSI sessions and error recovery levels on page 40

Options for using CHAP with iSCSI Initiators on page 41

iSCSI software initiator options
Select the appropriate iSCSI software initiator for your host configuration.

The following is a list of operating systems and their iSCSI software initiator options.

Windows Server 2003
Download and install the iSCSI software initiator.

Windows Server 2008
The iSCSI initiator is built into the operating system. The iSCSI Initiator Properties dialog is available from Administrative Tools.

Windows XP guest systems on Hyper-V
For guest systems on Hyper-V virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), download and install the iSCSI software initiator. You cannot select the Microsoft MPIO Multipathing Support for iSCSI option; Microsoft does not support MPIO with Windows XP. Note that a Windows XP iSCSI connection to NetApp storage is supported only on Hyper-V virtual machines.

Windows Vista guest systems on Hyper-V
For guest systems on Hyper-V virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), the iSCSI initiator is built into the operating system. The iSCSI Initiator Properties dialog is available from Administrative Tools. Note that a Windows Vista iSCSI connection to NetApp storage is supported only on Hyper-V virtual machines.

SUSE Linux Enterprise Server guest systems on Hyper-V
For guest systems on Hyper-V virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), use an iSCSI initiator solution on a Hyper-V guest that is supported for standalone hardware. A supported version of Linux Host Utilities is required.

Linux guest systems on Virtual Server 2005
For guest systems on Virtual Server 2005 virtual machines that access storage directly (not as a virtual hard disk mapped to the parent system), use an iSCSI initiator solution on a Virtual Server 2005 guest that is supported for standalone hardware. A supported version of Linux Host Utilities is required.

Note: If you want to use an iSCSI HBA on Windows Server 2003 hosts to access the storage system, you must download and install the iSCSI initiator service.

Related tasks

Configuring SUSE Linux guests for Hyper-V on page 44

Downloading the iSCSI software initiator
To download the iSCSI initiator, complete the following steps.

About this task

If you are using iSCSI software boot, you need a special boot-enabled version of the iSCSI software initiator.

Steps

1. Go to the Microsoft Web site at http://www.microsoft.com/.
2. Click Downloads & Trials.
3. Click Download Center.
4. Keep the default setting of All Downloads. In the Search box, type iSCSI Initiator and then click Go.
5. Select the supported Initiator version you want to install.


6. Click the download link for the CPU type in your Windows host. You might also choose to download the Release Notes and Users Guide for the iSCSI Initiator from this Web page.

7. Click Save to save the installation file to a local directory on your Windows host.

The initiator installation program is saved to the Windows host.

Related concepts

About SAN booting on page 29

Installing the iSCSI Initiator software
On the Windows host, complete the following steps to install the iSCSI Initiator.

Before you begin

You must have downloaded the appropriate iSCSI initiator installer to the Windows host.

Steps

1. Open the local directory to which you downloaded the iSCSI Initiator software.
2. Run the installation program by double-clicking the icon.
3. When prompted to select installation options, select Initiator Service and Software Initiator.
4. For all multipathing solutions except Veritas, select the Microsoft MPIO Multipathing Support for iSCSI check box, regardless of whether you are using MPIO or not. For Veritas multipathing, clear this check box.

Multipathing is not available for Windows XP and Windows Vista.

5. Follow the prompts to complete the installation.

Installing the iSCSI HBA
If your configuration uses an iSCSI HBA, you must make sure that the HBA is installed and configured correctly.

Before you begin

If you use an iSCSI HBA on Windows Server 2003 hosts, you also need to install the Microsoft iSCSI initiator service. Follow the instructions in “Installing the iSCSI Initiator software” on page 39 to install the software initiator. If you are using only the iSCSI HBA, you can clear the “iSCSI Initiator” check box when installing the initiator package. The initiator service is built into Windows Server 2008.

About this task

You can optionally boot your Windows host from a storage system LUN using a supported HBA.


Steps

1. Install one or more supported iSCSI host bus adapters according to the instructions provided by the HBA vendor.

2. Obtain the supported HBA drivers and management utilities and install them according to the instructions provided by the HBA vendor.

Drivers for QLogic iSCSI HBA devices can be found here: http://support.qlogic.com/support/drivers_software.asp

3. Manually set the required QLogic iSCSI HBA settings.

a. Start the SANsurfer program on the Windows host and select the iSCSI HBA. See the SANsurfer online Help for more information.
b. Specify an IP address for each HBA port.
c. Set the Connection KeepAliveTO value to 180.
d. Enable ARP Redirect.
e. Set the iSCSI node name of all iSCSI HBA ports to the same name as shown in the iSCSI initiator GUI on the Windows host.
f. Save the HBA settings and reboot the Windows host.

4. Connect the iSCSI HBA to your Ethernet switches or directly to the storage system. Avoid routing if possible.

5. Using the iSCSI initiator GUI, configure the iSCSI target addresses of your storage system. If you are using more than one path, explicitly select the initiator and target for each path when you log on.

After you finish

If you are SAN booting from an iSCSI HBA, you must also manually set the boot BIOS on the HBA.

Related concepts

SAN Booting on page 81

Related references

Configuring QLogic iSCSI HBA boot BIOS settings on page 92

Options for iSCSI sessions and error recovery levels
The defaults allowed by Data ONTAP are one TCP/IP connection per iSCSI session and an error recovery level of 0.

You can optionally enable multiple connections per session and error recovery level 1 or 2 by setting Data ONTAP option values. Regardless of the settings, you can always use error recovery level 0 and single-connection sessions. For more information, see the chapter about managing the iSCSI network in the Data ONTAP Block Access Management Guide for iSCSI and FC.
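If you do enable these features, the relevant settings are changed with the options command on the storage system console. The following is a sketch only; the option names shown (iscsi.max_connections_per_session and iscsi.max_error_recovery_level) and their allowed values should be verified against the Block Access Management Guide for your Data ONTAP release:

options iscsi.max_connections_per_session 4
options iscsi.max_error_recovery_level 1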


The iSCSI initiator does not automatically create multiple sessions. You must explicitly create each session using the iSCSI Initiator GUI.

Options for using CHAP with iSCSI Initiators
You can use one-way or mutual (bidirectional) authentication with the challenge handshake authentication protocol (CHAP).

For one-way CHAP, the target only authenticates the initiator. For mutual CHAP, the initiator also authenticates the target.

The iSCSI Initiator sets strict limits on the length of both the initiator’s and target’s CHAP passwords. For Windows Server 2003, see the readme file on the host (C:\Windows\iSCSI\readme.txt) for more information. For Windows Server 2008, see the Manage iSCSI Security topic in Help.

There are two types of CHAP user names and passwords. These types indicate the direction of authentication, relative to the storage system:

Inbound
The storage system authenticates the iSCSI Initiator. Inbound settings are required if you are using CHAP authentication.

Outbound
The iSCSI Initiator authenticates the storage system using CHAP. Outbound values are used only with mutual CHAP.

You specify the iSCSI Initiator CHAP settings using the Microsoft iSCSI Initiator GUI on the host. Click Advanced on the GUI Discovery tab to specify inbound values for each storage system when you add a target portal. Click Secret on the General tab to specify the outbound value (mutual CHAP only).

By default, the iSCSI Initiator uses its iSCSI node name as its CHAP user name.

Always use ASCII text passwords; do not use hexadecimal passwords. For mutual (bidirectional) CHAP, the inbound and outbound passwords cannot be the same.
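On the storage system side, CHAP settings for an initiator are configured with the iscsi security command. The following is a hedged sketch with placeholder user names and passwords (the initiator name reuses the example format shown later in this guide); check the Data ONTAP Block Access Management Guide for the exact syntax for your release:

iscsi security add -i iqn.1991-05.com.microsoft:server3 -s CHAP -p inpassword123 -n iqn.1991-05.com.microsoft:server3
iscsi security show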

Installing multipath I/O software
You must have multipathing set up if your Windows host has more than one path to the storage system.

The MPIO software presents a single disk to the operating system for all paths, and a device-specific module (DSM) manages path failover. Without MPIO software, the operating system could see each path as a separate disk, which can lead to data corruption.

On a Windows system, there are two main components to any MPIO solution: the Windows MPIO components and a DSM.

MPIO is supported for Windows Server 2003 and Windows Server 2008 systems. MPIO is not supported for Windows XP and Windows Vista running in a Hyper-V virtual machine.


When you select MPIO support, the Windows Host Utilities installer installs the Microsoft MPIO components on Windows Server 2003 or enables the included MPIO feature of Windows Server 2008.

You then need to install a supported DSM. Choices include the Data ONTAP DSM for Windows MPIO, the Veritas DMP DSM, the Microsoft iSCSI DSM (part of the iSCSI initiator package), and the Microsoft msdsm (included with Windows Server 2008).

Next topics

How to have a DSM multipath solution on page 42

Disabling ALUA for Data ONTAP DSM on page 42

Enabling ALUA for FC with msdsm on page 43

How to have a DSM multipath solution
If your environment uses a DSM as its multipathing solution, see the appropriate DSM documentation for installation instructions.

To install the Data ONTAP DSM for Windows MPIO, follow the instructions in the Installation and Administration Guide for your version of the DSM.

To install the Veritas DMP DSM in Veritas Storage Foundation for Windows software, follow the instructions in the Veritas Storage Foundation and High Availability Solutions Installation and Upgrade Guide. Be sure to install the Veritas DMP DSM when you install the Veritas Storage Foundation for Windows software.

To install the Microsoft iSCSI DSM, select the Microsoft MPIO Multipathing Support for iSCSI option when you install the iSCSI initiator on Windows Server 2003.

The Microsoft msdsm is included with Windows Server 2008. No additional installation is required if you selected MPIO support when you installed Windows Host Utilities. If you did not originally select MPIO support, run the Repair option of the Windows Host Utilities installer and select MPIO support.

Note: You must select MPIO support during the Host Utilities installation.

Disabling ALUA for Data ONTAP DSM
Disable asymmetric logical unit access (ALUA) on the storage system initiator group (igroup) when you use the Data ONTAP DSM for Windows MPIO.

About this task

ALUA was introduced in Data ONTAP 7.2. The default setting is disabled, which is the required setting for the Data ONTAP DSM.

ALUA is enabled or disabled on igroups.

Steps

1. If the igroup has already been created, verify the ALUA setting by entering


igroup show -v igroup_name

If ALUA is enabled, the command output includes ALUA: Yes. If ALUA is disabled, nothing about ALUA is displayed.

2. If enabled, disable ALUA on the igroup by entering

igroup set igroup_name alua no

For more information about the igroup command, see the igroup man page or the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.
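For example, with a hypothetical igroup named win_host1, the check-and-disable sequence on the storage system console would be:

igroup show -v win_host1
igroup set win_host1 alua no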

Enabling ALUA for FC with msdsm
Enable asymmetric logical unit access (ALUA) on the storage system initiator group (igroup) when you use the Microsoft native Fibre Channel (FC) device-specific module (DSM) in Windows Server 2008 (msdsm) for FC paths.

Before you begin

Data ONTAP 7.3.0 or later software running single_image cfmode is required to support ALUA for the msdsm.

About this task

The msdsm uses ALUA to identify the primary (non-proxy) paths to FC LUNs.

ALUA is enabled on FC igroups. ALUA is not currently supported for iSCSI Windows igroups. You cannot map a LUN to both FC and iSCSI igroups when ALUA is enabled on the FC igroup.

The msdsm does not support mixed FC and iSCSI paths to the same LUN. If you want mixed paths, use the Data ONTAP DSM for Windows MPIO.

Steps

1. Verify you have a supported version of Data ONTAP software and it is configured for single_image cfmode.

2. Create the FC igroup for the Windows host.
3. Enable ALUA on the igroup by entering

igroup set igroup_name alua yes

For more information about the igroup command, see the igroup man page or the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.
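As an illustration, using a hypothetical FC igroup name (win_fc_host1) and the sample WWPN format shown later in this guide, the two steps might look like:

igroup create -f -t windows win_fc_host1 10:00:00:00:c9:73:5b:90
igroup set win_fc_host1 alua yes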


About configuring Hyper-V systems
Hyper-V systems require special configuration steps for some virtual machines.

Next topics

Adding virtual machines to a failover cluster on page 44

Configuring SUSE Linux guests for Hyper-V on page 44

Adding virtual machines to a failover cluster
To add Hyper-V virtual machines to a cluster, they must be on the node on which you are creating and adding the virtual machines.

About this task

When you have more than one virtual machine (configuration files and boot .vhd file) stored on the same LUN, and you are adding the virtual machines to a failover cluster, you must put all of the virtual machine resources in the same resource group. Otherwise, adding the virtual machines to the cluster fails.

Steps

1. Move the available storage group to the node on which you are creating and adding virtual machines. (The available storage resource group in a Windows Server 2008 failover cluster is hidden.) On the cluster node, enter the following command at a Windows command prompt:

c:\cluster group "Available Storage" /move:node_name

node_name is the host name of the cluster node on which you are adding virtual machines.

2. Move all of the virtual machine resources to the same failover cluster resource group.
3. Run the Virtual Machine Resource Wizard to create the virtual machines and then add them to the failover cluster. Be sure that the resources for all virtual machines are configured as dependent on the disk mapped to the LUN.
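As a usage sketch of step 1 with a hypothetical node name (HV-NODE1), you can first list the cluster groups and their current owner nodes, and then move the hidden storage group:

cluster group
cluster group "Available Storage" /move:HV-NODE1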

Configuring SUSE Linux guests for Hyper-V
SUSE Linux Enterprise Server guest operating systems running on Hyper-V require a timeout parameter setting to support virtual hard disks and the Linux Host Utilities to support iSCSI initiators. You must also install the Linux Integration Components package from Microsoft.

Before you begin

Install a supported version of the SUSE Linux Enterprise Server operating system on a Hyper-V virtual machine.


About this task

Setting timeout parameters on a Linux guest ensures correct failover behavior.

You can use an iSCSI initiator solution on a Hyper-V guest that is supported for standalone hardware. Be sure to install a supported version of Linux Host Utilities. Use the linux type for LUNs accessed with an iSCSI initiator and for raw Hyper-V LUNs. Use the windows_2008 or hyper_v LUN type for LUNs that contain VHDs.

Steps

1. Download and install the Linux Integration Components package from Microsoft.

The package is available from the Microsoft Connect site. Registration is required.

2. Set the timeout parameter.

a. Open the /etc/udev/rules.d/50-udev-default.rules file on the Linux guest.
b. Locate the line that includes sys$$DEVPATH/timeout.
c. Change the value in that line to 120.

Example
ACTION=="add", SUBSYSTEM=="scsi", SYSFS{type}=="0|7|14", RUN+="/bin/sh -c 'echo 120 > /sys$$DEVPATH/timeout'"

d. Save the file.
e. Make the new value active by entering the following command at the Linux command prompt:

/etc/init.d/boot.udev force-reload

f. Verify the new timeout value is active by running the following command:

cat /sys/block/sdX/device/timeout

sdX is the name of the SCSI disk device, such as sdb.

3. Set all virtual network adapters for the virtual machine to use static MAC addresses.
4. If you are running an iSCSI initiator on the Linux guest, install a supported version of SUSE Linux Host Utilities.

Related information

Microsoft Connect - http://connect.microsoft.com

About Veritas Storage Foundation
If you are using Veritas Storage Foundation for Windows, make sure you have it installed before you install the Host Utilities software package.


Using Veritas Storage Foundation 5.1 for Windows
Veritas Storage Foundation 5.1 for Windows software requires specific fixes and settings to work with NetApp storage systems.

About this task

The following steps are required to create a supported configuration.

Steps

1. Download and install the DDI-1 package for Veritas Storage Foundation 5.1 from the Symantec Web site.

2. For clustering environments (VCS or Microsoft Clustering-MSCS), set the SCSI setting in the Veritas Enterprise Administrator control panel to SCSI-3.

For the latest information, see the Host Utilities Release Notes.

Related information

Veritas DDI-1 package - seer.entsupport.symantec.com/docs/317825.htm

Installation process overview
You must specify whether to include multipathing support when you install the Windows Host Utilities software package.

The installer prompts you for the following choice. You can also run a quiet (unattended) installation from a Windows command prompt.

Multipathing support
Choose MPIO if you have more than one path from the Windows host or virtual machine to the storage system. MPIO is required with Veritas Storage Foundation for Windows. Choose no MPIO only if you are using a single path to the storage system.

The MPIO selection is not available for Windows XP and Windows Vista systems; multipath I/O is not supported on these guest operating systems.

For Hyper-V guests, raw (passthru) disks do not appear in the guest OS if you choose multipathing support. You can either use raw disks or you can use MPIO, but you cannot use both in the guest OS.

The Enable Microsoft DSM (MSDSM) support and Protocol Support choices are removed from the installation program starting with Windows Host Utilities 5.2.


Note: If you are using Veritas Storage Foundation for Windows, configure either Fibre Channel paths or iSCSI paths, depending on how you want to connect to the storage system. There is no support for both Fibre Channel and iSCSI protocols on the same host with Veritas Storage Foundation.

Next topics

Installing the Host Utilities interactively on page 47

Installing the Host Utilities from a command line on page 47

Installing the Host Utilities interactively
To install the Host Utilities software package interactively, run the Host Utilities installation program and follow the prompts.

Steps

1. Insert your Windows Host Utilities CD or change to the directory to which you downloaded the executable file.

2. Run the executable file and follow the instructions on the screen.

Note: The Windows Host Utilities installer checks for required Windows hotfixes. If it detects a missing hotfix, it displays an error. Download and install any requested hotfixes, and then restart the installer. Some hotfixes for Windows Server 2008 are not recognized unless the affected feature is enabled. For example, an MPIO hotfix might not be recognized as installed until the MPIO feature is enabled. If you are prompted to install a hotfix that is already installed, try enabling the affected Windows feature and then restart the Host Utilities installer.

3. Reboot the Windows host when prompted.

After you finish

If you add a Windows 2008 R2 host to a failover cluster after installing the Host Utilities, run the Repair option of the Host Utilities installation program to set the required ClusSvcHangTimeout parameter.

Related tasks

Installing the Host Utilities from a command line on page 47

Installing the Host Utilities from a command line
You can perform a quiet (unattended) installation of the Host Utilities by entering the commands at a Windows command prompt.

Before you begin

The Host Utilities installation package must be in a path that is accessible by the Windows host. Follow the instructions for installing the Host Utilities interactively to obtain the installation package.


About this task

The system automatically reboots when the installation is complete.

If a required Windows hotfix is missing, the installation fails with a simple message in the event log saying the installation failed, but not why. Problems with hotfixes are logged to the msiexec log file. It is recommended that you enable logging in the msiexec command to record the cause of this and other possible installation failures.
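For example, a quiet installation with verbose msiexec logging enabled might look like the following; the log file path is a placeholder:

msiexec /i installer.msi /quiet /l*v C:\temp\hu_install.log MULTIPATHING=1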

Step

1. Enter the following command at a Windows command prompt:

msiexec /i installer.msi /quiet MULTIPATHING={0 | 1} [INSTALLDIR=inst_path]

installer is the name of the .msi file for your CPU architecture.

MULTIPATHING specifies whether MPIO support is installed. Allowed values are 0 for no, 1 for yes.

inst_path is the path where the Host Utilities files are installed. The default path is C:\Program Files\NetApp\Windows Host Utilities\.

Note: To see the standard Microsoft Installer (MSI) options for logging and other functions, enter msiexec /help at a Windows command prompt.

After you finish

If you add a Windows 2008 R2 host to a failover cluster after installing the Host Utilities, run the Repair option of the Host Utilities installation program to set the required ClusSvcHangTimeout parameter.

Related tasks

Installing the Host Utilities interactively on page 47

About SnapDrive for Windows
You have the option of using SnapDrive for Windows software to help you provision and manage LUNs and Snapshot copies.

If you are using SnapDrive software, install it on your Windows host after you have installed Windows Host Utilities.


Follow the instructions in the Installation and Administration Guide for your version of SnapDrive software.

Note: SnapDrive for Windows is not supported with Veritas Storage Foundation for Windows software.

Repairing and removing Windows Host Utilities
You can use the Repair option of the Host Utilities installation program to update HBA and Windows registry settings. You can remove the Host Utilities entirely, either interactively or from the Windows command line.

Note: Removing the Host Utilities affects installed DSMs.

Next topics

Repairing or removing Windows Host Utilities interactively on page 49

Repairing or removing Windows Host Utilities from a command line on page 49

Removing Windows Host Utilities affects DSM on page 50

Repairing or removing Windows Host Utilities interactively
The Repair option updates the Windows registry and Fibre Channel HBAs with the required settings. You can also remove the Host Utilities entirely.

Steps

1. Open Windows Add or Remove Programs (Windows Server 2003) or Programs and Features (Windows Server 2008).

2. Select Windows Host Utilities.
3. Click Change.
4. Click Repair or Remove as needed.
5. Follow the instructions on the screen.

Repairing or removing Windows Host Utilities from a command line
The Repair option updates the Windows registry and Fibre Channel HBAs with the required settings. You can also remove the Host Utilities entirely from a Windows command line.

Step

1. Enter the following command on the Windows command line to repair Windows Host Utilities:

msiexec {/uninstall | /f} installer.msi [/quiet]


/uninstall removes the Host Utilities entirely

/f repairs the installation

installer.msi is the name of the Windows Host Utilities installation program on your system

/quiet suppresses all feedback and reboots the system automatically without prompting when the command completes
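For example, using the same installer file name placeholder, a silent repair or a silent removal would look like:

msiexec /f installer.msi /quiet
msiexec /uninstall installer.msi /quiet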

Removing Windows Host Utilities affects DSM
Removing Windows Host Utilities removes the registry settings needed by DSMs.

The registry settings for Windows Host Utilities affect device-specific modules (DSMs) that claim NetApp LUNs. This includes the Data ONTAP DSM for Windows MPIO and the Veritas DMP DSM. Removing Windows Host Utilities removes the registry settings needed by these DSMs to claim the LUNs.

If you remove Windows Host Utilities, you can restore the DSM registry settings by running the Repair option of the DSM installation program.

Note: Windows Host Utilities is currently required for all supported configurations of Windows hosts that use NetApp LUNs.


Host configuration settings

You need to collect some host configuration settings as part of the installation process. The Host Utilities installer modifies other host settings based on your installation choices.

Next topics

What are FC and iSCSI identifiers on page 51

Overview of settings used by the Host Utilities on page 53

What are FC and iSCSI identifiers
The storage system identifies hosts that are allowed to access LUNs based on the FC worldwide port names (WWPNs) or iSCSI initiator node name on the host.

Each Fibre Channel port has its own WWPN. A host has a single iSCSI node name for all iSCSI ports. You need these identifiers when creating initiator groups (igroups) on the storage system.

The storage system also has WWPNs and an iSCSI node name, but you do not need them for configuring the host.

Next topics

Recording the WWPN on page 51

Recording the iSCSI initiator node name on page 53

Recording the WWPN
Record the worldwide port names of all FC ports that connect to the storage system.

About this task

Each HBA port has its own WWPN. For a dual-port HBA, you need to record two values.

The WWPN looks like this:

WWPN: 10:00:00:00:c9:73:5b:90

Steps

1. If the system is running and Windows Host Utilities are installed, obtain the WWPNs using the hba_info.exe program.

2. If the system is SAN booted and not yet running an operating system, obtain the WWPNs using the boot BIOS.


Next topics

Obtaining the WWPN using hba_info on page 52

Obtaining the WWPN using Emulex BootBIOS on page 52

Obtaining the WWPN using QLogic BootBIOS on page 52

Obtaining the WWPN using hba_info

The hba_info.exe program installed by Windows Host Utilities displays the WWPN of all FC ports on a Windows host.

Steps

1. Run the hba_info.exe program that was installed as part of Windows Host Utilities.

The program displays detailed information about each Fibre Channel HBA port.

2. Copy the WWPN line for each port to a text file or write it down.

Obtaining the WWPN using Emulex BootBIOS

For SAN-booted systems with Emulex HBAs that do not yet have an operating system, you can get the WWPNs from the boot BIOS.

Steps

1. Restart the host.
2. During startup, press Alt-E to access BootBIOS.
3. Select the menu entry for the Emulex HBA.

BootBIOS displays the configuration information for the HBA, including the WWPN.

4. Record the WWPN for each HBA port.

Obtaining the WWPN using QLogic BootBIOS

For SAN-booted systems with QLogic HBAs that do not yet have an operating system, you can get the WWPNs from the boot BIOS.

Steps

1. Restart the host.
2. During startup, press Ctrl-Q to access BootBIOS.
3. Select the appropriate HBA and press Enter.

The Fast!UTIL options are displayed.

4. Select Configuration Settings and press Enter.


5. Select Adapter Settings and press Enter.
6. Record the WWPN for each HBA port from the Adapter Port Name field.

Recording the iSCSI initiator node name
Record the iSCSI initiator node name from the iSCSI Initiator program on the Windows host.

Steps

1. For Windows Server 2008 and Windows Vista, click Start ➤ Administrative Tools ➤ iSCSI Initiator. For Windows Server 2003 and Windows XP, click Start ➤ All Programs ➤ Microsoft iSCSI Initiator ➤ Microsoft iSCSI Initiator.

The iSCSI Initiator Properties dialog box is displayed.

2. Copy the Initiator Name or Initiator Node Name value to a text file or write it down.

The exact label in the dialog box differs depending on the Windows version. The iSCSI node name looks like this:

iqn.1991-05.com.microsoft:server3

Overview of settings used by the Host Utilities
The Host Utilities require certain registry and parameter settings to ensure the Windows host correctly handles the storage system behavior.

The parameters set by Windows Host Utilities affect how the Windows host responds to a delay or loss of data. The particular values have been selected to ensure that the Windows host correctly handles events such as the failover of one controller in the storage system to its partner controller.

Fibre Channel and iSCSI host bus adapters (HBAs) also have parameters that must be set to ensure the best performance and to successfully handle storage system events.

The installation program supplied with Windows Host Utilities sets the Windows and Fibre Channel HBA parameters to the supported values. You must manually set iSCSI HBA parameters.

The installer sets different values depending on whether you specify multipath I/O (MPIO) support when running the installation program, whether you enable the Microsoft DSM on Windows Server 2008, and which protocols you select (iSCSI, Fibre Channel, both, or none). You should not change these values unless directed to do so by technical support.

Next topics

Summary of registry values set by Windows Host Utilities on page 54

Summary of FC HBA values set by Windows Host Utilities on page 56


ManageDisksOnSystemBuses setting used by the Host Utilities on page 57

ClusSvcHangTimeout setting on page 57

TimeOutValue setting used by the Host Utilities on page 57

PathVerifyEnabled setting used by the Host Utilities on page 58

PDORemovePeriod setting used by the Host Utilities on page 58

RetryCount setting used by the Host Utilities on page 58

RetryInterval setting used by the Host Utilities on page 59

DsmMaximumStateTransitionTime and DsmMaximumRetryTimeDuringStateTransition settings used by the Host Utilities on page 59

MPIOSupportedDeviceList setting used by the Host Utilities on page 60

DsmSupportedDeviceList setting used by Host Utilities on page 60

IPSecConfigTimeout setting used by the Host Utilities on page 60

LinkDownTime setting used by the Host Utilities on page 60

MaxRequestHoldTime setting used by the Host Utilities on page 61

FC HBA parameters set by the Host Utilities on page 61

Summary of registry values set by Windows Host Utilities
The Windows Host Utilities installer sets a number of Windows registry values based on the choices you make during installation and the operating system version.

All values are decimal unless otherwise noted.

All configurations
The following values are set for all configurations:

HKLM\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters\ManageDisksOnSystemBuses = 1
HKLM\SYSTEM\CurrentControlSet\Control\Class\{iSCSI_driver_GUID}\instance_ID\Parameters\MaxRequestHoldTime = 120
HKLM\SYSTEM\CurrentControlSet\Control\Class\{iSCSI_driver_GUID}\instance_ID\Parameters\LinkDownTime = 5
HKLM\SYSTEM\CurrentControlSet\Control\Class\{iSCSI_driver_GUID}\instance_ID\Parameters\IPSecConfigTimeout = 60

Windows Server 2008 R2 cluster configurations
The following value is set only for Windows Server 2008 R2 cluster configurations:

HKLM\Cluster\ClusSvcHangTimeout = 240


Note: If you configure a Windows cluster after installing Windows Host Utilities, run the Repair option of the Host Utilities installation program to set ClusSvcHangTimeout.

No MPIO specified
The following value is set when no multipath I/O (MPIO) support is selected:

HKLM\SYSTEM\CurrentControlSet\Services\disk\TimeOutValue = 120

MPIO support specified, all OS
The following values are set when multipath I/O (MPIO) is selected. These values are set for all operating systems.

HKLM\SYSTEM\CurrentControlSet\Services\disk\TimeOutValue = 20
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\PDORemovePeriod = 130
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\PathVerifyEnabled = 0
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\RetryCount = 6
HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\RetryInterval = 1
HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\PDORemovePeriod = 130
HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\PathVerifyEnabled = 0
HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\RetryCount = 6
HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\RetryInterval = 1
HKLM\SYSTEM\CurrentControlSet\Control\MPDEV\MPIOSupportedDeviceList = "NETAPP LUN "

MPIO support specified, Windows Server 2003 only
The following values are set when multipath I/O (MPIO) is selected. These values are set for Windows Server 2003 only.

HKLM\SYSTEM\CurrentControlSet\Services\msiscdsm\Parameters\PathVerifyEnabled = 0
HKLM\SYSTEM\CurrentControlSet\Services\msiscdsm\Parameters\PDORemovePeriod = 130
HKLM\SYSTEM\CurrentControlSet\Services\msiscdsm\Parameters\RetryCount = 6
HKLM\SYSTEM\CurrentControlSet\Services\msiscdsm\Parameters\RetryInterval = 1


MPIO support specified, Windows Server 2008 only
The following values are set when multipath I/O (MPIO) is selected. These values are set for Windows Server 2008 only.

HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\PathVerifyEnabled = 0
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\PDORemovePeriod = 130
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\RetryCount = 6
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\RetryInterval = 1
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\DsmMaximumRetryTimeDuringStateTransition = 120
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\DsmMaximumStateTransitionTime = 120
HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\DsmSupportedDeviceList = "NETAPP LUN "

Summary of FC HBA values set by Windows Host Utilities
On systems using FC, the Host Utilities installer sets the required HBA values.

For Emulex Fibre Channel HBAs, the installer sets the following parameters when MPIO is selected:

LinkTimeOut = 5
NodeTimeOut = 30

For Emulex Fibre Channel HBAs, the installer sets the following parameters when no MPIO is selected:

LinkTimeOut = 30
NodeTimeOut = 120

For QLogic Fibre Channel HBAs, the installer sets the following parameters when MPIO is selected:

LinkDownTimeOut = 5
PortDownRetryCount = 30

For QLogic Fibre Channel HBAs, the installer sets the following parameters when no MPIO is selected:

LinkDownTimeOut = 30
PortDownRetryCount = 120

ManageDisksOnSystemBuses setting used by the Host Utilities
The Windows Host Utilities installer sets the ManageDisksOnSystemBuses registry parameter to 1 for all configurations.

The ManageDisksOnSystemBuses parameter is used by SAN-booted systems to ensure that the startup disk, pagefile disks, and cluster disks are all on the same SAN fabric.

This parameter is set in the following registry key: HKLM\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters\

For detailed information about the ManageDisksOnSystemBuses parameter, see Microsoft Support article 886569.

Related information

Microsoft Support article 886569 - http://support.microsoft.com/kb/886569.

ClusSvcHangTimeout setting
The Windows Host Utilities installer sets ClusSvcHangTimeout to 240 seconds for Windows Server 2008 R2 hosts in a cluster configuration. This ensures the correct path discovery sequence after all paths to a LUN are lost and paths begin to return.

The ClusSvcHangTimeout setting specifies the time period for which a PR_IN (Persistent Reserve In) request is retried after loss of all paths. This setting must be greater than the PDORemovePeriod for Windows Server 2008 R2.

This parameter is set in the following Windows registry key: \HKLM\Cluster\

Note: If you configure a Windows cluster after installing Windows Host Utilities, run the Repair option of the Host Utilities installation program to set ClusSvcHangTimeout.

TimeOutValue setting used by the Host Utilities
The Windows Host Utilities installer sets the disk TimeOutValue to 20 seconds for MPIO configurations and to 120 seconds for non-MPIO configurations.

The disk TimeOutValue parameter specifies how long an I/O request is held at the SCSI layer before timing out and passing a timeout error to the application above.

The disk TimeOutValue setting for MPIO configurations changed to 20 from 60 seconds in Windows Host Utilities 5.1. This lower value improves failover performance.

This parameter is set in the following Windows registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\


Attention: Installing the cluster service on Windows 2003 sets the disk TimeOutValue to 10 seconds. If cluster service is installed after you install Windows Host Utilities, use the Repair option to change the disk TimeOutValue back to 20 or 120 seconds. In the Windows Control Panel, launch Add or Remove Programs. Select Windows Host Utilities and click Change. Then click Repair to correct the TimeOutValue setting.

PathVerifyEnabled setting used by the Host Utilities
The Windows Host Utilities installer sets the PathVerifyEnabled parameter to 0 (disabled) for all MPIO configurations. For Microsoft iSCSI DSM configurations, this value must be 0.

The PathVerifyEnabled parameter specifies whether the Windows MPIO driver periodically requests that the DSM check its paths. Note that this parameter affects all DSMs on the system.

This parameter is set in the following Windows registry keys:

• HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\msiscdsm\Parameters\

PDORemovePeriod setting used by the Host Utilities
The Windows Host Utilities installer sets PDORemovePeriod to 130 seconds for all MPIO configurations. This time is based on the maximum time expected for a storage system failover, in addition to the time for the path to be established.

The PDORemovePeriod parameter is used only for MPIO configurations. This parameter specifies the maximum time that the MPIO layer waits for the device-specific module (DSM) to find another path to a LUN. If a new path is not found at the end of this time, the MPIO layer removes its pseudo-LUN and reports that the disk device has been removed.

Note that this parameter affects all DSMs on the system.

This parameter is set in the following Windows registry keys:

• HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\msiscdsm\Parameters\

RetryCount setting used by the Host Utilities
The Windows Host Utilities installer sets the RetryCount to 6 for all MPIO configurations. This value enables recovery from a transient path problem. If the path is not recovered in six tries, it is probably a more serious network problem.


The RetryCount parameter specifies the number of times the current path to a LUN is retried before failing over to an alternate path.

Note that this parameter affects all DSMs on the system.

This parameter is set in the following Windows registry keys:

• HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\msiscdsm\Parameters\

RetryInterval setting used by the Host Utilities
The Windows Host Utilities installer sets RetryInterval to 1 second for all MPIO configurations. This value gives the path a chance to recover from a transient problem before trying again.

The RetryInterval parameter specifies the amount of time to wait between retries of a failed path.

Note that this parameter affects all DSMs on the system.

This parameter is set in the following Windows registry keys:

• HKLM\SYSTEM\CurrentControlSet\Services\mpio\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\vnetapp\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\

• HKLM\SYSTEM\CurrentControlSet\Services\msiscdsm\Parameters\

DsmMaximumStateTransitionTime and DsmMaximumRetryTimeDuringStateTransition settings used by the Host Utilities

The Windows Host Utilities installer sets the DsmMaximumStateTransitionTime and the DsmMaximumRetryTimeDuringStateTransition parameters to 120 seconds for MPIO configurations on Windows Server 2008. This value allows time for a path state transition to complete after a storage system failover or other event.

The DsmMaximumStateTransitionTime and DsmMaximumRetryTimeDuringStateTransition parameters specify the time the Windows Server 2008 msdsm waits for an ALUA path transition before returning an I/O error to the layer above it in the stack. Both parameters are currently set because Microsoft uses a different parameter in different release candidate versions of Windows Server 2008.

The parameters are set in the following Windows registry key: HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\


MPIOSupportedDeviceList setting used by the Host Utilities
The Windows Host Utilities installer sets MPIOSupportedDeviceList to NETAPP LUN for all MPIO configurations.

The MPIOSupportedDeviceList parameter specifies that the Windows MPIO component should claim storage devices with the specified vendor identifier and product identifier (VID/PID). This parameter does not determine which DSM handles the claimed devices.

This parameter is set in the following Windows registry key: HKLM\SYSTEM\CurrentControlSet\Control\MPDEV\

DsmSupportedDeviceList setting used by Host Utilities
The Windows Host Utilities installer sets DsmSupportedDeviceList to "NETAPP LUN " for MPIO configurations on Windows Server 2008. This parameter specifies that a DSM should claim storage devices with the specified vendor identifier and product identifier (VID/PID).

This parameter is set for the msdsm included in Windows Server 2008. The msdsm always gives priority to other DSMs. If another DSM is installed and configured to claim all LUNs with a VID/PID of NETAPP LUN, that other DSM would handle the specified LUNs, even though the msdsm has this parameter set.

This parameter is set in the following Windows registry key: HKLM\SYSTEM\CurrentControlSet\Services\msdsm\Parameters\

IPSecConfigTimeout setting used by the Host Utilities
The Windows Host Utilities installer sets IPSecConfigTimeout to 60 when the iSCSI protocol is selected.

The IPSecConfigTimeout parameter specifies how long the driver waits for the discovery service to configure or release IPsec for an iSCSI connection. This value is set to 60 to enable the initiator service to start correctly on slow-booting systems that use CHAP.

This parameter is set in the following Windows registry key: HKLM\SYSTEM\CurrentControlSet\Control\Class\{iSCSI_driver_GUID}\instance_ID\Parameters\IPSecConfigTimeout

LinkDownTime setting used by the Host Utilities
The Windows Host Utilities installer sets LinkDownTime to 5 when the iSCSI protocol is selected.

The LinkDownTime parameter specifies the maximum time in seconds that requests are held in the device queue and retried if the connection to the target is lost. If MPIO is installed, this value is used. If MPIO is not installed, MaxRequestHoldTime is used instead.

This parameter is set in the following Windows registry key: HKLM\SYSTEM\CurrentControlSet\Control\Class\{iSCSI_driver_GUID}\instance_ID\Parameters\LinkDownTime


MaxRequestHoldTime setting used by the Host Utilities
The Windows Host Utilities installer sets MaxRequestHoldTime to 120 when the iSCSI protocol is selected.

The MaxRequestHoldTime parameter specifies the maximum time in seconds that requests are queued if the connection to the target is lost and the connection is being retried. After this hold period, requests are failed with "error no device" and the disk is removed from the system. The setting of 120 enables the connection to survive the maximum expected failover time. This setting is used if MPIO is not installed. If MPIO is installed, LinkDownTime is used instead.

This parameter is set in the following Windows registry key: HKLM\SYSTEM\CurrentControlSet\Control\Class\{iSCSI_driver_GUID}\instance_ID\Parameters\MaxRequestHoldTime

FC HBA parameters set by the Host Utilities
The Windows Host Utilities installer sets the required FC HBA parameters. The installer sets different values depending on whether MPIO support is specified during installation.

The names of the parameters may vary slightly depending on the program. For example, in the QLogic SANsurfer program, the parameter is displayed as Link Down Timeout. The Host Utilities fcconfig.ini file displays this parameter as either LinkDownTimeOut or MpioLinkDownTimeOut, depending on whether MPIO is specified. However, all of these names refer to the same HBA parameter.

For Emulex HBAs, the installation program sets the following HBA parameters:

• LinkTimeOut=5 (with MPIO)

• LinkTimeOut=30 (without MPIO)

• NodeTimeOut=30 (with MPIO)

• NodeTimeOut=120 (without MPIO)

The LinkTimeOut parameter specifies the interval after which a link that is down stops issuing a BUSY status for requests and starts issuing SELECTION_TIMEOUT error status. This LinkTimeOut includes port login and discovery time.

The NodeTimeOut parameter specifies the interval after which a formerly logged-in node issues SELECTION_TIMEOUT error status to an I/O request. This causes the system to wait for a node that might reenter the configuration soon before reporting a failure. The timer starts after port discovery is completed and the node is no longer present.

For QLogic HBAs, the installation program sets the following HBA parameters:

• LinkDownTimeOut=5 (with MPIO)

• LinkDownTimeOut=30 (without MPIO)

• PortDownRetryCount=30 (with MPIO)

• PortDownRetryCount=120 (without MPIO)


The LinkDownTimeOut parameter controls the timeout when a link that is down stops issuing a BUSY status for requests and starts issuing SELECTION_TIMEOUT error status. This LinkDownTimeOut includes port login and discovery time.

The PortDownRetryCount parameter specifies the number of times an I/O request is re-sent, at one-second intervals, to a port that is not responding.


Setting up LUNs

LUNs are the basic unit of storage in a SAN configuration. The host system uses LUNs as virtual disks.

Next topics

LUN overview on page 63

Initiator group overview on page 65

About FC targets on page 66

Adding iSCSI targets on page 68

Accessing LUNs on hosts that use Veritas Storage Foundation on page 70

Accessing LUNs on hosts that use the native OS stack on page 71

Overview of initializing and partitioning the disk on page 71

LUN overview

You can use a LUN the same way you use local disks on the host.

After you create the LUN, you must make it visible to the host. The LUN then appears on the Windows host as a disk. You can:

• Format the disk with NTFS. To do this, you must initialize the disk and create a new partition. Only basic disks are supported with the native OS stack.

• Use the disk as a raw device. To do this, you must leave the disk offline. Do not initialize or format the disk.

• Configure automatic start services or applications that access the LUNs. You must configure these start services so that they are dependent on the Microsoft iSCSI Initiator service.

Next topics

LUN types to use for hosts and guest operating systems on page 63

Overview of creating LUNs on page 64

LUN types to use for hosts and guest operating systems

The LUN type determines the on-disk layout of the LUN.

It is important to specify the correct LUN type to ensure good performance. The LUN type you specify depends on the Windows version, the disk type, and the Data ONTAP version.


Note: Not all LUN types are available when you create LUNs using the FilerView interface. To use the hyper_v, windows_2008, windows_gpt, and windows_lhs LUN types, you must create the LUN using the Data ONTAP command line interface.

Use the following table to select the correct LUN type.

• Master boot record (MBR) disks on Windows Server 2003, Windows XP, or Windows Vista: use the windows LUN type (all Data ONTAP versions).

• Windows Server 2008 Hyper-V LUNs containing virtual hard disks (VHDs): use the hyper_v LUN type (Data ONTAP 7.3.1 and later). Note: For raw LUNs, use the type of the child operating system as the LUN type.

• All other disks on Windows Server 2008: use the windows_2008 LUN type (Data ONTAP 7.2.5 and later, and 7.3.0 RC2 and later).

• GUID Partition Type (GPT) disks on Windows Server 2003: use the windows_gpt LUN type (Data ONTAP 7.2.1 and later).

• All disks on Windows Server 2008 with Data ONTAP 7.3.0 RC1: use the windows_lhs LUN type.

• All disks on Windows Server 2008 with Data ONTAP 7.2.4 and earlier: use the linux LUN type.

Overview of creating LUNs

You can create LUNs manually or by running SnapDrive software.

You can access the LUN using either the FC or the iSCSI protocol. The procedure for creating LUNs is the same regardless of which protocol you use. You must create an initiator group (igroup), create the LUN, and then map the LUN to the igroup.

Note: If you are using the optional SnapDrive software, use SnapDrive to create LUNs and igroups. Refer to the documentation for your version of SnapDrive for specific steps. If you create LUNs manually, you cannot manage them using SnapDrive.

The igroup must be the correct type for the protocol. You cannot use an iSCSI igroup when you are using the FC protocol to access the LUN. If you want to access a LUN with both FC and iSCSI protocols, you must create two igroups, one FC and one iSCSI.

The lun setup command steps you through the process of creating an igroup and LUN on the storage system. You can also create igroups and LUNs by executing a series of individual commands (such as igroup create, lun create, and lun map). Detailed steps for creating LUNs are in the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.
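As a rough sketch, that sequence of individual commands might look like the following on the storage system console. The igroup name, WWPN, volume path, size, and LUN type are illustrative values; substitute the values for your environment (for an iSCSI igroup, use -i and the host's iSCSI node name instead of -f and a WWPN), and omit the LUN ID on lun map to let Data ONTAP assign the lowest available identifier:

igroup create -f -t windows igroup_name WWPN
lun create -s 10g -t windows_2008 /vol/vol_name/lun_name
lun map /vol/vol_name/lun_name igroup_name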

Initiator group overview

Initiator groups specify which hosts can access specified LUNs on the storage system.

Initiator groups (igroups) are protocol-specific.

For FC connections, create an FC igroup using all WWPNs for the host.

For iSCSI connections, create an iSCSI igroup using the iSCSI node name of the host.

For systems using both FC and iSCSI connections to the same LUN, create two igroups: one for FC and one for iSCSI. Then map the LUN to both igroups. You cannot create a combined FC/iSCSI igroup.

Note: Mixed FC and iSCSI connections to the same LUN require Data ONTAP DSM for Windows MPIO. Mixed connections are not supported with the Veritas DSM or other multipathing solutions.

Note: If you are using the Microsoft native DSM for Windows Server 2008 (msdsm), you must enable ALUA for FC igroups. If you are using the Data ONTAP DSM for Windows MPIO, you must disable ALUA for igroups.
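On the storage system console, ALUA is a property of the igroup. A minimal sketch, assuming an FC igroup with the illustrative name igroup_name, might look like the following; use yes with the msdsm and no with the Data ONTAP DSM:

igroup set igroup_name alua yes
igroup show -v igroup_name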

There are many ways to create and manage initiator groups and LUNs on your storage system. These processes vary depending on your configuration. These topics are covered in detail in the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP software.

Next topics

Mapping LUNs to igroups on page 65

About mapping LUNs for Windows clusters on page 66

Related tasks

Enabling ALUA for FC with msdsm on page 43

Disabling ALUA for Data ONTAP DSM on page 42

Mapping LUNs to igroups

When you map a LUN to an igroup, you assign the LUN identifier.

You must assign the LUN ID of 0 to any LUN that will be used as a boot device. LUNs with IDs other than 0 are not supported as boot devices.

If you map a LUN to both an FC igroup and an iSCSI igroup, the LUN has two different LUN identifiers.

Note: The Windows operating system only recognizes LUNs with identifiers 0 through 254, regardless of the number of LUNs mapped. Be sure to map your LUNs to numbers in this range.
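For example, a LUN that is accessed over both protocols might be mapped along these lines; the igroup names, path, and identifiers are illustrative:

lun map /vol/vol_name/lun_name fc_igroup_name 1
lun map /vol/vol_name/lun_name iscsi_igroup_name 2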


About mapping LUNs for Windows clusters

When you use clustered Windows systems, all members of the cluster must be able to access LUNs for shared disks.

Map shared LUNs to an igroup for each node in the cluster.

Attention: If more than one host is mapped to a LUN, you must run clustering software on the hosts to prevent data corruption.

About FC targets

The host automatically discovers FC targets that are accessible to its HBAs. However, you do need to verify that the host selects only primary (non-proxy) paths to FC targets.

About proxy paths in FC configurations

Proxy paths are intended for use when certain storage system resources are not available.

An active/active storage system configuration has both primary and proxy FC paths. Proxy paths have higher overhead and possibly lower performance. To prevent performance problems, make sure the FC paths are configured so that proxy paths are only used when there is a failure.

If your FC paths are not configured correctly, routine traffic can flow over a proxy path. The storage system measures FC traffic over primary and proxy paths. If it detects significant traffic on a proxy path, the storage system issues a log message and triggers an AutoSupport message.

Next topics

Verifying FC paths to LUNs on page 66

Correcting FC path configurations with the Data ONTAP DSM on page 67

Correcting FC path configurations with the Veritas DMP DSM on page 68

Correcting FC path configurations with the Microsoft msdsm on page 68

Verifying FC paths to LUNs

When you configure your host for FC, verify that the active paths are not proxy paths.

You can verify the paths by mapping a LUN to the host on each storage system node, generating I/O to the LUN, and then checking the FC statistics on each node.

Complete the following steps.

Steps

1. Map a LUN to the host on each node in the active/active storage system configuration.


2. On the consoles of both storage systems in the active/active configuration, start collecting statistics using the following command:

sysstat -b

3. Generate I/O to the LUNs.

4. Check the FC statistics on each storage system node to verify that the proxy paths have essentially no traffic. The sysstat command periodically writes a line of statistics to the console. Check the Partner columns; the values should remain close to zero, while the FCP columns should show data.

Note: Some initiators send occasional traffic over passive paths to ensure they are still available, so you typically see some traffic on proxy paths even when the system is correctly configured.

5. Enter Ctrl-C to exit the sysstat command on each console.

If the Partner values remain close to zero, traffic is flowing over the correct paths. If the Partner values are high, as in the example below, the paths are not configured correctly.

Example of high partner values

In this example, all FC traffic is flowing over the proxy paths. Some columns from the sysstat command are removed from the example to make it easier to read.

CPU   FCP  iSCSI  Partner  Total   FCP kB/s     Partner kB/s
                                   in    out     in     out
 6%     0      0      124    124    0      0   5987      26
 9%     0      0      186    186    0      0   9777      15
 7%     0      0      147    147    0      0   6675      26
 6%     0      0       87     87    0      0   3934      14
 1%     0      0        6      6    0      0    257       0

Correcting FC path configurations with the Data ONTAP DSM

When running the Data ONTAP DSM for Windows MPIO, correct any FC path configurations that are using partner (proxy) paths to the LUNs during routine operations. You might need to select a different load balance policy to be able to select which paths are used.

Step

1. Use the DSM management interface to check the paths to the LUNs. The detail display for each LUN shows whether it is using a proxy path. Reconfigure the paths to the LUN so that they do not use a proxy path under normal conditions.

For more information, see the Installation and Administration Guide for your version of the Data ONTAP DSM.


Correcting FC path configurations with the Veritas DMP DSM

When running the Veritas DMP DSM, correct any FC path configurations that are using partner (proxy) paths to the LUNs during routine operations. The Veritas DMP DSM selects FC paths automatically. You cannot manually select paths.

Step

1. Verify that you are running a supported version of the DSM.

See the Host Utilities Release Notes for information about obtaining an updated DSM.

Correcting FC path configurations with the Microsoft msdsm

When running the Microsoft Windows Server 2008 native msdsm, correct any FC path configurations that are using partner (proxy) paths to the LUNs during routine operations. The Microsoft msdsm selects FC paths using ALUA.

Steps

1. Verify that you are running a supported version of Data ONTAP software with the single_image cfmode.

Data ONTAP 7.3.0 is the first supported version.

2. Verify that ALUA is enabled on the igroup.

Related tasks

Enabling ALUA for FC with msdsm on page 43

Adding iSCSI targets

To access LUNs when you are using iSCSI, you must add an entry for the storage system using the Microsoft iSCSI Initiator GUI.

About this task

You only need one entry for each storage system in the configuration, regardless of the number of interfaces that are enabled for iSCSI traffic. An active/active storage system configuration must have two entries, one for each storage system node in the configuration.

The iSCSI Initiator GUI manages connections for both the software initiator and the optional iSCSI HBAs.


You can also add entries for the targets using the iscsicli interface. Enter iscsicli help on the Windows command line for more information about iscsicli.
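As a rough sketch, adding a discovery entry and listing the discovered targets from the command line might look like the following; the portal address is an illustrative value, and iscsicli help shows the full syntax for your initiator version:

iscsicli AddTargetPortal 192.168.10.20 3260
iscsicli ListTargets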

Note: If you are using Veritas Storage Foundation for Windows for iSCSI sessions, use the Microsoft iSCSI initiator interface to log on to the iSCSI targets. Do not use the Veritas Enterprise Administrator interface to log on because it does not create iSCSI sessions with the specified source-target portals.

If you are using SnapDrive for Windows software, use the SnapDrive interface to add iSCSI targets.

To add a target, complete the following steps.

Steps

1. Run the Microsoft iSCSI Initiator GUI.

2. On the Discovery tab, create an entry for the storage system.

3. On the Targets tab, log on to the storage system.

4. If you want the LUNs to be persistent across host reboots, select the Automatically restore this connection when the system boots check box when logging on to the target.

5. If you are using MPIO or multiple connections per session, create additional connections to the target as needed.

Enabling the optional MPIO support or multiple-connections-per-session support does not automatically create multiple connections between the host and storage system. You must explicitly create the additional connections.

For Windows Server 2003, see the section “Multipathing I/O” in the Microsoft iSCSI Software Initiator 2.x Users Guide for specific instructions on configuring multiple paths to iSCSI LUNs.

For Windows Server 2008, see the iSCSI topics in Help.

Next topics

About dependent services on the Native Stack and iSCSI on page 69

About dependent services on Veritas and iSCSI on page 70

About dependent services on the Native Stack and iSCSI

When you use disks based on iSCSI LUNs on a Host Utilities Native stack, you must reconfigure any dependent service or application to start after the iSCSI service.

The Windows disks that are based on iSCSI LUNs become available later in the startup sequence than the local disks do. This can create a problem if you have not reconfigured the dependent services or applications.
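One way to express such a dependency is with the sc.exe service control utility, which makes a service start only after the Microsoft iSCSI Initiator service (MSiSCSI). This is only a sketch; MyAppService is a placeholder for your own service name, and note that sc config depend= replaces the service's entire dependency list, so include any existing dependencies as well:

sc config MyAppService depend= MSiSCSI
sc qc MyAppService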


About dependent services on Veritas and iSCSI

When you use disks based on iSCSI LUNs on a Veritas stack, you must reconfigure any dependent application or service to start after the iSCSI service.

Accessing LUNs on hosts that use Veritas Storage Foundation

To enable the host running Veritas Storage Foundation to access a LUN, you must make the LUN visible to the host.

Before you begin

These steps apply only to a host running Veritas Storage Foundation. They do not apply to the hosts running the native OS stack.

Steps

1. Select Start ➤ All Programs ➤ Symantec ➤ Veritas Storage Foundation ➤ Veritas Enterprise Administrator.

The Select Profile window is displayed.

2. Select a profile and click OK to continue.

The Veritas Enterprise Administrator window is displayed.

3. Click Connect to a Host or Domain.

The Connect window is displayed.

4. Select a Host from the drop-down menu and click Browse to find a host, or enter the host name of the computer and click Connect.

The Veritas Enterprise Administrator GUI window with storage objects is displayed.

5. Select Action ➤ Rescan.

All the disks on the host are rescanned.

6. Select Action ➤ Refresh.

The latest data is displayed.

7. In the Veritas Enterprise Administrator, with the Disks expanded, verify that the newly created LUNs are visible as disks on the host.

The LUNs appear on the Windows host as basic disks under Veritas Enterprise Administrator.


After you finish

You can upgrade the disks to dynamic disks by adding them to dynamic disk groups. For more information on creating basic or dynamic volumes, see the Veritas Storage Foundation Administrator’s Guide.

Related tasks

Accessing LUNs on hosts that use the native OS stack on page 71

Accessing LUNs on hosts that use the native OS stack

To access a LUN when you are using the native OS stack, you must make the LUN visible to the Windows host.

Before you begin

These steps apply only to the native OS stack. They do not apply to hosts running Veritas Storage Foundation.

Steps

1. Right-click My Computer on your desktop and select Manage.

2. Expand Storage and double-click the Disk Management folder.

3. From the Action menu, select Rescan Disks.

4. From the Action menu, select Refresh.

5. In the Computer Management window, with Storage expanded and the Disk Management folder open, check the lower right pane to verify that the newly created LUN is visible as a disk on the host.

Related tasks

Accessing LUNs on hosts that use Veritas Storage Foundation on page 70

Overview of initializing and partitioning the disk

You can create one or more basic partitions on the LUN.

After you rescan the disks, the LUN appears in Disk Management as an Unallocated disk.

If you format the disk as NTFS, be sure to select the Perform a quick format option.

The procedures for initializing disks vary depending on which version of Windows you are running on the host. See the Windows Disk Management online Help for more information.
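As a command-line alternative to the Disk Management GUI, diskpart can initialize and format the new disk. The following is a minimal sketch assuming Windows Server 2008; the disk number and drive letter are illustrative, and older Windows versions support a smaller set of diskpart commands:

diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> attributes disk clear readonly
DISKPART> online disk
DISKPART> create partition primary
DISKPART> format fs=ntfs quick
DISKPART> assign letter=E
DISKPART> exit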


Troubleshooting

This section describes general troubleshooting techniques for Windows Host Utilities.

Be sure to check the latest Release Notes for known problems and solutions.

Next topics

Areas to check for possible problems on page 73

Updating the HBA software driver on page 74

Understanding the Host Utilities changes to FC HBA driver settings on page 75

Enabling logging on the Emulex HBA on page 76

Enabling logging on the QLogic HBA on page 77

About the diagnostic programs on page 77

Areas to check for possible problems

To avoid potential problems, confirm that the Host Utilities support your combination of host operating system software, host hardware, Data ONTAP software, and storage system hardware.

• Check the Interoperability Matrix.

• Verify that you have a correct iSCSI configuration. If iSCSI LUNs are not available after a reboot, verify that the target is listed as persistent on the Persistent Targets tab of the Microsoft iSCSI Initiator GUI. If applications using the LUNs display errors on startup, verify that the applications are configured to be dependent on the iSCSI service.

• Check for known problems. Review the Release Notes for Windows Host Utilities. The Release Notes include a list of known problems and limitations.

• Review the troubleshooting information in the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.

• Search Bugs Online for recently discovered problems. In the Bug Types field under Advanced Search, select ISCSI - Windows, and then click Go!. Repeat the search for Bug Type FCP - Windows.

• Collect information about your system. Record any error messages displayed on the host or storage system console. Collect the host and storage system log files. Record the symptoms of the problem and any changes made to the host or storage system just before the problem appeared. Run the diagnostic programs that are included with Windows Host Utilities.

• Contact technical support. If you are unable to resolve the problem, contact NetApp Global Services for assistance.


Related concepts

About the diagnostic programs on page 77

Related information

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/

Bugs Online - http://now.netapp.com/NOW/cgi-bin/bol

Contacting NetApp Global Services - http://www.netapp.com/us/support/ngs-contacts.html

Updating the HBA software driver

Check the version of the HBA software driver and determine whether it needs to be upgraded.

Before you begin

Current driver requirements are in the Interoperability Matrix.

About this task

To see if you have the latest driver, complete the following steps.

Steps

1. Right-click My Computer and select Manage.

The Computer Management window is displayed.

2. Double-click Device Manager.

A list of installed devices is displayed. Previously installed drivers are listed under SCSI and RAID controllers. One installed driver appears for each port on the HBA.

Note: If you uninstalled a device driver, an FC controller (HBA) appears under Other devices.

3. Expand SCSI and RAID controllers and double-click the appropriate HBA.

The General dialog box is displayed.

4. Click Driver.

• If the driver version is correct, then you do not need to do anything else and can stop now.

• If the version is not correct, proceed to the next step.

5. Obtain the latest supported version from the Emulex or QLogic Web site.
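As a quick command-line cross-check of installed driver versions, you could also query the signed-driver information through WMI and filter for your HBA vendor. This is only a sketch; the vendor strings are illustrative and the exact device names depend on how the driver registers itself:

wmic path win32_pnpsigneddriver get DeviceName, DriverVersion | findstr /i "emulex qlogic"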

Related information

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/


Emulex support page for NetApp - http://www.emulex.com/ts/index.html

QLogic support page for NetApp - http://support.qlogic.com/support/drivers_software.asp

Understanding the Host Utilities changes to FC HBA driver settings

During the installation of the required Emulex or QLogic HBA drivers on an FC system, several parameters are checked and, in some cases, modified.

The Host Utilities set values for the following parameters:

• LinkTimeOut – defines the length of time in seconds that the host port waits before resuming I/O after a physical link is down.

• NodeTimeOut – defines the length of time in seconds before the host port recognizes that a connection to the target device is down.

When troubleshooting HBA issues, check to make sure these settings have the correct values. The correct values depend on two factors:

• The HBA vendor

• Whether you are using multipathing software (MPIO)

You can correct the HBA settings by running the Repair option of the Windows Host Utilities installer.

Next topics

Verifying the Emulex HBA driver settings on FC systems on page 75

Verifying the QLogic HBA driver settings on FC systems on page 76

Verifying the Emulex HBA driver settings on FC systems

On FC systems, verify the Emulex HBA driver settings. These settings must exist for each port on the HBA.

Steps

1. Open HBAnyware.

2. Select the appropriate HBA from the list and click the Driver Parameters tab.

The driver parameters appear.

3. If you are using MPIO software, ensure you have the following driver settings:

• LinkTimeOut - 5

• NodeTimeOut - 30


4. If you are not using MPIO software, ensure you have the following driver settings:

• LinkTimeOut - 30

• NodeTimeOut - 120

Verifying the QLogic HBA driver settings on FC systems

On FC systems, verify the QLogic HBA driver settings. These settings must exist for each port on the HBA.

Steps

1. Open SANsurfer and click Connect on the toolbar.

The Connect to Host dialog appears.

2. Select the appropriate host from the list and click Connect.

A list of HBAs appears in the FC HBA pane.

3. Select the appropriate HBA port from the list and click the Settings tab.

4. Select Advanced HBA Port Settings from the Select Settings section.

5. If you are using MPIO software, ensure you have the following driver settings:

• Link Down Timeout (linkdwnto) - 5

• Port Down Retry Count (portdwnrc) - 30

6. If you are not using MPIO software, ensure you have the following driver settings:

• Link Down Timeout (linkdwnto) - 30

• Port Down Retry Count (portdwnrc) - 120

Enabling logging on the Emulex HBA

In some unusual circumstances, your technical support engineer might request that you enable error logging on the Emulex HBA miniport driver.

Steps

1. Open HBAnyware.

2. Select the appropriate HBA from the list and click the Driver Parameters tab.

3. Select the LogErrors parameter and change the value to the desired severity level.

4. Click Apply.


Enabling logging on the QLogic HBA

In some unusual circumstances, your technical support engineer might request that you enable error logging on the QLogic HBA miniport driver.

Steps

1. Open SANsurfer.

2. Open the Settings menu and select Options.

3. Ensure Log Informational Events, Warning Events, and Enable Warning display are selected.

4. Click OK.

About the diagnostic programs

Windows Host Utilities provides several diagnostic programs you can use for troubleshooting.

When you run these programs, they gather information about different aspects of your system. The specific programs installed on your host depend on options you selected when installing Windows Host Utilities.

Note: The diagnostic programs do not currently support IPv6. See the Release Notes for information on collecting diagnostic information in an IPv6 environment.

There are two types of diagnostic programs: programs that display information you can use yourself, and programs that collect and package information that you send to technical support.

The following programs display information that you can use yourself:

• hba_info.exe. The hba_info.exe program displays information about FC HBAs in the Windows host. Information includes the worldwide port names and firmware version.

• msiscsi_info.exe. The msiscsi_info.exe program displays detailed information about the iSCSI initiator version, iSCSI targets, sessions, connections, and LUNs.

• san_version.exe. The san_version.exe program displays the version number of the Host Utilities programs installed on the Windows host. It also displays the model and firmware version of FC HBAs installed on the host.

• vm_info.exe. The vm_info.exe program displays detailed information about each virtual machine and virtual switch running on a Hyper-V system. Information includes virtual hard disk resources and where they are stored.

The following programs collect configuration information in a file that you can send to technical support. Normally, you use these programs only when requested to do so by a technical support engineer. The programs are:


• controller_info.exe

Note: This program was named filer_info.exe in previous Host Utilities.

• windows_info.exe

• brocade_info.exe, cisco_info.exe, mcdata_info.exe, and qlogic_info.exe

Next topics

Running the diagnostic programs (general steps) on page 78

Collecting storage system information on page 79

Collecting Windows host information on page 79

Collecting FC switch information on page 80

About collecting information on Veritas Storage Foundation on page 80

Running the diagnostic programs (general steps)

Although some of the scripts have unique arguments, the general steps for running diagnostic programs are the same.

About this task

These programs run in command-line mode only. Double-clicking a program name to run it does not work.

Step

1. Select Start ➤ Programs ➤ NetApp ➤ Windows Host Utilities ➤ program_name.

The system opens a command window. If the program requires you to provide parameters, the command syntax is displayed. If the program does not require parameters, performing this step runs the program. The program either displays the information on the screen or saves it to a file that you can send to technical support for analysis.

You can use this step to run:

• hba_info.exe

• msiscsi_info.exe

• san_version.exe

• vm_info.exe

• controller_info.exe

• windows_info.exe

• brocade_info.exe

• cisco_info.exe

• mcdata_info.exe


• qlogic_info.exe

Collecting storage system information

If you are having a problem, the technical support engineer might ask you to run the controller_info.exe program to collect storage system configuration information.

Before you begin

You should run this utility only when requested to do so by a technical support engineer.

About this task

To collect information on your storage system, complete the following steps.

Steps

1. Select Start ➤ Programs ➤ NetApp ➤ Windows Host Utilities ➤ Controller Info.

A command prompt opens and the command syntax is displayed.

2. Enter the appropriate information for your storage system.

Note: For an active/active storage system configuration you should run the program twice, once for each controller.

When the program completes, it displays a message telling you where the file is stored. Send this file to the address provided by the technical support engineer. The message is similar to the following:

Controller ss1 info is in directory C:\temp\netapp\netapp_controller_ss1.yourcompany.com
Compressed file is C:\temp\netapp\netapp_controller_ss1.yourcompany.com.tar.gz
Please send this file to Customer Support for analysis.

Collecting Windows host information

If you are having a problem, a technical support engineer might ask you to run the windows_info.exe program to collect Windows configuration information.

Before you begin

You should run this utility only when requested to do so by a technical support engineer.

About this task

To collect information about your Windows host, complete the following step.


Step

1. Select Start ➤ Programs ➤ NetApp ➤ Windows Host Utilities ➤ Windows Info.

When the program completes, it displays a message telling you where the file is stored. Send this file to the address provided by the technical support engineer. The message is similar to the following:

Host info is in directory C:\temp\netapp\windows_info
Compressed file is C:\temp\netapp\windows_info.tar.gz
Please send this file to Customer Support for analysis.

Collecting FC switch information

If you are having a problem, a technical support engineer might ask you to run one of the FC switch diagnostic programs: brocade_info.exe, cisco_info.exe, mcdata_info.exe, or qlogic_info.exe.

Steps

1. Select Start ➤ Programs ➤ NetApp ➤ Windows Host Utilities ➤ program_name.

program_name is the name of the program for your switch.

A command prompt opens and the command syntax is displayed.

2. Enter the appropriate information for your switch.

Note: If the Windows host is connected to more than one switch, repeat the command for each switch.

When the program completes, it displays a message telling you where the file is stored. Send this file to the address provided by the technical support engineer. The message is similar to the following:

Executing "supportShow" (may take up to 10 mins)...
Switch fcsw32a1 info is in directory C:\temp\netapp\brocade_switch_fcsw32a1
Compressed file is C:\temp\netapp\brocade_switch_fcsw32a1.tar.gz
Please send this file to Customer Support for analysis.

About collecting information on Veritas Storage Foundation

You must use the VxExplorer tool, which is included with Veritas Storage Foundation for Windows, to collect information specific to Veritas configurations.

The windows_info program does not collect information specific to Veritas Storage Foundation configurations.


SAN Booting

This section describes how to boot your host from a storage system LUN.

Next topics

What SAN booting is on page 81

Configuring FC SAN booting on page 83

Creating the boot LUN on page 89

Configuring Emulex BootBIOS on page 89

Configuring QLogic BootBIOS on page 90

About configuring BIOS to allow booting from a LUN on page 91

Configuring QLogic iSCSI HBA boot BIOS settings on page 92

Getting the correct driver for the boot LUN on page 93

Configuring a VCS or MSCS cluster with Veritas in a SAN booted environment on page 95

What SAN booting is

SAN booting is the general term for booting a host from a storage system LUN instead of an internal hard disk.

Fibre Channel SAN booting does not require support for special SCSI operations; it is not different from any other SCSI disk operation. The HBA uses special code in the BIOS that enables the host to boot from a LUN on the storage system.

iSCSI SAN booting also uses special code in the BIOS that enables the host to boot from a LUN on the storage system. You need to set specific parameters in the BIOS to enable iSCSI SAN booting.

The general process is as follows.

1. After the HBA has accessed the BIOS, use the Emulex or QLogic BootBIOS utility to configure the LUN as a boot device.

2. Configure the PC BIOS to make the LUN the first disk device in the boot order.

3. Install the following on the LUN:

• Windows operating system

• HBA driver

Note: Following a system failure, the bootable virtual disk is no longer the default boot device. You need to reconfigure the hard disk sequence in the system BIOS to set the bootable virtual disk as the default boot device.


Next topics

How SnapDrive supports SAN booting on page 82

General requirements for SAN booting on page 82

About queue depths used with FC SAN booting on page 82

How SnapDrive supports SAN booting

Versions of SnapDrive supported by Windows Host Utilities detect both bootable virtual disks (eligible for SAN booting) and nonbootable virtual disks.

SnapDrive differentiates between the two in the SnapDrive GUI by representing each virtual disk type with a separate icon.

SnapDrive identifies bootable virtual disks and prevents you from performing operations you would normally perform on a nonbootable virtual disk.

General requirements for SAN booting

For Windows 2003 configurations, store the pagefile.sys file on the local disk if you suspect pagefile latency issues.

See the Microsoft article Support for booting from a Storage Area Network (SAN) for more information about pagefiles.

Check the Interoperability Matrix for the latest SAN booting requirements for your operating system version.

Related information

Support for booting from a Storage Area Network (SAN) - http://support.microsoft.com/kb/305547/en-us

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/

About queue depths used with FC SAN booting

For best results, ensure you have the correct queue depths for FC SAN booting configurations.

It is best to tune the queue depths on the server-side HBA for Windows hosts to 254 for Emulex HBAs or 256 for QLogic HBAs.

Note: To avoid host queuing, the host queue depths should not exceed the target queue depths on a per-target basis. See the FC and iSCSI Configuration Guide for more information about target queue depths.

Related information

FC and iSCSI Configuration Guide - http://now.netapp.com/NOW/knowledge/docs/san/fcp_iscsi_config/


Configuring FC SAN booting

To configure FC SAN booting, you must perform a number of steps on both the host and the storage system.

Steps

1. Enable BootBIOS on the HBA.

2. For Fibre Channel, obtain the WWPN of the initiator HBA installed in the host. The WWPN is required when you configure the igroup on the storage system.

3. Use port sets to limit the available paths to only a single path to the boot LUN.

4. Create the LUN that will be used as a boot device and map the LUN to the igroup that contains the WWPNs of the host.

5. Configure BootBIOS to use the LUN as a boot device.

6. Configure the host BIOS to make the boot LUN the first disk device in the boot order.

7. Get the correct Emulex or QLogic driver for the boot LUN.

8. Install the Windows operating system on the new LUN.

9. If you are using the Veritas Storage Foundation for Windows software, install it now. Follow the instructions in the Veritas Storage Foundation and High Availability Solutions Installation and Upgrade Guide. Be sure to install the Veritas DMP DSM when you install the Veritas Storage Foundation for Windows software. Do not install the Data ONTAP DSM for Windows MPIO with Veritas Storage Foundation.

10. Install Windows Host Utilities.

Note: For the latest information about SAN booting, including restrictions and configuration recommendations, refer to the Windows Host Utilities Release Notes.

Next topics

About BootBIOS and SAN booting on page 84

Enabling Emulex BootBIOS using HBAnyware on page 84

Enabling Emulex BootBIOS using LP6DUTIL on page 85

Enabling QLogic BootBIOS on page 85

WWPN for the HBA required on page 86

Configuring a single path between the host and storage system on page 86

Related tasks

Recording the WWPN on page 51

Getting the correct driver for the boot LUN on page 93


About BootBIOS and SAN booting

BootBIOS enables the HBA to access the existing BIOS on Intel 32-bit, Intel Xeon 64-bit, and AMD Opteron 64-bit systems. It also enables you to designate a LUN as the boot device for the host.

BootBIOS firmware is installed on the HBA you purchased.

Note: Ensure you are using the version of firmware required by this version of Windows Host Utilities, as listed in the Interoperability Matrix.

BootBIOS firmware is disabled by default. To configure SAN booting, you must first enable BootBIOS firmware and then configure it to boot from a SAN disk.

Related information

NetApp Interoperability Matrix - http://now.netapp.com/NOW/products/interoperability/

Enabling Emulex BootBIOS using HBAnyware

If you have an internal disk, then you can enable BootBIOS by using the HBAnyware configuration utility, which is included with the Emulex HBA driver package.

Steps

1. Open HBAnyware.

2. Select the appropriate HBA from the list.

3. Click Firmware.

4. Click Enable next to BootBIOS Message.

The BootBIOS message is enabled, allowing you to press Alt-E to access BootBIOS upon rebooting.

5. Reboot the host.

6. Press Alt-E to access BootBIOS.

BootBIOS displays a menu of available adapters.

7. Enter the number next to the appropriate adapter and press Enter.

8. Type 2, Configure this Adapter’s Parameters, and press Enter.

9. Type 1, Enable or Disable BIOS, and press Enter.

10. Type 1 to enable BIOS and press Enter.

BIOS is enabled.

11. Enter x to exit BootBIOS.

12. Enter y to reboot the system.

The changes are saved.


Enabling Emulex BootBIOS using LP6DUTIL

If you did not install an operating system on your host, you can enable Emulex BootBIOS from a floppy disk using the LP6DUTIL utility. This utility is included with your Emulex software package.

Steps

1. Create a bootable floppy disk.

2. Copy the contents of the Emulex HBA firmware package to the floppy disk.

3. Boot the host from the floppy disk.

4. At the command prompt, type lp6dutil and press Enter.

The lp6dutil program starts.

5. Press Alt-L to open the Flash menu.

6. Type B for BootBIOS.

A list of adapters appears.

7. Select the appropriate adapter and press Enter.

8. Press Alt-C to activate the selected adapter.

9. Press Tab until OK is selected and press Enter.

The BootBIOS is enabled.

10. Exit the lp6dutil program.

11. Reboot the host.

Enabling QLogic BootBIOS

Enabling QLogic BootBIOS configures the boot device.

Steps

1. Reboot the host.

2. Press Ctrl-Q to access BootBIOS.

The QLogic Fast!UTIL displays the available adapters, listed in boot order. The primary boot device is the first device listed. If the primary boot device is unavailable, the host boots from the next available device in the list.

3. Press Enter to configure the boot device.

The Fast!UTIL options appear.

4. Select Configuration Settings and press Enter.


The Configuration Settings appear.

5. Select Adapter Settings and press Enter.

The Adapter Settings appear.

6. Scroll to Host Adapter BIOS:

• If this option is disabled, press Enter to enable it.

• If this option is enabled, go to the next step.

7. Press Esc to return to the Configuration Settings.

8. Press Esc.

You are prompted to save the configuration settings.

9. Select Save changes and press Enter.

The changes are saved and you are returned to the configuration settings.

10. Press Esc.

You are prompted to reboot the system.

11. Select Reboot system and press Enter.

The system reboots.

WWPN for the HBA required

To create the igroup for the boot LUN, you must obtain the WWPN of the HBA installed in the host.

Related tasks

Recording the WWPN on page 51

Configuring a single path between the host and storage system

During installation, the host must have only a single path to the storage system.

Before you begin

MPIO (multipathing) drivers are not present during operating system installation and the first boot. When preparing to install the operating system, ensure there is only one path from the HBA to the LUN used as a boot device. The prerequisites are:

• There is at least one HBA installed in the host. If there is more than one HBA, only one HBA is mapped to the boot LUN.

• Two dual-port FC adapters are installed in each storage system.

• On both storage systems, two ports from Adapter1 connect to a single switch, and two ports from Adapter2 connect to a different switch.


Steps

1. Use port sets to ensure there is a single path to the boot LUN.

2. Map the boot LUN in the HBA BIOS and system BootBIOS.

3. After installing the operating system, install Windows Host Utilities and reboot your system.

4. Install a supported multipathing solution, such as Data ONTAP DSM for Windows MPIO or Veritas Storage Foundation for Windows, then reboot your system. See the multipathing product documentation for details.

5. Verify the availability of the paths to the LUN in the management interface for your multipathing solution.

Use port sets to limit FC paths

Use port sets to limit the number of paths from a host to the storage system.

The single_image cfmode setting on the storage system creates multiple paths from an FC LUN to a host. By default, all LUNs in an active/active configuration are exported to all FC target ports. When setting up a SAN boot configuration, you must have only a single path from the host to the boot LUN.

Step

1. Enter the following command at a storage system prompt:

portset create -f portset_name port...

-f creates an FC port set.

portset_name is the name you specify for the port set. You can specify a string of up to 95 characters.

port is the target FC port. You can specify a list of ports, but at this point, you must specify a single port. If you do not specify any ports, then you create an empty port set. You specify a port using the following formats:

• filername:slotletter adds only a specific port on a storage system—for example, filerA:4b.

• slotletter is the slot and letter of the port—for example, 4b. If you use the slotletter format and the system is in an active/active configuration, the ports from both the local and partner storage system are added to the port set.

Bind the port group to the igroup for the boot LUN.

Next topics

Binding a port set on page 88

Adding a port to a port set on page 88

Removing a port from a port set on page 88


Binding a port set

Binding a port set to an igroup allows you to precisely control which target ports can access the LUN.

Step

1. To bind a port set to an igroup, enter the following command: igroup bind igroup_name portset_name

igroup_name is the name of the igroup.

portset_name is the name of the port set.
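Putting the create and bind steps together for a SAN boot setup, a minimal sketch might look like the following; boot_ps and boot_ig are illustrative names, and filerA:4b follows the port format described above:

portset create -f boot_ps filerA:4b
igroup bind boot_ig boot_ps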

Adding a port to a port set

Use the portset add command to add a port to a port set.

Step

1. To add a port to a port set, enter the following command: portset add portset_name port...

portset_name is the name of the port set.

port is the target FC port. You can specify more than one port. You specify a port by using the following formats:

• slotletter is the slot and letter of the port—for example, 4b. If you use the slotletter format and the system is in a cluster, the port from both the local and partner system is added to the port set.

• filername:slotletter adds only a specific port on a system—for example, filerA:4b.

Removing a port from a port set

Use the portset remove command to remove the port from a port set.

Step

1. Enter the following command: portset remove portset_name port...

portset_name is the name of the port set.

port... is the target FC port. You can specify more than one port. You specify a port by using the following formats:


• slotletter is the slot and letter of the port—for example, 4b. If you use the slotletter format and the system is in a cluster, the port from both the local and partner system is removed from the port set.

• filername:slotletter removes only a specific port on a system—for example, filerA:4b.

For additional instructions, see the Data ONTAP Block Access Management Guide for iSCSI and FC for your version of Data ONTAP.

Creating the boot LUN

After you obtain the WWPN for the HBA, you must create the LUN to use as a boot device, map it to an igroup, and assign the LUN ID of 0.

Step

1. Create a LUN and map it to an igroup with LUN ID of 0.
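On the storage system console, this might look like the following sketch; the volume path, size, igroup name, and LUN type are illustrative values, and the LUN type should match your Windows version as described in the LUN type list earlier in this guide:

lun create -s 40g -t windows_2008 /vol/boot_vol/boot_lun
lun map /vol/boot_vol/boot_lun boot_ig 0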

Related concepts

Setting up LUNs on page 63

Configuring Emulex BootBIOS

If you are using an Emulex HBA, configure the Emulex BootBIOS to boot from the LUN.

Steps

1. Reboot the host.

2. Press Alt-E to access BootBIOS.

BootBIOS displays a menu of available adapters.

3. Type the number of the adapter you want to configure and press Enter.

4. Type 1 to configure the primary boot device.

BootBIOS displays a menu of bootable devices. The devices are listed in boot order. The primary boot device is the first device listed. If the primary boot device is unavailable, the host boots from the next available device in the list.

5. Type 1 to select the primary boot entry.

BootBIOS displays a menu of boot devices.

6. Type the two-digit menu entry for the LUN and press Enter.


BootBIOS prompts you to enter the two digits of the starting LUN, in hexadecimal format.

7. Type 00 and press Enter.

BootBIOS displays a menu of LUNs. The storage system LUN appears as follows:

01 LUN 0.2

8. Type 01 to select the LUN and press Enter.

BootBIOS prompts you to configure the device to boot using WWPN or DID.

9. Type 1 to boot the device using WWPN and press Enter.

BootBIOS returns to the menu of boot devices.

10. Press Esc to return to the previous menu.

Configuring QLogic BootBIOS

If you are using a QLogic HBA, configure the QLogic BootBIOS to boot from the LUN.

Steps

1. Reboot the host.

2. Press Ctrl-Q to access BootBIOS.

The QLogic Fast!UTIL displays the available adapters, listed in boot order. The primary boot device is the first device listed. If the primary boot device is unavailable, the host boots from the next available device in the list.

3. Press Enter to configure the boot device.

The Fast!UTIL options appear.

4. Select Configuration Settings and press Enter.

The Configuration Settings appear.

5. Select Selectable Boot Settings and press Enter.

The Selectable Boot Settings appear.

6. Scroll to Selectable Boot:

• If this option is disabled, press Enter to enable it.

• If this option is enabled, go to the next step.

7. Select the (Primary) Boot Port Name, LUN field and press Enter.


The available Fibre Channel devices appear.

8. Select the boot LUN from the list of devices and press Enter.

The LUN name and WWPN display as the primary boot device in the Selectable Boot Settings.

About configuring BIOS to allow booting from a LUN

If your host has an internal disk, you must enter BIOS setup to configure the host to boot from the LUN.

You must ensure the internal disk is not bootable through the BIOS boot order.

The BIOS setup program differs depending on the type of PC BIOS your host is using. The BIOS setup programs are:

• Dell BIOS Revision A10

• IBM BIOS

• Phoenix BIOS 4 Release 6

Note:

If your host is running a different type of PC BIOS, see the vendor documentation for the host system or see the vendor’s Web site for more information about configuring BIOS setup.

Next topics

Configuring a Dell BIOS Revision A10 on page 91

Configuring an IBM BIOS on page 92

Configuring a Phoenix BIOS 4 Release 6 on page 92

Configuring a Dell BIOS Revision A10

Configure a BIOS setup for Dell BIOS Revision A10 to boot from the HBA.

Steps

1. Reboot the host.

2. Press F2 to enter BIOS setup.

3. In BIOS setup, select Boot Sequence.

4. Select option 3: Hard drive.

5. From the Hard drive menu, select Disk drive sequence.

6. From the Disk drive sequence menu, select the HBA.


Configuring an IBM BIOS

Configure a BIOS setup for IBM to boot from the HBA.

Steps

1. Reboot the host.

2. Press F1 to enter BIOS setup.

3. Select Start Options.

4. Select PCI Device Boot Priority.

5. Select the slot in which the HBA is installed.

Configuring a Phoenix BIOS 4 Release 6

Configure Phoenix BIOS to boot from the HBA.

Steps

1. Reboot the host.

2. Press F2 to enter BIOS setup.

3. Select the Boot tab.

4. The Boot tab lists the boot device order. Ensure the HBA is configured as the First Boot Device.

5. Select Hard Drive.

6. Configure the boot LUN as the first boot device.

Configuring QLogic iSCSI HBA boot BIOS settings

To configure the iSCSI HBA, limit the configuration to one iSCSI path and set the HBA parameters. More information is available from the QLogic support site.

For detailed instructions on configuring the QLogic iSCSI HBA boot BIOS settings, see the QLogic document Boot from san configuration update -- 25. Although not written for NetApp storage systems, the steps for configuring the boot BIOS are the same.

To locate the QLogic document, go to the QLogic support site. In the Search box, enter "SAN Boot". Under Product Type, select iSCSI HBAs, and then click Search.

Next topics

Configuration process requires limited paths to HBA on page 93

Setting iSCSI HBA parameters in boot BIOS on page 93


Related information

QLogic support site - http://solutions.qlogic.com/KanisaSupportSite/supportcentral/supportcentral.do?id=m1

Configuration process requires limited paths to HBA

While configuring SAN booting with the iSCSI HBA, restrict the HBA to a single path to the iSCSI target.

You can add additional paths once Windows is installed and you have a multipathing solution in place.

Setting iSCSI HBA parameters in boot BIOS

You must set the required iSCSI HBA settings in the boot BIOS.

Steps

1. Restart the Windows host and press Ctrl-Q to open the QLogic Fast!UTIL menu when prompted.

2. Select the iSCSI HBA, and then select Configuration Settings > Host Adapter Settings.

3. Specify an IP address for each port.

4. Set the Connection KeepAliveTO value to 180.

5. Enable ARP Redirect.

6. Set the iSCSI node name of all iSCSI HBA ports to the same name as shown in the iSCSI initiator GUI on the Windows host.

7. Enter the IP address of the target (storage system) port where the boot LUN is located.

8. Save the HBA settings and reboot the Windows host.

Getting the correct driver for the boot LUN

When you install Windows on the boot LUN, you must have a copy of the correct HBA driver to install as a third-party SCSI array driver from a floppy disk (Windows Server 2003) or a CD (Windows Server 2008).

Before you begin

When you boot from a LUN, you must ensure the operating system on the LUN has the required HBA driver for booting from a LUN. You must download these drivers from the QLogic or Emulex Web site.

Steps

1. Download the Emulex or QLogic driver:

• Emulex: go to Emulex downloads for Windows and download the Storport Miniport driver.


• QLogic: go to QLogic downloads for Windows and download the appropriate STOR Miniport driver.

2. Copy the driver files to a floppy disk (Windows Server 2003) or a CD (Windows Server 2008).

Related information

Emulex downloads for Windows - http://www.emulex.com/support/windows/index.jsp

QLogic downloads for Windows - http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/default.aspx

Installing Windows on the boot LUN

To install Windows on the LUN, complete the following steps.

Steps

1. Insert the Windows CD-ROM and reboot the host.

A message displays, indicating that the HBA BIOS is installed along with the boot LUN.

LUN: 00 NETAPP LUN
BIOS is installed successfully!

Note:

If the message does not display, do not continue installing Windows. Check to ensure that the LUN is created and mapped, and that the target HBA is in the correct mode for directly connected hosts. Also, ensure that the WWPN for the HBA is the same WWPN you entered when creating the igroup.

If the LUN appears but the message indicates the BIOS is not installed, reboot and enable the BIOS.

2. When prompted, press any key to boot from the CD-ROM.

3. When prompted, press F6 to install a third-party SCSI array driver.

4. Insert the HBA driver floppy disk you created when the following message is displayed:

Setup could not determine the type of one or more mass storage devices installed in your system, or you have chosen to manually specify an adapter.

5. Press S to continue.

6. From the list of HBAs, select the supported HBA you are using and press Enter.

The driver for the selected HBA is configured in the Windows operating system.

7. Follow the prompts to set up the Windows operating system. When prompted, set up the Windows operating system in a partition formatted with NTFS.

8. The host system reboots and then prompts you to complete the server setup process.


9. Complete the server setup process. The following prompts require specific responses:

• Digital Signature Not Found window — Click Yes to continue with the Windows setup. This window asks you whether you want to search the Microsoft Web site for a digital signature for the HBA software. You do not need a digital signature to use the HBA software, so click Yes to continue.

• The Found New Hardware wizard prompts you to install and configure the driver. Click Cancel to continue the setup process.

10. Reboot the host.

11. Install Windows.

12. Install Windows Host Utilities.

Related references

About BootBIOS and SAN booting on page 84

Configuring a VCS or MSCS cluster with Veritas in a SAN booted environment

When configuring a VCS or MSCS cluster with Veritas Storage Foundation for Windows in a SAN booted environment, run the vxclus command to set required registry entries.

About this task

The vxclus command enables the cluster disk group to reside on the same bus as the cluster node's system and boot disk.

Step

1. Enter

vxclus UseSystemBus ON

on the Windows host.

See the Veritas Storage Foundation Administrator’s Guide for details about using the vxclus command.

The Windows registry is updated to enable a dynamic cluster disk group to be created on the same bus as the cluster node's system and boot disk.


Index

A

ALUA
   disabling for Data ONTAP DSM for Windows MPIO 42
   enabling for msdsm 43

application
   configuring to start after iSCSI 69, 70

B

binding
   port set to igroup 88

BIOS
   configuring Dell for SAN booting 91
   configuring for SAN booting 91
   configuring IBM for SAN booting 92
   configuring Phoenix for SAN booting 92

boot LUN
   creating 89

BootBIOS
   configuring for Emulex HBA 89
   configuring for QLogic HBA 90
   displaying FC WWPN 52
   enabling for Emulex HBA using HBAnyware 84
   enabling for Emulex HBA using LP6DUTIL 85
   enabling for QLogic HBA using Fast!UTIL 85
   overview 84

brocade_info
   collecting FC switch information 80

C

CHAP
   security options for iSCSI 41

cisco_info
   collecting FC switch information 80

ClusSvcHangTimeout setting 57

cluster
   configuring with SAN booting 95
   mapping LUNs 66

clustering
   with Hyper-V 28

command line
   repairing or removing Host Utilities 49

command line installation 47

configuration
   verifying 34

configurations
   Fibre Channel 22
   Fibre Channel over Ethernet 24
   supported 21

controller_info
   collecting storage system information 79

converged network adapter (CNA) 24

D

data center bridging (DCB) switch 24

Data ONTAP
   licenses required 35

Data ONTAP DSM for Windows MPIO 26, 42, 50
   disabling ALUA 42
   effect of removing Host Utilities 50
   installing 42

DDI-1 package
   downloading and installing for Veritas Storage Foundation 46

Dell BIOS
   configuring for SAN booting 91

device-specific module
   choices 26
   installing 42

diagnostic programs
   brocade_info 80
   cisco_info 80
   controller_info 79
   how to run 78
   mcdata_info 80
   qlogic_info 80
   troubleshooting with 77
   windows_info 79

disk
   initializing and partitioning 71

Disk Management
   accessing LUNs 71


documentation
   finding more information 30

driver
   verifying HBA version 74

DSM
   choices 26
   installing 42

DsmMaximumRetryTimeDuringStateTransition setting 59

DsmMaximumStateTransitionTime setting 59

DsmSupportedDeviceList setting 60

dynamic disk support 25

E

Emulex BootBIOS
   displaying FC WWPN 52

Emulex HBA
   configuring 35
   configuring BootBIOS 89
   enabling BootBIOS using HBAnyware 84
   enabling BootBIOS using LP6DUTIL 85
   enabling logging 76
   getting correct driver for SAN booting 93
   settings for Host Utilities 61
   verifying driver version 74

error recovery level
   for iSCSI 40

F

failover clusteradding Hyper-V VM 44

Fast!UTILenabling BootBIOS for QLogic HBA 85setting iSCSI parameters for SAN booting 93

FCoE
    overview 23
    supported configurations 24

Fibre channel
    targets 66

Fibre Channel
    access to storage 22
    collecting FC switch troubleshooting information 80
    configuring switch 35
    recording WWPN 51
    storage system port media type 36
    supported configurations 22

Fibre Channel HBA
    configuring 35

Fibre Channel over Ethernet
    access to storage 23
    overview 23
    supported configurations 24

filer_info
    collecting storage system information 79

finding more information 30

H

HBA
    configuring 35
    configuring BootBIOS for Emulex 89
    configuring BootBIOS for QLogic 90
    configuring iSCSI 37
    enabling Emulex BootBIOS using HBAnyware 84
    enabling logging on Emulex 76
    enabling logging on QLogic 77
    enabling QLogic BootBIOS using Fast!UTIL 85
    getting correct driver for SAN booting 93
    installing iSCSI 39
    recording FC WWPN 51
    setting iSCSI parameters for SAN booting 93
    setting summary 56
    settings changed by Host Utilities 75
    settings for Host Utilities 61
    verifying driver version 74

hba_info
    displaying FC WWPN 52

HBAnyware
    enabling BootBIOS for Emulex HBA 84
    enabling logging on Emulex HBA 76
    verifying Emulex HBA settings 75

host
    collecting troubleshooting information for 79
    verifying configuration 34

Host Utilities
    components 19
    components of Host Utilities 19
    installation overview 46
    installing and configuring (high level) 33
    installing and configuring overview 20
    installing from a command line 47
    installing interactively 47
    introduction 19
    programs included 20
    repairing and removing 49
    repairing or removing from command line 49
    repairing or removing interactively 49
    settings 53
    settings summary 54, 56
    supported Windows configurations 21

hotfixes
    installing 47

Hyper-V
    adding VM to failover cluster 44
    clustering 28
    configuring SUSE Linux guest OS 44
    configuring virtual machines 44
    LUN layout 28
    overview 27
    storage options 27

hyper-v LUN type 63

I

IBM BIOS
    configuring for SAN booting 92

igroup
    binding port set 88
    disabling ALUA 42
    enabling ALUA 43
    mapping LUN 65
    overview 65

info scripts
    how to run 78
    troubleshooting with 77

information
    finding more 30

initializing a disk 71

initiator
    downloading iSCSI 38
    installing iSCSI 39
    iSCSI configuring 37
    iSCSI options 37

initiator group
    mapping LUN 65
    overview 65

initiator node name
    recording iSCSI 53

installing
    DSM software 42
    Host Utilities 46
    Host Utilities from command line 47
    Host Utilities interactively 47
    iSCSI software initiator 39
    MPIO software 41
    SnapDrive for Windows software 48
    Veritas Storage Foundation for Windows 45

installing and configuring
    overview 20

installing and configuring Host Utilities (high level) 33
IPSecConfigTimeout setting 60

IPv6
    overview 25

IQN
    recording iSCSI 53

iSCSI
    access to storage 24
    adding target 68
    CHAP security options 41
    configuring HBA 37
    configuring HBA for SAN booting 92
    configuring initiator 37
    creating single path when installing SAN booting 93
    dependent services 69, 70
    downloading initiator 38
    error recovery level 40
    initiator service for HBA 39
    installing HBA 39
    installing software initiator 39
    iscsicli command 68
    multiple connections per session 40
    node name overview 51
    recording initiator node name 53
    setting iSCSI HBA parameters for SAN booting 93
    using IPv6 25

iSCSI initiator
    for Virtual Server 2005 virtual machine 29
    options 37

L

language versions of Windows supported 30

licenses
    required for Data ONTAP 35

LinkDownTime setting 60

LinkDownTimeout setting
    verifying for QLogic HBA 76

LinkDownTimeOut setting 61

LinkTimeOut setting 61, 75
    verifying for Emulex HBA 75

Linux
    configuring Hyper-V guest OS 44

Linux Integration Components 44
linux LUN type 63

LP6DUTIL
    enabling BootBIOS for Emulex HBA 85


LUN
    accessing when using native Windows storage stack 71
    accessing when using Veritas Storage Foundation 70
    creating 64
    creating boot 89
    getting correct HBA driver for SAN booting 93
    layout with Hyper-V 28
    mapping for Windows cluster 66
    mapping to igroup 65
    overview 63
    type to use for host or guest OS 63

M

ManageDisksOnSystemBuses setting 57
MaxRequestHoldTime setting 61

mcdata_info
    collecting FC switch information 80

MCS
    enabling for iSCSI 40

media type
    storage system FC port 36

Microsoft DSM 26, 43
    enabling ALUA 43

Microsoft iSCSI DSM
    installing 42

Microsoft iSCSI initiator
    downloading 38

MPIO
    installing software 41
    overview 26

MPIOSupportedDeviceList setting 60

MSCS
    configuring cluster with SAN booting 95

msdsm 26, 43
    enabling ALUA 43

msiexec command
    installing Host Utilities 47
    repairing or removing Host Utilities 49

msiscsidsm 26

multipathing
    installing software 41
    overview 26

multiple connections per session
    enabling for iSCSI 40

N

NodeTimeOut setting
    verifying for Emulex HBA 75

NodeTimeOut settings 61
non-English versions of Windows supported 30
NTFS disk format 71

P

partitioning a disk 71
passthru disk 27

path
    verifying correct FC path used 66

path configuration
    fixing using Data ONTAP DSM 67
    fixing using Microsoft DSM 68
    fixing using Veritas DMP DSM 68

PathVerifyEnabled setting 58
PDORemovePeriod setting 58

Phoenix BIOS
    configuring for SAN booting 92

port
    adding to port set 88
    removing from port set 88

port sets
    adding port to 88
    binding 88
    for SAN booting 86
    limiting FC paths for SAN booting 87
    removing port from 88

PortDownRetryCount setting 61, 76
    verifying for QLogic HBA 76

programs included in Host Utilities 20

proxy path
    correcting configuration using Data ONTAP DSM 67
    correcting configuration using Microsoft DSM 68
    correcting configuration using Veritas DMP DSM 68
    overview 66
    verifying not used 66

publications
    finding more information 30

Q

QLogic BootBIOS
    displaying FC WWPN 52

QLogic HBA
    configuring 35
    configuring BootBIOS 90
    configuring iSCSI for SAN booting 92
    enabling logging 77
    enabling QLogic BootBIOS using Fast!UTIL 85
    getting correct driver for SAN booting 93
    installing iSCSI 39
    setting iSCSI parameters for SAN booting 93
    settings for Host Utilities 61
    verifying driver version 74

qlogic_info
    collecting FC switch information 80

quiet installation 47

R

raw disk 27

references
    finding more information 30

registry
    settings for Host Utilities 53
    values summary 54

removing
    Host Utilities from command line 49
    Host Utilities interactively 49

Repair
    option of Host Utilities installation program 49

repairing
    Host Utilities from command line 49
    Host Utilities interactively 49

requirements
    verifying 34

RetryCount setting 59
RetryInterval setting 59

S

SAN booting
    BootBIOS overview 84
    configuring BIOS 91
    configuring BootBIOS for Emulex HBA 89
    configuring BootBIOS for QLogic HBA 90
    configuring cluster with Veritas Storage Foundation 95
    configuring iSCSI HBA 92
    create single path during installation 86, 93
    creating boot LUN 89
    enabling Emulex BootBIOS using HBAnyware 84
    enabling Emulex BootBIOS using LP6DUTIL 85
    enabling QLogic BootBIOS using Fast!UTIL 85
    getting correct HBA driver for boot LUN 93
    installing Windows on boot LUN 94
    overview 29, 83
    setting iSCSI HBA parameters 93
    WWPN of HBA required 86

SANsurfer
    enabling logging on QLogic HBA 77
    verifying QLogic HBA settings 76

scripts included in Host Utilities 20

security
    CHAP for iSCSI 41

settings
    ClusSvcHangTimeout 57
    DsmMaximumRetryTimeDuringStateTransition 59
    DsmMaximumStateTransitionTime 59
    DsmSupportedDeviceList 60
    HBA 61
    IPSecConfigTimeout 60
    LinkDownTime 60
    LinkDownTimeOut 61
    LinkTimeOut 61
    ManageDisksOnSystemBuses 57
    MaxRequestHoldTime 61
    MPIOSupportedDeviceList 60
    NodeTimeOut 61
    PathVerifyEnabled 58
    PDORemovePeriod 58
    PortDownRetryCount 61
    RetryCount 59
    RetryInterval 59
    TimeOutValue 57

silent installation 47

SnapDrive for Windows
    creating LUNs 64
    installation order 48

software boot
    iSCSI initiator requirement 38

storage access
    Fibre Channel (FC) 22
    Fibre Channel over Ethernet (FCoE) 23
    iSCSI 24

storage configuration
    verifying 34

storage system
    protocol licenses 35

supported configurations
    Fibre Channel 22
    Fibre Channel over Ethernet 24

SUSE Linux
    configuring Hyper-V guest OS 44


switch
    collecting troubleshooting information 80
    configuring Fibre Channel 35

sysstat command
    verifying paths 66

T

target
    adding iSCSI 68
    Fibre Channel 66

TimeOutValue setting 57

troubleshooting
    collecting FC switch information 80
    collecting host information with windows_info 79
    collecting storage system information with controller_info 79
    collecting Veritas Storage Foundation information 80
    diagnostic program overview 77
    enabling logging on Emulex HBA 76
    enabling logging on QLogic HBA 77
    HBA driver version 74
    HBA settings 75
    how to run diagnostic programs 78
    items to check 73
    verifying Emulex HBA settings 75
    verifying QLogic HBA settings 76

U

uninstalling
    Host Utilities 49

V

VCS
    configuring cluster with SAN booting 95

Veritas DMP DSM 26, 42
    installing 42

Veritas Enterprise Administrator
    connecting 70

Veritas Storage Foundation
    accessing LUNs 70
    collecting troubleshooting information 80
    configuring and using 46
    configuring applications dependent on iSCSI 70
    configuring cluster with SAN booting 95
    dynamic disk support 25
    installation order 45
    logging on to iSCSI targets 68

virtual hard disk (VHD)
    for Hyper-V 27
    for Virtual Server 2005 29

Virtual Server 2005
    iSCSI initiator for virtual machine 29
    overview 28
    using virtual hard disk (VHD) 29

vxclus
    configuring cluster with SAN booting 95

VxExplorer
    collecting Veritas Storage Foundation troubleshooting information 80

W

Windows
    dynamic disk support 25
    installing on boot LUN 94
    registry settings for Host Utilities 53
    support for non-English language versions 30
    supported configurations 21

Windows cluster
    mapping LUNs 66

Windows Disk Management
    accessing LUNs 71

Windows failover cluster
    adding Hyper-V VM 44

Windows host
    collecting troubleshooting information for 79

Windows hotfixes
    installing 47

windows LUN type 63

Windows registry
    values summary 54

windows_2008 LUN type 63
windows_gpt LUN type 63

windows_info
    collecting host information 79

windows_lhs LUN type 63

WWPN
    displaying using Emulex BootBIOS 52
    displaying using hba_info 52
    displaying using QLogic BootBIOS 52
    overview 51
    recording from host 51
    required for boot LUN igroup 86
