FalconStor CDP/NSS ADMINISTRATION GUIDE

FalconStor® CDP/NSS Administration Guide

Version 7.00

FalconStor Software, Inc.
2 Huntington Quadrangle, Suite 2S01
Melville, NY 11747
Phone: 631-777-5188
Fax: 631-501-7633
Web site: www.falconstor.com

Copyright © 2001-2013 FalconStor Software. All Rights Reserved.

FalconStor Software, IPStor, MicroScan, RecoverTrac, HyperTrac, DynaPath, HotZone, SafeCache, TimeMark, TimeView, and ZeroImpact are either registered trademarks or trademarks of FalconStor Software, Inc. in the United States and other countries.

Linux is a registered trademark of Linus Torvalds.

Windows is a registered trademark of Microsoft Corporation.

All other brand and product names are trademarks or registered trademarks of their respective owners.

FalconStor Software reserves the right to make changes in the information contained in this publication without prior notice. The reader should in all cases consult FalconStor Software to determine whether any such changes have been made.

This product is protected by United States Patents Nos. 7,093,127 B2; 6,715,098; 7,058,788 B2; 7,330,960 B2; 7,165,145 B2; 7,155,585 B2; 7,231,502 B2; 7,469,337; 7,467,259; 7,418,416 B2; 7,406,575 B2, and additional patents pending.

31513.7088

Administration Guide content may change between major product versions in order to reflect product updates released via patches. In the guide and its table of contents, the heading for changed content will be followed by “(updated Month Year)”.

The document code at the bottom of the page includes the guide publication date and the associated software build number, in the format date.build.

Contents

Introduction

Network Storage Server (NSS) . . . . . . . . . . 14
Continuous Data Protector (CDP) . . . . . . . . . . 15
Architecture . . . . . . . . . . 16
Components . . . . . . . . . . 17
Acronyms . . . . . . . . . . 18
Terminology . . . . . . . . . . 20
Web Setup . . . . . . . . . . 27
Additional resources . . . . . . . . . . 28

FalconStor Management Console

Launch the console . . . . . . . . . . 29
Connect to your storage server . . . . . . . . . . 30
Configure your server using the configuration wizard . . . . . . . . . . 31
Step 1: Enter license keys . . . . . . . . . . 31
Step 2: Setup network . . . . . . . . . . 31
Step 3: Set hostname . . . . . . . . . . 32
FalconStor Management Console user interface . . . . . . . . . . 33
Discover storage servers . . . . . . . . . . 34
Protect your storage server’s configuration . . . . . . . . . . 34
Manage licenses (updated January 2013) . . . . . . . . . . 35
Offline registration . . . . . . . . . . 35
Set server properties . . . . . . . . . . 37
Manage accounts . . . . . . . . . . 43
Change the root user’s password . . . . . . . . . . 46
Check connectivity between the server and console . . . . . . . . . . 47
Add an iSCSI User or Mutual CHAP User . . . . . . . . . . 47
Apply software patch updates . . . . . . . . . . 49
Server patches . . . . . . . . . . 49
Console patches . . . . . . . . . . 50
Perform system maintenance (updated March 2013) . . . . . . . . . . 51
Physical Resources . . . . . . . . . . 55
Physical resource icons . . . . . . . . . . 56
Prepare devices to become logical resources . . . . . . . . . . 56
Rename a physical device . . . . . . . . . . 57
Use IDE drives with CDP/NSS . . . . . . . . . . 58
Rescan adapters . . . . . . . . . . 58
Import a disk . . . . . . . . . . 60
Test physical device throughput . . . . . . . . . . 61
Manage multiple paths to a device . . . . . . . . . . 61
Repair paths to a device . . . . . . . . . . 61
Logical Resources . . . . . . . . . . 63
Logical resource icons . . . . . . . . . . 64
Enable write caching . . . . . . . . . . 65

Replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .65

SAN Clients . . . . . . . . . . 66
Add a client from the FalconStor Management Console . . . . . . . . . . 66
Add a client for FalconStor host applications . . . . . . . . . . 67
Change the ACSL . . . . . . . . . . 68
Grant access to a SAN Client . . . . . . . . . . 68
Console options . . . . . . . . . . 69
Create a custom menu . . . . . . . . . . 70

Storage Pools

Manage storage pools and the devices within storage pools . . . . . . . . . . 71
Create storage pools . . . . . . . . . . 72
Set properties for a storage pool . . . . . . . . . . 73

Logical Resources

Types of SAN resources . . . . . . . . . . 77
Virtual devices . . . . . . . . . . 77
Thin devices . . . . . . . . . . 78
Service-Enabled devices . . . . . . . . . . 80
Create SAN resources - Procedures . . . . . . . . . . 81
Prepare devices to become SAN resources . . . . . . . . . . 81
Create a virtual device SAN resource . . . . . . . . . . 81
Add virtual disks for data storage . . . . . . . . . . 89
Create a SAN Client for VMware ESX server . . . . . . . . . . 92
Create a Service-Enabled Device SAN resource . . . . . . . . . . 93
Assign a SAN resource to one or more clients . . . . . . . . . . 97
Discover devices from a client . . . . . . . . . . 101
Windows clients . . . . . . . . . . 101
Solaris clients . . . . . . . . . . 101
Expand a virtual device (updated December 2012) . . . . . . . . . . 103
Expand a Service-Enabled Device . . . . . . . . . . 106
Grant access to a SAN resource . . . . . . . . . . 106
Unassign a SAN resource from a client . . . . . . . . . . 107
Delete a SAN resource . . . . . . . . . . 107

CDP/NSS Server

Start the CDP/NSS server (updated October 2012) . . . . . . . . . . 108
Stop the CDP/NSS server . . . . . . . . . . 109
Log into the CDP/NSS server . . . . . . . . . . 109
Use Telnet . . . . . . . . . . 109
Check CDP/NSS processes (updated October 2012) . . . . . . . . . . 111
Check physical resources . . . . . . . . . . 113
Check activity statistics . . . . . . . . . . 114
Remove a physical storage device (updated November 2012) . . . . . . . . . . 115
Configure iSCSI storage . . . . . . . . . . 115

Configuring iSCSI software initiator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115

Configuring iSCSI hardware HBA . . . . . . . . . . 116
Uninstall a storage server . . . . . . . . . . 117

iSCSI Clients

Requirements . . . . . . . . . . 119
Configure iSCSI clients . . . . . . . . . . 120
Enable iSCSI . . . . . . . . . . 120
Configure your iSCSI initiator . . . . . . . . . . 120
Add your iSCSI client in the FalconStor Management Console . . . . . . . . . . 121
Create storage targets for the iSCSI client . . . . . . . . . . 126
Restart the iSCSI initiator . . . . . . . . . . 127
Windows iSCSI clients and failover (updated February 2013) . . . . . . . . . . 127
Disable iSCSI . . . . . . . . . . 127

Logs and Reports

Event Log . . . . . . . . . . 129
Sort information in the Event Log . . . . . . . . . . 130
Filter information stored in the Event Log . . . . . . . . . . 130
Refresh the Event Log . . . . . . . . . . 131
Print/Export Event Log . . . . . . . . . . 131
Reports . . . . . . . . . . 132
Set report properties . . . . . . . . . . 132
Create an individual report . . . . . . . . . . 133
View a report . . . . . . . . . . 137
Export data from a report . . . . . . . . . . 137
Schedule a report . . . . . . . . . . 138
E-mail a scheduled report . . . . . . . . . . 139
Report types . . . . . . . . . . 139
Client Throughput Report . . . . . . . . . . 139
Delta Replication Status Report . . . . . . . . . . 140
Disk Space Usage Report . . . . . . . . . . 142
Disk Usage History Report . . . . . . . . . . 143
Fibre Channel Configuration Report . . . . . . . . . . 146
Physical Resources Configuration Report . . . . . . . . . . 147
Physical Resources Allocation Report . . . . . . . . . . 148
Physical Resource Allocation Report . . . . . . . . . . 149
Resource IO Activity Report . . . . . . . . . . 149
SCSI Channel Throughput Report . . . . . . . . . . 151
SCSI Device Throughput Report . . . . . . . . . . 153
SAN Client Usage Distribution Report . . . . . . . . . . 154
SAN Client/Resources Allocation Report . . . . . . . . . . 155
SAN Resources Allocation Report . . . . . . . . . . 156
SAN Resource Usage Distribution Report . . . . . . . . . . 157
Server Throughput and Filtered Server Throughput Report . . . . . . . . . . 157
Storage Pool Configuration Report . . . . . . . . . . 160
User Quota Usage Report . . . . . . . . . . 161

Report types - Global replication . . . . . . . . . . 162
Create a global replication report . . . . . . . . . . 162
View global report . . . . . . . . . . 162

Fibre Channel Target Mode

Fibre Channel over Ethernet (FCoE) . . . . . . . . . . 164
Fibre Channel target mode - configuration overview . . . . . . . . . . 164
Configure Fibre Channel hardware on server . . . . . . . . . . 164
Ports . . . . . . . . . . 164
Downstream Persistent binding . . . . . . . . . . 165
VSA . . . . . . . . . . 165
Zoning . . . . . . . . . . 165
Switches . . . . . . . . . . 166
QLogic HBAs . . . . . . . . . . 167
Configure Fibre Channel clients . . . . . . . . . . 169
Enable Fibre Channel target mode . . . . . . . . . . 170
Disable Fibre Channel target mode . . . . . . . . . . 170
Verify the Fibre Channel WWPN . . . . . . . . . . 171
Set QLogic ports to target mode . . . . . . . . . . 171
Set NPIV ports to target mode . . . . . . . . . . 173
Set up your failover configuration . . . . . . . . . . 174
Add Fibre Channel clients . . . . . . . . . . 175
Associate World Wide Port Names (WWPN) with clients . . . . . . . . . . 176
Assign virtualized resources to Fibre Channel Clients . . . . . . . . . . 177
View new devices . . . . . . . . . . 178
Install and configure DynaPath . . . . . . . . . . 178

Spoof an HBA WWPN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .179

SAN Clients

Add a client from the FalconStor Management Console . . . . . . . . . . 180
Add a client for FalconStor host applications . . . . . . . . . . 181

Security

System management . . . . . . . . . . 182
Data access . . . . . . . . . . 182
Account management . . . . . . . . . . 183
Security recommendations . . . . . . . . . . 183
Storage network topology . . . . . . . . . . 184
Physical security of machines . . . . . . . . . . 184
Disable ports . . . . . . . . . . 184

Failover

Overview . . . . . . . . . . 185
Shared storage failover sample configuration . . . . . . . . . . 188

Failover requirements . . . . . . . . . . 189
General failover requirements . . . . . . . . . . 189
General failover requirements for iSCSI clients (updated February 2013) . . . . . . . . . . 190
Shared storage failover requirements . . . . . . . . . . 190
FC-based Asymmetric failover requirements . . . . . . . . . . 191
Pre-flight checklist for failover . . . . . . . . . . 192
Connectivity failure . . . . . . . . . . 192
Default failover behavior . . . . . . . . . . 193
Storage device path failure . . . . . . . . . . 194
Storage device failure . . . . . . . . . . 194
Storage server or device failure . . . . . . . . . . 195
Failover restrictions . . . . . . . . . . 196
Failover setup . . . . . . . . . . 196
Recreate the configuration repository (updated January 2013) . . . . . . . . . . 207
Power Control options . . . . . . . . . . 207
Check Failover status . . . . . . . . . . 210
Failover Information report . . . . . . . . . . 210
Failover network failure status report . . . . . . . . . . 211
Recover from failover . . . . . . . . . . 211
Manual recovery . . . . . . . . . . 211
Auto recovery . . . . . . . . . . 213
Fix a failed server . . . . . . . . . . 213
Recover from a cross-mirror disk failure . . . . . . . . . . 214
Re-synchronize Cross mirror . . . . . . . . . . 215
Remove Cross mirror . . . . . . . . . . 215
Check resources and swap if possible . . . . . . . . . . 215
Verify and repair a cross mirror configuration . . . . . . . . . . 215
Modify failover configuration . . . . . . . . . . 220
Make changes to the servers in your failover configuration . . . . . . . . . . 220
Convert a failover configuration into a mutual failover configuration . . . . . . . . . . 221
Exclude physical devices from health checking . . . . . . . . . . 221
Change your failover intervals . . . . . . . . . . 222
Verify physical devices match . . . . . . . . . . 222
Start/stop failover or recovery . . . . . . . . . . 223
Force a takeover by a secondary server . . . . . . . . . . 223
Manually start a server . . . . . . . . . . 223
Manually initiate a recovery to your primary server . . . . . . . . . . 223
Suspend/resume failover . . . . . . . . . . 224
Remove a failover configuration . . . . . . . . . . 225
Power cycle servers in a failover setup (updated September 2012) . . . . . . . . . . 226
Mirroring and Failover . . . . . . . . . . 227
TimeMark/CDP and Failover . . . . . . . . . . 227
Throttle and Failover . . . . . . . . . . 227
HotZone and Failover . . . . . . . . . . 227

Enable HotZone using local storage with failover . . . . . . . . . . . . . . . . . . . . . . . . .228

Performance

SafeCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .230

Configure SafeCache . . . . . . . . . . 231
Create a cache resource . . . . . . . . . . 231
Global Cache . . . . . . . . . . 235
SafeCache for groups . . . . . . . . . . 236
Check the status of your SafeCache resource . . . . . . . . . . 236
Configure SafeCache properties . . . . . . . . . . 236
Disable a SafeCache resource . . . . . . . . . . 236
HotZone . . . . . . . . . . 237
Read Cache . . . . . . . . . . 237
Prefetch . . . . . . . . . . 237
Configure HotZone . . . . . . . . . . 238
Check the status of HotZone . . . . . . . . . . 242
Configure HotZone properties . . . . . . . . . . 244
Disable HotZone . . . . . . . . . . 244

Mirroring

Synchronous mirroring . . . . . . . . . . 245
Asynchronous mirroring . . . . . . . . . . 246
Mirror requirements . . . . . . . . . . 247
Enable mirroring . . . . . . . . . . 247
Create cache resource . . . . . . . . . . 254
Check mirroring status . . . . . . . . . . 255
Swap the primary disk with the mirrored copy . . . . . . . . . . 255
Promote the mirrored copy to become an independent virtual drive . . . . . . . . . . 255
Recover from a mirroring hardware failure . . . . . . . . . . 257
Replace a disk that is part of an active mirror configuration . . . . . . . . . . 257
Expand the primary disk . . . . . . . . . . 258
Manually synchronize a mirror . . . . . . . . . . 258
Set mirror throttle . . . . . . . . . . 259
Set alternative read mirror . . . . . . . . . . 260
Set mirror resynchronization priority . . . . . . . . . . 260
Rebuild a mirror . . . . . . . . . . 262
Suspend/resume mirroring . . . . . . . . . . 262
Change mirroring configuration options . . . . . . . . . . 263
Set global mirroring options . . . . . . . . . . 263
Remove a mirror configuration . . . . . . . . . . 264
Mirroring and failover . . . . . . . . . . 264

Snapshot Resource

Create a Snapshot Resource . . . . . . . . . . 265
Snapshot Resource policy behavior . . . . . . . . . . 272
Check status of a Snapshot Resource . . . . . . . . . . 273
Protect your Snapshot Resources . . . . . . . . . . 274
Options for Snapshot Resources . . . . . . . . . . 274
Snapshot Resource shrink and reclamation policies . . . . . . . . . . 275

Enable Reclamation Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .275

Global reclamation policy and retention schedule . . . . . . . . . . 277
Disable Reclamation . . . . . . . . . . 278
Check reclamation status . . . . . . . . . . 279
Shrink Policy . . . . . . . . . . 279
Shrink a snapshot resource . . . . . . . . . . 281
Use Snapshot to copy a SAN resource . . . . . . . . . . 281
Check Snapshot Copy status . . . . . . . . . . 285
Groups . . . . . . . . . . 286
Create a group . . . . . . . . . . 286
Groups with TimeMark/CDP enabled . . . . . . . . . . 287
Groups with SafeCache enabled . . . . . . . . . . 287
Groups with replication enabled . . . . . . . . . . 287
Grant access to a group . . . . . . . . . . 288
Add resources to a group . . . . . . . . . . 288
Remove resources from a group . . . . . . . . . . 290

TimeMarks and CDP

Overview . . . . . . . . . . 292
Enable TimeMark (updated February 2013) . . . . . . . . . . 293
TimeMark retention policy (updated February 2013) . . . . . . . . . . 298
Check TimeMark status . . . . . . . . . . 301
Check CDP journal status . . . . . . . . . . 302
Protect your CDP journal . . . . . . . . . . 302
Add a tag to the CDP journal . . . . . . . . . . 302
Add a comment or change priority of an existing TimeMark . . . . . . . . . . 303
Manually create a TimeMark . . . . . . . . . . 304
Copy a TimeMark . . . . . . . . . . 305
Recover data using the TimeView feature . . . . . . . . . . 306
Remap a TimeView . . . . . . . . . . 314
Delete a TimeView . . . . . . . . . . 314
Remove TimeView Data . . . . . . . . . . 315
Set TimeView Policy . . . . . . . . . . 316
Rollback or roll forward a drive . . . . . . . . . . 317
Change your TimeMark/CDP policies (updated February 2013) . . . . . . . . . . 318
Delete TimeViews in batch mode . . . . . . . . . . 319
Suspend/resume CDP . . . . . . . . . . 320
Delete TimeMarks . . . . . . . . . . 320
Disable TimeMark and CDP . . . . . . . . . . 320
Replication and TimeMark/CDP . . . . . . . . . . 320

NIC Port Bonding

Enable NIC Port Bonding . . . . . . . . . . 321
Remove NIC Port Bonding . . . . . . . . . . 324
Change IP address . . . . . . . . . . 324

Replication

Overview . . . . . . . . . . 325
Remote replication . . . . . . . . . . 325
Local replication . . . . . . . . . . 325
How replication works (updated January 2013) . . . . . . . . . . 326
Delta replication . . . . . . . . . . 326
Continuous replication . . . . . . . . . . 326
Configure Replication . . . . . . . . . . 327
Requirements . . . . . . . . . . 327
Setup (updated January 2013) . . . . . . . . . . 327
Create a Continuous Replication Resource . . . . . . . . . . 336
Check replication status . . . . . . . . . . 338
Replication tab . . . . . . . . . . 338
Event Log . . . . . . . . . . 339
Replication object . . . . . . . . . . 339
Delta Replication Status Report . . . . . . . . . . 340
Configure Replication performance . . . . . . . . . . 341
Set global replication options . . . . . . . . . . 341
Tune replication parameters . . . . . . . . . . 341
Assign clients to the replica disk . . . . . . . . . . 342
Switch clients to the replica disk when the primary disk fails . . . . . . . . . . 342
Recreate your original replication configuration . . . . . . . . . . 343
Use TimeMark/TimeView to recover files from your replica . . . . . . . . . . 344
Change your replication configuration options . . . . . . . . . . 344
Suspend/resume replication schedule . . . . . . . . . . 346
Stop a replication in progress . . . . . . . . . . 346
Manually start the replication process . . . . . . . . . . 346
Set the replication throttle . . . . . . . . . . 347
Add a Target Site . . . . . . . . . . 348
Manage Throttle windows . . . . . . . . . . 350
Manage Link Types . . . . . . . . . . 352
Add link types . . . . . . . . . . 353
Edit link types . . . . . . . . . . 353
Delete link types . . . . . . . . . . 353
Set replication synchronization priority . . . . . . . . . . 354
Reverse a replication configuration . . . . . . . . . . 354
Reverse a replica when the primary is not available . . . . . . . . . . 355
Forceful role reversal . . . . . . . . . . 355
Repair a replica . . . . . . . . . . 356
Relocate a replica . . . . . . . . . . 356
Remove a replication configuration . . . . . . . . . . 357
Expand the size of the primary disk . . . . . . . . . . 358
Replication with other CDP or NSS features . . . . . . . . . . 359
Replication and TimeMark . . . . . . . . . . 359
Replication and Failover . . . . . . . . . . 359
Replication and Mirroring . . . . . . . . . . 359
Replication and Thin Provisioning . . . . . . . . . . 359

Near-line Mirroring

Near-line mirroring requirements . . . . . . . . . . 361
Setup Near-line mirroring . . . . . . . . . . 361
Enable Near-line Mirroring on multiple resources . . . . . . . . . . 369
What’s next? . . . . . . . . . . 369
Check near-line mirroring status . . . . . . . . . . 370
Near-line recovery . . . . . . . . . . 371
Recover data from a near-line mirror . . . . . . . . . . 371
Recover data from a near-line replica . . . . . . . . . . 373
Recover from a near-line replica TimeMark using forceful role reversal . . . . . . . . . . 376
Swap the primary disk with the near-line mirrored copy . . . . . . . . . . 379
Manually synchronize a near-line mirror . . . . . . . . . . 379
Rebuild a near-line mirror . . . . . . . . . . 379
Expand a near-line mirror . . . . . . . . . . 380
Expand a service-enabled disk . . . . . . . . . . 382
Suspend/resume near-line mirroring . . . . . . . . . . 383
Change your mirroring configuration options . . . . . . . . . . 383
Set global mirroring options . . . . . . . . . . 383
Remove a near-line mirror configuration . . . . . . . . . . 384
Recover from a near-line mirroring hardware failure . . . . . . . . . . 385
Replace a disk that is part of an active near-line mirror . . . . . . . . . . 386
Set Recovery Mode . . . . . . . . . . 386

ZeroImpact Backup

Configure ZeroImpact backup . . . . . . . . . . 387
Back up a CDP/NSS logical resource using dd . . . . . . . . . . 390
Restore a volume backed up using ZeroImpact Backup Enabler . . . . . . . . . . 391

Multipathing

Load distribution . . . . . . . . . . 393
Preferred paths . . . . . . . . . . 393
Path management . . . . . . . . . . 394

Command Line Interface

Install and configure the CLI . . . . . . . . . . 396
Use the CLI . . . . . . . . . . 396
Common arguments . . . . . . . . . . 397
Commands . . . . . . . . . . 398

SNMP Integration

SNMP Traps . . . . . . . . . . 413
Implement SNMP support . . . . . . . . . . 414
Microsoft System Center Operations Manager (SCOM) . . . . . . . . . . 415

HP Network Node Manager (NNM) i9 . . . . . . . . . . 416
HP OpenView Network Node Manager 7.5 . . . . . . . . . . 417
Install . . . . . . . . . . 417
Configure . . . . . . . . . . 417
View statistics in NNM . . . . . . . . . . 418
CA Unicenter TNG 2.2 . . . . . . . . . . 419
Install . . . . . . . . . . 419
Configure . . . . . . . . . . 419
View traps . . . . . . . . . . 420
View statistics in TNG . . . . . . . . . . 420
Launch the FalconStor Management Console . . . . . . . . . . 420
IBM Tivoli NetView 6.0.1 . . . . . . . . . . 421
Install . . . . . . . . . . 421
Configure . . . . . . . . . . 421
View statistics in Tivoli . . . . . . . . . . 422
BMC Patrol 3.4.0 . . . . . . . . . . 423
Install . . . . . . . . . . 423
Configure . . . . . . . . . . 423
View traps . . . . . . . . . . 424
View statistics in Patrol . . . . . . . . . . 424
Advanced SNMP topics . . . . . . . . . . 425
The snmpd.conf file . . . . . . . . . . 425
Use an SNMP configuration for multiple storage servers . . . . . . . . . . 425

IPSTOR-MIB tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .426

Email Alerts (updated March 2013)

Configure Email Alerts . . . . . . . . . . 447
Modify Email Alerts properties . . . . . . . . . . 458

Email format . . . . . . . . . . 459
Limiting repetitive Emails . . . . . . . . . . 459

Script/program trigger information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .459

BootIP

Set up BootIP . . . . . . . . . . 462
Prerequisites . . . . . . . . . . 463
Create a boot image for a diskless client computer . . . . . . . . . . 464
Initialize the configuration of the storage Server . . . . . . . . . . 465
Enable the BootIP from the FalconStor Management Console . . . . . . . . . . 465
Use DiskSafe to clone a boot image . . . . . . . . . . 465
Set BootIP properties . . . . . . . . . . 466
Set the Recovery Password . . . . . . . . . . 466
Set the Recovery password from the iSCSI user management . . . . . . . . . . 466
Set the authentication and Recovery password from iSCSI client properties . . . . . . . . . . 466
Remote boot the diskless computer . . . . . . . . . . 467
For Windows 2003 . . . . . . . . . . 467
For Windows Vista/2008 . . . . . . . . . . 467

Use the Sysprep tool . . . . . . . . . . 468
For Windows 2003 . . . . . . . . . . 468
Use the Setup Manager tool to create the Sysprep.inf answer file . . . . . . . . . . 469
For Windows Vista/2008 . . . . . . . . . . 470
Create a TimeMark . . . . . . . . . . 471
Create a TimeView . . . . . . . . . . 472
Assign a TimeView to a diskless client computer . . . . . . . . . . 472
Add a SAN Client . . . . . . . . . . 472
Assign a TimeView to the SAN Client . . . . . . . . . . 473
Recover Data via Remote boot . . . . . . . . . . 473
Remotely boot the Linux Operating System . . . . . . . . . . 475
Remotely install CentOS to an iSCSI disk . . . . . . . . . . 475
Remote boot from the FalconStor Management Console . . . . . . . . . . 475
Remote boot from the Client . . . . . . . . . . 476
BootIP and DiskSafe . . . . . . . . . . 476
Remote boot and DiskSafe . . . . . . . . . . 476

Troubleshooting / FAQs

Frequently Asked Questions (FAQ) . . . . . . . . . . 477
NIC Port Bonding . . . . . . . . . . 478
Event log . . . . . . . . . . 478
SNMP . . . . . . . . . . 479
Virtual devices . . . . . . . . . . 479
FalconStor Management Console . . . . . . . . . . 479
Multipathing method: MPIO vs. MC/S . . . . . . . . . . 480
BootIP . . . . . . . . . . 482
SCSI adapters and devices . . . . . . . . . . 483
Failover . . . . . . . . . . 484
Fibre Channel target mode and storage . . . . . . . . . . 485
Power control option . . . . . . . . . . 486
Replication . . . . . . . . . . 486
iSCSI Downstream Configuration . . . . . . . . . . 487
Protecting data in a Windows environment . . . . . . . . . . 487
Protecting data in a Linux environment . . . . . . . . . . 488
Protecting data in an AIX environment . . . . . . . . . . 488
Protecting data in an HP-UX environment . . . . . . . . . . 488
Logical resources . . . . . . . . . . 489
Network connectivity . . . . . . . . . . 489
Jumbo frames support . . . . . . . . . . 491
Diagnosing client connectivity issues . . . . . . . . . . 491
Windows Client . . . . . . . . . . 492
Windows client debug information . . . . . . . . . . 492
Clients with iSCSI protocol (updated February 2013) . . . . . . . . . . 494
Clients with Fibre Channel protocol . . . . . . . . . . 495
Linux SAN Client . . . . . . . . . . 495
Storage Server . . . . . . . . . . 496
Storage server X-ray . . . . . . . . . . 496

Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .498

Cross-mirror failover on a virtual appliance . . . . . . . . . . 499
Replication . . . . . . . . . . 500
TimeMark Snapshot . . . . . . . . . . 501
Snapshot Resource policy . . . . . . . . . . 501
SafeCache . . . . . . . . . . 501
Command line interface . . . . . . . . . . 502
Service-Enabled Devices . . . . . . . . . . 502
Error codes . . . . . . . . . . 503
UNIX SAN Client error codes . . . . . . . . . . 575
Command Line Interface (CLI) error codes . . . . . . . . . . 577

Port Usage

SMI-S Integration

SMI-S Terms and concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .603
Enable SMI-S . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .603
Use the SMI-S Provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .604
Launch the Command Central Storage console . . . . . . . . . . . . . . . . . . . . . . . . . .604
Add FalconStor Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .604
View FalconStor Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .605
View Storage Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .605
View LUNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .605
View Disks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .605
View Masking Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .606

RAID Management for VS-Series Appliances

Prepare for RAID management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
Preconfigured storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .608
Unconfigured storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .609
Launch the RAID Management Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .610
Discover storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .610
Future storage discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .612
Display a storage profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .613
Rename storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .614
Refresh the display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .614
Configure controller connection settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .615
View enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .616
Individual enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .616
Manage controller modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .618
Individual controller modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .619
Manage disk drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .620
Interactive enclosure images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .620
Individual disk drives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .622
Configure a hot spare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .623
Remove a hot spare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .623
Manage RAID arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .624

Create a RAID array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .625
Create a Logical Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .626
Individual RAID arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .627
Rename the array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .629
Delete the array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .629
Check RAID array actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .630
Replace a physical disk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .630
Logical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .633
Define LUN mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .633
Remove LUN mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .635
Rename LU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .635
Delete Logical Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .636
Logical Unit Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .637
Unmapped Logical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .637
Mapped Logical Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .639
Upgrade RAID controller firmware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .640
Event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .641
Filter the event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .641
Clear the event log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .641
Monitor storage from the FalconStor Management console . . . . . . . . . . . . . . . . . . . . .642
Storage information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .642
Server information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .643

Index

Introduction

As business IT operations grow in size and complexity, many computing environments are stressed in attempting to keep up with the demand to store and access data. Information and the effective management of the corresponding storage infrastructure are critical to a company's success. Reliability, availability, and disaster recovery capabilities are all key factors in the successful management and protection of data.

FalconStor® Continuous Data Protector (CDP) and Network Storage Server (NSS) solutions address the growing need for data management, protection, preservation, and integrity.

Network Storage Server (NSS)

FalconStor® Network Storage Server (NSS) enables storage virtualization, optimization, and efficiency across heterogeneous storage from any storage system, providing consolidation, business continuity, and automated disaster recovery (DR).

NSS enables high availability and data assurance, provides instant recovery with data integrity for all applications, and protects investments across mixed storage environments.

Business continuity is a critical aspect of operations and revenue generation. As businesses evolve, storage infrastructures increase in complexity. Some resources remain under-utilized while others are over-utilized - an inefficient use of power, capacity, and money. FalconStor solutions allow organizations to consolidate storage resources for simple and centralized management with high availability. Complete application awareness provides quick and 100% transactionally consistent data recovery. Automated DR technology simplifies operations. An open, flexible architecture enables organizations to leverage existing IT resources to create an integrated, multi-tiered, cost-efficient storage environment that ensures business continuity.

FalconStor NSS includes FalconStor TimeMark® snapshots that work with Snapshot Agents for databases and messaging applications, providing 100% transactional integrity for instant recovery to known points in time - helping IT meet recovery point objectives (RPO) and recovery time objectives (RTO). Data managed by FalconStor NSS may be efficiently replicated via IP using the FalconStor Replication option for real-time disaster recovery (DR) protection. Thin Provisioning helps automate storage resource allocation and capacity management while virtualization provides centralized management for large, heterogeneous storage environments.

The storage appliance is the central component of the network. It is the storage device that connects to hosts via industry standard iSCSI (or Fibre Channel) protocols.

Before you undertake the activities described in this guide, make sure the appliance has already been racked, connected, and the initial power-on instructions have been completed for the appliance according to the FalconStor® Hardware QuickStart Guide that was shipped with the appliance.

Also make sure Web Setup has been completed according to the instructions in the FalconStor NSS Software QuickStart Guide, which was also shipped with the appliance.

Once you have connected your NSS hardware, you can discover all storage servers on your storage subnet by selecting Tools --> Discover. For details, refer to ‘Connect to your storage server’ in the FalconStor Management Console section.

Continuous Data Protector (CDP)

FalconStor® Continuous Data Protector (CDP) is an advanced data protection solution that allows organizations to customize and define protection policies per business application, maximizing IT business operations and profitability.

Protecting data from loss or corruption requires thorough, effective planning. Comprehensive data protection solutions from FalconStor provide unified backup and disaster recovery (DR) for continuous data availability. Organizations can recover emails, files, applications, and entire systems within minutes, locally and remotely. Application-level integration ensures quick, 100% transactionally consistent recovery to any point in time. WAN-optimized replication maximizes network efficiency. By fully automating the resumption of servers, storage, networks, and applications in a pre-determined, coordinated process, embedded DR automation technology stages the recovery of complete services - thus facilitating service-oriented disaster recovery.

CDP automates disaster recovery for physical and virtual servers, allows rapid recovery of files, databases, systems, and entire sites while reducing the cost and complexity associated with recovery.

Once you have connected your CDP hardware to your network and set your network configuration via Web Setup, you are ready to protect your data.

The Host-based CDP method uses a host-based device driver to mirror existing user volumes/LUNs to the CDP appliance. For information on protecting your data in a Windows or Linux environment, refer to the DiskSafe User Guide.

For Unix platforms, such as HP-UX, the native OS volume manager is used to mirror data to the CDP appliance. For information on protecting your data in an AIX, HP-UX, or Solaris environment, refer to the related vendor user guide.

Protection can also be set using the FalconStor Management Console. Refer to the FalconStor Management Console section.

On the CDP appliance, TimeMark and CDP journaling can be configured to create recovery points to protect the mirrored disk. Replication can also be used for disaster recovery protection. FalconStor Snapshot Agents are installed on the host machines to ensure transactional level integrity of each snapshot or replica.

Architecture

NSS FalconStor NSS is available in multiple form factors. Appliances with internal storage are available in various sizes for easy deployment to remote sites or offices. Two FalconStor NSS devices can be interconnected for mirroring and active/active failover, ensuring HA operations. FalconStor NSS gateway appliances can be connected to any external storage arrays, allowing you to leverage the storage systems you have in place. FalconStor NSS can also be purchased as software (software appliance kits) to install on servers.

CDP FalconStor CDP can be deployed in several ways to best fit your organization’s needs. FalconStor CDP is available in multiple configurations suitable for remote offices, branch offices, data centers, and remote DR sites. Appliances with internal storage for both physical and virtual servers are available in various sizes for easy deployment to remote sites or offices. Gateway appliances can be connected to any existing external storage array, allowing you to use and reuse the storage systems you already have in place.

FalconStor CDP can also be purchased as a software appliance kit to install on servers or as a virtual appliance that integrates with virtual server technology.

FalconStor CDP can use both a host-based approach and a fabric-based approach to capture and track data changes. For a host-based model, a FalconStor DiskSafe Agent runs on the application server to capture block-level changes made to a system or data disk without impacting application performance. It mirrors the data to a back-end FalconStor CDP appliance, which handles all of the data protection operations. All journaling, snapshot processing, mirroring, and replication occur on the out-of-band FalconStor CDP appliance, so that primary storage I/O remains unaffected.

In the fabric-based model, a pair of FalconStor CDP Connector write-splitting appliances are placed into a FC or iSCSI SAN fabric. FalconStor CDP gateway appliances function similarly to switches: they split data writes off to one or more out-of-band FalconStor CDP appliances that provide data protection functionality. The pair of FalconStor connector appliances is always configured in a high availability (HA) cluster to provide fault tolerance.

Components

The primary components of the CDP/NSS Storage Network are the storage server, SAN clients, and the FalconStor Management Console. These components all sit on the same network segment, the storage network.

Server The storage server is a dedicated network storage server. The storage server is attached to the physical SCSI and/or Fibre Channel storage devices on one or more SCSI or Fibre Channel busses.

The job of the storage server is to communicate data requests between the clients and the logical (SAN) resources (logically mapped storage devices on the storage network) via Fibre Channel or iSCSI.

SAN Clients SAN Clients are the actual file and application servers. They are sometimes referred to as IPStor SAN Clients because they utilize the storage resources via the storage server.

You can have iSCSI or Fibre Channel SAN Clients on your storage network. SAN Clients access their storage resources via iSCSI initiators (for iSCSI) or HBAs (for Fibre Channel or iSCSI). The storage resources appear as locally attached devices to the SAN Clients’ operating systems (Windows, Linux, Solaris, etc.) even though the SCSI devices are actually located at the storage server.

Console The ‘FalconStor Management Console’ is the administration tool for the storage network. It is a Java application that can be used on a variety of platforms and allows IPStor administrators to create, configure, manage, and monitor the storage resources and services on the storage network.

Physical Resource

Physical resources are the actual devices attached to this storage server. These can be hard disks, tape drives, device libraries, and RAID cabinets.

Logical Resource

All resources defined on the storage server, including physical SAN Resources (virtual drives, and Service-Enabled Devices), Replica Resources, and Snapshot Groups.

Clients do not gain access to physical resources; they only have access to logical resources. This means that an administrator must configure each physical resource to one or more logical resources so that they can be assigned to the clients.

Logical Resources consist of sets of storage blocks from one or more physical hard disk drives. This allows the creation of Logical Resources that contain a portion of a larger physical disk device or an aggregation of multiple physical disk devices. Understanding how to create and manage Logical Resources is critical to a successful storage network. See “Logical Resources” for more information.

Acronyms

Acronym Definition

ACL Access Control List

ACSL Adaptor, Channel, SCSI ID, LUN

ALUA Asymmetric Logical Unit Access

API Application Programming Interface

BDC Backup Domain Controller

BMR Bare Metal Recovery

CCM Central Client Manager

CCS Command Central Storage

CDP Continuous Data Protector

CDR Continuous Data Replication

CHAP Challenge Handshake Authentication Protocol

CIFS Common Internet File System

CLI Command Line Interface

DAS Direct Attached Storage

FC Fibre Channel

FCoE Fibre Channel over Ethernet

GUI Graphical User Interface

GUID Globally Unique Identifier

HBA Host Bus Adapter.

HCA Host Channel Adapter.

IMA Intelligent Management Administrator

I/O Input / Output

IPMI Intelligent Platform Management Interface

iSCSI Internet Small Computer System Interface

JBOD Just a Bunch Of Disks

LAN Local Area Network

LUN Logical Unit Number.

MIB Management Information Base

MPIO Microsoft Multipath I/O

NFS Network File System

NIC Network Interface Card

NPIV N_Port ID Virtualization

NSS Network Storage Server

NTFS NT File System

NVRAM Non-volatile Random Access Memory

OID Object Identifier

PDC Primary Domain Controller

POSIX Portable Operating System Interface

RAID Redundant Array of Independent Disks

RAS RAS (Reliability, Availability, Service)

RPC Remote Procedure Call

SAN Storage Area Network

SCSI Small Computer System Interface

SDM SAN Disk Manager

SED Service-Enabled Device

SMI-S Storage Management Initiative – Specification

SNMP Simple Network Management Protocol

SRA Snapshot Resource Area

SSD Solid State Disk

VAAI vStorage APIs for Array Integration

VLAN Virtual Local Area Network

VSA Volume Set Addressing

VSS Volume Shadow Copy Service

WWNN World Wide Node Number

WWPN World Wide Port Number

Terminology

Appliance-Based protection

Appliance-based protection, or In-band protection refers to a storage server that is placed in the data path between an application host and its storage. This allows CDP/NSS to provision the disk back to the application host while allowing data protection services.

Bare Metal Recovery (BMR)

The process of rebuilding a computer after a catastrophic failure. The normal bare metal restoration process is: install the operating system from the product disks, install the backup software (so you can restore your data), and then restore your data.

Central Client Manager (CCM)

A Java console that provides central management of client-side applications (DiskSafe, snapshot agents) and monitors client storage. CCM allows you to manage your clients in groups, enhancing accuracy and consistency of policies across grouped servers. For example, Exchange groups, SharePoint groups. For additional information, refer to the Central Client Manager User Guide.

CDP Gateway Additional term for a storage server/ CDP appliance that is providing Continuous Data Protection.

CDP Virtual Appliance (VA)

A virtual machine (VM) that runs FalconStor CDP software. This VA is responsible for capturing point-in-time TimeMark Snapshots of data and maintaining the CDP data journal. A snapshot storage pool and CDP storage pool are managed by the CDP VA using virtual disks mapped from disk files or RDMs. Refer to the CDP Virtual Appliance User Guide for more information.

Command Line Interface

The Command Line Interface (CLI) is a simple interface that allows client machines to perform some of the more common functions currently performed by the FalconStor Management Console. Administrators can use the CLI to automate many tasks, as well as integrate CDP/NSS with their existing management tools.

The CLI is installed as part of the CDP/NSS Client installation. Once installed, a path must be set up for Windows clients in order to be able to use the CLI. Refer to the ‘Command Line Interface’ section for details.

Cross-Mirror Failover

(For virtual appliances only). A non-shared storage failover option that provides high availability without the need for shared storage. Used with virtual appliances containing internal storage. Mirroring is facilitated over a dedicated, direct IP connection. This option removes the requirements of shared storage between two partner storage server nodes. For additional information on using this feature for your virtual appliances, refer to ‘Cross-mirror failover requirements’.

DiskSafe™ Agent

The CDP DiskSafe Agent is host-based replication software that delivers block-level data protection with centralized management for Microsoft Windows-based servers as part of the CDP solution. The DiskSafe Agent delivers real-time and periodic mirroring for both DAS and SAN storage to complement the CDP Journaling feature, TimeMark® Snapshots, and Replication.

DynaPath® DynaPath is a load balancing/path redundancy application that ensures constant data availability and peak performance across the SAN by performing Fibre Channel load-balancing, transparent failover, and fail-back services. DynaPath creates parallel active storage paths that transparently reroute server traffic without interruption in the event of a storage network problem. For additional information, refer to the DynaPath User Guide.

E-mail Alerts Using pre-configured scripts (called triggers), Email Alerts monitors a set of pre-defined, critical system components (SCSI drive errors, offline device, etc.) so that system administrators are able to take corrective measures within the shortest amount of time, ensuring optimum service uptime and IT efficiency. For additional information, refer to the ‘Email Alerts (updated March 2013)’ section.

FalconStor Management Console

Comprehensive, graphical administration tool to configure all data protection services, set properties, and manage storage. For more information, refer to the ‘FalconStor Management Console’ section.

FileSafe FileSafe is a software application that protects your data by backing up files and folders to another location. Data is backed up to a location called a repository. The repository can be local (on your computer or on a USB device), remote (on a shared network server or NAS resource), or on a storage server where the FileSafe Server option is licensed and enabled. For more information, see the FileSafe User Guide.

GUID The Globally Unique Identifier (GUID) is a unique 128-bit number that is used to identify a particular component, application, file, database entry, and/or user.

Host Zone Usually a Fibre Channel zone that comprises an application server's initiator port and a CDP/NSS target port. For more information, refer to ‘Zoning’.

Host-based protection

Host-based protection refers to DiskSafe™ and FileSafe™, where the locally attached disk is mirrored to a CDP-provisioned disk with data protection services.

HotZone® A CDP/NSS option that automatically re-maps data from frequently used areas of disks to higher performance storage devices in the infrastructure, resulting in enhanced read performance for the application accessing the storage. This feature is not available for CDP connector appliances. For additional information, refer to the ‘HotZone’ section.

HyperTrac™ The HyperTrac Backup Accelerator (HyperTrac) works in conjunction with CDP and NSS to increase tape backup speed, eliminate backup windows, and off load processing from application servers.

HyperTrac for VMware enhances the functionality of VMware Consolidated Backup (VCB) by allowing TimeViews of the production virtual disk to be used as the source of the VCB snapshot. Unlike the traditional HyperTrac model, the TimeViews are not mounted directly to the storage server.

HyperTrac for Hyper-V enables mounting production TimeViews for backup via Microsoft Hyper-V machines. For more information, refer to the HyperTrac User Guide.

IPMI Intelligent Platform Management Interface (IPMI) is a hardware level interface that monitors various hardware functions on a server.

iSCSI Client iSCSI clients are the file and application servers that access CDP/NSS SAN Resources using the iSCSI protocol.

iSCSI Target A storage target for the client.

Logical Resources

Logically mapped devices on the storage server. They are comprised of physical storage devices, known as Physical Resources.

MIB A Management Information Base (MIB) is an ASCII text file that describes SNMP network elements as a list of data objects. It is a database of information, laid out in a tree structure, with MIB objects as the leaf nodes, that you can query from an SNMP agent. The purpose of the MIB is to translate numerical strings into human-readable text. When an SNMP device sends a Trap, it identifies each data object in the message with a number string called an object identifier (OID). Refer to the ‘SNMP Integration’ section for additional information.

MicroScan™ FalconStor MicroScan is a patented de-duplication technology that minimizes the amount of data transferred during replication by eliminating inefficiencies at the application and file system layer. Data changes are replicated at the smallest possible level of granularity, reducing bandwidth and associated storage costs for disaster recovery (DR). FalconStor CDP and FalconStor NSS leverage MicroScan technology for changed-block tracking, which allows for extremely low-level, disk sector (512-bytes) delta data replication, as well as compression and encryption, for the most efficient and secure sub-block-based replication available. This considerably reduces the recovery point objectives (RPO), as replication can be performed more frequently and with less network bandwidth utilization.

NIC Port Bonding

NIC Port Bonding is a load-balancing/path-redundancy feature (available for Linux) that enables your storage server to load-balance network traffic across two or more network connections creating redundant data paths throughout the network.
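If NIC Port Bonding has been configured on a Linux storage server, you can confirm the state of the bond from a shell. This is a generic Linux check, not a FalconStor-specific command, and the interface name bond0 is an assumption; use the bond name configured on your server.

# Show the bonding mode, link status, and member NICs of the bond
# ("bond0" is an assumed interface name)
cat /proc/net/bonding/bond0

# List the physical NICs currently enslaved to the bond
cat /sys/class/net/bond0/bonding/slaves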

NPIV N_Port ID Virtualization (NPIV) allows multiple N_Port IDs to share a single physical N_Port, so an initiator, a target, and a standby can occupy the same physical port. This is not supported when using a non-NPIV driver.

All Fibre Channel switches must support NPIV as NPIV (point-to-point) mode is enabled by default.

NSS Virtual Appliance (VA)

A virtual machine (VM) that runs FalconStor NSS software. This VA delivers high-speed iSCSI and virtualization storage service through VMware’s virtual appliance architecture: a plug-and-play VMware virtual machine running on VMware vSphere hosts. NSSVA is a VMware-certified Storage Virtual Appliance (SVA). Refer to the NSS Virtual Appliance User Guide for more information.

OID The Object Identifier (OID) is the unique number written as a sequence of sub identifiers in decimal notation. For example, 1.3.6.1.4.1.2681.1.2.102. It uniquely identifies data objects that are the subjects of an SNMP message. When your SNMP device sends a Trap or a GetResponse, it transmits a series of OIDs, paired with their current values.
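As an illustration only, the example OID above can be queried from an SNMP manager host with the standard net-snmp command line tools, assuming the storage server's SNMP agent is enabled and reachable and that "public" is the configured community string (both are assumptions):

# Translate the example OID into its symbolic name using the loaded MIBs
snmptranslate -Td 1.3.6.1.4.1.2681.1.2.102

# Walk the enterprise subtree on the storage server (10.0.0.2 is the
# example address used elsewhere in this guide)
snmpwalk -v2c -c public 10.0.0.2 1.3.6.1.4.1.2681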

Prefetch A feature that enables pre-fetching of data for clients. This allows clients to read ahead consecutively, which can result in improved performance because the storage server will have the data ready from the anticipatory read as soon as the next request is received from the client. This will reduce the latency of the command and improve the sequential read benchmarks in most cases. For additional information, refer to the ‘Prefetch’ section.

Read Cache An intelligent, policy-driven, disk-based staging mechanism that automatically remaps "hot" (frequently used) areas of disks to high-speed storage devices, such as RAM disks, NVRAM, or Solid State Disks (SSDs). For additional information, refer to the ‘Read Cache’ section.

RecoverTrac™ FalconStor RecoverTrac is a disaster recovery automation tool that extends the functionality of FalconStor CDP/NSS business continuity disaster recovery (BCDR) solutions, offering service-oriented recovery to both physical and virtual server infrastructures.

RecoverTrac eliminates complex and time-consuming recovery operations by allowing administrators to create jobs to manage the recovery process for multiple host machines for rapid recovery of files, databases, systems, and entire sites. The RecoverTrac solution works by mapping servers, applications, networking storage, and failover procedures from source sites to recovery sites, automating the logistics involved in resuming business operations at the recovery sites.

Recovery Agents

FalconStor recovery agents (available from the FalconStor Customer Service portal) offer recovery solutions for your database and messaging systems. FalconStor® Message Recovery for Microsoft® Exchange (MRE) and Message Recovery for Lotus Notes/ Domino (MRN) expedite mailbox/message recovery by enabling IT administrators to quickly recover individual mailboxes from point-in-time snapshot images of their messaging server. FalconStor® Database Recovery for Microsoft® SQL Server expedites database recovery by enabling IT administrators to quickly recover a database from point-in-time snapshot images of their SQL database. For details, refer to the Recovery Agents User Guide.

Replication The process by which a SAN Resource maintains a copy of itself either locally or at a remote site. The data is copied, distributed, and then synchronized to ensure consistency between the redundant resources. The SAN Resource being replicated is known as the primary disk. The changed data is transmitted from the primary to the replica disk so that they are synchronized. Under normal operation, clients do not have access to the replica disk.

The replication option works with both CDP and NSS solutions to replicate data over any existing infrastructure. In addition, it can be used for site migration, remote site consolidation for backup, and similar tasks. Using a TOTALLY Open™ storage-centric approach, replication is configured and managed independently of servers, so it integrates with any operating system or application for cost-effective disaster recovery (DR). For additional information, refer to the “Replication” section.

Replication Scan

A scan comparing the primary and replica disks for differences. If the primary and replica disks are known to have similar data (bit by bit, not file by file), then a manual scan is recommended. The initial scan is automatically triggered and all subsequent scans must be manually triggered (right-click on a device and select Replication > Scan).

Retention A TimeMark retention schedule and reclamation policy set at the server level allow you to set global TimeMark preservation patterns. The TimeMark retention schedule can be set by right-clicking on the server, and selecting Properties --> TimeMark Maintenance tab. A TimeMark Retention Policy can be set for an individual SAN resource, replica resource, or group when you right-click the resource and select TimeMark/CDP --> Enable.

SafeCache™ This option offers improved performance by using high-speed storage devices as a persistent (non-volatile) read/write cache. The persistent cache can be mirrored for added protection. This option is not available for CDP connector appliances. For additional information, refer to the ‘SafeCache’ section.

SAN Resource Provides storage for file and application servers (called SAN Clients). When a SAN Resource is assigned to a SAN client, a virtual adapter is defined for that client. The SAN Resource is assigned a virtual SCSI ID on the virtual adapter. This mimics the configuration of actual SCSI storage devices and adapters, allowing the operating system and applications to treat them like any other SCSI device. For information on creating a SAN resource, refer to the ‘Create SAN resources - Procedures’ section.

Service-Enabled Device

Service-Enabled Devices are hard drives or RAID LUNs with existing data that can be accessed by CDP or NSS to make use of all key CDP/NSS storage services (mirroring, snapshot, etc.). This can be done without any migration/copying, without any modification of data, and with minimal downtime. Service-Enabled Devices are used to migrate existing drives into the SAN.

SMI-S The FalconStor® Storage Management Initiative – Specification (SMI-S) Provider for CDP and NSS storage enables CDP and NSS users to have central management of multi-vendor storage networks for more efficient utilization. CDP and NSS solutions use the SMI-S standard to expose the storage systems they manage to the SMI-S Client. A typical SMI-S Client can discover FalconStor devices through this interface. It utilizes CIM-XML, which is a WBEM protocol that uses XML over HTTP to exchange Common Information Model (CIM) information. For additional information, refer to the ‘SMI-S Integration’ section.
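As a rough sketch of how an SMI-S client talks to a CIM-XML provider, the sblim wbemcli utility can enumerate instances over HTTP. The user, password, port, namespace, and class shown below are generic CIM examples and assumptions, not the documented values for the FalconStor SMI-S Provider; consult the ‘SMI-S Integration’ section for the actual settings.

# Enumerate instances of a CIM class from a CIM-XML (WBEM) server
# (credentials, port 5988, namespace root/cimv2, and the class name are illustrative)
wbemcli ei 'http://user:password@10.0.0.2:5988/root/cimv2:CIM_StorageVolume'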

Snapshot A snapshot of an entire device allows us to capture data at any given moment in time and move it to either tape or another storage medium, while allowing data to be written to the device. You can perform a snapshot to capture a point-in-time image of your data volumes (virtual drives) using minimal storage space. For additional information, refer to the ‘Snapshot Resource’ section.

Snapshot Agent Application-aware Snapshot Agents provide complete data protection for active databases such as Microsoft SQL Server, Oracle, Sybase, and DB2, and messaging applications such as Microsoft Exchange and Lotus Notes. These agents work with both CDP and NSS to ensure that snapshots are taken with full transactional integrity. For details, refer to the Snapshot Agents User Guide.

SNMP Simple Network Management Protocol (SNMP) is an Internet-standard protocol for managing devices on IP networks. For additional information, refer to the “SNMP Integration” section.

Storage Cluster Interlink Port

A physical connection between two servers. Version 7.0 and later requires a Storage Cluster Interlink Port for failover setup. For additional information regarding the ‘Storage Cluster Interlink’, refer to the ‘Failover’ section.

Thin Provisioning

For virtual resources, Thin Provisioning allows you to use your storage space more efficiently by allocating a minimum amount of space for the virtual resource. Then, when usage thresholds are met, additional storage is allocated as necessary. Thin Provisioning may be applied to primary storage, replica storage (at the disaster recovery [DR] site), and mirrored storage. For additional information, refer to the ‘Thin devices’ section.

TimeMark® TimeMark technology works with CDP and NSS to enable you to create scheduled and on-demand point-in-time delta snapshot copies of data volumes. TimeMark includes the FalconStor TimeView® feature, which creates an accessible, mountable image of any snapshot. This provides a tool to freely create multiple and instantaneous virtual copies of an active data set. The TimeView images can be assigned to multiple application servers with read/write access for concurrent, independent processing, while the original data set is actively accessed and updated by the primary application server. For additional information, refer to the ‘TimeMarks and CDP’ section.

TimeView® An extension of the TimeMark option that allows you to mount a virtual drive as of a specific point in time. For additional information, refer to the ‘Recover data using the TimeView feature’ section.

Trap Asynchronous notification from agent to manager. Includes the current sysUpTime value, an OID identifying the type of trap, and optional variable bindings. Destination addressing for traps is determined in an application-specific manner, typically through trap configuration variables in the MIB. For additional information, refer to the ‘SNMP Traps’ section.

Trigger An event that tells your CDP/NSS-enabled application when it is time to perform a snapshot of a virtual device. FalconStor’s Replication, TimeMark/CDP, Snapshot Copy, and ZeroImpact Backup options all trigger snapshots.

VAAI A VAAI-aware storage device is able to understand commands from hypervisor resources and perform storage functions. CDP/NSS version 7.00 supports VAAI when assigning NSS LUNs to a vSphere 5 host. VAAI is automatically enabled; no additional configuration is required.

WWN Zoning Zoning which uses the WWPN in the configuration. The WWPN remains the same in the zoning configuration regardless of the port location. If a port fails, you can simply move the cable from the failed port to another valid port without having to reconfigure the zoning.
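When building WWPN-based zones, you need the WWPN of each initiator port. On a Linux application server, the standard sysfs location below reports it; this is a generic Linux example, not a FalconStor-specific tool, and the output format varies by HBA driver.

# Print the WWPN of every Fibre Channel HBA port on this host
cat /sys/class/fc_host/host*/port_name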

ZeroImpact™ Backup Enabler

Allows you to perform a local raw device tape backup/restore of your virtual drives. This eliminates the need for the application server to play a role in backup and restore operations.

Web Setup

Once you have physically connected the appliance, powered it on, and completed the following Web Setup installation and server setup steps, you are ready to begin using your CDP or NSS storage server.

This step may have already been completed for you. Refer to the Software Quick Start Guide for details regarding each of the following steps:

1. Configure the Appliance

• The first time you connect, you will be asked to:

• Select a language.

(If the wrong language is selected, click your browser back button or go to: //10.0.0.2/language.php to return to the language selection page.)

• Read and agree to the FalconStor End User License Agreement.
• (Storage appliances only) Configure your RAID system.
• Enter the network configuration for your appliance.

2. Manage License Keys

• Enter the server license keys.

3. Check for Software Updates

• Click the Check for Updates button to check for updated agent software.
• Click the Download Updates button to download the selected client software.

4. Install Management Software and Guides

5. Install Client Software and Guides

6. Configure Advanced Features

Advanced features allow you to add storage capacity via Fibre Channel or iSCSI, or to disable web services if your business policy requires it.

If you encounter any problems while configuring your appliance, contact FalconStor technical support via the web at: www.falconstor.com/supportrequest.

Additional resources

You can download software builds, patches, and other documentation related to your FalconStor product from the FalconStor Customer Support Portal at support.falconstor.com (account required). Click the View Builds, Patches, & Documentation link in the GA Releases area to complete a simple search form and display available downloads.

Note that product release notes and patch descriptions may include information that is not in the user guide. Be sure to review all available documents.

If you need technical support, create a support ticket on the FalconStor Customer Support portal.

FalconStor Management Console

The FalconStor Management Console is the administration tool for the storage network. It is a Java application that can be used on a variety of platforms and allows administrators to create, configure, manage, and monitor the storage resources and services on the storage server network as well as run/view reports, enter licensing information, and add/delete administrators.

The FalconStor Management Console software can be installed on each machine connected to a storage server. The console is also available via download from your storage server appliance. If you cannot install the FalconStor Management Console on every client, you can launch a web-based version of the console from your browser and enter the IP address of the CDP/NSS server.

Launch the console

To launch an installed version of the console in a Microsoft Windows environment, select Start --> Programs --> FalconStor --> IPStor --> IPStor Console.

In Linux and other UNIX environments, execute the following:

cd /usr/local/ipstorconsole
./ipstorconsole

To launch a web-based version of the console, open a browser from any machine and enter the IP address of the CDP/NSS server (for example: http://10.0.0.2) and the console will launch. If you have Web Setup, select the Go button next to Install Management Software and Guides and click the Launch Console link.

In the future, to skip going through Web Setup, open a browser from any machine and enter the IP address of the storage server followed by :81, for example: http://10.0.0.2:81/ to launch the console. The computer running the browser must have Java Runtime Environment (JRE) version 1.6.
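A quick way to confirm that the machine running the browser meets the JRE requirement is to check the installed Java version from a command prompt or shell; the exact version string format varies by vendor.

# Print the installed Java runtime version; it should report the required 1.6 runtime
java -version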


Notes:

• If your screen resolution is 640 x 480, the splash screen may be cut off while the console loads.

• The console might not launch on certain systems with display settings configured to use 16 colors.

• The console needs to be run from a directory with “write” access. Otherwise, the host name information and message log file retrieved from the storage server will not be able to be saved to the local directory. As a result, the console will display event messages as numbers and console options will not be able to be saved.

• You must be signed on as the local administrator of the machine on which you are installing the Windows console package.

Connect to your storage server

1. Discover all storage servers on your storage subnet by selecting Tools --> Discover.

2. Connect to a storage server.

You can connect to an existing storage server by right-clicking on it and selecting Connect. Enter a valid user name and password (both are case sensitive).

To connect to a server that is not listed, right-click on the Servers object, select Add, and then enter the name of the server, a valid user name, and a password.

When you connect to a server for the first time, a configuration wizard is launched to guide you through the set up process.

You may see a dialog box notifying you of new devices attached to the server. Here, you will see all devices that are either unassigned or reserved devices. At this point you can either prepare the device (reserve it for a virtual or Service-Enabled Device) and/or create a logical resource.

Once you are connected to a server, the server icon will change to show that you are connected:

If you connect to a server that is part of a failover configuration, you will automatically be connected to both servers.

Note: The FalconStor Management Console remembers the servers to which the console has successfully connected. When you close and restart the console, the servers display in the tree but you are not automatically connected to them.

Configure your server using the configuration wizard

The configuration wizard guides you through entering license keycodes and setting up your network configuration. If this is the first time you are connecting to your CDP or NSS server, the wizard launches automatically.

Step 1: Enter license keys

Click the Add button and enter your keycodes.

Be sure to enter keycodes for any options you have purchased. Each FalconStor option requires that a keycode be entered before the option can be configured and used. Refer to ‘Manage licenses (updated January 2013)’ for more information.

Step 2: Setup network

Enter information about your network configuration.

If you need to change storage server IP addresses, you must make these changes using System Maintenance --> Network Configuration in the console. Using yast or other third-party utilities will not update the information correctly.

Refer to ‘Network configuration’ for more information.

An additional IPMI configuration step appears in the wizard only if IPStor detected IPMI when the server booted up.

Note: After completing the configuration wizard, if you need to add license keycodes, you can right-click on your CDP/NSS appliance and select License.

Note: After completing the configuration wizard, if you need to change these settings, you can right-click on your CDP/NSS appliance and select System Maintenance --> Network Configuration.

Step 3: Set hostname

Enter a valid name for your storage appliance.

Valid characters are letters, numbers, underscore, or dash.

You will need to restart the server if you change the hostname.

Note: Do not change the hostname if you are using block devices. If you do, all block devices claimed by CDP/NSS will be marked offline and seen as foreign devices.
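If you want to sanity-check a candidate hostname against the valid-character rule above before entering it, a small shell test such as the following works; the name shown is a hypothetical example.

# Letters, numbers, underscore, and dash are the only valid characters
NAME="nss-appliance_01"
if [[ "$NAME" =~ ^[A-Za-z0-9_-]+$ ]]; then
    echo "valid hostname: $NAME"
else
    echo "invalid hostname: $NAME"
fi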

FalconStor Management Console user interface

The FalconStor Management Console displays the configuration for the storage servers on your storage network. The information is organized in a familiar Explorer-like tree view.

The tree allows you to navigate the various storage servers and their configuration objects. You can expand or collapse the display to show only the information that you wish to view. To expand a collapsed item, click the expand symbol next to the item; to collapse an item, click the collapse symbol next to the item. Double-clicking on the item will also toggle the expanded/collapsed view of the item.

You need to connect to a server before you can expand it.

When you highlight an object in the tree, the right-hand pane contains detailed information about the object. You can select one of the tabs for more information.

The console log located at the bottom of the window displays information about the local version of the console. The log features a drop-down box that allows you to see activity from this console session.

Search for objects in the tree

The console has a search feature that helps you find any physical device, virtual device, or client on any storage server. To search:

1. Highlight a storage server in the tree.

2. Select Edit menu --> Find.

3. Select the type of object to search for and the search criteria.

Once you select an object type, a list of existing objects appears. If you highlight one, you will be taken directly to that object in the tree.

Alternatively, you can type the full name, ID, ACSL (adapter, channel, SCSI, LUN), or GUID (Globally Unique Identifier). Once you click the Search button, you will be taken directly to that object in the tree.

Storage server status and configuration

The console displays the configuration and status of the storage server. Configuration information includes the version of the CDP or NSS software and base operating system, the type and number of processors, amount of physical and swappable memory, supported protocols, and network adapter information.

The Event Log tab displays system events and errors.

Alerts The console displays all critical alerts upon login to the server. Select the Display only the new alerts next time option if you only want to see new critical alerts the next time you log in. Selecting this option indicates acknowledgement of the alerts.

Discover storage servers

CDP/NSS can automatically discover all storage servers on your storage subnet. Storage servers running CDP or NSS will be recognized as storage servers. To discover the servers:

1. Select Tools --> Discover IPStor Servers.

2. Enter your network criteria.

Protect your storage server’s configuration

FalconStor provides several convenient ways to protect your CDP or NSS configuration. This is useful for disaster recovery purposes, such as if a storage server is out of commission but you have the storage disks and want to use them to build a new storage server. You should create a configuration repository even on a standalone server.

Continuously save configuration

You can create a configuration repository that maintains a continuously updated version of your storage system configuration. The status of the configuration repository is displayed on the console under the General tab. In the case of a failure of the configuration repository, the console displays the time of the failure along with the last successful update. This feature works seamlessly with the FalconStor Failover option to provide business continuity in the event that a storage server fails. For additional redundancy, the configuration repository can be mirrored to another disk.

To create a configuration repository, make sure there is at least 10 GB of available space.

1. Highlight a storage server in the tree.

2. Right-click on the server and select Options --> Enable Configuration Repository.

3. Select the physical device(s) for the Configuration Repository resource.

4. Confirm all information and click Finish to create the repository.

You will now see a Configuration Repository object in the tree under Logical Resources.

To mirror the repository, right-click on it and select Mirror --> Add.

Refer to the CDP/NSS System Recovery Guide for details on repairing or replacing your CDP/NSS server.

Manage licenses (updated January 2013)

To license CDP/NSS and its options, make sure you have obtained your CDP/NSS keycode(s) from FalconStor or its representatives. Once you have the license keycodes, follow the steps below:

1. In the console, right-click on the server and select License.

The License Summary window is informational only and displays a list of the options supported for this server. You can enter keycodes for your purchased options on the Keycodes Detail window.

2. Press the Add button on the Keycodes Detail window to enter each keycode.

If multiple administrators are logged into a storage server at the same time, license changes made from one console will take effect in the other consoles only when the administrator disconnects and then reconnects to the server.

3. If your licenses have not been registered yet, click the Register button on the Keycodes Detail window.

Select Online to register online if you have an Internet connection. Otherwise, select Offline registration.

Offline registration

Offline registration is useful when you do not have an internet connection. When you select the Offline registration option, you will see the Offline Registration dialog.

To register offline:

1. Specify a path and file name in which to save the registration information.

Registration information file names can only use English alphanumeric characters and must have a .dat extension. You cannot use a single digit as the name. For example, company1.dat is valid (1.dat is not valid).

2. Click the Save button.

It is a good idea to keep the dialog open while you complete the remaining steps.

3. Copy the saved file to a computer with an Internet connection and email it to FalconStor’s registration server: ([email protected]).

It is not necessary to write anything in the subject or body of the e-mail.

If your email is working correctly, you should receive a reply within a few minutes.

4. When you receive a reply, save the attachment (with the .sig extension) to the local drive of the computer where the console is running.

5. Back in the Offline Registration dialog, specify the path and name of the .sig file.

6. Click the Send button to send the file (do not change the file name) to the storage server to complete your registration.

7. Click the Finish button.

Note: In order to prevent the possibility of unsuccessful email delivery to the FalconStor registration server, disable Delivery Status Notification (DSN) before you send the activation request email.

Note: If you do not receive a reply to your offline registration email within one hour after sending it, check your email encoding. Change it to UNICODE (UTF-8) if it is set otherwise and then send the email again.

Set server properties

To set properties for a specific server:

1. Right-click on the server and select Properties.

The tabs you see will depend upon your storage server configuration.

2. If you have multiple NICs (network interface cards) in your server, enter the IP addresses using the Server IP Addresses tab.

If the first IP address stops responding, the CDP/NSS clients will attempt to communicate with the server using the other IP addresses you have entered in the order they are listed.

Notes:

• In order for the clients to successfully use an alternate IP address, your subnet must be set properly so that the subnet itself can redirect traffic to the proper alternate adapter.

• You cannot assign two or more NICs within the same subnet (a quick way to check the subnet of each NIC is shown after this list).
• The client becomes aware of the multiple IP addresses when it initially connects to the server. Therefore, if you add additional IP addresses in the console while the client is running, you must rescan devices (Windows clients) or restart the client (Linux/Unix clients) to make the client aware of these IP addresses.
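To confirm that no two NICs on the storage server share a subnet, you can list the addresses and routes from a shell on the server. This is a generic Linux check and does not replace the console's Network Configuration screens.

# Show the IPv4 address and prefix length of each NIC
ip -4 addr show

# Show the directly connected subnets; each NIC should appear in a different one
ip -4 route show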

3. On the Activity Database Maintenance tab, indicate how often the SAN data should be purged.

The Activity Log is a database that tracks all system activity, including all data read, data written, number of read commands, write commands, number of errors, etc. This information is used to generate SAN information for the CDP/NSS reports.

4. On the SNMP Maintenance tab, indicate which types of messages should be sent as traps to your SNMP manager.

Five levels are available:

• None – (Default) No messages will be sent.
• Critical – Only critical errors that stop the system from operating properly will be sent.
• Error – Errors (failures such as a resource is not available or an operation has failed) and critical errors will be sent.
• Warning – Warnings (something occurred that may require maintenance or corrective action), errors, and critical errors will be sent.
• Informational – Informational messages, errors, warnings, and critical error messages will be sent.
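To verify that traps actually arrive after changing the trap level, you can run the net-snmp trap daemon in the foreground on the SNMP manager host. This assumes net-snmp is installed there; it is a generic verification step, not part of the console configuration.

# Listen for incoming traps on the default port (162) and log them to stdout
snmptrapd -f -Lo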

5. On the iSCSI tab, set the iSCSI portal that your system should use as default when creating an iSCSI target.

If you have multiple NICs, when you create an iSCSI target, this IP address will be selected by default for you.
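From a Linux client with the open-iscsi initiator, you can confirm that the default portal is reachable and exposes the expected targets. The address below is the example IP used elsewhere in this guide and 3260 is the standard iSCSI port; substitute your portal address.

# Discover iSCSI targets advertised by the server's default portal
iscsiadm -m discovery -t sendtargets -p 10.0.0.2:3260

# Log in to the discovered targets on that portal
iscsiadm -m node -p 10.0.0.2:3260 --login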

6. If necessary, change settings for mirror resynchronization and replication on the Performance tab.

The settings on this tab affect system performance. The defaults should be optimal for most configurations. You should only need to change the settings for special situations, such as if your mirror is remotely located.

Mirror Synchronization Throttle - Set the default value for the individual mirror device to use (since throttle is disabled by default for individual mirror devices). Each mirror device will be able to synchronize up to the value set here (in KB per second). If you select 0 (zero), all mirror devices will use their own throttle value (if set), otherwise there is no limit for the device.

Select the Start initial synchronization when mirror is added option to have synchronization begin immediately for newly created mirrors. The synchronize out-of-sync mirror policy does not apply in this case. If the Start initial synchronization when mirror is added option is not selected, the mirror begins synchronization based on the policy configured.

Synchronize Out-of-Sync Mirrors - Determine how often the system should check and attempt to resynchronize active out-of-sync mirrors, how often it should retry synchronization if it fails to complete, and whether or not to include replica mirrors. These settings are only used for active mirrors; if a mirror is suspended because the lag time exceeds the acceptable limit, that resynchronization policy applies instead. This global mirror policy applies to all individual mirrors and contains the following settings (a scheduling sketch follows the list):

• Check and synchronize out-of-sync mirrors every [n][unit] - Check the mirror status at this interval and trigger a mirror synchronization when the mirror is not synchronized.

• Up to [n] mirrors at each interval - Indicate the number of mirrors that can be synchronized concurrently. This rule does not apply to user-initiated operations, such as synchronize, resume, and rebuild. This rule also does not apply when the Start initial synchronization when mirror is added option is enabled.

• Retry synchronization for each resource up to [n] times when synchronization failed - Indicate the number of times the system will retry synchronizing an out-of-sync mirror at the interval set by the Check and synchronize out-of-sync mirrors every rule. Once the mirror fails to synchronize the specified number of times, a manual synchronization is required to initiate mirror synchronization again.

• Include replica mirrors in the automatic synchronization process - Enable this option to include replica mirrors in the automatic synchronization process. This option is disabled by default, which means the mirror policy will not apply to any replica device with a mirror on the server; in that case, a manual synchronization is required to re-sync the replica mirror. When this option is enabled, the mirror policies apply to the replica mirror.
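
To make the interaction of these settings concrete, here is a hedged sketch of a single check interval; the function name and the mirror record fields are hypothetical, not part of the product.

    def run_check_interval(out_of_sync_mirrors, max_concurrent, max_retries,
                           include_replicas=False):
        """One pass of the 'check and synchronize out-of-sync mirrors' policy:
        start up to max_concurrent synchronizations, skip replica mirrors unless
        they are included, and skip mirrors that have exhausted their retries
        (those now require a manual synchronization)."""
        started = []
        for mirror in out_of_sync_mirrors:
            if len(started) >= max_concurrent:
                break                              # up to [n] mirrors at each interval
            if mirror["is_replica"] and not include_replicas:
                continue                           # replica mirrors need a manual re-sync
            if mirror["retries"] >= max_retries:
                continue                           # retry budget used up for this resource
            mirror["retries"] += 1
            started.append(mirror["name"])         # a synchronization would be triggered here
        return started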

Replication Throttle - Click the Configure Throttle button to launch the Configure Target Throttle screen, allowing you to set, modify, or delete replication throttle settings. Refer to Set the replication throttle for additional information.

Enable MicroScan - MicroScan analyzes each replication block on-the-fly during replication and transmits only the changed sections of the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. The global MicroScan option sets a default in all replication setup wizards. MicroScan can still be enabled/disabled for each individual replication via the wizard regardless of the global MicroScan setting.
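
Conceptually, MicroScan-style delta replication compares a block section by section and transmits only the sections that differ. The sketch below is a generic illustration of that idea, not FalconStor's implementation; the 512-byte section size is an assumption.

    def changed_sections(old_block: bytes, new_block: bytes, section_size: int = 512):
        """Yield (offset, data) pairs for the sections of new_block that differ
        from old_block, so only those sections need to cross the network."""
        for offset in range(0, len(new_block), section_size):
            old = old_block[offset:offset + section_size]
            new = new_block[offset:offset + section_size]
            if new != old:
                yield offset, new

    # A 64 KB replication block with one small random update transmits a single
    # section instead of the whole block.
    old = bytes(64 * 1024)
    new = bytearray(old)
    new[10_000:10_004] = b"edit"
    assert len(list(changed_sections(old, bytes(new)))) == 1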

7. Optional: Select the Auto Save Config tab and enter information to save your storage server system configuration for journaling purposes. This option cannot be used to restore your system configuration. Refer to the CDP-NSS System Recovery Guide for information regarding restoring your system.

You can set your system to automatically replicate your system configuration to an FTP server on a regular basis. Auto Save takes a point-in-time snapshot of the storage server configuration prior to replication.

The target server you specify in the Ftp Server Name field must have an FTP server installed and enabled.

The Target Directory is the directory on the FTP server where the files will be stored. The directory name you enter here (such as ipstorconfig) is a directory on the FTP server (for example ftp\ipstorconfig). You should not enter an absolute path like c:\ipstorconfig.

The Username is the user that the system will log in as. You must create this user on the FTP site. This user must have read/write access to the directory named here.

In the Interval field, determine how often to replicate the configuration. Depending upon how frequently you make configuration changes to CDP/NSS, set the interval accordingly.

In the Number of Copies field, enter the maximum copies to keep. The oldest copy will be deleted as each new copy is added.
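
The upload-and-retention behavior described above can be approximated with Python's standard ftplib. This is only a hedged sketch of the idea, not how CDP/NSS performs Auto Save; the file naming scheme, directory name, and credentials are made up.

    import io
    import time
    from ftplib import FTP

    def auto_save_config(snapshot: bytes, host, user, password,
                         target_dir="ipstorconfig", keep_copies=10):
        """Upload one point-in-time configuration snapshot to a relative
        directory on the FTP server, then prune the oldest copies so that at
        most keep_copies remain."""
        name = time.strftime("config-%Y%m%d-%H%M%S.tgz")     # hypothetical naming scheme
        with FTP(host, user, password) as ftp:
            ftp.cwd(target_dir)                               # relative path, not an absolute one
            ftp.storbinary(f"STOR {name}", io.BytesIO(snapshot))
            copies = sorted(ftp.nlst())                       # timestamped names sort oldest-first
            for old in copies[:-keep_copies]:                 # drop everything but the newest copies
                ftp.delete(old)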

8. On the Location tab, you can enter a specific physical location of the machine. You can also select an image (smaller than 500 KB) to identify the server location. Once the location information is saved, the new tab displays in the FalconStor Management Console for that server.

9. On the TimeMark Maintenance tab, you can set a global reclamation policy.

Manage accounts

Only the root user can manage users and groups or reset passwords. You will need to add an account for each person who will have administrative rights in CDP/NSS. You will also need to add a user account for clients that will be accessing storage resources from a host-based application (such as FalconStor DiskSafe or FileSafe).

To make account management easier, users can be grouped together and handled simultaneously.

To manage users and groups:

1. Right-click on the server and select Accounts.

All existing users and administrators are listed on the Users tab, and all existing groups are listed on the Groups tab.

2. Select the appropriate option.

Add a user

To add a user:

1. Click the Add button.

2. Enter the name for this user.

The username must adhere to the naming convention of the operating system running on your storage server. Refer to your operating system’s documentation for naming restrictions.

3. Enter a password for this user and then re-enter it in the Confirm Password field.

For iSCSI clients and host-based applications, the password must be between 12 and 16 characters. The password is case sensitive.

4. Specify the type of account.

Users and administrators have different levels of permissions in CDP/NSS.

• IPStor Admins can perform any CDP/NSS operation other than managing accounts. They are also authorized for CDP/NSS client authentication.

• IPStor Users can manage virtual devices assigned to them and can allocate space from the storage pool(s) assigned to them. They can also create new SAN resources, clients, and groups as well as assign resources to clients, and join resources to groups, as long as they are authorized. IPStor Users can only view resources to which they are assigned. IPStor Users are also authorized for CDP/NSS client authentication. Any time an IPStor User creates a new SAN resource, client, or group, access rights will automatically be granted for the user to that object.

5. (IPStor Users only) If desired, specify a quota.

Quotas enable the administrator to place manageable restrictions on the storage used by groups, users, and/or hosts.

Note: You cannot manage accounts or reset a password when a server is in failover state.

A user quota limits how much space is allocated to this user for auto-expansion. Resources managed by this user can only auto-expand if the user’s quota has not been reached. The quota also limits how much space a host-based application, such as DiskSafe, can allocate.

6. Click OK to save the information.

Add a group

To add a group:

1. Select the Groups tab.

2. Click the Add button.

3. Enter a name for the group.

You cannot have any spaces or special characters in the name.

4. If desired, specify a quota.

The quota limits how much space is allocated to each user in this group. The group quota overrides any individual user quota that may be set.

5. Click OK to save the information.

Add users to groups

Each user can belong to only one group.

You can add users to groups on both the Users and Groups tabs.

On the Users tab, you can highlight a single user and click the Membership button to add the user to a group.

On the Groups tab, you can highlight a group and click the Membership button to add multiple users to that group.

Set a quota

You can set a quota for a user on the Users tab and you can set a quota for a group on the Groups tab.

The quota limits how much space is allocated to each user. If a user is in a group, the group quota will override the user quota.
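
A hedged sketch of how these quota rules could be evaluated before an auto-expansion is allowed; the function and parameter names are illustrative, not from the product.

    def can_auto_expand(used_mb, requested_mb, user_quota_mb=None, group_quota_mb=None):
        """The group quota overrides the user quota when the user belongs to a
        group; expansion is allowed only while total usage stays within the
        effective quota. A quota of None means no limit is set."""
        effective = group_quota_mb if group_quota_mb is not None else user_quota_mb
        if effective is None:
            return True
        return used_mb + requested_mb <= effective

    # A 100 GB user quota is overridden by a 50 GB group quota.
    assert can_auto_expand(40_000, 5_000, user_quota_mb=100_000, group_quota_mb=50_000)
    assert not can_auto_expand(48_000, 5_000, user_quota_mb=100_000, group_quota_mb=50_000)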

Reset a password

To change a password, select Reset Password. You will need to enter a new password and then re-type the password to confirm.

You cannot change the root user’s password from this dialog. Use the Change Password option below.

Change the root user’s password

This option lets you change the root user’s CDP/NSS password if you are currently connected to a server.

1. Right-click on the server and select Change Password.

2. Enter your old password, the new one, and then re-enter it to confirm.

Check connectivity between the server and console

You can check if the console can successfully connect to the storage server by right-clicking on a server and selecting Connectivity Test.

Running this test verifies that the console has good network connectivity to the server. If the test fails at any point, check with your network administrator to determine the problem.

Add an iSCSI User or Mutual CHAP User

As a root user, you can add, delete or reset the CHAP secret of an iSCSI User or a mutual CHAP user. Other users (i.e. IPStor administrator or IPStor user) can also change the CHAP secret of an iSCSI user if they know the original CHAP secret.

To add an iSCSI user or Mutual CHAP User from an iSCSI server:

1. Right-click on the server and select iSCSI Users from the menu.

2. Select Users.

The iSCSI User Management screen displays.

From this screen, you can select an existing user from the list to delete the user or reset the CHAP secret.

3. Click the Add button to add a new iSCSI user.

The iSCSI User add dialog screen displays.

4. Enter a unique user name for the new iSCSI user.

5. Enter and confirm the password and click OK.

The Mutual CHAP level of security allows the target and the initiator to authenticate each other. A separate secret is set for each target and for each initiator in the storage area network (SAN). You can select Mutual CHAP Users (right-click on the iSCSI server --> iSCSI Users --> Mutual CHAP User) to manage iSCSI Mutual CHAP Users.

The iSCSI Mutual CHAP User Management screen displays, allowing you to delete users or reset the Mutual CHAP secret.
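
For background, CHAP (RFC 1994, also used by iSCSI) proves knowledge of a shared secret without sending it: the responder returns MD5 over the identifier, the secret, and the challenge. With mutual CHAP, the initiator challenges the target the same way using the target's own secret. The following is a generic sketch of the response computation, not FalconStor-specific; the example secret is made up.

    import hashlib
    import os

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        """CHAP response per RFC 1994: MD5(identifier byte + shared secret + challenge)."""
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # One direction of a mutual CHAP exchange: the target challenges the initiator.
    challenge = os.urandom(16)
    expected = chap_response(1, b"initiator-secret-1234", challenge)   # computed by the target
    answer   = chap_response(1, b"initiator-secret-1234", challenge)   # computed by the initiator
    assert answer == expected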

Apply software patch updates

This section describes how to apply server and console patches in a standalone or failover environment.

Server patches

You can apply maintenance patches to your storage server through the console.

Apply patch - standalone server

To apply a patch on a standalone server:

1. Download the patch onto the computer where the console is installed or a location accessible from that machine.

Patches can be downloaded from the FalconStor customer support portal (support.falconstor.com).

2. Highlight a storage server in the tree.

3. Select Tools menu --> Add Patch.

4. Confirm that you want to continue.

5. Locate the patch file and click Open.

The patch will be copied to the server and installed.

6. Check the Event Log to confirm that the patch installed successfully.

Apply patch - failover configuration

To apply a patch on servers in a failover configuration and avoid unnecessary failover:

1. Make sure both servers are healthy and are not in a failover state.

2. From the console, log in to server B, select Failover --> Start Takeover A to manually take over server A.

Verify that takeover has completed successfully before continuing.

3. Apply the patch on server A.

Refer to the section above about applying a patch on a standalone server for details.

4. Check the Event Log to confirm that the patch installed successfully.

5. Make sure server A is ready by checking the result of the command sms -v.

6. From the console, on server B, select Failover --> Stop Takeover A to fail back.

Note: Server upgrade patches must be applied directly on the server and cannot be applied or rolled back via the console.

7. Repeat steps 2-6, substituting “B” for “A”, to apply the patch on server B. (The sketch below outlines this sequence.)
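
The point of this ordering is that each server is taken over by its partner before it is patched, and failed back only after it reports ready. Below is a hedged outline of that sequence; every callable here (take_over, apply_patch, server_is_ready, fail_back) is a hypothetical placeholder supplied by the caller, not a FalconStor API.

    def rolling_patch(server_a, server_b, patch, take_over, apply_patch,
                      server_is_ready, fail_back):
        """Patch a failover pair one node at a time to avoid unnecessary failover."""
        for active, partner in ((server_a, server_b), (server_b, server_a)):
            take_over(by=partner, of=active)     # partner serves I/O while 'active' is patched
            apply_patch(active, patch)           # same steps as for a standalone server
            if not server_is_ready(active):      # verify readiness before failing back
                raise RuntimeError(f"{active} is not ready after patching; do not fail back")
            fail_back(by=partner, of=active)     # stop takeover and return control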

Rollback patch

To remove (uninstall) a patch and restore the original files:

1. Highlight a storage server in the tree.

2. Select Tools menu --> Rollback Patch.

3. Confirm that you want to continue.

4. Select the patch and click OK.

5. Check the Event Log to confirm that the patch uninstalled successfully.

Console patches

Windows console

You need an account with administrator privileges to install the full Windows console package.

1. Close any console that is running.

2. Run the Windows executable file to uninstall the current version of the console.

Depending on your login account, you might need to select the Run as administrator option to launch the program.

3. Re-run the Windows executable file to install the new version.

Java console

1. Close any console that is running.

2. Go to the Bin sub-directory of the console installation folder.

3. Copy the existing console jar file to another folder and add the date to the name so that the file can be used as a backup.

4. Copy the new jar file to the Bin directory, making sure it has the same name as the existing jar file.

Java Web Start console

1. Close any console that is running.

2. Apply the server patch containing the new Web Start jar files on the server side.

Perform system maintenance (updated March 2013)

The FalconStor Management Console gives you a convenient way to perform system maintenance for your storage server.

Network configuration

If you need to change storage server IP addresses, you must make these changes using Network Configuration. Using YaST or other third-party utilities will not update the information correctly.

1. Right-click on a server and select System Maintenance --> Configure Network.

Domain name - Internal domain name.

Append suffix to DNS lookup - If a domain name is entered, it will be appended to the machine name for name resolution.

DNS - IP address of your DNS server.

Default gateway - IP address of your default gateway.

NIC - List of Ethernet cards in the server. Select the NIC you wish to modify from the drop-down list.

Enable Telnet - Enable/disable the ability to Telnet into the server.

Enable FTP - Enable/disable the ability to FTP into the server. The storage server must have the "pure-ftpd" package installed in order to use FTP.

Allow root to log in to telnet session - Allow the root user to log in to a Telnet session.

Note: The system maintenance options are hardware-dependent. Refer to your hardware documentation for specific information.

Network Time Protocol - Allows you to keep the date and time of your storage server in sync with Internet NTP servers. Click Config NTP to enter the IP addresses of up to five Internet NTP servers.

2. Click Config to configure each Ethernet card.

If you select Static, you must add addresses and net masks.

MTU - Set the maximum transmission unit of each IP packet. If your card supports it, set this value to 9000 for jumbo frames.

Set hostname

Right-click on a server and select System Maintenance --> Set Hostname to change your hostname. You must restart the server if you change the hostname.

Restart IPStor

Right-click on a server and select System Maintenance --> Restart IPStor to restart the server processes.

Restart network

Right-click on a server and select System Maintenance --> Restart Network to restart your local network configuration.

Reboot

Right-click on a server and select System Maintenance --> Reboot to reboot your server.

Note: If the MTU is changed from 9000 to 1500, a performance drop will occur. If you then change the MTU back to 9000, the performance will not increase until the server is restarted.

Note: Do not change the hostname if you are using block devices. If you do, all block devices claimed by CDP/NSS will be marked offline and seen as foreign devices.

Halt

Right-click on a server and select System Maintenance --> Halt to turn off the server without restarting it.

IPMI

Intelligent Platform Management Interface (IPMI) is a hardware-level interface that monitors various hardware functions on a server.

If CDP/NSS detects IPMI when the server boots up, you will see the IPMI options Monitor and Filter on the System Maintenance --> IPMI sub-menu.

Monitor - Displays the hardware information that is presented to CDP/NSS. Information is updated every five minutes but you can click the Refresh button to update more frequently.

You will see a red warning icon in the first column if there is a problem with a component.

In addition, you will see a red exclamation mark on the server. An Alert tab will appear with details about the error.

Filter - You can filter out components you do not want to monitor. This may be useful for hardware you do not wish to monitor or for erroneous errors, such as when you no longer have the hardware that is being monitored. You must enter the Name of the component being monitored exactly as it appears on the hardware monitor above. In addition, you can enter the range (high and low values) to be filtered from sending an alert. If the value stays within the range, no alert will be sent; only when the value goes out of the specified range will an alert display in the console.
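
A minimal sketch of that filtering decision; the data shapes and function name are assumptions for illustration.

    def should_alert(sensor_name, value, filters):
        """Suppress an alert while a filtered sensor's value stays inside its
        configured low/high range; alert as soon as it goes out of range.
        Sensors without a filter entry always raise an alert on a problem reading."""
        rule = filters.get(sensor_name)            # e.g. {"low": 0.0, "high": 60.0}
        if rule is None:
            return True
        return not (rule["low"] <= value <= rule["high"])

    filters = {"CPU Temp": {"low": 0.0, "high": 60.0}}
    assert should_alert("CPU Temp", 55.0, filters) is False   # within range: filtered out
    assert should_alert("CPU Temp", 72.5, filters) is True    # out of range: alert displayed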

Physical Resources

Physical resources are the actual devices attached to this storage server. SCSI adapters supported include SAS, FC, FCoE, and iSCSI. The SCSI adapters tab displays the adapters attached to this server and the SCSI Devices tab displays the SCSI devices attached to this server. These devices can include hard disks, tape libraries, and RAID cabinets. For each device, the tab displays the SCSI address (comprised of adapter number, channel number, SCSI ID, LUN) of the device, along with the disk size (used and available). If you are using FalconStor’s Multipathing, you will see entries for the alternate paths as well.

The Storage Pools tab displays a list of storage pools that have been defined, including the total size and number of devices in each storage pool.

The Persistent Binding tab displays the binding of each storage port to its unique SCSI ID.

When you highlight a physical device, the Category field in the right-hand pane describes how the device is being used. Possible values are:

• Reserved for virtual device - A hard disk that has not yet been assigned to a SAN resource or Snapshot area.

• Used by virtual device(s) - A hard disk that is being used by one or more SAN resources or Snapshot areas.

• Reserved for Service-Enabled Device - A hard disk with existing data that has not yet been assigned to a SAN resource.

• Used by Service-Enabled Device - A hard disk with existing data that has been assigned to a SAN resource.

• Unassigned - A physical resource that has not been reserved yet.

• Not available for IPStor - A miscellaneous SCSI device that is not used by the storage server (such as a scanner or CD-ROM).

• System - A hard disk where system partitions exist and are mounted (i.e. swap file, file system installed, etc.).

Physical resource icons

The following table describes the icons that are used to describe physical resources:

• The D icon indicates that the port is both an initiator and a target.

• The T icon indicates that this is a target port.

• The I icon indicates that this is an initiator port.

• The V icon indicates that this disk has been virtualized or is reserved for a virtual disk.

• The S icon indicates that this is a Service-Enabled Device or is reserved for a Service-Enabled Device.

• The a icon indicates that this device is used in the logical resource that is currently highlighted in the tree.

• The D icon also indicates an adapter using NPIV when it is enabled in dual mode.

Failover and Cross-mirror icons:

• A physical disk appearing in color is local to this server. The V indicates that the disk is virtualized for this server; a Q on the icon indicates that this disk is the quorum disk that contains the configuration repository.

• A physical disk appearing in black and white is a remote physical disk. The F indicates that the disk is a foreign disk.

Prepare devices to become logical resources

You can use one of the FalconStor disk preparation options to change the category of a physical device. This is important to do if you want to create a logical resource using a device that is currently unassigned.

• The storage server detects new devices when you connect to it. When they are detected you will see a dialog box notifying you of the new devices. At this point you can highlight a device and press the Prepare Disk button to prepare it. The Physical Devices Preparation Wizard will help you to virtualize, service-enable, unassign, or import physical devices.

• At any time, you can prepare a single unassigned device by doing the following: Highlight the device, right-click, select Properties and select the device category. (You can find all unassigned devices under the Physical Resources/Adapters node of the tree view.)

• For multiple unassigned devices, highlight Physical Resources, right-click and select Prepare Disks. This launches a wizard that allows you to virtualize, unassign, or import multiple devices at the same time.

Rename a physical device

When a device is renamed on a server in a failover pair, the device gets renamed on the partner server also. However, it is not possible to rename a device when the server has failed over to its partner.

1. To rename a device, right-click on the device and select Rename.

2. Type the new name and press Enter.

Use IDE drives with CDP/NSS

If you have an IDE drive that you want to virtualize and use as storage, you must create a block device from it. To do this:

1. Right-click on Block Devices (under Physical Devices) and select Create Disk.

2. Select the device and specify a SCSI ID and LUN for it.

The defaults are the next available SCSI ID and LUN.

3. Click OK when done.

This virtualizes the device. When it is finished, you will see the device listed under Block Devices. You can now create logical resources from this device.

Unlike a regular SCSI virtual device, block devices can be deleted.

Rescan adapters

1. To rescan all adapters, right-click on Physical Resources and select Rescan.

If you only want to scan a specific adapter, right-click on that adapter and select Rescan.

If you want to discover new devices without scanning existing devices, click the Discover New Devices radio button and then check the Discover new devices only without scanning existing devices check box. You can then specify additional scan details.

2. Determine what you want to rescan.

If you are discovering new devices, set the range of adapters, SCSI IDs, and LUNs that you want to scan.

Use Report LUNs - The system sends a SCSI request to LUN 0 and asks for a list of LUNs. Note that this SCSI command is not supported by all devices. (If VSA is enabled and the actual LUN is beyond 256, you will need to use this option to discover them.)

LUN Range - It is only necessary to use the LUN range if the Use Report LUN option does not work for your adapter.

Stop scan when a LUN without a device is encountered - This option (used with LUN Range) will scan LUNs sequentially and then stop after the last LUN is found. Use this option only if all of your LUNs are sequential.

Auto detect FC HBA SCSI ID - Select this option to enable auto detection of SCSI IDs with persistent binding. This will scan QLogic HBAs to discover devices beyond the scan range specified above.

Read partition from inactive path when all the paths are inactive - Select this option to force a status check of the partition from a path that is not in use.
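
To make the difference between the Report LUNs and LUN Range options concrete, here is a hedged sketch of the two discovery strategies; report_luns and probe_lun stand in for the actual SCSI commands and are hypothetical callables.

    def discover_luns(report_luns, probe_lun, lun_range, stop_on_gap=False):
        """Prefer REPORT LUNS (ask LUN 0 for the full list); otherwise probe a
        LUN range, optionally stopping at the first LUN with no device behind it,
        which is only safe when LUNs are assigned sequentially."""
        listed = report_luns()             # None if the device does not support the command
        if listed is not None:
            return sorted(listed)
        found = []
        for lun in lun_range:
            if probe_lun(lun):
                found.append(lun)
            elif stop_on_gap:
                break                      # stop scan when a LUN without a device is encountered
        return found

    # Fake probes: sequential LUNs 0-2 on a device without REPORT LUNS support.
    assert discover_luns(lambda: None, lambda lun: lun in {0, 1, 2},
                         range(0, 8), stop_on_gap=True) == [0, 1, 2]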

Import a disk

You can import a ‘foreign’ disk into a CDP or NSS appliance. A foreign disk is a virtualized physical device containing FalconStor logical resources previously set up on a different storage server. You might need to do this if a storage server is damaged and you want to import the server’s disks to another storage server.

When you right-click on a disk that CDP/NSS recognizes as ‘foreign’ and select the Import option, the disk’s partition table is scanned and an attempt is made to reconstruct the virtual drive out of all of the segments.

If the virtual drive was constructed from multiple disks, you can highlight Physical Resources, right-click and select Prepare Disks. This launches a wizard that allows you to import multiple disks at the same time.

As each drive is imported, it is marked offline until the drive locates all of the segments. Once all of the disks that were part of the virtual drive have been imported, the virtual drive is re-constructed and is marked online.

Importing a disk preserves the data on the disk but does not preserve the client assignments. Therefore, after importing, you must reassign clients to the resource.

Notes:

• The GUID (Global Unique Identifier) is the permanent identifier for each virtual device. When you import a disk, the virtual ID, such as SANDisk-00002, may be different from the original server. Therefore, you should use the GUID to identify the disk.

• If you are importing a disk that can be seen by other storage servers, you should perform a rescan before importing. Otherwise, you may have to rescan after performing the import.

Test physical device throughput

You can test the following for your physical devices:

• Sequential throughput
• Random throughput
• Sequential I/O rate
• Random I/O rate
• Latency

To check the throughput for a device:

1. Right-click on the device (under Physical Resources).

2. Select Test from the menu.

The system will test the device and then display the throughput results on a new Throughput tab.

Manage multiple paths to a device

SCSI aliasing works with the FalconStor Multipathing option to eliminate a potential point of failure in your storage network by providing multiple paths to your storage devices using multiple Fibre Channel switches and/or multiple adapters and/or storage devices with multiple controllers. In a multiple path configuration, CDP/NSS automatically detects all paths to the storage devices. If one path fails, CDP/NSS automatically switches to another.

Refer to the “Multipathing” chapter for more information.

Repair paths to a device

Repair is the process of removing one or more physical device paths from the system and then adding them back. Repair may be necessary when a device is not responsive which can occur if a storage controller has been reconfigured or if a standby alias path is offline/disconnected.

If a path is faulty, adding it back may not be possible. To repair paths to a device:

1. Right-click on the device and select Repair.

If all paths are online, the following message will be displayed instead: “There are no physical device paths that can be repaired.”

2. Select the path to the device that needs to be repaired.

If the path is still missing after the repair or the entire physical device is gone from the console, the path could not be repaired. You should investigate the cause, correct the problem, and then rescan adapters with the Discover New Devices option.

Logical Resources

Logical resources are all of the resources defined on the storage server, including SAN resources and groups.

SAN Resources

SAN logical resources consist of sets of storage blocks from one or more physical hard disk drives. This allows the creation of logical resources that contain a portion of a larger physical disk device or an aggregation of multiple physical disk devices.

Clients do not gain access to physical resources; they only have access to logical resources. This means that an administrator must map each physical resource to one or more logical resources so that they can be assigned to the clients.

When you highlight a SAN resource, you will see a small icon next to each device that is being used by the resource. In addition, when you highlight a SAN resource, you will see a GUID field in the right-hand pane.

The GUID (Global Unique Identifier) is the permanent identifier for this virtual device. The virtual ID, SANDisk-00002, is not. You should make note of the GUID, because, in the event of a disaster, this identifier will be important if you need to rebuild your system and import this disk.

Groups

Groups are multiple drives (virtual drives and Service-Enabled drives) that will be assembled together for SafeCache or snapshot synchronization purposes. For example, when one drive in the group is to be replicated or backed up, the entire group will be snapped together to maintain a consistent image.

Logical resource icons

The following table describes the icons that are used to show the status of logical resources:

Icon alert / warning descriptions:

• Virtual device offline (or has incomplete segments)
• Mirror is out of sync
• Mirror is suspended
• TimeMark rollback failed
• Replication failed
• One or more supporting resources is not accessible (SafeCache, CDP, Snapshot resource, HotZone, etc.)

Enable write caching

You can leverage a third party disk subsystem's built-in caching mechanism to improve I/O performance. Write caching allows the third party disk subsystem to utilize its internal cache to accelerate I/O.

To write cache a resource, right-click on it and select Write Cache --> Enable.

Replication

The Incoming and Outgoing objects under the Replication object display information about each server that replicates to this server or receives replicated data from this server. If the server’s icon is white, the partner server is "connected" or "logged in". If the icon is yellow, the partner server is "not connected" or "not logged in".

When you highlight the Replication object, the right-hand pane displays a summary of replication to/from each server.

For each replica disk, you can promote the replica or reverse the replication. Refer to the “Replication” chapter for more information about using replication.

Additional logical resource icon alerts / warnings:

• Replica in disaster recovery state (after forcing a replication reversal)
• Cross-mirror needs to be repaired on the virtual appliance
• Primary replica is no longer valid as a replica
• Invalid replica

SAN Clients

Storage Area Network (SAN) Clients are the file and application servers that utilize the storage resources via the storage server. Since SAN resources appear as locally attached SCSI devices, the applications, such as file services, databases, web and email servers, do not need to be modified to utilize the storage.

On the other hand, since the storage is not locally attached, there may be some configuration needed to locate and mount the required storage. The SAN Clients access their storage resources via iSCSI initiators (for iSCSI) or HBAs (for Fibre Channel or iSCSI). The storage resources appear as locally attached devices to the SAN Clients’ operating systems (Windows, Linux, Solaris, etc.) even though the devices are actually located at the storage server site.

When you highlight a specific SAN client, the right-hand pane displays the Client ID, type, and authentication status, as well as information about the client machine.

The Resources tab displays a list of SAN resources that are allocated to this client. The adapter, SCSI ID, and LUN are relative to this CDP/NSS SAN client only; other clients that may have access to the SAN resource may have different adapter, SCSI ID, and LUN information.

Add a client from the FalconStor Management Console

1. In the console, right-click on SAN Clients and select Add.

2. Enter a name for the SAN Client, select the operating system, and indicate whether or not the client machine is part of a cluster.

If the client’s machine name is not resolvable, you can enter an IP address and then click Find to discover the machine.

3. Determine if you want to limit the amount of space that can be automatically assigned to this client.

The quota represents the total allowable space that can be allocated for all of the resources associated with this client. It is only used to restrict certain types of resources (such as Snapshot Resource and CDP Resource) that expand automatically. This prevents them from allocating storage space indefinitely. Instead, they can only expand if the total size of all the resources associated with the client does not exceed the pre-defined quota for that client.

4. Indicate if you want to enable persistent reservation.

This option allows clustered SAN Clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.

5. Select the client’s protocol(s).

If you select iSCSI, you must indicate if this is a mobile client. You will then be asked to select the initiator that this client uses and add/select users who can authenticate for this client. Refer to ‘Add iSCSI clients’ for more information.

If you select Fibre Channel, you will have to select WWPN initiators. You will then be asked to select Volume Set Addressing. Refer to Add Fibre Channel clients for more information.

6. Confirm all information and click Finish to add this client.

Add a client for FalconStor host applications

If you are using FalconStor client/agent software, such as snapshot agents, or HyperTrac, refer to the FalconStor Intelligent Management Agent (IMA) User Guide or the appropriate agent user guide for details regarding adding clients via FalconStor Intelligent Management Agent (IMA).

FalconStor client/agent software allows you to add a storage server directly in IMA/SDM or the SAN Client.

For example, if you are using HyperTrac, the first time you start HyperTrac, the system scans and imports all storage servers identified by IMA/SDM or the SAN Client. These storage servers are then listed in the HyperTrac console. Alternatively, you can add a storage server directly in IMA/SDM or the SAN Client.

Note: If you are using AIX SAN Client cluster nodes, this option should be cleared.

Change the ACSL

You can change the ACSL (adapter, channel, SCSI, LUN) for a SAN resource assigned to a SAN client if the device is not currently attached to the client. To change, right-click on the SAN resource under the SAN Client object (you cannot do this from the SAN resources object) and select Properties. You can enter a new adapter, SCSI ID, or LUN.

Grant access to a SAN Client

By default, only the root user and IPStor admins can manage SAN resources, groups, or clients. While IPStor users can create new SAN Clients, if you want an IPStor user to manage an existing SAN Client, you must grant that user access. To do this:

1. Right-click on a SAN Client and select Access Control.

2. Select which user can manage this SAN Client.

Each SAN Client can only be assigned to one IPStor user. This user will have rights to perform any function on this SAN Client, including assigning, adding protocols, and deletion.

Note: For Windows clients: One SAN resource for each client must have a LUN of 0. Otherwise, the operating system will not see the devices assigned to the SAN client. In addition, for the Linux OS, the rest of the LUNs must be sequential.
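
A small sketch of a pre-assignment check that enforces the note above; the function is illustrative only.

    def validate_lun_assignment(luns, client_os):
        """Windows clients need a resource at LUN 0 or the OS will not see the
        assigned devices; Linux clients additionally need the LUNs to be
        sequential (0, 1, 2, ...)."""
        luns = sorted(luns)
        if 0 not in luns:
            return False, "one SAN resource must use LUN 0"
        if client_os == "Linux" and luns != list(range(len(luns))):
            return False, "LUNs must be sequential for Linux clients"
        return True, "ok"

    assert validate_lun_assignment([0, 1, 2], "Linux") == (True, "ok")
    assert validate_lun_assignment([0, 2, 3], "Linux")[0] is False
    assert validate_lun_assignment([1, 2], "Windows")[0] is False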

Console options

To set options for the console, select Tools --> Console Options.

You can make the following changes.

• Remember password for session - If the console is already connected to a server, when you attempt to open a second, third, or subsequent server, the console will use the credentials that were used for the last successful connection. If this option is unchecked, you will be prompted to enter a password for every server you try to open.

• Automatically time out servers after nn minute(s) - The console will collapse a server that has been idle for the number of minutes you specify. If you need to access the server again, you will have to reconnect to it. The default is 10 minutes. Enter 00 minutes to disable the timeout.

• Update statistics every nn second(s) - The console will update statistics by the frequency you specify.

• Automatically refresh the event log every nn second(s) - The console will update the event log by the frequency you specify, only when you are viewing it.

• Console Log Options - The console log (ipstorconsole.log) is kept on the local machine and stores information about the local version of the console. The console log is displayed at the very bottom of the console screen. The options affect how information for each console session will be maintained:

• Overwrite log file - Overwrite the information from the last console session when you start a new session.

• Append to log file - Keep all session information.

• Do not write to log file - Do not maintain a console log.

Create a custom menu

You can create a menu in the FalconStor Management Console from which you can launch external applications. This can add to the convenience of FalconStor’s centralized management paradigm by allowing administrators to start all of their applications from a single place. The Custom menu will appear in your console along with the normal menu (between Tools and Help).

To create a custom menu, select Tools --> Set up Custom Menu. Then click Add and enter the information needed to launch this application.

• Menu Label - The application title that will be displayed in the Custom menu.

• Command - The file (usually an .exe) that launches this application.

• Command Argument - An argument that will be passed to the application. If you are launching an Internet browser, this could be a URL.

• Menu Icon - The graphics file that contains the icon for this application. This will be displayed in the Custom menu.

Storage Pools

A storage pool is a group of one or more physical devices. Creating a storage pool enables you to provide all of the space needed by your clients in a very efficient manner. You can create and manage storage pools in a variety of ways, including:

• Tiers - Performance levels, cost, or redundancy

• Device categories - Virtual, Service-Enabled

• Types - Primary storage, Journal, CDR, Cache, HotZone, virtual headers, Snapshot, TimeView, and configuration.

• Specific application use - FalconStor DiskSafe, etc.

For example, you can classify your storage by tier (low-cost, high-performance, high-redundancy, etc.) and assign it based on these classifications. Using this example, you may want to have your business critical applications use storage from the high-redundancy or high-performance pools while having your less critical applications use storage from other pools.

Storage pools work with all automatic allocation mechanisms in CDP/NSS. This capacity-on-demand functionality automatically allocates storage space from a specific pool when storage is needed for a specific use.

As your storage needs grow, you can easily extend your storage capacity by adding more devices to a pool and then creating more logical resources or allocating more space to your existing resources. The additional space is immediately and seamlessly available.

Manage storage pools and the devices within storage pools

Only root users and IPStor administrators can manage storage pools; the root user and IPStor Administrators have full privileges for storage pools. The root user or an IPStor Administrator must create the pools first, and IPStor Users can then allocate space from the pools assigned to them.

IPStor Users can create virtual devices and allocate space from the storage pools assigned to them but they cannot create, delete, or modify storage pools. The storage pool management rights of each type of user are summarized in the table below:

Refer to the ‘Account management’ section for additional information regarding user access rights.

Type of User           Can create/delete pools?   Can add/remove storage from pools?
Root                   Yes                        Yes
IPStor Administrator   Yes                        Yes
IPStor User            No                         No

Create storage pools

Physical devices must be prepared (virtualized, service-enabled) before they can be added into a storage pool.

Each storage pool can only contain the same type of physical devices. Therefore, a storage pool can contain only virtualized drives or only service-enabled drives. A storage pool cannot contain mixed types.

Physical devices that have been allocated for a logical resource can still be added to a storage pool.

To create a storage pool:

1. Right-click on Storage Pools and select New.

2. Enter a name for the storage pool.

3. Indicate which type of physical devices will be in this storage pool.

Each storage pool can only contain the same type of physical devices.

4. Select the devices that will be assigned to this storage pool or you can leave the storage pool empty for later use.

Physical devices that have been allocated for any logical resource can still be added to a storage pool.

5. Click OK to create the storage pool.

Set properties for a storage pool

To set properties:

1. Right-click on a storage pool and select Properties.

On the General tab you can change the name of the storage pool and add/delete devices assigned to this storage pool.

2. Select the Type tab to designate how each storage pool should be allocated.

The type affects how each storage pool should be allocated. When you are in a CDP/NSS creation wizard, the applicable storage pool(s) will be presented for selection. However, you can still select from another storage pool type if needed.

• All Types can be used for any type of resource.
• Storage is the preferred storage pool to create SAN resources and their corresponding replicas.
• Snapshot is the preferred storage pool for snapshot resources.
• Cache is the preferred storage pool for SafeCache resources.
• HotZone is the preferred storage pool for HotZone resources.
• Journal is the preferred storage pool for CDP resources and CDP resource mirrors.
• CDR is the preferred storage pool for continuous data replicas.
• VirtualHeader is the preferred storage pool for the virtual header that is created for a Service-Enabled Device SAN Resource.
• Configuration is the preferred storage pool to create the configuration repository for failover.
• TimeView is the preferred storage pool for TimeView images.
• ThinProvisioning is the preferred storage pool for thin disks.

Allocation Block Size allows you to specify the minimum size that will be allocated when a virtual resource is created or expanded.

Using this feature is highly recommended for thin disks (ThinProvisioning selected as the type for this storage pool) for several reasons.

The maximum number of segments that is supported per virtual device is 1024. When Allocation Block Size is not enabled, thin disks are expanded in increments of 10 GB. With frequent expansion, it is easy to reach the maximum number of segments. Using Allocation Block Size with the largest block size feasible for your storage can prevent devices from reaching the maximum.

In addition, larger block sizes mean more consecutive space within each block, limiting disk fragmentation and improving performance for thin disks.

The default for the Allocation Block Size is 16 GB and the possible choices are 1, 2, 4, 8, 16, 32, 64, 128, and 256 GB.

If you enable Allocation Block Size for resources other than thin disks, Service-Enabled Devices, or any copy of a resource (replica, mirror, snapshot copy, etc…), you should be aware that the allocation will round up to the next multiple when you create a resource. For example, if you have the Allocation Block Size set to 16 GB and you attempt to create a 20 GB virtual device, the system will create a 32 GB device.

If you do not enable Allocation Block Size, you can specify any size when creating/expanding devices. You may want to do this for disks that are not thin disks since they do not expand as often and will rarely reach the maximum number of segments.

When specifying an Allocation Block Size, your physical disk should be evenly divisible by the number you specify so that all space can be used. For example, if you have a 500 GB disk and you select 128 GB as the block size, the system will only be able to allocate three blocks of 128 GB each (128*3=384) from that disk because the remaining 116 GB is not enough to allocate. When you look at the “Available Disk Space” statistics in the console, this remaining 116 GB will be excluded.
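
The rounding and the unusable-remainder arithmetic described above can be sanity-checked with a short sketch (illustrative only).

    def rounded_allocation_gb(requested_gb, block_gb):
        """Round a requested size up to the next multiple of the allocation block size."""
        blocks = -(-requested_gb // block_gb)       # ceiling division
        return blocks * block_gb

    def usable_capacity_gb(disk_gb, block_gb):
        """Only whole blocks can be allocated; the remainder is excluded from
        the 'Available Disk Space' statistics."""
        return (disk_gb // block_gb) * block_gb

    assert rounded_allocation_gb(20, 16) == 32      # a 20 GB request becomes a 32 GB device
    assert usable_capacity_gb(500, 128) == 384      # three 128 GB blocks; 116 GB excluded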

3. Select the Tag tab to set a tag string to limit client side applications to specific storage pools.

When an application requests storage with a specific tag string, only the storage pools with the same tag can be used. You can have your own internal application that has been programmed to use a tag.

4. Select the Security tab to designate which users and administrators can manage this storage pool.

Each storage pool can be assigned to one or more users or groups. The assigned users can create virtual devices and allocate space from the storage pools assigned to them but they cannot create, delete, or modify storage pools.

Logical Resources

Once you have attached your physical SCSI or Fibre Channel devices to your storage server, you are ready to create Logical Resources to be used by your CDP/NSS clients. This configuration can be done entirely from the FalconStor Management Console.

Logical Resources are logically mapped devices on the storage server. They are comprised of physical storage devices, known as Physical Resources. Physical resources are the actual SCSI and/or Fibre Channel devices (such as hard disks, tape drives, and RAID cabinets) attached to the server.

Clients do not have access to physical resources; they have access only to Logical Resources. This means that physical resources must be defined as Logical Resources first, and then assigned to the clients so they can access them.

SAN resources provide storage for file and application servers (called SAN Clients). When a SAN resource is assigned to a SAN client, a virtual adapter is defined for that client. The SAN resource is assigned a virtual SCSI ID on the virtual adapter. This mimics the configuration of actual SCSI storage devices and adapters, allowing the operating system and applications to treat them like any other SCSI device.

Understanding how to create and manage Logical Resources is critical to a successful CDP/NSS storage network. Please read this section carefully before creating and assigning Logical Resources.

Types of SAN resources

SAN resources can be of the following types: virtual devices and Service-Enabled Devices.

Virtual devices

IPStor technology gives CDP and NSS the ability to aggregate multiple physical storage devices (such as JBODs and RAIDs) of various interface protocols (such as SCSI or Fibre Channel) into logical storage pools. From these storage pools, virtual devices can be created and provisioned to application servers and end users. This is called storage virtualization.

Virtual devices are defined as sets of storage blocks from one or more physical hard disk drives. This allows the creation of virtual devices that can be a portion of a larger physical disk drive, or an aggregation of multiple physical disk drives.

Virtual devices offer the added capability of disk expansion. Additional storage blocks can be appended to the end of existing virtual devices without erasing the data on the disk.

Virtual devices can only be assembled from hard disk storage; virtualization does not work for CD-ROM, tape, libraries, or removable media.

When a virtual device is allocated to an application server, the server thinks that an actual SCSI storage device has been physically plugged into it.

Virtual devices are assigned to virtual adapter 0 (zero) when mapped to a client. If there are more than 15 virtual devices, a new adapter will be defined.

Virtualization examples

The following diagrams show how physical disks can be mapped into virtual devices.

[Diagrams: (1) two physical devices (adapter 1, SCSI ID 3 and SCSI ID 4, sectors 0-9999 each) are combined into a single virtual device SAN resource (adapter 0, SCSI ID 1, sectors 0-19999); (2) a single physical device is split into two virtual device SAN resources. In both cases the virtual device can use any SCSI ID, its adapter number does not need to match the physical adapter, and sectors from the physical disk(s) are mapped to the virtual device(s).]

The diagram above shows a virtual device being created out of two physical disks. This allows you to create very large virtual devices for application servers with large storage requirements. If the storage device needs to grow, additional physical disks may be added to increase the size of a virtual device. Note that this will require that the client application server resize the partition and file system on the virtual device.

The example above shows a single physical disk split into two virtual devices. This is useful when a single large device exists, such as a RAID, which could be shared among multiple client application servers.

Virtual devices can be created using various combining and splitting methods, although you will probably not create them in this manner in the beginning. You may end up with devices like this after growing virtual devices over time.

Thin devices

Thin Provisioning allows storage space to be assigned to clients dynamically, on a just-enough and just-in-time basis, based on need. This avoids under-utilization of storage by applications while allowing for expansion in the long term. The maximum size of a disk (virtual SAN resource) with Thin Provisioning enabled is 67,108,596 MB; a thin disk can be expanded up to this maximum, and when it is expanded, its mirror automatically expands also. A replica on a thin disk will be able to use space on other virtualized devices as long as there is available space. If space is not available for expansion, the Thin Provisioned disk on the primary will be prevented from expanding and a message will display on the console indicating why expansion is not possible. The minimum permissible size of a thin disk is 10 GB. Once the threshold is met, the thin disk expands in 10 GB increments.
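
A hedged sketch of the expansion rule just described (minimum 10 GB, growth in 10 GB increments once the usage threshold is met, capped at 67,108,596 MB); the 90% threshold is an assumption for illustration only.

    MAX_THIN_DISK_MB = 67_108_596
    EXPAND_INCREMENT_MB = 10 * 1024            # thin disks expand in 10 GB increments

    def next_thin_disk_size_mb(allocated_mb, used_mb, threshold_pct=90):
        """Return the allocated size after one expansion check: grow by one
        increment when usage crosses the threshold, never beyond the maximum."""
        if used_mb < allocated_mb * threshold_pct / 100:
            return allocated_mb                # threshold not met; no change
        return min(allocated_mb + EXPAND_INCREMENT_MB, MAX_THIN_DISK_MB)

    # A 10 GB thin disk that is 95% full grows to 20 GB.
    assert next_thin_disk_size_mb(10_240, 9_728) == 20_480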


With Thin Provisioning, a single pool of storage can be provisioned to multiple client hosts. Each client sees the full size of its provisioned disk while the actual amount of storage used is much smaller. Because so little space is actually being used, Thin Provisioning allows resources to be over-allocated, meaning that more storage can be provisioned to hosts than actually exists.

Because each client sees the full size of its provisioned disk, Thin Provisioning is the ideal solution for users of legacy databases and operating systems that cannot handle dynamic disk expansion.

The mirror of a disk with Thin Provisioning enabled is another disk with Thin Provisioning enabled. When a thin disk is expanded, the mirror also automatically expands. If the mirrored disk is offline, storage cannot be added to the thin disk manually.

If the mirror is offline when the threshold is reached and automatic storage addition is about to occur, the offline mirror is removed. Storage is automatically added to the Thin Provisioned disk, but the mirror must be recreated manually.

A replica on a thin disk can use space on other virtualized devices as long as space is available. If there is no space available for expansion, the thin disk on the primary will be prevented from expanding and a message will display on the console.

Note: When using Thin Provisioning, it is recommended that you create a disk with an initial size that is at least 15% the maximum size of the disk. Some write operations, such as creating a file system in Linux, may scatter their writes across the span of a disk.

You can check the status of the thin disk from the FalconStor Management Console by highlighting the thin disk and clicking the General tab.

The usage percentage is displayed in green as long as the available sectors are greater than 120% of the threshold (in sectors). It is displayed in blue when the available sectors are less than 120% of the threshold (in sectors) but still greater than the threshold (in sectors). The usage percentage is displayed in red when the available sectors are less than the threshold (in sectors).
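
Those three bands translate directly into a small status function (illustrative only; the sector counts are whatever the console reports).

    def thin_disk_status_color(available_sectors, threshold_sectors):
        """Green while available space is comfortably above the threshold, blue
        when it is within 120% of the threshold, red once it drops below it."""
        if available_sectors > 1.2 * threshold_sectors:
            return "green"
        if available_sectors > threshold_sectors:
            return "blue"
        return "red"

    assert thin_disk_status_color(1_000_000, 500_000) == "green"
    assert thin_disk_status_color(550_000, 500_000) == "blue"
    assert thin_disk_status_color(400_000, 500_000) == "red"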

Service-Enabled devices

Service-Enabled Devices are hard drives with existing data that can be accessed by CDP/NSS to make use of all key CDP/NSS storage services (mirroring, snapshot, etc.), without any migration/copying, without any modification of data, and with minimal downtime. Service-Enabled Devices are used to migrate existing drives into the SAN.

Note: Do not perform disk defragmentation on a Thin Provisioned disk. Doing so may cause data from the used sectors of the disk to be moved into unused sectors and result in an unexpected thin-provisioned disk space increase. In fact, any disk or file system utility that might scan or access any unused sector could also cause a similar unexpected space usage increase.


Because Service-Enabled Devices are preserved intact, and existing data is not moved, the devices are not virtualized and cannot be expanded. Service-Enabled Devices are all maintained in a one-to-one mapping relationship (one physical disk equals one logical device). Unlike virtual devices, they cannot be combined or split into multiple logical devices.

Create SAN resources - Procedures

SAN resources are created in the FalconStor Management Console.

Prepare devices to become SAN resources

You can use one of FalconStor’s disk preparation options to change the category of a device. This is important if you want to create a logical resource using a device that is currently unassigned.

• CDP and NSS appliances detect new devices as you connect to them (or when you execute the Rescan command). When new devices are detected, a dialog box displays notifying you of the discovered devices. At this point you can highlight a device and press the Prepare Disk button to prepare it.

• At any time, you can prepare a single unassigned device by following the steps below. (You can find all unassigned devices under the Physical Resources/Adapters node of the tree view.)
• Highlight the device and right-click.
• Select Properties.
• Select the device category.
• For multiple unassigned devices, highlight Physical Resources, right-click, and select Prepare Disks. This launches a wizard that allows you to virtualize, unassign, or import multiple devices at the same time.

Create a virtual device SAN resource

You can create a virtual device SAN resource by following the steps below. Each storage server supports a maximum of 1024 SAN resources.

1. Right-click on SAN Resources and select New.

Note: After you make any configuration changes, you may need to rescan or restart the client in order for the changes to take effect. After you create a new virtual device, assign it to a client, and restart the client (or rescan), you will need to write a signature, create a partition, and format the drive so that the client can use it.


2. Select Virtual Device.

3. Select the storage pool or physical device(s) from which to create this SAN resource.

You can create a SAN resource from any single storage pool. Once the resource is created from a storage pool, additional space (automatic or manual expansion) can only be allocated from the same storage pool.

You can select List All to see all storage pools, if needed.


Depending upon the resource type, you may have the option to select Use Thin Provisioning for more efficient space allocation.

4. Select the Use Thin Provisioning checkbox to allocate a minimum amount of space for a virtual resource. When usage thresholds are met, additional storage is allocated as necessary.

5. Specify the fully allocated size of the resource to be created.

For NSS, the default initial size is 1 GB and the default allocation is 10 GB.

For CDP, the default initial size is 16 GB and the default allocation is 16 GB.

A disk with Thin Provisioning enabled can be configured to replicate to a SAN resource or to another disk with Thin Provisioning enabled.

From the client side, it appears that the full disk size is available.

Thin provisioning is supported for the following resource types:

• SAN Virtual
• SAN Virtual Replica

SAN resources can replicate to a disk with Thin Provisioning as long as the size of the SAN resource is 10GB or greater.


6. Select how you want to create the virtual device.

• Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

• Express lets you designate how much space to allocate and then automatically creates a virtual device using an available device.

• Batch lets you create multiple SAN resources at one time. These SAN resources will all be the same size.


If you select Custom, you will see the following windows:

Select either an entirely unallocated or partially unallocated device.

Only one device can be selected at a time from this dialog. To create a virtual device SAN resource from multiple physical devices, you will need to add the devices one at a time. After selecting the parameters for the first device, you will have the option to add more devices.

Indicate how much space to allocate from this device.

Click Add More if you want to add another physical device to this SAN resource. If you select to add more devices, you will go back to the physical device selection screen where you can select another device.


If you select Batch, you will see a window similar to the following:

• Indicate how to name each resource. The SAN Resource Prefix is combined with the starting number to form the name of each SAN resource. You can deselect the Use default ID for Starting Number option to restart numbering from one.

• In the Resource Size field, indicate how much space to allocate for each resource.

• Indicate how many SAN resources to create in the Number of Resources field.


7. (Express and Custom only) Enter a name for the new SAN resource.

The Express screen is shown above and the Custom screen is shown below:

Note:

• The name is not case sensitive.
• The Set this as the resource name (not prefix) option does not append the name with the virtual ID number.


8. Confirm that all information is correct and then click Finish to create the virtual device SAN resource.

9. (Express and Custom only) Indicate if you would like to assign the new SAN resource to a client.

If you select Yes, the Assign a SAN Resource Wizard will be launched.

Note: After you assign the SAN resource to a client, you may need to restart the client. You will also need to write a signature, create a partition, and format the drive so that the client can use it.


Add virtual disks for data storage

The FalconStor CDP and NSS Virtual Appliance supports up to 10 TB of space for storage virtualization, depending upon the storage source. Before you create the virtual disks for the virtualization storage, you should know the block size of the datastore volume and the maximum size of a single virtual disk, which is controlled by the volume block size. It is recommended that you select an 8 MB block size when creating a VMFS datastore on your VMware ESX servers.

If you create a virtual disk that exceeds the maximum size supported by the volume where it is located, an "Insufficient disk space on datastore" error displays. You can resolve the error by reducing the disk size to one supported by the volume block size.

You can check the block size of your volume via the VMware vSphere Client:

1. Launch the VMware vSphere Client, connect to the ESX server and log into the account with root privileges.

2. Click the ESX server in the inventory and then click the Configuration tab.

3. On the Configuration tab, click Storage under the Hardware list. Then right-click one of the datastores and click Properties.

Volume Block Size    Maximum size of one virtual disk
1 MB                 256 GB
2 MB                 512 GB
4 MB                 1024 GB
8 MB                 2048 GB
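As the table shows, the maximum virtual disk size scales linearly with the block size (256 GB per 1 MB of block size). A minimal shell sketch of that relationship, offered only as an illustration of the table above:

# Maximum single virtual disk size for a given VMFS block size (from the table above)
BLOCK_MB=8                                   # block size in MB: 1, 2, 4, or 8
MAX_GB=$((256 * BLOCK_MB))                   # 1 MB block -> 256 GB maximum file size
echo "Block size ${BLOCK_MB} MB allows a virtual disk of up to ${MAX_GB} GB"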


On the Volume Properties, you can see the Block Size and the Maximum File Size in the Format information.

The maximum file size depends on the block size of the VMFS storage. You cannot create a standard virtual disk that exceeds this maximum capacity; otherwise, you will encounter an "Insufficient disk space on datastore" problem (even if the VMFS storage has enough capacity).

4. Add a new virtual disk by following the steps below.

There is no need to power-off the virtual appliance to add the new virtual disk for storage virtualization usage when using the CDP or NSS Virtual Appliance.

• On the VMware vSphere Client, right-click the CDP or NSS Virtual Appliance: FalconStor-NSSVA and then click Edit Settings.

• On the Hardware tab, click the Add button.
• For Select Device Type, click Hard Disk and then click Next.
• For Select a Disk, click Create a new virtual disk and then click Next.
• When prompted to Specify Disk Capacity, Provisioning, and Location, enter the size of the new virtual disk. Make sure the value does not exceed the maximum file size supported by the volume.

• Check the Support clustering features such as Fault Tolerance option to force creation of an eagerzeroedthick disk.

Notes:

• Do not select the Fault Tolerance option for the guest VM's vmdks at this step.
• Creating an eagerzeroedthick disk is a time-consuming process. You may experience a significant waiting period.


• Browse to select a datastore with available free space to create the virtual disk.

• Click Next to set the disk mode as Independent Persistent on Specify Advanced Options.

• Review your choices and click Finish to complete the virtual disk creation setting.

• In the FalconStor-NSSVA (or FalconStor-CDPVA) Virtual Machine properties, you will see New Hard Disk (adding) in the hardware list.

• Click OK to save the setting and the new virtual disk will be created on the datastore.

• Repeat the steps above to add another virtual disk for virtualization storage.

5. Add a new device to the storage pool.

The FalconStor CDP/NSS Virtual Appliance uses storage pools to manage storage usage and security. Each storage pool can contain one or more physical devices and can be used to consolidate the capacity of all storage pool members. You can also expand capacity easily by assigning a device category to a newly added virtual disk and adding it to a storage pool. All devices must be added to a storage pool for central resource management.

Refer to the ‘Storage Pools’ chapter for more information regarding storage pools.


Create a SAN Client for VMware ESX server

Follow the steps below to create a SAN client for a VMware ESX server for storage resource assignment.

On the VMware ESX server, log into the console and use the vmkping command to test the IP network connection from the ESX server iSCSI software adapter to the CDP or NSS virtual appliance. In addition, you can add the CDP or NSS virtual appliance IP address to the iSCSI server list of the iSCSI software adapter and check whether the iSCSI initiator name is registered on the CDP or NSS virtual appliance.

Adding the iSCSI server on the ESX Software iSCSI Adapter

1. Launch VMware vSphere Client and connect to the ESX server.

2. Highlight the ESX server and click the Configuration tab.

3. Click Storage Adapters, select the device under iSCSI Software Adapter, and then click Properties.

4. On the iSCSI initiator (device name) Properties, check the iSCSI properties and record the iSCSI name, for example: iqn.1998-01.com.vmware:esx03.

5. Click the Dynamic Discovery tab, and then click the Add button.

6. On Send Targets, enter the IP address of the virtual appliance.

It will take several minutes to complete the configuration.

Once the IP address has been added into the iSCSI server list, click Close.

Creating the SAN Client for the ESX server

1. Launch the FalconStor Management Console and connect to the NSS Virtual Appliance with IPStor administrator privileges.

2. Click and expand the NSSVA, then right-click the SAN Clients node and select Add.

3. The Add Client Wizard launches.

4. Click Next to start the administration task.

5. When prompted to Select Client Protocols, click to enable the iSCSI protocol and click Next.

The Create Default iSCSI target option is selected by default to create an iSCSI target automatically.

6. Select Target IP by enabling one or both networks providing the iSCSI service.

7. On the Set Client's initiator, the iSCSI initiator name of the ESX server displays if the iSCSI server was added successfully. Click to enable it and then click Next.

8. On Set iSCSI User Access, change it to Allow unauthenticated access or enter the CHAP secret (12 to 16 characters).


9. On the Enter the Generic Client Name screen, enter the ESX server's IP address as the Client IP address.

10. On Select Persistent Reservation Option, keep the default setting and click Next.

11. On Add the client, review all configuration settings and then click Finish to add the SAN client into the system.

12. Expand the SAN Clients.

You will see the newly created SAN client for ESX server and the iSCSI Target.

The screen below displays the SAN client and iSCSI target created for the ESX server connection.

Create a Service-Enabled Device SAN resource

1. Right-click on SAN Resources and select New.


2. Select Service Enabled Device.

3. Select how you want to create this device.

Custom lets you select one physical device to use.

Batch lets you create multiple SAN resources at one time.

4. Select the device that you want to make into a Service-Enabled Device.


A list of the storage pools and physical resources that have been reserved for this purpose is displayed.

5. (Service-Enabled Devices only) Select the physical device(s) for the Service-Enabled Device’s virtual header.

Even though Service-Enabled Devices are used as is, a virtual header is created on another physical device to allow CDP/NSS storage services to be supported.


6. Enter a name for the new SAN resource.

The name is not case sensitive.

7. Confirm that all of the information is correct and then click Finish to create the SAN resource.

8. Indicate if you would like to assign the new SAN resource to a client.

If you select Yes, the Assign a SAN Resource Wizard is launched.


Assign a SAN resource to one or more clients

You can assign a SAN resource to one or more clients or you can assign a client to one or more SAN resources. While the wizard is initiated differently, the outcome is the same.

1. Right-click on a SAN Resources object and select Assign.

The wizard can also be launched from the Create SAN Resource wizard.

Alternatively, you can right-click on a SAN Client and select Assign.

2. If this server has multiple protocols enabled, select the type of client to which you will be assigning this SAN resource.

3. Select the client to be assigned and determine client access rights.

If you initiated the wizard by right-clicking on a SAN Client instead of a SAN resource, you will need to select the SAN resource(s) instead.

Read/Write - Only one client can access this SAN resource at a time. All others (including Read Only) will be denied access. This is the default.

Read/Write Non-Exclusive - Two clients can connect at the same time with both read and write access. You should be careful with this option because if you have multiple clients writing to a device at the same time, you have the potential to corrupt data. This option should only be used by clustered servers, because the cluster itself prevents multiple clients from writing at the same time.

Read Only - This client will have read only access to the SAN resource. This option is useful for a read-only disk.

Note: (For AIX Fibre Channel clients running DynaPath) If you are re-assigning SAN resources to the same LUN, you must reboot the AIX client after unassigning a SAN resource.

Notes:

• In a Fibre Channel environment, we recommend that only one CDP/NSS Client be assigned to a SAN resource (with Read/Write access). If two or more Fibre Channel clients attempt to connect to the same SAN resource, error messages will be generated each time the second client attempts to connect to the resource.


For Fibre Channel clients, you will see the following screen:

For iSCSI clients, you will see the following screen:

You must have already created a target for this client. Refer to 'Create storage targets for the iSCSI client' for more information.

You can add any application server, even if it is currently offline.

Note: You must enter the client’s name, not an IP address.


4. If this is a Fibre Channel client and you are using Multipath software (such as FalconStor DynaPath), enter the World Wide Port Name (WWPN) mapping.

This WWPN mapping is similar to Fibre Channel zoning and allows you to provide multiple paths to the storage server to limit a potential point of network failure. You can select how the client will see the virtual device in the following ways:

One to One - Limits visibility to a single pair of WWPNs. You will need to select the client’s Fibre Channel initiator WWPN and the server’s Fibre Channel target WWPN.

One to All - You will need to select the client’s Fibre Channel initiator WWPN.

All to One - You will need to select the server’s Fibre Channel target WWPN.

All to All - Creates multiple data paths. If ports are ever added to the client or server, they will automatically be included in the WWPN mapping.


5. If this is a Fibre Channel client and you selected a One to n option, select which port to use as an initiator for this client.

6. If this is a Fibre Channel client and you selected an n to One option, select which port to use as a target for this client.

7. Confirm all of the information and then click Finish to assign the SAN resource to the client(s).


The SAN resource will now appear under the SAN Client in the configuration tree view:

Discover devices from a client

Depending upon the operating system of the client, you may be required to reboot the client machine in order to be able to use the new SAN resource.

Windows clients

If an assigned SAN resource is larger than 3 GB, it will not format properly as a FAT partition.

Solaris clients

x86 vs SPARC

If you create a virtual device and format it for Solaris x86, the device will fail to mount if you try to use that same virtual device under Solaris SPARC.

Label devices

When you create a new virtual device, it needs to be labeled (the drive metrics need to be specified) and a file system has to be put on the virtual device in order to mount it. Refer to the steps below.

Labeling a virtual disk for Solaris:

1. From the command prompt, execute the following command: format

A list of available disk selections will be displayed on the screen and you will be asked to specify which disk should be selected. If you are asked to specify a disk type, select Auto Configure.

2. Once the disk has been selected, you must label the disk.

For Solaris 7 or 8, you will automatically be prompted to label the disk once you have selected it.

3. If you want to partition the newly formatted disk, type partition at the format prompt.

You may accept the default partitions created by the format command or re-partition the disk according to your needs.

On Solaris x86, if the disk is not fixed with the fdisk partitions, the format command will prompt you to run fdisk first.

Note: If the drive has already been labeled and you restart the client, you do not need to run format and label it again.


For further information about the format utility, refer to the man pages.

4. To exit the format utility, type quit at the format prompt.

Creating a file system on a disk managed by the CDP/NSS software:

Warning: Make sure to choose the correct raw device when creating a file system. If in doubt, check with an administrator.

1. To create a new file system, execute the following command:

newfs /dev/rdsk/c2t0d0s2

where c2t0d0s2 is the name of the raw device.

2. To create a mount point for the new file system, execute the following command:

mkdir /mnt/ipstor1

where /mnt/ipstor1 is the name of the mount point you are creating.

3. To mount the disk managed by the CDP/NSS software, execute the following command:

mount /dev/dsk/c2t0d0s2 /mnt/ipstor1

where /dev/dsk/c2t0d0s2 is the name of the block device and /mnt/ipstor1 is the name of the mount point you created.

For further information, refer to the man pages.
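If the new file system should also be mounted automatically at boot, an entry can be added to /etc/vfstab. This is a minimal sketch that reuses the example device and mount point names from the steps above; adjust the names for your environment:

# Append a vfstab entry so the file system mounts at boot
# (fields: device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options)
echo '/dev/dsk/c2t0d0s2  /dev/rdsk/c2t0d0s2  /mnt/ipstor1  ufs  2  yes  -' >> /etc/vfstab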

Virtual device from a different server

When assigning a virtual device from a different storage server, the SAN client software must be restarted in order to add the virtual device to the client machine.

The reason for this is that when virtual devices are added from other storage servers, a new virtual SCSI adapter gets created on the client machine. Since Solaris does not allow new adapters to be added dynamically, the CDP/NSS client software needs to be restarted in order for the new adapter and device to be added to the system.


Expand a virtual device (updated December 2012)

Since virtual devices do not represent actual physical resources, they can be expanded as more storage is needed. A virtual device can be increased in size by adding more blocks of storage from any unallocated space on the same server.

Once a virtual device is expanded on the server side, you must repartition the device and adjust the file system on the client to which the device is assigned. Partition and file system formats are specific to the operating system that the client is running. You can use tools such as Partition Magic, Windows Dynamic Disk, or Veritas Volume Manager to add more drives and expand an existing volume on the fly without application downtime. Refer to the sections below for information regarding disk expansion for clients running different operating systems.

1. Right-click on a virtual device (SAN) and select Expand.

2. Select how you want to expand the virtual device.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

Express lets you designate how much space to allocate and then automatically creates a virtual device using an available device.

The maximum Size to Allocate is the total space available on all available devices. If this drive is mirrored, this number will be half the full amount because the mirrored drive will need an equal amount of space.

Note: Expanding a virtual device while clients are accessing the drive is not recommended.


If you select Custom, you will see the following windows:

Select either an entirely unallocated or partially unallocated device.

Only one device can be selected at a time from this dialog. To expand a virtual device from multiple physical devices, you will need to add the devices one at a time. After selecting the parameters for the first device, you will have the option to add more devices.

Indicate how much space to allocate from this device. Note: If this drive is mirrored, you can only select up to half of the available total space (from all available devices). This is because the mirrored drive will need an equal amount of space.

Click Add More if you want to select space from another physical device.

3. Confirm that all information is correct and then click Finish to expand the virtual device.


Windows Dynamic disks

Expansion of dynamic disks using the Expand SAN Resource Wizard is not supported for clients using Fibre Channel. Due to the nature of dynamic disks, it is not safe to alter the size of the virtual device. However, dynamic disks do provide an alternative method to extend the dynamic volume.

To extend a dynamic volume using SAN resources, perform the following steps:

1. Create a new SAN resource and assign it to the CDP/NSS Client. This will become an additional disk which will be used to extend the dynamic volume.

2. Use Disk Manager to write the disk signature and upgrade the disk to "Dynamic”.

3. Use Disk Manager to extend the dynamic volume.

The new SAN resource should be available in the list box of the Dynamic Disk expansion dialog.

Solaris clients

To extend a volume on a Solaris client, perform the following steps:

1. Use expand.sh to get the new capacity of the disk.

This will automatically label the disk.

2. Use the format utility to add a new partition or, if your file system supports expansion, use your file system’s utility to expand the file system.

Windows clients

To extend a device on a Windows client, rescan the physical device from Computer Management to see the expanded area.

Linux clients

To extend a device on a Linux client, perform the following steps:

1. Use the echo commands to dynamically remove and add the SCSI single device (see the example following these steps), or restart the client to reload the device driver.

2. Use the fdisk utility to create a second partition on the device.
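The echo commands referenced in step 1 take the SCSI host, channel, ID, and LUN numbers of the device. The values below (host 2, channel 0, ID 1, LUN 0) are placeholders for illustration only; replace them with the numbers reported for your device by cat /proc/scsi/scsi:

# Remove and re-add the SCSI device so the kernel re-reads its expanded capacity
echo "scsi remove-single-device 2 0 1 0" > /proc/scsi/scsi
echo "scsi add-single-device 2 0 1 0" > /proc/scsi/scsi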

AIX clients

Expanding a CDP/NSS virtual disk will not change the size of an existing AIX volume group. To expand the volume group, perform the following steps:

1. Assign a new disk to the AIX client.

2. Use the extendvg command to enlarge the size of the volume group.
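For example, assuming the new disk appears on the AIX client as hdisk4 and the volume group is named datavg (both names are placeholders for illustration):

extendvg datavg hdisk4      # add the new physical volume to the volume group
lsvg datavg                 # confirm the additional free physical partitions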


Expand a Service-Enabled Device

SED expansion must be done from the storage side first. Therefore, it is recommended that you check with the storage vendor regarding how to expand the underlying physical LUNs. It is also recommended that you schedule downtime (under most scenarios) to avoid unexpected outages.

A rescan from the FalconStor Management console is necessary to reflect the new size of the disk. To rescan, right-click on the physical adapter and select Rescan.

Grant access to a SAN resource

By default, only the root user and IPStor administrators can manage SAN resources, groups, or clients. While IPStor users can create new SAN resources, if you want an IPStor user to manage an existing SAN resource, you must grant that user access. To do this:

1. Right-click on a SAN resource and select Access Control.

2. Select which user can manage this SAN resource.

Each SAN resource can only be assigned to one IPStor user. This user will have rights to perform any function on this SAN resource, including assigning, configuring for storage services, and deletion.

If a SAN Resource is already assigned to a client, you cannot grant access to the SAN resource if the user is not already assigned to the client. You will have to unassign the SAN resource first, change the access for both the client and the SAN resource, and then reassign the SAN resource to the client.

To check whether a SAN resource has user Access Control enabled, highlight the SAN resource in the FalconStor Management Console and see if there is a value entered in the right panel.


Unassign a SAN resource from a client

1. Right-click on the client or client protocol and select Unassign.

2. Select the resource(s) and click Unassign.

Note that when you unassign a SAN resource from a connected Linux client, the client may be temporarily disconnected from the server. If the client has multiple devices offered from the same server, the temporary disconnect may affect those devices. However, once I/O activity from those devices is detected, the connection is restored automatically and transparently.

Delete a SAN resource

1. (AIX and HP-UX clients) Prior to removing a CDP/NSS device, make sure any logical volumes that were built on top have been removed.

If the CDP/NSS device is removed while logical volumes exist, you will not be able to remove the logical volumes and the system will display error messages.

2. (Windows 2003, Linux, Unix clients) You should disconnect/unmount the client from the SAN resource(s) prior to deleting the SAN resource.

3. Detach the SAN resource from any client that is using it.

For non-Windows clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin.

4. In the Console, highlight the SAN resource, right-click and select Delete.


CDP/NSS Server

CDP and NSS storage servers are designed to require little or no maintenance.

All day-to-day CDP/NSS administrative functions can be performed through the FalconStor Management Console. However, there may be situations when direct access to the Server is required, particularly during initial setup and configuration of physical storage devices attached to the Server or for troubleshooting purposes.

If access to the server’s operating system is required, it can be done either directly or remotely from computers on the SAN.

Start the CDP/NSS server (updated October 2012)

Execute the ipstor start command to start the processes:

If the server is already started, you can use ./ipstor restart to stop and then start the processes. When you start the server, the following processes start:

Starting IPStor FC Initiator Module [OK]
Starting IPStor Authentication Module [OK]
Starting IPStor Block Device Module [OK]
Starting IPStor Base Module [OK]
Starting IPStor IO Core Module [OK]
Starting IPStor Upcall Module [OK]
Starting IPStor FC Target Module [OK]
Starting IPStor iSCSI Target Module* [OK]
Starting IPStor iSCSI Daemon* [OK]
Starting IPStor Communication Module [OK]
Starting IPStor CLI Proxy Module [OK]
Starting IPStor Logger Module [OK]
Starting IPStor Central Client Manager Module [OK]
Starting IPStor SNMP Module [OK]
Starting IPStor Email Alerts Module* [OK]
Starting IPStor Self-monitor Module [OK]
Starting IPStor Failover Module* [OK]

* You will only see this module if this feature is enabled.
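As a usage sketch, the ipstor script is typically run from the server installation's bin directory; the /usr/local/ipstor/bin path below is an assumption and should be verified on your appliance:

cd /usr/local/ipstor/bin
./ipstor start        # start all modules listed above
./ipstor restart      # stop and then start the modules if the server is already running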


Stop the CDP/NSS server

Warning: Stopping the CDP/NSS processes will shut down all access to the storage resources managed by the server. This can halt processing on your application servers, or even cause them to crash, depending upon how they behave if a disk is unexpectedly shut off or removed. It is recommended that you make sure your application servers are not accessing the storage resources when you stop the CDP/NSS processes.

To shut down the processes, execute the ipstor stop command:

You should see the processes stop.

Log into the CDP/NSS server

You can log in from a terminal connected directly to the CDP/NSS server. There is no graphical user interface (GUI) shell required. By default, only the root user has login privileges to the operating system. Other IPStor administrators do not. To log in, enter the password for the root user.

Use Telnet

By default, IPStor administrators do not have telnet access to the server. The server is configured to deny all TCP/IP access, including telnet. To enable telnet:

1. Install the following RPM files on the machine:

• #rpm -ivh xinetd-…..rpm
• #rpm -ivh telnet-…..rpm

2. Edit the telnet configuration file:

#vi /etc/xinetd.d/telnet

3. Change disable=yes to disable=no.

4. Restart xinetd:

#service xinetd restart
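Steps 2 and 3 can also be performed non-interactively; this is a minimal sketch that assumes the stock /etc/xinetd.d/telnet file contains a "disable = yes" line:

sed -i 's/disable[[:space:]]*= yes/disable = no/' /etc/xinetd.d/telnet    # enable the telnet service
service xinetd restart                                                    # reload xinetd to apply the change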

Linux Server only

To grant telnet access to another computer on the network:

1. Log into the CDP/NSS server directly (on the local terminal).

2. Edit the /etc/passwd file.

For the appropriate administrator, change the line that looks like:

/dev/null:/dev/null

to:

Username:/homedirectory:/bin/bash

where Username is an actual administrator name and homedirectory is the actual home directory.

Warning: Do not permit storage server login access by anyone except your most trusted system or storage administrators. Administrators with login access to the server have the ability to modify, damage, or destroy data managed by the server.

Note: For a more secure session, you may want to use the program ssh, which is supplied by some versions of the Linux operating system. Refer to the Linux manual that came with your operating system for more details about configuration.


Check CDP/NSS processes (updated October 2012)

You can type the ipstor status command from the shell prompt to check the status of CDP/NSS server processes:

You should see something similar to the following:

Status of IPStor FC Initiator Module [RUNNING]
Status of IPStor Authentication Module [RUNNING]
Status of IPStor Block Device Module [RUNNING]
Status of IPStor ServerBase Module [RUNNING]
Status of IPStor IO Core Module [RUNNING]
Status of IPStor Upcall Module [RUNNING]
Status of IPStor FC Target Module * [RUNNING]
Status of IPStor iSCSI Target Module * [RUNNING]
Status of IPStor iSCSI Daemon [RUNNING]
Status of IPStor Communication Module [RUNNING]
Status of IPStor CLI Proxy Module [RUNNING]
Status of IPStor Logger Module [RUNNING]
Status of IPStor SNMPD Module [RUNNING]
Status of IPStor Email Alerts [RUNNING]
Status of IPStor Self-monitor Module [RUNNING]
Status of IPStor Failover Module ** [RUNNING]
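Because each module reports its state on its own line, the output can be filtered to spot anything that is not running; a minimal sketch based on the output format shown above:

./ipstor status | grep -v RUNNING     # show only modules that are not reporting RUNNING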


The following table lists the name and description of the CDP/NSS processes:

CDP/NSS process - Description

Initiator Module - Represents the QLogic Fibre Channel initiator module, which provides interaction between the CDP/NSS server and the FC storage.

Authentication Module - Manages authentication requests between replica servers.

Block Device Module - Represents a generic block-to-SCSI driver that provides the SCSI interface for CDP/NSS to access non-SCSI block devices.

Base Module - Provides basic memory management and SCSI device management to IO modules.

Upcall Module - Handles interactions between kernel mode and user mode components.

IO Core Module - Provides core IO services.

FC Target Module - Provides Fibre Channel target functionality.

iSCSI Target Module - Provides iSCSI target functionality via a network adapter.

iSCSI Daemon - Represents a user daemon that handles the login process to the CDP/NSS iSCSI target from iSCSI initiators.

Communication Module - Handles console-to-server communication and manages overall system configuration information.

CLI Proxy Module - Facilitates communication between the CLI utility and the CDP/NSS server.

Logger Module - Provides the information logging function for CDP/NSS reports.

Central Client Manager Module - Provides integration with Central Client Manager.

Email Alerts Module - Sends alerts via email to indicate alarming situations.

SNMPD Module - Interacts with SNMP management software to reply to MIB browsing queries and to send SNMP traps for abnormal situations.

Self-monitor Module - Checks the server's own health.

Failover Module - Checks the partner server's health in a failover setup.


Check physical resources

When adding physical resources or testing to see if the physical resources are present, the following command can be executed from the shell prompt in Linux:

cat /proc/scsi/scsi

This command displays the SCSI devices attached to the IPStor Server. For example, you will see something similar to the following:

[0:0:0:0] disk 3ware Logical Disk 0 1.2 /dev/sda
[0:0:1:0] disk 3ware Logical Disk 1 1.2 /dev/sdb
[2:0:1:0] disk IBM-PSG ST318203FC !# B324 -
[2:0:2:0] disk IBM-PSG ST318203FC !# B324 -
[2:0:3:0] disk IBM-PSG ST318304FC !# B335 -
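To get a quick device count from the same source, the listing can be filtered; a minimal sketch assuming the standard /proc/scsi/scsi layout, where each attached device has a Vendor: line:

grep -c Vendor /proc/scsi/scsi     # count the SCSI devices the kernel currently sees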


Check activity statistics

There is a utility that is installed with CDP/NSS that allows you to view activity statistics for virtual and physical devices as well as for Fibre Channel target ports. This utility can also report pending commands for physical and virtual devices.

To run this utility, type the ismon command on the storage server:

This command displays all virtual resources (SAN, Snapshot, etc.) for this storage server. For each resource, the screen shows its size, amount of reads/writes in KB/second, and number of read/write commands per second. Information on the screen is automatically refreshed every five seconds.

You can change the information that is displayed or the way it is sorted. The following options are available by pressing the appropriate key on your server:

Option Description

c Toggle incremental/cumulative mode

v Display information for virtual devices

p Display information for physical devices. You can launch ismon -p at the command prompt to view this information directly.

t Display information for each FC target mode. You can launch ismon -t at the command prompt to view this information directly.

u Page up

d Page down

V Sort by virtual device ID

R Sort by KB read

r Sort by read SCSI command

o Sort by other SCSI command

A Sort by ACSL

S Sort by virtual device size

W Sort by KB written

w Sort by write SCSI command

E Sort by SCSI command error

N Sort by virtual device name

P Sort by WWPN

m Display Max value fields (incremental mode only)

l Start logging

k Reload virtual device name alias

K Edit virtual device name alias

h View help page

q Quit


Remove a physical storage device (updated November 2012)

Follow the steps below to remove a physical storage device from a storage server.

1. Suspend failover (select Failover --> Suspend Failover) if you are removing a physical device from a server that is part of a failover setup.

2. Unassign and delete all SAN resources used by the physical storage device you are removing.

3. Remove all Fibre Channel zones between the storage and the storage server.

4. From the console, perform a rescan on the physical adapters.

5. After the rescan has finished and the devices are offline, right-click and select Delete.

6. Resume failover (select Failover --> Resume Failover) if the server was part of a failover setup.

Configure iSCSI storage

This section provides details regarding the requirements and procedures needed to prepare your CDP/NSS appliance to use dedicated iSCSI downstream storage, using either a software HBA (iscsi-initiator) or a hardware iSCSI HBA.

Configuring iSCSI software initiator

The iSCSI software initiator is provided with every CDP and NSS appliance and can be configured to use dedicated iSCSI downstream storage using the iscsiadm command line interface.

The CDP/NSS iSCSI software initiator supports up to 32 initiator-target host connections. If you have n Ethernet port devices on the appliance, you are allowed 32 / n storage targets; for example, with four Ethernet ports you can use up to eight storage targets. An iSCSI hardware initiator does not have this limitation.

In order for the iSCSI software initiator to be properly configured, it must be configured so it is aware of the individual interfaces it will use for connectivity to the downstream storage.



1. Create a blank default configuration for each Ethernet device on the CDP/NSS appliance using the iscsiadm command line interface.

iscsiadm -m iface -I iface-eth<device Number> -o new

For example, if you are using 4 Ethernet devices for an iSCSI connection, run the following commands:

iscsiadm -m iface -I iface-eth0 -o new

iscsiadm -m iface -I iface-eth1 -o new

iscsiadm -m iface -I iface-eth2 -o new

iscsiadm -m iface -I iface-eth3 -o new

2. Persistently bind each Ethernet device to a MAC address to ensure that the same device is always used for the iSCSI connection. To do this, use the following command:

iscsiadm -m iface -I iface-eth0 -o update -n iface.hwaddress -v <MAC address>

3. Connect each Ethernet device to the iSCSI targets.

4. Discover targets that are accessible from your initiators using the following command:

iscsiadm -m discovery -t st -p 192.168.0.254

5. Log the iSCSI initiator to the target using the following command:

iscsiadm -m node -L

6. Confirm configured Ethernet devices are associated with targets by running the following command:

iscsiadm -m session

Command output example:

tcp: [1] 192.168.0.254:3260,0 <target iqn name>

7. Perform a rescan from the FalconStor Management Console to see all of the iSCSI devices.
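Steps 1 through 6 can be combined into a short shell sketch. The interface count, MAC lookup path, and portal address below are placeholders for illustration and should be replaced with values from your environment; the -L all form is used here to log in to all discovered nodes:

# Create an iface record for each Ethernet device and pin it to that device's MAC address
for i in 0 1 2 3; do
    iscsiadm -m iface -I iface-eth$i -o new
    MAC=$(cat /sys/class/net/eth$i/address)
    iscsiadm -m iface -I iface-eth$i -o update -n iface.hwaddress -v $MAC
done

# Discover targets behind the storage portal, log in, and confirm the sessions
iscsiadm -m discovery -t st -p 192.168.0.254
iscsiadm -m node -L all
iscsiadm -m session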

Configuring iSCSI hardware HBA

Only QLogic iSCSI HBAs are supported on a CDP or NSS appliance. The QLogic SANSurfer command line interface "iscli" allows configuration of an iSCSI HBA.

The iSCSI HBAs should be configured such that they are in the same subnet as the iSCSI storage. The iSCSI hardware initiator does not require any special configuration for multipath support; you can just connect multiple HBA ports to a downstream iSCSI target. The QLogic iSCSI HBA driver handles multipath traffic.

1. Run QLogic SANSurfer CLI to display the HBA configuration menu:


/opt/QLogic_Corporation/SANsurferiCLI/iscli

Note the information displayed in the menu header for the current HBA port. By default, the configuration for HBA 0, port 0 displays.

The Port Level Info & Operations menu displays.

2. To configure the selected HBA port, select option 4 - Port Level Info & Operations.

Make sure to save your changes to the previous port before selecting another port; otherwise, your changes will be lost.

3. To change the IP address of the selected port, select option 2 - Port Network Setting Menu.

The Port Network Setting Menu interface allows you to configure the IP address for the selected port.

4. To change target parameters for the selected HBA port, select option 7 - Target Level Info & Operations.

The HBA Target Menu displays.

5. Discover iSCSI targets by selecting option 10 - Target Discovery Menu.

The HBA Target Discovery Menu displays.

6. To add a new target, select option 3 - Add a Send Target.

• Answer Yes when asked if you want the new send target to be auto-login and persistent. Otherwise, the target will not persist through a reboot and will require manual intervention.
• Enter the IP address.
• Indicate whether or not the send target requires CHAP authentication.
• Confirm the send target has been added by listing the send targets (option 1).

7. Save the configuration changes for the selected HBA port by selecting option 12 - Save changes and reset HBA.

8. Once all of the ports are configured, return to the HBA Target Menu and select option 11 - List LUN Information.

All discovered and connected targets are listed. Select a target to view all LUNs associated with that target.

Uninstall a storage server

To uninstall a storage server:

1. Execute the following command:

rpm -e ipstor


This command removes the installation of the storage server but leaves the /ipstor directory and its subdirectories.

2. To remove the /ipstor directory and its subdirectories, execute the rm -rf ipstor command from the /usr/local directory.

Note: We do not recommend deleting the storage server files without using rpm -e. However, to re-install the CDP/NSS software if the storage server was removed without using the rpm utility, or to install over an existing storage server installation, execute the following command, which forces a re-installation of the software:

rpm -i --force <package name>

To determine the package name, check the Server directory on the CDP/NSS installation media. Refer to the rpm man pages for more information.


iSCSI Clients

iSCSI clients are the file and application servers that access CDP/NSS SAN resources using the iSCSI protocol. Just as the CDP/NSS appliance supports different types of storage devices (such as SCSI, Fibre Channel, and iSCSI), the CDP/NSS appliance is protocol-independent and supports multiple outbound target protocols, including iSCSI Target Mode. This chapter provides an overview for configuring iSCSI clients with CDP or NSS.

iSCSI builds on top of the regular SCSI standard by using the IP network as the connection link between various entities involved in a configuration. iSCSI inherits many of the basic concepts of SCSI. For example, just like SCSI, the entity that makes requests is called an initiator, while the entity that responds to requests is called a target. Only an initiator can make requests to a target; not the other way around. Each entity involved, initiator or target, is uniquely identified.

By default, when a client machine is added as an iSCSI client of a CDP or NSS appliance, it becomes an iSCSI initiator. The initiator name is important because it is the main identity of an iSCSI initiator.

iSCSI target mode is supported for iSCSI initiators on various platforms, including Windows, VMware, and Linux. Refer to the Certification Matrix for all support information.

Requirements

The following requirements are valid for all iSCSI clients regardless of platform:

• You must install an iSCSI initiator on each of your client machines. iSCSI software/hardware initiator is available from many sources and needs to be installed and configured on all clients that will access shared storage. Refer to the FalconStor certification matrix for a list of supported iSCSI initiators.

• You should not install any storage server client software on the client unless you are using a FalconStor snapshot agent.


Configure iSCSI clients

Refer to the following sections for an overview for configuring iSCSI clients with CDP/NSS.

‘Enable iSCSI’

‘Configure your iSCSI initiator’

‘Create storage targets for the iSCSI client’

‘Add your iSCSI client in the FalconStor Management Console’

Enable iSCSI

In order to add a client using the iSCSI protocol, you must enable iSCSI for your storage server. To do this, in the FalconStor Management Console, right-click on your storage server and select Options --> Enable iSCSI.

As soon as iSCSI is enabled, a new SAN client called Everyone_iSCSI is automatically created on your storage server. This is a special SAN client that does not correspond to any specific client machine. Using this client, you can create iSCSI targets that are accessible by any iSCSI client that connects to the storage server.

Before an iSCSI client can be served by a CDP or NSS appliance, the two entities need to mutually recognize each other. The following sections take you through this process.

Configure your iSCSI initiator

You need to register your iSCSI client as an initiator to your storage server. This enables the storage server to see the initiator.

To do this, you must launch the iSCSI initiator on the client machine and identify your storage server as the target server. You will have to enter the IP address or name (if resolvable) of your storage server.

Refer to the documentation provided by your iSCSI initiator for detailed instructions about how to do this.

Afterwards, you may need to start or restart the initiator if it is a Unix client.


Add your iSCSI client in the FalconStor Management Console

1. Right-click on SAN Clients and select Add.

2. Select the protocol for the client you want to add.

Note: If you have more than one IP address, a screen will display prompting you to select the IP address that the iSCSI target will be accessible over.


3. Select the initiator that this client uses.

If the initiator does not appear, you may need to rescan. You can also manually add it, if necessary.

4. Select the initiator or select the client to have mobile access.

Stationary iSCSI clients correspond to specific iSCSI client initiators and, consequently, to the client machines that own those initiator names. Only a client machine with a correct initiator name can connect to the storage server to access the resources assigned to this stationary client.


5. Add/select users who can authenticate for this client. The user name defaults to the initiator name. You will also need to enter the CHAP secret.

Click Advanced to add existing users to this target.

For unauthenticated access, select Allow Unauthenticated Access. With unauthenticated access, the storage server will recognize the client as long as it has an authorized initiator name. With authenticated access, an additional check is added that requires the user to type in a username and password. More than one username/password pair can be assigned to the client, but they will only be useful when coming from the machine with an authorized initiator name.

Select the Enable Mutual CHAP secret if you want the target and the initiator to authenticate to each other. A separate secret will be set for each target and each initiator.


6. Enter the name of the client, select the operating system, and indicate whether or not the client machine is part of a cluster.

7. Click Find to locate the client machine.

The IP address of the machine with the specified host name will be automatically filled in if the name is resolvable.

Note: It is very important that you enter the correct client name.


8. Indicate if you want to enable persistent reservation.

This option allows clustered SAN Clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.

9. Confirm all information and click Finish.



Create storage targets for the iSCSI client

1. In the FalconStor Management Console, right-click on the iSCSI protocol object for an iSCSI client and select Create Target.

2. Enter a new target name for the client or accept the default.

3. Select the IP address(es) of the storage server to which this client can connect.

You can select multiple IPs if your iSCSI initiator has multipathing support (such as the Microsoft initiator version 2.0).

If you specified a default portal (in Server Properties), that IP address will be selected for you.

4. Select an access mode.

Read/Write - Only one client can access this SAN resource at a time. All others (including Read Only) will be denied access.

Read/Write Non-Exclusive - Two or more clients can connect at the same time with both read and write access. You should be careful with this option because if you have multiple clients writing to a device at the same time, you have the potential to corrupt data. This option should only be used by clustered servers, because the cluster itself prevents multiple clients from writing at the same time.

Read Only - This client will have read only access to the SAN resource. This option is useful for a read-only disk.

5. Select the SAN resource(s) to be assigned to the client.

If you have not created any SAN resources yet, you can assign them at a later time. You may need to restart the iSCSI initiator afterwards.

Note: The Microsoft iSCSI initiator can only connect to an iSCSI target if the target name is no longer than 221 characters. It will fail to connect if the target name is longer than this.


6. Use the default starting LUN.

Once the iSCSI target is created for a client, LUNs can be assigned under the target using available SAN resources.

7. Confirm all information and click Finish.

Restart the iSCSI initiator

In order for the client to be able to access its storage, you must restart the iSCSI initiator on Unix clients or log the client onto the target (Windows).

It may be desirable to have a persistent target. Refer to the documentation provided by your iSCSI initiator for detailed instructions about how to do this.

Windows iSCSI clients and failover (updated February 2013)

The Microsoft iSCSI initiator has a default retry period of 60 seconds. If you are using Windows Server 2003, you must change it to 300 seconds in order to sustain the disk for five minutes during failover so that applications will not be disrupted by temporary network problems. This setting is changed through the registry as described below. If you are using Windows Server 2008, use DynaPath to sustain the disk for up to five minutes during failover.

1. Go to Start --> Run and type regedit.

2. Find the following registry key:

HKEY_LOCAL_MACHINE\system\CurrentControlSet\control\class\4D36E97B-xxxxxxxxx\<iscsi adapter interface>\parameters\

where iscsi adapter interface corresponds to the adapter instance, such as 0000, 0001, .....

3. Right-click Parameters and select Export to create a backup of the parameter values.

4. Double-click MaxRequestHoldTime.

5. Pick Decimal and change the Value data to 300.

6. Click OK.

7. Double-click EnableNOPOut.

8. Set the Value data to 1.

9. Click OK.

10. Reboot Windows for the change to take effect.

Disable iSCSI

To disable iSCSI for a CDP or NSS appliance, right-click on the server node in the FalconStor Management Console, and select Options --> Disable iSCSI.


Note that before disabling iSCSI, all iSCSI initiators and targets for the CDP or NSS appliance must be removed.


Logs and Reports

The CDP/NSS appliance retains information about the health and behavior of the physical and virtual storage resources on the server. It maintains an Event Log to record system events and errors. The appliance also maintains performance data on the individual physical storage devices and SAN resources, which can be filtered to produce various reports through the FalconStor Management Console.

Event Log

The Event Log details significant occurrences during the operation of the storage server. The Event Log can be viewed in the FalconStor Management Console when you highlight a Server in the tree and select the Event Log tab in the right pane.

The following is a sample Event Log display. You can double-click on an event to display additional information, such as the probable cause of the error and suggested action.


The columns displayed are:

Type - I: An informational message; no action is required. W: A warning message stating that something occurred that may require maintenance or corrective action; however, the storage server is still operational. E: An error indicating that a failure has occurred, such as a resource being unavailable, an operation failing, or a licensing violation; corrective action should be taken to resolve the cause of the error. C: A critical error that stops the system from operating properly; you will be alerted to all critical errors when you log into the server from the console.

Date - The date on which the event occurred.

Time - The time at which the event occurred.

ID - The message number.

Event Message - A text description of the event describing what has occurred.

Sort information in the Event Log

When you initially view the Event Log, all information is displayed in chronological order (most recent at the top). If you want to reverse the order (oldest at top) or change the way the information is displayed, you can click on a column heading to re-sort the information. For example, if you click on the ID heading, you can sort the events numerically. This can help you identify how often a particular event occurs.

Filter information stored in the Event Log

By default, all informational system messages, warnings, and errors are displayed. To filter the information that is displayed, right-click on a Server and select Event Log --> Filter. From the filter dialog, you can:

• Select which message types you want to include.
• Select a time or date range for messages.
• Specify the maximum number of lines to display.
• Search for records that contain/do not contain specific text.
• Select a category of messages to display.


Refresh the Event Log

You can refresh the current Event Log display by right-clicking on the Server and selecting Event Log --> Refresh.

Print/Export Event Log

You can print the Event Log to a printer or save it as a text file. These options are available (once you have displayed the Event Log) when you right-click on the Server and select the Event Log options.

In the Event Log filter dialog, you can select which message types to include, select a time or date range for messages, specify the maximum number of lines to display, search for records that contain or do not contain specific text, and select a category of messages to display.


Reports

FalconStor provides reports that offer a wide variety of information:

• Performance and throughput - By SAN Client, SAN resource, SCSI channel, and SCSI device.

• Usage/allocation - By SAN Client, SAN resource, Physical resource, and SCSI adapter.

• System configuration - Physical Resources.

• Replication reports - You can run an individual report for a single server or you can run a global report for multiple servers.

Individual reports are viewed from the Reports object in the console. Global replication reports are created from the Servers object.

Set report properties

Prior to setting up reports, review the properties you have set in the Activity Database Maintenance tab (right-click on the server and select Properties --> Activity Database Maintenance).

• The report feature polls log files to generate reports.

• The default maximum size of the log database is 50MB. If the size of the log database exceeds 50MB, older logs are deleted to maintain the 50MB limit.

• The default maximum number of days of log history to keep is 30. Log data older than 30 days is deleted. If you plan to create reports covering data older than 30 days, you must increase this value. For example, if you generate a report covering data for the past year but the maximum log history is set to only 30 days, the report will contain only 30 days of data.


Create an individual report

1. To create a report, right-click on the Reports object and select New.

2. Select a report.

Depending upon which report you select, additional windows appear to allow you to filter the information for the report. Descriptions of each report appear on the following pages.

3. Select the reporting schedule.

Depending upon which report you select, you can select to run the report for one time only, or select a daily, weekly, or monthly date range.


• To create a one-time only report, click the For One Time Only radio button and click Next.

If applicable, specify the date or date range for the report and indicate which SAN resources and Clients to use in the report.

Selecting Past n days/weeks/months will create reports that generate data relative to the time of execution.

Include All SAN Resources and Clients – Includes all current and previous configurations for this server (including SAN resources and clients that you may have changed or deleted).

Include Current Active SAN Resources and Clients Only – Includes only those SAN resource and clients that are currently configured for this server.

The Delta Replication Status report has a different dialog that lets you specify a range by selecting starting and ending dates.


• To create a daily report, click the Daily radio button, give the schedule a name if desired and click Next.

• Set the schedule frequency, duration, and start time and click Next.

• To create a weekly report, click the Weekly radio button.


• To create a monthly report, click the Monthly radio button.

4. If applicable, select the objects necessary to filter the information in the report.

Depending upon which report you selected, you may be asked to select from a list of storage servers, SCSI adapters, SCSI devices, SAN clients, SAN resources, or replica resources.

5. If applicable, select which columns you want to display in the report and in which sort order.

Depending upon which report you selected, you may be able to select which column fields to display on the report. All available fields are selected by default. You can also select whether you want the data sorted in ascending or descending order.


6. Enter a name for the report.

7. Confirm all information and click Finish to create the report.

View a report

When you create a report, it is displayed in the right-hand pane and is added beneath the Reports object in the configuration tree.

Expand the Reports object to see the existing reports available for this Server.

When you select an existing report, it is displayed in the right-hand pane.

Export data from a report

You can save the data from the server and device throughput and usage reports. The data can be saved in a comma delimited (.csv) or tab delimited (.txt) text file. To export information, right-click on a report that is generated and select Export.
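The exported file can then be processed with standard text tools. As a minimal sketch, assuming a comma-delimited export named report.csv in which the third column holds a throughput value (the actual column layout depends on the report and the columns you chose to include), the following command totals that column while skipping the header row:

awk -F',' 'NR > 1 { total += $3 } END { print total }' report.csv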


Schedule a report

Reports can be generated on a regular basis or as needed. Some tips to remember on scheduling are as follows:

The start and end dates in the report scheduler are inclusive.

When scheduling a monthly report, be sure to select a date that exists in every month. For example, if you select to run a report on the 31st day, the report will not be generated on months that do not have 31 days.

When scheduling a report to run every “n” days in selected months, the first report is always generated on the first of the month and then every “n” number of days after. Therefore if you chose 30 days (n = 30) and there are not 30 days left in the month, the schedule will jump to the first day of the next month.

Some reports allow you to select a range of dates from the day you are generating the report for the past “n” number of days. If you select for the past one day, the report will be generated for one day.

When scheduling a daily report, it is best practice to schedule the report to run at the end of the day to capture the most amount of data. Daily report data accumulation begins at 12:00 am and ends at the scheduled run time.


E-mail a scheduled report

Scheduled reports can be sent to one or more e-mail addresses by selecting the E-mail option in the Report Wizard.

Enter the e-mail addresses, separated by semi-colons. You can also send the report to distribution groups, as long as the e-mail server being used supports this feature.

Report types

The FalconStor reporting feature includes many useful reports including allocation, usage, configuration, and throughput reports. A description of each report follows.

Client Throughput Report

The SAN Resource tab of the Client Throughput Report displays the amount of data read/written between this client and SAN resource. To see information for a different SAN resource, select a different Resource Name from the drop-down box in the lower right hand corner.

The Data tab shows the tabular data that was used to create the graphs.


The following is a sample page from a Client Throughput Report:

Delta Replication Status Report

This report displays information about replication activity, including compression, encryption, MicroScan and protocol. It provides a centralized view for displaying real-time replication status for all disks enabled for replication. It can be generated for an individual disk, multiple disks, source server or target server, for any range of dates. This report is useful for administrators managing multiple servers that either replicate data or are the recipients of replicated data.

The report can display information about existing replication configurations only or it can include information about replication configurations that have been deleted or promoted (you must select to view all replication activities in the database).


The following is a sample Delta Replication Status Report:

The Replication Status Summary tab displays a consolidated summary for multiple servers.


Disk Space Usage Report

This report shows the amount of disk space being used by each SCSI adapter.

The Disk Space Usage tab displays a pie chart showing the following space usage amounts:

• Storage Allocated Space
• Snapshot Allocated Space
• Cache Allocated Space
• HotZone Allocated Space
• Journal Allocated Space
• CDR Allocated Space
• Configuration Allocated Space
• Total Free Space

A sample is displayed below:

The Data tab breaks down the disk space information for each physical device. The Utilization tab breaks down the disk space information for each logical device.


Disk Usage History Report

This report allows you to create a custom report from the statistical history information collected. You must have the “statistic log” enabled to generate this report. The data is logged once a day at a specified time; the data collected is a representative sample of the day.

In addition, if servers are set up as a failover pair, the “Disk usage history” log must be enabled on both servers in order for data to be logged during failover. In a failover state, the data logging time set on the secondary server is followed.

Select the reporting period range, whether to include the disk usage information from the storage pools, and the sorting criteria.

A sample is displayed below:


Fibre Channel Configuration Report

This report displays information about each Fibre Channel adapter, including type, WWPN, mode (initiator vs. target), and a list of all WWPNs with client information.

The following is a sample Fibre Channel Configuration Report:


Physical Resources Configuration Report

This report lists all of the physical resources on this Server, including each physical adapter and physical device. To make this report more meaningful, you can rename the physical adapter (right-click on the adapter and select Rename). For example, instead of using the default name, you can use a name such as “Target Port A”.

The following is a sample Physical Resources Configuration Report:


Physical Resources Allocation Report

This report shows the disk space usage and layout for each physical device. The following is a sample Physical Resources Allocation Report:


Physical Resource Allocation Report

This report shows the disk space usage and layout for a specific physical device.

The following is a sample Physical Resource Allocation Report:

Resource IO Activity Report

The Resource IO Activity Report shows the input and output activity of selected resources. The report options and filters allow you to select the SAN resource and client to report on within a particular date/time range.

You can view a graph of the IO activity for each SAN resource including errors, delayed IO, data, and configuration information. The Data tab shows the tabular data that was used to create the graph and the Configuration Information tab shows which SAN resources and Clients were included in the report.


The following is a sample of the Resource IO Activity Report.


The results shown on the Data tab of the Resource IO Activity Report are displayed below:

SCSI Channel Throughput Report

The SCSI Channel Throughput Report shows the data going through each SCSI channel on the Server. This report can be used to determine which SCSI bus is heavily utilized and/or which bus is under utilized. If a particular bus is too heavily utilized, it may be possible to move one or more devices to a different or new SCSI adapter.

Some SCSI adapters have multiple channels. Each channel is measured independently.


During the creation of the report, you select which SCSI channel to include in the report.

When this report is created, there are three tabs of information.

The SAN Resource tab displays a graph showing the throughput of the channel. The horizontal axis displays the time segments. The vertical axis measures the total data transferred through the selected SCSI channel, in each time segment for both reads and writes.

The System tab displays the CPU and memory utilization for the same time period as the main graph.

The Data tab shows the tabular data that was used to create the graphs.


The following is a sample SCSI Channel Throughput Report:

SCSI Device Throughput Report

The SCSI Device Throughput Report shows the utilization of the physical SCSI storage device on the Server. This report can show if a particular device is heavily utilized or under utilized.

During the creation of the report, you select which SCSI device to include.

The SAN Resource tab displays a graph showing the throughput of the SCSI device. The horizontal axis displays the time segments. The vertical axis measures the total data transferred through the selected SCSI device, in each time segment for both reads and writes.

The System tab displays the CPU and memory utilization for the same time period as the main graph.

The Data tab shows the tabular data that was used to create the graphs.


The following is a sample SCSI Device Throughput Report:

SAN Client Usage Distribution Report

The Read Usage tab of the SAN Client Usage Distribution Report displays a bar chart that shows the amount of data read by Clients of the current Server. The chart shows three bars, one for each Client.

The Read Usage % tab displays a pie chart showing the percentage for each Client.

The Write Usage tab displays a bar chart that shows the amount of data written to the Clients. The chart shows three bars, one for each active Client.

The Write Usage % tab displays a pie chart showing the percentage for each Client.


The following is a sample page from a SAN Client Usage Distribution Report:

SAN Client/Resources Allocation Report

For each Client selected, this report displays information about the resources assigned to the Client, including disk space assigned, type of access, and breakdown of physical resources.

The following is a sample SAN Client / Resources Allocation Report:


SAN Resources Allocation Report

This report displays information about the resources assigned to each Client, including disk space assigned, type of access, and breakdown of physical resources.

The following is a sample SAN Resources Allocation Report:


SAN Resource Usage Distribution Report

The Read Usage tab of the SAN Resource Usage Distribution Report displays a bar chart that shows the amount of data read from each SAN Resource associated with the current Server. The chart shows six bars, one for each SAN Resource (in order of bytes read).

The Read Usage % tab displays a pie chart showing the percentage for each SAN resource.

The Write Usage tab displays a bar chart that shows the amount of data written to the SAN resources.

The Write Usage % tab displays a pie chart showing the percentage for each SAN resource.

The following is a sample page from a SAN Resource Usage Distribution Report:

Server Throughput and Filtered Server Throughput Report

The Server Throughput Report displays the overall throughput of the Server.

The Filtered Server Throughput Report takes a subset of clients and/or SAN resources and displays the throughput of that subset.

When creating the Filtered Server Throughput Report, you can specify which SAN resources and which clients to include.

When these reports are created, there are several tabs of information.


The SAN Resource tab displays a graph showing the throughput of the Server. The horizontal axis displays the time segments. The vertical axis measures the total data transferred in each time segment for both reads and writes. For example:

The System tab displays the CPU and memory utilization for the same time period as the main graph:


This helps the administrator identify time periods where the load on the Server is greatest. Combined with the other reports, the specific device, client, or SAN resource that contributes to the heavy usage can be identified.

The Data tab shows the tabular data that was used to create the graphs:

The Configuration Information tab shows which SAN Resources and Clients were included in the report.


Storage Pool Configuration Report

This report shows detailed Storage Pool information. You can select the information to display in each column as well as the order. This includes:

• Device Name
• SCSI Address
• Sectors
• Total (MB)
• Used (MB)
• Available (MB)

The following is a sample Storage Pool Configuration Report:


User Quota Usage Report

This report shows a detailed description of the amount of space used by each of the resources from the selected users on the current server. You can select the information to display in each column, the sort order and the user on which to report information. Report columns include:

• ID
• Resource Name
• Type
• Category
• Size (MB)

The following is a sample User Quota Usage Report.


Report types - Global replication

While you can run a replication report for a single server from the Reports object, you can also run a global report for multiple servers from the Servers object.

From the Servers object, you can also create a report for a single server, consolidate existing reports from multiple servers, and create a template for future reports.

Create a global replication report

1. To run a global replication report, highlight the Servers object and select Replication Status Reports --> New.

2. When prompted, enter a date range for the report and indicate whether you want to use a saved template to create this report or if you are going to define this report as you go through the wizard.

3. Select which servers to include in the report.

4. Select which resources to include from each server.

Be sure to select each primary server from the drop-down box to select resources.

5. Select what type of information you want to appear in the report and the order.

Use the up/down arrows to order the information.

6. Set the sorting criteria for the columns.

Click in the Sorting field to alternate between Ascending, Descending, or Not Sorted. You can also use the up/down arrows to change the sorting order of the columns.

7. Give the report a name and indicate where to save it.

You can also save the current report template for future use.

8. Review all information and click Finish to create the report.

View global report

The group replication report will open in its own window. Here you can change what is displayed, change the sort order, export data, or print.

Since you can select more columns than can fit on a page, when printing a report where many columns have been selected, it is recommended that you preview the report before printing. You may need to make sure the columns have not overlapped.


Fibre Channel Target Mode

Just as CDP and NSS support different types of storage devices (such as SCSI, Fibre Channel, and iSCSI), CDP and NSS appliances are protocol-independent and support multiple outbound target protocols, including Fibre Channel target mode. CDP/NSS support for the Fibre Channel protocol allows any Fibre Channel-enabled system to take advantage of FalconStor’s extensive storage capabilities such as virtualization, mirroring, replication, NPIV, and security. Support is offered for all Fibre Channel (FC) topologies, including Point-to-Point and Fabric.

This section provides configuration information for Fibre Channel target mode as well as the associated Fibre Channel SAN equipment (i.e. switch, T3, etc.). An application server can be an iSCSI Client, a Fibre Channel Client, or both. Using separate cards and switches, you can have all types of FalconStor Clients (FC or iSCSI) on your storage network.

Fibre Channel target mode is supported on various platforms, including Windows, VMware, and Linux. Refer to the FalconStor Certification Matrix at Falconstor.com for all support information.


Fibre Channel over Ethernet (FCoE)

NSS supports FCoE using QLogic QLE8152 and QLE8142 Converged Network Adapters (CNAs) along with the Cisco MDS 5010 FCoE switch. The storage server detects the installed CNAs. Each CNA is seen as a regular Fibre Channel adapter with a WWPN association.

Fibre Channel target mode - configuration overview

The installation and configuration of Fibre Channel target mode involves several steps. Detailed information for each step appears in subsequent sections.

1. Prepare your Fibre Channel hardware configuration.

2. Enable Fibre Channel target mode.

3. (If applicable) Set QLogic ports to target mode.

4. (Optionally) ‘Set up your failover configuration’.

5. Add Fibre Channel clients.

6. (Optionally) Associate World Wide Port Names (WWPN) with clients.

7. Assign virtualized resources to Fibre Channel Clients.

8. View new devices.

9. (Optionally) Install and configure DynaPath.

Configure Fibre Channel hardware on server

CDP and NSS support the use of QLogic HBAs for the storage server. For a list of HBAs that are currently certified, refer to the certification matrix on the FalconStor website.

Ports

Your CDP/NSS appliance is equipped with several Fibre Channel ports. The ports that connect to storage arrays are commonly known as Initiator Ports. The ports that will interface with the backup servers' FC initiator ports will run in a different mode known as Target Mode.


Downstream Persistent binding

Persistent binding is automatically configured for all QLogic HBAs connected to storage device targets upon the discovery of the device (via a Console physical device rescan with the Discover New Devices option enabled). However, persistent binding will not be SET until the HBA is reloaded. You can reload HBAs by restarting CDP/NSS with the ipstor restart all command.

After the HBA has been reloaded and the persistent binding has been set, you can change the target port ID through the console. To do this, right-click on Physical Resources or a specific adapter and select Target Port Binding.

Important: Do not change the target-port ID prior to setting persistent binding.

VSA

Volume Set Addressing (VSA) allows an increased number of LUNs to be addressed on a target port. CDP/NSS supports up to 4096 LUN assignments per VSA client when VSA is enabled. For upstream, you can set VSA for the client at the time of creation or you can modify the setting after creation by right-clicking on the client.

When VSA is enabled and the actual LUN is beyond 256, use the Report LUN option to discover them. Use the LUN range option only if Report LUN does not work for the adapter.

If new devices are assigned (from the storage server) to a VSA-enabled storage server before loading up the CDP/NSS storage server, the newly assigned devices will not be discovered during start up. A manual rescan will be required.

The VSA option must be disabled if you are using the FalconStor Management Console to set up a near-line mirror on a version 6.0 server. This also applies if you are setting up a near-line mirror from a version 6.0 server to a later server.

Some storage devices (such as the EMC Symmetrix storage controller and older HP storage) use VSA (Volume Set Addressing) mode. This addressing method is used primarily for addressing virtual buses, targets, and LUNs.

Zoning

Two types of zoning can be configured on each switch: hard zoning (based on port number) and soft zoning (based on WWPNs).

Soft zoning is zoning which is implemented in software and uses the WWPN in the configuration. By using filtering implemented in Fibre Channel switches, ports cannot be seen from outside of their assigned zones. The WWPN remains the same in the zoning configuration regardless of the port location. If a port fails, you can simply move the cable from the failed port to another valid port without having to reconfigure the zoning.


CDP/NSS requires isolated zoning where one initiator is zoned to one target in order to minimize I/O interruptions by non-related FC activities, such as port login/out, reset, etc. With isolated zoning, each zone can contain no more than two ports or two WWPNs. This applies to both initiator zones (storage) and target zones (clients).

For example, for the case of upstream (to client) zoning, if there are two client initiators and two CDP/NSS targets on the same FC fabric and if it is desirable for all four path combinations to be established, you should use four specific zones, one for each path (Client_Init1/IPStor_Tgt1, Client_Init1/IPStor_Tgt2, Client_Init2/IPStor_Tgt1, and Client_Init2/IPStor_Tgt2). You cannot create a single zone that includes all four ports. The four-zone method is cleaner because it does not allow the two client initiators nor the two CDP/NSS target ports to see each other. This eliminates all of the potential issues such as initiators trying to log in to each other under certain conditions.

The same should be done for downstream (to storage) zoning. If there are two CDP/NSS initiators and two storage targets on the same fabric, there should be four zones (IPStor_Init1/Storage_Tgt1, IPStor_Init1/Storage_Tgt2, IPStor_Init2/Storage_Tgt1, and IPStor_Init2/Storage_Tgt2).

Make sure that storage devices are not zoned directly to the clients. Instead, since CDP/NSS will be provisioning the storage to the clients, the target ports of the storage devices should be zoned to the CDP/NSS initiator ports while the clients are zoned to the CDP/NSS target ports. Make sure that from the storage unit’s management GUI (such as SANtricity and NaviSphere), the LUNs are re-assigned to the storage server as the “host”. CDP/NSS will either virtualize these LUNs (if they are newly created without existing data) or “service-enable” them (which preserves existing data). CDP/NSS can then define SAN resources out of these LUNs and further provision them to the clients as Service-Enabled Devices.
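As an illustration of the isolated zoning rule described above, the following sketch shows how the four upstream zones from the example might be defined on a Brocade Fabric OS switch. The zone names, configuration name, and WWPNs are placeholders, and other switch vendors use different commands; consult your switch documentation for the equivalent procedure.

zonecreate "Client_Init1_IPStor_Tgt1", "10:00:00:00:c9:11:11:11; 21:00:00:e0:8b:aa:aa:aa"
zonecreate "Client_Init1_IPStor_Tgt2", "10:00:00:00:c9:11:11:11; 21:00:00:e0:8b:bb:bb:bb"
zonecreate "Client_Init2_IPStor_Tgt1", "10:00:00:00:c9:22:22:22; 21:00:00:e0:8b:aa:aa:aa"
zonecreate "Client_Init2_IPStor_Tgt2", "10:00:00:00:c9:22:22:22; 21:00:00:e0:8b:bb:bb:bb"
cfgcreate "Fabric_Cfg", "Client_Init1_IPStor_Tgt1; Client_Init1_IPStor_Tgt2; Client_Init2_IPStor_Tgt1; Client_Init2_IPStor_Tgt2"
cfgenable "Fabric_Cfg"

Each zone contains exactly one client initiator WWPN and one CDP/NSS target WWPN, so the two client initiators and the two target ports never see each other.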

Switches

For the best performance, if you are using 4 or 8 Gig switches, all of your cards should be 4 or 8 Gig cards. For example, the QLogic 2432 or 2462 4GB cards. Check the certification matrix on the FalconStor website to see a complete list of certified cards.

NPIV (point-to-point) mode is enabled by default. Therefore, all Fibre Channel switches must support NPIV.


QLogic HBAs

Target mode settings

The table below lists the recommended settings for QLogic HBA target mode (recommendations that differ from the default are noted). These values are set in the fshba.conf file and will override those set through the BIOS settings of the HBA.

For initiators, please consult the best practice guideline as published by the storage subsystem vendor. If an initiator is to be used by multiple storage brands, the best practice is to select a setting that best satisfies both brands. If this is not possible, consult FalconStor technical support for advice, or separate the conflicting storage units to their own initiator connections.

Name Default Recommendation

frame_size 2 (2048byte) 2 (2048byte)

loop_reset_delay 0 0

adapter_hard_loop_id 0 0 but set to 1 if using arbitrated loop topology

connection_option 1 (point to point) 1 (point to point) but set to 0 if using arbitrated loop topology

hard_loop_id 0 0-124

Make sure that both primary target adapter and secondary standby adapter (the failover pair) are set to the SAME value.

fibre_channel_tape_support 0 (disable) 0 (disable)

data_rate 2 (auto) Based on switch capability -modify to either 0 (1 GB), 1 (2 GB), 2 (auto), or 3 (4GB)

execution_throttle 255 255

LUNs_per_target 256 256

enable_lip_reset 1 (enable) 1 (enable)

enable_lip_full_login 1 (enable) 1 (enable)

enable_target_reset 1 (enable) 1 (enable)

login_retry_count 8 8

port_down_retry_count 8 8

link_down_timeout 45 45

extended_error_logging_flag 0 (no logging) 0 (no logging)


interrupt_delay_timer 0 0

iocb_allocation 512 512

enable_64bit_addressing 0 (disable) 0 (disable)

fibrechannelconfirm 0 (disable) 0 (disable)

class2service 0 (disable) 0 (disable)

acko 0 (disable) 0 (disable)

responsetimer 0 (disable) 0 (disable)

fastpost 0 (disable) 0 (disable)

driverloadrisccode 1 (enable) 1 (enable)

ql2xmaxqdepth 255 255 (configurable via the console)

max_srbs 4096 4096

ql2xfailover 0 0

ql2xlogintimeout 20 seconds 20 seconds

ql2xretrycount 20 20

ql2xsuspendcount 10 10

ql2xdevflag 0 0

ql2xplogiabsentdevice 0 (no PLOGI) 0 (no PLOGI)

busbusytimeout 60 seconds 60 seconds

displayconfig 1 1

retry_gnnft 10 10

recoverytime 10 seconds 10 seconds

failbacktime 5 seconds 5 seconds

bind 0 (by port name) 0 (by port name)

qfull_retry_count 16 16

qfull_retry_delay 2 2

ql2xloopupwait 10 10



Configure Fibre Channel clients

Persistent binding

Persistent binding should be configured for all HBAs that support it. Check with the HBA vendor for specific persistent binding procedures.

Fabric topology When setting up clients on a Fibre Channel network using a Fabric topology, we recommend that you set the topology that each HBA will use to log into your switch to Point-to-Point Only.

If you are using a QLogic HBA, the topology is set through the QLogic BIOS: Configure Settings --> Extended Firmware settings --> Connection Option: Point-to-Point Only

DynaPath FalconStor DynaPath must be installed on the client to support CDP/NSS storage failover. Refer to the DynaPath User Guide for details.

Linux Native Linux DM-Multipath is recommended for Linux systems. If no version of FalconStor® DynaPath exists for your Linux kernel, you must use Linux DM-Multipath. Refer to the Linux DM-Multipath Configuration with CDP/NSS Best Practice Guide.
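As a minimal sketch only, a Linux DM-Multipath configuration for disks presented by a CDP/NSS server might resemble the following /etc/multipath.conf fragment. The vendor and product strings shown are placeholders; use the values your devices actually report, and follow the Best Practice Guide referenced above for the supported settings.

defaults {
    user_friendly_names yes
    polling_interval    5
}

devices {
    device {
        vendor                "FALCON"        # placeholder; match the SCSI inquiry vendor of your virtual disks
        product               "IPSTOR DISK"   # placeholder; match the SCSI inquiry product string
        path_grouping_policy  failover
        no_path_retry         60              # keep retrying so that I/O can survive a storage server failover
    }
}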

VMware VMware clients may require configuration modifications to be supported. Refer to Knowledge Base article number 663 for details.

HP-UX 11iV3 The native multipathing in HP-UX 11iv3 can survive CDP/NSS storage failover without DynaPath. However, the "Transient time period" value should be extended if the failover time takes more than 60 seconds.

To adjust the "Transient time period" value, follow the procedure below:

1. Run “ioscan -m dsf” to get the persistent DSF [disk number] for the assigned device.

# ioscan -m dsf

Persistent DSF Legacy DSF(s)

========================================

/dev/rdisk/disk5 /dev/rdsk/c9t0d0

/dev/rdsk/c7t0d0

/dev/rdsk/c17t0d0

/dev/rdsk/c11t0d0

2. Run “scsimgr get_info -D /dev/rdisk/disk# | grep "Transient time period"” to see the timeout value.

# scsimgr get_info -D /dev/rdisk/disk5 |grep "Transient time period"

Note: For QLogic HBAs, it is recommended that you hard code the link speed of the HBA to be in line with the switch speed.


Transient time period = 60

3. Run “scsimgr set_attr -D /dev/rdisk/disk# -a transient_secs=<timeout value>” to set the desired timeout value.

# scsimgr set_attr -D /dev/rdisk/disk5 -a transient_secs=120

Value of attribute transient_secs set successfully

4. Once the desired value has been tested, run “scsimgr save_attr -D /dev/rdisk/disk# -a transient_secs=<timeout value>” to save the value.

# scsimgr save_attr -D /dev/rdisk/disk5 -a transient_secs=120

Value of attribute transient_secs saved successfully

Enable Fibre Channel target mode

To enable Fibre Channel Target Mode:

1. In the console, highlight the storage server that has the FC HBAs.

2. Right-click on the Server and select Options --> Enable FC Target Mode.

An Everyone_FC client will be created under SAN Clients. This is a generic client that you can assign to all (or some) of your SAN resources. It allows any WWPN not already associated with a Fibre Channel client to have read/write non-exclusive access to any SAN resources assigned to Everyone.

Disable Fibre Channel target mode

To disable Fibre Channel Target Mode:

1. Unassign all resources from the Fibre Channel client.

2. Remove the Fibre Channel client.

3. Switch all targets to initiator mode.

4. Disable FC mode by right-clicking on the Server and selecting Options --> Disable FC Target Mode.

5. Run the ipstor stop all command to stop the storage server processes.

6. Power off the server.

Optional: Remove the FC cards

7. Run the ipstor configtgt command and select q for no Fibre Channel support.


Verify the Fibre Channel WWPN

The World Wide Port Name (WWPN) must be unique for the Fibre Channel initiator, target, and the client initiator. To verify:

Right-click on the server and select Verify FC WWPN.

If duplicate WWPNs are found, a message will display advising you to check your Fibre Channel configuration to avoid data corruption.

Set QLogic ports to target mode

By default, all QLogic point-to-point ports are set to initiator mode, which means they will initiate requests rather than receive them. Determine which ports you want to use in target mode and set them to become target ports so that they can receive requests from your Fibre Channel Clients.

It is recommended that you have at least four Fibre Channel ports per server in initiator mode, one of which is attached to your storage device.

You need to switch one of those initiators into target mode so your clients will be able to see the storage server. You will then need to select the equivalent adapter on the Secondary server and switch it to target mode.

To set a port:

1. In the FalconStor Management Console, expand Physical Resources.

2. Right-click on a HBA and select Options --> Enable Target Mode.

You will get a Loop Up message on your storage server if the port has successfully been placed in target mode.

3. When done, make a note of all of your WWPNs.

Note: If a port is in initiator mode and has devices attached to it, that port cannot be set for target mode.


It may be convenient for you to highlight your server and take a screenshot of the Console.


Set NPIV ports to target mode

With a N_Port ID Virtualization (NPIV) HBA, each port can be both a target and an initiator (dual mode). When using a NPIV HBA, there are two WWPNs, the base port and the alias.

Each NPIV port can be both a target and an initiator. To use target mode, you must enable target mode on a port.

In order to use target mode, the port needs to be in NPIV mode. This was set automatically for you when you loaded the driver (./ipstor configtgt, select ‘qlogicnpic’).

To set target mode:

1. In the Console, expand Physical Resources.

2. Right-click on a NPIV HBA and select Enable Target Mode.

3. Click OK to enable.

You will see two WWPNs listed for the port.

Notes:

• You should not use the NPIV driver if you intend to directly connect a target port to a client host.

• With dual mode, clients will need to be zoned to the alias port (called Target WWPN). If they are zoned to the base port, clients will not see any devices.

• You will only see the alias port when that port is in target mode.

• NPIV allows multiple N_Port IDs to share a single physical N_Port. This allows us to have an initiator, target, and standby occupying the same physical port. This type of configuration is not supported when not using NPIV.

• As a failover setup best practice, it is recommended that you do not put more than one standby WWPN on a single physical port.


Set up your failover configuration

If you will be using the FalconStor Failover option, and you have followed all of the steps in this Fibre Channel target mode section, you are now ready to launch the Failover Setup Wizard and begin configuration. Refer to ‘The Failover Option’ for more details.

HBAs and failover

Asymmetric failover modes are supported with QLogic HBAs.

Failover with multiple switches

When setting up Fibre Channel failover using multiple Fibre Channel switches, we recommend the following:

• If multiple switches are connected via an inter-switch link (ISL), the primary storage server’s Target Port and the secondary storage server’s Standby Port can be on different switches.

• If the switches are not connected via ISL, where they can be managed as one fabric, the primary storage server’s Target Port and the secondary storage server’s Standby Port must be on the same switch.

Failover limitations

When using failover in Fibre Channel environments, it is recommended that you use the same type of Fibre Channel HBAs for all CDP/NSS client hosts.

When configuring HA (failover), avoid zoning client initiators to the base WWPN of the FC port(s) on the secondary server that are dedicated to be the standby port(s) in an HA pair.


Add Fibre Channel clients

Client software is only required for Fibre Channel clients running a FalconStor Snapshot Agent or for clients using multiple protocols.

If you do not install the Client software, you must manually add the Client in the Console. To do this:

1. In the Console, right-click on SAN Clients and select Add.

2. Select Fibre Channel as the Client protocol.

3. Select WWPN initiators. See ‘Associate World Wide Port Names (WWPN) with clients’.

4. Select Volume Set Addressing.

Volume Set Addressing is used primarily for addressing virtual buses, targets, and LUNs. If your storage device uses VSA, you must enable it. Note that Volume Set Addressing is selected by default for HP-UX clients.

5. Enter a name for the SAN Client, select the operating system and indicate whether or not the client machine is part of a cluster.

If the client’s machine name is not resolvable, you can enter an IP address and then click Find to discover the machine.

6. Indicate if you want to enable persistent reservation.

This option allows clustered SAN Clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.

7. Confirm all information and click Finish to add this client.

Note: If you are using AIX SAN Client cluster nodes, this option should be cleared.


Associate World Wide Port Names (WWPN) with clients

Similar to an IP address, the WWPN uniquely identifies a port in a Fibre Channel environment. Unlike an IP address, the WWPN is vendor assigned and is hardcoded and embedded.

Depending upon whether or not you are using a switched Fibre Channel environment, determining the WWPN for each port may be difficult.

• If you are using a switched Fibre Channel environment, CDP/NSS will query the switch for its Simple Name Server (SNS) database and will display a list of all available WWPNs. You will still have to identify which WWPN is associated with each machine.

• If you are not using a switched Fibre Channel environment, you can manually determine the WWPN for each of your ports. There are different ways to determine it, depending upon the hardware vendor. You may be able to get the WWPN from the BIOS during bootup or you may have to read it from the physical card. Check with your hardware vendor for their preferred method.

To simplify this process, when you enabled Fibre Channel, an Everyone client was created under SAN Clients. This is a generic client that you can assign to all (or some) of your SAN resources. It allows any WWPN not already associated with a Fibre Channel client to have read/write non-exclusive access to any SAN resources assigned to Everyone.

For security purposes, you may want to assign specific WWPNs to specific clients. For the rest, you can use the Everyone client.

Do the following for each client for which you want to assign specific virtual devices:

1. Highlight the Fibre Channel Client in the FalconStor Management Console.

2. Right-click on the Client and select Properties.


3. Select the Initiator WWPN(s) belonging to your client.

Here are some methods to determine the WWPN of your clients:

- Most Fibre Channel switches allow administration of the switch through an Ethernet port. These administration applications have utilities to reveal or allow you to change the following: Configuration of each port on the switch, zoning configurations, the WWPNs of connected Fibre Channel cards, and the current status of each connection. You can use this utility to view the WWPN of each Client connected to the switch.

- When starting up your Client, there is usually a point at which you can access the BIOS of your Fibre Channel card. The WWPN can be found there.

- The first time a new Client connects to the storage server, the following message appears on the server screen:

FSQLtgt: New Client WWPN Found: 21 00 00 e0 8b 43 23 52

4. If necessary, click Add to add WWPNs for the client.

You will see the following dialog if there are no WWPNs in the server’s list. This could occur because the client machines were not turned on or because all WWPNs were previously associated with clients.
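In addition to the methods above, most operating systems can report their own HBA WWPNs directly. As a sketch for a Linux client (the exact sysfs path can vary with the HBA driver), the port names are exposed under /sys/class/fc_host:

cat /sys/class/fc_host/host*/port_name

On Windows clients, comparable information is available from Microsoft’s fcinfo utility or, on newer systems, the Get-InitiatorPort PowerShell cmdlet.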

Assign virtualized resources to Fibre Channel Clients

For security purposes, you can assign specific SAN resources to specific clients. For the rest, you can use the Everyone client. This is a generic client that you can assign to all (or some) of your SAN resources. It allows any WWPN not already associated with a Fibre Channel client to have read/write non-exclusive access to any SAN resources assigned to Everyone.

To assign resources, right-click on a specific client or on the Everyone client and select Assign.

If a client has multiple ports and you are using Multipath software (such as DynaPath), after you select the virtual device, you will be asked to enter the WWPN mapping. This WWPN mapping is similar to Fibre Channel zoning and allows you to provide multiple paths to the storage server to limit a potential point of network failure.

You can select how the client will see the virtual device in the following ways:

• One to One - Limits visibility to a single pair of WWPNs. You will need to select the client’s Fibre Channel initiator WWPN and the server’s Fibre Channel target WWPN.


• One to All - You will need to select the client’s Fibre Channel initiator WWPN.

• All to One - You will need to select the server’s Fibre Channel target WWPN.

• All to All - Creates multiple data paths. If ports are ever added to the client or server, they will automatically be included in the WWPN mapping.

View new devices

In order to see the new devices, after you have finished configuring your Fibre Channel Clients, you will need to trigger a device rescan or reboot the Client machine, depending upon the requirements of the operating system.

Install and configure DynaPath

During failover, the storage server is temporarily unavailable. Since the failover process can take a minute or so, the Clients need to keep attempting to connect so that when the Server becomes available they can continue normal operations. One way of ensuring that the Clients will retry the connection is to use FalconStor's DynaPath Agent. DynaPath is a load-balancing/path-redundancy application that manages multiple pathways from your Client to the switch that is connected to your storage servers. Should one path fail, DynaPath will tap the other path for all I/O operations.

If you are not using the DynaPath agent, you may be able to use other third-party multi-pathing software or you may be able to configure your HBA driver to perform the retries. We recommend that the Clients retry the connection for a minimum of two minutes.

If you are using DynaPath, it should be installed on each Fibre Channel Client that will be part of your failover configuration. Refer to your DynaPath User Guide for more details.


Spoof an HBA WWPN

Your FalconStor software contains a unique feature that can spoof initiator port WWPNs. This feature can be used to pre-configure HBAs, making the process of rebuilding a server simpler and less time consuming. This feature can also be useful when migrating from an existing server to a new one.

This feature can also create a potential problem if not used carefully. If the old HBA is somehow connected back to the same FC fabric, the result will be two HBAs with the same WWPN, which can cause a fabric outage. It is strongly recommended that you take the following measures to minimize the chance of a WWPN conflict:

1. Physically destroy the old HBA if it was replaced for defect.

2. Use your HBA vendor's tool to reprogram and swap the WWPN of the two HBAs.

3. Avoid spoofing. This can be done if you plan extra time for the zoning change.

To configure HBAs for spoofing:

1. In the FalconStor Management Console, right-click on a specific adapter and select Spoof WWPN.

2. Enter the desired WWPN for the HBA and click OK.

3. Repeat steps 1-2 for each HBA that needs to be spoofed and exit the Console.

4. Reload the HBA driver by typing:

ipstor restart all

5. Log back into your storage server from the console.

You will notice the WWPN of the initiator port now has the spoofed WWPN.

6. If desired, switch the spoofed HBA to target mode.

Notes:

• Each HBA port must be spoofed to a unique WWPN.

• Spoofing and un-spoofing are disabled after failover is configured. You must spoof HBAs and enable target mode before setting up Fibre Channel failover.

• Spoofing can only be performed when QLogic HBAs are in initiator mode.

• After a QLogic HBA has been spoofed and the HBA driver is restarted, the HBA can then be changed to target mode and have resources assigned through it.

• Since most switch software applications use an “Alias” to represent a WWPN, you only need to change the WWPN of the Alias and all the zones are preserved.


SAN Clients

Storage Area Network (SAN) Clients are the file and application servers that access SAN resources. Since SAN resources appear as locally attached SCSI devices, the applications, such as file services, databases, web and email servers, do not need to be modified to utilize the storage.

On the other hand, since the storage is not locally attached, there is some configuration needed to locate and mount the required storage.

Add a client from the FalconStor Management Console

1. In the console, right-click on SAN Clients and select Add.

2. Enter a name for the SAN Client, select the operating system, and indicate whether or not the client machine is part of a cluster.

If the client’s machine name is not resolvable, you can enter an IP address and then click Find to discover the machine.

3. Determine if you want to limit the amount of space that can be automatically assigned to this client.

This quota represents the total allowable space that can be allocated for all resources associated with this client. It is used to restrict certain types of resources (such as Snapshot Resource and CDP Resource) that expand automatically. This prevents them from allocating storage space indefinitely. Instead, expansion can only happen if the total size of all the resources associated with the client does not exceed the pre-defined quota for that client.

4. Indicate if you want to enable persistent reservation.

This option allows clustered SAN Clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.

5. Select the client’s protocol(s).

If you select iSCSI, you must indicate if this is a mobile client. You will then be asked to select the initiator that this client uses and add/select users who can authenticate for this client. Refer to ‘Add iSCSI clients’ for more information.

If you select Fibre Channel, you will have to select WWPN initiators. You will then be asked to select Volume Set Addressing. Refer to ‘Add Fibre Channel clients’ for more information.

6. Confirm all information and click Finish to add this client.

Note: If you are using AIX SAN Client cluster nodes, this option should be cleared.


Add a client for FalconStor host applications

If you are using FalconStor client/agent software, such as snapshot agents, or HyperTrac, refer to the FalconStor Intelligent Management Agent (IMA) User Guide or the appropriate agent user guide for details regarding adding clients via FalconStor Intelligent Management Agent (IMA).

FalconStor client/agent software allows you to add a storage server directly in IMA/SDM or the SAN Client.

For example, if you are using HyperTrac, the first time you start HyperTrac, the system scans and imports all storage servers identified by IMA/SDM or the SAN Client. These storage servers are then listed in the HyperTrac console. Alternatively, you can add a storage server directly in IMA/SDM or the SAN Client.

Refer to UNIX SAN Client error codes in the Troubleshooting / FAQs section for information regarding UNIX SAN Client error codes you may encounter.


Security

CDP/NSS utilizes strict authorization policies to ensure proper access to storage resources on the FalconStor storage network. Since applications and storage resources are now separated, and it is possible to transmit storage traffic over a non-dedicated network, extra measures have been taken to ensure that data is only accessible to those authorized to use it.

To accomplish this, CDP/NSS safeguards the areas of potential vulnerability:

• System management – allowing only authorized administrators to modify the configuration of the CDP/NSS storage system.

• Data access – authenticating and authorizing the Clients who access the storage resources.

System management

CDP/NSS protects your system by ensuring that only the proper administrators have access to the system’s configuration. This means that the administrator’s user name and password are always verified against those defined on the storage server before access to the configuration is granted.

While the server verifies the administrator’s login, the root user is the only one who can add or delete IPStor administrators. The root user can also change other administrators’ passwords and has privileges to the operating system. Therefore, the server’s root user is the key to protecting your server and the root user password should be closely guarded. It should never be revealed to other administrators.

As best practice, IPStor administrator accounts should be limited to trusted administrators that can safely modify the server configuration. Improper modifications of the server configuration can result in lost data if SAN resources are deleted or modified.

Data access

Just as CDP/NSS protects your system configuration by verifying each administrator as they login, CDP/NSS protects storage resources by ensuring that only the proper computer systems have access to the system’s resources.

For access by application servers, two things must happen, authentication and authorization.

Authentication is the process of establishing the credentials of a Client and creating a trusted relationship (shared-secret) between the client and server. This prevents other computers from masquerading as the Client and accessing the storage.


Authentication occurs once per Client-to-Server relationship and occurs the first time a server is successfully added to a client. Subsequent access to a server from a client uses the authenticated shared secret to verify the client. Credentials do not need to be re-established unless the software is re-installed. The authentication process uses the authenticated Diffie-Hellman protocol. The password is never transmitted through the network, not even in encrypted form, to eliminate security vulnerabilities.

Authorization is the process of granting storage resources to a Client. This is done through the console by an IPStor administrator or the server’s root user. The client will only be able to access those storage resources that have been assigned to it.

Account management

Only the root user can manage users and groups or reset passwords. You will need to add an account for each person who will have administrative rights in CDP/NSS. You will also need to add a user account for clients that will be accessing storage resources from a host-based application (such as FalconStor DiskSafe or FileSafe).

To make account management easier, users can be grouped together and handled simultaneously. To manage users and groups, right-click on the server and select Accounts. All existing users and administrators are listed on the Users tab and all existing groups are listed on the Groups tab.

The rights of each are summarized in the table below:

Type of Administrator | Create/Delete Pools | Add/Remove Storage from Pools | Assigns Rights to IPStor Users | Create/Modify/Delete Logical Resources | Assign Storage to Clients
Root                  | x | x | x | x | x
IPStor Administrator  | x | x | x | x | x
IPStor User           |   |   |   | x (IPStor Users can only modify/delete logical resources that they created) | x

For additional information regarding user access rights, refer to the ‘Manage accounts’ section and ‘Manage storage pools and the devices within storage pools’.

Security recommendations

In order to maintain a high level of security, a CDP/NSS installation should be configured and used in the following manner:


Storage network topology

For optimal performance, CDP/NSS does not encrypt the actual storage data that is transmitted between the server and clients. Encrypting and decrypting each block of data transferred involves heavy CPU overhead for both the server and clients. Since CDP/NSS transmits data over potentially shared network channels instead of a computer’s local bus, the storage data traffic can be exposed to monitoring by other devices on the same network. Therefore, a separate segment should be used for the storage network if a completely secure storage system is required. Only the CDP/NSS clients and storage servers should be on this storage network segment.

If the configuration of your storage network does not maintain a totally separate segment for the storage traffic, it is still possible to maintain some level of security by using encryption or secure file systems on the host computers running the CDP/NSS Client. In this case, data written to storage devices is encrypted, and cannot be read unless you have the proper decryption tool. This is entirely transparent to the CDP/NSS storage system; these tools can only be used at the CDP/NSS client as the storage server treats the data as block storage data.
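
If you do use host-side encryption, the exact tools depend on the client operating system. The following is a minimal sketch only, assuming a Linux SAN Client and a SAN-provided block device that appears as /dev/sdb (the device name and mapping name are examples, not part of CDP/NSS):

# Encrypt the SAN disk on the client host with LUKS before placing a
# file system on it; the storage server still sees only opaque blocks.
cryptsetup luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb san_secure
mkfs.ext3 /dev/mapper/san_secure
mkdir -p /mnt/secure
mount /dev/mapper/san_secure /mnt/secure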

Physical security of machines

Due to the nature of computer security in general, if someone has physical access to a server or client, the security of that machine is compromised. By compromised, we mean that a person could copy a password, decipher CDP/NSS or system credentials, or copy data from that computer. Therefore, we recommend that your servers and clients be maintained in a secure computer room with limited access.

This is not necessary for the console, because the console does not leave any shared secret behind. The console can therefore be run from any machine, but that machine should be a "safe", non-compromised machine, specifically one that you are sure does not have a hidden Trojan horse-like program that may be monitoring or recording keystrokes. Such a program can collect your password as you type, thereby compromising your system's security. This is a general computer security concern that is not unique to CDP/NSS. Be aware that there is no easy way to detect the presence of such malicious programs, even with anti-virus software, because custom-written programs of this kind will not have a signature that anti-virus software can identify. Therefore, you should never type your password, or any password, in an environment you cannot trust completely.

Disable ports

Disable all unnecessary ports. The only ports required by CDP/NSS are shown in the “Port Usage” section.
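
How you close ports depends on your environment and is not FalconStor-specific. As one hedged illustration on a Linux storage server (port 23/telnet is only an example of an unneeded service; the ports you must leave open are the ones listed in “Port Usage”):

# Block an unneeded inbound TCP port with the Linux firewall
iptables -A INPUT -p tcp --dport 23 -j DROP
# Save the rule so it persists across reboots (Red Hat syntax)
service iptables save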


Failover

Overview

To support mission-critical computing, CDP/NSS-enabled technology provides high availability for the entire storage network, protecting you from a wide variety of problems, including:

• Connectivity failure
• Storage device path failure
• Storage device failure
• Storage server or device failure

The following illustrates a basic CDP/NSS configuration with potential points of failure and a high availability configuration, where FalconStor’s high availability options work with redundant hardware to eliminate the points of failure.


The Failover Option

The FalconStor failover option provides high availability for CDP and NSS operations by eliminating the down time that can occur should a storage server (software or hardware) or a storage device fail. There are two modes of failover:

• Shared storage failover - Uses a two-node failover pair to provide node level redundancy. This model requires a shared storage infrastructure and is typically Fibre Channel based.

• Non-shared storage failover (Cross-mirror failover) - Provides high availability without the need for shared storage. Used with appliances containing internal storage. Mirroring is facilitated over a dedicated, direct IP connection. (Available in a Virtual Appliance environment.)

Best Practice: As a failover setup best practice, it is recommended that you do not put more than one standby WWPN on a single physical port. Both NSS/CDP nodes in a cluster configuration require the same number of physical Fibre Channel target ports to achieve best practice failover configurations.

Primary/Secondary Storage Servers

FalconStor’s Primary and Secondary servers are separate, independent storage servers that each have their own assigned clients. The primary storage server is the server that is monitored by the secondary storage server. In the event the primary fails, the secondary takes over. This is referred to as Active-Passive Failover.

The terms Primary and Secondary are purely from the client’s perspective, since these servers may be configured to monitor each other. This is referred to as Mutual Failover. In that case, each server is primary to its own clients and secondary to the other’s clients. Each server normally services its own clients; in the event one server fails, the other takes over and serves the failed server’s clients.

Failover/Takeover

Failover/takeover is the process that occurs when the secondary server takes over the identity of the primary. In the case of cross-mirroring on a virtual appliance, failover occurs when all disks are swapped to the secondary server. Failover will occur under the following conditions:

• One or more of the storage server processes goes down.
• There is a network connectivity problem, such as a defective NIC or a loose network cable, where the NIC is associated with a client.
• (Shared storage failover) There is a storage path failure.
• The heartbeat cannot be retrieved.
• There is a power failure.
• One or more Fibre Channel targets are down.

Recovery/Failback

Recovery/Failback is the process that occurs when the secondary server releases the identity of the primary to allow the primary to restore its operation. Once control has returned to the primary server, the secondary server returns to its normal monitoring mode.

After recovering a virtual appliance cross-mirror failure, the secondary server swaps disks back to the primary server after the disks are re-synchronized.


Storage Cluster Interlink

A physical connection between two servers to mirror snapshot and SafeCache metadata between high-availability (HA) pairs. This enables rapid failover and reduces the time required to load snapshot and SafeCache data from the disk. Two Ethernet ports (sci0 and sci1) are reserved for this purpose.

Sync Standby Devices

This menu option is available from the console (Failover --> Sync Standby Devices) and is useful when the Storage Cluster Interlink connection in a failover pair is broken. Select this option to manually synchronize the standby device information on both servers once the Storage Cluster Interlink is reconnected.

Asymmetric mode

(Fibre Channel only) Asymmetric failover requires standby ports on the secondary server in case a target port on your primary server fails.

Swap

(For virtual appliances) Swap is the process that occurs with cross-mirroring when data functions are moved from a failed virtual disk on the primary server to the mirrored virtual disk on the secondary server. The disks are swapped back once the problem is resolved.


Shared storage failover sample configuration

This diagram illustrates a shared storage failover configuration. In this example, both servers are monitoring each other. Because both servers are actively serving their own clients, this configuration is referred to as an active-active or mutual failover configuration. When server A fails, server B takes over and serves the clients of server A in addition to its own clients.


Failover requirements

The following are the requirements for setting up a failover configuration:

General failover requirements

• You must have two storage servers. The failover pair should be installed with identical Linux operating system versions.

• Version 7.0 and later requires a Storage Cluster Interlink Port for failover setup. This is a physical connection (also used as a hidden heartbeat IP) between two servers. If you wish to disable the Storage Cluster Interlink heartbeat functionality, contact Technical Support.
Note: When USEQUORUMHEALTH is disabled and there are no client-associated network interfaces, all network interfaces - including the Storage Cluster Interlink Port - must go down before failover can occur. When the Storage Cluster Interlink heartbeat functionality is disabled, it is no longer treated as a heartbeat IP connection for failover.

• Both servers must reside on the same network segment, because in the event of a failover, the secondary server must be reachable by the clients of the primary server. This network segment must have at least one other device that generates a network ping (such as a router, switch, or server). This allows the secondary server to detect the network in the event of a failure.

• You need to reserve an IP address for each network adapter in your primary failover server. The IP address must be on the same subnet as the secondary server and is used by the secondary server to monitor the primary server's health. In a mutual failover configuration, these IP addresses are used by the servers to monitor each other's health. The health monitoring IP address remains with the server in the event of failure so that the server’s health can be continually monitored. Note: The storage server clients and the console cannot use the health monitoring IP address to connect to a server.

• You must use static IP addresses for your failover configuration. It is also recommended that the IP addresses of your servers be defined in a DNS server so they can be resolved.

• If you will be using Fibre Channel target mode or iSCSI target mode, you must enable it on both the primary and secondary servers before creating your failover configuration.

• The first time you set up a failover configuration, the secondary server must not have any replica resources.

• You must have at least one device reserved for a virtual device on each primary server with enough space to hold the configuration repository that will be created. The main repository should be established on a RAID5 or RAID1 file system for ultimate reliability.

• It is strongly recommended that you use some type of power control option for failover servers.

• If you are using an external hardware power controller for your failover pair, you should set it up before creating your failover configuration. Refer to ‘Power Control options’ for more information.


General failover requirements for iSCSI clients (updated February 2013)

(Windows iSCSI clients) The Microsoft iSCSI initiator has a default retry period of 60 seconds. You must change it to 300 seconds in order to sustain the disk for five minutes during failover so that applications will not be disrupted by temporary network problems. If you are using Windows Server 2003, this setting is changed through the registry as described below. If you are using Windows Server 2008, use DynaPath to sustain the disk for up to five minutes during failover.

1. Go to Start --> Run and type regedit.

2. Find the following registry key:

HKEY_LOCAL_MACHINE\system\CurrentControlSet\control\class\4D36E97B-xxxxxxxxx\<iscsi adapter interface>\parameters\

where iscsi adapter interface corresponds to the adapter instance, such as 0000, 0001, .....

3. Right-click Parameters and select Export to create a backup of the parameter values.

4. Double-click MaxRequestHoldTime.

5. Pick Decimal and change the Value data to 300.

6. Click OK.

7. Reboot Windows for the change to take effect.

Shared storage failover requirements

• Both servers must have at least one Network Interface Card (NIC) each (on the same subnet). Unlike other clustering software, the “heartbeat” co-exists on the same NIC as the storage network. The heartbeat does not require and should NOT be on a dedicated heartbeat interface and subnet.

• The failover pair must have connections to the same common storage; if storage cannot be seen by both servers, it cannot be accessed from both servers. However, the storage does not have to be represented the same way to both servers. Each server needs at least one path to each commonly-shared physical storage device, but there is no maximum and they do not need to be equal (i.e., server A has two paths while server B has four paths). Make sure to properly configure LUN masking on storage arrays so both storage server nodes can access the same LUNs.

• Storage devices must be attached in a multi-host SCSI configuration or attached on a Fibre loop or switched fabric. In this configuration, both servers can access the same devices at the same time (both read and write).

• (SCSI only) Termination should be enabled on each adapter, but not on the device, in a shared bus arrangement.

• If you will be using the FalconStor NIC Port Bonding option, you must set it up before creating a failover configuration. You cannot change or remove NIC Port Bonding once failover is set up. If you need to change NIC Port Bonding, you will have to remove failover first.


Cross-mirror failover requirements

• Available only for virtual appliances.
• Each server must have identical internal storage.
• Each server must have at least two network ports (one for the required network cable). The network ports must be on the same subnet.
• Only one dedicated cross-mirror IP address is allowed for the mirror. The IP address must be 192.168.n.n.
• Only virtual devices can be mirrored. Service Enabled Devices and system disks cannot be mirrored.
• The number of physical disks on each machine must match and the disks must have matching ACSLs (adapter, channel, SCSI ID, LUN).
• When failover occurs, both servers may have partial storage. To prevent a possible dual mount situation, we strongly recommend that you use a hardware power controller, such as IPMI. Refer to ‘Power Control options’ for more information.
• Prior to configuration, virtual resources can exist on the primary server as long as the identical ACSL is unassigned or unowned by the secondary server. After configuration, pre-existing virtual resources will not have a mirror. You will need to use the Verify & Repair option to create the mirror.

FC-based Asymmetric failover requirements

• During failover, the storage server is temporarily unavailable. Since the failover process can take a minute or so, clients need to keep attempting to connect so that when the server becomes available they can continue normal operations. One way of ensuring that clients will retry the connection is to use a FalconStor multi-pathing agent, such as DynaPath. If you are not using DynaPath because there is no corresponding Linux kernel, you may be able to use Linux DM-Multipath or you may be able to configure your HBA driver to perform the retries. It is recommended that clients retry the connection for a minimum of two minutes.

• Fibre Channel target ports are not required on either server for Asymmetric mode. However, if Fibre Channel is enabled on both servers, the primary server MUST have at least one target port and the secondary server MUST have a standby port. If Fibre Channel is disabled on both servers, neither server needs to have target/standby ports.

• If target ports are configured on a server, you must have at least the same number of initiators (or aliases depending on the adapter) on the other server.

• Asymmetric failover supports the use of QLogic HBAs.


Pre-flight checklist for failover

Prior to configuring failover, follow the steps below for both the primary and secondary NSS device:

1. Make sure all expected physical LUNs and their paths are detected properly under the Physical Resources node in the FalconStor Management Console. If any physical LUNs or paths are missing, rescan the appropriate adapter to discover all expected devices and paths.

2. Make sure the configuration repository exists. If it does not exist, you can create it using the Enable Configuration Repository option, or the Failover Setup Wizard will prompt you to create it during configuration. Refer to ‘Protect your storage server’s configuration’ for details.

3. Ensure that Service Enabled Devices have been configured for all physical LUNs that are reserved for SED.

4. If any physical LUNs are reserved for SED, but SED devices are not yet configured, change the property of these physical LUNs from "Reserved for Service Enabled Device" to "Unassigned". Without this step, you will have a device configuration mismatch in the Failover configuration wizard and will not be able to proceed.

5. Rescan all existing devices.

6. Make sure unique Storage Cluster Interlink (SCI) IP addresses have been set for sci0 and sci1 on each server. You can verify/modify the IP addresses from the console by right-clicking the server and selecting System Maintenance -> Configure Network; you can also check them from the command line, as shown in the sketch after this checklist. Refer to ‘Network configuration’ for details.

7. Make sure there is a physical connection between the SCI ports on the HA servers. One network cable should connect the two sci0 ports and another network cable should connect the two sci1 ports.
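
To double-check steps 6 and 7 from the operating system, the following sketch assumes the interlink ports appear to Linux as network interfaces named sci0 and sci1, and that 10.10.10.2 is the partner server's SCI address (both are examples; substitute your own values):

# Show the IP address configured on each Storage Cluster Interlink port;
# the addresses must be unique on each server of the failover pair.
ip addr show sci0
ip addr show sci1

# Confirm the cable between the paired SCI ports is connected and working.
ping -c 3 10.10.10.2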

Connectivity failure

A connectivity failure can occur due to a NIC, Fibre Channel HBA, cable, or switch/router failure. You can eliminate potential points of failure by providing multiple paths to the storage server with multiple NICs, HBAs, cables, switches/routers.

The client always tries to connect to the server with its original IP address (the one that was originally set in the client when the server was added to the client). You can re-direct traffic to an alternate adapter by specifying alternate IP addresses for the storage server. This can be done in the console (right-click on the server and select Properties --> Server IP Addresses tab).


When you set up multiple IP addresses, the clients will attempt to communicate with the server using an alternate IP address if the original IP address stops responding.

Default failover behavior

Default failover behavior is described below:

Fibre Channel Target failure

• Fibre Channel Target failure: If a Fibre Channel target port link goes down, the partner server will immediately take over. This is true regardless of the number of target ports on the NSS server. For example, the server can use multiple targets to provide virtual devices to the client; if a target loses connectivity, the client will still have alternate paths to access those devices. However, the default behavior is to fail over. The default behavior can be modified by Technical Support.

Network connection failure

• Network connection failure and iSCSI clients: By default, CDP/NSS server failover will occur when a network connection goes down and that connection is also associated with the iSCSI target of a client. If multiple subnets are used to connect to the CDP or NSS server, the default behavior can be modified by Technical Support so that failover will not occur until all network connections are down.

Notes:

• In order for failover to occur when there is a failure, the device driver must promptly report the failure. Make sure you have the latest driver available from the manufacturer.

• In order for the clients to successfully use an alternate IP address, your subnet must be set properly so that the subnet itself can redirect traffic to the proper alternate adapter.

• The client becomes aware of the multiple IP addresses when it initially connects to the server. Therefore, if you add additional IP addresses in the console while the client is running, you must rescan devices (Windows clients) or restart the client (Unix clients) to make the client aware of these IP addresses. In addition, if you recover from a network path failure, you will need to restart the client so that it can use the original IP address.


Storage device path failure

(Shared storage failover) A storage device path failure can occur due to a cable or switch/router failure.

You can eliminate this potential point of failure by providing a multiple path configuration, using multiple Fibre Channel switches, multiple adapters, and/or storage devices with multiple controllers. In a multiple path configuration, all paths to the storage devices are automatically detected. If one path fails, there is an automatic switch to another path.

Storage device failure

The FalconStor Mirroring and Cross-mirror failover options provide high availability by minimizing the down time that can occur if a physical disk fails.

With mirroring, each time data is written to a designated disk, the same data is also written to another disk. This disk maintains an exact copy of the primary disk. In the event that the primary disk is unable to read/write data when requested to by a SAN Client, the data functions are seamlessly swapped to the mirrored copy disk.

Note: Fibre Channel switches can demonstrate different behavior in a multiple path configuration. Before using this configuration with CDP or NSS, you must verify that the configuration can work on your server without the CDP or NSS software. To verify:

1. Use the hardware vendor’s utility or Linux’s cat /proc/scsi/scsi command to see the devices after the driver is loaded (see the example commands after this list).

2. Use the hardware vendor’s utility or Linux’s hdparm command to access the devices.

3. Unplug the cable from one device and use the utilities listed above to verify that everything is working.

4. Repeat the test by reversing which device is unplugged and verify that everything is still working.
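
As a rough illustration of steps 1 and 2 above (the device name /dev/sdb is only an example; use whichever device your vendor utility or /proc/scsi/scsi reports):

# Step 1: list the SCSI devices the kernel sees after the HBA driver loads
cat /proc/scsi/scsi

# Step 2: exercise one of the devices to confirm the path is usable
# (simple read-timing test)
hdparm -t /dev/sdb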


Storage server or device failure

The FalconStor failover option provides high availability by eliminating the down time that can occur should a CDP or NSS appliance (software or hardware) fail.

In the failover design, a storage server is configured to monitor another storage server. In the event that the server being monitored fails to fulfill its responsibilities to the clients it is serving, the monitoring server will seamlessly take over its identity so the clients will transparently fail over to the monitoring server.

A unique monitoring system is used to ensure the health of the storage servers. This system includes a self-monitor and an intelligent heartbeat monitor.

The self-monitor is part of all CDP/NSS appliances, not just the servers configured for failover, and provides continuous health status of the server. It is part of the process that provides operational status to any interested and authorized parties, including the console and supported network management applications through SNMP. The self-monitor checks all storage server processes and connectivity to the server’s storage devices.

In a failover configuration, FalconStor’s intelligent heartbeat monitor continuously monitors the primary server through the same network path that the server uses to serve its clients.

When the heartbeat is retrieved, the results are evaluated. There are several possibilities:

• All is well and no failover is necessary.
• The self-monitor detects a critical error in the IPStor Server processes that is determined to be fatal, yet the error did not affect the network interface. In this case, the secondary will inform the primary to release its CDP/NSS identity and will take over serving the failed server’s clients.
• The self-monitor detects a storage device connectivity failure but cannot determine if the failure is local or applies to the secondary also. In that case the device error condition will be reported through the heartbeat. The secondary will check to see if it can successfully access the storage. If it can, it attempts to access all devices. If it can successfully access all devices, the secondary initiates a failover. If it cannot successfully access all devices, no failover occurs. If you are using the FalconStor Cross-mirror feature, a swap will occur.

Because the heartbeat uses the same network path that the server uses to serve its clients, if the heartbeat cannot be retrieved and there are iSCSI clients associated with those networks, the secondary server knows that the clients cannot access the server. This is considered a Catastrophic failure because the server or the network connectivity is incapacitated. In this case the secondary will immediately initiate a failover.


Failover restrictions

The following information is important to be aware of when configuring failover:

• JBODs are not recommended for failover. If you use a JBOD as the storage device for a storage server (configured in Fabric Loop), certain downstream failover scenarios, such as SCSI Aliasing, might not function properly. If a Fibre connection on the storage server is broken, the JBOD might hang and not respond to SCSI commands. SCSI Aliasing will attempt to connect using the other Fibre connection; however, since the JBOD is in an unknown state, the storage server cannot reconnect to the JBOD, causing CDP/NSS clients to disconnect from their resources.

• In a pure Fibre Channel environment, a network failure will not trigger failover.

Failover setup

You will need to know the IP address(es) of the primary server (and the secondary server if you are configuring a mutual failover scheme). You will also need the health monitoring IP address(es). It is a good idea to gather this information and find available IP addresses before you begin the setup.

1. In the console, right-click on an expanded server and select Failover --> Failover Setup Wizard.

You will see a screen similar to the following that shows the status of options on your server.

Any options enabled/installed on the primary storage server must also be enabled/installed on the secondary storage server.

2. If you have recently made device changes, rescan the server’s physical adapters.


Before a failover configuration can be created, the storage system needs to know the ownership of each physical device for the selected server. Therefore, it is recommended that you allow the wizard to rescan the server’s devices.

If you have recently used the Rescan option to rescan the selected server's physical adapters, you can skip the server scanning process.

3. Select whether or not you want to use the Cross-mirror feature (available for virtual appliances only).

4. Select the secondary server and determine if the servers will monitor each other.

(Shared storage failover: select the option for both servers to monitor each other.)


5. (Cross-mirror only) Select the disks that will be used for the primary server.

System disks will not be listed.

The disks you select will be used as storage by the primary server. The ones that are not selected will be used as storage by the secondary server.

(Cross mirror failover (non-shared storage): click Find or manually enter the IP address for the secondary server. Both IP addresses must start with 192.168.)


6. (Cross-mirror only) Confirm the disks that will be used for the secondary server.

7. (Cross-mirror only) Confirm the physical device allocation.

8. Follow the wizard to create a configuration repository on this server.

The configuration repository maintains a continuously updated version of your storage system configuration. For additional security, after your failover configuration is complete, you can enable mirroring on the configuration repository. It is also recommended that you create a configuration repository even if you have a standalone server. Be sure to use a different physical drive for the mirror.

9. Determine if there are any conflicts with the server you have selected.

Note: If you need to recreate the configuration repository for any reason, such as switching to another physical drive, you can use the Reconfigure option. Refer to ‘Recreate the configuration repository (updated January 2013)’ for details.


If physical disks, pre-existing virtual disks, or service enabled disks cannot be seen by both primary and secondary storage servers, you will be alerted.

If there are conflicts, a window similar to the following will display:

You will see mismatched devices listed here. For example, if you have a RAID array and one server sees all eight devices and the other server sees only four devices, you will see the devices listed here as mismatched.

You must resolve the mismatch before continuing. For example, if the QLogic driver did not load on one server, you will have to load it before going on.

Note that you can exclude physical devices from failover consideration, if desired.

10. Determine if you need to rescan this server’s physical adapters.

If you fixed any mismatched devices in the last step, you will need to rescan before the wizard can continue.

If you are re-running the Failover wizard because you made a change to a physical device on one of the servers, you should rescan before continuing.

If you had no conflicts and have recently used the Rescan option to rescan the selected server's physical adapters, you can skip the scanning process.

11. If this is a mutual failover configuration, follow the wizard to create a configuration repository on the secondary server.

Note: If this is the first time you are setting up a failover configuration, you will get a warning message if there are any Replica resources on the secondary server. You will need to remove them and then restart the failover wizard.


12. Verify the Storage Cluster Interlink Port IP addresses for failover setup.

The IP address fields are automatically populated with the IP address associated with sci0. If the IP addresses listed are incorrect, you will need to click Cancel to exit the failover setup wizard and modify the IP address.

You can verify/modify the IP addresses from the console by right-clicking the server and selecting System Maintenance -> Configure Network. Refer to ‘Network configuration’ for details.

13. Select at least one subnet that you want to configure from the list.

If there are multiple subnets, use the arrows to set the order in which the heartbeat is to be checked.


By re-ordering the subnet list, you can prevent a failure on eth0 alone from triggering failover.

If you are using the Cross-mirror feature, you will not see the 192.168... cross-mirror link that you entered earlier listed here.

14. Indicate if you want to use this network adapter.

Select the IP addresses that clients will use to access the storage servers for iSCSI, replication, and console communication.

(The wizard displays one window for a non-mutual failover configuration and another for a mutual failover configuration.)

Notes:

• If you change the Server IP addresses while the console is connected using those IP addresses, then the Failover wizard will not be able to successfully create the configuration.

• If you uncheck the Include this Network Adapter for failover box, the wizard will display the next card it finds. You must choose at least one.

• For SAN resources, because failover can occur at any time, you should use only those IP addresses that are configured as part of the failover configuration to connect to the server.


15. Enter the health monitoring IP address you reserved for the selected network adapter.

The health monitoring IP address remains with the server in the event of failure so that the server’s health can be continually monitored. Therefore it is recommended that you use static IP addresses.

Select health monitoring “heartbeat” addresses which will be used exclusively by the storage servers to monitor each other’s health. These addresses must not be used for any other purpose.

16. If you want to use additional network adapter cards, repeat the steps above.

(For a non-mutual failover, you enter addresses for one server; in a mutual failover configuration, you have to enter IP addresses for both servers.)


17. (Asymmetric mode only) For Fibre Channel failover, select the initiator on the secondary server that will function as a standby in case the target port on your primary server fails.

For QLogic HBAs, you will need to select a dedicated standby port for each target port used by clients. You should confirm that the adapter shown is not the initiator on your secondary server that is connected to the storage array, and also that it is not the target adapter on your secondary server. You can only pick a standby port once. The exception to this rule is when you are using NPIV.

If you are configuring a mutual failover, you will need to set up the standby adapter for the secondary server as well.

18. Select which Power Control option the primary server is using.

Power Control options force the primary server to release its resources after a failure. Refer to ‘Power Control options’ for more information.


HP iLO - This option will power down the primary server in addition to forcing the release of the server’s resources and IP address. In order to use HP iLO, several packages must be installed on the server and you must have configured the controller’s IP address to be accessible from the storage servers. In this dialog, enter the HP iLO port’s IP address. Refer to ‘HP iLO’ for more information.

For Red Hat 5, the following packages are automatically installed on each server (if you are using the EZStart USB key) in order to use HP iLO power control:

• perl-IO-Socket-SSL-1.01-1.fc6.noarch.rpm

• perl-Net-SSLeay-1.30-4.fc6.x86_64.rpm

RPC100 - This option will power down the primary server in addition to forcing the release of the server’s resources and IP address. RPC100 is an external power controller available in both serial and parallel versions. Select the correct port, depending upon which version you are using. Refer to ‘RPC100’ for more information.

IPMI - This option will reset the power of the primary server, forcing the release of the server’s resources and IP address. In order to use IPMI, you must have created an administrative user via your IPMI configuration tool. The IP address cannot be the virtual IP address that was set for failover. Refer to ‘IPMI’ for more information.

APC PDU - This option will reset the power of the primary server, forcing the release of the server’s resources and IP address. The APC PDU external hardware power controller must be set up before you can use it. In this dialog, enter the IP address of the APC PDU, the community name that was given Write+ access, and the port(s) that the failover partner is physically plugged into on the PDU. Use a space to separate multiple ports. Refer to ‘APC PDU’ for more information.

For Red Hat 5, you will need to install the following packages on each server in order to use APC PDU:

• lm_sensors-2.10.7-9.el5.x86_64.rpm
• net-snmp-5.3.2.2-9.el5_5.1.x86_64.rpm
• net-snmp-libs-5.3.2.2-9.el5_5.1.i386.rpm
• net-snmp-libs-5.3.2.2-9.el5_5.1.x86_64.rpm
• net-snmp-utils-5.3.2.2-9.el5_5.1.x86_64.rpm

19. Select which Power Control option the secondary server is using.


20. Confirm all of the information and then click Finish to create the failover configuration.

Once your configuration is complete, each time you connect to either server in the console, you will automatically be connected to the other as well.

After configuring cross-mirror failover, you will see all of the virtual machine disks listed in the tree, similar to the following:

(In the tree, local physical disks for this server are shown with flags: a V indicates the disk is virtualized for this server, an F indicates a foreign disk, and a Q indicates a quorum disk containing the configuration repository. Remote physical disks for this server are listed separately.)

Notes:

• If the setup fails during the setup configuration stage (for example, the configuration is written to one server but then the second server is unplugged while the configuration is being written to it), use the Remove Failover Configuration option to delete the partially saved configuration. You can then create a new failover configuration.

• Do not change the host name of a server that is part of a failover pair.


After a failover occurs, if a client machine is rebooted while either of the failover servers is powered off, the client must rescan devices once the failover server is powered back on, but before recovery occurs. If this is not done, the client machine will need to be rebooted in order to discover the newly restored paths.

Recreate the configuration repository (updated January 2013)

To recreate the configuration repository for any reason, such as switching to another physical drive, you can use the Reconfigure option. To do this, follow the steps below:

1. Suspend failover on both servers in the configuration.

Right-click on the server and select Failover --> Suspend Failover.

2. Navigate to Logical Resources --> Configuration Repository.

3. Right-click and select Reconfigure.

4. Follow the instructions on the wizard to select a physical device (10240 MB of space in one contiguous physical disk segment is required).

5. Click Finish to recreate the configuration repository.

6. Repeat these steps on the second node of the failover pair.

Power Control options

At times, a server may become unresponsive, but, because of network or internal reasons, it may not release its resources or its IP address, thereby preventing failover from occurring. To allow for a graceful failover, you can use the Power Control options to force the primary server to release its resources after a failure.

Power Control options are used to prevent clusters from competing for access to the same storage. They are triggered when a secondary server fails to communicate with the primary server over both the network and the quorum drive. When this occurs, the secondary server triggers a forceful take over of the primary server and triggers the selected Power Control option.

If the secondary server cannot communicate with the partner’s power control device (such as IPMI or HP iLO), it will not forcefully take over the partner and failover will not occur. However, you may issue a manual takeover from the console, if necessary. This default behavior (for version 7.00 and later) also applies if the failover configuration has been set up with no power control option. Failure to communicate with the power control device may be caused by one of the following reasons:

• Authentication error (password and/or username is incorrect)
• Network connectivity issue
• Server power cable is unplugged
• Wrong information used for the power control device, such as an incorrect IP address


Power Control is set during failover configuration. To change options, right-click on either failover server and select Failover --> Power Control.

HP iLO

This option powers down the primary server in addition to forcing the release of the server’s resources and IP address. HP iLO is available on HP servers with the iLO (Integrated Lights Out) option. In order to use HP iLO, you must have configured the controller’s IP address to be accessible from the storage servers. The console will prompt you to enter the IP address of the server’s HP iLO port.

RPC100

This option will power down the primary server in addition to forcing the release of the server’s resources and IP address. RPC100 is an external power controller available in both serial and parallel versions. The console will prompt you to select the serial or parallel port, depending upon which version of the RPC100 you are using. Note that the RPC100 power controller only controls one power connection; if the storage server has multiple power supplies, a special power cable is needed to connect them all.

SCSI Reserve/Release

(Not available in version 7) This option is not an actual Power Control option, but a storage solution to prevent two storage servers from accessing the same physical storage device simultaneously. Note that this option is only available on storage devices that support SCSI Reserve & Release. This option will not force a hung storage server to reboot and will not force the hung server to release its IP addresses or bring down its FC targets. The secondary server will simply reserve the primary server’s physical resources, thereby preventing the possibility of a double mount. If the primary server is not actually hung and is only temporarily unable to communicate with the secondary server through normal means, the triggering of SCSI Reserve/Release from the secondary server will cause a reservation conflict on the primary server. At this point, the primary server will release both its IP addresses and FC targets so the secondary can successfully take over. If this occurs, the primary server will need to be rebooted before the reservation conflict can be resolved; the commands ipstor restart and ipstor restart all will NOT resolve the reservation conflict.

IPMI

This option will reset the power of the primary server, forcing the release of the server’s resources and IP address. Intelligent Platform Management Interface (IPMI) is a hardware-level interface that monitors various hardware functions on a server. If IPMI is provided by your hardware vendor, you must follow the vendor’s instructions to configure it and you must create an administrative user via your IPMI configuration tool. The IP address cannot be the virtual IP address that was set for failover.

Note: The HP iLO power control option depends on the storage server being able to access the HP iLO port through its regular network connection. If the HP iLO port is inaccessible, this option will not function. Each time the power control dialog screen is launched, the username/password fields will be blank. The fields are available for update but the current username and password information is not revealed for security purposes. You can make changes by re-entering your username and password.


If you are using IPMI, you will see several IPMI options, such as Monitor and Filter, on the server’s System Maintenance menu. Refer to ‘Perform system maintenance (updated March 2013)’ for more information.

You should check the FalconStor certification matrix for a current list of FalconStor appliances and server hardware that has been certified for use with IPMI.

APC PDU

This option will reset the power of the primary server, forcing the release of the server’s resources and IP address. The APC PDU is an external hardware power controller that must be set up before you can use it.

To set up the APC PDU power controller:

1. Connect the APC PDU to your network.

2. Via the COM port on the unit, set an IP address that is accessible from the storage servers.

3. Launch the APC PDU user interface from the COM port or the Web.

4. Enable SNMP on the APC PDU.

This can be found under Network.

5. Add or edit a Community Name and give it Write+ access.

You will use this Community Name as the password for configuration of the power control option. For example, if you want to use the password apc, you have to create a Community Name called apc or change the default Community Name to APC and give it Write+ access.

6. Connect the power plugs of your storage servers to the APC PDU.

Be sure to note which outlets are used for each server.
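
Before relying on the PDU for power control, you may want to confirm from each storage server that it answers SNMP. A minimal check, using example values (192.168.1.50 for the PDU address and apc for the community name created above), could be:

# Query the PDU's SNMP system group with the community name that was
# given Write+ access; a response confirms basic SNMP connectivity.
# (snmpwalk is provided by the net-snmp-utils package listed earlier.)
snmpwalk -v1 -c apc 192.168.1.50 system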


Check Failover status

You can see the current status of your failover configuration, including all settings, by checking the Failover Information tab for the server.

The server is highlighted in a specific color indicating the following conditions:

• Red - The server is currently in failover mode and has been taken over by the secondary server.

• Green - The server has taken over the primary server's resources.
• Yellow - The user has suspended failover on this server. The current server will NOT take over the primary server's resources even if it detects an abnormal condition on the primary server.

Failover events are also written to the primary server's Event Log, so you can check there for status and operational information, as well as any errors. You should be aware that when a failover occurs, the console will show the failover partner’s Event Log for the server that failed.

For troubleshooting issues pertaining to failover, refer to the ‘Failover Troubleshooting’ section.

Failover Information report

The Failover Information Report can be viewed by double-clicking the server status of the failed server on the General tab in the console.

The report shows the failover settings, including which IP addresses are being monitored for failover, and the current status of the failover configuration.


Failover network failure status report

The network failure status report can be viewed using the sms command on the failed server when failover has been triggered due to a client-associated NIC link being down.

Recover from failover

When a failed server is restarted, it communicates with the acting primary server and must receive the okay from the acting primary server in order to recover its role as the primary server. If there is a communication problem, such as a network error, and no notification is received, the failed server remains in a 'ready' state but does not recover its role as the primary server. After the communication problem has been resolved, the storage server will then be able to recover normally.

If failover is suspended on the secondary server, or if the failover module is stopped, the primary will not automatically recover until the ipstorsm.sh recovery command is entered.

If both failover servers go offline and then only one is brought up, type the ipstorsm.sh recovery command to bring the storage server back online.
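
For reference, a minimal sketch of issuing the command from the storage server's command line (assuming the script is on the appliance's default path):

# Bring the storage server back online so it can recover its role
ipstorsm.sh recovery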

Manual recovery

Manual recovery is the process when the secondary server releases the identity of the primary to allow the primary to restore its operation. Manual recovery can be triggered by selecting the Stop Takeover option from the FalconStor Management Console.


If the primary server is not ready to recover, and you can still communicate with the server, a detailed failover screen displays.

If the primary server is not ready to recover, and you cannot communicate with the server, a warning message displays.


Auto recovery

You can enable auto recovery by changing the Auto Recovery option. With auto recovery enabled, control is returned to the primary server automatically once the primary server has recovered from a failover. Once control has returned to the primary server, the secondary server returns to its normal monitoring mode.

Fix a failed server

If the primary server fails over to the secondary and hardware changes are made to the failed server, the secondary server will not be aware of these changes. When failback occurs, the original configuration parameters will be returned to the primary server.

To ensure that both servers become synchronized with the new hardware information, you will need to issue a physical device rescan for the machine whose hardware has changed as soon as the failback occurs.


Recover from a cross-mirror disk failure

For virtual appliances: Whether your cross-mirror disk was brought down for maintenance or because of a failure, you must follow the procedure below to properly bring up the cross-mirror appliance.

When powering down both servers in an Active-Active cross-mirror configuration for maintenance, the servers must be properly brought up as follows in order to successfully recover from failover.

If the cross-mirror environment is in a healthy state, all resources are in sync, and all storage is local to the server (none have swapped), the procedure is as follows:

1. Stop CDP/NSS on the secondary server and wait for the primary to take over.

2. Power down the secondary server.

3. After the primary has successfully taken over, stop CDP/NSS on the primary server and power it down as well.

4. Power up the primary server.

5. Power up the secondary server.

6. CDP/NSS will automatically start.

7. Verify in /proc/scsi/scsi that both servers can see their remote storage (usually identified by having 50 as the adapter number; for example, the first LUN would be 50:0:0:0). If this is not the case, restart the iSCSI initiator or re-login to the servers’ respective targets to see the remote storage.

Restarting the iSCSI initiator: /etc/init.d/iscsi restart

Logging into a target: iscsiadm -m node -p <ip-address>:3261,0 -T <remote-target-name> -l

Example: iscsiadm -m node -p 192.168.200.201:3261,0 -T iqn.2000-03.com.falconstor:istor.PMCC2401 -l

8. Once you have verified that both servers can access the remote storage, restart CDP/NSS on both servers. Failure to do so will result in server recovery issues.

9. After CDP/NSS has been restarted, verify that both servers are in a ready state by using the sms -v command.

Both servers should now be recovered and in a healthy state.
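
Putting steps 7 through 9 together, a command sequence on each node might look like the sketch below. The grep pattern, the use of ipstor restart as the service restart command (referenced elsewhere in this guide), and the example values are assumptions; adapt them to your environment.

# Step 7: confirm the remote LUNs are visible (adapter number 50)
grep -A2 "scsi50" /proc/scsi/scsi

# If they are not visible, restart the iSCSI initiator (or log in to the
# remote target as shown in step 7) and check again
/etc/init.d/iscsi restart

# Step 8: restart CDP/NSS services on both servers
ipstor restart

# Step 9: verify that both servers report a ready state
sms -v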

Note: This is considered a graceful way of powering down both servers for maintenance. After maintenance is complete, this is also the proper way to bring the servers back up and return them to a healthy state.


Re-synchronize Cross mirror

After recovering from a cross mirror failure, the disks will automatically be re-synchronized according to the server properties that have been set up. You can click on the Performance tab to configure the synchronization options.

The disks must be manually re-synchronized if a disk has been offline for more than 20 minutes. Right-click on the server and select Cross Mirror --> Synchronize to manually re-synchronize the disks.

Remove Cross mirror

You can remove cross mirror failover to enable both servers to act as stand-alone storage servers. To remove cross mirror failover:

1. Restart both servers from the console.

2. Re-login to the servers and manually remove all mirrors from the virtual devices left behind after cross-mirror removal.

This can also be done in batch mode by right-clicking SAN resources --> Mirror --> Remove.

Check resources and swap if possible

Swapping takes place when data functions are moved from a failed disk on the primary server to the mirrored disk on the secondary server. Afterwards, the system automatically checks every hour to see if the disks can be swapped back.

If the disk has been replaced/repaired and the cross mirror has been synchronized, you can force a swap to occur immediately by selecting Cross Mirror --> Check & Swap. The system verifies that the local mirror disk is usable and that the cross mirror is synchronized. Once verified, the system swaps the disks. You can verify the status after the swap operation by selecting the Layout tab for the SAN resource.

Verify and repair a cross mirror configuration

There may be circumstances in which you need to use the Verify & Repair option. Use it in the following situations:

• A physical disk used by the cross mirror has been replaced.
• A mirror resource was offline when auto expansion occurred.
• You need to create a mirror for virtual resources that existed on the primary server prior to configuration.
• You need to view storage exception information that cannot be repaired and requires further assistance.


When replacing local or remote storage, if a mirror needs to be swapped first, a swapping request will be sent to the server to trigger the swap. Storage can only be replaced when the damaged segments are part of the mirror, either local or remote. New storage has to be available for this option.

To use the Verify & Repair option:

1. Log into both cross mirror servers.

2. Right-click on the primary server and select Cross Mirror --> Verify & Repair.

3. Click the button for any issue that needs to be corrected.

You will only be able to select a button if that is the scenario where the problem occurred. The other buttons will not be selectable.

Resources

If everything is working correctly, this option will be labeled Resources and will not be selectable. The option will be labeled Incomplete Resources for the following scenarios:

• The mirror resource was offline when auto expansion (i.e. Snapshot resource or CDP journal) occurred but the device is now back online.

• You need to create a mirror for virtual resources that existed on the primary server prior to cross mirror configuration.

1. Right-click on the server and select Cross Mirror --> Verify & Repair.

Note: If you have replaced disks, you should perform a rescan on both servers before using the Verify & Repair option.


2. Click the Incomplete Resources button.

3. Select the resource to be repaired.

4. When prompted, confirm that you want to repair this resource.

Remote Storage

If everything is working correctly, this option will be labeled Remote Storage and will not be selectable. The option will be labeled Damaged or Missing Remote Storage when a physical disk being used by cross mirroring on the secondary server has been replaced.

1. Right-click the primary server and select Cross Mirror --> Verify & Repair.

Note: You must suspend failover before replacing the storage.


2. Click the Damaged or Missing Remote Storage button.

3. Select the remote device to be repaired.

Local Storage

If everything is working correctly, this option will be labeled Local Storage and will not be selectable. The option will be labeled Damaged or Missing Local Storage when a physical disk being used by cross mirroring is damaged on the primary server and has been replaced.

1. Right-click the primary server and select Cross Mirror --> Verify & Repair.

Note: You must suspend failover before replacing the storage.


2. Click the Damaged or Missing Local Storage button.

3. Select the local device to be replaced.

4. Confirm that this is the device to replace.

Storage and Complete Resources

If everything is working correctly, this option will be labeled Storage and Complete Resources and will not be selectable. The option will be labeled Resources with Missing segments on both Local and Remote Storage when a virtual device spans multiple physical devices and one physical device is offline on both the primary and secondary server. This situation is very rare and this option is informational only.

1. Right-click on the server and select Cross Mirror --> Verify & Repair.


2. Click the Resources with Missing segments on both Local and Remote Storage button.

You will see a list of failed devices. Because this option is informational only, no action can be taken here.

Modify failover configuration

Make changes to the servers in your failover configuration

The first time you set up your failover configuration, the secondary server cannot have any Replica resources.

In order to make any changes to a mutual failover configuration, you must be running the console with write access to both servers. CDP/NSS will automatically “log on” to the failover pair when you attempt any configuration on the failover set. While it is not required that both servers have the same username and password, the system will try to connect to both servers using the same username and password. If the servers have different usernames/passwords, it will prompt you to enter them before you can continue.

Change physical device

If you make a change to a physical device (such as if you add a network card that will be used for failover), you will need to re-run the Failover wizard. Be sure to scan both servers during the wizard.

At that point, the secondary server is permitted to have Replica resources. This makes it easy for you to upgrade your failover configuration.


Change subnet

If you switch IP segments for an existing failover configuration, the following needs to be done:

1. Remove failover from both storage servers.

2. Delete the current failover servers from the FalconStor Management Console.

3. Make network modifications to the storage servers (i.e. change IP segments).

4. Add the storage servers back to the FalconStor Management Console.

5. Configure failover using the new IP segment.

Convert a failover configuration into a mutual failover configuration

Right-click on the server and select Failover --> Setup Mutual Failover to convert your failover configuration into a mutual failover configuration where both servers monitor each other. A configuration repository should be created even if you have a standalone server. The status of the configuration repository is always displayed on the console under the General tab. In the case of a configuration repository failure, the console displays the time of failure along with the last successful update.

Exclude physical devices from health checking

You can create a storage exception list that will exclude one or more specific physical devices from being monitored. Devices on this list will not prompt the system to fail over, even if the device stops functioning.

This is useful when using less reliable storage (for asynchronous mirroring or local replication), whose temporary loss will not be critical.

When removing failover, this list is reset and cleaned up.

To exclude devices, right-click on the server and select Failover --> Storage Exception List.

Note: If no configuration repository is found on the secondary server, the wizard to set up mutual failover includes the creation of a configuration repository on the secondary server. The configuration repository requires 10 GB of free space.


Change your failover intervals

Right-click on the server and select Failover --> View/Update Failover Options to change the intervals (heartbeat, self-checking, and auto recovery) for this configuration.

The Self-checking Interval determines how often the primary server will check itself.

The Heartbeat Interval determines how often the secondary server will check the heartbeat of the primary server.

If enabled, Auto Recovery determines how long to wait before returning control to the primary server once the primary server has recovered.

Verify physical devices match

The Check Consistency tool (right-click on the server and select Failover --> Check Consistency) helps verify that both nodes can still see the same LUNs or the same number of LUNs. This is useful when physical storage devices need to be added or removed. After suspending failover and removing/adding storage to both nodes, you would first perform a rescan of the resources on both sides to pick up the changes in configuration. After verifying storage consistency between the two nodes, failover can be resumed without risking a failover trigger.

Note: We recommend keeping the Self-checking Interval and Heartbeat Interval set to the default values. Changing the values can result in a significantly longer failover and recovery process.


Start/stop failover or recovery

Force a takeover by a secondary server

On the secondary server, select Failover --> Start Takeover <servername> to initiate a failover to the secondary server. You may want to do this if you are taking your primary server offline, such as when you will be performing maintenance on it.

Once failover is complete, a failover message will blink in red at the bottom of the console and you will be disconnected from the primary server.

Manually start a server

If you cannot connect to a server via the virtual IP, you have the option to bring up the server by attempting to log into the server from the FalconStor Management Console. The server must be powered on and have IPStor services running in order to be forced to an “up” state. You can verify that a server is in a ready state by connecting to the server via SSH using the heartbeat address and running the sms command.
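For illustration only, the readiness check over SSH might look like the following; the heartbeat address and the root login are placeholders for your environment:

   ssh root@10.1.1.2    # log in over the heartbeat network (address is a placeholder)
   sms                  # display the failover/readiness status of this server

Once sms reports a ready state, you can log in to the server from the console to force it up, as described below.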

When attempting to force a server up from the console, log into the server you are attempting to manually start. Do not attempt to log into the server from the console using the Heartbeat IP address.

The Bring up Primary Server window displays if the server is accessible via the heartbeat IP address.

Type YES in the dialog box to bring the server to a ready state and then force the server up via the monitor IP address.

Manually initiate a recovery to your primary server

Select Failover --> Stop Takeover if your failover configuration was not set up to use the FalconStor Auto Recovery feature and you want to force control to return to your primary server or if you manually forced a takeover and now want to recover to your primary server.

Once failback is complete, you will be logged off from the virtual primary server.


Suspend/resume failover

Select Failover --> Suspend Failover to stop a server from monitoring its partner server.

In the case of Active-Active failover, you can suspend from either server. However, the server that you suspend from will stop monitoring its partner and will not take over for that partner server in the event of failure. It can still fail over itself. For example, server A and server B are configured for Active-Active failover. If you go to server B and suspend failover, server A will no longer fail over to server B. However, server B can still fail over to server A.

Select Failover --> Resume Failover to restart the monitoring.

Notes:

• If the cross mirror link goes down, failover will be suspended. Use the Resume Failover option when the cross mirror link comes back up. The disks will automatically be re-synced at the scheduled interval, or you can manually synchronize using the cross mirror synchronize option.

• If you stop the CDP/NSS processes on the primary server after suspending failover, you must do the following once you restart your storage server (a session sketch follows these notes):

  1. At a Linux command prompt, type sms to see the failover status.

  2. When the system is in a ready state, type the following:

     ipstorsm.sh recovery

• Once the connection is repaired, the failover status is not cleared until failover is resumed on both servers.
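As a sketch of the recovery sequence above (the heartbeat address and the root login are placeholders, and the server must already be restarted with its services running):

   ssh root@10.1.1.2      # connect to the restarted primary server over the heartbeat network
   sms                    # repeat until the server reports a ready state
   ipstorsm.sh recovery   # run the recovery step described in the note above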


Remove a failover configuration

Right-click on one of your failover servers and select Failover --> Remove Failover Server to remove the selected server from the failover configuration. In a non-mutual failover configuration, this eliminates the configuration and returns the servers to independent storage servers.

If this is a mutual failover configuration and you want to eliminate the failover relationship from both sides, select the Remove Mutual Failover option.

If this is a mutual failover configuration, and you do not select the Remove Mutual Failover option, this server (the one you right-clicked on) becomes the secondary server in a non-mutual configuration.

If everything is checked, this eliminates the failover relationship and removes the health monitoring IP addresses from the servers and restores the Server IP addresses. If you uncheck the IP address(es) for a server, the health monitoring address becomes the Server IP address.


Note: If you are using cross mirror failover, after removal the cross mirror relationship will be gone but the configuration of your iSCSI initiator will remain and the disks will still be presented to both primary and secondary servers.


Power cycle servers in a failover setup (updated September 2012)

If you need to shut down servers in a failover setup for maintenance purposes, follow the steps below. A command-line sketch of the shutdown and restart commands appears after the procedure.

1. Make sure the failover pair has been added to the console and the console has been closed. Refer to the Failover setup section for additional information on failover setup.

The login and the server heartbeat information is saved to the console configuration file when you close the console. Once the information has been saved, it is safe to restart the console and begin the shutdown procedure on the servers for maintenance.

2. Suspend failover on each server.

From the FalconStor Management Console, select Failover --> Suspend Failover.

3. Log in to each server using the heartbeat IP via SSH.

4. Stop IPStor services on each server using the "ipstor stop all" command.

5. Power off each server.

6. Perform any required maintenance while the servers are powered off.

7. Once maintenance is complete, power on each server.

8. IPStor services automatically start. If services are not started, manually start services by running the "ipstor start all" command.

9. Verify that both servers are in a ready state using the "sms" command.

10. Log in to each server from the FalconStor Management Console. When prompted to bring each server up via the monitor IP address, type YES to do so.

11. Resume failover on each server.
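The command-line portion of this procedure (steps 3 through 9) can be sketched as follows for each server; the heartbeat address and root login are placeholders, and the power-off command shown is an assumption, so use your site's standard shutdown procedure:

   ssh root@10.1.1.2    # step 3: log in using the heartbeat IP
   ipstor stop all      # step 4: stop IPStor services
   poweroff             # step 5: power off the server (assumed command)
   # ... perform maintenance, then power the server back on ...
   ipstor start all     # step 8: start services manually if they did not start automatically
   sms                  # step 9: verify that the server is in a ready state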


Mirroring and Failover

(Shared storage failover) If a physical drive contains only a mirrored resource and the physical drive fails, the server will not fail over. If the physical drive contained a primary mirrored resource, the mirror will swap roles so that its mirrored copy becomes the primary disk. If the physical drive contained a mirrored copy, nothing will happen to the mirror because there is no need for the mirror to swap. If there are other virtual devices on the same physical drive and the other virtual devices are not mirrored resources, the server will fail over. Swapping will only occur if all of the virtual devices on the physical drive contain mirrored resources.

TimeMark/CDP and Failover

Clients may not be able to access TimeViews during failover.

Throttle and Failover

Setting up throttle on a failover pair requires the following additional considerations:

• The failover pair must have matching target site names. (This does not apply to the target server name)

• The failover pair can have different throttle settings, even if they are replicating to the same server.

• During failover, the throttle values of the two partners combine and are used on the "up" server to maintain throttle settings. In other words, from the software perspective, each server is still maintaining its own throttle. From a hardware perspective, the "up" server runs at the combined throttle level of itself and its partner.

• The failover pair's throttle levels may be combined to equal over 100%. Example: 80%+80%=160%. Note: This percentage is relative to the link type. This value is the maximum speed allowed, not the instantaneous speed.

• If one of the throttle levels is set to no limit, then in a failover state both servers' throttle levels become no limit.

• It is highly recommended that you avoid the use of different link types. Using different link types may cause unexpected results in network traffic while in a failover state.

HotZone and Failover

Using HotZone with failover improves failover performance as disk read operations are faster and more efficient. Failover with HotZone on local storage further improves performance since it is mapped locally.

Local Storage prepared disks can only be used for HotZone using Read Cache. They cannot be used to create a virtual device, mirror, snapshot resource, SafeCache, CDP journal, or replica, or to join a storage pool.


For failover with HotZone created on local storage, failover must be set up first. Local storage cannot be created on a standalone server. For additional information regarding HotZone, refer to HotZone.

Enable HotZone using local storage with failover

Local Storage must be prepared from an individual physical disk, rather than through the Physical Devices Preparation Wizard, to ensure proper mapping of physical disks to the partner server.

1. Right-click on the physical device and select Properties.

The Disk Preparation screen displays.

2. Select Reserved for Local Storage from the drop-down menu.

Local Storage is only available when devices are detected on both servers. The devices do not need to be the same size as long as the preparation is initiated from the smaller device. For example, if server A has a 1 GB disk and server B has a 2 GB disk, Local Storage can only be prepared/initiated from server A.

3. Right click on SAN Resources and select HotZone --> Enable.

The Enable HotZone Resources for SAN Resources wizard launches.


4. On the Storage Option screen, select the Allocate from Local Storage option to allocate space from the high performance disks.

Note: If you need to remove failover setup, it is recommended that you unassign the physical disks so they can be re-used as virtual devices or SED devices after failover has been removed.


Performance

FalconStor offers several options that can dramatically increase the performance of your SAN.

• SafeCache - Allows the storage server to make use of high-speed storage devices as a staging area for write operations, thereby improving the overall performance.

• HotZone - Offers two methods to improve performance, Read Cache and Prefetch.

SafeCache

The FalconStor SafeCache option improves the overall performance of CDP/NSS-managed disks (virtual and/or service-enabled) by making use of high-speed storage devices, such as RAM disk, NVRAM, or solid-state disk (SSD), as a persistent (non-volatile) read/write cache.

In a centralized storage environment where a large set of database servers share a smaller set of storage devices, data tends to be randomly accessed. Even with a RAID controller that uses cache memory to increase performance and availability, hard disk storage often cannot keep up with application servers’ I/O requests.

SafeCache, working in conjunction with high-speed devices (RAM disk, NVRAM, or SSDs) to ‘front’ slower real disks, can significantly improve performance. Because these high-speed devices do not suffer the random-access penalties of mechanical disks, SafeCache can write data blocks sequentially to the cache and then move (flush) them to the data disk (as random writes) in a separate process once the writes have been acknowledged, effectively accelerating the performance of the slower disks.

The SafeCache default throttle speed is 10,240 KB/s, which can be adjusted depending on your client IO pattern.

Regardless of the type of high-speed storage device being used as persistent cache (RAM disk, NVRAM, or SSD), the persistent cache can be mirrored for added protection using the FalconStor Mirroring option. In addition, SSDs and NVRAM have a built-in power supply to minimize potential downtime.


SafeCache is fully compatible with the NSS failover option, which allows one server to automatically fail over to another without any data loss and without any cache write coherency problems. It is highly recommended that you use a solid-state disk as the SafeCache resource.

Configure SafeCache

To set up SafeCache for a SAN Resource you must create a cache resource. You can create a cache resource for a single SAN resource or you can use the batch feature to create cache resources for multiple SAN resources.

To enable SafeCache:

1. Navigate to Logical Resources --> SAN Resources and right-click on a SAN resource.

2. Select SafeCache --> Enable.

The Create Cache Resource wizard displays to guide you through creating the cache resource and allocating space for the storage.

Create a cache resource

1. For a single SAN resource, right-click on a SAN Resource and select SafeCache --> Enable.

Note: If Cache is enabled, up to 256 unflushed TimeMarks are supported. Once the Cache has 256 unflushed TimeMarks, new TimeMarks cannot be created.


For multiple SAN resources, right-click on the SAN Resources object and select SafeCache --> Enable.

2. Select how you want to create the cache resource.

Note that the cache resource cannot be expanded. Therefore, you should allocate enough space for your cache resource, taking into account future growth. If you “outgrow” your cache resource, you will need to disable it and then recreate it.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

Express automatically creates the cache resource using the criteria you select:

• Select different drive - CDP/NSS will look for space on another hard disk.

• Select drives from different adapter/channel - CDP/NSS will look for space on another hard disk only if it is on a separate adapter/channel.

• Select any available drive - CDP/NSS will look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that appears as a single physical device.


If you select Custom, you will see the following windows:

Select either an entirely unallocated or partially unallocated disk.

Only one disk can be selected at a time from this dialog. To create a cache resource from multiple physical disks, you will need to add the disks one at a time. After selecting the parameters for the first disk, you will have the option to add more disks.

Indicate how much space to allocate from this disk.

Click Add More if you need to add more space to this cache resource. If you select to add more disks, you will go back to the physical device selection screen where you can select another disk.


3. Configure when and how the cache should be flushed.

These parameters can be used to further enhance performance.

• Flush cache when data reaches n% of threshold - Specify what percentage of the cache resource can be used before the cache is flushed. The default value is 50%. (A sizing example follows this list.)

• Flush cache after n milliseconds of inactivity - Specify how many milliseconds of inactivity should pass before the cache is flushed even if the threshold from the above is not met. The default value is 0 milliseconds.

• Flush cache up to the speed of - Specify the flush speed / number of KB/s to flush at a time. The default value is 256,000 KB/s.

• Skip Duplicate Write Commands - This option prevents the system from writing more than once to the same block during the cache flush. Therefore, when the cache flushes data to the underlying virtual device, if there is more than one write to the same block, it skips all except the most recent write. Leave this option unchecked if you are using asynchronous mirroring through a WAN or an unreliable network.
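As a rough sizing illustration (hypothetical numbers): with a 20 GB cache resource and the default 50% threshold, flushing begins once roughly 10 GB of unflushed data has accumulated. At the default flush speed of 256,000 KB/s (about 250 MB/s), draining that backlog takes on the order of 40 seconds, assuming the underlying data disk can absorb the writes at that rate.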

4. Confirm that all information is correct and then click Finish to create the cache resource.

You can now mirror your cache resource by highlighting the SAN resource and selecting SafeCache --> Mirror --> Add.

Note: If you take a snapshot manually (via the Console or the command line) of a SafeCache-enabled resource, the snapshot will not be created until the cache has been flushed. If failover should occur before the cache is empty, the snapshot will be inserted into the cache. The snapshot will be created after the snapshot marker has flushed.


Global Cache

Global SafeCache can be viewed from the FalconStor Management Console by selecting the Global SafeCache node under Logical Resources.

You can choose to create a global or private cache resource. A global cache allows you to share the cache with up to 128 resources. To create a global cache, select Use Global Cache Resource in the Create Cache Resource Wizard.

Notes:

• Global Cache can be enabled in batch mode for multiple resources by navigating to Logical Resources --> Global SafeCache and selecting Enable. Otherwise, Global Cache must be enabled for each device one at a time.

• Each server can only have one Global Cache.

• If the Global Cache is suspended, resumed, or its properties are changed on a virtual device, the change also affects the rest of the members.

• Disabling the Global Cache only removes the Global Cache on that specific device.

• Importing the Global Cache from one server to another server is not supported.


SafeCache for groups

If you want to preserve the write order across SAN resources, you should create a group and enable SafeCache for the group. This is useful for large databases that span over multiple devices. In such situations, the entire group of devices is acting as one huge device that contains the database. When changes are made to the database, it may involve different places on different devices, and the write order needs to be preserved over the group of devices in order to preserve database integrity. Refer to ‘Groups’ for more information about creating a group.

Check the status of your SafeCache resource

You can see the current status of your cache resource by checking the SafeCache tab for a cached resource.

Unlike a snapshot resource that continues to grow, the cache resource is cleared out after data blocks are moved to the data disk. Therefore, you can see the Usage Percentage decrease, even return to 0% if there is no write activity.

For troubleshooting issues pertaining to SafeCache operations, refer to the ‘SafeCache Troubleshooting’ section.

Configure SafeCache properties

You can update the parameters that control how and when data will get flushed from the cache resource to the CDP/NSS-managed disk. To update these parameters:

1. Right-click on a SAN resource that has SafeCache enabled and select SafeCache --> Properties.

2. Type a new value for each parameter you want to change.

Refer to the SafeCache configuration section for more details about these parameters.

Disable a SafeCache resource

The SafeCache --> Disable option causes the cache to be flushed, and once completely flushed, removes the cache resource.

Because there is no dynamic free space expansion when the cache resource is full, you can use this option to disable your current cache resource and then manually create a larger one.

If you want to temporarily suspend the SafeCache, use the SafeCache --> Suspend option instead. You will then need to use the SafeCache --> Resume option to begin using the SafeCache again.


HotZone

The FalconStor HotZone option offers two methods to improve performance, Read Cache and Prefetch.

Read Cache

Read Cache is an intelligent, policy-driven, disk-based staging mechanism that automatically remaps "hot" (frequently used) areas of disks to high-speed storage devices, such as RAM disks, NVRAM, or Solid State Disks (SSDs). This results in enhanced read performance for the applications accessing the storage. It also allows you to manage your storage network with a minimal number of high-speed storage devices by leveraging their performance capabilities.

When you configure the Read Cache method, you must divide your virtual or service-enabled disk into “zones” of equal size. The HotZone storage is then automatically created on the specified high-speed disk. This HotZone storage is divided into zones equal in size to the zones on the virtual or service-enabled disk (e.g., 32 MB), and is provisioned to the disk.

Reads/writes to each zone are monitored on the virtual or service-enabled disk. Based on the statistics collected, the application determines the most frequently accessed zones and re-maps the data from these “hot disk segments” to the HotZone storage (located on the high-speed disk) resulting in enhanced read performance for the application accessing the storage. Using the continually collected statistics, if it is determined that the corresponding “hot disk segment” is no longer “hot”, the data from the high performance disk is moved back to its original zone on the virtual or service-enabled disk.

Prefetch

Prefetch enables pre-fetching of data for clients. This allows clients to read ahead consecutively, which can result in improved performance because the data is ready from the anticipatory read as soon as the next request is received from the client. This will reduce the latency of the command and improve the sequential read benchmarks in most cases.

Prefetch may not be helpful if the client is already submitting sequential reads with multiple outstanding commands. However, the stop-and-wait case (with one read outstanding) can often be improved dramatically by enabling Prefetch.

Prefetch does not affect writing or random reading.

Applications that copy large files (i.e. video streaming) and applications that back up files are examples of applications that read sequentially and might benefit from Prefetch.


Configure HotZone

1. Right-click on a SAN resource and select HotZone --> Enable.

For multiple SAN resources, right-click on the SAN Resources object and select HotZone --> Enable.

2. Select the HotZone method to use.

3. (Prefetch only) Set Prefetch properties.


These properties control how the prefetching (read ahead) is done. While you may need to adjust the default settings to enhance performance, FalconStor has determined that the defaults shown here are best suited for most disks/applications.

• Maximum prefetch chains - Number of locations from the disk to read from.

• Maximum read ahead - The maximum per chain. This can override the Read ahead option.

• Read ahead - How much should be read ahead at a time. No matter how this is set, you can never read more than the Maximum read ahead setting allows.

• Chain Timeout - Specify how long the system should wait before freeing up a chain.

4. (Read Cache only) Select the storage pool or physical device(s) from which to create this HotZone.

5. (Read Cache only) Select how you want to create the HotZone.

Note that the HotZone cannot be expanded. Therefore, you should allocate enough space for your HotZone, taking into account future growth. If you “outgrow” your HotZone, you will need to disable it and then recreate it.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

Express automatically creates the HotZone storage using the criteria you select:

• Select different drive - CDP/NSS will look for space on another hard disk.


• Select drives from different adapter/channel - CDP/NSS will look for space on another hard disk only if it is on a separate adapter/channel.

• Select any available drive - CDP/NSS will look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that appears as a single physical device.

6. (Read Cache only) Select the disk to use for the HotZone storage.

If you selected Custom, you can piece together space from one or more disks.


7. (Read Cache only) Enter configuration information about the zones.

• Size of each zone - Indicate how large each zone should be. Reads/writes to each zone on the disk are monitored. Based on the statistics collected, the application determines the most frequently accessed zones and re-maps the data from these “hot zones” to the HotZone storage. You should check with your application server to determine how much data is read/written at one time. The block size used by the application should ideally match the size of each zone.

• Minimum stay time - Indicate the minimum amount of time data should remain in the HotZone before being moved back to its original zone once it is determined that the zone is no longer “hot”.


8. (Read Cache only) Enter configuration information about zone access.

• Access type - Indicate whether the zone should be monitored for reads, writes, or both.

• Access intensity - Indicate how to determine whether a zone is “hot”, using either the number of I/Os performed or the amount of data transferred (read/write) as the determining factor for each zone.

9. Confirm that all information is correct and then click Finish to enable HotZone.

Check the status of HotZone

You can see the current status of your HotZone by checking the HotZone tab for a configured resource.


Note that if you manually suspend HotZone from the Console when the device configured with the HotZone option is running normally, the Suspended field will display Yes.

You can also see statistics about the zone by checking the HotZone Statistics tab:

The information displayed is initially for the current interval (hour, day, week, or month). You can go backward (and then forward) to see any particular interval. You can also view multiple intervals by moving backward to a previous interval and then clicking the Play button to see everything from that point to the present interval.

Click the Detail View button to see more detail. There you will see the information presented more granularly, for smaller amounts of the disk.

If HotZone is being used in conjunction with Fibre Channel or iSCSI failover and a failover has occurred, the HotZone statistics will not be displayed while in a failover state. This is because the server that took over does not have the failed server’s HotZone statistics. As a result, the Console will display empty statistics for the primary server while the secondary has taken over. Once the failed server is restored, the statistics will display properly. This does not affect the functionality of the HotZone option while in a failover state.


Configure HotZone properties

You can configure HotZone properties by right-clicking on the storage server and selecting HotZone. If HotZone has already been enabled, you can select the properties option to configure the Zone and Access policies if the HotZone was set up using the Read Cache method. Alternatively, you will be able to set the Prefetch Properties if your HotZone has been set up using the Prefetch method.

For additional information on these parameters, see ‘Configure HotZone’.

Disable HotZone

The HotZone --> Disable option permanently stops HotZone for the specific SAN resource.

Because there is no dynamic free space expansion when the HotZone is full, you can use this option to disable your current HotZone and then manually create a larger one.

If you want to temporarily suspend HotZone, use the HotZone --> Suspend option instead. You will then need to use the HotZone --> Resume option to begin using HotZone again.


Mirroring

Mirroring provides high availability by minimizing the downtime that can occur if a physical disk fails. The mirror can be defined with disks that are not necessarily identical to each other in terms of vendor, type, or even interface (SCSI, FC, iSCSI).

With mirroring, the primary disk is the disk that is used to read/write data for a SAN Client and the mirrored copy is a copy of the primary. Both disks are attached to a single storage server and are considered a mirrored pair. If the primary disk fails, the disks swap roles so that the mirrored copy becomes the primary disk.

There are two Mirroring options, Synchronous Mirroring and Asynchronous Mirroring.

Synchronous mirroring

FalconStor’s Synchronous Mirroring option offers the ability to define a synchronous mirror for any CDP/NSS managed disk (virtualized or service-enabled).

In the Synchronous Mirroring design, each time data is written to a designated disk, the same data is simultaneously written to another disk. This disk maintains an exact copy of the primary disk. In the event that the primary disk is unable to read/write data when requested to by a SAN Client, CDP/NSS seamlessly swaps data functions to the mirrored copy disk.


Asynchronous mirroring

FalconStor’s Asynchronous Mirroring Option offers the ability to define a near real-time mirror for any CDP/NSS-managed disk (virtual or service-enabled) over long distances between data centers.

When you configure an asynchronous mirror, you create a dedicated cache resource and associate it to a CDP/NSS-managed disk. Once the mirror is created, the primary and secondary disks are synchronized if the Start initial synchronization when mirror is added option is enabled in global settings. This process does not involve the application server. After the synchronization is complete, all write-requests from the associated application server are sequentially delivered to the dedicated cache resource. This data is then committed to both the primary and its mirror as a separate background process. For added protection, the cache resource can also be mirrored.

[Figure: Asynchronous mirroring data flow between the primary site (staging area and primary disk) and the remote site (mirror disk). Data blocks are written sequentially to the cache resource (staging area) to provide enhanced write performance. For read operations, the cache resource is checked first in case a newly written block has not yet been moved to the data disk. Blocks are moved to the primary disk and mirror disk (random write) as a secondary operation, after writes have been acknowledged from the cache resource.]


Mirror requirements

The following are the requirements for setting up a mirroring configuration:

• The mirrored devices must be composed of one or more hard disks.

• The mirrored devices must both be accessible from the same storage server.

• The mirrored devices must be the same size. If you try to expand the primary disk, CDP/NSS will also expand the mirrored copy to the same size.

• A mirror of a Thin Provisioned disk is another Thin Provisioned disk.

Enable mirroring

You can enable mirroring for a single SAN resource or you can use the batch feature to enable mirroring for multiple SAN resources. You can also enable mirroring for an existing snapshot resource, cache resource, or incoming replica resource.

1. For a single SAN resource, right-click on the resource and select Mirror --> Add.

For multiple SAN resources, right-click on the SAN Resources object and select Mirror --> Add.

For an existing snapshot resource or cache resource, right-click on the SAN resource and select Snapshot Resource or Cache Resource --> Mirror --> Add.

Note: For asynchronous mirroring, if you want to preserve the write order of data that is being mirrored asynchronously, you should create a group for your SAN resources and enable SafeCache for the group. This is useful for large databases that span over multiple devices. In such situations, the entire group of devices is acting as one huge device that contains the database. When changes are made to the database, it may involve different places on different devices, and the write order needs to be preserved over the group of devices in order to preserve database integrity. Refer to ‘Groups’ for more information about creating a group.


2. (SAN resources only) Select the type of mirrored copy you are creating.

3. Select the storage pool or physical device(s) from which to create the mirror.


4. Select how you want to create this mirror.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

Express automatically creates the Mirrored Copy using the criteria you select:

• Select different drive - Look for space on another hard disk.

• Select drives from different adapter/channel - Look for space on another hard disk only if it is on a separate adapter/channel.

• Select any available drive - Look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that looks like a single physical device.


If you select Custom, you will see the following windows:

Select either an entirely unallocated or partially unallocated disk.

Only one disk can be selected at a time from this dialog. To create a mirrored disk from multiple physical disks, you will need to add the disks one at a time. After selecting the parameters for the first disk, you will have the option to add more disks.

Indicate how much space to allocate from this disk.

Click Add More if you need to add more space to this mirrored disk. If you select to add more disks, you will go back to the physical device selection screen where you can select another disk.


5. (SAN resources only) Indicate if you want to use synchronous or asynchronous mirroring.

If a cache resource already exists, mirroring will automatically be set to asynchronous mode.

If no cache resource exists, you can use either synchronous or asynchronous mode. However, if you select asynchronous mode, you will need to create a cache resource. The wizard will guide you through creating it.

If you select synchronous mode for a resource without a cache and later create a cache, the mirror will switch to asynchronous mode.

Note: If you are enabling asynchronous mirroring for multiple resources that are being used by the same application (for example, your Oracle database spans three disks), to ensure write order consistency you must first create a group. You must enable SafeCache for this group and add all of the related resources to it before enabling asynchronous mirroring for each resource. By doing this, all of the resources will share the same read/write cache and will be flushed at the same time, thereby guaranteeing the consistency of the data.


6. Determine if you want to monitor the mirroring process.

If you select to monitor the mirroring process, the I/O performance is evaluated to decide if I/O to the mirror disk is lagging beyond an acceptable limit. If it is, mirroring will be suspended so it does not impact the primary storage.

• Monitor mirroring process every n seconds - Specify how frequently the system should check the lag time (delay between I/O to the primary disk and the mirror). Checking more or less frequently will not impact system performance. On systems with very low I/O, a higher number may help get a more accurate representation.

• Maximum lag time for mirror I/O - Specify an acceptable lag time.

• Suspend mirroring when the failure threshold reaches n% - Specify what percentage of I/O may fail the lag time test before mirroring is suspended. For example, you set the percentage to 10% and the maximum lag time to 15 milliseconds. During the test period, 100 I/Os occurred and 20 of them took longer than 15 milliseconds to update the mirror disk. With a 20% failure rate, mirroring would be suspended.

Note: Mirror monitoring settings are retained when a mirror is enabled on the same device.

Note: If a mirror becomes out of sync because of a disk failure or an I/O error (rather than having too much lag time), the mirror will not be suspended. Because the mirror is still active, re-synchronization will be attempted based on the global mirroring properties that are set for the server. Refer to ‘Set global mirroring options’ for more information.


7. If mirroring is suspended, specify when re-synchronization should be attempted.

Re-synchronization can be started based on time (every n minutes/hours - default is every five minutes) and/or I/O activity (when I/O is less than n KB/MB). If you select both, the time will be applied first before the I/O activity level. If you do not select either, the mirror will stay suspended until you manually synchronize it.

If you select one or both re-synchronization methods, you must also specify how many times the system should retry the re-synchronization if it fails to complete.

When the system initiates re-synchronization, it does not check lag time and mirroring will not be suspended if there is too much lag time.

If you manually resume mirroring, the system will monitor the process during synchronization and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.

8. Confirm that all information is correct and then click Finish to create the mirroring configuration.

Note: If CDP/NSS is restarted or the server experiences a failover while attempting to re-synchronize, the mirror will remain suspended.


Create cache resource

The cache resource wizard will be launched automatically when you configure Asynchronous Mirroring but you do not have a cache resource. You can also create a cache resource by right-clicking on a SAN resource and selecting SafeCache --> Enable. For multiple SAN resources, right-click on the SAN Resources object and select SafeCache --> Add.

1. Select how you want to create the cache resource.

Note that the cache resource cannot be expanded. Therefore, you should allocate enough space for your cache resource, taking into account future growth. If you “outgrow” your cache resource, you will need to disable it and then recreate it.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

Express automatically creates the cache resource using the criteria you select:

• Select different drive - Look for space on another hard disk.

• Select drives from different adapter/channel - Look for space on another hard disk only if it is on a separate adapter/channel.

• Select any available drive - Look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that looks like a single physical device.

2. Confirm that all information is correct and then click Finish to create the cache resource.

You can now mirror your cache resource by highlighting the SAN resource and selecting SafeCache --> Mirror --> Add.


Check mirroring status

You can see the current status of your mirroring configuration by checking the General tab for a mirrored resource.

• Synchronized - Both disks are synchronized. This is the normal state.

• Not synchronized - A failure in one of the disks has occurred or synchronization has not yet started. If there is a failure in the Primary Disk, the Primary Disk is swapped with the Mirrored Copy.

• If synchronization is occurring, you will see a progress bar along with the percentage that is completed.

Swap the primary disk with the mirrored copy

Right-click on the SAN resource and select Mirror --> Swap to reverse the roles of the primary disk and the mirrored copy. You will need to do this if you are going to perform maintenance on the primary disk or if you need to remove the primary disk.

Promote the mirrored copy to become an independent virtual drive

Right-click on the mirrored drive and select Mirror --> Promote to break the mirrored pair and convert the mirrored copy into an independent virtual drive. The new virtual drive will have all of the properties of a regular virtual drive.

Note: In order to update the mirror synchronization status, refresh the Console screen (View --> Refresh).


This feature is useful as a “safety net” when you perform major system maintenance or upgrades. Simply promote the mirrored copy and you can perform maintenance on the primary disk without worrying about anything going wrong. If there is a problem, you can use the newly promoted virtual drive to serve your clients.

Notes:

• Before promoting a mirrored drive, all clients should first detach or unmount from the drive. Promoting a drive while clients are attached or mounted may cause the file system to become corrupt on the promoted drive.

• If you are copying files over in Windows to a SAN resource that has a mirror, you need to wait for the cache to flush out before promoting the mirrored drive on the SAN resource. If you do not wait for the cache to flush, you may see errors in the files.

• If you are using asynchronous mirroring, you can promote the mirror only when the SafeCache option is suspended and there is no data in the cache resource that needs to be flushed.

• When you promote the mirror of a replica resource, the replication configuration is maintained.

• Depending upon the replication schedule, when you promote the mirror of a replica resource, the mirrored copy may not be an identical image of the replication source. In addition, the mirrored copy may contain corrupt data or an incomplete image if the last replication was not successful or if replication is currently occurring. Therefore, it is best to make sure that the last replication was successful and that replication is not occurring when you promote the mirrored copy.


Recover from a mirroring hardware failure

Replace a failed disk

If one of the mirrored disks has failed and needs to be replaced:

1. Right-click on the SAN resource and select Mirror --> Remove to remove the mirroring configuration.

2. Physically replace the failed disk.

The failed disk is always the mirrored copy because if the Primary Disk fails, the primary disk is swapped with the mirrored copy.

Important: To replace the disk without having to reboot your storage server, refer to ‘Expand the primary disk’.

3. Run the Create SAN Resource Mirror Wizard to create a new mirroring configuration.

Fix a minor disk failure

If one of the mirrored disks has a minor failure, such as a power loss:

1. Fix the problem (turn the power back on, plug the drive in, etc.).

2. Right-click on the SAN resource and select Mirror --> Synchronize.

This re-synchronizes the disks and restarts the mirroring.

Replace a disk that is part of an active mirror configuration

If you need to replace a disk that is part of an active mirror configuration:

1. If you need to replace the Primary Disk, right-click on the SAN resource and select Mirror --> Swap to reverse the roles of the disks and make it a Mirrored Copy.

2. Select Mirror --> Remove to cancel mirroring.

3. Replace the disk.

Important: To replace the disk without having to reboot your storage server, refer to ‘Expand the primary disk’.

4. Run the Create SAN Resource Mirror Wizard to create a new mirroring configuration.


Expand the primary disk

The mirrored devices must be the same size. If you want to enlarge the primary disk, you will need to enlarge the mirrored copy to the same size. When you use the Expand SAN Resource Wizard, it will automatically launch the Create SAN Resource Mirror Wizard so that you can enlarge the Mirrored Copy as well.

Manually synchronize a mirror

The Synchronize option re-synchronizes a mirror and restarts the mirroring process once it is synchronized. This is useful if one of the mirrored disks has a minor failure, such as a power loss.

1. Fix the problem (turn the power back on, plug the drive in, etc.).

2. Right-click on the resource and select Mirror --> Synchronize.

During the synchronization, the system will monitor the process and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.

Notes:

• As you expand the primary disk, the wizard only shows half the available disk space as available because it reserves an equal amount of space for the mirrored drive.

• On a Thin Provisioned disk, if the mirror is offline, it will be removed when storage is being added automatically. If this occurs, you must recreate the mirror.

Note: If your mirror disk is offline, storage cannot be added to the thin disk manually.


Set mirror throttle

The default throttle speed is 10,240 KB/s, which can be adjusted depending on your client IO pattern. To set the mirror throughput speed/throttle for mirror synchronization, select Mirror --> Throttle.

Select the Enable Mirror Throttle checkbox and enter the throughput speed for mirror synchronization. This option is disabled by default. If this option is disabled for an individual device, the global settings will be followed. Refer to ‘Set global mirroring options’.

The synchronization speed can go up to the specified value, but the actual throughput depends upon the storage environment.
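For example (hypothetical numbers), synchronizing a 100 GB mirror at the default throttle of 10,240 KB/s (10 MB/s) takes at least 104,857,600 KB ÷ 10,240 KB/s ≈ 10,240 seconds, or roughly 2.8 hours. Raising the throttle shortens the synchronization window at the cost of additional load on the storage and network.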

The throughput speed can also be set for multiple devices (in batch mode) by right-clicking on Logical Resources in the console and selecting Set Mirror Throttle.

Note: The mirror throttle settings are retained when the mirror is enabled on the same device.


Set alternative read mirror

To set the alternative read mirror for mirror synchronization, select Mirror --> Alternative read mirror.

Enable this option to have read I/O alternate between the primary resource and the mirror.

The alternative read mirror can also be set in batch mode by right-clicking on Logical Resources in the console and selecting Set Alternative Read Mirror.

Set mirror resynchronization priority

To set the resynchronization priority for pending mirror synchronization, select Mirror --> Priority.

The Mirror resynchronization priority screen displays, allowing you to prioritize the order in which devices/groups will begin mirroring if they are scheduled to start at the same time. This option can be set for a single resource or a single group via the Mirror submenu.


The resynchronization priority can also be set in batch mode by right-clicking on Logical Resources in the console and selecting Set Mirror Priority.


Rebuild a mirror

The Rebuild option rebuilds a mirror from beginning to end and starts the mirroring process once it is synchronized. The rebuild feature is useful if the mirror disk you want to synchronize is from a different storage server.

A rebuild might be necessary if your disaster recovery site has been servicing clients due to some type of issue, such as a storm or power outage, at your primary data center. Once the problem is resolved, the mirror is out of sync. Because the mirror disk is located on a different storage server in a remote location, the local storage server must rebuild the mirror from beginning to end.

Before you rebuild a mirror, you must stop all client activity. After rebuilding the mirror, swap the mirror so that the primary data center can service clients again.

To rebuild the mirror, right-click on a resource and select Mirror --> Rebuild.

You can see the current settings by checking the Mirror Synchronization Status field on the General tab of the resource.

Suspend/resume mirroring

You can suspend mirroring for an individual resource or for multiple resources. When you manually suspend a mirror, the system will not attempt to re-synchronize, even if you have a re-synchronization policy. You will have to resume the mirror in order to synchronize.

When mirroring is resumed, if the mirror is not synchronized, a synchronization will be triggered immediately. During the synchronization, the system will monitor the process and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.

To suspend/resume mirroring for an individual resource:

1. Right-click on a resource and select Mirror --> Suspend (or Resume).

You can see the current settings by checking the Mirror Synchronization Status field on the General tab of the resource.

To suspend/resume mirroring for multiple resources:

1. Right-click on the SAN Resources object and select Mirror --> Suspend (or Resume).

2. Select the appropriate resources.

3. If the resource is in a group, select the checkbox to include all of the group members enabled with mirroring.


Change mirroring configuration options

Set global mirroring options

You can set global mirroring options that affect system performance during mirroring. While the default settings should be optimal for most configurations, you can adjust the settings for special situations.

To set global mirroring properties for a server:

1. Right-click on the server and select Properties.

2. Select the Performance tab.

• Throttle [n] KB/s (Range 128 - 1048576, 0 to disable) - The throttle parameter allows you to set the maximum allowable mirror synchronization speed, thereby minimizing potential impact to performance for your devices. This option is set at 10 MB per second by default. If disabled, throughput is unlimited. Note: Actual throughput depends upon your storage environment.

• Select the Start Initial Synchronization when mirror is added check box to have the mirror synchronize as soon as it is added. By default, the mirror will not automatically synchronize when added. If this option is not selected, the mirror will not synchronize until the next synchronization interval or until a manual synchronization is performed. This option is not applicable for Near-line recovery and thin disk relocation.

• Synchronize Out-of-Sync Mirrors - Indicate how often the system should check and attempt to re-synchronize active out-of-sync mirrors. The default is every five minutes and up to two mirrors at each interval. These settings are also used for the initial synchronization during creation or loading of the mirror. Manual synchronizations can be performed at any time and are not included in the number of mirrors at each interval set here.

• Enter the retry value to indicate how often synchronization should be retried if it fails to complete. The default is to retry 20 times. These settings will only be used for active mirrors. If a mirror is suspended because the lag time exceeds the acceptable limit, that re-synchronization policy will apply instead.

• Indicate whether or not to include replica mirrors in the re-synchronization process by selecting the Include replica mirrors in the automatic synchronization process checkbox. This is unchecked by default.

Change properties for a specific resource

You can change the following mirroring configuration for a resource:

• Policy for monitoring the mirroring process
• Conditions for re-synchronization

To change the configuration:

1. Right-click on the primary disk and select Mirror --> Properties.

2. Make the appropriate changes and click OK.


Remove a mirror configuration

Right-click on the SAN resource and select Mirror --> Remove to delete the mirrored copy and cancel mirroring. You will not be able to access the mirrored copy afterwards.

Mirroring and failover

If mirroring is in progress during failover/recovery, mirroring will restart from where it left off once the failover/recovery is complete.

If the mirror is synchronized but there is a Fibre disconnection between the server and storage, the mirror may become unsynchronized. It will re-synchronize automatically after failover/recovery.

A synchronized mirror will always remain synchronized during a recovery process.


Snapshot Resource

TimeMark® snapshots allow you to create point-in-time delta snapshot copies of data volumes. The concept of performing a snapshot is similar to taking a picture. When we take a photograph, we capture a moment in time and transfer it to a photographic medium, even while changes are occurring to the object we focused on. Similarly, a snapshot of an entire device allows us to capture data at any given moment in time and move it to either tape or another storage medium, while still allowing data to be written to the device.

The basic function of the snapshot engine is to allow images to be created of data volumes (virtual drives) using minimal storage space. The snapshot initially uses no disk space. As new data is written to the source volume, the old data blocks are moved to a temporary snapshot storage area. By combining the snapshot storage with the source volume, the data can be recreated exactly as it appeared at the time the snapshot was taken. For added protection, a Snapshot Resource can also be mirrored.

A trigger is an event that notifies the application when it is time to perform a snapshot of a virtual device. FalconStor’s Replication, TimeMark/CDP, Snapshot Copy, and ZeroImpact Backup options all trigger snapshots.

Create a Snapshot Resource

Each SAN resource can have one Snapshot Resource. The Snapshot Resource supports up to 64 TB and is shared by all of the FalconStor options that use Snapshot (Replication, TimeMark/CDP, Snapshot Copy, and ZeroImpact backup).

Each snapshot initially uses no disk space. As new data is written to the source volume, the old data blocks are moved to the Snapshot Resource. Therefore, it is not necessary to have 100% of the size of the SAN resource reserved as a Snapshot Resource. The amount of space initially reserved for each Snapshot Resource is calculated as follows:

Size of SAN Resource                      Reserved for Snapshot Resource
Less than 500 MB                          100%
500 MB or more but less than 2 GB         50%
2 GB or more                              20%

Using the table above, if you create a 10 GB SAN resource, your initial Snapshot Resource will be 2 GB, but you can set the Snapshot Resource to expand automatically, as needed.

If you create a SAN resource that is less than 500 MB, the amount of space reserved for the Snapshot Resource will be 100% of the virtual drive size. This is because a smaller-sized volume can overfill quickly, leaving no time for the auto-expansion to take effect. By reserving a Snapshot Resource equal to 100% of the SAN resource, the snapshot is able to free up enough space so normal write operations can continue.
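The reservation rule in the table can be expressed as a short calculation. The following sketch is illustrative only (plain Python, not part of the product) and simply reproduces the percentages above:

    # Illustrative sketch of the initial Snapshot Resource reservation rule
    # described in the table above. Sizes are in MB.
    def initial_snapshot_reservation_mb(san_resource_mb: float) -> float:
        if san_resource_mb < 500:        # less than 500 MB -> 100%
            pct = 1.00
        elif san_resource_mb < 2048:     # 500 MB up to 2 GB -> 50%
            pct = 0.50
        else:                            # 2 GB or more -> 20%
            pct = 0.20
        return san_resource_mb * pct

    # Example: a 10 GB (10240 MB) SAN resource reserves 2048 MB (2 GB) initially.
    print(initial_snapshot_reservation_mb(10240))  # 2048.0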

If you do not create a Snapshot Resource for your SAN resource, when you configure Replication, TimeMark/CDP, Snapshot Copy, or backup, the Create Snapshot Resource wizard will launch first, allowing you to create it.

You can create a Snapshot Resource for a single SAN resource or you can use the batch feature to create snapshot resources for multiple SAN resources:

1. For a single SAN resource, right-click on the resource and select Snapshot Resource --> Create.

For multiple SAN resources, right-click on the SAN Resources object and select Snapshot Resource --> Create.

2. Select the storage pool or physical device that should be used to create this Snapshot Resource.


3. Select how you want to create this Snapshot Resource.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

Express lets you designate how much space to allocate and then automatically creates a Snapshot Resource using an available device.

• Select different drive - The storage server will look for space on another hard disk.

• Select drives from different adapter/channel - The storage server will look for space on another hard disk only if it is on a separate adapter/channel.

• Select any available drive - The storage server will look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that looks like a single physical device to your storage server.


If you select Custom, you will see the following windows:

4. Verify the physical devices you have selected.

Select either an entirely unallocated or partially unallocated device.

Indicate how much space to allocate from this device.

Click Add More if you need to add another physical disk to this Snapshot Resource. You will go back to the physical device selection screen where you can select another disk.


5. Determine whether the storage server should expand your Snapshot Resource if it runs low and how it should be expanded.

Specify a threshold as a percentage of the space used. The threshold is used to determine if more space is needed for the Snapshot Resource. The default is 50%.

If you want your storage server to automatically expand the Snapshot Resource when space is running low, set the threshold level and make sure the option Automatically allocate more space for the Snapshot Resource is selected. The default expansion size is 20%. Make sure not to set this expansion increment too low; otherwise, the snapshot resource may go offline if the expansion cannot be completed in time. However, if you have a very large snapshot resource, you can set this value to a small percentage.

Then, determine the amount of space to be allocated for each expansion. You can set this to be a specific size (in MB) or a percentage of the size of the Snapshot Resource. There is no limit to the number of times a Snapshot Resource can be expanded.

Once the low space threshold is triggered, the system will attempt to expand the resource by allocating additional space. The time required to accomplish this may be in milliseconds or even seconds, depending on how busy the system is.

If expansion fails, depending on the snapshot policy set, you will experience either client I/O failure (once the snapshot resource is full) or deletion of earlier TimeMarks so that the Snapshot Resource does not run out of space.

To prevent this from happening, we recommend that you allow enough time for expansion after the low space threshold is reached. We recommend that your safety margin be at least five seconds. This means that from the time the low space threshold is reached, while data is being written to the drive at maximum throughput, it will take a minimum of five seconds to fill up the rest of the drive. Therefore, if the maximum throughput is 50 MB/s, the threshold should be set for when the free space is below 250 MB. Of course, if the throughput is lower, the allowance can be lowered accordingly.
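The five-second guideline reduces to a simple calculation: the free space remaining when the threshold triggers should be at least the maximum write throughput multiplied by the safety margin. A minimal, illustrative sketch (not product code):

    # Illustrative only: compute the minimum free-space headroom that the low-space
    # threshold should leave, given a maximum write throughput and a safety margin.
    def min_headroom_mb(max_throughput_mb_per_s: float, safety_margin_s: float = 5.0) -> float:
        """Free space (MB) that should remain when the threshold is reached."""
        return max_throughput_mb_per_s * safety_margin_s

    # Example from the text: 50 MB/s with a 5-second margin needs about 250 MB of headroom.
    print(min_headroom_mb(50))  # 250.0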

The Maximum size allowed for the Snapshot Resource can be set to limit automatic expansion. Specify 0 for no limit.

6. Configure what your storage server should do under different error conditions.

The default is to Always maintain write operations. However, if you are setting the Snapshot Resource policy on a near-line mirror or replica, the default is to Preserve all TimeMarks.

If you select Always maintain write operations, the system will delete the earliest TimeMark once the threshold for your Snapshot Resource is reached and the resource cannot be expanded. If there is a failure due to the Snapshot Resource not being accessible or a memory error, you will lose all TimeMarks.

If you select Preserve all TimeMarks, the system will prevent any new writes to the primary device and its Snapshot Resource once an error is detected, regardless of whether the error is due to the Snapshot Resource running out of space, a Snapshot Resource disk error, or a memory error. As a result, clients can experience write errors. This option is useful when you want to preserve backups (i.e. for a DiskSafe client or for a replica site), but would not be desirable for an in-band client.

Note: If you do not select automatic expansion, old TimeMarks will be deleted to prevent the Snapshot Resource from running out of space.

If you select Preserve recent TimeMarks, the system will delete the earliest TimeMark once the threshold for your Snapshot Resource is reached and the resource cannot be expanded. If the errors were due to a Snapshot Resource disk error or memory error, all new writes to the primary device and its Snapshot Resource are blocked. The client will experience write error behavior similar to the Preserve all TimeMarks option.

If you select Enable MicroScan, the data block will be analyzed and only the changed data will be copied.

Refer to the Snapshot Resource policy behavior table for additional information regarding Snapshot Resource Policy settings and the associated behavior for error conditions.

7. Determine if you want to use Snapshot Notification.

Snapshot Notification works with the Snapshot Agents to initiate a snapshot request to a SAN client. When used, the system notifies the client to quiet activity on the disk before a snapshot is taken. Using Snapshot Notification guarantees that you will get a transactionally consistent image of your data.

8. Confirm that all information is correct and then click Finish.

You will now see a new Snapshot tab for this SAN resource.

Note: For Always maintain write operations and Preserve recent TimeMarks, the earliest TimeMark will be deleted for all resources in a group when any one member of the group cannot write to the Snapshot Resource due to lack of space. All TimeMarks may be deleted to free up necessary space in the Snapshot Resource.


Snapshot Resource policy behavior

The following table summarizes the system behavior under boundary or error conditions based on different Snapshot Resource policies.

Condition: The Snapshot Resource threshold has been reached and the resource cannot be expanded (or the expansion policy is not configured).

• Preserve all TimeMarks - When the Snapshot Resource is full, any new I/O to the primary resource that requires writing to the Snapshot Resource will be blocked.

• Preserve recent TimeMarks - When the Snapshot Resource reaches the threshold, the system will start deleting the earliest TimeMarks, continuing one-by-one (regardless of priority or whether they are in use) until the available space falls below the threshold or there are no more TimeMarks to be deleted. Even the last TimeMark can be deleted.

• Always maintain write operations - When the Snapshot Resource reaches the threshold, the system starts deleting the earliest TimeMarks one-by-one (regardless of priority or whether they are in use) until the available space falls below the threshold or there are no more TimeMarks to be deleted. Even the last TimeMark can be deleted.

Condition: Snapshot Resource failure other than a full disk (i.e. a recoverable storage error), or a system error (i.e. out of memory).

• Preserve all TimeMarks - All new I/O to the primary resource that requires writing to the Snapshot Resource will fail. All TimeMarks will be kept.

• Preserve recent TimeMarks - All new I/O to the primary resource that requires writing to the Snapshot Resource will fail. All TimeMarks will be kept.

• Always maintain write operations - New I/O to the primary resource is allowed, but the Snapshot Resource will go offline and all TimeMarks will be lost.


Check status of a Snapshot Resource

You can see how much of your Snapshot Resource is currently being used and your expansion methods by checking the Snapshot tab for a SAN resource.

Because Snapshot Resources record block-level changes, not file-level changes, you may not see the Usage Percentage decrease when you delete files. This is because the blocks of deleted files still exist on the disk.

The Usage Percentage bar colors indicate usage percentage in relation to the threshold level:

The usage percentage is displayed in green as long as the available sectors are greater than 120% of the threshold (in sectors). It is displayed in blue when available sectors are less than 120% of threshold (in sectors) but still greater than the threshold (in sectors). The usage percentage is displayed in red when the available sectors are less than the threshold (in sectors).
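The coloring rule can be summarized as a simple comparison. The sketch below is illustrative only (plain Python, not product code):

    # Illustrative sketch of the Usage Percentage bar coloring described above.
    # 'available_sectors' and 'threshold_sectors' are both measured in sectors.
    def usage_bar_color(available_sectors: int, threshold_sectors: int) -> str:
        if available_sectors > 1.2 * threshold_sectors:
            return "green"   # comfortably above the threshold
        elif available_sectors > threshold_sectors:
            return "blue"    # within 120% of the threshold
        else:
            return "red"     # at or below the threshold

    print(usage_bar_color(available_sectors=11000, threshold_sectors=10000))  # blue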

Note that snapshot resources will be marked offline if the physical resource from which they were created is disconnected from one server in a failover set prior to failing over to the secondary server.


Protect your Snapshot Resources

If the physical disk that contains a snapshot resource fails, you will still be able to access your SAN resource, but the snapshot data already in the Snapshot Resource will become invalid. This means that you will not be able to roll back to a point-in-time image of your data.

However, you can protect your snapshot resources by using the Mirroring option. With Mirroring, each time data is written to the Snapshot Resource, the same data is also written to another disk which maintains an exact copy of the Snapshot Resource. If the primary Snapshot Resource disk fails, the storage server seamlessly swaps to the mirrored copy.

To mirror a Snapshot Resource, right-click on the SAN resource and select Snapshot Resource --> Mirror --> Add.

Refer to the ‘Mirroring’ section for more information.

Options for Snapshot Resources

When you right-click on a logical resource that has a Snapshot Resource, you will see a Snapshot Resource menu with the following options:

• Reinitialize - Reinitialize allows you to refresh your Snapshot Resource and start over. You will only need to reinitialize your Snapshot Resource if you are not mirroring it and it has gone offline but is now back online.

• Expand - Expand allows you to manually expand the size of your Snapshot Resource.

• Shrink - The Shrink option allows you to reduce the size of your Snapshot Resource. This is useful if your snapshot resource does not need all of the space currently allocated to it. Based on current usage, when you select the Shrink option, the system calculates the maximum amount of space that can be used to shrink the Snapshot Resource. The amount of disk space saved by this operation is calculated from the last block where data is written. If there are gaps between blocks of data, the gaps are not included in the amount of space saved.

• Delete - Delete allows you to delete the Snapshot Resource for this logical resource.

• Properties - Properties allows you to change the Snapshot Resource automatic expansion policy and snapshot notification policies.

• Mirror - Mirror allows you to protect your Snapshot Resource by creating a mirror of it.

Note: Be sure to stop all I/O to the source resource before starting this operation. If you have I/O occurring during the shrinking process, the space used for the Snapshot Resource may increase and the operation may fail.


• Reclaim - Reclaim allows you to free available space in the snapshot resource. Enable the reclamation policy to automatically free up space when a TimeMark Snapshot is deleted. Once the snapshot is deleted, space will be reclaimed at the next scheduled reclamation.

Snapshot Resource shrink and reclamation policies

The reclamation policy allows you to save space by reclaiming previously used storage areas. In the regular course of running your business, TimeMarks are added and deleted. However, the amount of space used up by the deleted TimeMark does not automatically return to the available resource pool until the space is reclaimed.

Space can be reclaimed automatically by setting a schedule or manually. For manual reclamation, you can select a TimeMark to be reclaimed one at a time. For scheduled reclamation, you can reclaim all the deleted TimeMarks on that device.

Scheduling allows you to set the reclamation policy to automatically free up space when a TimeMark Snapshot is deleted.

Enable Reclamation Policy

The global reclamation policy is enabled by default and scheduled to run at 12:00 a.m. every seven days, automatically removing obsolete TimeView data and conserving space. You can also enable the reclamation option for an individual SAN resource.

While setting the reclamation policy for automatic reclamation works to conserve space in most instances, there are some cases where you may need to manually reclaim space. For example, if you delete a TimeMark Snapshot other than the first or the last one, space will not automatically be available.

In this case, you can manually reclaim the space by right-clicking on the SAN resource in the FalconStor Management Console and selecting Snapshot Resource --> Reclaim --> Start.


Highlight the TimeMark(s) to start the reclamation process and click OK.

You can stop a reclaim process by right-clicking on the SAN resource in the FalconStor Management Console and selecting Snapshot Resource --> Reclaim --> Stop.

To enable a reclamation policy for a particular SAN resource:

1. Right-click on the SAN resource in the FalconStor Management Console and select Snapshot Resource --> Reclaim --> Enable.

The Enable Reclamation Policy screen displays.

2. Enter the following reclamation policy parameters:

• Set the Reclaim threshold - Reclaim space from deleted TimeMarks if there is at least 2 MB of data to be reclaimed. The default is 2 MB; however, you can set your own threshold (in MB or as a percentage) for the minimum amount of space to be reclaimed per TimeMark.

• Set the Reclaim schedule - Enter the date and time to start the reclamation schedule, along with the repeat interval.

Notes:

• If auto-expansion occurs on the Snapshot Resource while the reclamation process is in progress, the reclamation operation will not succeed. The auto-expansion will be skipped as well.

• Delete TimeMark and Rollback TimeMark operations are not supported during reclamation. You must stop reclamation before attempting either operation.

• When both the individual and global reclamation policy are enabled, only the individual policy will be in effect. The global policy will not be triggered on a device that has its own policy.


• Set the maximum processing time for reclamation - Specify the maximum time for the reclamation process. Once this threshold is reached, the reclamation process will stop. Specify 0 to set an unlimited processing time. It is recommended that you schedule lengthy reclamation processing during non-peak operation periods.

Global reclamation policy and retention schedule

You can set and/or edit the global reclamation policy and the TimeMark retention schedule via server properties by right-clicking on the server, and selecting Properties --> TimeMark Maintenance tab.

Note: If reclamation is in progress and failover occurs, the reclamation will fail gracefully. After failover, the global reclamation policy will use the setting on the primary server. For example, if the global reclamation schedule has been disabled on the primary server and enabled on the secondary server (failover pair), then after failover the global reclamation schedule will not be triggered on the device(s) on the primary server.


Select a time to Start TimeMark Retention schedule, or accept the 10:00 p.m. daily default. This means the policy will start every day at 10:00 p.m., deleting the TimeMarks from the previous day according to the retention policy per device, if specified. The retention policy excludes TimeMarks created after 12:00 a.m. on the current day.

Once the reclamation policy has been configured, at-a-glance information regarding reclamation settings can be obtained from the FalconStor Management Console --> Snapshot Resource tab.

Disable Reclamation

To disable the reclamation policy, right-click on the SAN resource in the FalconStor Management Console and select Snapshot Resource --> Reclaim --> Disable.

Note: If the global reclamation schedule is disabled on the primary server and enabled on the secondary server (failover pair), no global reclamation schedule will be triggered on the device(s) on the primary server after failover.


Check reclamation status

You can check the status of a reclaim process from the console by highlighting the appropriate node under SAN Resources in the console.

Shrink Policy

Just as you can set your snapshot resource to automatically expand when it requires more space, you can also set it to "shrink" when it can reclaim unused space. Setting the shrink policy for your snapshot resources is another way to conserve space.

The shrink policy allows you to shrink the size of a Snapshot Resource after each successful scheduled reclamation. The shrink policy can be set for multiple SAN resources as well as for individual resources.

In order to set a shrink policy, a global or individual reclamation policy must be enabled for the SAN resource. Shrinkage amounts depend upon the minimum amount of disk space you set to trigger the shrink policy. When the shrink policy is triggered, the system calculates the maximum amount of space that can be used to shrink the snapshot resource. The amount of disk space saved by this operation is calculated from the last block of data where data is written. When the specified amount of space to be gained is equal to, or greater than the number entered, shrinkage occurs. The snapshot resource can shrink down to the minimum size you set for the resource.


To set the shrink policy:

1. Right-click on SAN Resources and select Snapshot Resource --> Properties.

2. Click the Advanced button.

3. Set the minimum amount of disk space and the minimum snapshot resource size for the shrink policy. (Both criteria must be met after reclamation to trigger the shrink process.)

Set the above values to match your snapshot policy and snapshot usage requirements. If the value is set too high, the snapshot resource shrink process may not occur. If the value is set too low, frequent snapshot expansion may occur, impacting system performance and client I/O.

Minimum Disk Space to Trigger the Shrink Policy:

This is the minimum amount of available space needed to trigger the shrink operation. If the shrinkable size of the unused snapshot resource space is higher than the value set, then this criterion is considered met. The available size to shrink is determined by the system and may not be the same value reflected as snapshot resource usage. The Minimum Disk Space to Trigger the Shrink Policy is set to 1 GB by default.

Minimum Snapshot Resource Size:

This is the minimum Snapshot Resource size that you would like to maintain after shrinkage. The new size may be equal to or greater than the value specified. The minimum Snapshot Resource size is 1 GB by default.
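As a rough sketch of how the two criteria combine (illustrative only; the product performs its own internal calculation, and the exact check shown here is an assumption):

    # Illustrative only: both shrink-policy criteria must be met after a
    # successful scheduled reclamation before the Snapshot Resource is shrunk.
    def shrink_allowed(shrinkable_gb: float, current_size_gb: float,
                       min_trigger_gb: float = 1.0, min_resource_gb: float = 1.0) -> bool:
        enough_to_reclaim = shrinkable_gb >= min_trigger_gb           # criterion 1
        stays_above_minimum = (current_size_gb - shrinkable_gb) >= min_resource_gb  # criterion 2
        return enough_to_reclaim and stays_above_minimum

    # Example: 3 GB shrinkable in a 10 GB Snapshot Resource, with the 1 GB defaults.
    print(shrink_allowed(shrinkable_gb=3, current_size_gb=10))  # True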

Once the shrink policy has been enabled, at-a-glance information regarding shrink policy settings can be obtained from the FalconStor Management Console --> Snapshot Resource tab.


Shrink a snapshot resource

1. Highlight the Replication node in the navigation tree.

2. Right-click on the replica resource that needs shrinking and select Snapshot Resource --> Shrink.

The shrink option will be unavailable if the TimeMark option is not enabled on the replica resource.

3. Enter the amount of space to be reclaimed from the current snapshot space and enter YES to confirm.

By default, the maximum amount of space that can be reclaimed within the snapshot resource will be calculated.

If there are no TimeMarks on the replica resource, the size will automatically be calculated as 50% of the actual snapshot resource space.

Use Snapshot to copy a SAN resource

FalconStor’s Snapshot Copy option allows you to create a duplicate, independent point-in-time copy of a SAN resource without impacting application servers. The entire resource is copied to another drive, overwriting any data on the target drive.

The source must have a Snapshot Resource in order to create a Snapshot Copy. If it does not have one, you will be prompted to create one. Refer to Create a Snapshot Resource for more information.

1. Right-click on the SAN resource that you want to copy and select Copy.

Note: We recommend that if a Snapshot Copy is being taken of a large database without the use of a FalconStor Snapshot Agent, the database should reside on a journaling file system (JFS). Otherwise, under heavy I/O, there is a slight possibility that the file system could be changed, resulting in the need to run a file system check (fsck) in order to repair the file system.


2. Select how you want to create the target resource.

• Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

• Express automatically creates the target for you from available hard disk segments.

• Select Existing lets you select an existing resource. There are several restrictions as to what you can select:
  - The target must be the same type as the source.
  - The target must be the same size as or larger than the source.

Note: All data on the target will be overwritten.


If you select Custom, you will see the following windows:

Only one disk can be selected at a time from this dialog. To create a target resource from multiple physical disks, you will need to add the disks one at a time. After selecting the parameters for the first disk, you will have the option to add more disks. You will need to do this if the first disk does not have enough space.

Indicate how much space to allocate from this disk.

Click Add More if you need to add another physical disk to this target resource. You will go back to the physical device selection screen where you can select another disk.


If you selected Select Existing in step 2, you will see the following window from which you can select an existing resource:

3. Enter a name for the target resource.

The name is not case sensitive.

4. Confirm that all information is correct and then click Finish to perform the Snapshot Copy.

Note: If a failover or recovery occurs when snapshot copy is taking place, the snapshot copy will fail. You must resubmit the snapshot copy afterwards.


5. Assign the snapshot copy to a client.

Check Snapshot Copy status

You can see the current status of your Snapshot Copy by checking the General tab of either the virtual drive you copied from or the one you copied to.

Snapshot Copy events are also written to the server’s Event Log, so you can check there for status information, as well as any errors.

Note: If you attempt to assign a snapshot copy of a virtual disk multiple times to the same Windows SAN Client, the snapshot copy will fail to import. This is because the import of the foreign disk uses the same disk group name as that of the current computer's disk group. This is a problem with Dynamic Disks; Basic Disks will not have this issue.


Groups

The Group feature allows virtual drives and service-enabled drives to be grouped together. Groups can be created for different reasons: for CDP purposes, for snapshot synchronization, for organizational purposes, or for caching using the SafeCache option.

Snapshot synchronization builds on FalconStor’s snapshot technology, which ensures point-in-time consistency for data recovery purposes. Snapshots for all resources in a group are taken at the same time whenever a snapshot is triggered. Working in conjunction with the database-aware Snapshot Agents, groups ensure transactional integrity for database or messaging files that reside on multiple disks.

You can create up to 64 groups. When you create a group, you can configure TimeMark/CDP, Backup, Replication, and SafeCache (and, indirectly, asynchronous mirroring) for the entire group. All members of the group get configured the same way.

Create a group

To create a group from the FalconStor Management Console:

1. Expand the Logical Resources object, right-click on Groups and select New.

Depending upon which options you enable, the subsequent screens will let you set group policies for those options. Refer to the appropriate section(s) (Replication, ZeroImpact Backup, TimeMarks and CDP, or SafeCache) for details on configuration.

Note that you cannot enable CDP and SafeCache for the same group.

2. Indicate if you would like to add SAN resources to this group.


Refer to the following sections for limitations as to which SAN resources can/cannot join a group.

Groups with TimeMark/CDP enabled

The following notes affect groups configured for TimeMark/CDP:

• You cannot add a resource to a group configured for either TimeMark or CDP if the resource is already configured for CDP.

• You cannot add a resource to a group configured for CDP if the resource is already configured for SafeCache.

• CDP can only be enabled for an existing group if members of the group do not have CDP or SafeCache enabled.

• TimeMark can be enabled for an existing group if members of the group have TimeMark enabled.

• The group will have only one CDP journal. You will not see a CDP tab for the individual resources.

• If you want to remove a resource from a group with CDP enabled, you must first suspend the CDP journal for the entire group and wait until it finishes flushing.

• If a member of a group has its own TimeMark that needs to be updated, it must leave the group, make the TimeMark updates individually, and then rejoin the group.

Groups with SafeCache enabled

The following notes affect groups configured for SafeCache:

• You cannot add a resource to a group configured for SafeCache if the resource is already configured for SafeCache.

• SafeCache can only be enabled for an existing group if members of the group do not have CDP or SafeCache enabled.

• The group will have only one SafeCache resource. You will not see a SafeCache tab for the individual resources.

• If you want to remove a resource from a group with SafeCache enabled, you must first suspend SafeCache for the entire group.

Groups with replication enabled

The following notes affect groups configured for replication:

• When you create a group on the primary server, the target server gets a group also.

• When you add resources to a group configured for replication, you can select any resource that is already configured for replication on the target server or any resource that does not have replication configured at all. You cannot select a resource if it is configured for replication to a different server.


• If a watermark policy is used for replication, the retry delay value configured affects each group member individually rather than the group as a whole. For example, if replication starts for the group and a group member fails during the replication process, the retry delay value will take effect. In the meantime, if another resource in the group reaches its watermark, a group replication will be triggered for all group members and the retry delay will become irrelevant.

• If you are using continuous replication, the group will have only one Continuous Replication Resource.

• If a group is configured for continuous replication, you cannot add a resource to the group if the resource has continuous replication enabled. Similarly, continuous replication can only be enabled for an existing group if members of the group do not have continuous replication enabled.

• If you add a resource to a group that is configured for continuous replication, the system switches to periodic replication mode until the next regularly-scheduled replication takes place.

Grant access to a group

By default, only the root user and IPStor administrators can manage SAN resources, groups, or clients. While IPStor users can add new groups, if you want a CDP/NSS user to manage an existing group, you must grant that user access. To do this:

1. Right-click on a group and select Access Control.

2. Select which user can manage this group.

Each group can only be assigned to one IPStor user. This user will have rights to perform any function on this group, including assigning, joining, and configuring storage services.

Add resources to a group

Each group can contain multiple SAN resources. Each resource can only join one group, and you cannot have both types of resources in the same group.

There are several ways to add resources to a group. After you create a group, you will be prompted to add resources. At any time afterwards, you can:

1. Right-click on any group and select Join.

You can also right-click on any SAN resource and select Group --> Join.

2. Select the type of resources that will join this group.

If this is a group with existing members, you will see a list of members instead.

Note: There is a limit of 128 resources per group. If the group is enabled for replication, the recommended limit is 50.


3. Determine if you want to use Express Mode.

If you select Express Mode, you will be able to select multiple resources to join this group at one time. After you finish selecting resources, they will automatically be synchronized with the options and settings configured for the group.

If you do not select Express Mode, you will need to select resources one-by-one. For each resource, you will be taken through the applicable Replication and/or Backup wizard(s) and you will have to manually configure each option. (TimeMark is always configured automatically.)

4. Select resources to join this group.


If you started the wizard from a SAN resource instead of from a group, you will see the following window and you will select a group, instead of a resource:

When you click Next, you will see the options that must be activated. You will be taken through the applicable Replication and/or Backup wizard(s) so you can manually configure each option. (TimeMark is always configured automatically.)

5. Confirm all information and click Finish to add the resource(s) to the group.

Each resource will now have a tab for each configured option except CDP and SafeCache, which share a CDP journal or SafeCache resource as a group.

By default, group members are not automatically assigned to clients. You must still remember to assign your group members to the appropriate client(s).

Remove resources from a group

Note that if you want to remove a resource from a group with CDP or SafeCache enabled, you must first suspend the CDP journal for the group and wait for it to finish flushing or suspend SafeCache. To suspend the CDP journal, right-click on the group and select TimeMark/CDP --> CDP Journal --> Suspend. Afterwards, you will need to resume the CDP journal. To suspend SafeCache, right-click on the group and select SafeCache --> Suspend.

To remove resources from a group:

1. Right-click on any group and select Leave.


2. Select resources to leave this group.

For groups enabled with Backup or Replication, leaving the group does not disable Backup or Replication for the resource.


TimeMarks and CDP

Overview

FalconStor’s TimeMark and Continuous Data Protection (CDP) options protect your mission-critical data, enabling you to recover data from a previous point in time.

TimeMarks are point-in-time images of any SAN virtual drive. Using FalconStor’s Snapshot technology, TimeMarks track multiple virtual images of the same disk marked by "time". If you need to retrieve a deleted file or "undo" data corruption, you can recreate/restore the file instantly based on any of the existing TimeMarks.

While the TimeMark option allows you to track changes to specific points in time, with Continuous Data Protection, you can roll back data to any point in time.

TimeMark/CDP guards against soft errors and non-catastrophic data loss, including the accidental deletion of files and software/virus issues leading to data corruption. TimeMark/CDP protects where high availability configurations cannot, since in creating a redundant set of data, high availability configurations also create a duplicate set of soft errors by default. TimeMark/CDP protects data from your slip-ups, from the butterfingers of employees, from unforeseen glitches during backup, and from the malicious intent of viruses.

The TimeMark/CDP option also provides an undo button for data processing. Traditionally, when an administrator performed operations on a data set, a full backup was required before each “dangerous” step, as a safety net. If the step resulted in undesirable effects, the administrator needed to restore the data set and start the process all over again. With FalconStor's TimeMark/CDP option, you can easily rollback (restore) a drive to its original state.

FalconStor’s TimeView feature is an extension of the TimeMark/CDP option and allows you to mount a virtual drive as of a specific point-in-time. Deleted files can be retrieved from the drive or the drive can be assigned to multiple application servers for concurrent, independent processing, all while the original data set is still actively being accessed/updated by the primary application server. This is useful for “what if” scenarios, such as testing a new payroll application on your actual, but not live, data.

Configure TimeMark properties by right-clicking on the TimeMark/CDP option and selecting Properties.


Enable TimeMark (updated February 2013)

You will need a Snapshot Resource for the logical resource you are going to configure. If you do not have one, you will create it through the wizard. Refer to Create a Snapshot Resource for more information.

1. Right-click on a SAN resource, incoming replica resource, or a Group and select TimeMark/CDP --> Enable.

For multiple SAN resources, right-click on the SAN Resources object and select TimeMark/CDP --> Enable.

The Enable TimeMark/CDP Wizard launches.

2. Indicate if you want to enable CDP. Select the checkbox to enable CDP.

CDP enhances the benefits of using TimeMark by recording all changes made to data, allowing you to recover to any point in time.

Note: If you are using DiskSafe, do not specify a TimeMark retention policy for a DiskSafe mirror resource until you disable any snapshot retention policy that may have been defined in DiskSafe. Use the DiskSafe console to disable the retention policy in resource protection properties. Refer to the DiskSafe User Guide for details on modifying protection properties.

Note: If you enable CDP on the replica, it is recommended that you perform replication synchronization. CDP journaling will not begin until the next successful replication. You can wait until the next scheduled replication synchronization or manually trigger synchronization. To manually trigger replication synchronization, right-click on the primary server and select Replication --> Synchronization.


3. (CDP only) Select the storage pool or physical device that should be used to create the CDP journal.

4. (CDP only) Select how you want to create the CDP journal.

The minimum size required for the journal is 1 GB, which is the default size.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

Express lets you designate how much space to allocate and then automatically creates a CDP journal using an available device.

• Select different drive - Look for space on another hard disk.
• Select drives from different adapter/channel - Look for space on another hard disk only if it is on a separate adapter/channel.


• Select any available drive - look for space on any disk, including the original. This option is useful if you have mapped a device (such as a RAID device) that looks like a single physical device.

5. Determine how often TimeMarks should be created.

The number selected in the Maximum number of TimeMarks allowed enforces the number of TimeMarks that can be created. The default number is based upon your license.

Note: The CDP Journal performance level is set to Moderate by default. You can modify this setting (to aggressive) by right-clicking on the SAN resource and selecting TimeMark/CDP --> CDP Journal --> Performance.


6. Define a TimeMark Retention Policy.

You can select one of the following three retention policies:

• Keep the maximum number of TimeMarks. This policy will keep all TimeMarks created for the device.

• Keep the ___ most recent TimeMarks. This policy will keep the latest TimeMarks up to the value specified. The maximum number can be up to the value set in the previous screen (Maximum number of TimeMarks allowed). The default value is 8.

• Keep TimeMarks based on the following rules: This policy gives you the flexibility to retain TimeMarks according to your scheduling needs. If nothing is checked, the default will be set to keep all TimeMarks for the past 1 day. Refer to TimeMark retention policy for more details on configuring each rule.

Note: Make sure the Maximum number of TimeMarks allowed is set high enough to retain the desired number of TimeMarks. If the number set is too low, your earlier TimeMarks will be deleted before the retention policy can take effect.


7. Select the Trigger replication after TimeMark is taken checkbox if TimeMark and Replication are both enabled for this device/group in order to have replication triggered automatically after each TimeMark event.

If TimeMark is enabled for a group, replication must also be enabled at the group level. If you would like to use SafeCache, a group SafeCache is required. It is strongly recommended that you manually suspend the replication schedule when using this option to avoid a scheduling conflict.

8. Confirm that all information is correct and then click Finish to enable TimeMark/CDP.

You now have a TimeMark tab for this resource or group. If you enabled CDP, you also have a separate CDP tab. If you are using CDP, the TimeMarks will be points within the CDP journal.

In order for a TimeMark to be created, you must select the Create an initial TimeMark on... policy. Otherwise, you will have enabled TimeMark without creating any TimeMarks. In this case, you will need to create them manually using TimeMark/CDP --> Create.

If you are configuring TimeMark for an incoming replica resource, you cannot select the Create an initial TimeMark on... policy. Instead, a TimeMark will be created after each scheduled replication job finishes.

Depending upon the version of your system, the maximum number of TimeMarks that can be created is 1000. When you create a TimeMark and this limit is reached, TimeMarks are deleted based upon their priority and the snapshot resource policy. Refer to the Snapshot Resource policy behavior table for details. However, if all 1000 TimeMarks are in use by a TimeView replication or copy process, none of the TimeMarks in use are deleted and the new TimeMark is still created. In this condition, the 1000 limit can tolerate up to 24 additional TimeMarks.


The maximum does not include the snapshot images that are associated with TimeView resources. When a TimeMark is deleted, journal data is merged together with a previous TimeMark (or a newer TimeMark, if no previous TimeMarks exist).

The first TimeMark that is created when CDP is used will have a Medium priority. Subsequent TimeMarks will have a Medium priority by default, but can be changed manually. Refer to Add a comment or change priority of an existing TimeMark for more information.

Snapshot Notification works with FalconStor Snapshot Agents to initiate a snapshot request to a SAN client. When used, the system notifies the client to quiet activity on the disk before a snapshot is taken. Using snapshot notification guarantees that you will get a transactionally consistent image of your data.

This might take some time if the client is busy. You can speed up processing by skipping snapshot notification if you know that the client will not be updating data when a TimeMark is taken. Use the Trigger snapshot notification for every n scheduled TimeMark(s) option to select which TimeMarks should use snapshot notification.

TimeMark retention policy (updated February 2013)

When defining the rule-based policy, you can specify the offset of the moment to keep, i.e. Use the TimeMark closest to___. For example, for daily TimeMarks, you are asked to specify which hour of the day to use for the TimeMark. For weekly TimeMarks, you are asked which day of the week to keep. If you set an offset for which there is no TimeMark, the closest one to that time is taken.

Note:

• If CDP is enabled, only 256 TimeMarks are supported. This is because CDP can only allow 256 snapshot markers, regardless of whether they are flushed or not.

• A temporary TimeMark does not count toward the maximum TimeMark count within the list.

Note:

• A TimeView cannot be created from the CDP journal if the TimeMark already has TimeView data or is a VSS TimeMark.

• When a TimeView is created from the CDP journal, it is recommended that you change the default 32 MB setting to a larger size to accommodate the large amount of data.

Note: Once you have successfully enabled CDP on the replica, perform Replication synchronization.


The default offset values correspond to typical usage based on the fact that the older the information, the less valuable it is. For instance, you can take TimeMarks every 20 minutes, but keep only those snapshots taken at the minute 00 each hour for the last 24 hours.

This feature allows you to save a pre-determined number of TimeMarks and delete the rest. The TimeMarks that are preserved are the result of the pruning process. This method allows you to keep only meaningful snapshots after each retention run. The retention policy is scheduled to start once a day by default. To modify this setting, refer to Global reclamation policy and retention schedule.

In addition to defining a retention policy when you enable TimeMark/CDP for a resource, you can right-click on the SAN resource, and select TimeMark/CDP --> Properties, and then select the TimeMark Retention tab.

The retention policy can have the following combinations of rules. Each rule is described below, and a compact sketch of the default values follows the list:

• Keep all TimeMarks for the past [value] [unit]
This rule is required to retain all TimeMarks specified by the value set. The range is 1-168 hours or 1-365 days. The default value is 1 day.

• Hourly from the past [value] Days
Use the TimeMark closest to [minute] as the hourly TimeMark.
Set this rule to retain the hourly TimeMark that is closest to the selected minute for the past specified number of days. The ranges are 1-365 days and 0-59 minutes. The default values are 1 day and 0 minutes.

• Daily from the past [value] Days
Use the TimeMark closest to [hour] as the daily TimeMark.
Set this rule to retain one daily TimeMark that is closest to the selected hour of the day for the past specified number of days. The ranges are 1-730 days and 0-23 hours. The default values are 1 day and the 23rd hour.

• Weekly from the past [value] Weeks
Use the TimeMark closest to [day of the week] as the weekly TimeMark.
Set this rule to retain one weekly TimeMark that is closest to the selected day of the week for the past specified number of weeks. The ranges are 1-110 weeks and Monday-Sunday. The default values are 1 week and Friday.

• Monthly from the past [value] Months
Use the TimeMark closest to [day of the month] as the monthly TimeMark.
Set this rule to retain one monthly TimeMark that is closest to the selected day of the month for the past specified number of months. The ranges are 1-120 months and days 1-31. The default values are 1 month and the 31st day (the last day of the month).
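The same rules can be pictured as a small data structure. The sketch below is illustrative only and is not the product's configuration format; it simply lists the default value of each rule:

    # Illustrative only: the default value of each rule-based retention setting,
    # expressed as a plain Python dictionary (not the product's actual format).
    default_retention_rules = {
        "keep_all_for":      {"value": 1, "unit": "day"},
        "hourly_from_past":  {"days": 1, "closest_minute": 0},
        "daily_from_past":   {"days": 1, "closest_hour": 23},
        "weekly_from_past":  {"weeks": 1, "closest_day": "Friday"},
        "monthly_from_past": {"months": 1, "closest_day_of_month": 31},
    }
    print(default_retention_rules["daily_from_past"]["closest_hour"])  # 23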

Retention policy example

The following example illustrates how the retention policy values are interpreted.

• TimeMarks are created every 15 minutes, starting on midnight November 1, 2010 (Monday).

• Manual, critical TimeMarks were created at 12:02 on November 22, 2010 and at 15:37 on December 24, 2010.


• Retention policy is deployed on November 1, 2010 as follows:
  • Rule based policy
  • Keep all TimeMarks for the past 1 day
  • Keep hourly TimeMarks for 4 days closest to 0 minutes
  • Keep daily TimeMarks for 7 days closest to the 23rd hour
  • Keep weekly TimeMarks for 4 weeks closest to Friday
  • Keep monthly TimeMarks for 4 months closest to the 31st day

• None of the TimeMarks are currently mounted.

After the retention policy is run at 1:00AM on December 30, 2010 (Thursday) the results are:

• Keep the two (2) critical TimeMarks; these are excluded by the critical rule.
• Keep all TimeMarks on December 30, 2010; these are excluded by the starting point rule.
• Keep all TimeMarks on December 29, 2010; these are protected by the "all" retention rule.
• Keep TimeMarks at 00:00, 01:00, 02:00 … 23:00 on December 26, 27, and 28; these are protected by the "hourly" retention rule. Note that TimeMarks for December 29, 2010 are already protected by the "all" retention rule.
• Keep TimeMarks at 23:00 on December 23, 24, and 25; these are protected by the "daily" retention rule. Note that the TimeMarks from December 26, 2010 to December 29, 2010 are already protected by the "hourly" retention rule.
• Keep TimeMarks at 23:00 on December 3, 10, and 17; these are protected by the "weekly" retention rule. Note that the TimeMark on December 24, 2010 is already protected by the "daily" retention rule.
• Keep the TimeMark at 23:00 on November 30 and the TimeMark at 00:00 on November 1; these are protected by the "monthly" retention rule. Note that since no TimeMarks were created in October, the November 1 TimeMark is protected because it is the closest to October 31.

Assuming processing completes within 40 minutes (by 1:40 AM on December 30, 2010), there should be 184 snapshots remaining, as the tally below confirms:

• 2 TimeMarks, one each on November 22, 2010 and December 24, 2010 (critical)
• 6 TimeMarks today, December 30, 2010 (excluded)
• 96 TimeMarks on December 29, 2010 (all)
• 24 TimeMarks each on December 26, 27, and 28 (hourly)
• 1 TimeMark each on December 23, 24, and 25 (daily)
• 1 TimeMark each on December 3, 10, and 17 (weekly)
• 1 TimeMark each on November 1, 2010 and November 30, 2010 (monthly)
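A quick tally of the counts above (illustrative arithmetic only) confirms the total of 184:

    # Quick arithmetic check of the TimeMark counts listed above (illustrative only).
    remaining = (
        2            # critical TimeMarks (November 22 and December 24)
        + 6          # today, December 30 (excluded from pruning)
        + 96         # December 29: 4 per hour x 24 hours ("all" rule)
        + 24 * 3     # December 26, 27, 28 (hourly rule)
        + 1 * 3      # December 23, 24, 25 (daily rule)
        + 1 * 3      # December 3, 10, 17 (weekly rule)
        + 1 * 2      # November 1 and November 30 (monthly rule)
    )
    print(remaining)  # 184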


Check TimeMark status

You can see a list of TimeMarks for this virtual drive, along with your TimeMark policies, by clicking the TimeMark tab.

TimeMarks displayed in orange are pending, meaning there is unflushed data in the CDP journal. Unflushed TimeMarks cannot be selected for rollback or TimeView.

To re-order the list of TimeMarks, click on a column heading to sort the list.

The Quiescent column indicates whether or not snapshot notification occurred when the TimeMark was created. When a device is assigned to a client, the initial value is set to No. A Yes in the Quiescent column indicates there is an available agent on the client to handle the snapshot notification, and the snapshot notification was successful.

If a device is assigned to multiple clients, such as nodes of a cluster, the Quiescent column displays Yes only if the snapshot notification is successful on all clients; if there is a failure on one of the clients, the column displays No.

However, in the case of a VSS cluster, the Quiescent column displays Yes with VSS when the entire VSS process has successfully completed on the active node and the snapshot has been created.

If you are looking at this tab for a replica resource, the status will be carried from the primary resource. For example, if the TimeMark created on the primary virtual device used snapshot notification, Quiescent will be set to Yes for the replica.

The TimeView Data column indicates whether TimeView data or a TimeView resource exists on the TimeMark.

The Status column indicates the TimeMark state.

Note: A “vdev expanded” TimeMark is created automatically when a source device with CDP is expanded.


Right-click on the virtual drive and select Refresh to update the TimeMark Used Size and other information on this tab. To see how much space TimeMark is using, check the Snapshot Resource tab.

Check CDP journal status

You can see the current size and status of your CDP journal by checking the CDP tab.

Protect your CDP journal

This section applies only to CDP.

You can protect your CDP journal by using FalconStor’s Mirroring option. With Mirroring, each time data is written to the journal, the same data is also written to another disk which maintains an exact copy of the journal. If the primary journal disk fails, CDP seamlessly swaps to the mirrored copy.

To mirror a journal, right-click on the SAN resource and select TimeMark/CDP --> CDP Journal --> Mirror --> Add.

Add a tag to the CDP journal

You can manually add a tag to the CDP journal. The tag will be used to notate the journal when the next I/O occurs. Adding a tag with a meaningful comment is useful for marking special situations, such as system maintenance or software upgrades.

With these tags, it is easy to find the point just prior to when the system maintenance or software upgrade began, making rollback easy and accurate.


1. Highlight a SAN resource and select TimeMark/CDP --> CDP Journal --> Add tag.

2. Type in a tag and click OK.

Add a comment or change priority of an existing TimeMark

You can add a comment to an existing TimeMark to make it easy to identify later. For example, you might add a comment such as “known good recovery point” or note an application checkpoint to identify a TimeMark for easy recovery.

You can also change the priority of a TimeMark. Priority eases long term management of TimeMarks by allowing you to designate importance, aiding in the preservation of critical point-in-time images.

Priority affects how TimeMarks will be deleted once the maximum number of TimeMarks to keep has been reached. Low priority TimeMarks are deleted first, followed by Medium, High, and then Critical.
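
As a rough illustration of this deletion order, the following sketch (illustrative only, not FalconStor code; the helper name is hypothetical) removes the lowest-priority TimeMarks first, oldest first within a priority level, once the maximum count is exceeded:

    PRIORITY_ORDER = {"Low": 0, "Medium": 1, "High": 2, "Critical": 3}

    def prune(timemarks, max_to_keep):
        """timemarks: list of (timestamp, priority) tuples."""
        # Deletion candidates: lowest priority first, then oldest within a priority.
        candidates = sorted(timemarks, key=lambda tm: (PRIORITY_ORDER[tm[1]], tm[0]))
        excess = max(len(timemarks) - max_to_keep, 0)
        to_delete = set(candidates[:excess])
        return [tm for tm in timemarks if tm not in to_delete]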

1. Right-click on the TimeMarked SAN resource that you want to update and select TimeMark/CDP --> Update

2. Click in the Comment or Priority field to make/change entries.

3. Click Update when done.

Note: Groups with TimeMark/CDP enabled: If a member of a group has its own TimeMark that needs to be updated, it must leave the group, make the TimeMark updates individually, and then rejoin the group.


Manually create a TimeMark

1. To create a TimeMark that is not scheduled, select TimeMark/CDP --> Create.

2. If desired, add a comment for the TimeMark that will make it easily identifiable later if you need to locate it.

3. Set the priority for this TimeMark.

Once the maximum number of TimeMarks allowed has been reached, the earliest TimeMarks will be deleted depending upon priority. Low priority TimeMarks are deleted first, followed by Medium, High, and then Critical.

4. Indicate if you want to use Snapshot Notification for this TimeMark.

Snapshot Notification works with FalconStor Snapshot Agents to initiate a snapshot request to a SAN client. When used, the system notifies the client to quiet activity on the disk before a snapshot is taken. Using snapshot notification guarantees that you will get a transactionally consistent image of your data.

This might take some time if the client is busy. You can speed up processing by skipping snapshot notification if you know that the client will not be updating data when this TimeMark is taken.

The use of this option overrides the Snapshot Notification setting in the snapshot policy.


Copy a TimeMark

The Copy feature works similarly to FalconStor’s Snapshot Copy option. It allows you to take a TimeMark image of a drive (for example, how your drive looked at 9:00 this morning) and copy the entire drive image to another virtual drive or SAN resource. The virtual drive or SAN resource can then be assigned to clients for use and configured for FalconStor storage services.

1. Right-click on the TimeMarked SAN resource that you want to copy and select TimeMark/CDP --> Copy.

2. Select the TimeMark image that you want to copy.

To copy the TimeMark and TimeView data, select the Copy the TimeMark and TimeView data checkbox at the bottom left of the screen.

This option is available only if TimeView data exists and the TimeView is not in use or mounted. Otherwise, you can only create a copy of the disk image as of the TimeMark’s timestamp (without new data that has been written to the TimeView). To capture the new data in this case, see the example below.

For example, if you have assigned a TimeView to a disaster recovery (DR) host and have started writing new data to the TimeView, when you use TimeMark Copy you will have a copy of the point in time without the "new" data that was written to the TimeView. In order to create a full disk copy to include the data in the TimeView, you will need to unassign the TimeView from the DR host, delete the TimeView and select the keep the TimeView data persistent option.



Afterwards, TimeMark Copy will include the new data. You can recreate the TimeView again with the new data and assign back to the DR host.

To revert back to the original TimeMark, you must delete the TimeView again, but do not select the keep the TimeView data persistent option. This will remove the new data from the TimeMark.

Note: Do not initiate a TimeMark Copy while replication is in progress. Doing so will result in the failure of both processes.

3. Select how you want to create the target resource.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

Express automatically creates the target for you from available hard disk segments. You will only have to select the storage pool or physical device that should be used to create the copy.

Select Existing lets you select an existing resource. There are several restrictions as to what you can select:

• The target must be the same type as the source.
• The target must be the same size as, or larger than, the source.
• The target cannot have any Clients assigned or attached.

Note: All data on the target will be overwritten.

4. Enter a name for the target resource.

5. Confirm that all information is correct and then click Finish to perform the TimeMark Copy.

You can see the current status of your TimeMark Copy by checking the General tab of either virtual drive. You can also check the server’s Event Log for status information.

Recover data using the TimeView feature

TimeView allows you to mount a virtual drive as of a specific point-in-time, based on your existing TimeMarks or your CDP journal.

Use TimeView if you need to restore individual files from a drive but you do not want to rollback the entire drive to a previous point in time. Simply use TimeView to mount the virtual drive and then copy the files you need back to your original virtual drive.

TimeView also enables you to perform “what if” scenarios, such as testing a new payroll application on your actual, but not live, data. After mounting the virtual drive, it can be assigned to an application server for independent processing without affecting the original data set. A TimeView cannot be configured for any of FalconStor’s storage services.



Why should you use TimeView instead of Copy? Unlike Copy, which creates a new virtual drive and requires disk space equal to or larger than the original disk, a TimeView requires very little disk space to mount. It is also quicker to create a TimeView than to copy data to a new virtual drive.

1. Highlight a SAN resource and select TimeMark/CDP --> TimeView.

The Create TimeView Wizard displays.

If this resource has CDP enabled, the top section contains a graph with marks that represent TimeMarks.

The graph is a relative reflection of the data changing between TimeMarks within the available journal range. The vertical y axis represents data usage per TimeMark; the height of each mark represents the Used Size of each TimeMark. The horizontal x axis represents time. Each mark on the graph indicates a single TimeMark. You will not see TimeMarks that have no data.

Because the graph is a relative reflection of data, and the differences in data usage can be very large, the proportional height of each TimeMark might not be very obvious.

For example, if you have one TimeMark with a size of 500 MB followed by several much smaller TimeMarks, the 500 MB TimeMark will be much more visible.

Similarly, if the maximum number of TimeMarks has been reached and older TimeMarks have been deleted to make way for newer ones, journal data is merged together with a previous TimeMark (or a newer TimeMark, if no previous exist). Therefore, it is possible that you will see one large TimeMark containing all of the merged data.

Note: Clients may not be able to access TimeViews during failover.

On this screen, move the slider to select any point in time; you can also type in the date and time down to the millisecond and microsecond. Click Zoom In to see greater detail for the selected time period, or click the button to select a CDP journal tag that was manually added.


Also, since the x axis can reflect a range from as little as one hour up to 30 days, the location of an actual data point is approximate. Zooming in and using the Search button will allow you to locate a particular data point more accurately.

If CDP is enabled, you can use the visual slider to create a TimeView from any point in the CDP journal or you can create a TimeView from a scheduled TimeMark.

You can also click the Select Tag button to select a CDP journal tag that was manually added or was automatically added by CDP after a rollback occurred. Note that you will only see the tags for which there was subsequent I/O.

If CDP is not enabled, you will only be able to create a TimeView from a scheduled TimeMark.

2. To create a TimeView from a scheduled TimeMark, select Create TimeView from TimeMark Snapshots, highlight the correct TimeMark, and click OK.

If this is a replica server, the timestamp of a TimeMark is the timestamp of the source (not the replica’s local time).

3. To create a TimeView from the CDP journal, use the slider or type in an approximate time.

For example, if you are trying to find a deleted file, select a time prior to when the file was deleted. If this was an active file, aim for a time just prior to when the file was deleted so that you can recover the most up-to-date version.

If you are positive that the time you selected is correct, you can click OK to create a TimeView. If you are unsure of the exact time, you can zoom into an approximate time period to see greater detail, such as seconds, milliseconds, and even microseconds.


4. If you need to see greater detail, click Zoom In.

You can see the I/O that occurred during this five minute time frame displayed in seconds.

If you zoomed in and don’t see what you are looking for, you can click the Scroll button. It will move forwards or backwards by five minutes within the period of this TimeMark.

You can also click the Search button to locate data or a period with limited or no I/O.

At any point, if you know what time you want to select, you can click OK to return to the main dialog so that you can click OK to create a TimeView. Otherwise, you can zoom in further to see greater detail, such as milliseconds and microseconds.



You can then use the slider to select a time just before the file was deleted.

It is best to select a quiet time without I/O to get the most stable version of the file.

5. After you have selected the correct point in time, click OK to return to the main dialog and then click OK to create a TimeView.


6. Select the physical resource for SAN TimeView Resource.


7. Select a method for TimeView Creation.

Notes:

• The TimeView only uses physical space when I/O is written to the TimeView device. New write I/O may trigger expansion to allocate more physical space for the TimeView when no more space is available. Read I/O does not require additional physical space.

• The maximum size to which a TimeView device can be allocated is 5% more than the primary device. For example: Maximum TimeView body size = 1.05 X primary device size. The allocated size will be checked for both policy and user triggers to expand when necessary.

• The formula for allocating the initial size of the physical space for the TimeView is as follows (summarized in the sketch after these notes):

• If the primary device size is less than 5GB, the initial TimeView size = primary size X 1.05 (the maximum TimeView size)

• If the primary device size is greater than 5GB, the initial TimeView size = 5GB

• If creating a TimeView from a VSS TimeMark, the initial TimeView size = 32MB (as shown in the screen above)

• For best performance, it is recommended that you do not lower the default initial size of the TimeView if you intend to write to the TimeView device (i.e. when using HyperTrac).

• Once the TimeView is deleted, the space becomes available. TimeViews cannot be shrunk once the space is allocated.
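
The sizing rules above can be expressed as a small sketch (illustrative only, not FalconStor code; sizes are in MB):

    GB = 1024  # MB per GB

    def initial_timeview_size_mb(primary_size_mb, from_vss_timemark=False):
        if from_vss_timemark:
            return 32                        # VSS TimeMark: 32 MB initial size
        if primary_size_mb < 5 * GB:
            return primary_size_mb * 1.05    # small primary: start at the maximum
        return 5 * GB                        # otherwise start at 5 GB

    def max_timeview_size_mb(primary_size_mb):
        return primary_size_mb * 1.05        # maximum is 5% larger than the primary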


8. Enter a name for the TimeView and click OK to finish.

The Set TimeView Storage Policy screen displays.

9. Verify and create the TimeView resource.

10. Assign the TimeView to a client.

The client can now recover any files needed.


Remap a TimeView

With TimeViews, you can mount a virtual drive as of a specific point-in-time, based on your existing TimeMarks or your CDP journal. If you are finished with one TimeView but need to create another for the same virtual device, you can remap the TimeView to another point-in-time. When remapping, a new TimeView is created and all of the client connections are retained. To remap a TimeView, follow the steps below:

1. Right-click on an existing TimeView and select Remap.

You must have at least one additional TimeMark available.

2. Select a TimeMark or a point in the CDP journal.

3. Enter a name for the new TimeView and click Finish.

Note: It is recommended that you disable the TimeView from the client (via the Device Manager on Windows machines) before remapping it.

Delete a TimeView

Deleting a TimeView also involves deleting the SAN resource. To delete a TimeView:

1. Right-click on the TimeView and select Delete.

2. Select whether you want to keep the TimeView data persistent when the TimeView is re-created with the same TimeMark.

This option allows you to save the TimeView data on the TimeMark and restore the data when it is recreated with the same TimeMark.

3. Type Yes in the box and click OK to confirm the deletion.



Remove TimeView Data

Obsolete TimeView data is automatically removed for all devices after a successful scheduled reclamation.

Use the Remove TimeView Data option to manually delete obsolete data. You may want to use this option after you have deleted a TimeMark and you want to clean up TimeView data.

This option can be triggered in batch mode, by right-clicking on the Logical Resources node in the FalconStor Management Console and selecting Remove TimeView Data.

To remove TimeView data on an individual device, right-click on the SAN resource and select Remove TimeView Data.

• The first option allows you to remove all TimeView data from selected virtual device(s)

• The second option allows you to select specific TimeView data for deletion.


Set TimeView Policy

TimeView uses its own storage, separate from the snapshot resource. The TimeView Storage Policy can be set during TimeView creation. After a TimeView is created, the storage (auto-expansion) policy can be modified from the properties option.

To set the TimeView storage policy:

1. Right-click on a TimeView device and select Properties.

2. Select the storage policy to be used when space starts to run low.

• Specify the threshold as a percentage of the space used (1 - 99%). The default is the same value as the snapshot resource threshold. Once the specified threshold is met, automatic expansion is triggered.

• Automatically allocate more space for the TimeView device. Check this option to allow the system to allocate additional space (according to the following settings) once the threshold is met.
  • Enter the percentage to Increment space by. The default is the same value as the snapshot resource threshold.
  • Enter the maximum size (in MB) allowed for the TimeView device. This is the maximum size limit used by automatic expansion. The default is 0, which means the maximum TimeView size (see the sketch below).
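
A minimal sketch of this auto-expansion behavior (illustrative only, not FalconStor code; the function name is hypothetical):

    def maybe_expand(used_mb, allocated_mb, threshold_pct, increment_pct,
                     max_size_mb, primary_size_mb):
        # Cap at the configured maximum; 0 means the maximum TimeView size
        # (1.05 x the primary device size).
        cap = max_size_mb if max_size_mb else primary_size_mb * 1.05
        if used_mb / allocated_mb * 100 < threshold_pct:
            return allocated_mb                      # below threshold: no change
        grown = allocated_mb * (1 + increment_pct / 100)
        return min(grown, cap)                       # never exceed the cap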


Rollback or roll forward a drive

Rollback restores your drive to a specific point in time, based on your existing TimeMarks, TimeViews, or your CDP journal. After rollback, your drive will look exactly like it did at that point in time.

After rolling a drive back, TimeMarks made after that point in time will be deleted but all of the CDP journal data will be available, if CDP is enabled. Therefore it is possible to perform another rollback and select a journal date ahead of the previous time, essentially rolling forward.

Group rollback allows you to roll back up to 32 disks (the default) to a TimeMark or CDP data point. To perform a group rollback, right-click on the group and select Rollback. TimeMarks that are common to all devices in a group will display in the wizard.

1. Unassign the Client(s) from the virtual drive before rollback.

For non-Windows Clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin.

2. Right-click on the virtual drive and select TimeMark/CDP --> Rollback.

To enable preservation of all timestamps, check Preserve all TimeMarks with more recent timestamps.

Note: To avoid the need to reboot a Windows 2003 client, unassign the SAN resource from the client now and then reassign it just before re-attaching your client using the FalconStor Management Console.


Do not initiate a TimeMark rollback to a raw device while data is currently being written to the raw device. The rollback will fail because the device will fail to open.

If you have already created a TimeView from the CDP journal and want to roll back your virtual device to that point in time, right-click on the TimeView and select Rollback to.

3. Select a specific point in time or select the TimeMark to which you want to rollback.

If CDP is enabled and you have previously rolled back this drive, you can select a future journal date.

If you selected a TimeView in the previous step, you will not have to select a point in time or a TimeMark.

4. Confirm that you want to continue.

A TimeMark will be taken automatically at the point of the rollback and a tag will be added into the journal. The TimeMark will have the description !!XX-- POST CDP ROLLBACK --XX!! This way, if you later need to create a TimeView, it will contain data from the new TimeMark forward to the TimeView time. This means you will see the disk as it looked immediately after rollback plus any data written to the disk after the rollback occurred until the time of the TimeView.

It is recommended that you remove the POST CDP ROLLBACK TimeMark after a successful rollback because it counts towards the TimeMark count for that member.

5. When done, re-attach your Client(s).

Note: If DynaPath is running on a Windows client, reboot the machine after rollback.

Change your TimeMark/CDP policies (updated February 2013)

You can change your TimeMark schedule, and enable/disable CDP on single devices.

To change a policy:


Notes:

• You cannot enable/disable CDP by updating TimeMark properties in batch mode.

• If you are using DiskSafe, do not specify a TimeMark retention policy for a DiskSafe mirror resource until you disable any snapshot retention policy that may have been defined in DiskSafe. Use the DiskSafe console to disable the retention policy in resource protection properties. Refer to the DiskSafe User Guide for details on modifying protection properties.


1. Right-click on the virtual drive and select TimeMark/CDP --> TimeMark --> Properties.

2. Make the appropriate changes and click OK.

In addition, you can update TimeMark properties in batch mode.

To update multiple SAN resources:

1. Right-click on the SAN resources object and select Properties.

The Update TimeMark Properties screen displays.

2. Select all of the resources you want to update and click Next.

3. Make the desired policy changes and click OK.

Note: If you uncheck the Enable Continuous Data Protection box, this will disable CDP and will delete the CDP journal. It will not delete TimeMarks. If you want to disable TimeMark and CDP, refer to the ‘Disable TimeMark and CDP’ section below.

Delete TimeViews in batch mode

You can delete multiple TimeViews for a device. To do this, select the SAN device whose TimeViews you want to delete.


Suspend/resume CDP

[For CDP only]

You can suspend/resume CDP for an individual resource. If the resource is in a group, you can suspend/resume CDP at the group level. Suspending CDP does not delete the CDP journal and it does not delete any TimeMarks. When CDP is resumed, data resumes going to the journal.

To suspend/resume CDP, right-click on the resource or group and select TimeMark/CDP --> CDP Journal --> Suspend (or Resume).

Delete TimeMarks

The Delete option lets you delete one or more TimeMark images for a virtual drive. Depending upon which TimeMark(s) you delete, this may or may not free up space in your Snapshot Resource. A general rule is that you will only free up Snapshot Resource space if the earliest TimeMark is deleted. If other TimeMarks are deleted, you will need to run reclamation to free up space. Refer to Snapshot Resource shrink and reclamation policies.

1. Right-click on the virtual drive and select TimeMark/CDP --> Delete.

2. Highlight one or more TimeMarks and click Delete.

3. Type yes to confirm and click OK to finish.

Disable TimeMark and CDP

If you ever need to disable TimeMark and CDP, you can select TimeMark/CDP --> Disable. In addition to disabling TimeMark and CDP, this will delete the CDP journal and all existing TimeMarks.

For multiple SAN resources, right-click on the SAN Resources object and select TimeMark/CDP --> Disable.

If you only want to disable CDP and delete the CDP resource, refer to the ‘Change your TimeMark/CDP policies (updated February 2013)’ section.

Replication and TimeMark/CDP

• The timestamp of a TimeMark on a replica is the timestamp of the source.
• You cannot manually create any TimeMarks on the replica, even if you enable TimeMark/CDP on the replica.
• If you are using TimeMark with CDP, you must use Continuous Mode replication (not Delta Mode).


NIC Port Bonding

NIC Port Bonding is a load-balancing/path-redundancy feature available for Linux. This feature enables you to configure your storage server to load-balance network traffic across two or more network connections, creating redundant data paths throughout the network.

NIC Port Bonding offers a new level of data accessibility and improved performance for storage systems by eliminating the point of failure represented by a single input/output (I/O) path between servers and storage systems, and by permitting I/O to be distributed across multiple paths.

NIC Port Bonding allows you to group up to eight network interfaces into a single group.

NIC Port Bonding supports the following scenarios:

• 2 port bond
• 4 port bond
• Dual 2 port bond
• 8 port bond
• Dual 4 port bond

You can think of this group as a single virtual adapter that is actually made up of multiple physical adapters. To the system and the network, it appears as a single interface with one IP address. However, throughput is increased by a factor equal to the number of adapters in the group. Also, NIC Port Bonding detects faults anywhere from the NIC out into the network and provides dynamic failover in the event of a failure.

You can define a virtual network interface (NIC) which sends and receives traffic to/from multiple physical NICs. All interfaces that are part of a bond have SLAVE and MASTER definitions.

Enable NIC Port Bonding

To enable NIC Port Bonding with fewer than four NICs:

1. Right click on the server.

2. Select System Maintenance --> Bond NIC Port.

The NIC Port Bonding screen displays.

3. Enter the IP Address and Netmask for the bonded interfaces: eth0 and eth1. Then click OK.

A bonding interface bond0 with slaves eth0 and eth1 is created.


To enable NIC Port Bonding with four or more NICs:

1. Right click on the server.

2. Select System Maintenance --> Bond NIC Port.

The NIC Port Bonding screen displays.

3. Select the number of bonded teams you are setting up.

You can choose to bond the Ethernet interfaces into one group or two groups, or you can bond only the first two interfaces into one group.

• For one team containing four to eight NICs, enter the IP Address and Netmask of the master and click OK.

• For two teams, enter the IP Address and Netmask of each Master and click OK.


• For one team containing only eth0 and eth1, enter the IP Address and Netmask of the master and click OK.

NIC Port Bonding can be configured to use round robin load-balancing, so the first frame is sent on eth0, the second on eth1, the third on eth0, and so on (see the sketch below). The bonding mode choices are:

• Mode=0 (Sequential) transmits data in round-robin mode and is the default option. There is no switch involved.

• Mode=4 (Link Aggregation) transmits data in a more dedicated, tuned mode in which the NIC ports work together with the switch. This mode requires an LACP (802.3ad) capable switch.
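
A minimal sketch of the round-robin idea behind mode 0 (illustrative only, not FalconStor code):

    from itertools import cycle

    def distribute(frames, slaves=("eth0", "eth1")):
        """Pair each outgoing frame with the next slave NIC in round-robin order."""
        nics = cycle(slaves)
        return [(frame, next(nics)) for frame in frames]

    print(distribute(["frame1", "frame2", "frame3", "frame4"]))
    # [('frame1', 'eth0'), ('frame2', 'eth1'), ('frame3', 'eth0'), ('frame4', 'eth1')]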

Bonding choices:

No Bonding

Eth0/Eth1 (1 group), 2 port

Eth0/Eth1/Eth2/Eth3 (1 group), 4 port

Eth0/Eth1/Eth2/Eth3/Eth4/Eth5/Eth6/Eth7 (1 group), 8 port

Eth0/Eth2,Eth1/Eth3 (2 group), 4 port

Eth0/Eth2/Eth4/Eth6, Eth1/Eth3/Eth5/Eth7 (2 group), 8 port


Remove NIC Port Bonding

To remove NIC Port Bonding, right click on the server, select System Maintenance, and click Yes to confirm the NIC Port Bonding removal.

Change IP address

During the bonding process, you will have the option to select a new IP address.


Replication

Overview

Replication is the process by which a SAN resource maintains a copy of itself either locally or at a remote site. The data is copied, distributed, and then synchronized to ensure consistency between the redundant resources. The SAN resource being replicated is known as the primary disk. The changed data is transmitted from the primary to the replica disk so that they are synchronized. Under normal operation, clients do not have access to the replica disk.

If a disaster occurs and the replica is needed, the administrator can promote the replica to become a SAN resource so that clients can access it. Replica disks can be configured for CDP or NSS storage services, including backup, mirroring, or TimeMark/CDP, which can be useful for viewing the contents of the disk or recovering files.

Replication can be set to occur continuously or at set intervals (based on a schedule or watermark). For performance purposes and added protection, data can be compressed or encrypted during replication.

Remote replication

Remote replication allows fast, data synchronization of storage volumes from one CDP or NSS appliance to another over the IP network.

With remote replication, the replica disk is located on a separate CDP or NSS appliance, called the target server.

Local replication

Local replication allows fast, data synchronization of storage volumes within one CDP or NSS appliance. It can be used within metropolitan area Fibre Channel SANs, or can be used with IP-based Fibre Channel extenders.


With local replication, the replica disk is connected to the CDP or NSS appliance via a gateway using edge routers or protocol converters. Because there is only one CDP or NSS appliance, the primary and target servers are the same server.

How replication works (updated January 2013)

Replication works by transmitting changed data from the primary disk to the replica disk so that the disks are synchronized. To ensure data consistency, replication creates a TimeMark on the primary server and uses data from this snapshot image for replication. By default this TimeMark is deleted after replication unless you decide to preserve it. Replication also creates a TimeMark on the target server for rollback purposes after replication successfully completes. TimeMarks created by replication have the default comment of repsnap. The TimeMark created by replication on the target server will display in the GUI if you have selected the Enable TimeMark option on the target server. The TimeMark created by replication on the primary server will display in the GUI if you decide to preserve it by selecting replication options described later in this section. How frequently replication takes place depends on several factors.

Delta replication

With standard, delta replication, a snapshot is taken of the primary disk at prescribed intervals based on the criteria you set (schedule and/or watermark value).

Continuous replication

With FalconStor’s Continuous Replication, data from the primary disk is continuously replicated to a secondary disk unless the system determines it is not practical or possible, such as when there is insufficient bandwidth. In these types of situations the system automatically switches to delta replication. After the next regularly-scheduled replication takes place, the system automatically switches back to continuous replication.

For continuous replication to occur, a Continuous Replication Resource is used to stage the data being replicated from the primary disk. Similar to a cache, as soon as data comes into the Continuous Replication Resource, it is written to the replica disk. The Continuous Replication Resource is created during the replication configuration.

There are several events that will cause continuous replication to switch back to delta replication, including when:


• The Continuous Replication Resource is full due to insufficient bandwidth
• The CDP or NSS appliance is restarted
• Failover occurs
• You perform the Replication --> Scan option
• You add a resource to a group configured for continuous replication
• The Continuous Replication Resource is offline
• The target server IP address is changed

Configure Replication

Requirements

The following are the requirements for setting up a replication configuration:

• (Remote replication) You must have two storage servers.
• (Remote replication) You must have write access to both servers.
• You must have enough space on the target server for the replica and for the Snapshot Resource.
• Both clocks should be synchronized so that the timestamps match.
• In order to replicate to a disk with Thin Provisioning, the size of the SAN resource must be equal to or greater than 10 GB (the minimum permissible size of a thin disk).

Setup (updated January 2013)

You can enable replication for a single SAN resource or you can use the batch feature to enable replication for multiple SAN resources.

You need Snapshot Resources for the primary and replica disks. If you do not have them, you can create them through the wizard. Refer to Create a Snapshot Resource for more information.

1. For a single SAN resource, right-click on the resource and select Replication --> Enable.

For multiple SAN resources, right-click on the SAN Resources object and select Replication --> Enable.

The Enable Replication for SAN resources wizard launches. Each primary disk can only have one replica disk. If you do not have a Snapshot Resource, the wizard will take you through the process of creating one.


2. Select the server that will contain the replica.

For local replication, select the Local Server.

For remote replication, select any server but the Local Server.

If the server you want does not appear on the list, click the Add button.

3. (Remote replication only) Confirm/enter the target server’s IP address.


4. Specify if you want to use Continuous Replication mode or Delta mode.

Continuous Mode - Select if you want to use FalconStor’s Continuous Replication. After the replication wizard completes, you will be prompted to create a Continuous Replication Resource for the primary disk.

The TimeMark options listed below for continuous mode are primarily used for devices assigned to a VSS-enabled client to maintain the TimeMark synchronization on both the primary and replica disks.

• Create Primary TimeMark - By default, TimeMarks created by replication on the primary server are deleted once replication is complete. This option allows you to preserve the temporary TimeMark created by replication on the primary server whether it is triggered by replication schedule or a manual synchronization. The preserved TimeMark displays in the GUI with the default comment repsnap.

• Synchronize Replica TimeMark - By default, only TimeMarks that are created by a replication operation are synchronized to the replica server. This option allows you to synchronize all TimeMarks on the primary server to the replica server, regardless of how the TimeMark was triggered.

• Both of the above options need to be selected when replicating VSS TimeMarks in CDR mode to ensure synchronous VSS TimeMarks on both the primary and replica. This is necessary because VSS TimeMarks contain additional VSS TimeView data that will not be replicated over to the replica server without these options selected.

Delta Mode - Select if you want replication to occur at set intervals (based on schedule or watermark).

Note: When using Continuous Data Replication, you can enable CDP on the replica to get a more granular recovery point from the replica.


The TimeMark options for delta mode are as follows:

• Use existing TimeMark - Determine if you want to use the most current TimeMark on the primary server when replication begins or if the replication process should create a TimeMark specifically for the replication. In addition, using an existing TimeMark reduces the usage of your Snapshot Resource. However, the data being replicated may not be the most current. When configuring replication for DiskSafe devices, selecting this "Use existing TimeMark" option will not trigger the initial replication sync until a new TimeMark is created - even if there is an existing TimeMark. To make sure you are replicating with the most recent TimeMark, it is recommended that you use the Trigger replication after TimeMark is taken option when configuring TimeMark properties. Refer to the ‘Enable TimeMark (updated February 2013)’ section for more information.

• Preserve Replication TimeMark - If you did not select the Use Existing TimeMark option, a temporary TimeMark is created when replication begins. This TimeMark is then deleted after the replication has completed. Select Preserve Replication TimeMark to create a permanent TimeMark that will not be deleted when replication has completed (if the TimeMark option is enabled). This is a convenient way to keep all of the replication TimeMarks without setting up a separate TimeMark schedule. The preserved TimeMark displays in the GUI with the default repsnap comment.

Notes about using an existing TimeMark:

While using an existing TimeMark reduces the usage of your Snapshot Resource, the data being replicated may not be the most current.

For example, your replication is scheduled to start at 11:15 and your most recent TimeMark was created at 11:00. If you have selected Use Existing TimeMark, the replication will occur with the 11:00 data, even though additional changes may have occurred between 11:00 and 11:15.

Therefore, if you select Use Existing TimeMark, you must coordinate your TimeMark schedule with your replication schedule.

Even if you select Use Existing TimeMark, a new TimeMark will be created under the following conditions:

• The first time replication occurs.
• Each existing TimeMark will only be used once. If replication occurs multiple times between the creation of TimeMarks, the TimeMark will be used once; a new TimeMark will be created for subsequent replications until the next TimeMark is created.
• The most recent TimeMark has been deleted, but older TimeMarks exist.
• After a manual rescan.


5. Configure how often, and under what circumstances, replication should occur.

An initial replication for individual resources begins immediately upon setting the replication policy. Then replication occurs according to the specified policy.

You must select at least one policy but you can have multiple. You must specify a policy even if you are using continuous replication so that if the system switches to delta replication, it can automatically switch back to continuous replication after the next regularly-scheduled replication takes place.

Any number of continuous replication jobs can run concurrently. However, by default, 20 delta replication jobs can run, per server, at any given time. If there are more than 20, the highest priority disks begin replication first while the remaining disks wait in the queue in the order of their priority. As soon as one of the jobs finishes, the disk with the next highest priority in the queue begins.
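
A minimal sketch of this queuing behavior (illustrative only, not FalconStor code; the names are hypothetical):

    import heapq

    MAX_CONCURRENT = 20  # default per-server limit for delta replication jobs

    def schedule(jobs):
        """jobs: list of (priority, disk_name); higher priority starts first."""
        queue = [(-priority, disk) for priority, disk in jobs]   # max-heap via negation
        heapq.heapify(queue)
        running = [heapq.heappop(queue)[1]
                   for _ in range(min(MAX_CONCURRENT, len(queue)))]
        waiting = [disk for _, disk in sorted(queue)]             # still priority order
        return running, waiting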

Start replication when the amount of new data reaches - If you enter a watermark value, when the value is reached, a snapshot will be taken and replication of that data will begin. If additional data (more than the watermark value) is written to the disk after the snapshot, that data will not be replicated until the next replication. If a replication that was triggered by a watermark fails, the replication will be re-started based on the retry value you enter, assuming the system detects any write activity to the primary disk at that time. Future watermark-triggered replications will not start until after a successful replication occurs.

Note: Contact Technical Support for information about changing this value but note that additional replication jobs will increase the load and bandwidth usage of your servers and network and may be limited by individual hardware specifications.


If you are using continuous replication and have set a watermark value, make sure it is a reachable value; otherwise snapshots will rarely be taken. Although continuous replication does not take snapshots, you will need a recent, valid snapshot if you need to rollback the replica to an earlier TimeMark during promotion.

If you are using SafeCache, replication is triggered when the watermark value of data is moved from the cache resource to the disk.

Start initial replication on mm/dd/yyyy at hh:mm and then every n hours/minutes thereafter - Indicate when replication should begin and how often it should be repeated.

If a replication is still in progress when the next time interval is reached, the new replication request will be ignored.

6. Indicate which options you want to use for this device.

The Compress Data option provides enhanced throughput during replication by compressing the data stream. This option leverages machines with multiple processors by using more than one thread for data compression/decompression during replication. Compressing data reduces the size of the transmission, thereby maximizing network bandwidth. In other words, the more the data is compressed, the less data there is to replicate. The compression ratio depends on the type of data; for example, text and database files typically compress much better than data that is already compressed, such as JPEG images.

The Encrypt Data option provides an additional layer of security during replication by securing data transmission over the network. Initial key distribution is accomplished using the authenticated Diffie-Hellman exchange protocol. Subsequent session keys are derived from the master shared secret, making it very secure.

Note: Compression requires 64K of contiguous memory. If storage server memory is too fragmented to allocate 64K, replication will fail.

Enable MicroScan - MicroScan analyzes each replication block on-the-fly during replication and transmits only the changed sections on the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. If the global MicroScan option is turned on, it overrides the MicroScan setting for an individual virtual device. Also, if the virtual devices are in a group configured for replication, group policy always overrides the individual device’s policy. This option is selected by default.
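
The effect described above can be illustrated with a small sketch (illustrative only, not FalconStor code; the section size and the comparison against the previous block contents are assumptions made for the example):

    SECTION = 512  # bytes per section (hypothetical granularity)

    def changed_sections(new_block: bytes, old_block: bytes):
        """Return (offset, data) pairs for the sections of the block that changed."""
        delta = []
        for off in range(0, len(new_block), SECTION):
            section = new_block[off:off + SECTION]
            if section != old_block[off:off + SECTION]:
                delta.append((off, section))   # only changed sections go on the wire
        return delta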

7. Select how you want to create the replica disk on the target server.

Custom allows you to select which physical device(s) to use and lets you designate how much space to allocate from each. You can select a larger SED disk as a replica. Only in this case can a larger physical replica device be selected. However, the logical size will match the primary size and the extra space on the SED disk will not be available for any other purpose.

Express automatically creates the replica for you from available hard disk segments. You will only have to select the storage pool or physical device that should be used to create the replica resource. This is the default setting.

Select Existing allows you to select an existing resource. There are several restrictions as to what you can select:

• The target must be the same size as the primary.
• The target can have Clients assigned to it, but they cannot be connected during the replication configuration.

Note: All data on the target will be overwritten.


If you select Custom, you will see the following windows:

Only one disk can be selected at a time from this dialog. To create a replica disk from multiple physical disks, you will need to add the disks one at a time. After selecting the first disk, you will have the option to add more disks. You will need to do this if the first disk does not have enough space.

Indicate how much space to allocate from this disk.

Click Add More if you need to add another physical disk to this replica disk. You will go back to the physical device selection screen where you can select another disk.

Indicate the type of replica disk you are creating.

Select the storage pool or device to use to create the replica resource.


8. Enter a name for the replica disk.

The name is not case sensitive.

9. Confirm that all information is correct and then click Finish to create the replication configuration.

When will replication begin?

If you have configured replication for an individual resource, the system will begin synchronizing the disks immediately after the configuration is complete if the disk is attached to a client and is receiving I/O activity.

Replication for a group

If you have configured replication for a group, synchronization will not start until one of the replication policies (time or watermark) is triggered. If replication fails for one group member, it is skipped and replication continues for the rest of the group. After successful replication, group members will have a TimeMark created on their replicas. In order for the group members that were skipped to have the same TimeMark on their replicas, you will need to remove them from the group, use the same TimeMark to replicate again, and then re-join the group.

If you configured continuous replication

If you are using continuous replication, you will be prompted to create a Continuous Replication Resource for the primary disk and a Snapshot Resource for the replica disk. If you are not using continuous replication, the wizard will only ask you to create a Snapshot Resource on the replica.

Because old data blocks are moved to the Snapshot Resource as new data is written to the replica, the Snapshot Resource should be large enough to handle the amount of changed data that will be replicated. Since it is not always possible to know how much changed data will be replicated, it is a good idea for you to enable expansion on the target server’s Snapshot Resource. You then need to decide what to do if your Snapshot Resource runs out of space (reaches the maximum allowable size or does not have expansion enabled). The default is to preserve all TimeMarks. This option stops writing data to the source SAN resource if there is no more space available or there is a disk failure, in order to preserve all TimeMarks.

Note: Once you create your replication configuration, you should not change the hostname of the source (primary) server. If you do, you will need to recreate your replication configuration.

Protect your replica resource

For added protection, you can mirror or TimeMark an incoming replica resource by highlighting the replica resource and right-clicking on it.

Create a Continuous Replication Resource

This is needed only if you are using continuous replication.

1. Select the storage pool or physical device that should be used to create this Continuous Replication Resource.

CDP/NSS Administration Guide 336

Page 339: CDP-NSS Administration Guide

Replication

2. Select how you want to create this Continuous Replication Resource.

Custom lets you select which physical device(s) to use and lets you designate how much space to allocate from each.

Express lets you designate how much space to allocate and then automatically creates the resource using an available device.

3. Verify the physical devices you have selected, confirm that all information is correct, and then click Finish.

On the Replication tab, you will notice that the Replication Mode is set to Delta. Replication must be initiated once before it switches to continuous mode. You can either wait for the first scheduled replication to occur or you can right-click on your SAN resource and select Replication --> Synchronize to force replication to occur.

Note: The Continuous Replication Resource maximum size is 1 TB and cannot be expanded. Therefore, you should allocate enough space for the resource. By default, the size will be 256 MB or 5% of the size of your primary disk (or 5% of the total size of all members of this group), whichever is larger. If the primary disk regularly experiences a large number of writes, or if the connection to the target server is slow, you may want to increase the size, because if the Continuous Replication Resource should become full, the system switches to delta replication mode until the next regularly-scheduled replication takes place. If you “outgrow” your resource, you will need to disable continuous replication and then re-enable it.
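
The default sizing described in the note above can be sketched as follows (illustrative only, not FalconStor code; sizes are in MB and the 1 TB maximum is applied for clarity):

    MB, TB = 1, 1024 * 1024  # sizes expressed in MB

    def default_crr_size_mb(primary_total_mb):
        # 256 MB or 5% of the primary disk (or group total), whichever is larger,
        # never exceeding the 1 TB maximum.
        return min(max(256 * MB, 0.05 * primary_total_mb), 1 * TB)

    print(default_crr_size_mb(10 * 1024))    # 10 GB primary  -> 512.0
    print(default_crr_size_mb(100 * 1024))   # 100 GB primary -> 5120.0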


Check replication status

There are several ways to check replication status:

• The Replication tab on the primary disk displays information about a specific resource.

• The Incoming and Outgoing objects under the Replication object display information about all replications to or from a specific server.

• The Event Log displays a list of replication information and errors.
• The Delta Replication Status Report provides a centralized view for displaying real-time replication status for all drives enabled for replication.

Replication tab

The following are examples of what you will see by checking the Replication tab for a primary disk:

The tab contents differ slightly depending on whether Continuous Replication or Delta Replication is enabled.


All times shown on the Replication tab are based on the primary server’s clock.

Accumulated Delta Data is the amount of changed data. Note that this value will not display accurate results after a replication has failed. The information will only be accurate after a successful replication.

Replication Status / Last Successful Sync / Average Throughput - You will only see these fields if you are connected to the target server.

Transmitted Data Size is based on the actual size transmitted after compression or with MicroScan performed.

Delta Sent represents the amount of data sent (or processed) based on the uncompressed size.

If compression and MicroScan are not enabled, the Transmitted Data Size will be the same as Delta Sent and the Current/Average Transmitted Data Throughput will be the same as Instantaneous/Average Throughput.

If compression or MicroScan is enabled and the data can be compressed or blocks of data have not changed and will not be sent, the Transmitted Data Size is going to be different from Delta Sent and both Current/Average Transmitted Data Throughput will be based on the actual size of data (compressed or Micro-scanned) sent over the network.
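
As a quick illustration of the relationship between these fields (illustrative only, not FalconStor code):

    def effective_reduction(delta_sent_mb, transmitted_mb):
        """Fraction of the changed data that never had to cross the wire."""
        return 1 - transmitted_mb / delta_sent_mb

    # e.g. 1000 MB of changed data that shrank to 400 MB on the wire
    print(f"{effective_reduction(1000, 400):.0%}")  # 60%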

Event Log

Replication events are also written to the primary server’s Event Log, so you can check there for status and operational information, as well as any errors.

Replication object

The Incoming and Outgoing objects under the Replication object display information about each server that replicates to this server or receives replicated data from this server. If the server’s icon is white, the partner server is "connected" or "logged in". If the icon is yellow, the partner server is "not connected" or "not logged in".


Delta Replication Status Report

The Delta Replication Status Report can be run from the Reports object. It provides a centralized view for displaying real-time replication status for all drives enabled for replication. It can be generated for an individual drive, multiple drives, source server or target server, for any range of dates. This report is useful for administrators managing multiple servers that either replicate data or are the recipients of replicated data.

This report only provides statistics for delta replication activity. Continuous Replication statistics are not available from the report but can be monitored in real-time within the FalconStor Management Console. The report can display information about existing replication configurations only or it can include information about replication configurations that have been deleted or promoted (you must select to view all replication activities in the database).

The following is a sample Delta Replication Status Report:


Configure Replication performance

Set global replication options

You can set global replication options that affect system performance during replication. While the default settings should be optimal for most configurations, you can adjust the settings for special situations.

To set global replication properties for a server:

1. Right-click on the server and select Properties.

2. Select the Performance tab.

Click the Configure Throttle button to configure target site(s)/server(s) to limit the maximum replication speed thus minimizing potential impact to network traffic.

Enable MicroScan - MicroScan analyzes each replication block on-the-fly during replication and transmits only the changed sections on the block. This is beneficial if the network transport speed is slow and the client makes small random updates to the disk. This global MicroScan option overrides the MicroScan setting for each individual virtual device.

Tune replication parameters

You can run a test to discover maximum bandwidth and latency for remote replication within your network.

1. Right-click on a server under Replication --> Outgoing and select Replication Parameters.

2. Click the Test button to see information regarding the bandwidth and latency of your network.

While this option allows you to measure the bandwidth and latency of the network between the two servers (replication source and target), it is not a tool to test the connectivity of the network. Therefore, if there is a network connection issue or connection failure, the Test button will not work (and should not be used for testing the network connection between the servers).


Assign clients to the replica disk

You can assign Clients to the replica disk in preparation for promotion or reversal. Clients will not be able to connect to the replica disk and the Client’s operating system will not see the replica disk until after the promotion or reversal. After the replica disk is promoted or a reversal is performed, you can restart the SAN Client to see the new information and connect to the promoted disk.

To assign Clients:

1. Right-click on an incoming replica resource under the Replication object and select Assign.

2. Select the Client to be assigned.

If the Client you want to assign does not appear in the list, you will need to exit the wizard and add the client by right-clicking on SAN Client and selecting Add.

3. Confirm all of the information and then click Finish to assign the Client.

Switch clients to the replica disk when the primary disk fails

Because the replica disk is used for disaster recovery purposes, clients do not have access to the replica. If a disaster occurs and the replica is needed, the administrator can promote the replica to become the primary disk so that clients can access it. The Promote option promotes the replica disk to a usable resource. Doing so breaks the replication configuration. Once a replica disk is promoted, it cannot revert back to a replica disk.

You must have a valid replica disk in order to promote it. For example, if a problem occurred (such as a transmission problem or the replica disk failing) during the first and only replication, the replicated data would be compromised and therefore could not be promoted to a primary disk. If a problem occurred during a subsequent replication, the data from the Snapshot resource will be used to recreate the replica from its last good state.

To promote a replica:

1. In the Console, right-click on an incoming replica resource under the Replication object and select Replication --> Promote.

If replication is not in a normal status, you will be prompted to roll back the replica to the last TimeMark. When this occurs, the wizard will not continue with the promotion and you will have to check the Event Log to make sure the rollback completes successfully. Once you have confirmed that it has completed successfully, you need to re-select Replication --> Promote to continue.

Notes:

• You cannot promote a replica disk while a replication is in progress.
• If you are using continuous replication, you should not promote a replica disk while write activity is occurring on the replica.
• If you just need to recover a few files from the replica, you can use the TimeMark/TimeView option instead of promoting the replica. Refer to ‘Use TimeMark/TimeView to recover files from your replica’ for more information.

2. Confirm the promotion and click OK.

3. Assign the appropriate clients to this resource.

4. Rescan devices or restart the client to see the promoted resource.

Recreate your original replication configuration

Your original primary disk became unusable due to a disaster and you have promoted the replica disk to a primary disk so that it can service your clients. You have now fixed, rebuilt, or replaced your original primary disk. Do the following to recreate your original replication configuration:

1. From the current primary disk, run the Replication Setup wizard and create a configuration that replicates from the current resource to the original primary server.

Make sure a successful replication has been performed to synchronize the data after the configuration is completed. If you select the Scan option, you must wait for this to complete before running another scan or replication.

2. Assign the appropriate clients to the new replica resource.

3. Detach all clients from the current primary disk.

• For Unix clients, type ./ipstorclient stop from /usr/local/ipstorclient/bin.

4. Right-click on the appropriate primary resource or replica resource and select Replication --> Reversal to switch the roles of the disks.

Afterwards, the replica disk becomes the new primary disk while the original primary disk becomes the new replica disk. The existing replication configuration is maintained but clients will be disconnected from the former primary disk.

For more information, refer to ‘Reverse a replication configuration’.

Note: Once the rollback process is triggered, it cannot be cancelled. If the process is interrupted, the promote replica process must be restarted from the beginning.


Use TimeMark/TimeView to recover files from your replica

While the main purpose of replication is for disaster recovery purposes, the TimeMark feature allows you to access individual files on your replica without needing to promote the replica. This can be useful when you need to recover a file that was deleted from the primary disk. You can simply create a TimeView of the replica, assign it to a client, and copy back the needed file.

Using TimeMark with a replica is also useful for “what if” scenarios, such as testing a new application on your actual, but not live, data.

In addition, using HyperTrac Backup with Replication and TimeMark allows you to back up your replica at your disaster recovery site without impacting any application servers.

For more information about using TimeMark and HyperTrac, refer to your HyperTrac Backup Accelerator User Guide.

Change your replication configuration options

You can change the following for your replication configuration:

• Static IP address of a remote target server
• Policies that trigger replication (watermark or schedule)
• Replication protocol
• Use of compression, encryption, or MicroScan
• Replication mode

To change the configuration:

1. Right-click on the primary disk and select Replication --> Properties.


The Replication Setup Options screen displays.

2. Select the appropriate tab to make the desired changes:

• The Target Server Parameters tab allows you to modify the host name or IP address of the target server.
• The Replication Policy tab allows you to modify the policies that trigger replication.
• The Replication Protocol tab allows you to modify the replication protocol.
• The Throughput Control tab allows you to enable throughput control.
• The Data Transmission Options tab allows you to select the following options:
  • Compress Data
  • Encrypt Data
  • Enable MicroScan
• The Replication Transfer Mode and TimeMark tab allows you to modify the Continuous mode and TimeMark options for the replication.

3. Make the appropriate changes and click OK.

Notes:

• If you are using continuous replication and you enable or disable encryption, the change will take effect after the next delta replication.

• If you are using continuous replication and you change the IP address of your target server, replication will switch to delta replication mode until the next regularly-scheduled replication takes place.


Suspend/resume replication schedule

You can suspend future replications from automatically being triggered by your replication policies (watermark, interval, time) for an individual virtual device. Once suspended, all of the device’s replication policies will be put on hold, preventing any future policy-triggered replication from starting. This will not stop a replication that is currently in progress and you can still manually start the replication process while the schedule is suspended.

When replication is resumed, replication will start at the normally scheduled interval based on the device’s replication policies.

To suspend/resume replication, right-click on the primary disk and select Replication --> Suspend (or Resume).

You can see the current settings by checking the Replication Schedule field on the Replication tab of the primary disk.

Stop a replication in progress

You can stop a replication that is currently in progress.

To stop a replication, right-click on the primary disk and select Replication --> Stop.

Manually start the replication process

To force a replication that is not scheduled, select Replication --> Synchronize.

Note: If replication is already occurring, this request will fail.


Set the replication throttle

Configuring the throttle allows you to limit the amount of bandwidth replication will use. This is useful when the WAN is shared among many applications and you do not want replication traffic to dominate the link. Setting this parameter affects the server to server relationship, which includes remote delta and remote continuous replication. Throttle does not affect local replication.

Throttle configuration involves three factors:

• Percentage - the amount of throttle relative to the selected Link-Type. Leaving the Throttle field set to 0 (zero) means the throttle is disabled. Setting the field to 100 percent means that the maximum bandwidth available with the selected link type will be used. Besides 0, valid input is 1 - 100%.
• Link-Type - the link type to be used.
• Window - the window can be scheduled by hours in the day.

Setting the throttle instructs the application to keep the network speed constant. Although network traffic bursts may still occur, depending on the environment, the throttle tries to remain at the set speed.

Throttle configuration settings are retained for each server even after replication has been disabled. When replication is enabled again, the previous throttle settings will be present.

Once you have set up replication and/or a target site, you can configure your throttle settings.

The throttle can be set and edited from various locations in the console as well as from the command line interface.

• To set the throttle, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Configure.


• Set the throttle via Server Properties --> Performance tab --> click the Configure Throttle button.

• Highlight the server or target site that you want to edit and click the Edit button. (Target sites are indicated by a “T” icon.)

Add a Target Site

Another way to set the throttle is by adding a target site. A target site is a group of sites that share the same throttle configuration. Target sites can contain existing target servers or can be empty.


Navigate to the Replication node in the console, right-click on Outgoing and select Target Site --> Add.

• Enter a name for the target site.
• Select the target servers by checking the boxes next to their host name.

Optional: You can also add a target server for future use by clicking the Add button and entering the new target server name. Any throttle configuration existing on the new target server will be replaced with the Target Site throttle configuration settings.

• Link Types: Select the link type for this target site. To add a custom link type, refer to ‘Manage Link Types’.

• The default throttle is zero (disabled). You may change the default throttle to a percentage (1 - 100) of the link type. This setting takes effect immediately when the default throttle is in use. If the window throttle is in use, the new default setting takes effect the next time throttle is triggered outside of the window.

• The Throttle Window contains the throttle schedule for business hours and the backup window. You can select one of these built-in schedules or add a custom window via Throttle --> Manage Throttle Window.

Once a target site has been added, it displays, along with the individual servers, in the FalconStor Management Console under the Replication --> Outgoing node. You can right-click on the target site in the console to delete, edit or export it.


Manage Throttle windows

Throttle windows allow you to limit read activity to the primary disk during peak processing times to avoid significant performance impact. Two throttle windows have been pre-populated for you - Business Hours and Backup Window. You can edit the pre-defined times to fit your business needs. You can also add custom throttle windows as needed. Throttle configuration settings persist when replication is disabled and re-enabled on the same server to server relationship.

For example, suppose a production server disk with replication enabled experiences heavy I/O between 9:00AM and 5:00PM. Replication adds to the read/write load because it must read from the primary disk. Since this may impact disk performance while replication is accessing the disk, you can define a throttle window between 9:00AM and 5:00PM to throttle the replication speed down. With a lower replication speed, replication accesses the disk less often, reducing the read load on the disk.

To manage throttle windows, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Throttle Windows.


Edit a Throttle window

To edit throttle windows times, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Throttle Windows. Then click the Edit button.

Add a throttle window

To add a new throttle window, click the Add button.

The Add Throttle Window screen displays, allowing you to add a unique name, start time and end time.

Time is entered in 24-hour format. For example, 5:00 p.m. would be entered as 17:00. Make sure the times do not overlap with an existing window. For example, if one window has an end time of 12:00, the next window must start at 12:01.
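The non-overlap rule above can be illustrated with a small sketch. The window names and the validation helper below are hypothetical and are not part of the product; the console performs its own checks.

```python
from datetime import datetime

def minutes(hhmm: str) -> int:
    """Convert a 24-hour 'HH:MM' string to minutes since midnight."""
    t = datetime.strptime(hhmm, "%H:%M")
    return t.hour * 60 + t.minute

def validate_windows(windows):
    """windows: list of (name, start, end) tuples; raise if any two overlap."""
    spans = sorted((minutes(start), minutes(end), name) for name, start, end in windows)
    for (s1, e1, n1), (s2, e2, n2) in zip(spans, spans[1:]):
        if s2 <= e1:                      # next window must start after the previous one ends
            raise ValueError(f"'{n2}' overlaps '{n1}'")
    return True

# Example: the two built-in windows plus a hypothetical custom evening window.
validate_windows([
    ("Business Hours", "09:00", "17:00"),
    ("Backup Window", "22:00", "23:59"),
    ("Evening Batch", "17:01", "21:59"),
])
print("no overlapping throttle windows")
```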

Delete a throttle window

You can also delete any custom (user-created) throttle window to cancel the schedule. Built-in throttle windows cannot be deleted. To delete a custom throttle window, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Throttle Windows. Then click the Delete button.

Throttle tab

Right-click on the target server or target site and click the Throttle tab for information on Link Type, default throttle, and any selected throttle windows.

Throttle and failover

Setting up throttle on a failover pair requires some additional considerations. Refer to the “Throttle and Failover” section for details.


Manage Link Types

To manage link types and speed, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Link Types.

The Manage Link Types screen displays all link types, along with the description and speed.

Throttle speed displays the maximum speed, not necessarily the actual speed. For example, a throttle speed of 30 Mbps indicates a speed of 30 Mbps or less. The speed is determined by multiplying the throttle percentage by the link type speed. For example, a default throttle of 30% of a 100 Mbps link type would be (30%) x (100 Mbps) = 30 Mbps.

Actual speed may or may not be evenly distributed across all Target Sites and servers. Actual speed depends on many factors, such as disk performance, network traffic, functions enabled (encryption, compression, MicroScan), and other processes in progress (TimeMark, Mirror, etc).
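As a worked sketch of the calculation above (throttle percentage multiplied by link type speed), assuming a hypothetical helper function and example link speeds:

```python
def throttle_ceiling_mbps(link_speed_mbps: float, throttle_percent: int) -> float:
    """Return the maximum replication speed; 0 percent means the throttle is disabled."""
    if throttle_percent == 0:
        return link_speed_mbps              # no throttle limit beyond the link itself
    if not 1 <= throttle_percent <= 100:
        raise ValueError("valid throttle values are 0 (disabled) or 1-100")
    return link_speed_mbps * throttle_percent / 100

# Example from the text: a 30% throttle on a 100 Mbps link type caps replication at 30 Mbps.
print(throttle_ceiling_mbps(100, 30))       # 30.0
```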


Add link types

If your link type is not listed in the pre-populated/built-in list, you can add a custom link type by navigating to the Replication node in the console, right-clicking on Outgoing and selecting Throttle --> Manage Link Types. Then click the Add button.

Then enter the link type, a brief description, and the speed in megabits per second (Mbps).

Edit link types

Custom link types can be modified by clicking the Edit button. However, built-in link types cannot be edited.

To edit a custom link type, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Link Types. Then click the Edit button.

Delete link types

Link types can be deleted as long as they are not currently in use by any target site or server. Custom link types can be deleted when no longer needed. Built-in link types cannot be deleted.

To delete a custom link type, navigate to the Replication node in the console, right-click on Outgoing and select Throttle --> Manage Link Types. Then click the Delete button.


Set replication synchronization priority

To set the synchronization priority for pending replications, select Replication --> Priority.

This allows you to prioritize the order in which devices/groups will begin replicating if they are scheduled to start at the same time. This option can be set for a single resource or a single group via the Replication submenu, or for multiple resources or groups from the context menu of the Replication Outgoing node.

Reverse a replication configuration

Reversal switches the roles of the replica disk and the primary disk. The replica disk becomes the new primary disk while the original primary disk becomes the new replica disk. The existing replication configuration is reset to the default. After the reversal, clients will be disconnected from the former primary disk.

To perform a role reversal:

1. Right-click on the appropriate primary resource or replica resource and select Replication --> Reversal.

2. Enter the New Target Server host name or IP address to be used by the new primary server to connect to the new target server for replication.

Notes:

• The primary and replica must be synchronized in order to reverse a replica. If needed, you can manually start the replication from the Console and re-attempt the reversal after the replication is completed.

• If you are using continuous replication, you have to disable it before you can perform the reversal.

• If you are performing a role reversal on a group, we recommend that the group have 40 or fewer resources. If there are more than 40 resources in a group, we recommend that multiple groups be configured to accomplish this task.


Reverse a replica when the primary is not available

Replication can be reversed from the replica server side even if the primary server is offline or is not accessible. When you reverse this type of replica, the replica disk will be promoted to become the primary disk and the replication configuration will be removed.

Afterwards, when the original primary server becomes available, you must repair the replica in order to re-establish a replication configuration. The original replication policy will be used/maintained after the repair.

Forceful role reversal

When the primary server is down and the replica is up, or when the primary server is up but corrupted and the replica is not synchronized, you can force a role reversal as long as no replication processes are running.

To perform a forceful role reversal:

1. Suspend the replication schedule.

If you are using Continuous Mode, disable it by right-clicking on the disk, selecting Replication --> Properties, and unchecking Continuous Mode on the Replication Transfer Mode and TimeMark tab under Replication Setup Options.

2. Right-click on the primary or replica server and select Replication --> Forceful Reversal.

3. Type YES to confirm the operation and then click OK.

4. Once the forceful role reversal is done, repair the promoted replica to establish the new connection between the new primary and replica servers.

Notes:

• If a primary disk is in a group but the group does not have replication enabled, the primary resource must be removed from the group before the replica repair can be performed.

• If you have CDP enabled on the replica and you want to perform a rollback, you can roll back before or after reversing the replica.

Notes:

• The forceful role reversal operation can be performed even if the CDP journal has unflushed data.

• The forceful role reversal operation can be performed even if data is not synchronized between the primary and replica server.

• The snapshot policy, TimeMark/CDP, and throttle control policy settings are not swapped after the repair operation for replication role reversal.


The replication repair operation must be performed from the NEW primary server.

5. Confirm the IP address and click OK.

The current primary disk remains as the primary disk and begins replicating to the recovered server.

After the repair operation is complete, replication will synchronize again either by schedule or by manual trigger. A full synchronization is performed if the replication was not synchronized prior to the forceful role reversal, and the replication policy from the original primary server will be used/updated on the new primary server.

If you want to recreate your original replication configuration, you will need to perform another reversal so that your original primary becomes the primary disk again.

Repair a replica

When performing a repair, the following status conditions may display:

Repair status - after forceful role reversal

• Valid - The server performing the repair has verified that the server is OK for repair. If there is a problem with the replica server, respective errors will show after repair is initiated.
• Invalid - The server performing the repair has reported that the repair cannot be processed. Make sure all devices involved are online and have no missing segments.
• TimeMark Rollback in Progress - The repair cannot be processed because one of the devices involved in the repair is currently performing a rollback.
• Not Configured for Replication - The repair cannot be processed because there is a problem with a device which is a member of a group. Make sure there are no extra members or missing members of the group.

Note: If the SAN resource is assigned to a client in the original primary server, it must be unassigned in order to perform the repair on the new primary.

Relocate a replica

The Relocate feature allows replica storage to be moved from the original replica server to another server while preserving the replication relationship with the primary server. Relocating reassigns ownership to the new server and continues replication according to the set policy. Once the replica storage is relocated to the new server, the replication schedule can be immediately resumed without the need to rescan the disks.

Before you can relocate the replica, you must import the disk to the new CDP or NSS appliance. Refer to Import a disk if you need more information.

Once the disk has been imported, open the source server, highlight the virtual resource that is being replicated, right-click and select Relocate.

Remove a replication configuration

Right-click on the primary disk and select Replication --> Disable. This allows you to remove the replication configuration on the primary and either delete or promote the replica disk on the target server at the same time.

When promoting a replica, the replication status must be normal. If the replication is not in a normal status, you will be prompted to roll back the replica to the last TimeMark. When this occurs, the wizard will not continue with the promotion and you will have to check the Event Log to make sure the rollback completes successfully. Once you have confirmed that it has completed successfully, you need to re-select Replication --> Disable --> Promote to continue.

If you choose to delete the replica, you will not be prompted to rollback.

Notes:

• You cannot relocate a replica that is part of a group.
• If you are using continuous replication, you must disable it before relocating a replica. Failure to do so will keep replication in delta mode, even after the next manual or scheduled replication occurs. You can re-enable continuous replication after relocating the replica.

Note: Once the rollback process is triggered, it cannot be cancelled. If the process is interrupted, the promote replica process must be restarted from the beginning.


Expand the size of the primary disk

Devices with replication configured can be expanded, with some limitations and restrictions. Expansion can only be initiated from the primary disk, and both disks will always have the same logical size. See the following examples of behaviors to expect when expanding different types of SAN resources; a sketch of these rules follows the table and note below. (A thin disk has the same behavior as a virtual disk.)

Refer to the ‘Expand a Service-Enabled Device’ section for more information regarding SED expansion.

Disk expansion behavior

• Virtual replicated to Virtual: You can select any expansion size up to the amount of storage space available for both the primary and replica disks.

• Virtual replicated to SED: You must expand the physical size of the replica disk from the storage prior to expanding the primary disk. You can only expand to the full physical size of the replica.

• SED replicated to Virtual: You must first expand the physical size of the primary disk from the storage. You can only expand to the full physical size of the primary.

• SED replicated to SED: You must first expand the physical size of both the primary and replica disks from the storage. You can only expand the primary disk to its full physical size when the physical disk size is equal to or smaller than that of the replica disk. If the replica disk is smaller than the primary disk, you can only expand the primary disk to the full physical disk size of the replica disk.

Note: Do not attempt to expand the primary disk during replication. Otherwise, the disk will expand but the replication will fail.
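The sketch below encodes the expansion rules from the table above as a hypothetical helper (sizes in GB; "virtual" includes thin disks). It is illustrative only and is not part of the product.

```python
def max_expandable_size_gb(primary_type, replica_type, free_space_gb=None,
                           primary_physical_gb=None, replica_physical_gb=None):
    """Return the largest size the primary disk can be expanded to for each pairing."""
    pair = (primary_type, replica_type)
    if pair == ("virtual", "virtual"):
        return free_space_gb                        # limited by free storage on both servers
    if pair == ("virtual", "sed"):
        return replica_physical_gb                  # expand the replica's physical disk first
    if pair == ("sed", "virtual"):
        return primary_physical_gb                  # expand the primary's physical disk first
    if pair == ("sed", "sed"):
        return min(primary_physical_gb, replica_physical_gb)
    raise ValueError("disk types must be 'virtual' or 'sed'")

# Example: an SED primary (500 GB physical) replicating to a smaller SED replica (400 GB)
# can only be expanded to 400 GB, the replica's full physical size.
print(max_expandable_size_gb("sed", "sed",
                             primary_physical_gb=500, replica_physical_gb=400))  # 400
```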


Replication with other CDP or NSS features

Replication and TimeMark

While enabling TimeMarks, you can set the Trigger Replication after TimeMark is taken option. This option is applicable if TimeMark and Replication are both enabled for that device/Group. If TimeMark is enabled for a Group, replication must also be enabled at the group level.

When this option is set, replication synchronization triggers automatically for that device or group when the TimeMark is created. If SafeCache or CDP is enabled, replication synchronization is triggered when the cache marker is flushed.

Since you cannot create TimeMarks on a replica device, if you enable this option for replica devices, it will only take effect after a role reversal.

Replication and Failover

If replication is in progress and a failover occurs at the same time, the replication will fail. After failover, replication will start at the next normally scheduled interval. This is also true in reverse, if replication is in progress and a recovery occurs at the same time.

Replication and Mirroring

When you promote the mirror of a replica resource, the replication configuration is maintained.

Depending upon the replication schedule, when you promote the mirror of a replica resource, the mirrored copy may not be an identical image of the replication source. In addition, the mirrored copy may contain corrupt data or an incomplete image if the last replication was not successful or if replication is currently occurring. Therefore, it is best to make sure that the last replication was successful and that replication is not occurring when you promote the mirrored copy.

Replication and Thin Provisioning

A disk with Thin Provisioning enabled can be configured to replicate to a normal SAN resource or another disk with Thin Provisioning enabled. The normal SAN resource can replicate to a disk with Thin Provisioning as long as the size of the SAN resource is equal to or greater than 10GB (the minimum permissible size of the thin disk).

Note: The timestamp of a TimeMark on a replica is the timestamp of the source.


Near-line Mirroring

Near-line mirroring allows production data to be synchronously mirrored to a protected disk that resides on a second storage server. You can enable near-line mirroring for a single SAN resource or multiple resources.

With near-line mirroring, the primary disk is the disk that is used to read/write data for a SAN Client and the mirrored copy is a copy of the primary. Each time data is written to the primary disk, the same data is simultaneously written to the mirror disk. TimeMark or CDP can be configured on the near-line server to create recovery points. The near-line mirror can also be replicated for disaster recovery protection.

If the primary disk fails, you can initiate recovery from the near-line server and roll back to a valid point-in-time.

[Figure: Near-line mirroring architecture - application servers, production IPStor server, near-line IPStor server, service-enabled disk, synchronous mirrors.]


Near-line mirroring requirements

The following are the requirements for setting up a near-line mirroring configuration:

• The primary server cannot be configured to replicate to the near-line server.
• At least one protocol (FC or iSCSI) must be enabled on the near-line server.
• If you are using the FC protocol for your near-line mirror, zone the appropriate initiators on your primary server with the targets on your near-line server. For recovery purposes, zone the appropriate initiators on your near-line server with the targets on your primary server.

Setup Near-line mirroring

You can enable near-line mirroring for a single SAN resource or multiple resources. To enable and set up near-line mirroring on one resource, follow the steps described below. To enable near-line mirroring for multiple resources, refer to ‘Enable Near-line Mirroring on multiple resources’.

1. Right-click on the resource and select Near-line Mirror --> Add.

The Welcome screen displays.

2. If you are enabling one disk, specify if you want to enable near-line mirroring for the primary disk or just prepare the near-line disk.

When you create a near-line disk, the primary server performs a rescan to discover new devices. If you are configuring multiple near-line mirrors, the scans can become time consuming. Instead, you can select to prepare the near-line disk now and then manually rescan physical resources and discover new resources on the primary server. Afterwards, you will have to re-run the wizard and select the existing, prepared disk.


If you are enabling near-line mirroring for multiple disks, the above screen will not display.

3. Select the storage pool or physical device(s) for the near-line mirror’s virtual header information.

4. Select the server that will contain the near-line mirror.


5. Add the primary server as a client of the near-line server.

You will go through several screens to add the client:

• Confirm or specify the IP address the primary server will use to connect to the near-line server as a client. This IP address is used for iSCSI; it is not used for Fibre Channel.

• Determine if you want to enable persistent reservation for the client (primary server). This allows clustered clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.

• Select the client’s protocol(s). If you select iSCSI, you must indicate if this is a mobile client.

• (FC protocol) Select or add WWPN initiators for the client.
• (FC protocol) Specify if you want to use Volume Set Addressing (VSA). VSA is used primarily for addressing virtual buses, targets, and LUNs. If your client requires VSA to access a broader range of LUNs, you must enable it for the client.

• (iSCSI protocol) Select the initiator that this client uses. If the initiator does not appear, you may need to rescan. You can also manually add it, if necessary.

• (iSCSI protocol) Add/select users who can authenticate for this client.

6. Confirm the IP address of the primary server.

Confirm or specify the IP address the near-line server will use to connect to the primary server when a TimeMark is created, if snapshot notification is used. If needed, you can specify a different IP address from what you used when you added the primary server as a client of the near-line server.


7. Determine if you want to monitor the mirroring process.

If you select to monitor the mirroring process, the I/O performance will be checked to decide if I/O to the mirror disk is lagging beyond an acceptable limit. If it is, mirroring will be suspended so it does not impact the primary storage.

Monitor mirroring process every n seconds - Specify how frequently the system should check the lag time (delay between I/O to the primary disk and the mirror). Checking more or less frequently will not impact system performance. On systems with very low I/O, a higher number may help get a more accurate representation.

Maximum lag time for mirror I/O - Specify an acceptable lag time (1 - 1000 milliseconds) between I/Os to the primary disk and the mirror.

Suspend mirroring - If the I/O to the mirror disk is lagging beyond the specified level of acceptance, mirroring will be suspended when the following conditions are met:

When the failure threshold reaches n% - Specify what percentage of I/O must fail the lag time test before mirroring is suspended. For example, suppose you set the percentage to 10% and the maximum lag time to 15 milliseconds. During the test period, 100 I/Os occurred and 20 of them took longer than 15 milliseconds to update the mirror disk. With a 20% failure rate, mirroring would be suspended.


When the outstanding I/Os reach n - Specify the minimum number of I/Os that can be outstanding. When the number of outstanding I/Os is above the specified number, mirroring is suspended. (A sketch of this suspension logic follows below.)
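The following sketch illustrates the suspension conditions described above, assuming both conditions must be met before mirroring is suspended. The function, its inputs, and the sample values are illustrative assumptions, not the product's implementation.

```python
def should_suspend_mirroring(lag_times_ms, max_lag_ms, failure_threshold_pct,
                             outstanding_ios, max_outstanding_ios):
    """Suspend when too many I/Os exceed the lag limit and too many I/Os are outstanding."""
    if not lag_times_ms:
        return False
    slow = sum(1 for lag in lag_times_ms if lag > max_lag_ms)
    failure_rate_pct = 100.0 * slow / len(lag_times_ms)
    return (failure_rate_pct >= failure_threshold_pct
            and outstanding_ios >= max_outstanding_ios)

# Example from the text: 100 I/Os observed, 20 slower than 15 ms (a 20% failure rate),
# which exceeds a 10% threshold; assume the outstanding-I/O condition is also met.
sample_lag_times = [5] * 80 + [30] * 20
print(should_suspend_mirroring(sample_lag_times, max_lag_ms=15, failure_threshold_pct=10,
                               outstanding_ios=12, max_outstanding_ios=8))   # True
```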

8. If mirroring is suspended, specify when re-synchronization should be attempted.

Re-synchronization can be started based on time (every n minutes/hours) and/or I/O activity (when I/O is less than n KB/MB). If you select both, the time will be applied first before the I/O activity level. If you do not select either, the mirror will stay suspended until you manually synchronize it.

If you select one or both re-synchronization methods, you must also specify how often the system should retry the re-synchronization if it fails to complete. If you only select the second resync option, the default will be 10 minutes.

When the system initiates re-synchronization, it does not check lag time and mirroring will not be suspended if there is too much lag time.

If you manually resume mirroring, the system will monitor the process during synchronization and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.

Note: If a mirror becomes out of sync because of a disk failure or an I/O error (rather than having too much lag time), the mirror will not be suspended. Because the mirror is still active, re-synchronization will be attempted based on the global mirroring properties that are set for the server. Refer to ‘Set global mirroring options’ for more information.

Note: If CDP/NSS is restarted or the server experiences a failover while attempting to resynchronize, the mirror will remain suspended.


9. Select how you want to create this near-line mirror resource.

• Custom lets you select which physical device(s) and which segments to use and lets you designate how much space to allocate from each.

• Express lets you select which physical device(s) to use and automatically creates the near-line resource from the available hard disk segments.

• Select existing lets you select an existing virtual device that is the same size as the primary or a previously prepared (but not yet created) near-line mirror resource. (The option to only prepare a near-line disk appeared on the first Near-line Mirror wizard dialog.)


10. Enter a name for the near-line resource.

11. (iSCSI protocol) Select the iSCSI targets to assign.

Note: Do not change the name of the near-line resource if the server is a near-line mirror or configured with near-line mirroring.


12. Confirm that all information is correct and then click Finish to create the near-line mirroring configuration.

To set the near-line mirror throughput speed/throttle for near-line mirror synchronization, refer to ‘Set mirror throttle’.


Enable Near-line Mirroring on multiple resources

You can enable near-line mirroring on multiple SAN resources.

1. Right-click on SAN Resources and select Near-line Mirror --> Add.

The Enable Near-line Mirroring wizard launches.

2. Click Next at the Welcome screen.

The list of available resources displays.

3. Select the resources to be Near-line Mirror resources or click the Select All button.

4. Select the storage pool or physical device(s) for the near-line mirror’s virtual header information.

5. Select the server that will contain the near-line mirrors.

6. Continue to set up near-line mirroring as described in ‘Setup Near-line mirroring’.

What’s next?

Near-line disks are prepared but not created

If you prepared one or more near-line disks and are ready to create near-line mirrors, you must manually rescan physical resources and discover new devices on the primary server. Afterwards, you must re-run the Near-line Mirror wizard for each primary disk and select the existing, prepared disk. This will create a near-line mirror without re-scanning the primary server.

Near-line mirror is created

After creating your near-line mirror, you should enable TimeMark or CDP on the near-line server. This way your data will have periodic snapshots and you will be able to roll back your data when needed.

For disaster recovery purposes, you can also enable replication for a near-line disk to replicate the data to another location.


Check near-line mirroring status

You can see the current status and properties of your mirroring configuration by checking the General tab for a mirrored resource.



Near-line recovery

The following is required before recovering data:

• If you are using the FC protocol, zone the appropriate initiators on your near-line server with the targets on your primary server.

• You must unassign the primary disk from its client(s).
• If enabled, disable mirroring for the near-line disk.
• If enabled, suspend replication for the near-line disk.
• All SAN resources must be online and accessible.
• If the near-line mirror is part of a group, the near-line mirror must leave the group prior to recovery.
• TimeMark must be enabled on the near-line resource and the near-line replica, if one exists.
• At least one TimeMark must be available to roll back to during recovery.
• If you have been using CDP and want to roll back to a specific point-in-time, you may want to create a TimeView first and view it to make sure it contains the appropriate data that you want.

Recover data from a near-line mirror

Recovery is done in the console from the near-line resource.

1. Right-click on the near-line resource and select Near-line Mirror Resource --> Start Recovery.

You can also start recovery by selecting TimeMark --> Rollback.

2. Add the near-line server as a client of the primary server.

You will go through several screens to add the client:

• Confirm or specify the IP address the near-line server will use to connect to the primary server as a client. This IP address is used for iSCSI; it is not used for Fibre Channel.

• Determine if you want to enable persistent reservation for the client (near-line server). This allows clustered clients to take advantage of Persistent Reserve/Release to control disk access between various cluster nodes.

• Select the client’s protocol(s). If you select iSCSI, you must indicate if this is a mobile client.

• (FC protocol) Select or add WWPN initiators for the client.
• (FC protocol) Specify if you want to use Volume Set Addressing (VSA). VSA is used primarily for addressing virtual buses, targets, and LUNs. If your client requires VSA to access a broader range of LUNs, or if your storage devices use VSA, you must enable it for the client.
• (iSCSI protocol) Select the initiator that this client uses. If the initiator does not appear, you may need to rescan. You can also manually add it, if necessary.
• (iSCSI protocol) Add/select users who can authenticate for this client.

Note: If the near-line recovery fails due to a TimeMark rollback failure, device discovery failure, etc., you can retry the near-line recovery by selecting Near-line Mirror Resources --> Retry Recovery on the Near-line Disk.

3. Click OK to begin the recovery process.

4. Select the point-in-time to which you want to roll back.

Rollback restores your drive to a specific point in time, based on an existing TimeMark or your CDP journal. After rollback, your drive will look exactly like it did at that point in time.

You can select to roll back to any TimeMark. If this resource has CDP enabled and you want to select a specific point-in-time, type in the exact time.

Once you click OK, the system will roll back the near-line mirror to the specified point-in-time and will then synchronize the data back to the primary server. When the process is completed, the console displays the mirror synchronization status.


5. When the Mirror Synchronization Status shows the status as Synchronized, you can select Near-line Mirror Resource --> Resume Config to resume the configuration of the near-line mirror.

This re-sets the original near-line configuration so that the primary server can begin mirroring to the near-line mirror.

6. Re-assign your primary disk to its client(s).

Recover data from a near-line replica

Another type of recovery is recovering from the TimeMark of the near-line replica disk. The following is required before recovering data from a near-line replica:

• All of the clients assigned to the primary disk must be removed.
• The near-line disk and the replica disk must be in sync, as required for role reversal.
• If the Near-line Disk is already enabled with a mirror, the mirror must be removed first.

Recovery is performed via the console from the near-line resource as described below:

1. Right-click on the near-line resource and select Replication --> Recovery --> Prepare/Start.


2. Click OK to update the configuration for recovery.

3. Click OK to perform role reversal.


The Recovery from Near-line Replica TimeMark screen displays.

4. Select the TimeMark to roll back to in order to restore your drive to a specific point in time. Once you click OK, the system will roll back the near-line mirror to the specified point-in-time.

5. Perform Replication synchronization from the REVERSED Near-line Replica Disk to the near-line disk after successful rollback.

This will synchronize the rollback data from the REVERSED replica to the near-line disk and the primary disk, since the near-line disk is now the replica and the primary disk is the mirror of the near-line disk.

To do this:

• Right-click on the REVERSED Near-line replica disk.
• Select Replication --> Synchronize.

6. Perform Role Reversal to switch the Near-line Disk back to being the Replication Primary Disk and resume the Near-line Mirroring configuration.

To do this:

• Right-click on the REVERSED Near-line Replica Disk.
• Select Replication --> Recovery --> Resume Config.


The Resume Near-line Mirroring from Near-line Replica Recovery screen displays.

7. Click OK to switch the role of the Near-line disk and the Near-line replica and resume near-line mirroring.

8. Re-assign your primary disk to its client(s).

Recover from a near-line replica TimeMark using forceful role reversal

Recovery from a near-line replica TimeMark with forceful role reversal can be used when the near-line server is not available. However, this only works if both the Near-line Disk and the Near-line Replica have TimeMark enabled.

To prepare for recovery:

• Suspend replication on the near-line server.
• Unassign all of the clients from the primary disk on the primary server.
• Suspend the near-line mirror on the primary disk to prevent mirror synchronization of the near-line disk.
• Suspend CDP on the near-line disk and the near-line replica.

To recover using this method:

1. Perform forceful role reversal on Near-line Replica

• Right-click on the Replica disk and select Replication --> Reversal.
• The procedure will fail because the server is not available. Click OK.
• Click the Cancel button at the login screen to exit the login dialog.


The Forceful Replication Role Reversal screen displays.

Type Yes to confirm and click OK.

• Click OK to switch the roles of the replica disk and primary server.

The Replica is promoted.

2. Perform TimeMark rollback on the reversed Near-line Replica.

• Right-click on the reversed Near-line Replica and select TimeMark --> Rollback.


• Select the TimeMark you are rolling back to and click OK.

3. Perform Repair Replica from the reversed Near-line Replica after the near-line server is online.

4. Right-click on the reversed Near-line Replica and select Replication --> Repair

5. Perform Synchronization on the reversed Near-line Replica.

Right-click on the reversed Near-line Replica and select Replication --> Synchronize

6. Once synchronization is finished, perform role reversal from the reversed near-line replica.

Right-click on the reversed Near-line Replica and select Replication --> Reversal

Note: You must set the near-line disk to Recovery Mode before repairing the replica.


7. When the Mirror Synchronization Status shows the status as Synchronized, you can select Near-line Mirror Resource --> Resume Config to resume the configuration of the near-line mirror.

This re-sets the original near-line configuration so that the primary server can begin mirroring to the near-line mirror.

8. Once near-line mirror configuration has resumed, you can resume the Near-line Mirror, Replication, and CDP.

9. Re-assign your primary disk to its client(s).

Swap the primary disk with the near-line mirrored copy

Right-click on the primary SAN resource and select Near-line Mirror --> Swap to reverse the roles of the primary disk and the mirrored copy. You will need to do this if you are going to perform maintenance on the primary disk or if you need to remove the primary disk.

Manually synchronize a near-line mirror

The Synchronize option re-synchronizes a mirror and restarts the mirroring process once it is synchronized. This is useful if one of the mirrored disks has a minor failure, such as a power loss.

1. Fix the problem (turn the power back on, plug the drive in, etc.).

2. Right-click on the primary resource and select Near-line Mirror --> Synchronize.

During the synchronization, the system will monitor the process and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.

Rebuild a near-line mirror

The Rebuild option rebuilds a mirror from beginning to end and starts the mirroring process once it is synchronized.

To rebuild the mirror, right-click on a primary resource and select Near-line Mirror --> Rebuild. After rebuilding the mirror, swap the mirror so that the primary server can service clients again. You can see the current settings by checking the Mirror Synchronization Status field on the General tab of the resource.

Note: When swapping the primary disk with the near-line mirrored copy, the mirror will swap back to the primary disk if the mirror is in sync and a set period of time has passed. This is done to reduce the amount of load on the disk from the near-line server. The time to swap back is based on the global sync option in the console.


Expand a near-line mirror

Use the Expand SAN Resource Wizard to expand the near-line mirror. Make sure the near-line server is up and running. If the near-line server is down, you will not be able to expand the primary disk or the near-line mirror disk. However, if the primary server is down, you can still expand the near-line mirror and the primary disk will be expanded in the next mirror expansion.

You can expand the near-line mirror with or without the near-line replica server. If a near-line replica server exists, both the near-line mirror and the replica disk will be expanded at the same time.

To expand a virtualized disk:

1. Right-click on the primary disk or near-line mirror and select Expand.

If you want to enlarge the primary disk, you will need to enlarge the mirrored copy to the same size. The Expand SAN Resource Wizard will automatically lead you through expanding the near-line mirror disk first.

The Expand SAN Resource Wizard screen displays.


2. Select the physical storage.

3. Select an allocation method and specify the size to allocate.


The near-line mirror and the replica expand.

4. Click Finish to confirm the expansion of the near-line mirror and the replica.

You are automatically routed back to the beginning of the Expand SAN Resource Wizard to expand the primary server.

Expand a service-enabled disk

To expand the service-enabled disk, the near-line mirror expand size must be greater than or equal to the primary disk expand size. You must expand the storage size on the physical disk first. Then go to the console and rescan the physical disk.

Once you have performed a rescan of the physical disk, follow the same steps described above to expand the disk.

Note: Thin provisioning is not supported with near-line mirroring.


Suspend/resume near-line mirroring

When you manually suspend a mirror, the system will not attempt to re-synchronize, even if you have a re-synchronization policy. You will have to resume the mirror in order to synchronize.

When you resume mirroring, the mirror is synchronized before mirroring is resumed. During the synchronization, the system will monitor the process and check lag time. Depending upon your monitoring policy, mirroring will be suspended if the lag time gets above the acceptable limit.

To suspend/resume mirroring for a resource:

1. Right-click on a primary resource and select Near-line Mirror --> Suspend (or Resume).

You can see the current settings by checking the Mirror Synchronization Status field on the General tab of the resource.

Change your mirroring configuration options

Set global mirroring options

You can set global mirroring options that affect system performance during all types of mirroring (near-line, synchronous, or asynchronous). While the default settings should be optimal for most configurations, you can adjust the settings for special situations.

To set global mirroring properties for a server:

1. Right-click on the server and select Properties.

2. Select the Performance tab.

Synchronize Out-of-Sync Mirrors - Determine how often the system should check and attempt to resynchronize active out-of-sync mirrors, how often it should retry synchronization if it fails to complete, and whether or not to include replica mirrors. These settings will only be used for active mirrors. If a mirror is suspended because the lag time exceeds the acceptable limit, that re-synchronization policy will apply instead.

The mirrored devices must be the same size. If you want to enlarge the primary disk, you will need to enlarge the mirrored copy to the same size. When you use the Expand SAN Resource Wizard, it will automatically lead you through expanding the near-line mirror disk first.


Change properties for a specific primary resource

You can change the following near-line mirroring configuration for a primary resource:

• Policy for monitoring the mirroring process
• Conditions for re-synchronization
• Throughput control policies

To change the configuration:

1. Right-click on a primary resource and select Near-line Mirror --> Properties.

2. Make the appropriate changes and click OK.

Change properties for a specific near-line resource

For a near-line mirroring resource, you can only change the IP address that is used by the near-line server to connect to the primary server.

To change the configuration:

1. Right-click on a near-line resource and select Near-line Mirror Resource --> Properties.

2. Make the appropriate change and click OK.

Remove a near-line mirror configuration

You can remove a near-line mirror configuration from the primary or near-line mirror resource(s).

From the primary server, right-click on the primary resource and select Near-line Mirror --> Remove.

From the near-line server, right-click on the near-line resource and select Near-line Mirror Resource --> Remove.


Recover from a near-line mirroring hardware failure

Replace a failed disk

If one of the mirrored disks has failed and needs to be replaced:

1. Right-click on the resource and select Near-line Mirror --> Remove to remove the mirroring configuration.

2. Physically replace the failed disk.

3. Re-run the Near-line Mirroring wizard to create a new mirroring configuration.

If both disks fail

If a disaster occurs at the site where the primary and near-line server are housed, it is possible to recover both disks if you had replication configured for the near-line disk to a remote location.

In this case, after removing the mirroring configuration and physically replacing the failed disks, you can perform a role reversal to replicate all of the data back to the near-line disk.

Afterwards, you can recover the data from the near-line mirror back to the primary disk.

Fix a minor disk failure

If one of the mirrored disks has a minor failure, such as a power loss:

1. Fix the problem (turn the power back on, plug the drive in, etc.).

2. Right-click on the primary resource and select Near-line Mirror --> Synchronize.

This re-synchronizes the disks and restarts the mirroring.

If the near-line server is set up as a failover pair and is in a failed state

If you are performing a near-line recovery and the near-line server is set up as a failover pair, always add the first and second nodes of the failover set to the primary for recovery.

1. Select the proper initiators for recovery

2. Assign both nodes back to the primary for recovery.

Note: There are cases where the server may not show up in the list because the machine may be down and the particular port is not logged into the switch. In this situation, you must know the complete WWPN of your recovery initiator(s). This is important in cases where you need to manually enter the WWPN into the recovery wizard to avoid any adverse effects during the recovery process.


Replace a disk that is part of an active near-line mirror

If you need to replace a disk that is part of an active near-line mirror storage, take the primary disk offline first. Then follow the steps below. If the primary server is part of a High Availability (HA) set, take the disks offline for both servers before proceeding.

1. If you need to replace the primary disk, right-click on the primary resource and select Near-line Mirror --> Swap to reverse the roles of the disks.

2. Take the original primary disk (now the mirror disk) offline. If the primary server is part of a High Availability (HA) set, take the disks offline for both servers before proceeding.

3. Select Near-line Mirror --> Replace Primary Disk.

Select Rescan Physical Resources from the console if the Replace Primary Disk option is not available.

4. Replace the disk.

5. Synchronize the mirror and swap the disks to reverse their roles.

Set Recovery Mode

The Set Recovery Mode option should only be used when recovering data from a near-line replica TimeMark using forceful role reversal.


ZeroImpact Backup

FalconStor’s ZeroImpact Backup Enabler allows you to perform a local raw device tape backup/restore of your virtual drives.

A raw device backup is a low-level backup or full copy request for block information at the volume level. Linux’s dd command generates a low-level request.

Examples of Linux applications that have been tested with the storage server to perform raw device backups include BakBone’s NetVault version 7.42 and Symantec Veritas NetBackup version 6.0.

Using the FalconStor ZeroImpact Backup Enabler with raw device backup software eliminates the need for the application server to play a role in backup and restore operations. Application servers on the SAN benefit from better performance and the elimination of overhead associated with backup/restore operations because the command and data paths are rendered exclusively local to the storage server. This results in optimal data transfer between the disks and the tape, and is the only way to achieve net transfer rates that are limited only by the disk’s or tape’s engine. The backup process automatically leverages the FalconStor snapshot engine to guarantee point-in-time consistency.

To ensure full transactional integrity, this feature integrates with FalconStor Snapshot Agents and the Group Snapshot feature.

Configure ZeroImpact backup

You must have a Snapshot Resource for each virtual device you want to back up. If you do not have one, you will be prompted to create one. Refer to Create a Snapshot Resource for more information.

1. Right-click on the SAN resource that you want to back up and select Backup --> Enable.

Note: There is a maximum of 255 virtual devices that can be enabled for ZeroImpact backup.


2. Enter a raw device name for the virtual device that you want to back up.

3. Configure the backup policy.

Use an existing TimeMark snapshot - (This option is only valid if you are using FalconStor’s TimeMark option on this SAN resource.) If a TimeMark exists for this virtual device, that image will be used for the backup. It may or may not be a current image at the time backup is initiated. If a TimeMark does not exist, a snapshot will be taken.

Create a new snapshot - A new snapshot will be created for the backup, ensuring the backup will be made from the most current image.


4. Determine how long to maintain the backup session.

Each time a backup is requested by a third-party backup application, the storage server creates a backup session. Depending upon the snapshot criteria set on the previous window, a snapshot may be taken at the start of the backup session. (If the resource is part of a group, snapshots for all resources in the group will be taken at the same time.) Subsequently, each raw device is opened for backup and then closed. Afterwards, the backup application may verify the backup by comparing the data on tape with that of the snapshot image created for this session. Therefore, it is important to maintain the backup session until the verification is complete. The storage server cannot tell how long your backup application needs to rewind the tape and compare the data, so you must select an option on this screen indicating how long the storage server is to maintain the session. The session length only applies to backups (reading from a raw device), not restores (writing to a raw device). The actual session will end within 60 seconds of the session length specified.

Absolute session length - This option maintains the backup session for a set period of time from the start of the backup session. Use this option when you know approximately how long the backup operation will take. This option can also be used to limit the length of time that a backup can run. The Backup operation will terminate when the Absolute Session Length timeout is reached (whether or not Backup has completed). An Event message is logged that the backup terminated when the Absolute Session Length timeout was reached.

Relative session length - This option maintains the backup session for a period of time after the backup completes (the last raw device is opened and closed). This is more flexible than the absolute session length since it may be difficult to estimate how long a backup will take for all devices. With relative time, you can estimate how long to wait after the last device is backed up. If there is a problem during the backup, and the backup cannot complete, the Inactivity timeout tells the storage server how long to wait before ending the backup session.

5. Confirm all information and click Finish to enable backup.


Back up a CDP/NSS logical resource using dd

Below are procedures for using Linux’s dd command to perform a raw device backup. Refer to the documentation that came with your backup software if you are using a backup application to perform the backup.

1. Determine the raw device name of the virtual device that you want to back up.

You can find this name from the FalconStor Management Console. It is displayed on the Backup tab when you highlight a specific SAN resource.

2. Execute the following command on the storage server:

dd if=/dev/isdev/kisdev# of=/dev/st0 bs=65536

where kisdev# refers to the raw device name of the logical resource.

st0 is the tape device. If you have multiple tape devices, substitute the correct number in place of the zero. You can verify that you have selected the right tape device by using the command: tar -xvf /dev/st0, where 0 is replaced by the number of your tape device.

bs=65536 sets the block size to 64K to achieve faster performance.

You can also back up a logical resource to another logical resource. Prior to doing so, all target logical resources must be detached from the client machine(s), and have backup enabled so that the raw device name for the logical resource can be used instead of specifying st0 for the tape device.
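For example, a disk-to-disk copy using hypothetical raw device names kisdev5 (the source) and kisdev9 (a backup-enabled target of the same size) might look like the following; substitute the raw device names shown on the Backup tab for your own resources:

dd if=/dev/isdev/kisdev5 of=/dev/isdev/kisdev9 bs=65536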

When the backup is finished, you will only see one logical resource listed in the Console. This is caused by the fact that when you reserve a hard drive for use as a virtual device, the storage server writes partition information to the header and the Console uses this information to recognize the hard drive. Since a Linux dd will do an exact copy of the hard drive, this partition information will exist on the second hard drive, will be read by the Console, and only one drive will be shown. If you need to make a usable copy of a virtual drive, you should use FalconStor’s Snapshot Copy option.



Restore a volume backed up using ZeroImpact Backup Enabler

You will need to do the following in order to restore an entire volume that was backed up with the ZeroImpact Backup Enabler.

1. Unassign the volume you will be restoring from the SAN client to which it attaches.

This ensures that the client cannot change data while the restore is taking place.

2. Before you start the restore, suspend replication and disable TimeMark.

These can hamper the performance of the restore. Before you disable TimeMark, be sure to record the current policies. This can be done by right-clicking on the virtual drive and selecting TimeMark/CDP --> Properties.

3. Once the restore is complete, resume replication and re-enable TimeMark, if necessary.
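If the volume was originally written to tape with the dd command shown earlier in this chapter, the restore is essentially the reverse operation. A minimal sketch, assuming tape device st0 and a hypothetical raw device name kisdev#; confirm the raw device name on the Backup tab before writing to it:

dd if=/dev/st0 of=/dev/isdev/kisdev# bs=65536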


Multipathing

The Multipathing option may not be available in all IPStor, CDP, and NSS versions. Check with your vendor to determine the availability. This option allows the storage server to intelligently distribute I/O traffic across multiple Fibre Channel (FC) ports to maximize efficiency and enhance system performance.

Because it uses parallel active storage paths between the storage server and storage arrays, CDP/NSS can transparently reroute the I/O traffic to an alternate storage path to ensure business continuity in the event of a storage path failure.

Multipathing is possible due to the existence of multiple HBAs in the storage server and/or multiple storage controllers in the storage systems that can access the same physical LUN.

The multiple paths cause the same LUN to have multiple instances in the storage server.


Load distribution

Automatic load distribution allows for two or more storage paths to be simultaneously used for read/write operations, enhancing performance by automatically and equally dispersing data access across all of the available active paths.

Preferred paths

Some storage systems support the concept of preferred paths, which means the system determines the preferred paths and provides the means for the storage server to discover them.


Path management

From the FalconStor Management Console, you can specify a preferred path for each physical device. Right-click on the device and select Alias.

The Path Status can be Standby - Active (passive) or load-balancing (Active). Changes to the active path configuration become effective immediately, but are not saved permanently until you use the System Preferred Path --> Save option.

Each path has either a good or bad state. In most cases when the deployment is an active/passive clustered pair of an NSS Gateway or NSS HC acting as a gateway, there are two load-balancing groups.

• Single load-balancing group: Once the path is determined to be defective, it will be removed from the load-balanced group and will not be re-used after the path is restored unless there are no more good paths available or a manual rescan is performed. If either occurs, the path will be added back to the load-balanced group.

• Two load-balancing groups: If there are two load-balanced groups (one is active and the other is passive) for the physical device, then when there are no more good paths left in the active load-balanced group, the device will fail over to the passive load-balancing group.


You can see multipathing information from the console by checking the Alias tab for a LUN (under Fibre Channel Devices).

For each device, you see the following:

• Path Status: Current, Standby Active, Standby Passive, or load-balancing
• Current: Displays if only one path is being used.
• Standby Active: Displays when a path is in the active group and is ready. A rescan from the console will make it load-balanced.
• Standby Passive: Displays for all passive paths.
• load-balancing: Displays for all active paths across which the I/O is being balanced.
• Standby Passive path(s) cannot be used until the LUN is trespassed. The load is then balanced across the standby passive paths and the earlier “load-balanced” paths now become standby passive.

• Connectivity status - indicates whether the device is connected or disconnected.

The SCSI Devices tab displays a table with sizing information. If you are using alias paths for your multi-path Fibre Channel devices, the device size displays as N/A.

Only the actual path size is calculated in order to provide an accurate calculation of actual size. The icon for a device using an alias path displays in black and white.

For a multi-path device, all SCSI Devices can display under one specific Fibre Channel Adapter. This does not mean load balancing is inactive; the adapter number is just a placeholder.


Command Line Interface

The Command Line Interface (CLI) is a simple interface that allows client machines to perform some of the more common functions currently performed by the FalconStor Management Console. Administrators can use the CLI to automate many tasks, as well as integrate CDP/NSS with their existing management tools.

The CLI utility can be downloaded from the FalconStor website (on the customer support portal and TSFTP) under the SAN client category.

Install and configure the CLI

The CLI is installed as part of the CDP/NSS Client installation. Once installed, a path must be set up for Windows clients in order to be able to use the CLI. The path can be set up from the Windows Desktop by performing the following steps:

1. Right-click My Computer and select Properties --> Advanced system settings --> Environment Variables button.

2. Highlight the Path variable in the System Variables box, click the Edit button, and add the following to the end of the existing path: ;c:\Program Files\FalconStor\IPStor\Client

3. Click OK to save and exit.
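If you prefer to script this change, the path can also be appended from an elevated command prompt. A minimal sketch using the Windows setx utility; note that setx may truncate very long PATH values, so the graphical method above is the safer choice:

setx PATH "%PATH%;c:\Program Files\FalconStor\IPStor\Client"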

For Linux, Solaris, AIX, and HP-UX clients, the path is automatically set during the Client installation. However, Linux, Solaris, AIX, and HP-UX client users must exit and restart their shell at least once after installing the client software so that the new environment takes effect before using the CLI.

Use the CLI

CLI command usage help can be obtained by typing: iscli [help] [<command>] [server parameters]. To run a CLI command, type: iscli <command> <parameters>

Type iscli at a command line to display a list of the existing commands.

For example: c:\iscli

These commands must be combined with the appropriate long or short arguments (for example, long: --server-name servername; short: -s servername).

If you type the command name (for example, c:\iscli getvdevlist), a list of arguments will be displayed for that command.

Refer to Command Line Interface (CLI) error codes in the Troubleshooting / FAQs section for a list of CLI error codes.

Note: You should not have a console connected to your storage server when you run CLI commands; you may see errors in the syslog if a console is connected.


Common arguments

The following arguments are used throughout the CLI. For each, a long and short variation is included. You can use either one. The short arguments ARE case sensitive. For arguments that are specific to each command, refer to the section for that command.

Short Argument Long Argument Value/Description

-s --server-name storage server Name (hostname or IP address). In order to use the hostname, the server name has to be resolvable on the client side and server side.

-u --server-username storage server Username

-p --server-password storage server User Password

-S --target-name Storage Target Server Name (hostname or IP address)

-U --target-username Storage Target Server Username (for replication commands)

-P --target-password Storage Target Server User Password (for replication commands)

-c --client-name Storage Client Name

-v --vdevid Storage Virtual Device ID

-v --source-vdevid storage server Source Virtual Device ID

-V --target-vdevid FalconStor Target Virtual Device ID

-a --access-mode Client Access Mode to Virtual Device

-f --force Force the deletion of the virtual device

-n --vdevname Virtual device name

-X --rpc-timeout Specify a number between 1 and 30000 seconds for the RPC timeout. The default is 30 seconds if not specified.

Note: You only need to use the --server-username (-u) and --server-password (-p) arguments when you log into a server. You do not need them for subsequent commands on the same server during your current session.
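For example, a typical session might look like the following, assuming a storage server at 10.1.1.10 and the root account (both placeholders); after the initial login, later commands in the same session can omit the credentials:

iscli login -s 10.1.1.10 -u root -p password
iscli getvdevlist -s 10.1.1.10
iscli logout -s 10.1.1.10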


Commands

Below is a list of commands you can use to perform CDP/NSS functions from the command line. You should be aware of the following as you enter commands:

• Type each command on a single line, separating arguments with a space.
• You can use either the short or long arguments (as described above).
• For details and a list of arguments for each command, type iscli and the command. For example: c:\iscli getvdevlist
• Variables are listed in <> after each argument.
• Arguments listed in brackets [ ] are optional.
• The order of the arguments is irrelevant.
• Arguments separated by | are choices. Only one can be selected.
• For a value entered as a literal, it is necessary to enclose the value in quotes (double or single) if it contains special characters such as *, <, >, ?, |, %, $, or space. Otherwise, the system will interpret the characters with a special meaning before it is passed to the command (see the example after this list).
• Literals cannot contain leading or trailing spaces. Leading or trailing spaces enclosed in quotes will be removed before the command is processed.
• In order to use the hostname of the storage server instead of its IP address, the server name has to be resolvable on the client side and server side.
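As a simple illustration of the quoting rule above, renaming a virtual device to a name that contains spaces might look like the following; the server address, device ID, and name are placeholders:

iscli renamevdev -s 10.1.1.10 -v 5 -n "Sales DB 01"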

The following table provides a summary of the command line interface options along with a description.

Command Line Interface (CLI) description table

Command Description

Login/Logout of the storage server

iscli login This command allows you to log into the specified storage server with a given username and password.

iscli logout This command allows you to log out of the specified storage server. If the server was not logged in or you have already logged out from the server when this command is issued, error 0x0902000f will be returned. After logging out from the server, the -u and -p arguments will not be optional for the server commands.

Client Properties

iscli setfcclientprop This command allows you to set Fibre Channel client properties. <client-name> is required.

iscli getclientprop This command allows you to get client properties.

iscli setiscsiclientprop This command allows you to set iSCSI client properties. <user-list> is in the following format: user1,user2,user3


iSCSI Targets

iscli createiscsitarget This command creates an iSCSI target. <client-name>, <ip-address>, and <access-mode> are required. A default iSCSI target name will be generated if <iscsi-target-name> is not specified.

iscli deleteiscsitarget This command deletes an iSCSI target. <client-name> and <iscsi-target-name> are required.

iscli assigntoiscsitarget This command assigns a virtual device or group to an iSCSI target. A virtual device or group (either ID or name) and iSCSI target are required. All virtual devices in the same group will be assigned to the specified iSCSI target if group is specified. If a virtual device ID is specified and it is in a group, an error will be returned.

iscli unassignfromiscsitarget This command unassigns a virtual device or group from an iSCSI target. Virtual device and iSCSI target are required. The -f (--force) option is required when the iSCSI target is assigned to the client and the client is connected or when the virtual device is in a group. An error will be returned if the client is connected and the force option is not specified.

iscli getiscsitargetinfo This command retrieves information for iSCSI targets. The iSCSI target ID or iSCSI target name can be specified to get the specific iSCSI target information. The default is to get the information for all iSCSI targets.

iscli setiscsitargetprop This command sets the iSCSI target properties. Refer to Create iSCSI target above for details about the options.

Users and Passwords

iscli adduser This command allows you to add a CDP/NSS user. You must log in to the server as "root" in order to perform this operation.

iscli setuserpassword This command allows you to change a CDP/NSS user’s password. You must log in to the server as "root" in order to perform this operation if the user is not an iSCSI user.

Mirroring

iscli createmirror This command allows you to create a mirror for the specified virtual device. The virtual device can be a SAN, or Replica resource.

iscli getmirrorstatus This command shows the mirror status of a virtual device. The resource name, ID and synchronization status will be displayed if there is a mirror disk configured for the virtual device.

iscli syncmirror This command synchronizes the mirrored disks.

iscli swapmirror This command reverses the roles of the primary disk and the mirrored copy.


iscli promotemirror This command allows you to promote a mirror disk to a regular virtual device. The mirror cannot be promoted if the synchronization is in progress or when it is out-of-sync and the force option is not specified.

iscli removemirror This command allows you to remove a mirror for the specified virtual device.

iscli enablealternativereadmirror This command enables virtual devices to read from an alternative mirror.

iscli disablealternativereadmirror This command disables virtual devices so they no longer read from an alternative mirror.

iscli getalternativereadmirroroption

This command retrieves and displays information about all virtual devices with the alternative mirror option.

iscli migrate This command allows you to copy a virtual device without a snapshot. The original virtual device becomes a new virtual device with a new virtual device ID. The original virtual device name and ID will be kept, but with segments allocated from different storage. If the virtual device does not have a mirror, it will create a mirror, sync the mirror, swap the mirror, then promote the mirror. If the virtual device already has a mirror, it will swap the mirror, sync the mirror, promote the mirror, then re-create the mirror for the original VID.

iscli getmirrorpolicy The following is an example of the output of the command if Mirror Health Monitoring Option is enabled:
• Mirror Health Monitoring Option Enabled=Yes
• Monitoring Interval=1 seconds
• Maximum Acceptable Lagging Time=15 milliseconds
• Threshold to Report Error=5 %
• Minimum outstanding IOs to Report Error=20
• Mirror Sync Control Policy:
• Sync Control Policy Enabled=Yes
• Sync Control Max Sync Time=4 Minute(s)
• Sync Control Max Resync Interval=1 Minute(s)
• Sync Control Max IOs for Resync=N/A
• Sync Control Max IO Size for Resync=20 MB
• Sync Control Max Resync Retry=0

iscli setmirrorpolicy The Mirror policy is for resources enabled with the mirroring option. You can set the options to check the mirror health status, suspend, resume and re-synchronize the mirror when it is necessary.

iscli suspendmirror This command allows you to suspend mirroring.

iscli resumemirror This command allows you to resume mirroring.


Server Commands for Virtual Devices and Clients

iscli createvdev This command allows you to create a SAN resource on the specified server. A SAN resource can be created in one of the following categories: virtual or service-enabled. The default category is virtual if the category is not specified.

iscli getvdevlist This command retrieves and displays information about all virtual devices or a specific virtual device from the specified server.

iscli getclientvdevlist This command retrieves and displays information about all virtual devices assigned to the client from the specified server.

iscli renamevdev This command allows you to rename a virtual device. However, only SAN resources and SAN replicas can be renamed. Specify the ID and new name of the resource to be renamed.

iscli assignvdev This command allows you to assign a virtual device or a group on a specified server to a SAN client. If this is an iSCSI client, you can use this command to assign an iSCSI target to a client, but not a device. Use CLI assigntoiscsitarget to assign a device.

iscli unassignvdev This command allows you to unassign a virtual device or a group on the specified server from a SAN client. If the client is an iSCSI client, iSCSI target should be specified. Otherwise, virtual device should be specified.

iscli expandvdev This command allows you to expand the size of a virtual device on the specified server. SAN resources can be expanded, but a replica disk by itself or a TimeView resource cannot.

iscli deletevdev This command allows you to delete a SAN resource, or SAN TimeView Resource on the specified server. If the resource is assigned to a SAN client, the assignment(s) will be removed first. If a Snapshot Resource is created for the virtual device, it will be removed.

iscli setassignedvdevprop This command allows you to set properties for assigned virtual devices. Device properties can only be changed when the client is not connected.

iscli addclient This command allows you to add a client to the specified server.

iscli deleteclient This command allows you to delete a client. <client-name> is the client to be deleted.

iscli enableclientprotocol This command allows you to add a protocol to a client.

iscli disableclientprotocol This command allows you to remove a protocol from a client.

iscli getvidbyserialno This command allows you to get the corresponding virtual device ID when you enter a serial number (a 12-character long alphanumeric string).


iscli addthindiskstorage This command allows you to add additional storage to the resource configured for Thin Provisioning without changing the maximum disk size seen by the client host. The resource can be SAN, or a replica.

iscli setthindiskproperties This command allows you to set the thin disk properties.

iscli getthindiskproperties This command allows you to get thin disk properties.

iscli getvdevserial This command retrieves the serial number of the specified devices from the server.

iscli replacefcclientwwpn This command allows you to replace the Fibre Channel Client World Wide Port Name (WWPN).

iscli rescanfcclient This command allows you to notify the Fibre Channel client to rescan the devices.

Email Alerts

iscli enablecallhome This command allows you to enable Email Alerts.

iscli disablecallhome This command allows you to disable Email Alerts.

Failover

iscli getfailoverstatus This command shows you the current status of your failover configuration. It also shows all Failover settings, including which IP addresses are being monitored for failover.

Replication

iscli createreplication This command allows you to set up a replication configuration.

iscli startreplication This command allows you to start replication on demand for a virtual device or a group. You can only specify one identifier, -v <vdevid>, -g <group-id>, or -G <group-name>.

iscli stopreplication This command allows you to stop the replication that is in progress for a virtual device or a group. If a group is specified, and the group is enabled with replication, the replication for all resources in the group will be stopped. If replication is not enabled for the group, but some of the resources in the group are configured for replication, replication for the resources in the group will be stopped.

iscli suspendreplication This command allows you to suspend scheduled replications for a virtual device or a group that will be triggered by your replication policy. It will not stop a replication that is currently in progress.

iscli resumereplication This command allows you to resume replication for a virtual device or a group that was suspended by the suspendreplication command. The replication will then be triggered by the replication policy once it is resumed.


iscli promotereplica This command allows you to promote a replica to a regular virtual device if the primary disk is available and the replica disk is in a valid state.

iscli removereplication This command allows you to remove the replication configuration from the primary disk on the primary server and delete the replica disk on the target server. Either a primary server with a primary disk or a target server with a replica disk can be specified.

iscli getreplicationstatusinfo This command shows the replication status. The target server name and the replica disk ID are required to get the replication status.

iscli setreplicationproperties This command allows you to set the replication policy for a virtual device or group configured for replication.

iscli getreplicationproperties This command allows you to get the replication properties for a virtual device or group configured for replication.

iscli relocate This command relocates a replica after the replica disk has been physically moved to a different server.

iscli scanreplica This command scans a replica server.

iscli getreplicationthrottles This command allows you to view the throttle configuration information.

iscli setreplicationthrottles This command allows you to configure the throttle level for target sites or windows. Can accept a file. The path of the file in the command must be the full path

iscli getthrottlewindows This command allows you to view the information of a particular Target Site.

iscli setthrottlewindows This command allows you to change the window start/end time. Can accept a file. The path of the file in the command must be the full path

iscli removethrottlewindows This command removes a custom window. Can accept a file. The path of the file in the command must be the full path.

iscli addthrottlewindows Creates a custom throttle window with a specific time duration. Can accept a file. The path of the file in the command must be the full path.

iscli addlinktypes This command allows you to create a custom Link Type.

iscli gettargetsitesinfo This command allows you to view the information of a particular Target Site.

iscli addtargetservertotargetsite

This command allows you to add a target server to an existing Target site. Can accept a file. The path of the file in the command must be the full path.


iscli deletereplicationtargetsite

This command deletes a target site from the server.

iscli createreplicationtargetsite

This command creates a target site. You can create a target site with multiple target servers at once by listing their host names in the command or by using a file. The format of the file is one server per line. The path of the file in the command must be the full path.

iscli removetargetserverfromtargetsite

This command allows you to remove a target server from an existing Target site. Can accept a file. The path of the file in the command must be the full path.

iscli removelinktypes This command allows you to remove a custom Link Type.

iscli setlinktypes This command allows you to configure an existing custom Link Type.

iscli getlinktypes This command allows you to view the available Link Types on server.

Server configuration

iscli getserverversion This command allows you to view the storage version and build number.

Snapshot Copy

iscli snapcopy This command allows you to issue a snapshot copy between two virtual devices of the same size.

iscli getsnapcopystatus This command allows you to get the status of snapshot copy.

Physical Device

iscli getpdevinfo This command provides you with physical device information.

iscli getadapterinfo This command allows you to get HBA information on a selected adapter.


iscli rescandevices This command allows you to rescan the physical resource(s) on the specified server to get the proper physical resource configuration.
• The adapter number can be specified to rescan only the devices on that adapter. If an adapter is not specified, all adapters will be rescanned. In addition to the adapter number, you can also specify the SCSI range to be rescanned. If the range is not specified, all SCSI IDs of the specified adapter(s) will be rescanned. Furthermore, the LUN range can be specified to narrow down the rescanning range. The range is specified in this format: #-#, e.g. 1-10.
• If you want the system to rescan the devices sequentially, you can specify the -L (--sequential) option. The default is not to rescan sequentially.

iscli importdisk This command allows you to import a foreign disk to the specified server. A foreign disk is a virtualized physical device containing CDP/NSS logical resources previously set up on a different storage server.

If the previous server is no longer available, the disk can be set up on a new storage server and the resources on the disk can be imported to the new server to make them available to clients. Either the GUID or SCSI address can be specified for the physical device to be imported. This information can be retrieved through the getpdevinfo command.

iscli preparedisk This command allows you to prepare a physical device to be used by a CDP/NSS server or reserve a physical device for other usage.

The <guid> is the unique identifier of the physical device. <ACSL> is the SCSI address of the physical device in this format: #:#:#:# (adapter:channel:scsi id:lun). You can specify either the <guid> or <ACSL> for the disk to be prepared.

iscli renamephysicaldevice This command allows you to rename a physical device. (When a device is renamed on a server in a failover pair, the device gets renamed on the partner server also.)

iscli deletephysicaldevice This command allows you to remove a physical device.

iscli restoresystempreferredpath This command allows you to restore the system preferred path for a physical device.

TimeMark/CDP

iscli enabletimemark This command allows you to enable the TimeMark option for an individual resource or for a group. TimeMark can be enabled for a resource as long as it is not yet enabled.


iscli createtimemark This command allows you to create a TimeMark for a virtual device or a group. A timestamp will be associated with each TimeMark. A notification will be sent to the SAN client to stop writing data to its virtual devices before the TimeMark is created. The new TimeMark is not immediately available after a successful createtimemark command. The TimeMark creation status can be retrieved with the gettimemarkstatus command. The TimeMark timestamp information can be retrieved with the gettimemark command.

iscli disabletimemark This command allows you to disable the TimeMark option for a virtual device or a group.

iscli updatetimemarkinfo This command is only available in version 5.1 or later and lets you add a comment or change the priority of an existing TimeMark. A TimeMark timestamp is required to update the TimeMark information.

iscli deletetimemark This command allows you to delete a TimeMark for a virtual device or a group. <timemark-timestamp> is the TimeMark timestamp to be selected for the deletion in the following format: YYYYMMDDhhmmss.

iscli copytimemark This command allows you to copy the specified TimeMark to an existing or newly created virtual device with the same size. The copying status can be retrieved with the gettimemarkstatus command.

iscli selecttimemark This command allows you to select a TimeMark and create a raw device on the server to be accessed directly. Only one raw device can be created per TimeMark. The corresponding deselecttimemark command should be issued to release the raw device when the raw device is no longer needed.

iscli deselecttimemark This command allows you to release the raw device associated with the TimeMark previously selected via the selecttimemark command.

iscli rollbacktimemark This command allows you to rollback a virtual device to a specific point-in-time. The rollback status can be retrieved with the gettimemarkstatus command.

iscli gettimemark This command allows you to enumerate the TimeMarks and view the TimeMark information for a virtual device or for a group.

iscli settimemarkproperties This command allows you to change the TimeMark properties, such as the automatic TimeMark creation schedule and maximum TimeMarks allowed for a virtual device or a group.

iscli gettimemarkproperties This command allows you to view the current TimeMark properties associated with a virtual device or a group. When the virtual device is in a group, the TimeMark properties can only be retrieved for the group.


iscli gettimemarkstatus This command allows you to retrieve the TimeMark creation state and TimeMark rollback or copying status.

iscli createtimeview This command allows you to create a TimeView virtual device associated with specified virtual device and TimeMark.

iscli remaptimeview This command remaps a TimeView associated with a specified virtual device and TimeMark. The original TimeView is deleted and all changes to it are gone. A new TimeView is created with the new TimeMark using the same TimeView device ID. All of the connection assignments are retained.

iscli suspendcdpjournal This option suspends CDP. After the CDP journal is suspended, data will not be written to it until it is resumed.

iscli resumecdpjournal This option resumes CDP after it has been suspended.

iscli getcdpjournalstatus This command gets the current size and status of your CDP journal, including all policies.

iscli removetimeviewdata This command allows you to remove TimeView data resources individually or by source virtual devices.

iscli getcdpjournalinfo This command allows you to retrieve CDP Journal information.

iscli createcdpjournaltag This command lets you manually add a tag to the CDP journal. The -A (--cdp-journal-tag) tag can be up to 64 characters long and serves as a bookmark in the CDP journal. Instead of specifying the timestamp, the tag can be used when creating a TimeView.

iscli getcdpjournaltags This command allows you to retrieve CDP Journal tags.

Snapshot Resource

iscli createsnapshotresource This command allows you to create a snapshot resource for a virtual device. A snapshot resource is required in order for a virtual device to be enabled with the TimeMark or Backup options. It is also required for replication, snapshot copy, and for joining a group.

iscli deletesnapshotresource A snapshot resource is not needed if the virtual device is not enabled for the TimeMark or Backup options, is not configured for replication, and is not in a group. You can delete the snapshot resource to free up space when it is not needed.

iscli expandsnapshotresource This command allows you to expand the snapshot resource on demand. The maximum size allowed that is specified in the snapshot policy only applies to the automatic expansion. The size limit does not apply when the snapshot resource is expanded on demand.

iscli setsnapshotpolicy This command allows you to modify the existing snapshot policy for the specified resource. The new policy will take effect with the next snapshot operation.


iscli getsnapshotpolicy This command allows you to view the snapshot policy settings for the specified resource.

iscli enablereclamationpolicy This command allows you to set the reclamation policy settings for the specified resource.

iscli disablereclamationpolicy This command allows you to disable the reclamation policy settings for the specified resource.

iscli startreclamation This command allows you to manually start the reclamation process for the specified resource.

iscli stopreclamation This command allows you to manually stop the reclamation process for the specified resource.

iscli updatereclaimpolicy This command allows you to update the reclamation policy settings for the specified resource.

iscli getreclamationstatus This command allows you to retrieve and view the reclamation status for the specified resource.

iscli reinitializesnapshotresource

Snapshot Resource cannot be deleted when the virtual device is in a Snapshot Group, or when the snapshot is online.

iscli getsnapshotresourcestatus This command allows you to view snapshot resource status information. The output will be similar to the following:
Virtual Device Name=Sarah-00457
ID=457
Type=SAN
Snapshot Resource Size=58827 MB
Snapshot Resource Status=Accessible
Used Size=47.54 GB(82%)

iscli setreclamationpolicy This command allows you to set the reclamation policy on a selected virtual device.

iscli setglobalreclamationpolicy This command allows you to set the global reclamation policy.

iscli getsnapshotgroups This command allows you to retrieve group information for all groups or a specific group on the specified server. The default output format is a list of groups and a list of group members in each group.

iscli createsnapshotgroup This command allows you to create a group, where <group-name> is the name for the group. The maximum length for the group name is 64 characters. The following characters are invalid for the group name: <>"&$/\


iscli deletesnapshotgroup This command allows you to delete a group. A group can only be deleted when there are no group members in it. If the group is configured for replication, both the primary group and replica group have to be deleted. The force option is required if one of the following conditions applies:

• Deleting the replica group on the target server when the primary server is not available.

• Deleting the primary group on the primary server when the target server is not available.

An error will be returned if the force option is not specified for these conditions.

iscli joinsnapshotgroup This command allows you to add a virtual device to the specified group. <vdevid> is the virtual device to join the group. Either <group-id> or <group-name> can be specified for the group.

iscli leavesnapshotgroup This command allows you to remove a virtual device from a group.

If the group is configured for replication, both the primary and target servers need to be available because the system will remove the primary disk from the group on the primary server and the replica disk from the group on the target server.

You can use the force option to allow the primary disk to leave the group on the primary server without connecting to the target server, or allow the replica disk to leave the group on the target server without connecting to the primary server. The force option should only be used when either the primary disk is not in the primary group anymore or when the replica disk is not in the replica group anymore

iscli enablereplication This command allows you to enable replication for a group. Specify the <group-id> or <group-name> for the group that should have replication enabled. All of the resources in the group have to be configured with replication in order for the group to be enabled for replication. Use the -E (--enable-resource-option) option to allow the system to configure the non-eligible resources with replication first before enabling the group replication option. A target server must be specified. A group for the replica disks will be created on the target server. You can specify the <target-group-name> or use the default. The default is to use the same group name.

iscli disablereplication This command allows you to disable replication for a group. All replica disks will leave the replica group and the replica group on the target server will be deleted. The replication configuration of all resources in the group will remain the same, but TimeMarks will not be taken for all resources together anymore. All replication operations will be applied to the individual resource only.


Cache resources

iscli createcacheresource This command creates a cache resource for a virtual device or a group.

iscli getcacheresourcestatus This command gets the status of a cache resource.

iscli setcacheresourceprop This command sets the properties of a cache resource.

iscli getcacheresourceprop This command displays the properties of a cache resource.

iscli suspendcacheresource This command suspends a cache resource. After the cache resource is suspended, no more new data will be written to it. The data on the cache resource will be flushed to the source resource.

iscli resumecacheresource This command resumes a suspended cache resource.

iscli deletecacheresource This command deletes a cache resource. The data on the cache resource has to be flushed before the cache resource can be deleted. The system will suspend the cache resource first if it is not already suspended.

Report data

iscli getreportdata This command allows you to get report data from the specified server and save the data to an output file in csv or text file format.

Event log

iscli geteventlog The date range can be specified to get the event log for a specific range. The default is to get all of the event log messages if a date range is not specified.

Backup

iscli enablebackup This command allows you to enable the backup option for an individual resource or for a group. Backup can be enabled for a resource as long as it is not already enabled.

iscli disablebackup This command allows you to disable backup for a virtual device or a group. Backup of a resource cannot be disabled if the resource is in a group enabled for backup. A group’s backup can be disabled as long as there is no group activity using the snapshot resource. Individual resources in the group will remain backup-enabled after the group’s backup is disabled.


iscli stopbackup This command allows you to stop the backup activity for a virtual device or a group. If a group is specified and the group is enabled for backup, the backup activity for all resources in the group is stopped. If the backup option is not enabled for the group, but some of the resources in the group are enabled for backup, the backup activity for the resources in the group is stopped.

iscli setbackupproperties This command allows you to change the backup properties, such as inactivity timeout, closing grace period, backup window, and backup life span, for a virtual device or a group. When the virtual device is in a group, the backup properties can only be set for the group. To remove the inactivity timeout or backup life span, specify 0 as the value.

iscli getbackupproperties This command allows you to view the current backup properties associated with a virtual device or a group enabled for backup. When the virtual device is in a group, the backup properties can only be retrieved for the group.

Xray

iscli getxray This command allows you to get X-ray information from the storage server for diagnostic purposes. Each X-ray contains technical information about your server, such as server messages and a snapshot of your server's current configuration and environment. You should not create an X-ray unless you are requested to do so by your Technical Support representative.


SNMP Integration

CDP/NSS provides SNMP support to integrate CDP/NSS management into an existing enterprise management solution such as HP OpenView, HP Network Node Manager (NNM), Microsoft System Center Operations Manager (SCOM), CA Unicenter, IBM Tivoli NetView, and BMC Patrol.

For Dell appliances, SNMP integration with Dell OpenManage is supported. Information can be obtained via your MIB browser (i.e. query Dell’s OID with OpenView) or via the Dell OpenManage software.

For HP appliances, SNMP integration with HP Advanced Server Management (ASM) is supported. Information can be obtained via your MIB browser or from the HP Systems Insight Manager (SIM).

CDP/NSS uses the MIB (Management Information Base) to determine what data can be monitored. The MIB is a database of information that you can query from an SNMP agent.

A MIB module contains actual specifications and definitions for a MIB. A MIB file is just a text file that contains one or more MIB modules.
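As a quick check that the agent on the storage server is answering queries, you can walk the FalconStor portion of the MIB tree from your management host with a standard SNMP tool. A minimal sketch using the Net-SNMP snmpwalk utility, assuming SNMP v2c, the default read-only community "public", and a storage server at 10.1.1.10 (a placeholder); the enterprise OID 1.3.6.1.4.1.7368 is the one referenced in the trap examples later in this chapter:

snmpwalk -v2c -c public 10.1.1.10 .1.3.6.1.4.1.7368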

There are three major areas of management:

• Accounting management (including discovery) - Locates all storage servers and Windows clients. It shows how all the resources are aggregated, virtualized, and provisioned, including the number of adapters, physical devices, and virtual devices attached to a Server. Most of the information comes from the storage server’s configuration file (ipstor.conf).

• Performance management (including statistics) - Shows information about your storage servers and clients, including the number of clients being serviced by a server, server memory used, CPU load, and the total MB transferred. Most of the information comes from the /proc/ipstor directory on the servers or the client monitor on the clients. For more information about each of the statistics, please refer to the IPSTOR-MIB.txt file that is in the Server’s /usr/local/ipstor/etc/snmp/mibs directory.

• Fault management - This allows a trap to be generated when certain conditions occur.


SNMP Traps

Simple Network Management Protocol (SNMP) is used to monitor systems for fault conditions, such as disk failures, threshold violations, etc.

Essentially, SNMP agents expose management data on the managed systems as variables. The variables accessible via SNMP are organized in hierarchies. These hierarchies, and other metadata (such as type and description of the variable), are described by Management Information Bases (MIBs).

An SNMP-managed network consists of three key components:

• Managed device
• Agent — software which runs on managed devices
• Network management system (NMS) — software which runs on the manager

An SNMP trap is an asynchronous event indicating that a significant event has occurred.

There are statistic traps, disk-full traps, failover/recovery traps, and process-down traps. Statistics traps allow you to set a threshold for an Object Identifier (OID) so that a trap is sent when the threshold is met. In order to integrate with some third-party SNMP managers, you may need to load the MIB file. To load the MIB file, navigate to $ISHOME/etc/snmp/mibs/IPSTOR-MIB.TXT and copy the IPSTOR-MIB.TXT file to the machine running the SNMP manager.
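For example, on a Linux-based SNMP manager the file could be copied over with scp; the destination directory below is only an illustration, use whatever MIB directory your SNMP manager actually reads:

scp /usr/local/ipstor/etc/snmp/mibs/IPSTOR-MIB.TXT admin@nms-host:/usr/share/snmp/mibs/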

An SNMP trap message is sent when triggered by an event. The message contains the OID, time stamp, and specific information for each trap.

Process down traps allow you to monitor the status of the CDP/NSS modules (or processes) so that a trap is sent when a CDP/NSS component is down. The following table lists the name and description of the modules (or processes) that can be configured to be monitored:

CDP/NSS Event Log messages

CDP/NSS Event Log messages can be sent to your SNMP manager. By default, Event Log messages (informational, warnings, errors, and critical errors) will not be sent. From the FalconStor Management Console, you can determine which type of messages should be sent. To select the Trap level:

1. Right-click on the server and select Properties --> SNMP Maintenance --> Trap Level.

2. After selecting a Trap Level, click Add to enter the name of the server receiving the traps (or IP address if the name is not resolvable), and a Community name.

Five levels are available:

• None – (Default) No messages will be sent.
• Critical – Only critical errors that stop the system from operating properly will be sent.
• Error – Errors (failure such as a resource is not available or an operation has failed) and critical errors will be sent.


• Warning – Warnings (something occurred that may require maintenance or corrective action), errors, and critical errors will be sent.

• Informational – Informational messages, errors, warnings, and critical error messages will be sent.

Implement SNMP support

The SNMP software is installed on the Server and Windows clients during the CDP/NSS installation.

To complete the implementation, you must install software on your SNMP manager machine and then configure the manager to support CDP/NSS.

Since this process is different for each SNMP manager, please refer to the appropriate section below.

Note: CDP/NSS installs an SNMP module that stops the native SNMP agent on the storage server. The CDP/NSS SNMP module is customized for use with your SNMP manager. If you do not want to use the CDP/NSS SNMP module, you can stop it by executing: ./ipstor stop snmpd. However, the next time the server is rebooted, it will start again. Contact technical support if you do not want it to restart on boot up.


Microsoft System Center Operations Manager (SCOM)

Microsoft SCOM is a Microsoft management server with SNMP functionality. CDP/NSS supports SNMP trap integration with Microsoft SCOM 2007 R2. SNMP integration requires that you manually create a rule and discover the SNMP device from the Microsoft SCOM console. To do this:

1. From the Microsoft SCOM console, navigate to Authoring --> Management Pack Object --> Rules. Right-click and select Create a new rule.

The Create Rule Wizard displays.

2. Select the type of rule to create: Alert Generating Rules --> Event based --> SNMPTrap (Alert), and click Next.

3. Select the rule name and description, and select the rule target for the SNMP network device.

The Select a Target Type screen displays, allowing you to select from the populated list or use the “Look for” field to filter down to a specific target or sort the targets by Management Pack.

4. In the configure the trap OIDs to collect screen, select the Use discovery community string option and enter the OID. For example: 1.3.6.1.4.1.7368.0.9

5. Configure Alerts by specifying the information that will be generated by the alert and click Create.

Once the rule is created, you will be able to discover the SNMP network device.

6. Discover the SNMP network device.

From the Administration node, navigate to Device Management --> Network Devices and select Discovery Wizard from the right-click menu.

7. Click Next at the Computer and Device Management Wizard screen. Then select Advanced discovery.

Select network device in the Computer & Device Types field.

8. Select the discovery method. Specify the IP address range (for example, 172.11.22.333 to 172.11.22.333), type the community string (for example, public), select the SNMP version (for example, v2), and click Discover.

After discovery you should see the network device. You can right-click on it and select Open --> Alert View to see trap information on the Alert properties screen.
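To confirm end to end that the rule fires, you can send a test notification carrying the enterprise trap OID entered in step 4. The command below is only an illustration; it assumes the net-snmp snmptrap utility is available on a convenient host and that <scom-server> is replaced with the address of the management server.

# Send an SNMPv2c test trap with the CDP/NSS trap OID; the empty '' argument
# lets snmptrap supply the sysUpTime value automatically.
snmptrap -v 2c -c public <scom-server> '' 1.3.6.1.4.1.7368.0.9

If the rule and discovery are set up correctly, the trap appears in the Alert View for the network device.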


HP Network Node Manager (NNM) i9

CDP/NSS provides SNMP trap integration and MIB upload for the HP management server Network Node Manager i9 (NNMi9).

NNMi9 trap configuration The trap configuration can be set by logging into the NNMi9 console from the web and following the steps below:

1. From the HP Network Node Manager console, navigate to Workspaces --> Configuration, and select Incident Configuration.

2. Select the New icon under the SNMP Traps tab.

Enter the Basics and then click Save and Close:

• Name: IPSTOR-information
• SNMP Object ID: .1.3.6.1.4.1.7368.0.9
• Category: IPStor
• Family: IPStor
• Severity: Critical
• Message Format: $oid

Navigate to Incident Browsing --> SNMP Traps to see the trap collection information

Upload MIB The MIB browser can be launched from the HP Network Node Manager console by selecting Tools --> MIB Browser.

1. Upload the MIB file from the HP Network Node Manager console by selecting Tools --> Upload Local MIB File.

The Upload Local MIB File window launches.

2. Browse to select the MIB file from the CDP/NSS storage server and click Upload MIB.

The Upload MIB File Data Results screen displays an upload summary.


HP OpenView Network Node Manager 7.5

Install

The software installation media includes software that must be installed on your HP OpenView Network Node Manager (NNM) machine. This software adds several CDP/NSS menu options into your NNM and adds a CDP/NSS MIB tree so that you can set traps.

1. Navigate to the \SNMP\OpenView directory and run setup.exe to launch the SNMP installation program.

2. Start the NNM when the installation is finished.

Under the Tools menu you will see a new CDP/NSS menu option.

Configure

You need to define which hosts will receive traps from your storage server(s) and determine which CDP/NSS components to monitor. To do this:

1. In the NNM, highlight a storage server and select Tools --> SNMP MIB Browser.

2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer --> trapReg and highlight trapSinkSettingTable.

The default read-only community is “public”. The default read-write community is “falcon”.

• Set the Community name to "falcon" so that you will be allowed to change the configuration.

• Click the Start Query button to query the configuration.
• From the MIB values field, select a host to receive traps. You can set up to five hosts to receive traps. If the value is '0', the host is invalid or not set.
• In the SNMP set value field, enter the IP address or machine name of the host that will receive traps.
• Click the Set button to save the configuration in snmpd.conf.

3. In the SNMP MIB Browser, select private --> enterprises --> ipstor --> ipstorServer --> alarmTable.

• Click the Start Query button to query the alarms.
• In the MIB values field, select which CDP/NSS components to monitor. You will be notified any time a component goes down. A description of each is listed in the 'SNMP Traps' section.
• In the SNMP set value field, enter 'enable' or '1' to enable.
• Click the Set button to enable the trap you selected.


View statistics in NNM

In addition to monitoring CDP/NSS components and receiving alerts, you can view CDP/NSS statistics in NNM. There are two ways to do this:

CDP/NSS menu

1. Highlight a storage server or Client and select Tools --> IPStor.

2. Select the appropriate menu option.

These reports are provided by CDP/NSS as a convenient way to view statistical information without having to go through the MIB browser.

You can add your own reports to the menu by selecting Options --> MIB Application Builder: SNMP. Refer to OpenView’s documentation for details on using the MIB Application Builder.

MIB browser 1. Highlight a storage server or Client and select Tools --> SNMP MIB Browser.

2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer.

If this is a Client, select ipstorClient.

From here you can view information about this storage server. If you run a query at the ipstorServer level, you will get a superset of all of the information from all of the sub-categories.

For more specific information, expand the sub-categories.

For more information about each of the statistics, you can click the Describe button or refer to the IPSTOR-MIB.txt file that is in your \\OpenView\snmp_mibs directory.


CA Unicenter TNG 2.2

Install

The software installation media includes software that must be installed on your CA Unicenter TNG 2.2 machine. This software creates a CDP/NSS SNMP class in Unicenter and adds a CDP/NSS MIB tree so that you can set traps.

Navigate to the \SNMP\Unicenter directory and run setup.exe to launch the SNMP installation program.

Configure

You need to define which hosts will receive traps from your storage server(s) and determine which CDP/NSS components to monitor. To do this:

1. Run Unicenter’s Auto Discovery.

If you have a repository with existing machines and then you install the storage server software, Unicenter will not automatically re-classify the machine and mark it as a storage server.

2. If you need to re-classify a machine, open the Unicenter TNG map, highlight the machine, select Reclassify Object, select Host --> IPStor SNMP and then change the Alarmset Name to IPStorAlarm.

If you want to re-align the objects on the map after re-classification, select Modes --> Design --> Folder --> Arrange Objects and then the appropriate network setting.

3. Restart the Unicenter TNG map.

4. To define hosts, right-click on Storage server --> Object View.

5. Click Object View, select Configure Toolbar, set the Get Community and Set Community to falcon, and set the Model to ipstor.mib.

The default community name (password) is falcon. If it was changed in the snmpd.conf file (on the storage server), enter the appropriate community name here.

6. Expand Vendor Information and highlight trapSinkSettingEntry.

7. To define a host to receive traps, highlight the trHost field of an un-defined host, right-click and select Attribute Set.

You can set up to five hosts to receive traps.

8. In the New Value field, enter the IP address or machine name of the host that will receive traps (such as your Unicenter TNG server).

Your screen will now show that machine.


9. Highlight alarmEntry.

10. Highlight the alarmStatus field for a component, right click and select Attribute Set.

11. Set the value to enable for on or disable for off.

View traps

1. From your Start --> Programs menu, select Unicenter TNG --> Enterprise Management --> Enterprise Managers.

2. Double-click on the Unicenter machine.

3. Double-click on Event.

4. Double-click on Console Logs.

View statistics in TNG

You can view statistics about CDP/NSS directly from the ObjectView screen.

To do this, highlight a category in the tree and the CDP/NSS information will be displayed in the right pane.

Launch the FalconStor Management Console

If the FalconStor Management Console is installed on your Unicenter TNG machine, you can launch it directly from the Unicenter map by right-clicking on a storage server and selecting Launch FalconStor Management Console.


IBM Tivoli NetView 6.0.1

Install

The software installation media includes software that must be installed on your Tivoli NetView machine. This software adds several CDP/NSS menu options into NetView and adds a CDP/NSS MIB tree so that you can set traps.

1. Navigate to the \SNMP\Tivoli directory and run setup.exe to launch the SNMP installation program.

2. Start NetView when the installation is finished.

You will see a new CDP/NSS menu option on NetView’s main menu.

Configure

You need to define which hosts will receive traps from your storage server(s). To do this:

1. In NetView, highlight a storage server on the map and click the Browse MIBs button.

2. In the tree, expand enterprises --> ipstor --> ipstorServer --> trapReg --> trapSinkSettingTable --> trHost.

The default read-only community is “public”. The default read-write community is “falcon”.

3. Set the Community Name so that you will be allowed to change the configuration.

4. Click the Get Values button.

5. Select a host to receive traps. You can set up to five hosts to receive traps. If the value is ‘0’, the host is invalid or not set.

6. In the New Value field, enter the IP address or machine name of the Tivoli host that will receive traps.

7. Click the Set button to save the configuration in snmpd.conf.


View statistics in Tivoli

In addition to monitoring CDP/NSS components and receiving alerts, you can view CDP/NSS statistics in NetView. There are two ways to do this:

CDP/NSS menu

1. Highlight a storage server or Client and select IPStor from the menu.

2. Select the appropriate menu option.

For a server, you can view:

• Memory used
• CPU load
• SCSI commands
• MB read/written
• Read/write errors

For a client, you can view:

• SCSI commands
• Error report

These reports are provided by CDP/NSS as a convenient way to view statistical information without going through the MIB browser.

You can add your own reports to the menu by using NetView’s MIB builder. Refer to NetView’s documentation for details on using the MIB builder.

MIB browser 1. Highlight a storage server or Client and click Tools --> MIB --> Browser.

2. In the tree, expand private --> enterprises --> ipstor --> ipstorServer.

If this is a Client, select ipstorClient.

3. Select a category.

4. Click the Get Values button.

The information is displayed in the bottom section of the dialog.


BMC Patrol 3.4.0

Install

The software installation media includes software that must be installed on your BMC Patrol machine. This software adds several CDP/NSS icon options into Patrol and adds several CDP/NSS MIB items so that you can retrieve information and set traps.

1. Navigate to the \SNMP\Patrol directory and run setup.exe to launch the SNMP installation program.

2. Start Patrol when the installation is finished.

3. Click Hosts --> Add on the Patrol main menu and enter the Host Name (IP preferred), Username (Patrol administrator name of the storage server), Password (Patrol administrator password of the storage server), and Verify Password fields to add the storage server.

4. Click Hosts --> Add on the Patrol main menu and input the Host Name, Username (administrator name of the Patrol machine), Password (administrator password of the Patrol machine), and Verify Password fields to add the Patrol Console machine.

5. Click File --> Load KM on the Patrol main menu and load the IPSTOR_MODULE.kml module.

6. Click File --> Commit KM --> To All Connected Hosts on the Patrol main menu to send changed knowledge (IPSTOR_MODULE.kml) to all connected agents, including the storage server and Patrol Console machine.

7. Expand the storage server tree.

You will see three new CDP/NSS sub-trees with several icons on the Patrol console.

Configure

You need to define which hosts will receive traps from your storage server(s) and determine which CDP/NSS components to monitor. To do this:

1. In the Patrol Console, on the Desktop tab, right-click the ServerInfo item in the IPS_Server subtree of one storage server and select KM Commands --> trapReg --> trapSinkSettingEntry.

The default read-only community is “public”. The default read-write community is “falcon”.

2. Select a host to receive traps. You can set up to five hosts to receive traps. If the value is '0', the host is invalid or not set.


3. In the Host fields, enter the IP address or machine name of the host that will receive traps.

4. In the Community fields, enter the community.

5. Click the Set button to save the configuration in snmpd.conf.

6. In the Patrol Console, on the Desktop tab, right-click the ServerInfo item in the IPS_Server subtree of one storage server and select KM Commands --> alarmTable --> alarmEntry.

Set the status value to enable(1) for on or disable(0) for off.

View traps

1. In the Patrol Console, on the Desktop tab, right-click the IPS_Trap_Receiver --> SNMPTrap_Receiver of the Patrol Console machine and select KM Commands --> Start Trap Receiver to let the Patrol Console machine start receiving traps.

2. After turning the trap receiver on, you can double-click the SNMP_Traps icon in the SNMPTrap_Receiver subtree of the Patrol Console machine to get the results of the traps that have been received.

View statistics in Patrol

In addition to monitoring CDP/NSS components and receiving alerts, you can view storage server statistics in Patrol. There are two ways to do this:

IPStor icon 1. Highlight a storage server and fully expand the IPS_ProcessMonitor subtree and the IPS_Server subtree of that storage server.

2. Select the appropriate icon option.

For a Server, you can view:

• Processes Status (Authentication Process, Communication Process, Logger Process, Self Monitor Process, SNMPD Process, etc.). To monitor more processes, switch to the KM tab on the Patrol Console, right-click a process under Knowledge Module --> Application Classes --> IPS_ProcessMonitor --> Global --> Parameters, and click the Properties item on the menu. Check the "Active" option to have the specified process monitored. Afterward, switch back to the Desktop tab and the specified process will be visible in the IPS_ProcessMonitor subtree.

• Server Status (ipsLaAvailLoad, ipsMemAvailSwap and ipsMemAvaiReal)

These reports are provided by CDP/NSS as a convenient way to view statistical information without having to go through the MIB browser.

MIB browser 1. Highlight a storage server, right-click ServerInfo in the IPS_Server subtree, and select KM commands.


The KM commands menu contains several integrated CDP/NSS MIB items.

2. Click one of the MIB items to retrieve the information related to the storage server.

Advanced SNMP topics

The following topics apply to all SNMP managers.

The snmpd.conf file

The snmpd.conf file is located in the /usr/local/ipstor/etc/snmp directory of the storage server and contains SNMP configuration information, including the CDP/NSS community name and the network over which you are permitted to use SNMP (the default is the network where your storage server is located).

If your SNMP manager resides on a different network, you will have to modify the snmpd.conf file before you can implement SNMP support through your SNMP manager.

In addition, you can modify this file if you want to limit SNMP communication to a specific subnet or change the community name. The default read-write community is “falcon”. This is the only community you should change.
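The directives inside the FalconStor-supplied snmpd.conf can vary by release, so review the file before changing anything. As a cautious sketch, assuming shell access to the storage server and using the path given above:

# Keep a backup of the original configuration before editing.
cp /usr/local/ipstor/etc/snmp/snmpd.conf /usr/local/ipstor/etc/snmp/snmpd.conf.bak

# Locate the read-write community ("falcon" by default) and the permitted
# network, then edit those lines with the editor of your choice.
grep -in falcon /usr/local/ipstor/etc/snmp/snmpd.conf
vi /usr/local/ipstor/etc/snmp/snmpd.conf

Restart the SNMPD module afterward so the new settings take effect, as noted in the next topic.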

Use an SNMP configuration for multiple storage servers

To re-use your SNMP configuration for multiple storage servers, go to /usr/local/ipstor/etc/snmp and copy the following files to the same directory on each storage server.

• snmpd.conf - contains trapSinkSettings
• IPStorSNMP.conf - contains trapSettings

Note: In order for the configuration to take effect, you must restart the SNMPD module on each storage server to which you copied these files.
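As an illustration only, the copy and restart could look like the following, run from the configured server. It assumes root SSH access to the second storage server (<server2> is a placeholder), that the ipstor script lives in /usr/local/ipstor/bin, and that it accepts start snmpd as the counterpart of the stop snmpd command shown earlier; adjust all of this to your environment.

# Copy the two SNMP configuration files to the same directory on another storage server.
scp /usr/local/ipstor/etc/snmp/snmpd.conf \
    /usr/local/ipstor/etc/snmp/IPStorSNMP.conf \
    root@<server2>:/usr/local/ipstor/etc/snmp/

# Restart the SNMPD module on that server so the copied settings take effect.
# (The script location and the "start snmpd" argument are assumptions; verify on your release.)
ssh root@<server2> 'cd /usr/local/ipstor/bin && ./ipstor stop snmpd && ./ipstor start snmpd'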


IPSTOR-MIB tree

Once you have loaded the IPSTOR-MIB file, the MIB browser parses it into a tree hierarchy. The table below describes many of the tables and fields. Refer to the IPSTOR-MIB.txt file in your \\OpenView\snmp_mibs directory for a complete list.
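Before turning to the table, note that the same tree can be walked from the command line, which is handy for spot checks without opening the MIB browser. A minimal sketch, assuming the net-snmp tools and the default read-only community; 1.3.6.1.4.1.7368 is the enterprise OID base used by the trap examples earlier in this chapter.

# Walk the entire ipstor enterprise subtree on the storage server.
snmpwalk -v 2c -c public <storage-server> 1.3.6.1.4.1.7368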

Table / Field descriptions

Server Information

serverName The hostname of the host on which the storage server is running.

loginMachineName Identifies which storage server you are logged into.

serverVersion The storage server version and build number.

osVersion The operating system version of the host on which the storage server is running.

kernelVersion The kernel version of the host on which the storage server is running.

processorTable A table containing information about all processors in the host on which the storage server is running.
• processorInfo: The specification of a processor type and power.

memory The amount of memory on the storage server.

swap The swap space of the host on which the storage server is running.

netInterfaceTable A table containing information about all network interfaces in the host on which the storage server is running.
• netInterfaceInfo: The specification containing the MAC address, IP address, and MTU of a network interface.

FailoverInformationTable A table containing the failover information that is currently configured on the storage server.
• foName: The property of a failover configuration.
• foValue: The setting value of a failover configuration.
• foConfType: The Configuration Type of a failover configuration.
• foPartner: The Failover Partner of a failover configuration.
• foPrimaryIPRsource: The Primary Server IP Resource of a failover configuration.
• foSecondaryIPResource: The Secondary Server IP Resource of a failover configuration.
• foCheckInterval: The Self Check Interval of a failover configuration.
• foHearbeatInterval: The Heartbeat Interval of a failover configuration.
• foRecoverySetting: The Recovery Setting of a failover configuration.
• foState: The Failover State of a failover configuration.
• foPrimaryCrossLinkIP: The Primary Server CrossLink IP of a failover configuration.
• foSecondaryCrossLinkIP: The Secondary Server CrossLink IP of a failover configuration.


• foSuspended: The Suspended status of a failover configuration.
• foPowerControl: The Power Control of a failover configuration.
• fofcWWPN: The Fibre Channel WWPN of a failover configuration.

serverOption
• nasOption: Indicates the status of the NAS option (enabled or disabled) of the storage server.
• fibreChannelOption: Indicates the status of the Fibre Channel option (enabled or disabled) of the storage server.
• replicationOption: Indicates the status of the Replication option (enabled or disabled) of the storage server.
• syncMirroringOption: Indicates the status of the synchronized Mirroring option (enabled or disabled) of the storage server.
• timemarkOption: Indicates the status of the TimeMark option (enabled or disabled) of the storage server.
• zeroimpactOption: Indicates the status of the Zero Impact Backup option (enabled or disabled) on the storage server.

MTCPVersion The MTCP Version which the storage server uses.

performanceTable A table containing performance information for the host on which the storage server is running.
• performanceMirrorSyncTh: The Mirror Synchronization Throttle of the performance table.
• performanceSyncMirrorInterval: The Synchronize out-of-sync mirrors Interval of the performance table.
• performanceSyncMirrorRetry: The Synchronize out-of-sync mirrors retry times of the performance table.
• performanceSyncMirrorUpnum: The Synchronize out-of-sync mirrors up numbers at each interval of the performance table.
• performanceInitialMirrorSync: The option of starting initial synchronization when a mirror is added, of the performance table.
• performanceIncludeReplicaMirror: The option of including the replica mirror in the automatic synchronize process, of the performance table.
• performanceReplicationMicroScan: Indicates the MicroScan option of Replication (enabled or disabled) of the storage server.

serverRole The storage server role.

smioption The storage server SMI-S option.

ServerIPaliasTable A table containing the IP alias information of the host on which the storage server is running.
• ServerIPAliasIP: The storage server IP Alias.

PhysicalResources

numOfAdapters The amount of physical adapters configured by the storage server.

numOfDevices The amount of physical devices configured by the storage server.


scsiAdapterTable A table containing the information of all the installed SCSI adapters of the storage server.
• adapterNumber: The SCSI adapter number.
• adapterInfo: The model name of the SCSI adapter.

scsiDeviceTable A table containing all the SCSI devices of the storage server.
• deviceNo: The sequential digit number used as an index key of the device table.
• deviceType: Represents the access type of the device attached to the storage server.
• vendorID: The product vendor ID.
• produtcID: The product model name.
• firmwareRev: The firmware version of the device.
• adapterNo: The configured SCSI adapter number.
• channelNo: The configured SCSI channel number.
• scsiID: The configured SCSI ID.
• lun: The configured SCSI LUN number.
• totalSectors: The amount of sectors or blocks of the device.
• sectorSize: The size in bytes of each sector or block.
• totalSize: The size of the device represented in megabytes.
• configStatus: Represents the attaching status of the device.
• totalSizeQuantity: The quantity size of the device.
• totalSizeUnit: The size unit of the device. 0=KB. 1=MB. 2=GB. 3=TB.
• totalSectors64: The amount of sectors or blocks of the device.
• totalSize64: The size of the device represented in megabytes.

StoragePoolTable A table containing the Storage Pool information of the storage server.
• PoolName: The name of the Storage Pool.
• PoolID: The Pool ID of the Storage Pool.
• PoolType: The Pool Type of the Storage Pool.
• DeviceCount: The device count in the Storage Pool.
• PoolCount: The Storage Pool counts.
• PoolTotalSize: The total size of the Storage Pool.
• PoolUsedSize: The amount of Storage Pool space used.
• PoolAvailableSize: The available size of the Storage Pool.
• PoolTotalSizeQuantity: The total size quantity of the Storage Pool.
• PoolTotalSizeUnit: The total size unit of the Storage Pool. 0=KB. 1=MB. 2=GB. 3=TB.
• PoolUsedSizeQuantity: The amount of the Storage Pool used.
• PoolUsedSizeUnit: The unit of space used on the Storage Pool. 0=KB. 1=MB. 2=GB. 3=TB.
• PoolAvailableSizeQuantity: The available size quantity of the Storage Pool.
• PoolAvailableSizeUnit: The available size unit of the Storage Pool. 0=KB. 1=MB. 2=GB. 3=TB.
• PoolTatalSize64: The total size of the Storage Pool.
• PoolUsedSize64: The amount of Storage Pool space used.
• PoolAvailableSize64: The available size of the Storage Pool.


LogicalResources

numOfLogicalResources The number of logical resources, including SAN, NAS, and Replica devices, available on the storage server.

SnapshotReservedArea
• numOfSnapshotReserved: The amount of the shareable snapshot reserved areas.
• snapshotReservedTable: Table containing the snapshot reserved areas information.
• ssrName: The name of the snapshot reserved area.
• ssrDeviceName: The physical device name of the snapshot reserved area.
• ssrSCSIAddress: The SCSI address of the physical device on which the snapshot reserved area was created.
• ssrFirstSector: The first sector of the snapshot reserved area.
• ssrLastSector: The last sector of the snapshot reserved area.
• ssrTotalSectors: The amount of sectors that the snapshot reserved area created.
• ssrSize: The amount of resource size (represented in megabytes) of the snapshot reserved area.
• ssrSizeQuantity: The amount quantity of resource size of the snapshot reserved area.
• ssrSizeUnit: The resource size unit of the snapshot reserved area. 0=KB. 1=MB. 2=GB. 3=TB.
• ssrFirstSector64: The first sector of the snapshot reserved area.
• ssrLastSector64: The last sector of the snapshot reserved area.
• ssrTotalSector64: The amount of sectors that the snapshot reserved area created.
• ssrSize64: The amount of resource size (represented in megabytes) of the snapshot reserved area.


Logical Resources --> SANResources

numOfSANResources The number of SAN resources available from the storage server.

SANResourceTable A table containing the SAN resources information.
• sanResourceID : The SAN resource ID assigned by the storage server.
• sanResourceName : The SAN resource name created by the user.
• srAllocationType : Represents the resource type chosen when the user allocates the SAN device.
• srTotalSectors : The amount of sectors allocated by the SAN resource.
• srTotalSize : The amount of device size (represented in megabytes) of the SAN resource.
• srConfigStatus : Represents the attaching status of the SAN resource.
• srMirrorSyncStatus : Represents the mirror synchronization status of the SAN resource.
• srReplicaDevice : Represents the target replica server and device in the format <hostname of target>:<virtual device id>, if the replication option is enabled for the SAN resource.
• srReplicatingSchedule : Represents the current status of the replicating schedule (On-schedule, Suspended, or N/A) set for the SAN resource.
• srSnapshotCopyStatus : The snapshot copy status of the SAN resource.
• srPhysicalAllocLayoutTable : Table containing the physical layout information for the SAN resources.
• srpaSanResourceName : The SAN resource name created by the user.
• srpaSanResourceID : The SAN resource ID assigned by the storage server.
• srpaName : The physical device name.
• srpaType : Represents the type (Primary or Mirror) of the physical layout.
• srpaAdapterNo : The SCSI adapter number of the physical device.
• srpaChannelNo : The SCSI channel number of the physical device.
• srpaScsiID : The SCSI ID of the physical device.
• srpaLun : The SCSI LUN number of the physical device.
• srpaFirstSector : The first sector of the physical device which is allocated by the SAN resource.
• srpaLastSector : The last sector of the physical device which is allocated by the SAN resource.
• srpaSize : The amount of the allocated size (represented in megabytes) within a physical device.
• srpaSizeQuantity : The amount of the allocated size quantity within a physical device.
• srpaSizeUnit : The amount of the allocated size unit within a physical device. 0=KB. 1=MB. 2=GB. 3=TB.
• srpaFirstSector64 : The first sector of the physical device which is allocated by the SAN resource.
• srpaLastSector64 : The last sector of the physical device which is allocated by the SAN resource.
• srpaSize64 : The amount of the allocated size (represented in megabytes) within a physical device.


• srClientInfoTable : Table containing the SAN clients information.
• srClientNo : The SAN client ID assigned by the storage server.
• srcName : The SAN client name assigned by the storage server.
• srcSANResourceID : SAN resource ID assigned by the storage server.
• srcSANResourceName : The SAN resource name created by the user.
• srcAdapterNo : The adapter number of the SAN client.
• srcChannelNo : The channel number of the SAN client.
• srcScsiID : The SCSI ID of the SAN client.
• srcLun : The SCSI LUN number of the SAN client.
• srcAccess : SAN resource accessing mode assigned to the SAN client.
• srcConnAccess : The connecting and accessing status with a resource of the SAN client.
• srFCClientInfoTable : Table containing the Fibre Channel clients information.
• srFCClientNo : Fibre Channel client ID assigned by the storage server.
• srFCName : Fibre Channel client name assigned by the storage server.
• srFCSANResourceID : SAN resource ID assigned by the storage server.
• srFCSANResourceName : The SAN resource name created by the user.
• srFCInitatorWWPN : The world wide port name (WWPN) of the Fibre Channel client's initiator HBA.
• srFCTargetWWPN : The world wide port name (WWPN) of the Fibre Channel client's target HBA.
• srFCLun : The SCSI LUN number of the Fibre Channel client.
• srFCAccess : The SAN resource accessing mode assigned to the Fibre Channel client.
• srFCConnAccess : Identifies the connecting and accessing status with a resource of the Fibre Channel client.
• srSnapShotTable : Table containing the snapshot resources created by the SAN resource.
• srSnapShotResourceID : SAN resource ID assigned by the storage server.
• srSnapShotResourceName : SAN resource name created by the user.
• srSnapShotOption : The status represents the snapshot option (enabled or disabled) of the SAN resource.
• srSnapShotSize : The size allocated when the SAN resource was first created.
• srSnapShotThreshold : The threshold setting, in percentage (%) format, of the SAN resource.
• srSnapShotReachTh : The policy for expanding the resource automatically or manually when the threshold is reached.
• srSnapShotIncSize : The incremental size used each time the resource runs out of space. This is meaningful when the resource is expanded automatically.
• srSnapShotMaxSize : The maximum resource size, represented in megabytes, allowed for allocation.
• srSnapShotUsedSize64 : The resource size, represented in kilobytes, that has been used.
• srSnapShotFreeSize64 : The free resource size (represented in megabytes) before reaching the threshold.


• srSnapShotReclaimPolicy : The status represents the snapshot Reclaim option is enabled or disabled of the SAN resource.

• srSnapShotReclaimTime : The initial time when the snapshot Reclaim option is enabled of the SAN resource.

• srSnapShotReclaimInterval : The schedule interval to start the snapshot Reclaim of the SAN resource.

• srSnpaShotReclaimWaterMark : The threshold for the minimum amount of space that can be reclaimed per TimeMark of the SAN resource.

• srSnapShotReclaimMaxTime : The maximum time for the reclaim process of the SAN resource.

• srSnapShotShrinkPolicy : The status represents the snapshot Shrink option is enabled or disabled of the SAN resource.

• srSnapShotShrinkThresHold : The minimum disk space to shrink the snapshot resource of the SAN resource.

• srSnapShotShrinkMinSize : The minimum size for the snapshot resource to shrink.

• srSnapShotShrinkMinSizeQuantity : The minimum size quantity for the snapshot resource to shrink.

• srSnapShotShrinkMinSizeUnit : The minimum size unit for the snapshot resource to shrink. 0=KB. 1=MB. 2=GB. 3=TB.

• srSnapShotShrinkMinSize64 : The minimum size for the snapshot resource to shrink.

• srSnapShotResourceStatus : The snapshot resource status of the SAN resource.

• srTimeMarkTable : Table containing the TimeMark resources created by the SAN resource.

• srTimeMarkResourceID : The SAN resource ID assigned by the storage server.

• srTimeMarkResourceName : The SAN resource name created by the user.
• srTimeMarkOption : The status represents the TimeMark option (enabled or disabled) of the SAN resource.
• srTimeMarkCounts : The maximum number of TimeMarks that are allowed to be created for the SAN resource.
• srTimeMarkSchedule : The time interval at which one new TimeMark is created.
• srTimeMarkLastTimeStamp : The timestamp of the most recently created TimeMark.
• srTimeMarkSnapshotImage : The time of each day at which the snapshot-image is created automatically.
• srTimeMarkSnapshotNotificationOption : This option triggers the snapshot notification schedule.
• srTimeMarkReplicationOption : The replication option after the TimeMark is taken.
• srBackupTable : Table containing the backup resources created by the SAN resource.
• srBackupResourceID : The SAN resource ID assigned by the storage server.
• srBackupResourceName : The SAN resource name created by the user.
• srBackupOption : The status represents the backup option (enabled or disabled) of the SAN resource.


• srBackupWindow : The daytime allowed for opening one backup session.
• srBackupSessionLen : The time interval allowed for each backup session.
• srBackupRelativeTime : The time interval to wait before closing a backup session that is inactive.
• srBackupWaitTime : The time interval, represented in minutes, to wait before closing the backup session after completion.
• srBackupSelectCriteria : The snapshot image selection criteria, which can be new or latest, for the backup session. New means a new snapshot image is always created for backup; latest means the last created snapshot image is used for backup.

• srBackupRawDeviceName : The SAN Backup Resource Raw Device Name created by the user.

• srReplicationTable : Table containing the replication resources created by the SAN resource.

• srReplicationResourceID : The SAN resource ID assigned by the storage server.

• srReplicationResourceName : The SAN resource name created by the user.
• srReplicationOption : The status represents the replication option (enabled or disabled) of the SAN resource.
• srReplicaServer : The target replica server name.
• srReplicaDeviceID : The target replica device ID.
• srReplicaSchedule : Represents the current status of the replicating schedule (On-schedule, Suspended, or N/A) set for the SAN resource.
• srReplicaWatermark : The watermark set to generate one new replication automatically.
• srReplicaWatermarkRetry : The retry interval, represented in minutes, if the replication failed.
• srReplicaTime : The time of each day at which one new replication is created.
• srReplicaInterval : The time interval at which one new replication is created.
• srReplicationContinuousMode : The status represents the Continuous Mode of Replication (enabled or disabled).
• srReplicationCreatePrimaryTimeMark : Allows you to create the primary TimeMark when a replica TimeMark is created.
• srReplicaSyncTimeMark : Allows you to synchronize the replica TimeMark when a primary TimeMark is created.
• srReplicationProtocol : The protocol that Replication uses.
• srReplicationCompression : The status represents the Compression option (enabled or disabled) of Replication.
• srReplicationEncryption : The status represents the Encryption option (enabled or disabled) of Replication.
• srReplicationMicroScan : The status represents the MicroScan option (enabled or disabled) of Replication.
• srReplicationSyncPriority : The Priority setting for Replication synchronization of the SAN resource.
• srReplicationStatus : The Replication status of the SAN resource.
• srReplicationMode : The Replication mode of the SAN resource.


• srReplicationContinuousResourceID : The Continuous Replication Resource ID of the SAN resource.

• srReplicationContinuousResourceUsage : The Continuous Replication Resource Usage of the SAN resource.

• srReplicationDeltaData : The Accumulated Delta Data of replication of the SAN resource.

• srReplicationUseExistTM When Continuous Mode is disabled, the option about using existing TimeMark of the replication.

• srReplicationPreserveTm When Continuous Mode is disabled, the option about perserving TimeMark of the replication.

• srReplicaLastSuccessfulSyncTime : The last successful synchronize time of the replication.

• srReplicaAverageThroughput : The average throughput (MB/s) of the replication.

• srReplicaAverageThroughputQuantity : The average throughput quantity of the replication.

• srReplicaAverageThroughputUnit : The average throughput unit of the replication. 0=KB. 1=MB. 2=GB. 3=TB.

• srCacheTable : Table containing the cache resource created by the SAN device.

• srCacheResourceID : The SAN resource ID assigned by the storage server.
• srCacheResourceName : The SAN resource name created by the user.
• srCacheOption : The status represents the cache option (enabled or disabled) of the SAN resource.
• srCacheSuspend : Whether the cache resource is currently suspended or not.
• srCacheTotalSize : The allocated size when creating the cache resource.
• srCacheFreeSize : The free resource size (represented in megabytes) before reaching the maximum resource size.
• srCacheUsage : The percentage of the used resource size.
• srCacheThresHold : The amount of data that needs to be in the cache before cache flushing begins.
• srCacheFlushTime : The number of milliseconds before the cache begins to flush when below the data threshold level.
• srCacheFlushCommand : The number of outstanding commands that will be sent at one time during the flush process.
• srCacheSkipWriteCommand : This option allows the system to skip multiple pending write commands targeted for the same block.
• srCacheFlushSpeed : The flush speed used during the flush process.
• srCacheTotalSizeQuantity : The allocated size quantity when creating the cache resource.
• srCacheTotalSizeUnit : The allocated size unit when creating the cache resource. 0=KB. 1=MB. 2=GB. 3=TB.
• srCacheFreeSizeQuantity : The free resource size quantity before reaching the maximum resource size.

• srCacheFreeSizeUnit : The free resource size unit before reaching the maximum resource size. 0=KB. 1=MB. 2=GB. 3=TB.


• srCacheOwnResourceID : The Cache resource ID assigned by the storage server.

• srCacheTotalSize64 : The allocated size when creating the cache resource.
• srCacheFreeSize64 : The free resource size (represented in megabytes) before reaching the maximum resource size.
• srCacheStatus : The current SafeCache device status of the SAN resource.
• srWriteCacheproperty : The property represents whether write cache is enabled or disabled for the SAN resource.

• srMirrorTable : Table containing the mirror property created by the SAN device.

• srMirrorResourceID : The SAN resource ID that enables the mirror property.
• srMirrorType : The mirror type when a SAN resource enables the mirror property.
• srMirrorSyncPriority : The mirror synchronization priority when a SAN resource enables the mirror property.
• srMirrorSuspended : Whether the mirror is suspended.
• srMirrorThrottle : The mirror throttle value for the SAN resource.
• srMirrorHealthMonitoringOption : The status represents the mirror health monitoring option (enabled or disabled).
• srMirrorHealthCheckInterval : The interval to check and report mirror health status.
• srMirrorMaxLagTime : The maximum acceptable lag time for mirror I/O.
• srMirrorSuspendThPercent : Suspends mirroring when the threshold of failure reaches the percentage of the failure conditions.
• srMirrorSuspendThIOnum : Suspends mirroring when the number of outstanding I/Os is greater than or equal to the threshold.
• srMirrorRetryPolicy : The status represents whether the mirror synchronization retry policy is enabled or not.
• srMirrorRetryInterval : The mirror synchronization retry at the specified interval.
• srMirrorRetryActivity : The mirror synchronization retry when I/O activity is below or at the threshold.
• srMirrorRetryTimes : The maximum number of mirror synchronization retries.
• srMirrorSychronizationStatus : Represents the mirror synchronization status of the SAN resource.
• srMirrorAlterReadMirror : Represents the alternative read mirror option of the SAN resource.
• srMirrorAverageThroughput : The average throughput (MB/s) of the mirror synchronization operation.
• srMirrorAverageThroughputQuantity : The average throughput quantity of the mirror synchronization operation.
• srMirrorAverageThroughtputUnit : The average throughput unit of the mirror synchronization operation. 0=KB. 1=MB. 2=GB. 3=TB.
• srThinProvisionTable : Table containing the Thin Provision information of the SAN device.
• srThinProvisionOption : Represents the Thin Provisioning option (enabled or disabled) of the resource.


• srThinProvisionCurrAllocSize : Current Allocated Size of the Thin Provision resource on the storage server.

• srThinProvisionUsageSize : Current usage size of the Thin Provision resource.

• srThinProvisionUsagePercentage : Current usage percentage of the Thin Provision resource.

• srThinProvisionCurrAllocSizeQuantity : Current Allocated Size quantity of the Thin Provision resource on the storage server.

• srThinProvisionCurrAllocSizeUnit : Current Allocated Size unit of the Thin Provision resource on the storage server. 0=KB. 1=MB. 2=GB. 3=TB.

• srThinProvisionUsageSizeQuantity : Current usage size quantity of the Thin Provision resource.

• srThinProvisionUsageSizeUnit : Current usage size unit of the Thin Provision resource. 0=KB. 1=MB. 2=GB. 3=TB.

• srThinProvisionCurrAllocSize64 : Current Allocated Size of the Thin Provision resource on the storage server.

• srThinProvisionUsageSize64 : Current usage size of the Thin Provision resource.

• srCDPJournalTable : Table containing the CDP Journal resources created by the SAN resource.

• srCDPJournalResourceID : The CDP Journal ID assigned by the storage server.

• srCDPJournalSANResourceID : The CDP Journal SAN resource ID assigned by the storage server.

• srCDPJournalOption : The status represents the CDP Journal option (enabled or disabled) of the SAN resource.

• srCDPJournalTotalSize : The CDP Journal total size of the SAN resource.
• srCDPJournalStatus : The status represents the current CDP Journal of the SAN resource.
• srCDPJournalPerformanceLevel : The Performance level setting for the CDP Journal of the SAN resource.
• srCDPJournalTotalSizeQuantity : The CDP Journal total size quantity of the SAN resource.
• srCDPJournalTotalSizeUnit : The CDP Journal total size unit of the SAN resource. 0=KB. 1=MB. 2=GB. 3=TB.
• srCDPJournalTotalSize64 : The CDP Journal total size of the SAN resource.
• srCDPJournalAvalibleTimerange : The CDP Journal available time range of the SAN resource.
• srCDPJournalUsageSize : The CDP Journal usage size (MB) of the SAN resource.
• srCDPJournalUsagePercentage : The CDP Journal usage percentage of the SAN resource.
• srCDPJournalUsageQuantity : The CDP Journal usage size quantity of the SAN resource.
• srCDPJournalUsageUnit : The CDP Journal usage size unit of the SAN resource. 0=KB. 1=MB. 2=GB. 3=TB.

• srCDPJournalUsageSize64 : The CDP Journal Usage size of the SAN resource.


• srNearLineMirrorTable : Table containing the Near-Line Mirror property of the SAN device.

• srNearLineMirrorRemoteServerName : The remote server name of Near-Line mirror resource sets on the storage server.

• srNearLineMirrorRemoteServerAlias : The remote server Alias of Near-Line mirror resource sets on the storage server.

• srNearLineMirrorRemoteID : The remote resource ID of Near-Line mirror resource sets on the storage server.

• srNearLineMirrorRemoteGUID : The remote resource GUID of Near-Line mirror resource sets on the storage server.

• srNearLineMirrorRemoteSN : The remote resource serial number of Near-Line mirror resource sets on the storage server.

• srTotalSizeQuantity : The amount of device size quantity of the SAN resource.

• srTotalSizeUnit : The amount of device size unit of the SAN resource.
• srTotalSectors64 : The amount of sectors allocated by the SAN resource.
• srTotalSize64 : The amount of device size (represented in megabytes) of the SAN resource.
• srISCSIClientInfoTable : Table containing the iSCSI clients information.
• srISCSIClientNO : The iSCSI client ID assigned by the storage server.
• srISCSIName : The iSCSI client name assigned by the storage server.
• srISCSISANResourceID : The SAN resource ID assigned by the storage server.
• srISCSISANResourceName : The SAN resource name created by the user.
• srISCSIAccessType : The resource access type of the iSCSI client.
• srISCSIConnectAccess : Identifies the connecting and accessing status with a resource of the iSCSI client.
• srPhysicalTotalAllocLayoutTable : Table containing the total physical layout information for the SAN resources.
• srpaAllocSANResourceName : The SAN resource name created by the user.
• srpaAllocName : The physical device name.
• srpaAllocType : Represents the type (Primary or Mirror) of the physical layout.
• srpaAllocAdapterNo : The SCSI adapter number of the physical device.
• srpaAllocChannelNo : The SCSI channel number of the physical device.
• srpaAllocScsiID : The SCSI ID of the physical device.
• srpaAllocLun : The SCSI LUN number of the physical device.
• srpaAllocFirstSector : The first sector of the physical device which is allocated by the SAN resource.
• srpaAllocLastSector : The last sector of the physical device which is allocated by the SAN resource.
• srpaAllocFirstSector64 : The first sector of the physical device which is allocated by the SAN resource.

• srpaAllocLastSector64 : The last sector of the physical device which is allocated by the SAN resource.


• srpaAllocSize : The amount of the allocated size (represented in megabytes) within a physical device.

• srpaAllocSizeQuantity : The amount of the allocated size quantity within a physical device.

• srpaAllocSizeUnit : The amount of the allocated size unit within a physical device. The size unit of the device. 0=KB. 1=MB. 2=GB. 3=TB.

• srpaAllocSize64 : The amount of the allocated size (represented in megabytes) within a physical device.

• srHotZonePrefetchInfoTable : Table containing the HotZone Prefetch information.

• srHotZonePrefetchSANResourceID : The SAN resource ID that assigned by storage server.

• srHotZonePrefetchMaximumChains : The maximum number of sequential read chains to detect.

• srHotZonePrefetchMaximumReadAhead : The maximum size to read ahead, represented in KB.

• srHotZonePrefetchReadAhead : The size of the read command issued when reading ahead, represented in KB.

• srHotZonePrefetchChainTimeout : The time before the chain is removed and the readahead buffers are freed.

• srHotZoneReadCacheInfoTable : Table containing the HotZone Read Cache information.

• srHotZoneCacheResourceID : The Resource ID that assigned by storage server.

• srHotZoneCacheSANResourceID : The SANResource ID that assigned by storage server.

• srHotZoneCacheTotalSize : The amount of device size (represented in megabytes) of the HotZone read cache resource.

• srHotZoneCacheStatus : The status represents current HotZone read cache of the SAN resource.

• srHotZoneCacheSuspended : The Suspended status represents current HotZone read cache of the SAN resource.

• srHotZoneCacheAccesType : The zone's access type policy of the SAN resource.

• srHotZoneCacheAccessIntensity : The access intensity to determine how the zone is accessed.

• srHotZoneCacheMinimumStayTime : The minimum time that how long a zone can stay at least in the HotZone before it is swapped out.

• srHotZoneCacheEachZoneSize : The size of each zone setting.
• srHotZoneCacheTotalZones : The total zones that are allocated for the SAN resource.
• srHotZoneCacheUsedZones : Current used zones of the SAN resource.
• srHotZoneCacheHitRatio : The hit ratio of the current HotZone read cache of the SAN resource.

• srHotZoneCacheTotalSizeQuantity : The amount of device size quantity (represented in megabytes) of the HotZone read cache resource.


• srHotZoneCacheTotalSizeUnit : The amount of device size unit of the HotZone read cache resource. 0=KB. 1=MB. 2=GB. 3=TB.
• srHotZoneCacheTotalSize64 : The amount of device size (represented in megabytes) of the HotZone read cache resource.


Logical Resources --> replicaResources

numOfReplica The amount of replica resources created by the storage server.

ReplicaResourceTable A table containing the replica resources.
• rrVirtualID : The resource ID assigned by the storage server.
• rrVirtualName : The resource name created by the user.
• rrAllocationType : Represents the resource type chosen when the user allocates the resource.
• rrSectors : The amount of sectors allocated by the resource.
• rrTotalSize : The amount of device size (represented in megabytes) of the resource.
• rrConfigurationStatus : Represents the attaching status of the resource.
• rrGUID : The GUID string of the replica resource.
• rrPrimaryVirtualID : Represents the source replication server and device in the format <hostname of source>:<virtual device id>, if the replication option is enabled for the resource.
• rrReplicationStatus : Represents the current status (Replication failed, New, Idle, and Merging) of the replication schedule.
• rrLastStartTime : The latest timestamp of the replication.
• rrMirrorSyncStatus : Represents the mirror synchronization status of the resource.
• rrWriteCache : Represents the write cache option (enabled or disabled) of the resource.
• rrThinProvisionOption : Represents the Thin Provisioning option (enabled or disabled) of the resource.
• rrThinProvisionCurrAllocSize : Current Allocated Size of the resource which enables Thin Provisioning.
• rrThinProvisionUsageSize : Current usage size of the resource which enables Thin Provisioning.
• rrThinProvisionUsagePercentage : Current usage percentage of the resource which enables Thin Provisioning.
• rrTotalSizeQuantity : The amount of device size quantity of the resource.
• rrTotalSizeUnit : The amount of device size unit of the resource. 0=KB. 1=MB. 2=GB. 3=TB.
• rrThinProvisionCurrAllocSizeQuantity : Current Allocated Size quantity of the resource which enables Thin Provisioning.
• rrThinProvisionCurrAllocSizeUnit : Current Allocated Size unit of the resource which enables Thin Provisioning. 0=KB. 1=MB. 2=GB. 3=TB.
• rrThinProvisionUsageSizeQuantity : Current usage size quantity of the resource which enables Thin Provisioning.
• rrThinProvisionUsageSizeUnit : Current usage size unit of the resource which enables Thin Provisioning. 0=KB. 1=MB. 2=GB. 3=TB.
• rrSectors64 : The amount of sectors allocated by the resource.
• rrTotalSize64 : The size of the resource (represented in megabytes).


• rrThinProvisionCurrAllocSize64 : Current Allocated Size of the resource which enables Thin Provisioning.

• rrThinProvisionUsageSize64 : Current usage size of the resource which enables Thin Provisioning.

• rrLastSuccessSyncTime : The last successful synchronize timestamp of the replication.

• rrAverageThroughput : The average throughput (MB/s) of the replication.
• rrAverageThroughputQuantity : The average throughput quantity of the replication.
• rrAverageThroughputUnit : The average throughput unit of the replication. 0=KB. 1=MB. 2=GB. 3=TB.

ReplicaPhyAllocLayoutTable A table containing the physical layout information for the replica resources.
• rrpaVirtualID : The replica resource ID assigned by the storage server.
• rrpaVirtualName : The replica resource name created by the user.
• rrpaName : The physical device name.
• rrpaType : Represents the type (Primary or Mirror) of the physical layout.
• rrpaSCSIAddress : The SCSI address in <Adapter:Channel:SCSI:LUN> format of the replica resource.
• rrpaFirstSector : The first sector of the physical device which is allocated by the replica resource.
• rrpaLastSector : The last sector of the physical device which is allocated by the replica resource.
• rrpaSize : The amount of the allocated size (represented in megabytes) within a physical device.
• rrpaSizeQuantity : The amount of the allocated size quantity within a physical device.
• rrpaSizeUnit : The amount of the allocated size unit within a physical device. 0=KB. 1=MB. 2=GB. 3=TB.
• rrpaFirstSector64 : The first sector of the physical device which is allocated by the replica resource.
• rrpaLastSector64 : The last sector of the physical device which is allocated by the replica resource.
• rrpaSize64 : The amount of the allocated size (represented in megabytes) within a physical device.

Logical Resources --> Snapshot Group Resources

numOfGroup The amount of snapshot groups created by the storage server.


snapshotgroupInfoTable
• snapshotgroupName : The user-created snapshot group resource name.
• snapshotgroupType : The property of the snapshot group, which can be one of the following types: TimeMark, backup, replication, TimeMark + backup, TimeMark + replication, backup + replication, and TimeMark + backup + replication.

• snapshotgroupTimeMarkInfoTable : Table containing the TimeMark properties of snapshot groups.

• snapshotgroupTimeMarkGroupID : The snapshot group resource ID assigned by the storage server.

• snapshotgroupTimeMarkOption : The status represents the TimeMark option (enabled or disabled) of the snapshot group resource.

• snapshotgroupTimeMarkCounts : The maximum number of TimeMarks allowed to be created of the snapshot group resource.

• snapshotgroupTimeMarkSchedule : The time interval to create one new TimeMark.

• snapshotgroupTimeMarkSnapshotImage : The time of day that the snapshot-image is automatically created.

• snapshotgroupTimeMarkSnapshotNotificationOption : The option of triggering snapshot notification schedule.

• snapshotgroupTimeMarkReplicationOption : The replication option after the TimeMark is taken.

• snapshotgroupBackupInfoTable : Table containing the backup properties of snapshot groups.

• snapshotgroupBackupGroupID : The snapshot group resource ID assigned by the storage server.

• snapshotgroupBackupOption : The status represents the backup option (enabled or disabled) of the snapshot group resource.

• snapshotgroupBackupWindow : The daytime allows for opening one backup session.

• snapshotgroupBackupSessionLen : The time interval allows for one backup session in each time.

• snapshotgroupBackupRelativeTime : The wait time before closing a backup session which is in an inactive status.

• snapshotgroupBackupWaitTime : The wait time (represented in minutes) before closing a backup session after completion.

• snapshotgroupBackupSelectCriteria : The snapshot image selection criteria, which can be new or latest, for the backup session. New means a new snapshot image is always created for backup, and latest means the last created snapshot image is used for backup.

• snapshotgroupReplicationInfoTable : Table containing the replication properties of snapshot groups.

• snapshotgroupReplicationGroupID: The snapshot group resource ID assigned by the storage server

• snapshotgroupReplicationOption : The status represents the replication option (enabled or disabled) of the snapshot group resource.


• snapshotgroupReplicaServer : The target replica server name.

• snapshotgroupReplicaGroupID : The target replica group ID.

• snapshotgroupReplicaWatermark : The watermark set to generate one new replication automatically.

• snapshotgroupReplicaTime : The time of day at which one new replication is created each day.

• snapshotgroupReplicaInterval : The time interval at which one new replication is created.

• snapshotgroupReplicawatermarkRetry : The retry interval (represented in minutes) if the replication fails.

• snapshotgroupReplicaContinuousMode : The status represents the Continuous Mode of Replication (enabled or disabled).

• snapshotgroupReplicaCreatePrimaryTimeMark : Allows you to create the primary TimeMark when a replica TimeMark is created.

• snapshotgroupReplicaSyncTimeMark : Allows you to synchronize the replica TimeMark when a primary TimeMark is created.

• snapshotgroupReplicaProtocol : The protocol that Replication uses.

• snapshotgroupReplicaCompression : The status represents the Compression option (enabled or disabled) of Replication.

• snapshotgroupReplicaEncryption : The status represents the Encryption option (enabled or disabled) of Replication.

• snapshotgroupReplicaMicroScan : The status represents the MicroScan option (enabled or disabled) of Replication.

• snapshotgroupReplicaSyncPriority : The priority setting used when Replication synchronizes the SAN resource.

• snapshotgroupReplicaMode : The Replication mode of the SAN resource.

• snapshotgroupReplicaUseExistTM : When Continuous Mode is disabled, the option to use an existing TimeMark for the replication.

• snapshotgroupReplicaPreserveTM : When Continuous Mode is disabled, the option to preserve the TimeMark of the replication.

snapshotgroupCDPInfoTable : A table containing the CDP properties of snapshot groups.

• snapshotgroupCDPInfoGroupID : The snapshot group resource ID assigned by the storage server.

• snapshotgroupCDPInfoOption : The status represents the snapshot group CDP Journal option (enabled or disabled) of the storage server.

• snapshotgroupCDPInfoTotalSize : The total size of the snapshot group CDP Journal of the storage server.

• snapshotgroupCDPInfoStatus : The status of the snapshot group CDP Journal of the storage server.

• snapshotgroupCDPInfoPerformanceLevel : The performance level setting of the snapshot group CDP Journal of the storage server.

• snapshotgroupCDPInfoAvailableTimerange : The available time range of the snapshot group CDP Journal of the storage server.

• snapshotgroupCDPInfoUsageSize : The usage size (MB) of the snapshot group CDP Journal of the storage server.


• snapshotgroupCDPInfoUsagePercent: The usage percentage of snapshot group CDP Journal of the storage server.

• snapshotgroupCDPInfoTotalSizeQuantity : The total size quantity of snapshot group CDP Journal of the storage server.

• snapshotgroupCDPInfoTotalSizeUnit : The total size unit of snapshot group CDP Journal of the storage server. 0=KB. 1=MB. 2=GB. 3=TB.

• snapshotgroupCDPInfoTotalSize64 : The total size 64 bit long of snapshot group CDP Journal of the storage server.

• snapshotgroupCDPInfoUsageSizeQuantity : The usage size quantity of snapshot group CDP Journal of the storage server.

• snapshotgroupCDPInfoUsageSizeUnit : The usage size unit of snapshot group CDP Journal of the storage server. 0=KB. 1=MB. 2=GB. 3=TB.

• snapshotgroupCDPInfoUsageSize64 : The usage size 64 bit long of snapshot group CDP Journal of the storage server.


snapshotgroupSafeCacheInfoTable

A table containing the safecache properties of snapshot groups.

• snapshotgroupSafeCacheInfoGroupID : The snapshot group resource ID assigned by the storage server.

• snapshotgroupSafeCacheInfoOption : The status represents the snapshot group safecache option (enabled or disabled) of the storage server.

• snapshotgroupSafeCacheInfoSuspend : Indicates whether the group safecache resource is currently suspended.

• snapshotgroupSafeCacheInfoTotalSize : The allocated size when creating the cache resource.

• snapshotgroupSafeCacheInfoFreeSize : The free resource size (represented in megabytes) before reaching the maximum resource size.

• snapshotgroupSafeCacheInfoUsage : The percentage of the used resource size.

• snapshotgroupSafeCacheInfoThreshold : The amount of data that needs to be in the cache before cache flushing begins.

• snapshotgroupSafeCacheInfoFlushTime : The number of milliseconds before the cache begins to flush when below the data threshold level.

• snapshotgroupSafeCacheInfoSkeipWriteCommands : This option allows the system to skip multiple pending write commands targeted for the same block.

• snapshotgroupSafeCacheInfoFlushSpeed : The flush speed setting, which determines how much data is sent at one time during the flush process.

• snapshotgroupSafeCacheInfoTotalSizeQuantity : The allocated size quantity when creating the cache resource.

• snapshotgroupSafeCacheInfoTotalSizeUnit : The allocated size unit when creating the cache resource. 0=KB. 1=MB. 2=GB. 3=TB.

• snapshotgroupSafeCacheInfoFreeSizeQuantity : The free resource size quantity before reaching the maximum resource size.

• snapshotgroupSafeCacheInfoFreeSizeUnit : The free resource size unit before reaching the maximum resource size. 0=KB. 1=MB. 2=GB. 3=TB.

• snapshotgroupSafeCacheInfoResourceID : The Cache resource ID assigned by the storage server.

• snapshotgroupSafeCacheInfoTotalSize64 : The allocated size when creating the cache resource.


• snapshotgroupSafeCacheInfoFreeSize64 : The free resource size (represented in megabytes) before reaching the maximum resource size.

• snapshotgroupSafeCacheInfoStatus : The status of the snapshot group safecache of the storage server.


snapshotgroupMembers : The snapshot group member counts of the storage server.

• snapshotgroupAssignClients : The snapshot group assigned client counts of the storage server.

• snapshotgroupCacheOption : The status represents the snapshot group cache option (enabled or disabled) of the storage server.

• snapshotgroupReplicationOption : The status represents the snapshot group replication option (enabled or disabled) of the storage server.

• snapshotgroupTimeMarkOption : The status represents the snapshot group TimeMark option (enabled or disabled) of the storage server.

• snapshotgroupCDPOption : The status represents the snapshot group CDP option (enabled or disabled) of the storage server.

• snapshotgroupBackupOption : The status represents the snapshot group backup option (enabled or disabled) of the storage server.

• snapshotgroupSnapShotOption : The status represents the snapshot group snapshot notification option (enabled or disabled) of the storage server.

snapshotgroupMemberTable • snapshotgroupMemberTableGroupID: The snapshot group resource ID assigned by the storage server.

• snapshotgroupMemberTableName : Virtual resource name created by the user.
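Once the FalconStor MIB definitions have been made available to your SNMP manager, these objects can be read with any standard SNMP tool. The following is a rough illustration only; the community string, server address, MIB location, and the assumption that numOfGroup is read as scalar instance .0 are placeholders to adjust for your environment:

# Walk the snapshot group information table (assumes the FalconStor MIB
# file has been copied into the manager's MIB directory, e.g. /usr/share/snmp/mibs).
snmpwalk -v 2c -c public -m ALL 10.1.1.100 snapshotgroupInfoTable

# Read the number of snapshot groups created by the storage server.
snmpget -v 2c -c public -m ALL 10.1.1.100 numOfGroup.0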


Email Alerts (updated March 2013)

Email Alerts is a unique FalconStor customer support utility that proactively identifies and diagnoses potential system or component failures and automatically notifies system administrators via email.

With Email Alerts, the performance and behavior of servers can be monitored so that system administrators are able to take corrective measures within the shortest amount of time, ensuring optimum service uptime and IT efficiency.

Using pre-configured scripts (called triggers), Email Alerts monitors a set of pre-defined, critical system components (SCSI drive errors, offline device, etc.).

With its open architecture, administrators can easily register new elements to be monitored by these scripts.

Configure Email Alerts

Email Alerts can be configured to meet your business needs. You can specify who should be notified about which events. Triggers can be defined using any combination of the scripts listed below. For example, a trigger can be used to monitor a particular thin disk or all thin disks.

To configure Email Alerts:

1. In the Console, right-click on your storage server and select Options --> Enable Email Alerts.

2. Enter general information for your Email Alerts configuration.


• SMTP Server - Specify the mail server that Email Alerts should use to send out notification emails.

• SMTP Port - Specify the mail server port that Email Alerts should use.

• SMTP Username/Password - Specify the user account that will be used by Email Alerts to log into the mail server.

• User Account - Specify the email account that will be used in the “From” field of emails sent by Email Alerts.

• Target Email - Specify the email address of the account that will receive emails from Email Alerts. This will be used in the “To” field of emails sent by Email Alerts.

• CC Email - Specify any other email accounts that should receive emails from Email Alerts.

• Subject - Specify the text that should appear on the subject line. The general subject defined during setup will be followed by the trigger-specific subject. If the trigger does not have a subject, the trigger name and parameters are appended to the general email subject. For the syslogchk.pl trigger, the first alert category is appended to the general email subject. If the email is sent based on event severity, the event ID is appended to the general email subject.

• Interval - Specify the time period between each activation of Email Alerts.

• The Test button allows you to test the configuration by sending a test email.

3. Enter the contact information that should appear in each Email Alerts email.


4. Set the triggers that will cause Email Alerts to send an email.


Triggers are the scripts/programs that perform various types of error checking when Email Alerts activates. By default, FalconStor includes scripts/programs that check for low system memory, changes to the CDP/NSS XML configuration file, and relevant new entries in the system log.

The following are some of the default scripts that are provided:

• activity.pl - (Activity check) - This script checks to see if an fsstats activity statistics file exists. If it does, an email alert is sent with the activity file attached.

• cdpuncommiteddatachk.pl -t 90 - This script checks for uncommitted data on CDP and generates an email alert message if the percentage of uncommitted data is more than that specified. By default, the trigger gets activated when the percentage of uncommitted data is 90%.

• chkcore.sh 10 (Core file check) - This script checks to see if a new core file has been created by the operating system in the bin directory of CDP/NSS. If a core file is found, Email Alerts compresses it, deletes the original, and sends an email report but does not send the compressed core file (which can still be large). If there are more than 10 (variable) compressed core files under the $ISHOME/bin directory, it keeps the latest 10 compressed core files and deletes the oldest ones.

• defaultipchk.sh eth0 10.1.1.1 (NIC IP address check) - This script checks that the IP address for the specified NIC matches what is specified here. If it does not, Email Alerts sends an email report. You can add multiple defaultipcheck.sh triggers for different NICs (for example eth1 could be used in another trigger). Be sure to specify the correct IP address for each NIC.

• diskusagechk.sh / 95 (Disk usage check) - This script checks the disk space usage at the root of the file system. If the percentage is over the specified percentage (default is 95), Email Alerts sends an email report. You can add multiple diskusagechk.sh triggers for different mount points (for example, /home could be used in another trigger).

• fccchk.pl - (QLogic HBA check) - This script checks each QLogic adapter initiator port and sends an email alert if there is a status change from Online to Not Online. The script also checks QLogic link status and sends an email alert if the status of FC Link Down changes from OK to Not OK.

• fmchk.pl and smchk.pl - These scripts (for checking if the fm and ipstorsm modules are responding) are disabled.

• ipstorstatus.sh (IPStor status check) - This script checks if any module of CDP/NSS has stopped. If so, Email Alerts sends an email report.

Note: If the system log is rotated prior to the Email Alerts checking interval and contains any triggers but the new log does not have any triggers in it, then no email will be sent. This is because only the current log is checked, not the previous log.


• kfsnmem.sh 10 (CDP/NSS memory management check) - This script checks to see if the maximum number of memory pages has been set. If not, Email Alerts sends an email report. If it is set, the script checks the available memory pages. If the percentage is lower than the specified percentage (default is 10), Email Alerts sends an email report.

• memchk.sh (Memory check) - This script takes in a percentage as the parameter and checks whether the available system memory is below this percentage. If yes, Email Alerts sends an email report.

• netconfchk.pl - (Inactive network interfaces/invalid broadcasts check) - This script uses the ifconfig command to check network configuration once a day (by default) and sends an email alert if there are any network devices set to '_tmp' or any broadcast addresses that do not match the IP and netmask rules.

• neterrorchk.pl - (Network configuration check) - This script uses the ifconfig command to check network configuration and sends an email alert if there are any network errors, overruns, dropped events, or network collisions.

• powercontrolchk.pl - This script checks the system configuration file and reports a missing power control setting in a failover setup once a day, by default.

• processchk.pl - (System process check) This script checks system processes (via the ps command) and sends an email alert if there are processes using more than 1 GB of non-swapped physical memory. This script also sends an email alert if there are processes using more than 90% of CPU usage.

• promisecheck.pl - (Promise storage check) - This script checks events reported by Promise storage hardware every 10 minutes (by default) and sends an email alert if there is an event with a category other than Info. This trigger needs to be enabled on-site and requires the IP address and user/password account needed to access the storage via ssh. The ssh service must be enabled and started on the Promise storage.

• repmemchk.sh (Memory check) - This script checks memory usage by continuous replication resources. If data in the CDR resource is using more than 1 GB of kernel memory, it triggers an email alert.

• reportheartbeat.pl (Heartbeat check) - This script checks to see if the server is active. If it is, Email Alerts sends an email every 24 hours, by default, to report that the server is alive. You can change the default interval with the parameter ‘-interval<value in minutes>’.

• reposit_check.pl - This script checks the configuration repository’s current configuration and generates an email alert if it is not up to date. This trigger applies only to failover pairs; it does not generate an email alert for a CDP/NSS server that has a quorum repository but is not in failover mode.

• serverstatus.sh (Server status check) - This script checks the server module status. If any module has stopped, an email alert is sent.


• snapshotreschk.pl (Snapshot resource area usage check) - This script checks the snapshot resource area usage. If the usage reaches the actual percentage threshold minus the margin value (default 10%), an email alert is sent to warn users to take remedial action before the actual threshold is reached.

• swapcheck.pl 80 (Memory swap usage check) - This script checks available swap memory. If the percentage is below the specified value (default 80), an email alert is sent with the total swap space and the swap usage.

• syslogchk.pl (System log check) - This script looks at the last 20 MB of messages in the system log. If any message matches what was defined on the System Log Check dialog and does not match what was defined on the System Log Ignore dialog, an email alert is sent with an attachment that includes all files in $ISHOME/etc and $ISHOME/log. If you want to limit the number of email alerts for the same System log event or category of events, set the -memorize parameter to the number of minutes to remember each event. If the same event is detected in the previous Email Alerts interval, no email alert is sent for that event. If an event is detected several times during the current interval, the first occurrence is reported in the email that is sent for that interval and the number of repetitions is indicated at the end of the email body with the last occurrence of the message. The default value is the same as the Email Alerts interval that was set on the first dialog (or the General tab if Email Alerts is already configured). Some of the common events checked by syslogchk.pl are as follows:
• Fail over to the partner
• Take over the partner
• Replication failure
• Mirrored primary device failure
• Mirror device failure
• Mirror swap
• SCSI Error
• Stack
• Abandoned commands
• FC pending commands
• Busy FC
• Storage logout
• iSCSI client reset because of commands stuck in IO Core
• Kernel error
• Kernel memory swap

• thindevchk.pl -t 200 -s 200 -n 48 - This script monitors total free storage, storage pool free space, free space for thin device expansion, and the number of segments of a thin device (an example invocation is shown after this list). The trigger parameters are:

• -t threshold of percentage of global free space: if the (global free storage space/global total storage space) is less than the given percentage, send an alert.


• -i threshold of percentage of free space of each storage pool: if the (free storage space/total storage space) of any storage pool is less than the given percentage, send an alert.

• -s threshold of free space for expansion of thin-provisioning devices: if the available GB of storage to expand each thin-provisioning device is less than the given value, send an alert. If the thin device VID is provided by "-v", only that device is checked.

• -v vid: The vid of a thin-provisioning device that needs to be checked for free storage for expansion.

• -n threshold of number of segments of a thin-provisioning disk: If the number of segments on primary disk or mirror disk of a thin-provisioning device exceeds the given threshold, send an alert.

• -interval: enter this parameter followed by the number of minutes to trigger this script every n minutes. This parameter applies to all triggers. This interval overrides the global setting.

• tmkusagechk - This script monitors TimeMark memory usage. It checks the values of 'Low Total Memory' and 'Total Memory reserved by IOCore'. When TimeMark memory usage goes over the lower of these two values, by the percentage defined in the trigger, an Email Alert is generated.

• xfilechk.sh - This script checks for changes in executable files on the server and sends notifications. If an executable file is added, removed, renamed, or modified, it sends an email alert. It does not monitor non-executable files.

• zombiechk.pl (Defunct process check) - This script checks system processes once a day (by default) and sends an email alert if there are 10 (default) or more defunct processes.
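As an illustration of how the thindevchk.pl parameters described above can be combined, a trigger entry might look like the following. The threshold values are examples only, not recommended settings:

thindevchk.pl -t 10 -i 10 -s 200 -n 48 -interval 60

With these values, an alert would be sent if global free space or any storage pool’s free space falls below 10%, if a thin-provisioning device has less than 200 GB available for expansion, or if a thin device’s primary or mirror disk exceeds 48 segments; the trigger would run every 60 minutes regardless of the global Email Alerts interval.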

5. Select the components that will be included in the X-ray if an X-ray is to be sent by a trigger.

Note: Because of its size (minimum of 2 MB), the X-ray file is not sent by any standard trigger.


The following options are available to customize your x-ray.

• System Information - When this option is selected, the X-ray creates a file called info which contains information about the entire system, including: host name, disk usage, operating system version, mounted file systems, kernel version, CPU, running processes, IOCore information, uptime, and memory. In addition, if an IPMI device is present in the server, the X-ray info file will also include the following files for IPMI:
• ipmisel - IPMI system event log
• ipmisensor - IPMI sensor information
• ipmifru - IPMI built-in FRU information

• IPStor Configuration - This information is retrieved from the /usr/local/ipstor/etc/<hostname> directory. All configuration information (ipstor.conf, ipstor.dat, IPStorSNMP.conf, etc.), except for shared secret information, is collected.

• SCSI Devices - SCSI device information included in the info file.

• IPStor Virtual Device - Virtual device information included in the info file.

• Fibre Channel - Fibre Channel information.

• Log File - The Linux system message file, called messages, is located in the /var/log directory. All storage server messages, including status and error messages, are stored in this file.

• Loaded Kernel - Loaded kernel modules information is included in the info file.

• Network Configuration - Network configuration information is included in the info file.

• Kernel Symbols - This information is collected in the event it will need to be used for debugging purposes.


• Core File - The /usr/local/ipstor path will be searched for any core files that might have been generated to further help in debugging reported problems.

• Scan Physical Devices - Physical devices will be scanned and information about them will be included. You can select to Scan Existing Devices or Discover New Devices.

6. Indicate the terms that should be tracked in the system log by Email Alerts.

The system log records important events or errors that occur in the system, including those generated by CDP/NSS. This dialog allows you to rule out entries in the system log that have nothing to do with CDP/NSS, and to list the types of log entries generated by CDP/NSS that Email Alerts needs to examine. Entries that do not match the entries entered here are ignored, regardless of whether or not they are relevant to CDP/NSS.

The trigger for monitoring the system log is syslogchk.pl. To inform the trigger of which specific log entries need to be captured, you can specify the general types of entries that need to be inspected by Email Alerts. On the next dialog, you can enter terms to ignore, thereby eliminating entries that match these general types, yet can still be disregarded. The resulting subset contains all entries for which Email Alerts needs to send out email reports.

Each line is a regular expression. The regular expression rules follow the pattern for AWK (a standard Unix utility).
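For example, System Log Check entries might look like the following. These patterns are illustrative only and should be adjusted to the exact log messages you need to capture:

kernel:.*(SCSI error|I/O error)
ipstor.*(failover|takeover)
replication.*fail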

Note: By default, the system log file is included in the X-ray file which is not sent with each notification email.


7. Indicate which categories of internal messages should not be included.

By default, all categories are disabled except syslog.ignore.customized. If a category is checked, any messages related to that category are ignored.

Select the Customized System Log Ignore tab to add customized ignore entries. You can enter terms to ignore, thereby eliminating entries that will cause Email Alerts to send out email reports.

Each line is a regular expression. The regular expression rules follow the pattern for AWK (a standard Unix utility).


8. Select the severity level of server events for which you want to receive an email alert.

By default, the alert severity level is set to None. You can select one of the following severity levels:

• Critical - checks only the critical severity level.
• Error - checks the error and any severity level higher than error.
• Warning - checks the warning and any severity level higher than warning.
• Informational - checks all severity levels.

9. Confirm all information and then click OK to enable Email Alerts.


Modify Email Alerts properties

Once Email Alerts is enabled, you can modify the information by right-clicking on your storage server and selecting Email Alerts.

Click on the appropriate tab to update the desired information.

• The General tab displays server and message configuration and allows you to send a test email.

• The Signature tab allows you to edit the contact information that appears in each Email Alerts email.

• The Trigger tab allows you to set triggers that will cause Email Alerts to send an email as well as set up an alternate email.

• The Attachment tab allows you to select the information (if any) to send with the email alert. You can send log files or X-Ray files.

• The System Log Check tab allows you to add, edit, or delete syntax from the log entries that need to be captured. You can also specify the general types of entries that need to be inspected by Email Alerts.

• The System Log Ignore tab allows you to select system log entries to ignore, thereby eliminating entries that will cause Email Alerts to send out email reports.


Email format

The email body contains the messages returned by the triggers. The alert text starts with the category followed by the actual message coming from the system log. The first 30 lines are displayed. If the email body is more than 16 KB, it will be compressed and sent as an attachment to the email. The signature defined during Email Alerts setup appears at the end of the email body.

Limiting repetitive emails

To limit repetitive emails, you have the option to limit the number of email alerts for the same event ID. By using the -memorize parameter for the syslogchk.pl trigger, you can have the Email Alerts module memorize the IDs and timestamps of events for which an alert is sent.

In this case, an event detected with the same event ID as an event in the previous interval will not trigger an email alert for that same event. However, if an event is detected several times during the current checking interval, all those events are reported in the email that is sent for that interval.

The -memorize parameter for the syslogchk.pl trigger allows you to set the trigger memorization logic and the number of hours to remember each event. The default value is 24 hours, which results in sending alerts for the same event once a day.

Script/program trigger information

Email Alerts uses script/program triggers to perform various types of error checking. By default, FalconStor includes several scripts/programs that check for low system memory, changes to the CDP/NSS XML configuration file, and relevant new entries in the system log.

Custom email destination

You can specify an email address to override the default Target Email or a text subject to override the default Subject. To do this:

1. Right-click on your storage server and select Email Alerts --> Trigger tab.


2. For an existing trigger, highlight the trigger and click Edit.

The alternate email address along with the Subject is saved to the $ISHOME/etc/callhome/trigger.conf file when you have finished editing.

New script/program

The trigger can be a shell script or a program (Java, C, etc.). If you create a new script/program, you must add it to the $ISHOME/etc/callhome/trigger.conf file so that Email Alerts knows of its existence.

Return codes

Return codes determine what happens as a result of the script’s/program’s execution. The following return codes are valid:

• 0: No action is required and no email is sent.
• 1: Email Alerts sends email without any attachments.
• 2: Email Alerts attaches all files in $ISHOME/etc and $ISHOME/log to the email.
• 3: Email Alerts sends the X-ray file as an attachment (which includes all files in $ISHOME/etc and $ISHOME/log). Because of its size (minimum of 2 MB), it is recommended that you do not attach the X-ray file to the notification email sent for a trigger.

The $ISHOME/etc directory contains a CDP/NSS configuration file (containing virtual device, physical device, HBA, database agent, etc. information). The $ISHOME/log directory contains Email Alerts logs (containing events and output of triggers).

Output from trigger

In order for a trigger to send useful information in the email body, it must redirect its output to the environment variable $IPSTORCLHMLOG.

Sample script

The following is the content of the storage server status check trigger, ipstorstatus.sh:

Note: If you specify an email address, it overrides the return code. Therefore, no attachment will be sent, regardless of the return code.


#!/bin/sh
RET=0
if [ -f /etc/.is.sh ]
then
    . /etc/.is.sh
else
    echo Installation is not complete. Environment profile is missing in /etc.
    echo
    exit 0    # don't want to report error here so have to exit with error code 0
fi
$ISHOME/bin/ipstor status | grep STOPPED >> $IPSTORCLHMLOG
if [ $? -eq 0 ] ; then
    RET=1
fi
exit $RET

If any CDP/NSS module has stopped, this trigger generates a return code of 2 and sends an attachment of all files under $ISHOME/etc and $ISHOME/log.
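As a further illustration of these conventions, the following is a minimal sketch of a custom trigger. The script name checktmpusage.sh and the 90% threshold are hypothetical; the script writes its message to $IPSTORCLHMLOG and uses return code 1 as described above, and it would still need to be registered in $ISHOME/etc/callhome/trigger.conf.

#!/bin/sh
# Hypothetical custom trigger: alert when /tmp usage exceeds a threshold.
RET=0
THRESHOLD=90
# df -P prints one data line for the filesystem; field 5 is the use percentage.
USAGE=`df -P /tmp | awk 'NR==2 {gsub("%",""); print $5}'`
if [ "$USAGE" -gt "$THRESHOLD" ]; then
    # Text written to $IPSTORCLHMLOG appears in the email body.
    echo "/tmp usage is ${USAGE}%, above the ${THRESHOLD}% threshold." >> $IPSTORCLHMLOG
    RET=1    # return code 1: send an email without attachments
fi
exit $RET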


BootIP

FalconStor’s boot over IP service for Windows and Linux-based storage servers allows you to maximize business continuity. BootIP enables IT managers to provision disk storage and its related services to achieve maximum return on investment (ROI).

BootIP leverages the proven SAN management infrastructure and storage services available in FalconStor’s network storage infrastructure to ensure business continuity, high availability and effective disaster recovery planning.

Set up BootIP

Setting up BootIP involves several steps, which are outlined below:

1. Prepare a sample computer with the operating system and all the applications installed.

2. Install CDP or NSS on a server computer.

3. Install Microsoft iSCSI initiator boot version and DiskSafe on the sample computer.

The Microsoft iSCSI Software Initiator enables connection of a Windows host to an external iSCSI storage array using Ethernet NICs. The boot version supports configurations that boot Windows Server 2003/Vista/2008 hosts.

When installing Microsoft iSCSI Software Initiator, check the item “Configure iSCSI Network Boot Support” and select the network interface driver for the NIC that will be used to boot via iSCSI.

4. Install the FalconStor Management Console.

You can also create a boot image for client computers that do not have disks. To do this, you need to prepare a computer to be used for your boot image.

1. Make sure everything is installed on the computer, including the operating system and the applications that the client computers will use.

2. Once you have prepared the computer, use DiskSafe to back up the computer to create a boot image for diskless client computers.

3. After preparing the boot image, create TimeMarks from the boot image, and then mount the TimeMarks as individual TimeViews and respectively assign them to the diskless computers.

4. Configure the diskless computers to boot up from the network.

Using DiskSafe, you can clone a boot image from the sample computer and place the image on an IPStor-managed virtual disk. You can then set up BootIP on the server and use the boot image to boot the diskless client computers.


Prerequisites

A valid Operating System (OS) image must be prepared for iSCSI remote boot. The conditions of a valid OS image for an iSCSI boot client are listed below:

The OS must be one of the following:

• Windows 2003 with Microsoft iSCSI initiator boot version installed.
• Windows Vista SP1 with Microsoft iSCSI initiator enabled manually.
• Windows 2008 with Microsoft iSCSI initiator enabled.
• The network adapter used by remote boot must be certified by Microsoft for the iSCSI Boot Component Test.
• In Local Area Connection Properties of this network adapter, Internet Protocol (TCP/IP) must be checked.
• In Windows 2003, make sure the iSCSI Boot (BootIP) sequence is correct using the command “c:\iscsibcg /verify /fix”.
• Make sure the network interface card is the first boot device in the client machine’s BIOS.
• In addition to a valid OS image and client BIOS configuration, the following properties should be set for the mirrored iSCSI disk before remote boot:
• Assign LUN 0 to the iSCSI disk used for remote boot.
• The iSCSI disk must be assigned to the first iSCSI target with the smallest target ID.
• If the iSCSI disk contains Windows 2008 or Windows Vista OS, the iSCSI disk’s disk signature changed by DiskSafe during backup must be changed back to the original signature to match the local disk backed up by DiskSafe. You can use the following IPStor iscli command to change the disk signature: # iscli setvdevsignature -s 127.0.0.1 -v VID -F

Note: The VID should be the virtual device ID of the iSCSI disk.


Create a boot image for a diskless client computer

To create a boot image that can be used to boot up a single diskless client computer, follow the steps below:

1. Prepare the storage and user access for the storage server from the FalconStor Management Console. For details, see ‘Initialize the configuration of the storage Server’.

2. Enable the IPStor BootIP via the FalconStor Management Console. For details, see ‘Enable the BootIP from the FalconStor Management Console’.

3. Create a boot image by using DiskSafe to clone a virtual disk and set up the BootIP properties from the FalconStor Management Console. For details, see ‘Use DiskSafe to clone a boot image’, ‘Set BootIP properties’, and ‘Set the Recovery Password’.

4. Shut down the sample computer and remove the system disk.

5. Boot up the iSCSI disk remotely on the original client computer. For details, see ‘Remote boot the diskless computer’.

6. Use the System Preparation Tool to configure automatic deployment of the Windows OS on your remote boot client computer. For details, refer to ‘Use the Sysprep tool’.

7. Create a TimeMark of the boot image. For details, see ‘Create a TimeMark’.

8. Create a TimeView from the TimeMark. For details, see ‘Create a TimeView’.

9. Assign the TimeView to this SAN client. For details, see ‘Assign a TimeView to a diskless client computer’.

10. Set up the BootIP properties from the FalconStor Management Console. For details, see ‘Set BootIP properties’.

11. Boot up the diskless computer client remotely.


Initialize the configuration of the storage Server

Initializing the configuration of the storage server involves several steps, including:

• Entering the license keycodes
• Preparing the storage and adding your virtual device to the storage pool
• Creating an IPStor user account
• Selecting users who will have access to the storage pool you have created

Enable the BootIP from the FalconStor Management Console

You will need to enable the BootIP function before you can use it. You must also set the BootIP properties of the SAN clients. To do this:

1. Log into the storage server from the FalconStor Management Console.

2. Right-click on the [HostName] and select Options --> Enable BootIP.

3. If you have an external DHCP server, DHCP should not be enabled on the storage server; keep the Enable DHCP option unchecked.

4. Click OK to start the BootIP daemon.

Use DiskSafe to clone a boot image

You can use DiskSafe to clone a boot image to be used at a later date. To do this:

1. While running DiskSafe on the sample computer, right-click on Disks and select Protect.

2. Click Next to launch the Protect Disk Wizard.

3. Choose the system disk and click Next.

4. Click New Disk…

5. Click Add Server.

6. Enter the storage server name (or IP), user name, and password; check the iSCSI protocol. Then click OK.

7. Click OK to allocate the disk.

8. Click Next to complete the remaining wizard settings.

9. After synchronization has finished, right-click on the disk you protected and select Advanced --> Take Snapshot.

When the disk is protected by DiskSafe, an IPStor-managed virtual disk containing the boot image is generated and assigned to the sample computer from the FalconStor Management Console.


Set BootIP properties

To set the BootIP properties, follow the instructions below.

1. From FalconStor Management Console, navigate to SAN Clients.

2. Right-Click on the Client host name and select Boot properties.

The Boot Properties dialog box appears.

3. Select the Boot type as BootIP.

The options become available.

4. Uncheck Boot from the local disk.

5. Optional: Select the Boot from Local Disk check box if you want the computer to boot up locally by default.

6. Type the MAC address of the remote boot client and click OK.

Set the Recovery Password

Once you have finished setting DiskSafe protection, you can set two authentication modes for remote boot:

• Unauthenticated mode
• CHAP mode

Set the Recovery password from the iSCSI user management

To set the Recovery password from the iSCSI user management, follow the instructions below:

1. Right-click on the [Server Host Name] and select iSCSI Users.

An iSCSI user management window displays.

2. Select the appropriate iSCSI user.

3. Click Reset CHAP secret, type the secret, confirm it and click OK.

Set the authentication and Recovery password from iSCSI client properties

You can also set the authentication and Recovery password from iSCSI client properties. To do this:

1. Navigate to [Client Host Name] and expand it.

2. Right-Click on iSCSI and select Properties. An iSCSI Client Properties window displays.

3. Select User Access to set authentication.

4. Optional: Select Allow unauthenticated access. The user does not need to authenticate for remote boot.


5. Optional: Select users who can authenticate for the client. You will be prompted to enter the user name, CHAP secret and confirm CHAP secret. You will also be prompted to type the Recovery password for remote boot.

6. Click OK.

Remote boot the diskless computer

For Windows 2003

To enable your client computer to boot remotely, you need to configure the BIOS of the computer and set the network interface card (NIC) as the first boot device. For details about configuring the BIOS, refer to the user documentation of your main board.

1. After shutting down the sample computer, remove the system disk.

2. Boot up the diskless sample computer.

3. The client will boot from the network and get its IP address from the DHCP server.

4. Press F8 to enter the boot menu.

5. If you did not press F8, the default auto-selection should be Remote Boot (gPXE); press Enter.

6. The client then starts booting remotely.

For Windows Vista/2008

If the iSCSI disk contains Windows 2008 or Windows Vista OS, the iSCSI disk’s disk signature changed by DiskSafe during backup must be changed back to the original signature so that it is the same as the local disk backed up by DiskSafe.

You can use the following IPStor iscli command to change the disk signature:

# iscli setvdevsignature -s 127.0.0.1 -v VID -F

VID is the virtual device ID of the mirror disk or TimeView device. You can confirm the VID on the General tab of the SAN Resource mirror disk, or of the TimeView you assigned for remote boot, in the FalconStor Management Console.

Note: Mutual CHAP secret is not currently supported for iSCSI authentication.


Use the Sysprep tool

Sysprep is a Microsoft tool that allows you to automate a successful Windows operating system deployment on multiple computers. Once you have performed the initial setup steps on a single machine, you can run Sysprep to prepare the sample computer for cloning.

The Factory mode of Sysprep is a method of pre-configuring installation options to reduce the number of images to maintain. You can use the Factory mode to install additional drivers and applications at the stage after the reboot that follows Sysprep.

Normally, running Sysprep as the last step in the pre-installation process prepares the computer for delivery. When rebooted, the computer displays Windows Welcome or Mini–Setup.

By running Sysprep with the –factory option, the computer reboots in a network–enabled state without starting Windows Welcome or Mini–Setup. In this state, Factory.exe processes its answer file, Winbom.ini, and performs the following actions:

1. Copies drivers from a network source to the computer.

2. Starts Plug and Play enumeration.

3. Stages, installs, and uninstalls applications on the computer from source files located on either the computer or a network source.

4. Adds customer data.

For Windows 2003:

To prepare a reference computer for Sysprep deployment in Windows 2003, follow these steps:

1. On a reference computer, install the operating system and any programs that you want installed on your destination computers.

2. Click Start, click Run, type cmd, and then click OK.

3. At the command prompt, change to the root folder of drive C, and then type md Sysprep to create the Sysprep folder.

4. Open the Deploy.cab file and copy the Sysprep.exe file and the Setupcl.exe file to the Sysprep folder.

If you are using the Sysprep.inf file, copy this file to the Sysprep folder. In order for the Sysprep tool to function correctly, the Sysprep.exe file, the Setupcl.exe file, and the Sysprep.inf file must all be in the same folder.

For remote boot, add “LegacyNic=1” to the Sysprep.inf file under the [Unattended] section, as shown in the example below.
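A minimal illustration of the relevant portion of a Sysprep.inf file follows; the rest of the file, generated by the Setup Manager tool as described later in this chapter, is omitted here:

; Other sections and keys generated by Setup Manager are omitted.
[Unattended]
    LegacyNic=1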

5. To run the Sysprep tool, type the following command at the command prompt:


sysprep /<optional parameter>

If you run the Sysprep.exe file from the %systemdrive%\Sysprep folder, the Sysprep.exe file removes the folder and the contents of the folder.

6. In the System Preparation Tool, choose Shutdown as the shutdown mode and click Reseal to prepare the computer.

The computer should shut down by itself.

7. Optional: You can use Snapshot Copy or a TimeView, assign it to the other clients, and remote boot to initialize the other Windows 2003 systems.

Use the Setup Manager tool to create the Sysprep.inf answer file

Once you have automated the deployment of Windows 2003, you can use the Sysprep.inf file to customize the initial Windows settings, such as user name, organization, host name, product key, networking components, workgroup, time zone, etc.

To install the Setup Manager tool and to create an answer file, follow these steps:

1. Navigate to the Deploy.cab file that you replaced and double-click on it to open it.

2. On the Edit menu, click Select All.

3. On the Edit menu, click Copy to Folder.

4. Click Make New Folder and enter a name for the Setup Manager folder. For example, type setup manager, and then press Enter.

5. Click Copy.

6. Open the new folder that you created, and double-click the Setupmgr.exe file.

The Windows Setup Manager Wizard launches.

7. Follow the instructions in the wizard to create a new answer file.

8. Select the Sysprep setup type to generate the Sysprep.inf file.

9. Select Yes, fully automate the installation.

Later, you will be prompted to enter the license keycode.

10. Select to automatically generate computer name or specify a computer name.

11. Save the Sysprep.inf file to C:\Sysprep\.

12. Click Finish to exit the Setup Manager wizard.

Note: For a list of parameters, see the "Sysprep parameters" section at http://technet.microsoft.com/en-us/library/cc758953.aspx.


For Windows Vista/2008

Use the Windows System Image Manager to create the Sysprep.xml answer file

In order to begin creating a Sysprep.xml file, you will need to load a Windows Image File (WIM). Install the Automated Installation Kit (AIK) for Windows Vista SP1 and Windows Server 2008:

http://www.microsoft.com/downloads/details.aspx?FamilyID=94bb6e34-d890-4932-81a5-5b50c657de08&DisplayLang=en

Prepare a reference computer for Sysprep deployment

1. On a reference computer, install the operating system and any programs that you want installed on your destination computers.

2. Use DiskSafe to clone the system disk to the storage server.

3. Boot the mirror disk remotely (setting the related BootIP configuration).

4. Open the Windows System Image Manager (Start --> All Programs --> Microsoft Windows AIK --> Windows System Image Manager).

5. Copy Install.wim from the product installation package (source) to your disk.

6. Create a catalog on the WAIK.

7. On the File menu, click Select Windows Image.

8. Navigate to the location where you saved install.wim, and then click Open.

You are prompted to select an image.

9. Select the appropriate version of Windows Vista/2008, and then click OK.

10. On the File menu, click New Answer File.

11. If a message displays that a catalog does not exist, click OK to create one.

12. From the Windows image, choose the proper components.

13. From the answer file, you can set the following options:
• Auto-generate a computer name
• Add or edit Organization and Owner Information
• Set the language and locale
• Set the initial tasks screen not to show at logon
• Set server manager not to show at logon
• Set the Administrator password
• Create a second administrative account and set the password
• Run a post-image configuration script under the administrator account at logon
• Set automatic updates to not configured (to be configured post-image)
• Configure the network location
• Configure screen color/resolution settings
• Set the time zone


1. Press Ctrl+S, choose C:\Windows\System32\Sysprep\ as the save location, and enter sysprep.xml as the file name.

2. Click Save to continue.

3. Navigate to C:\Windows\System32\Sysprep and enter one of the following:

sysprep /generalize /oobe /shutdown /unattend:sysprep.xml
or
sysprep /generalize /audit /shutdown /unattend:sysprep.xml

To apply the settings in auditSystem and auditUser, boot to Audit mode by using the sysprep /audit command. The machine will shut down, and you can then use Snapshot Copy from the FalconStor Management Console to clone the mirror disk and remote boot to initialize the other Windows Vista/2008 systems.

Create a TimeMark

Once your boot image has been created and is on the storage server, it can be used as a base image for your diskless client computers. You will need to create separate boot images for each computer that you want to boot up remotely.

In order to create a separate boot image for a computer, you need to create a TimeMark of the base image first, then create a TimeView from the TimeMark. The TimeView can be assigned to an individual client computer for remote boot.

To create a TimeMark of the base boot image:

1. Launch the FalconStor Management Console if you have not done so yet.

2. Select your virtual disk under SAN Resources.

3. Right-click on the disk and select TimeMark --> Enable.

A message box appears, prompting you to create the SnapShot Resource for your virtual disk.

4. Click OK and follow the instructions of the Create SnapShot Resource Wizard to create the SnapShot Resource.

5. Click Finish when you are done with the creation process. The Enable TimeMark Wizard appears.

6. Click Next and specify the schedule information if you want to create TimeMarks regularly.

You can skip the next two steps if you have specified the schedule information as TimeMarks will be created automatically based on your schedule.

7. Click Finish when you are done.

The Wizard closes and you are returned to the main window of FalconStor Management Console.

Note: /generalize must be run. After reboot, a new SID is created and the clock for Windows activation resets.


8. From the FalconStor Management Console, right-click your virtual disk and select TimeMark --> Create.

The Create TimeMark dialog box appears.

9. Type a comment for the TimeMark and click OK.

The TimeMark is created.

Create a TimeView

After creating a TimeMark of your base boot image, you can create a TimeView from the TimeMark, and then assign the TimeView to a diskless computer for remote boot.

To create a TimeView from a TimeMark:

1. Start the FalconStor Management Console - if it is not running yet.

2. Right-click your virtual disk and select TimeMark --> TimeView. The Create TimeView dialog box appears.

3. Select the TimeMark from which you want to create a TimeView and click OK.

4. Type a name for the TimeView in the TimeView Name box and click OK.

The TimeView is created.

Assign a TimeView to a diskless client computer

After creating a TimeView from your base boot image, you can assign it to a specific diskless client computer so that the computer can be booted up remotely from the TimeView.

To assign a TimeView to a client computer for remote boot, you must perform the following tasks in FalconStor Management Console:

1. Add a SAN Client.

2. Assign the TimeView to the SAN Client.

3. Associate the SAN Client with a diskless computer and configure it for remote boot.

Add a SAN Client

1. Start the FalconStor Management Console if you have not done so yet.

2. Right-click SAN Clients and select Add.

The Add Client Wizard appears.

Note: Only one TimeView can be created per TimeMark. If you want to create multiple TimeViews for multiple diskless computers, you will need to create multiple TimeMarks from the base boot image first.


3. Click Next and enter a name in the Client Name box.

4. Select SAN/IP as the protocol for the client and then click Next.

5. Review the settings and click Finish.

The SAN Client is added.

Assign a TimeView to the SAN Client

1. Start the FalconStor Management Console if you have not done so yet.

2. Right-click the TimeView and select Assign.

The Assign a SAN Resource Wizard appears.

3. Click Next and assign LUN0 to the target.

4. Click Next and review the settings, then click Finish.

The BootIP boots the image that is assigned to the smallest target ID with LUN 0.

Recover Data via Remote boot

DiskSafe is used to protect the client’s system and data disks/partitions. In the event of system failure, the client can boot up from the iSCSI disk or selected TimeView, including the OS image, and restore the system or disk data to the local disk or new disk using DiskSafe.

A valid operating system image is prepared for DiskSafe to clone to the iSCSI disk for remote boot.

To recover data using DiskSafe when the client boots up from an iSCSI disk, refer to ‘Remote boot the diskless computer’ on page 467.

1. After remotely booting, hot plug-in the local disk (original disk) to restore.

2. Rescan the disks from disk management.

3. Open the DiskSafe console and remove the existing system disk protection on DiskSafe.

4. Create a new DiskSafe protection to the recovery disk by right-clicking on the disk and selecting Protect.

5. Select remote boot disk (disk 0) to be the Primary disk then click Next.

Note: Only LUN 0 is supported for iSCSI remote boot.

For Windows Vista/2008 only: Before recovering the system to the local disk, you must flip the disk signature first for local boot. Type the IPStor command # iscli setvdevsignature -s 127.0.0.1 -v VID -F. The VID should be the virtual device ID of the remote boot disk.


6. Check Allow mirror disks with existing partitions to restore to the original disk, and then click Yes.

7. Select the original primary disk from the eligible mirror disks list and click Next.

The system will warn you that the mirror disk is a local disk.

8. Click Yes.

9. Finish the protect disk wizard and DiskSafe starts to synchronize the current data to local disk.

10. Once synchronization has finished and the restore process succeeds, you can shut down the server normally.

11. Disable BootIP for the iSCSI client or set Boot from local disk.

12. Local boot the client with the disk you restored.

13. Once the system successfully boots up, open the DiskSafe Management Console and remove the protection that you just created for recovery.

14. Re-protect the disk.

Make sure your boot up disk is from the FALCON IPSTOR DISK SCSI Disk Device. To do this, navigate to Disk Management and right-click on the first disk (Disk 0). It should show FALCON IPSTOR DISK SCSI Disk Device.

Note: After the remote boot, verify the status of services and applications to make sure everything is up and ready after start up.


Remotely boot the Linux Operating System

Remotely install CentOS to an iSCSI disk

Remote boot the iSCSI disk and install CentOS 5.x on it. Before you begin, make sure you have a CentOS 5.x installation package and have prepared a diskless client computer with a NIC adapter that supports PXE boot.

Remote boot from the FalconStor Management Console

From the FalconStor Management Console:

1. Right-click on SAN Clients and add a customized client name.

The Add Client Wizard displays.

2. Select the iSCSI protocol and click Next.

3. Click Add to add the iSCSI initiator name and click OK.

4. Check the iSCSI initiator name you created and click Next.

5. Set the authentication for the client to Allow unauthenticated access and click Next.

6. Leave the client IP address empty and finish the Add Client Wizard.

7. Create a new (empty) SAN resource with a size of 6 to 10 GB (depending upon the size of the Linux system).

8. Assign the new SAN resource to the client machine.

9. From the FalconStor Management Console, navigate to SAN Clients, right-click on the client host name you added, and select Boot properties.

The Boot Properties dialog box appears.

10. Select the Boot type as BootIP.

The options become available.

11. Keep Boot from the local disk unchecked.

12. Type the MAC address of the diskless client and click OK.


Remote boot from the Client

1. For the diskless client, set the boot sequence in the BIOS to boot from PXE first and then from the DVD ROM.

2. Boot up the client machine remotely and launch the CentOS 5.x installation package at the same time.

After the remote boot, the installation package starts loading.

3. Select Advanced storage configuration when prompted to select the drive(s) to use for installation.

4. Select Add iSCSI target and click Add drive.

The Enable network interface wizard appears.

5. Keep the default setting, and click OK.

The Configure iSCSI Parameters wizard appears.

6. Enter the Target IP Address (your storage server IP) and click Add target.

7. Click Yes to initialize the iSCSI disk and erase all data.

A sda disk (FALCON IPSTOR DISK) in drive list displays.

8. If you would like to review and modify the partitioning layout, check that option and click Next.

9. Finish the installation setup wizard and install the OS.

10. Once the installation finishes, click Reboot and remote boot again to boot up CentOS from the iSCSI disk.

BootIP and DiskSafe

If you plan to perform BootIP before using DiskSafe to protect a system running in a Windows 2008 R2 environment, refer to Microsoft knowledge base article KB 976042 (http://support.microsoft.com/kb/976042) and unbind the WFP Lightweight Filter from the NIC before protecting your system disk.

Remote boot and DiskSafe

To perform remote boot for DiskSafe version 3.7 snapshot images, there is no need to perform a flip disk signature operation. You can simply mount the snapshot as a TimeView, assign it to the corresponding SAN client, and perform a remote boot.

Note: Per Microsoft (http://technet.microsoft.com/zh-tw/library/ee619722%28WS.10%29.aspx), PXE boot from an iSCSI disk on client versions of Windows, such as Windows Vista® or Windows 7, is not supported.


Troubleshooting / FAQs

This section contains ‘Error codes’ and helps you through some frequently asked questions and issues that may be encountered when setting up and running the CDP/NSS storage network. Click on one of the following topics for more information:

Frequently Asked Questions (FAQ)

The following tables contain some general and specific questions and answers that may arise while managing your CDP or NSS servers.

Troubleshooting topics

• ‘Logical resources’
• ‘Failover’
• ‘Fibre Channel target mode and storage’
• ‘Network connectivity’
• ‘Replication’
• ‘iSCSI Downstream Configuration’
• ‘NIC Port Bonding’
• ‘TimeMark Snapshot’
• ‘SCSI adapters and devices’
• ‘Virtual devices’
• ‘SafeCache’
• ‘Service-Enabled Devices’
• ‘Storage server X-ray’
• ‘SNMP’
• ‘Multipathing method: MPIO vs. MC/S’
• ‘Storage Server’
• ‘BootIP’
• ‘Cross-mirror failover on a virtual appliance’
• ‘Event log’
• ‘Windows client debug information’

Question Answer

Why did my storage server not automatically start after rebooting? What should I do?

If your CDP or NSS server detects a configuration change during startup, autostart will not occur without user interaction. Typing YES allows the server to continue to start. Typing NO prevents the server from starting. Typing nothing (no user input) results in the server aborting the auto start process. If the server does not automatically start after a reboot, you can manually start it from the command line using the ipstor start all command.

Why are my snapshot resources marked off-line?

Snapshot resources will be marked off-line if the physical resource they were created from is disconnected from a single server in a failover set prior to failing over to the secondary server.

Why does it take so long (several minutes) for my Solaris SAN client to load?

When the Client starts, it reads all of the LUN entries in the /kernel/drv/sd.conf file. It can take several minutes for the client to load if there are a lot of entries. It is recommended that the /kernel/drv/sd.conf file only contain entries for LUNs that are physically present so that time is not spent scanning LUNs that are not there.
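As an illustration only (the target and LUN numbers below are hypothetical), a trimmed /kernel/drv/sd.conf that lists just the LUNs actually presented to the client might look like this:

# /kernel/drv/sd.conf - keep only entries for LUNs that are physically present
name="sd" class="scsi" target=1 lun=0;
name="sd" class="scsi" target=1 lun=1;

After editing the file, Solaris typically requires a reconfiguration boot (boot -r) for the change to take effect.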

I used the rpm –e command to uninstall the storage server. How do I now remove the IPStor directory and its subdirectories?

In order to remove them, execute the following command from the /usr/local directory:

rm -rf IPStor



How can I make sure information is updated correctly if I change storage server IP addresses using a third-party utility, like yast?

You can change storage server IP addresses through the FalconStor Management Console using System Maintenance --> Network Configuration.

I changed the hostname of the storage server. Why are all block devices now marked offline and appear as foreign devices?

You cannot change the hostname of the storage server if you are using block devices.

My IPStor directory and its subdirectories are still visible after using rpm –e to uninstall the storage server. How do I remove them?

In order to remove them, execute the following command from the /usr/local directory:

rm -rf IPStor

I changed a storage server IP address using yast. Why was the information not updated correctly?

Changing a storage server IP address using a third-party utility is not supported. You will need to change storage server IP addresses via the console System Maintenance --> Network Configuration.

Why am I having trouble completing offline license activation?

If you are unable to complete offline activation successfully, try the following solutions:

1. In order to prevent the possibility of unsuccessful email delivery to the FalconStor activation server, disable Delivery Status Notification (DSN) before you send the activation request email to [email protected].

2. If you do not receive a reply to your offline activation email from the FalconStor activation server within one hour after sending it, check your email encoding and change it to UNICODE (UTF-8) if set otherwise, then send the email again.

NIC Port Bonding

Question Answer

What if I need to change an IP address for NIC port bonding?

During the bonding process, you will have the option to enter/select a new IP address. Right-click on the server and select System Maintenance --> Bond NIC Port.

Event log

Question Answer

Why is the event log displaying event messages as numbers rather than text?

You may be low on space. Check to make sure that there is at least 5 MB of free space on the file system on which the console is installed. If not, free up some space.



SNMP


Question Answer

The trap, ucdShutdown, appears as a raw message at the management console. Is this a problem?

When stopping the SNMP daemon, the daemon itself will issue a trap, ucdShutdown. You can ignore the extra trap.

How do I load the MIB file?

To load the MIB file, navigate to $ISHOME/etc/snmp/mibs/IPSTOR-MIB.TXT and copy the IPSTOR-MIB.TXT file to the machine running the SNMP manager.
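For example, with a net-snmp based manager (a sketch only; the community string and MIB directory below are assumptions and will differ per installation), you could copy the MIB into the manager's MIB directory and walk the storage server:

# copy the MIB to the SNMP manager's MIB directory (path varies by distribution)
cp IPSTOR-MIB.TXT /usr/share/snmp/mibs/
# load the MIB by name and walk the storage server
snmpwalk -v 2c -c public -m +IPSTOR-MIB <storage-server-ip> enterprises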

Virtual devices

Question Answer

Why won’t my virtual device expand?

You may have exceeded your quota. If you have a set quota and have allocated a disk greater than your quota, then when you enable any feature that uses auto-expansion (i.e. a Snapshot Resource or CDP), those resources will not expand because the quota has been exceeded.

FalconStor Management Console

Question Answer

Why am I getting an error while attempting to install the FalconStor Management Console?

If you experience an error installing the FalconStor Management Console, select the Install Windows Console link again and select Save Target or Save link in the browser. Then right-click the installation package name and select Properties. In the Program Compatibility tab, check Run this program as administrator.

Now that I have installed the console, why will it not launch?

The console might not launch under the following conditions:
• System display settings are configured for 16 colors.
• The install path contains characters such as !, %, {, }.
• A “Font specified in font.properties not found” message is displayed. This indicates that the jdk font.properties file is not properly set for the Linux operating system. To fix this, change the font.properties file to get the correct symbol font name by replacing all lines containing “--symbol-medium-r-normal--*-%d-*-*-p-*-adobe-fontspecific” with “--standard symbols l-medium-r-normal--*-%d-*-*-p-*-urw-fontspecific”.
• The console must be run from a directory with “write” access. Otherwise, the host name information and message log file retrieved from the storage server cannot be saved to the local directory. As a result, the console will display event messages as numbers and console options will not be able to be saved.


Multipathing method: MPIO vs. MC/S

Question Answer

When should I use Microsoft Multipath I/O (MPIO) vs. Multiple Connections per Session (MC/S) for multipathing?

While MPIO is usually the preferred method for multipathing, there are a number of things to consider when deciding whether to use MCS or Microsoft MPIO for multipathing.

• If your configuration uses hardware iSCSI HBA then Microsoft MPIO should be used.

• If your target does not support MCS, then Microsoft MPIO should be used. (Most iSCSI target arrays support Microsoft MPIO.)

• If you need to specify different load balance policies for different LUNs then Microsoft MPIO should be used.

Reasons for using MCS include the following:
• If your target does support MCS and you are using the Microsoft software initiator driver, then MCS is the best option. There may be some exceptions where you desire a consistent management interface among multipathing solutions and already have other Microsoft MPIO solutions installed that may make Microsoft MPIO an alternate choice in this configuration.

• If you are using Windows XP or Windows Vista, MCS is the only option since Microsoft MPIO is only available with Windows Server SKUs.

What are the advantages and disadvantages for using each method?

The advantages of using Microsoft MPIO are that MPIO is a “tried and true” method of multipathing that supports software and hardware iSCSI initiators (HBAs). MPIO also allows you to mix protocols (iSCSI/FC). In addition, each LUN can have its own load balance policy. The disadvantage is that an extra multipathing technology layer is required.

The advantages of using MCS are that MCS is part of the iSCSI specification and there is no extra vendor multipathing software required.

The disadvantages of using MCS are that this method is not currently supported by iSCSI initiator HBAs, or for MS software initiator boot. Another shortfall is the load balance policy is set on a per-session basis; thus all LUNs in an iSCSI session share the same load balance policy.


What is the default MPIO timeout and how do I change it?

The default MPIO timeout is 20 seconds. This is usually enough time, but there are certain situations where you may want to increase the timeout value.

For example, when configuring multipathing with MPIO in a Windows 2008 environment, you may need additional time to enable Windows 2008 to survive a failover taking more than 20 seconds.

To increase the timeout value, you will need to modify the PDORemovePeriod, the setting that controls the amount of time (in seconds) that the multipath pseudo-LUN will continue to remain in system memory, even after losing all paths to the device.

To increase the timeout values, follow the steps below:

• // increase disk timeout from the default 60 seconds to 5 minutes

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue: 300

• // increase iSCSI timeout from the default 60 seconds to 5 minutes

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Class\{4D36E97B-xxxxxxxxxxxxxxxx}\xxxx\Parameters\MaxRequestHoldTime: 300

• // due to the increased disk timeout, enable NOPOut to detect connection failures early

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Class\{4D36E97B-xxxxxxxxxxxxxxxx}\xxxx\Parameters\EnableNOPOut: 1
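The disk timeout can also be set from an elevated command prompt. The sketch below is an assumption rather than a documented procedure, and it covers only the Disk key; the iSCSI initiator values live under a Control\Class\{4D36E97B-...} instance whose exact key must be located in regedit first:

rem Set the disk timeout to 300 seconds (5 minutes)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue /t REG_DWORD /d 300 /f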


BootIP

Question Answer

Why does Windows keep logging off during remote boot?

This happens when you remote boot the mirror disk while the local disk is still inside the machine. Try to re-protect (or re-sync) the local disk.

How do I confirm if the system has booted remotely?

Go to Disk Management and right-click on Disk 0. It should show that the disk is a FalconStor IPStor disk, not the local disk.

Can I change the IP address after remote boot?

No, you cannot change the IP address because iSCSI needs the original IP address for communication.

Why do I sometimes see a blue screen (error code 0x0000007B) after a remote boot?

Check if the boot sequence is correct by typing the following command on the sample computer before remotely booting:

#iscsibcg /verify /fix

Why do the following messages display on the screen during a PXE boot and not allow a boot to the iSCSI disk?

• Registered as BIOS driver 0x80
• Booting from BIOS drive
• Boot failed
• Unregistering BIOS drive 0x80
• No more network devices

These messages show that the system cannot boot from the disk successfully. Check to make sure you are using the boot disk, that the mirror disk has been synced completely, and that you have protected the correct system disk or system partition.

Is iSCSI boot supported in a UEFI environment?

No, this version does not support iSCSI boot in a Unified Extensible Firmware Interface (UEFI) BIOS environment.


SCSI adapters and devices

Since CDP and NSS rely on SCSI devices for storage, it is often helpful to be able to discover the state of the SCSI adapters and devices locally attached to the storage server. Verification requires that the administrator be logged into the storage server. Refer to ‘Log into the CDP/NSS server’.

Question Answer

How do I verify that the SCSI adapters and devices on the storage server are in a healthy state?

If you do not see the appropriate driver for your SCSI adapter, it may not have been loaded properly or it may have been unloaded. Once it is determined that the SCSI adapter and driver are properly installed, the next step is to check whether the individual SCSI devices are accessible on the SCSI bus. To check which devices are recognized by the storage server, execute the following command on a CDP/NSS server:

cat /proc/scsi/scsi

This command displays the SCSI devices attached to the storage server. For example, you will see something similar to the following:

[0:0:0:0] disk 3ware Logical Disk 0 1.2 /dev/sda
[0:0:1:0] disk 3ware Logical Disk 1 1.2 /dev/sdb
[2:0:1:0] disk IBM-PSG ST318203FC !# B324 -
[2:0:2:0] disk IBM-PSG ST318203FC !# B324 -
[2:0:3:0] disk IBM-PSG ST318304FC !# B335 -

If the operating system cannot see a device, it may not have been installed properly or it may have been replaced while the storage server was running. If the server was not rebooted, Linux will not recognize the drive because it does not have “plug-and-play” capabilities.

How do I replace a physical disk?

Remove the SCSI device from the Linux OS by executing:

echo "scsi remove-single-device x x x x" > /proc/scsi/scsi

(where x x x x stands for the A C S L numbers: Adapter, Channel, SCSI ID, and LUN number.)

Then execute the following to re-add the device so that Linux can recognize the drive:

echo "scsi add-single-device x x x x" > /proc/scsi/scsi

(where x x x x stands for the A C S L numbers: Adapter, Channel, SCSI ID, and LUN number.)
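For example, if the replaced disk were at adapter 2, channel 0, SCSI ID 1, LUN 0 (hypothetical numbers used purely for illustration), the sequence would be:

# remove the old device entry (Adapter Channel SCSI-ID LUN)
echo "scsi remove-single-device 2 0 1 0" > /proc/scsi/scsi
# after the physical replacement, re-add the device so Linux recognizes it
echo "scsi add-single-device 2 0 1 0" > /proc/scsi/scsi
# confirm the device is visible again
cat /proc/scsi/scsi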

How do I ensure that the SCSI drivers are loaded on a Linux SAN Client?

To ensure that the SCSI drivers are loaded on a Linux machine, type the following command for Turbo Linux:

modprobe <SCSI card name>

For example: modprobe aic7xxx

For Caldera Open Linux, type: insmod scsi_mod



What if I have LUNs greater than zero?

By default, Linux will not automatically discover devices with LUNs greater than zero. You must either manually add these devices or edit your modprobe.conf file so they are scanned automatically. To do this:

1. Type the following command to edit the modprobe.conf file: vi /etc/modprobe.conf

2. If necessary, add the following line to modprobe.conf:

options scsi_mod max_luns=x

where x is the LUN number that you want the server to scan up to.

3. After exiting from vi, make a new image file.

mkinitrd newimage.img X

where 'X' is the kernel version (such as 2.4.21-IPStor) and ‘newimage’ can be any name.

4. Add a new entry to /boot/grub/grub.conf that points to the new .img file you created in the step above and make this entry your default.

5. Save and close the file.

6. Reboot the machine so that the scan will take place.

7. Verify that all LUNs have been scanned by typing: cat /proc/scsi/scsi
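A consolidated sketch of the steps above, assuming a maximum LUN of 255 and using the example kernel version and an example image name (substitute your own values):

# allow scanning of LUNs above 0 (here, up to LUN 255)
echo "options scsi_mod max_luns=255" >> /etc/modprobe.conf
# build a new initrd image for the running kernel
mkinitrd /boot/newimage.img 2.4.21-IPStor
# edit /boot/grub/grub.conf so the default entry uses /boot/newimage.img, then reboot
reboot
# after the reboot, confirm that all LUNs were scanned
cat /proc/scsi/scsi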

Failover

Question Answer

How can I verify the health status of a server in a failover configuration?

You can verify the health status of a server by connecting to the server via SSH using the heartbeat address, and running the sms command.


Fibre Channel target mode and storage

Question Answer

What is VSA?

Some storage devices (such as the EMC Symmetrix storage controller and older HP storage) use VSA (Volume Set Addressing) mode. This addressing method is used primarily for addressing virtual buses, targets, and LUNs.

If your client requires VSA to access a broader range of LUNs, you must enable it for the client. This can be done via the Fibre Channel Client Properties screen by selecting the Options tab.

Incorrect use of VSA can lead to problems seeing the LUNs (disks) at the HBA level. If the HBA cannot see the disks, the storage server is not able to access and manage them. This is true both ways: (1) the storage requires VSA, but it is not enabled and (2) the storage does not use VSA, but it is enabled.

For upstream, you can set VSA for the client at the time it is created, or you can modify the setting afterwards by right-clicking on the client.

What is Persistent binding?

Persistent binding is automatically configured for all QLogic HBAs connected to storage device targets upon the discovery of the device (via a Console physical device rescan with the Discover New Devices option enabled). However, persistent binding will not be set until the HBA is reloaded. You can reload HBAs using the IPStor start hba or IPStor restart all commands.

The console will display the Persistent Binding Tab for QLogic Fibre Channel HBAs even if the HBAs were not loaded using those commands. In addition, you will not be able to enable Fibre Channel target mode on those HBAs. To resolve this, load the driver using the IPStor start hba or IPStor restart all commands.

How can I determine the WWPN of my Client?

There are several methods to determine the WWPN of your clients:

1. Most Fibre Channel switches allow administration of the switch through an Ethernet port. These administration applications have utilities to reveal or allow you to change the following: Configuration of each port on the switch, zoning configurations, the WWPNs of connected Fibre Channel cards, and the current status of each connection. You can use this utility to view the WWPN of each client connected to the switch.

2. When starting up your client, there is usually a point at which you can access the BIOS of your Fibre Channel card. The WWPN can be found there.

3. The first time a new client connects to the storage server, the following message appears on the server screen:

FSQLtgt: New Client WWPN Found: 21 00 00 e0 8b 43 23 52



Is ALUA supported?

Yes, Asymmetric Logical Unit Access (ALUA) is fully supported for both targets and initiators.

• Upstream: ALUA support is included for QLogic Fibre Channel and iSCSI targets with implicit mode only.

• Downstream: ALUA support is included for QLogic Fibre Channel and iSCSI initiators with explicit or implicit modes.

Power control option

Question Answer

What causes a failure to communicate with a power control device?

Failure to communicate with your power control devices may be caused by one of the following reasons:
• Authentication error (password and/or username is incorrect)
• Network connectivity issue
• Server power cable is unplugged
• Wrong information used for the power control device, such as an incorrect IP address

Replication

Question Answer

Is replication supported between version 7.00 and earlier versions of CDP/NSS?

The following replication matrix will help you determine which versions are supported for replication.


iSCSI Downstream Configuration


Question Answer

Does the CDP/NSS software iSCSI initiator have a target limitation? Will I have the same limitation when using a hardware iSCSI initiator?

The CDP/NSS software iSCSI initiator has a limitation of 32 targets. When using a hardware iSCSI initiator you will not have this limitation.

Why is there a 32 target limitation when using the software iSCSI initiator?

The reason for this limitation is that when the software iSCSI initiator logs into a target it creates a new SCSI host per iSCSI target to which it is connected.

How can I get more information on properly configuring my CDP/NSS appliance to use dedicated iSCSI downstream storage using an iSCSI initiator (software HBA)?

Refer to ‘Configuring iSCSI software initiator’ for details regarding requirements and procedures needed to properly configure a CDP/NSS device to use dedicated iSCSI downstream storage using an iSCSI initiator (software HBA).

How can I get more information on properly configuring my CDP/NSS appliance to use dedicated iSCSI downstream storage using a hardware HBA?

Refer to ‘Configuring iSCSI hardware HBA’ for details regarding requirements and procedures needed to properly configure a CDP/NSS device to use dedicated iSCSI downstream storage with a hardware iSCSI HBA.

Which HBAs can I use on my NSS or CDP appliance?

Only QLogic iSCSI HBAs are currently supported on a CDP or NSS appliance.

What utility can I use to configure an iSCSI HBA on my NSS or CDP appliance and where can I get it?

The QLogic "iscli" (SANSurfer CLI) utility is provided on the appliance to configure the iSCSI HBAs.

The QLogic SANSurfer CLI is located at "/opt/QLogic_Corporation/SANsurferiCLI/". To configure the HBA, run "iscli" from the path as shown below:

[root@demonstration ~]# /opt/QLogic_Corporation/SANsurferiCLI/iscli

Does the hardware initiator require any special configuration for multipath support?

The hardware initiator does not require any special configuration for multipath support. The only configuration required is to connect multiple HBA ports to a downstream iSCSI target. The driver used for the QLogic iSCSI HBA is specially handled by CDP/NSS for multipath.

Protecting data in a Windows environment

Question Answer

How do I protect my data in a Windows environment?

FalconStor DiskSafe for Windows protects Windows application servers, desktops, and laptops (referred to as hosts) by copying the local disks or partitions to a mirror (another local disk or a remote virtual disk managed by a storage server application such as CDP). Refer to the DiskSafe User Guide for further information.


Protecting data in a Linux environment


Question Answer

How do I protect my data in a Linux environment?

FalconStor DiskSafe for Linux is a disk mirroring backup and recovery solution designed to protect data from disaster or accidental loss on Linux platforms. Local disks and remote virtual disks managed by a storage server application such as CDP can be used for protection. However, features such as snapshots are available only when the mirror disk is a virtual CDPVA disk. Linux LVM logical volume protection is also supported by DiskSafe.

Refer to the DiskSafe User Guide for further information.

Protecting data in an AIX environment

Question Answer

How do I protect my data in an AIX environment?

FalconStor provides AIX scripts to simplify and automate the protection and recovery process of logical volumes on AIX platforms. Once you have prepared the AIX host machine, you can:
• Install the AIX FalconStor Disk ODM Fileset
• Install the AIX SAN Client and Filesystem Agent
• Use LVM for protection and recovery.

Protecting data in an HP-UX environment

Question Answer

How do I protect my servers/data in an HP-UX environment?

Protecting your servers in an HP-UX environment requires that you establish a mirror relationship between the HP-UX (LVM and VxVM) Volume Group's logical volumes and the mirror LUNs from the CDP/NSS appliance. To protect your data:
• Install the HP-UX file system Snapshot Agent.
• Confirm that the package installation was successful by listing system installed packages: swlist | grep VxFSagent
• Authenticate the client to the storage server by running ipstorclient monitor.
• Use LVM for protection and recovery.


Logical resources

The following table describes the icons that are used to show the status of logical resources:

Icon Description

This icon indicates a warning, such as:
• Virtual device offline (or has incomplete segments)
• Mirror is out of sync
• Mirror is suspended
• TimeMark rollback failed
• Replication failed
• One or more supporting resources is not accessible (SafeCache, CDP, Snapshot resource, HotZone, etc.)

This icon indicates an alert, such as:
• Replica in disaster recovery state (after forcing a replication reversal)
• Cross-mirror needs to be repaired on the virtual appliance
• Primary replica is no longer valid as a replica
• Invalid replica

If you see one of these icons, check through the tabs to determine the problem.

Network connectivity

Storage servers, clients and consoles are all attached to one another through an Ethernet network. In order for all of the components to work properly together, their network connectivity should be configured properly.

To test connectivity between machines (servers, clients, and consoles), there are several things that can be done. This example shows a user testing connectivity from a client or console to a server named “knox”.

To test connectivity from one machine to the storage server, you can execute the ping utility from a command line prompt. For example, if your storage server is named “knox”, execute:

ping knox


If the storage server is running and attached to the network, you should receive a response like this:

Pinging knox [10.1.1.99] with 32 bytes of data:

Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255
Reply from 10.1.1.99: bytes=32 time<10ms TTL=255

Ping statistics for 10.1.1.99:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

If the Server is not available, you may get a response like this:

Pinging knox [10.1.1.99] with 32 bytes of data:

Request timed out.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 10.1.1.99:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

This means that either the machine is not running, or is not properly attached to the network. If you get a response like this:

Unknown host knox.

This means that your machine cannot find the storage server by name. There could be two reasons for this. First, it may be that the storage server is not running or connected to the network, and therefore has not registered itself to the name service on your network.

Second, it may be that the storage server is running, but is not known by name, possibly because the name service, such as DNS, is not running, or your machine is not referring to the proper name service.

Refer to your network’s reference material on how to configure your network’s name service.

If your storage server is available, you can execute the following command on the Server to verify that the CDP/NSS ports are both up:

netstat -a | more


Both ports 11576 and 11577 should be listed. In addition, port 11576 should be “listening”.
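To narrow the netstat output to just these ports, a filter such as the following can be used (a convenience sketch, not a required command):

# show only the CDP/NSS ports; 11576 should be in the LISTEN state
netstat -an | grep -E '11576|11577'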

Linux SAN Client

You may see the following message when executing ./IPStorclient start or ./IPStorclient restart if the Linux Client cannot locate the storage server on the network:

Creating IPStor Client Device [FAILED]
Failed to connect to Storage Server 0, -1

To resolve, restart the services on both the storage server and the Linux Client.

Jumbo frames support

To determine if a machine supports jumbo frames, use the ping utility from a command line prompt to ping with a large packet size. If your storage server is named “knox”, execute the following command.

On Linux systems:

ping -s 8000 knox

Diagnosing client connectivity issues

Problems connecting clients to their SAN resources may occur due to several causes, including network configuration and storage server configuration.

• Check the General Info tab for the Client in the Console to see if the Client has been authenticated. In order for a Client to be able to access storage, you must establish a trusted relationship between the Client and Server and you must assign storage resources to the Client.

• If you make any Client configuration changes in the Console, you must restart the Client in order for the changes to take effect.

• Clients may not achieve the maximum throughput when writing over gigabit. If you are noticing slower than expected speeds when writing over gigabit, you can do the following:
• Turn on TCP window scaling on the storage server: /proc/sys/net/ipv4/tcp_window_scaling (1 is on; 0 is off).
• On Windows, go to Run and type regedit. Add the following values under [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]:

"Tcp1323Opts"=dword:00000001
"GlobalMaxTcpWindowSize"=dword:01d00000
"TcpWindowSize"=dword:01d00000

To see if the storage server client has connectivity to the storage server over the Ethernet network, refer to ‘Network connectivity’.



Windows Client

Issue Cause/Suggested Action

The SAN Client hangs when the storage containing its virtual disk goes offline.

To prevent the CDP/NSS Client from hanging when there is a storage problem on the storage server, change the default I/O error response sense key from medium error to unit not ready by running the following command:

echo "IPStor set-parameter default-io-error-sense-key 2 4 0" > /proc/IPStor/IPStor

Windows client debug information

You can configure the amount of detail about the storage server Client’s activity and performance that will be written to the Windows Event Viewer.

In addition, you can enable a system tracer. When enabled, the trace information will be logged to a file called FSNTrace.log located in the \FalconStor\IPStor\Logs directory.

1. To filter the events and/or configure the tracer, select Tools --> Options.

2. To filter the events being written to the Event Viewer, select one of the levels in the Log Level field.

Note that regardless of which level you choose, there are several events that will always be written to the Event Viewer (driver not loaded, service failed to start, service started, service stopped).

Five levels are available for use:

• Off – No activity will be recorded.
• Errors only – Only errors will be recorded.
• Brief – Errors and warnings will be recorded.
• Detailed – (Default) Errors, warnings and informational messages will be recorded.
• Trace – This is the highest level of activity tracing. Debugging messages will be written to the trace log. In addition, all errors, warnings and informational messages will be recorded in the Event Viewer.

3. If you select the Trace level, specify which portions of the storage server Client will be traced.

Warning: Adjusting these parameters can impact system performance. They should not be adjusted unless directed to do so by FalconStor technical support.


Clients with iSCSI protocol (updated February 2013)

Issue Cause/Suggested Action

(iSCSI protocol) After rebooting, the client loses its file shares.

This is a timing issue. To reconnect to shares, open a command prompt and type the following four commands:

net stop browser
net stop server
net start server
net start browser

You may want to create a batch file to do this.
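A minimal batch file (the name reconnect_shares.bat is hypothetical) that runs the four commands in order might look like this:

@echo off
rem restart the browser and server services to reconnect file shares
net stop browser
net stop server
net start server
net start browser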

(iSCSI protocol) Intermittent iSCSI disconnections on the client, or the client cannot see the disk.

The Microsoft iSCSI initiator has a default retry period of 60 seconds. Changing it to 300 seconds will sustain the disk for five minutes during network disconnection events, meaning applications will not be disrupted by temporary network problems (such as during a failover or recovery). If you are using Windows Server 2003, this setting is changed through the registry as described below.

1. Go to Start --> Run and type regedit.

2. Find the following registry key:
HKEY_LOCAL_MACHINE\system\CurrentControlSet\control\class\4D36E97B-xxxxxxxxx\<iscsi adapter interface>\parameters\
where <iscsi adapter interface> corresponds to the adapter instance, such as 0000, 0001, and so on.

3. Right-click Parameters and select Export to create a backup of the parameter values.

4. Double-click MaxRequestHoldTime.

5. Pick Decimal and change the Value data to 300.

6. Click OK.

7. Reboot Windows for the change to take effect.

If you are using Windows Server 2008, you will need to use DynaPath to sustain the disk (for up to five minutes) during failover.

The Microsoft iSCSI initiator fails to connect to a target.

The Microsoft iSCSI initiator can only connect to an iSCSI target if the target name is no longer than 221 characters. It will fail to connect if the target name is longer than this.


Clients with Fibre Channel protocol


Issue Cause/Suggested Action

An initiator times out with the following message: FStgt: SCSI command aborted.

Certain Fibre Channel hosts and/or HBAs are not as aggressive as others, which can affect the balancing of each host's pending commands. We recommend that the value for the Execution Throttle (QLogic) or Queue Depth (Emulex) for all client initiators using the same target(s) not exceed 240. If an initiator's Execution Throttle or Queue Depth is configured too high, it could result in slow response time from the storage, subsequently causing the initiator to time out. To resolve this issue, decrease the value of the initiator's Execution Throttle or Queue Depth.

Linux SAN Client

Issue Cause/Suggested Action

You see the following message when viewing the Client's current configuration: On command 12 data received 36 is not equal to data expected 256.

This is an informational message and can be ignored.

You see the following message when executing ./IPStorclient start or ./IPStorclient restart:

Creating IPStor Client Device [FAILED]
Failed to connect to Storage Server 0, -1

The SAN Client cannot locate the storage server on the network. To resolve this, restart the services on both the storage server and the Linux Client.

You see the following message continuously:

SCSI: Aborting due to timeout: PID ######…..

You cannot un-assign devices while a Linux client is accessing those devices (i.e. mounted partitions).


Storage Server

Storage server X-ray

The X-ray feature is a diagnostic tool used by your Technical Support team to help solve system problems. Each X-ray contains technical information about your server, such as server messages and a snapshot of your server's current configuration and environment. You should not create an X-ray unless you are requested to do so by your Technical Support representative.

To create the X-ray file for multiple servers:

1. Right click on the Servers node in the console and select X-ray.

A list of all of your storage servers displays.

2. Select the servers for which you would like to create an X-ray and click OK.

If the server is not listed, click the Add button to add the server to the list.


3. Select the X-ray options based upon the discussion with your Technical Support representative and set the file name.

To create an X-ray for an individual server:

1. Right click on the server in the console and select X-ray.

The X-ray options screen displays.

2. Select the X-ray options based upon the discussion with your Technical Support representative and set the file name.

Note: One of the X-ray options lets you filter out and include only storage server messages from the System Event Log.


Failover

Issue Cause/Suggested Action

After restarting failover servers, CDP/NSS starts but will not come back online.

This can happen if:
• There was a communication problem (i.e., a network error) between the servers.
• Both failover servers were down and then only one is brought up.
• Failover was suspended and you restarted one of the servers.

To resolve this:
1. At a Linux command prompt, type sms to determine if the system is in a ready state.
2. As soon as it becomes ready, type the following: IPStorsm.sh recovery

After failover, when you connect to the newly-promoted primary server, the failover status is not correct.

You are connecting to the server with an IP address that is not part of the failover configuration or with the heartbeat IP address and you are seeing the status from before the failover. You should only use those IP addresses that are configured as part of the failover configuration to connect to the Server in the Console.

You need to perform recovery on the near-line server when it is set up as a failover pair and is in a failed state.

When performing a near-line recovery and the near-line server is set up in a failover configuration, always add the first and second nodes of the failover set to the primary for recovery.
• Select the proper initiators for recovery.
• Assign both nodes back to the primary for recovery.
Note: There are cases where the server WWPN may not show up in the list since the machine may be down and the particular port is not logged into the switch. In this situation, you must know the complete WWPN of your recovery initiator(s). This is important in cases where you need to manually enter the WWPN into the recovery wizard to avoid any adverse effects during the recovery process.

A server failure and failover occurred, and the standby initiator assumed the target WWPN of the failed server, losing the connection to the near-line mirror disk.

When adding a near-line mirror to a device, make sure you do not select initiators that have already been set as a standby initiator during failover setup. Doing so will cause loss of connection to the mirror disk in the event of failover, causing the mirror to break.

Failover partner fails to take over when primary network connection associated with iSCSI client fails.

The partner server has a network connection failure on the same subnet preventing it from successfully taking over.

Failover partner fails to take over when primary server fails.

You can manually trigger failover from the console by right-clicking on the server and selecting Failover --> Start take over <server name>.



The IP address for the primary server conflicts with the secondary server’s IP address. For example, both Storage Cluster Interlink ports share the same IP address.

The primary server’s network interface is using an IP address that is being used by the same interface on the partner server. Check the IP addresses being used by your servers. Modify the IP address on one of the servers by right-clicking on the server and selecting System Maintenance --> Configure Network. Refer to the ‘Network configuration’ section for details.

The Storage Cluster Interlink connection is broken in a failover pair and both servers cannot be synchronized.

Use the Sync Standby Devices menu item to manually synchronize both servers’ metadata after the Storage Cluster Interlink is reconnected.

Failover has been suspended on server B and server A is restarted. After the restart, server A does not come up automatically, but is in a ready state.

Attempt to login via the console and bring up the server. Type YES in the popup window that displays to forcefully bring up the server.

Failover is suspended on server A and server B for maintenance. Both servers are powered off. After the maintenance period both servers are restarted but they do not come up automatically.

Cross-mirror failover on a virtual appliance

Issue Cause/Suggested Action

During cross-mirror configuration, the system reports a mismatch of physical disks on the two appliances even though you are sure that the configuration of the two appliances is exactly the same, including the ACSL, disk size, CPU and memory.

An iSCSI initiator must be installed on the storage server and is included on FalconStor cross-mirror appliances. If you are not using a FalconStor cross-mirror appliance, you must install the iSCSI initiator RPM from the Linux installation media before running the IPStorinstall installation script. The script will update the initiator.


Replication

Issue Cause/Suggested Action

Replication is set to use compression but replication fails. You may see error messages in the syslog like this:

__alloc_pages: 4-order allocation failed (gfp=0x20/0)
IOCORE1 expand_deltas, cannot allocate for 65536 bytes
IOCORE1 write_replica, error expanding deltas, return -EINVAL
IOCORE1 replica_write_parser, server returned -22
IOCORE1 replica_read_post, stopping because status is -22

Compression requires 64K of contiguous memory. If the memory in the storage server is very fragmented, it will fail to allocate 64K. When this happens, replication will fail.

The replication primary server cannot connect to the replica server due to different TCP protocols. The primary server console event log will print messages like the one shown below:

Mar 12 10:56:28 fs18626 kernel: MTCP2 ctrl hdr's signature is mismatch with 00000000, check the network proto-col(MTCP2).

Check your replication MTCP version from the FalconStor Management Console by clicking on the server name and the General tab. Both servers should have the same version of MTCP (either 1 or 2). If you see two different versions, contact Technical Support.

You perform a role reversal and get the following error:"The group for replica disks on the target server is no longer valid. Reversal cannot proceed".

If you attempt to perform a role reversal on a resource that belongs to a non-replication group, the action will fail. To resolve this issue, remove the resource from the group and perform the role reversal.

Replication fails. Do not initiate a TimeMark copy while replication is in progress. Doing so will result in the failure of both processes.

Replication fails for a group member.

If replication fails for one group member, it is skipped and replication continues for the rest of the group. In order for the group members that were skipped to have the same TimeMark on their replicas, you will need to remove them from the group, use the same TimeMark to replicate again, and then re-join the group.


TimeMark Snapshot


Issue Cause/Suggested Action

TimeMark rollback of a raw device fails.

Do not initiate a TimeMark rollback to a raw device while data is currently being written to the raw device. The rollback will fail because the device will fail to open.

TimeMark copy fails.

Do not initiate a TimeMark copy while replication is in progress. Doing so will result in the failure of both processes.

Snapshot Resource policy

Issue Cause/Suggested Action

Snapshot Resource threshold has been reached and is not expanding.

• Set the policy to allow expanding the Snapshot Resource automatically.

• Add storage and manually expand the Snapshot Resource.

• Delete old TimeMarks in an orderly manner, starting with the earliest, if the Snapshot Resource policy is set to Preserve all TimeMarks.

Snapshot Resource failure (i.e. a recoverable storage error) or system error (i.e. out of memory)

• Check errors of the system/storage and repair.
• If the Snapshot Resource has been set to offline due to the Snapshot Resource policy Always maintain write operations, re-initialize the Snapshot Resource.

SafeCache

Issue Cause/Suggested Action

A physical resource has failed (for example, the disk was unplugged or removed) but the resources in the SafeCache group are not marked offline.

If a physical resource has failed prior to the cache being flushed, the resources in the SafeCache group will not be marked offline until after a rescan has been performed.

The primary resource has failed and you attempt to disable the cache, but the cache is unable to flush data back to the primary resource. A dialog box displays “N/A” as the number of seconds needed to flush the cache.

The cache is unable to flush the data due to a problem with data transfer from the cache to the primary resource.

The SafeCache resource has failed and you attempted to resume the SafeCache. The resume appears to be successful, however, the client cannot write to the virtual device.

The client can only write to the virtual device when the SafeCache resource is restored. However, the SafeCache remains in a suspended state. You should suspend and resume the cache from the Console to return the cache status to normal and operational.


Command line interface


Issue Cause/Suggested Action

Failed to resolve storage server to a valid IP address.
Error: 0x09022004

The storage server hostname is not resolvable on both the client side and the server side. Add the server name to the hosts file to make it resolvable or use the IP address in commands.

Service-Enabled Devices

Issue Cause/Suggested Action

An unassigned physical device does not show the Service-enabled Device option when you try to set the device category.

If you see that the GUID for this device is “fa1cff00...”, the device cannot be supported as a Service-enabled Device. This is because the device does not support the mandatory SCSI page codes that are used to determine the actual GUID for the device.

A Service-enabled device (SED) is marked "Incomplete" on the primary server and the client that normally connects to the SED resource has lost access to the disk.

In a failover configuration, you should not change the properties of a SED used by a primary server to "Unassigned" on the secondary server. If this occurs, you should do the following:
1. Delete the offline SAN resource.
2. Service-enable the physical disk.
3. Re-create the SAN resource.
4. Re-assign the SAN resource back to the client.


Error codes

The following table contains a description of some common error codes.

CDP/NSS Error Codes

Code Type Text Probable Cause Suggested Action

1005 Error Out of kernel resources. Failed to get major number for the SCSI device.

Too many Linux device drivers installed.

Type cat /proc/devices for a list and see if any can be removed.

1006 Error Failed to allocate memory. Memory leak from various modules in the Linux OS, most likely from network adapter or other third party interface drivers.

Check knowledge base for known memory leak problems in various drivers.

1008 Error Failed to set up the network connection due to an error in SANRPCListen.

Another application has port UDP 11577 open.

Confirm using netstat -a and then remove or reconfigure the offending application.

1016 Critical Primary virtual device [Device number] has failed and mirror is not in sync. Cannot perform swap operation.

Physical device associated with primary virtual device may have had a failure.

Check physical device status and all connections, including cables and switches, and downstream driver log.

1017 Critical Secondary virtual device [Device #] has failed.

A mirror device has failed.

Check drive, cable, and adapter.

1022 Error Replication has failed for virtual device [Device number] -- [Device #].

The network might have a problem.

Check connectivity between primary and replica, including jumbo frame configuration if applicable.

1023 Error Failed to connect to physical device [Device number]. Switching to alias to [ACSL].

An adapter/cable might have a problem.

Check for a loose or damaged cable on the affected drive.

1030 Error Failed to start replication -- replication is already in progress for virtual device [Device number].

Only one replication at a time per device is allowed.

Try later.

1031 Error Failed to start replication -- replication control area not present on virtual device [Device number].

The configuration might not be valid.

Check configuration, restart the console, or re-import the affected drive.

1032 Error Failed to start replication -- replication control area has failed for virtual device [Device number].

A drive may have failed. Check the physical drive for the first virtual drive segment.


1033 Error Failed to start replication -- a snapshot is in progress for virtual device [Device number].

There is a raw device backup or snapshot copy in progress.

Do not open raw devices or perform snapshot copy when replication is occurring.

1034 Warning Replication failed for virtual device [Device number] -- the network transport returned error [Error].

The network might have a problem.

Check connectivity between the primary and replica, including jumbo frame configuration if applicable.

1035 Error Replication failed for virtual device [Device number] -- the local disk failed with error [Error].

There is a drive failure. Check all physical drives associated with the virtual drive.

1036 Error Replication failed for virtual device [Device number] -- the local snapshot used up all of the reserved area.

Snapshot reserved area is insufficient on the primary server.

Add additional snapshot reserved area.

1037 Error Replication failed for virtual device [Device number] -- the replica snapshot used up all of the reserved area.

Snapshot reserved area is insufficient on the replica server.

Add additional snapshot reserved area.

1038 Error Replication failed for virtual device [Device number] -- the local server could not allocate memory.

Memory is low. Check system memory usage.

1039 Error Replication failed for virtual device [Device number] -- the replica disk failed with error [Error].

Replication failed because of the indicated error.

Based on the error, remove the cause, if possible.

1040 Error Replication failed for virtual device [Device number] -- failed to set the replication time.

The configuration might not be valid.

Check the ipstor.dat file on the replica server.

1043 Error A SCSI command terminated with a non-recoverable error condition that was most likely caused by a flaw in the medium or an error in the recorded data. Check the system log for additional information.

This is most likely caused by a flaw in the media or an error in the recorded data.

Check the system log for additional information. Contact the hardware manufacturer for a diagnostic procedure.


1044 Error A SCSI command terminated with a non-recoverable hardware failure (for example, controller failure, device failure, parity error, etc.). Check the system log for additional information.

This is a general I/O error that is not media related. This can be caused by a number of potential failures, including controller failure, device failure, parity error, etc.

Check the system log for additional information. Contact the hardware manufacturer for a diagnostic procedure.

1046 Error Replica rescan for differences has failed for virtual device [Device number] -- the local device failed with error.

There is a drive failure. Check all physical drives associated with the virtual drive.

1047 Error Replica rescan for differences has failed for virtual device [Device number] -- the replica device failed with error.

There is a drive failure. Check all physical drives associated with the virtual drive.

1048 Error Replica rescan for differences has failed for virtual device [Device number] -- the network transport returned error.

Network problem. Check connectivity between primary and replica, including jumbo frame configuration if applicable.

1049 Error Replica rescan for differences cannot proceed -- replication control area not present on virtual device [Device #].

The configuration might not be valid.

Check configuration, restart GUI Console, or re-import the affected drive.

1050 Error Replica rescan for differences cannot proceed -- replication control area has failed for virtual device [Device #].

There is a drive failure. Check the physical drive for the first virtual drive segment.

1051 Error Replica rescan for differences cannot proceed -- a merge is in progress for virtual device [Device number].

A merge is occurring on the replica server.

No action is required. A retry will be performed when the retry delay expires.

1052 Error Replica rescan for differences failed for virtual device [Device number] -- replica status returned.

The configuration might not be valid.

Check the ipstor.dat file on the replica server.


1053 Error Replica rescan for differences cannot proceed -- replication is already in progress for virtual device [Device #].

Only one replication is allowed at a time for a device.

Try again later.

1054 Error Replication cannot proceed -- a merge is in progress for virtual device [Device number].

A merge is occurring on source.

No action is required. A retry will be performed when the retry delay expires.

1055 Error Replication failed for virtual device [Device number] -- replica status returned [Error].

The configuration might not be valid.

Check the ipstor.dat file on the replica server.

1056 Error Replication role reversal failed for virtual device [Device number] -- the error code is [Error].

The configuration might not be valid for replication role reversal.

Check the ipstor.dat file on the replica server.

1059 Error Replication failed for virtual device [Device number] -- start replication returned [Error].

The configuration might not be valid.

Check the ipstor.dat file on the replica server.

1060 Error Rescan replica failed for virtual device [Device number] -- start scan returned [Error]

The configuration might not be valid.

Check the ipstor.dat file on the replica server.

1061 Critical I/O path failure detected. Alternate path will be used. Failed path (A.C.S.L): [ACSL]; New path (A.C.S.L): [ACSL].

An alias is in use due to a primary path failure.

Check the primary path from the server to the physical device.

1066 Error Replication cannot proceed -- snapshot resource area does not exist for remote virtual device [Device ID].

The snapshot resource for the replica is no longer there. It may have been removed accidentally.

From the Console, log into the replica server and check the state of the snapshot resource for the replica. If it was deleted accidentally, restore it.

1067 Error Replication cannot proceed -- unable to connect to replica server [Server name].

Either the network connection is down or the replica server is down.

From the console, log into the replica server and check the state of the server at the replica site. Determine and correct either the network or server problem.

1068 Error Replication cannot proceed -- group [Group name] is corrupt.

The group configuration is not consistent or is missing.

Try to restart server modules or recreate the group.

1069 Error Replication cannot proceed -- virtual device [Device ID] no longer has a replica or the replica device does not exist.

The designated replica device is no longer on the replica server. Most likely the replica drive was either promoted or deleted while the primary server was down.

Check the replica server first. If the replica exists, the configuration may be corrupted. In this case, call Technical Support. If the drive was promoted or deleted, you must remove replication from the primary and reconfigure. If the drive was deleted, create a new replica. If the drive was promoted, you can assign it back as a replica once you determine if new data was written to the drive while it was promoted. You will also need to decide if you want to preserve the data. Once assigned back as the replica, it will be resynchronized with the original primary drive.

1070 Error Replication cannot proceed -- replication is already in progress for group [Group name].

The snapshot group is currently performing replication. Only one replication operation can be running at a time for each group.

Wait for the process to complete or change the replication schedule.

1071 Error Replication cannot proceed -- Remote vid %1 does not exist or is not a replica device.

The replica was not valid when replication was triggered. The replica might have been removed without the primary.

Remove the replication setup from the primary and reconfigure the replication.

1072 Error Replication cannot proceed -- missing a remote replica device in group [Group name].

One of the replica drives in the snapshot group is missing. Replication must be able to be performed for the entire snapshot group in order to proceed.

See 1069.

1073 Error Replication cannot proceed -- unable to open configuration file.

Failed to open the configuration file to get the replication configuration, possibly because the system was busy.

Check system disk status.

Check system status.

1074 Error Replication cannot proceed -- unable to allocate memory.

Memory allocation for replication information failed possibly because the system was busy.

Check system status.

1075 Error Replication cannot proceed -- unexpected error %1.

Replication failed with the listed error.

Check system status.

1078 Error Replication cannot proceed -- mismatch between our snapshot group [Group name] and replica server.

The Snapshot group in the source server has different virtual drives than the replication server. This may be due to an altered configuration when a server was down.

This is a highly unusual situation. The cleanest way to fix it is to remove replication for the devices in the group, remove the group, recreate the group, and configure replication again.

1079 Error Replication for group [Group name] has failed due to error on virtual device [Device ID].

One or more virtual drives in the group failed during replication.

Check the log for error details. In the case of physical disk failure, the disk must be replaced, and data must be restored from the backup. In the case of a communication failure, replication will continue when the problem is resolved, and the schedule resumes.

1080 Error Replication cannot proceed -- failed to create TimeMark on virtual device [Device ID].

The replication process could not create a snapshot. This can occur for various reasons, including low system memory, improper configuration parameters for automatic snapshot resources, or depleted physical storage.

Check for snapshot resource issues, check the log for other errors, and check the maximum number of TimeMarks configured.

1081 Error Replication cannot proceed -- failed to create common TimeMark for group [Group name].

One of the virtual drives in the group failed to create a snapshot for replication. See 1080.

See 1080.

1082 Warning Replication for virtual device [Device ID] has manually aborted by user.

Replication was stopped by the user.

None.

1083 Warning Replication for group [Group name] has manually aborted by user.

Replication was stopped by the user.

None.

1084 Warning A SCSI command terminated with a recovered error condition. Check the system log for additional information.

This is most likely caused by a flaw in the media or an error in the recorded data.

Check the system log for additional information. Contact the hardware manufacturer for a diagnostic procedure.

1085 Error HotZone for virtual device [Device ID] has been auto-disabled due to an error.

Physical device failure. Check physical devices associated with HotZone.

1087 Error Primary virtual device [Device #] has failed, swap to secondary.

The mirror device had a physical error resulting in a mirror swap.

Check physical device.

1089 Error Rescan replica failed for virtual device %1 -- the network transport returned error %2. Verify the Replication features you are using are supported on both servers. Local server version is %3.

The replication protocols on the source and replica servers do not match.

Make sure you are using compatible builds on both the source and replica server.

1096 Warning Replication for virtual device %1 has been set to delta mode -- %2.

Replication for the virtual device switched to delta mode due to an operation triggered by the user, such as a configuration change or replica rescan, or due to a replication I/O error, out of space, out of memory condition, etc.

• Check device status for I/O error or disk space usage error.

• Increase memory or reduce the concurrent activities for memory or other type of errors.

1097 Warning Replication for group %1 has set to delta mode -- %2. {Affected members: %3 }

Replication for the virtual device switched to delta mode due to an operation triggered by the user, such as a configuration change or replica rescan, or due to a replication I/O error, out of space, out of memory condition, etc.

• Check device status of the group members for I/O error, disk space usage error.

• Increase memory or reduce the concurrent activities for memory or other type of errors.

1098 Error Replication cannot proceed -- Failed to get virtual device delta information for virtual device %1.

Failed to get the delta of the resource to replicate possibly due to too many pending processes.

Retry later.

1099 Error Replication cannot proceed -- Failed to get virtual device delta information for virtual device %1 because the device is offline.

Failed to get the delta of the resource to replicate due to the resource being offline.

Check device status and bring the device back online.

1100 Error Replication cannot proceed -- Failed to communicate with replica server to trigger replication for virtual device %1.

Failed to connect to the replica server or exchange replication information with the replica server to start replication for the virtual device.

• Check connectivity between primary and replica servers.

• Check if the replica server is busy. Adjust the schedule to avoid too many operations occurring at the same time.

1101 Error Replication cannot proceed -- Failed to communicate with replica server to trigger replication for group %1.

Failed to connect to the replica server or failed to exchange replication information with the replica server, preventing the start of replication for the group.

• Check connectivity between the primary and replica servers.

• Check if the replica server is busy. Readjust the schedule to avoid too many operations occurring at the same time.

1102 Error Replication cannot proceed -- Failed to update virtual device meta data for virtual device %1.

Failed to update virtual device metadata to start replication possibly due to a device access error or the system is busy.

• Check virtual device status.

• Check system status.

1103 Error Replication cannot proceed -- Failed to initiate replication for virtual device %1 due to server busy.

Failed to start replication for virtual device because the system was busy.

Check system status.

1104 Error Replication cannot proceed -- Failed to initiate replication for group %1 due to server busy.

Failed to start replication for group because the system was busy.

Check system status.

1108 Error Replication failed for virtual device %1

The network might have a problem.

Check connectivity between the primary and replica, including jumbo frame configuration if applicable.

1111 Error TimeView data replication cannot proceed -- Failed to initiate replication for virtual device %1, TimeMark %2 due to %3.

It might be due to one of the following reasons:

• replication is not enabled

• TimeView replication is not supported on the replica server version

• TimeView of the TimeMark is mounted.

Make sure replication is enabled, the target server is running version 6.1 or higher, and no TimeView of that TimeMark exists or is mounted.

1113 Error TimeView data replication cannot proceed -- Failed to start replication for virtual device %1, TimeMark %2 due to %3.

The ioctl call to start TimeView replication failed due to one of the following reasons:

• TimeMark might have been deleted at the time TimeView replication was triggered

• Replication is no longer in progress

• virtual device is offline

• lost network connection

• memory allocation failure occurred

• snapshot resource is offline

• other TimeMark operations are in progress.

Based on the reason stated in the event message, check the status of the virtual device and snapshot resources.

1123 Error Stopping TimeView data replication cannot proceed -- Failed to stop for virtual device %1, TimeMark %2 due to %3.

The ioctl call to stop TimeView replication failed due to one of the following reasons:

• TimeMark may have been deleted

• the replication is no longer in progress

• virtual device is offline

• lost network connection

• memory allocation failure occurred

• snapshot resource is offline

• other TimeMark operations are in progress.

Based on the reason stated in the event message, check the status of the virtual device and snapshot resources.

1131 Error TimeView Replication failed for virtual device %1 -- the replica device failed with error %2.

Replica server or replica device may be in an unhealthy state.

Check replica server and disk status.

1132 Error TimeView Replication failed for virtual device %1 - the network transport returned error %2.

A network error is reported.

Check the network status between two servers.

1133 Error TimeView Replication failed for virtual device %1 - the local disk failed with error %2.

Local physical device may be busy or snapshot resource area might be offline.

Check the local disks on the server.

1134 Error TimeView Replication failed for virtual device %1 - start replication returned %2

A replication may already be in progress, TimeView status might not be OK, or system memory might be low.

Based on the returned error, check the source server.

1136 Error TimeView Replication failed for virtual device %1 - the network transport returned error %2, Verify the Replication features you are using are supported on both servers. Local server version is %3.

The replication protocols on the source and replica servers do not match.

Make sure you are using compatible builds on both source and replica servers.

1137 Error TimeView Replication failed for virtual device %1 error code is %2

A replication error is reported.

Based on the returned error, check the servers, devices, and the network.

1201 Warning Kernel memory is low. Add more memory to the system if possible. Restart the host if possible.

Too many processes for the current resources.

Add more memory to the system if possible. Restart the host if possible.

1203 Error Failed to trespass path to [Path].

All downstream storage paths had failures.

Check storage status and path connections.

1204 Error Failed to add path group. ACSL: [Path].

Downstream storage path failure.

Check storage status and path connections.

1206 Error Failed to activate path: [Path].

Downstream storage path failure.

Check storage status and path connections.

1207 Error Detected critical path failure. Path [Path] will be removed.

Downstream storage path failure.

Check storage status and path connections.

1208 Warning Path [Path] does not belong to active path group.

Tried to use a non-active path to access storage.

Use only active paths.

1209 Warning Rescanning the physical adapters is recommended to correct the configuration.

There may be a problem with the configuration.

Rescan the physical adapters.

1210 Warning No valid path is available for device [Device ID].

Downstream storage path failure.

Check storage status and path connections.

1211 Warning No valid group is available. Unexpected path configuration.

Check path group configuration.

1212 Warning No active path group can be found. [GUID].

Storage connectivity failure.

Check cables, switches and storage system to determine cause.

1214 Error Failed to add path to storage device: [Path].

Downstream storage path failure.

Check storage status and path connections.

1215 Warning CLARiiON storage path is trespassing.

Downstream storage path failure or manual trespass.

Check storage status and path connections.

1216 Warning T300 storage path is trespassing.

Downstream storage path failure or manual trespass.

Check storage status and path connections.

1217 Warning HSG80 storage path is trespassing.

Downstream storage path failure or manual trespass.

Check storage status and path connections.

1218 Warning MSA1000 storage path is trespassing.

Downstream storage path failure or manual trespass.

Check storage status and path connections.

1230 Error TimeMark [TimeMark] cannot be created during disk rollback. Time-stamp: [Time].

Disk rollback is in progress.

Wait until disk rollback is complete.

1231 Error Snapshot resource %1 became offline due to storage problem or memory shortage.

Physical storage of the snapshot resource may have a failure or server memory is low.

Check storage and server status to remove the failure condition. If the snapshot resource policy is set to Always maintain write operations, you will lose all snapshots on that resource and need to reinitialize the snapshot resource. If the policy is set to Preserve-all/recent-TimeMarks, you may need to reinitialize the snapshot resource if it does not automatically come back online once the failure condition is removed.

1234 Error Timemark reclamation vdev %1 timestamp %2 cancelled by snapshot operation or user.

TimeMark reclamation has been cancelled by the user or a snapshot operation (e.g., snapshot expansion).

If the TimeMark reclamation process cancellation was not user-initiated, check the snapshot resource status and available space. If automatic expansion of the snapshot resource is enabled and resource space is low, perform a manual expansion prior to the next TimeMark reclamation.

1235 Error Timemark reclamation vdev %1 timestamp %2 failed due to storage problem or memory shortage.

Physical storage of the snapshot resource may have a failure or server memory is low.

Check storage and server status to remove the failure condition.

1236 Error The disk virtual header and snapshot resource could not be updated for virtual disk %1.

Physical storage of the disk virtual header or snapshot resource may have a failure.

Check storage to remove the failure condition. Data on related snapshots may be compromised.

1300 Critical Meta transfer link %1. Meta transfer link is down due to incorrect network configuration or error condition in network connection.

Check meta transfer link configuration and network connection.

1302 Critical Global Safe Cache flushing enabled.

Global Safe Cache flushing is enabled.

Contact Tech Support.

3003 Error Number of CCM connections has reached the maximum limit %1.

There are too many CCM Consoles open.

Close CCM Console sessions on other machines.

3009 Error CCM could not create a session with client %1.

There may be a network issue or not enough memory for CCM module to create a communication thread with the client.

Check network communication and client access from the server; try to restart the ccm module on the server.

3010 Error List of the clients cannot be retrieved from the server.

CCM module cannot get the list of SAN clients by executing internal CLI commands.

This is very unlikely to happen; check that the iscli executable is present in $ISHOME/bin.
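If you want to script this check, a minimal sketch follows. It assumes $ISHOME is set in the environment and falls back to /usr/local/ipstor, the install path referenced elsewhere in this guide.

```python
#!/usr/bin/env python3
"""Sketch: verify that the iscli executable referenced for error 3010 exists.

Assumes $ISHOME is exported in the environment; falls back to
/usr/local/ipstor, the install path shown elsewhere in this guide.
"""
import os

ishome = os.environ.get("ISHOME", "/usr/local/ipstor")
iscli = os.path.join(ishome, "bin", "iscli")

if os.path.isfile(iscli) and os.access(iscli, os.X_OK):
    print(f"OK: {iscli} exists and is executable")
else:
    print(f"PROBLEM: {iscli} is missing or not executable")
```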

3014 Error CCM service cannot be started.

CCM RPC service could not be created.

Try to restart the ccm module on the server.

3016 Error User name or password is longer than the maximum limit %1.

The user name or password string used to connect to the server is too long.

Enter a string within the limit.

3017 Warning The version information of the message file cannot be retrieved.

The Event Log message file is missing.

This is very unlikely to happen; check the existence of $ISHOME/etc/msg/english.msg.

3020 Error CCM service cannot be created on the server as socket creation failed.

A TCP socket could not be created or set for CCM service.

This is very unlikely to happen; try to restart the ccm module on the server.

3021 Error CCM service cannot be created on the server as socket settings failed.

A TCP socket option could not be set for CCM service.

This is very unlikely to happen; try to restart the ccm module on the server.

3022 Error CCM service cannot be created on the server as socket binding to port %1 failed.

Another process may be using the same port number.

Identify the process using the ccm port and stop it.

3023 Error CCM service cannot be created on the server as TCP service creation failed.

A TCP service could not be created for the CCM service.

This is very unlikely to happen; try to restart the ccm module on the server.

3024 Error CCM service cannot be created on the server as service registration failed.

Binding CCM service to RPC callback function failed when CCM module started.

This is very unlikely to happen; try to restart the ccm module on the server.

7001 Error Patch %1 failed -- environment profile is missing in /etc.

Unexpected loss of environment variables defined in /etc/.is.sh on the server.

Check server package installation.

7002 Error Patch %1 failed -- it applies only to build %2.

The server is running a different build than the one for which the patch is made.

Get the patch, if any, for your build number or apply the patch on another server that has the correct build.

7003 Error Patch %1 failed -- you must be the root user to apply the patch.

The user account running the patch is not the root user.

Run the patch with the root account.

7004 Warning Patch %1 installation failed -- it has already been applied.

You tried to apply the same patch again.

None.

7005 Error Patch %1 installation failed -- prerequisite patch %2 has not been applied.

A previous patch is required but has not been applied.

Apply the required patch before applying this one.

7006 Error Patch %1 installation failed -- cannot copy new binaries.

Unexpected error on the binary file name or path in the patch.

Contact Tech Support.

7008 Warning Patch %1 rollback failed -- there is no original file to restore.

This patch has not been applied or has already been rolled back.

None.

7009 Error Patch %1 rollback failed -- cannot copy back previous binaries.

Unexpected error on the binary file name or path in the patch.

Contact Tech Support.

7010 Error Patch %1 failed -- the file %2 has the patch level %3, higher than this patch. You must rollback first %4.

A patch with a higher level has already been applied and is conflicting with this patch.

Roll back the higher-level patch, apply this patch, and then reapply the higher-level patch.

7011 Error Patch %1 failed -- it applies only to kernel %2.

You tried to apply the patch to a server that is not running the expected OS kernel.

Apply the patch on a server that has the expected kernel.

7012 Error Patch %1 failed -- The available free space is %2 bytes; you need at least %3 bytes to apply the patch.

The patch was applied to a server that is running low on space on the disk used for the server home directory.

Add more storage.

10001 Error Insufficient privilege (uid: [UID]).

Server modules are not running with root privilege.

Log in to the server with the root account before starting server modules.

10002 Error The server environment is corrupt.

The configuration file in the /etc directory, which provides the server home directory and other environment information, is either corrupted or deleted.

Determine the cause of the corruption and correct it. Perform regular backups of server configuration data so it can be restored.

10003 Error Failed to initialize configuration [File name].

During the initialization process, one or more critical processes experienced a problem. This is typically due to system drive failure, storage hardware failure, or system configuration corruption.

Check storage connectivity; check the system drive for errors via OS-provided utilities (i.e. fsck); check for a server environment variable file in /etc.

10004 Error Failed to get SCSI device information.

An error occurred when accessing the SCSI devices during startup. Most likely due to storage connectivity failure or hardware failure.

Check the storage devices (power status; controller status, etc.) Check the connectivity, e.g., cable connectors. With Fibre Channel switches, even if the connection status light indicates a good connection, it is not a guarantee. Push the connector in to verify. Check the specific storage device using OS-provided utilities such as hdparm.
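As an optional convenience, the hdparm check mentioned above can be wrapped in a small script. This is only a sketch; the device path is an example and the hdparm utility must be installed on the server.

```python
#!/usr/bin/env python3
"""Sketch: run hdparm's buffered-read timing test against one storage device.

The default device path is an example only; pass the device reported in the
event message as the first argument. Requires root privileges and hdparm.
"""
import subprocess
import sys

device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sdb"  # example device path

result = subprocess.run(["hdparm", "-t", device], capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    print(f"hdparm reported an error for {device}: {result.stderr}", file=sys.stderr)
```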

10005 Error A physical device will not be available because we cannot create a Global Unique Identifier for it.

Physical SCSI device is not qualified because it does not support proper SCSI Inquiry pages.

Get supported storage devices.

10006 Error Failed to write configuration [File name].

An error was encountered when writing the server configuration file to the system drive. This can only happen if the system drive runs out of space, is corrupted, or has a hardware failure.

Check the system drive using OS-provided utilities. Free up space if necessary. Replace the drive if it is not reliable.

10054 Error Server FSID update encountered an error.

Atomic merge is occurring on source.

No action is required. A retry will be performed when the retry delay expires.

10059 Error Server persistent binding update encountered an error.

There is a conflict on ACSL. Use a different ACSL for binding.

10100 Error Failed to scan new SCSI devices.

An error occurred when adding newly discovered SCSI devices to the system. This is most likely due to unreliable storage connectivity, hardware failure, or system resources running low.

See 10004 for information about checking storage devices. If system resources are low, run 'top' to check the process using the most memory. If physical memory is below the server recommendation, install more memory. If the OS is suspected to be in a bad state due to a hardware or software failure, restart the server machine.
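As an alternative to scanning 'top' interactively, the following sketch ranks processes by resident memory directly from /proc (Linux only). It is a diagnostic aid, not part of the product.

```python
#!/usr/bin/env python3
"""Sketch: list the five largest memory consumers by resident set size (VmRSS).

Reads /proc/<pid>/status on Linux; roughly equivalent to sorting the RES
column in 'top'. Purely a diagnostic aid.
"""
import os

def rss_kib(pid: str) -> int:
    """Return VmRSS in KiB for a pid, or 0 if it cannot be read."""
    try:
        with open(f"/proc/{pid}/status") as fh:
            for line in fh:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
    except OSError:
        pass
    return 0

def proc_name(pid: str) -> str:
    """Return the short command name for a pid."""
    try:
        with open(f"/proc/{pid}/comm") as fh:
            return fh.read().strip()
    except OSError:
        return "?"

pids = [p for p in os.listdir("/proc") if p.isdigit()]
for rss, pid in sorted(((rss_kib(p), p) for p in pids), reverse=True)[:5]:
    print(f"{pid:>8}  {proc_name(pid):<20} {rss} KiB")
```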

10101 Error Failed to update configuration [File name].

An error was encountered when updating the server configuration file to the system drive. This can only happen if the system drive is corrupted or has a hardware failure.

Check the system drive using OS-provided utilities.

10102 Error Failed to add new SCSI devices.

An error occurred when adding newly discovered SCSI devices to the system. This is most likely due to unreliable storage connectivity, hardware failure, or system resources are running low.

Check the storage devices and connectivity status. If system resources are low, run 'top' to check the process that is using the most memory. If physical memory is below the server recommendation, install more memory. If the OS is suspected to be in a bad state due to unexpected failure in either hardware or software components, restart the server machine.

10200 Warning Configuration [File name] exists.

A configuration file already exists, possibly from a previous installation. The existing configuration file will be reused.

If you suspect the existing configuration file is corrupted, do not use it. Remove the $ISHOME directory before re-installing.

10210 Error Marked virtualized PDev [GUID] OFFLINE, guid does not match SCSI guid [GUID].

A physical device has a different GUID written on the device header than the record in the configuration file. This is typically caused by old drives being imported without proper initialization. In rare cases, this is due to corruption of the configuration or device header.

Check the physical connection of the storage and the storage system. If the problem persists, contact Technical Support.

10211 Warning Marked Physical Device [%1] OFFLINE because its wwid %2 does not match scsi wwid %3, [GUID: %4].

The physical storage device is not the one registered previously.

Check if the storage device has been replaced. If not, rescan devices.

10212 Error Marked PDev [GUID] OFFLINE because scsi status indicate OFFLINE.

The physical storage system response indicates the specific device is off-line. It may have been removed or turned off, or it may be malfunctioning.

Check the storage system and cabling. After correcting the problem, rescan the adapter where the drive is connected. Limit the scope of the scan to that SCSI address.

10213 Error Marked PDev [GUID] OFFLINE because it did not respond correctly to inquiry.

Physical device failure or unqualified device for SCSI commands.

Check physical device.

10214 Error Marked PDev [GUID] OFFLINE because its GUID is an invalid FSID.

The GUID in the header of the drive does not match the unique ID, called the FSID, which is based on the external properties of the physical drive. It may be caused by drives being changed while the server was down.

Always change drives via the Console so they are removed from the virtual resource list first. Also, do not allow other applications direct access to physical drives; always go through the server.

10215 Error Marked PDev [GUID] OFFLINE because its storage capacity has changed.

The physical drive geometry, including the number of sectors, is different from the original record.

Rescan the drive to establish its properties.

10240 Error Missing SCSI Alias [A,C,S,L].

One of the SCSI paths for the device is not accessible. This may be due to a disconnected storage cable, a re-zoned FC switch, or storage port failure.

Check cabling and storage system. After correcting the problem, rescan the adapter connected to the drive, and limit the scope to that path.

10241 Error Physical Adapter [Adapter #] could not be located in /proc/scsi/.

The adapter driver may have been unloaded.

Check the loaded drivers.

10242 Critical Duplicate Physical Adapter number [Adapter #] in /proc/scsi/.

Some Linux kernel versions had a defect that caused the same adapter number to be assigned to two different adapters, resulting in possible overwritten data.

Do not repeatedly load and unload the Fibre Channel drivers and the server modules individually. Load and unload all the drivers together.

10244 Error Invalid FSID, device [Device ID] LUN in FSID [FSID] does not match actual LUN.

The FSID is generated with the LUN of the device. Once a device is used by the server, the LUN cannot be changed on the storage configuration.

Do not change the LUN of a virtualized drive. Revert to the original LUN in the storage configuration.

10245 Error Invalid FSID, Generate FSID %1 does not match device acsl:%2 GUID %3.

The physical storage device is not the one registered previously.

Check if the storage device has been replaced. If not, rescan devices.

10246 Error Failed to generate FSID for device acsl:[A C S L], can't validate FSID.

The physical drive does not present valid data for unique ID generation.

Only use this type of drive for virtual drives, not as a Service-Enabled Disk (SED).

10247 Error Device (acsl:[A C S L]) GUID is blank, can't validate FSID.

Some process may have erased the disk header. This can be due to accidental erasure by fdisk or format.

Never access the virtual drives by bypassing the server.

10250 Warning Remove scsi alias %1 from %2 because their categories are different.

Possible hardware configuration change.

Check if the device has changed or has failed.

10251 Warning Remove scsi alias %1 from %2 because their GUIDs are different.

Possible hardware configuration change.

Check if the device has changed or has failed.

10254 Error Import logical resources failed.

This might be caused by a disk IO failure.

Check storage devices.

10257 Error CDP Journal (GUID: %1) of virtual device %2 (ID %3) need repair.

CDP Journal expansion failed for the virtual device. Repair is needed.

Call support to investigate and repair the CDP Journal in question.

10258 Warning Console (%1): The number of configured device paths is %2 (%3 disks), which reaches/exceeds the maximum number of supported device paths: %4.

The number of device paths reached or exceeded the maximum of 4096 after a rescan discovered new devices.

Review the physical device configuration and keep the number of device paths within the limit.
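To gauge how close a server is to the 4096 device path limit, you can count the SCSI devices the kernel currently exposes. The sketch below counts entries under /sys/class/scsi_device on Linux; treat the result as an approximation of the configured path count.

```python
#!/usr/bin/env python3
"""Sketch: approximate the number of SCSI device paths (H:C:T:L entries)
seen by the kernel by counting /sys/class/scsi_device entries on Linux."""
import os

SCSI_CLASS_DIR = "/sys/class/scsi_device"
MAX_SUPPORTED_PATHS = 4096  # limit cited for event 10258

try:
    paths = os.listdir(SCSI_CLASS_DIR)
except OSError as exc:
    raise SystemExit(f"Cannot read {SCSI_CLASS_DIR}: {exc}")

print(f"SCSI device paths seen by the kernel: {len(paths)}")
if len(paths) >= MAX_SUPPORTED_PATHS:
    print("WARNING: at or above the supported device path limit")
```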

10259 Error Loading snapshot resource for virtual device id %1 has failed. Virtual device is loaded without snapshot resource.

The storage of the snapshot resource may be offline or inaccessible. The virtual device is loaded without its snapshot resource, based on the policy.

Check the storage system and all cabling to see if repair or replacement is needed. Rescan after correction is made.

11000 Error Failed to create socket. This kind of problem should rarely happen. If it does, it may indicate a network configuration error, possibly due to system environment corruption. It is also possible that the network adapter failed or is not configured properly, or that the network adapter driver has a problem.

Restart the network. If the problem persists, restart the OS or power-cycle the machine. If the problem still persists, you may need to reinstall the OS. If that is the case, make sure you properly save all the server configuration information before proceeding.

11001 Error Failed to set socket to re-use address.

System network configuration error, possibly due to system environment corruption.

See 11000.

11002 Error Failed to bind socket to port [Port number].

System network configuration error, possibly due to system environment corruption.

See 11000.

11003 Error Failed to create TCP service.

System network configuration error, possibly due to system environment corruption.

See 11000.

11004 Error Failed to register TCP service (program: [Program name], version: [Version number]).

System network configuration error, possibly due to system environment corruption.

See 11000.

11006 Error The server communication module failed to start.

Most likely the server port is occupied, either because of a previous unexpected failure of the communication module or because another application is using the TCP port.

Restart the OS and try again. If the problem persists, use OS-provided utilities, such as netstat, to check which process is using the port.
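The netstat check can also be approximated with a short bind test: if binding the server's TCP port fails with "address already in use", another process still holds it. The port number in the sketch below is a placeholder, not a documented CDP/NSS port.

```python
#!/usr/bin/env python3
"""Sketch: check whether a TCP port is already held by another process.

The port number is a placeholder -- substitute the port used by the server
communication module in your installation.
"""
import errno
import socket

PORT = 11576  # placeholder port number

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    sock.bind(("0.0.0.0", PORT))
except OSError as exc:
    if exc.errno == errno.EADDRINUSE:
        print(f"Port {PORT} is in use; identify the owning process with netstat or lsof")
    else:
        print(f"Bind failed for another reason: {exc}")
else:
    print(f"Port {PORT} is free")
finally:
    sock.close()
```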

11007 Warning There is not enough disk space available to successfully complete this operation and maintain the integrity of the configuration file. There is currently %1 MB of disk space available. The server requires %2 MB of disk space to continue.

There is not enough available space on the disk holding the configuration file.

Increase disk space.
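To compare the free-space figures reported in the message with what the file system actually has, a one-line check is enough. The sketch below assumes the configuration files live under /usr/local/ipstor/etc, the path referenced elsewhere in this guide.

```python
#!/usr/bin/env python3
"""Sketch: report free space on the file system holding the configuration
files so it can be compared with the requirement reported by event 11007."""
import shutil

CONFIG_DIR = "/usr/local/ipstor/etc"  # configuration area referenced elsewhere in this guide

usage = shutil.disk_usage(CONFIG_DIR)
print(f"Free space on {CONFIG_DIR}: {usage.free // (1024 * 1024)} MB "
      f"of {usage.total // (1024 * 1024)} MB total")
```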

11101 Error SAN Client ([host name]): Failed to add SAN Client.

This error is most likely due to a system configuration error or system resources running low.

Check OS resources using provided utilities such as top.

11103 Error SAN Client (%1): Authentication failed.

The user account to connect to the server is not valid.

Check user account and password.

11104 Error There are too many SAN Client connections.

The number of simultaneous connections exceeded the supported limit that the current system resources can handle.

Stop some client connections.

11106 Error SAN Client ([host name]): Failed to log in.

Access account might be invalid.

Check user name and password.

11107 Error SAN Client ([host name]): Illegal access.

The client host attempted to perform an operation beyond its granted privileges.

This is very rare. Record the message and monitor the system. If this happens often, investigate the cause to prevent security breaches.

11112 Error SAN Client ([host name]): Failed to parse configuration file [File name].

The configuration file is not readable by the server.

If there is a valid configuration file saved, restore it to the system. Make sure to use reliable storage devices for critical system information.

11114 Error SAN Client ([host name]): Failed to allocate memory.

System resources are running low. This may be due to too little memory installed in the system or a runaway process that is consuming too much memory.

Use top to locate the process using the most memory. If physical memory is below the server recommendation, install more memory to the system.

11115 Warning SAN Client ([host name]): License conflict -- Number of CPU's approved: [Number of CPU], number of CPU's used: [Number of CPU].

The number of clients attached to the server exceeded the licensed number allowed.

Obtain additional license keycodes.

11222 Error Console ([host name]): Failed to remove SAN Client (%2) from virtual device %3.

Failed to unassign a virtual device from the client possibly due to a configuration update failure.

Check system disk status and system status. Check the configuration repository status if the configuration repository is configured.

11201 Error There are too many Console connections.

Too many GUI consoles are connected to the particular server. This is a rare condition.

None.

11202 Error Console ([host name]): Illegal access.

The console host attempted to perform an operation beyond its granted privileges.

See 11107.

11203 Error Console ([host name]): SCSI device re-scanning has failed.

An error occurred while adding newly discovered SCSI devices. This can be due to unreliable storage connectivity, hardware failure, or low system resources.

See 10100.

11204 Error Console ([host name]): SCSI device checking has failed.

An error occurred when accessing the SCSI devices when console requests the server to check the known storage devices. Most likely due to storage connectivity failure or hardware failure.

Check the storage devices (e.g., power status, controller status). Check the connectivity (e.g., cable connectors). For Fibre Channel switches, even if the connection status light indicates a good connection, it is still not a guarantee. Push the connector in to make sure. Check the specific storage device using OS-provided utilities, such as hdparm.

11211 Error Console ([host name]): Failed to save file [file name].

An error was encountered while writing the server configuration file to the system drive. This means the system drive ran out of space, is corrupted, or has a hardware failure.

See 10006.

11212 Error Console ([host name]): Failed to create index file [file name] for Event Log.

Failed to create an index file for the event log retrieval. Most likely due to insufficient system disk space.

Free up disk space or add additional disk space to system drive.

11216 Error Console ([host name]): Out of system resources. Failed to fork process.

The server is low in memory resources for normal operation.

See 11114.

11219 Error Console ([host name]): Failed to add virtual device [Device number].

Failed to create a virtual drive due to a system configuration error, a storage hardware failure, or a system resource access failure.

Check system resources, such as memory, system disk space, and storage device connectivity (e.g., cable connections).

11220 Error Console ([host name]): Failed to remove virtual device [Device number].

When a virtual drive is deleted, all associated resources must be handled, including replica resources. If the replica server is not reachable, the remove operation fails.

Check the log for specific reason of the failure. If the replica is not reachable, the condition must be corrected before trying again.

11221 Error Console ([Host name]): Failed to add SAN Client ([Client name]) to virtual device [Device ID].

Failure to create a SAN Client entity is typically due to a system configuration error, storage hardware failure, or system resource access failure. This should be rare.

Check system resources, such as memory and system disk space. Check the syslog for the specific reason for the failure.

11233 Error Console ([host name]): Failed to map the SCSI device name for [A C S L].

The mapping of the SCSI address, namely the adapter, channel, SCSI ID, and LUN (ACSL), is no longer valid. This can be due to sudden failure, improper removal, or change of storage devices in the server.

See 11204. Check and restore the physical configuration to proper state if changed improperly.

11234 Error Console ([host name]): Failed to execute "hdparm" for [Device number].

Failed to perform the device throughput test for the given device. This can be due to the OS being in a bad state such that the program cannot be run, or because the storage device failed.

Run the hdparm program from the server console directly. Check storage devices as described in 11204.

11237 Error Console ([user name]): Failed to get file /usr/local/ipstor/etc/[host name]/ipstor.dat.cache

This message can display when the server console tries to query the server status (such as replication status). The RPC server retrieves this information in the /usr/local/ipstor/etc/<host>/ipstor.dat.cache file. It will fail if the file is in use by other server processes.

A retry usually opens it successfully. The Console automatically retries the query 3 seconds later until it succeeds. The retry will stop when the Console is closed.
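The Console behavior described above amounts to a simple timed retry. For ad-hoc checking from the server shell, a stand-alone equivalent might look like the sketch below; the <host> directory component is a placeholder for the managed server's name.

```python
#!/usr/bin/env python3
"""Sketch: retry opening ipstor.dat.cache every 3 seconds, mirroring the
Console retry behavior described for event 11237. Replace <host> with the
managed server's name."""
import time

CACHE_FILE = "/usr/local/ipstor/etc/<host>/ipstor.dat.cache"  # replace <host>
RETRY_SECONDS = 3
MAX_ATTEMPTS = 10

for attempt in range(1, MAX_ATTEMPTS + 1):
    try:
        with open(CACHE_FILE, "rb") as fh:
            data = fh.read()
        print(f"Read {len(data)} bytes on attempt {attempt}")
        break
    except OSError as exc:
        print(f"Attempt {attempt} failed ({exc}); retrying in {RETRY_SECONDS}s")
        time.sleep(RETRY_SECONDS)
else:
    print("Gave up; check whether the file is held open by another server process")
```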

11240 Error Console ([host name]): Failed to start the server module.

When any server process cannot be started, it is typically due to insufficient system resources, an invalid state left by a server process that was not stopped properly, or an unexpected OS process failure that left the system in a bad state. This should be rare. If this occurs frequently, there may be external factors contributing to the behavior that must be investigated and removed before running the server.

If system resources are low, use top to check the process that is using the most memory. If physical memory is below the server recommendation, install more memory in the system. If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components, restart the server machine to make sure the OS is in a healthy state before trying again.

11242 Error Console ([host name]): Failed to stop the server module.

When a server process cannot be stopped, it is typically due to insufficient system resources, an invalid state left by a server process that may not have been stopped properly, or an unexpected OS process failure that left the system in a bad state. This should be very rare. If this occurs frequently, there may be external factors contributing to the behavior that should be resolved before running the server.

See 11240.

11244 Error Console ([host name]): Failed to access the server administrator list.

Failed to retrieve the list of server administrators / users / iSCSI users possibly due to the system being busy or file open error.

Check event log for actual cause.

11245 Error Console ([host name]): Failed to add user %2.

The server administrator or user ID, or the password, is not valid.

Check the system settings for the user and password policy to verify that the user ID and password conform to the policy.

11247 Error Console ([host name]): Failed to delete user %2.

The user ID is not valid. Check if the user exists, look at the log message for a possible cause, and try again.

11249 Error Console ([host name]): Failed to reset password for user %2.

Password is not valid. Check to see if other administrators already deleted the user.

11251 Error Console ([host name]): Failed to update password for user %2.

Password is not valid. Check to see if other administrators already deleted the user.

11253 Error Console ([host name]): Failed to modify virtual device %2.

Failed to expand virtual device possibly due to a device error or the system being busy.

Check device status and system status.

11257 Error Console ([host name]): Failed to add SAN Client ([Host name]).

See 11101. See 11101.

11259 Error Console ([host name]): Failed to delete SAN Client (%2).

Specified client could not be deleted possibly due to configuration update failure.

Check system disk and Configuration Repository if configured.

11261 Error Console ([Host name]): Failed to get SAN Client connection status for virtual device [Device ID].

Failed to inquire about the SAN Client connection status due to either a system configuration error, storage hardware failure, or system resource access failure. This should be a rare event.

Check system resource, such as memory, system disk space. Check the syslog for specific reason of the failure.

11262 Error Console ([host name]): Failed to parse configuration file [File name].

See 11112. See 11112.

11263 Error Console ([host name]): Failed to restore configuration file [File name].

See 10006. See 10006.

11266 Error Console ([host name]): Failed to erase partition of virtual device [Device number].

Storage hardware failure. See 10004.

11268 Error Console ([host name]): Failed to update meta information of virtual device %2.

This may be due to a disk being offline or a disk error.

Check disk status.

11270 Error Console ([host name]): Failed to add mirror for virtual device [Device #].

This is typically due to a storage device hardware error.

See 10004.

11272 Error Console ([host name]): Failed to remove mirror for virtual device %2.

This may be due to a mirror disk error or the system may be busy.

Check disk status and try again.

11274 Error Console ([host name]): Failed to stop mirroring for virtual device %2.

This may be due to the system being busy.

Retry later.

11276 Error Console ([host name]): Failed to start mirror synchronization for virtual device %2.

This may be due to a mirror disk error or the system being busy.

Check disk status and try again.

11278 Error Console ([host name]): Failed to swap mirror for virtual device [Device #].

This is most likely due to storage device hardware error.

See 10004.

11280 Error Console ([host name]): Failed to create shared secret for server %2.

Secure communication channel information for a failover setup, a replication setup, or a Near-line mirroring setup could not be created.

Check if specified IP address can be reached from the failover secondary server, replication primary server, or Near-line server.

11282 Error Console ([host name]): Failed to change device category for physical device [Device number] to [Device number].

Storage hardware failure. See 10004.

11285 Error Console ([host name]): Failed to execute failover command (%2).

Failed to execute the command to start failover or stop failover.

Check system log message for actual cause.

11287 Error Console ([host name]): Failed to set failover mode ([Mode]).

The system resources are low, or the OS is in an unstable state, possibly due to a previous unexpected error condition.

See 11240.

11289 Error Console ([host name]): Failed to restart the server module.

Failed to restart the server modules for failover setup or NAS operations.

Check system log messages for possible cause.

11291 Error Console ([host name]): Failed to update meta information of physical device [Device number].

Storage hardware failure. See 10004.

11293 Error Console ([host name]): Failed to swap IP address from [IP address] to [IP address].

11294 Error Console ([host name]): Failed to get host name.

See 11000. See 11000.

11295 Error Console ([host name]): Invalid configuration format.

The configuration file is not readable by the server.

If there is a valid configuration file saved, restore it to the system. Make sure to use reliable storage devices for the critical system information.

11296 Error Console ([host name]): Failed to resolve host name -- %2.

Host name could not be mapped to IP address on replication primary server during replication setup.

Check the accuracy of the hostname entered for the replication target server and the network configuration between the replication primary and target servers.

11299 Critical Failed to save the server configuration to configuration repository. Check the storage connectivity and if necessary, reconfigure the configuration repository.

Configuration file on the configuration repository could not be updated possibly due to an offline device or disk failure.

Check system log messages for possible cause.

11300 Error Invalid user name ([User name]) used by client at IP address [IP address].

An invalid user name is used to log in the server, either from the client host or the IPStor console.

Make sure the correct user name is used. The valid user names are root and the admin users created using the "Administrator" option. If there are many unexplained occurrences of this message in the log, someone may be deliberately trying to gain unauthorized access by guessing user credentials. In that case, investigate, starting with the source IP address.

11301 Error Invalid password for user ([User name]) used by client at IP address [IP address].

The incorrect password was used during authentication from the IPStor Console, or from the client host during adding of the server.

Make sure you are using the correct user name and password. If there are many unexplained occurrences of this message in the log, this might be an attempt to gain unauthorized access. In this case, investigate, starting with the source IP address.

11302 Error Invalid passcode for machine ([Host name]) used by client at IP address [IP address].

An incorrect shared secret was used by the client host to connect to the server. This may be because the server was reinstalled and the credential file was changed. In rare cases, this may occur if someone is trying to gain data access by guessing the shared secret.

From the client host, delete the server and add it back again to resynchronize with the shared secret.

11303 Error Authentication failed in stage [%1] for client at IP address [IP address].

An incorrect login was used by the client host to connect the server.

From the client host, delete the server, add it back again, and use the correct login.

11306 Error The server Administrator group does not exist.

Server Administrator Group does not exist in the system possibly due to improper installation or upgrade.

Contact Tech Support for possible cause and fixes.

11307 Error User %1 at IP address %2 is not a member of the server Administrator's group.

There might have been a typo when the user typed the ID and password to log in.

Check the user ID and password to make sure there is no possibility for unauthorized login from that IP address.

11308 Error Obsolete - The client group does not exist.

IPStor Client group does not exist in the system possibly due to improper installation or upgrade.

(OBSOLETE since IPStor 5.1)

Contact Tech Support for possible cause and fixes.

11309 Error User ID %1 at IP address %2 is invalid.

There might have been a typo when the user typed the ID and password to log in.

Check the user account and retry.

11310 Error The Client User name %1 does not match with the client name %2.

User name does not match the original user when resetting the credential for the client.

Use the original user name or ask the IPStor Administrator to reset the credential from the client.

11315 Error Authentication failed for user (%1) at IP address %2 -- %3.

An incorrect login was used during authentication from the Console, or from the client host when adding the server.

Make sure the correct user name and password pair is used. If you see many unexplained occurrences of this message in the log, someone may be attempting to gain access by guessing the password. In this case, investigate starting with the source IP address.

11408 Error Synchronizing the system time with [host name]. A system reboot is recommended.

The failover pair has detected a substantial time difference. It is recommended to keep the failover pair synchronized to avoid potential problems.

Set the correct time for both machines in the failover pair.

11410 Warning Enclosure Management: %1

Physical enclosure might have some failures.

Check enclosure configuration.

11411 Error Enclosure Management: %1

Physical enclosure has some failures.

Check enclosure configuration.

11505 Error Console (%1): Failed to process the rollback TimeMark because the Meta Data Resource %2 is mounted.

The Meta Data Resource is mounted.

11506 Error Console ([host name]): Failed to start replica scanning for virtual device %2.

This may be due to a connection error or the system being busy.

Check connectivity between replication primary server and target server. Check to see if the system is busy with pending operations.

11508 Error Console ([host name]): Failed to set the properties for the server.

Failed to update configuration file for the new server properties possibly due to disk error or the system being busy.

Check system disk and system status.

11510 Error Console ([host name]): Failed to save report -- %2.

A report file could not be saved, possibly due to insufficient space or a disk error.

Check system disk status, available space, and system status.

11511 Error Console ([host name]): Failed to get the information for the NIC.

Network interface information could not be retrieved possibly due to configuration error or low system resources.

Verify that the network is configured properly. Check if system memory is running low.

11512 Error Console ([host name]): Failed to add a replica for device %2 to the server %3 (watermark: %4 MB, time: %5, interval: %6, watermark retry: %7, suspended: %8).

Failed to configure replication on the primary server possibly due to the system being busy.

Check system log messages for actual cause.

11514 Error Console ([host name]): Failed to remove the replica for device %2 from the server %3 (watermark: %4 MB, time: %5, interval: %6, watermark retry: %7, suspended: %8).

Failed to remove replication configuration on the primary server when deleting replication setup possibly due to the system being busy.

Check system log messages for actual cause.

11516 Error Console ([host name]): Failed to create the replica device [Device number].

Failed to create the replica for the source virtual device. This is most likely due to a problem on the remote server.

Check the hardware and software condition in the remote replica server to make sure it is running properly before trying again.

11518 Error Console ([host name]): Failed to start replication for virtual device %2.

The remote server is not reachable, or is in a bad state.

Check the hardware and software condition in the remote replica server to make sure it is running properly before trying again.

11520 Error Console ([host name]): Failed to stop replication for virtual device %2.

It is possibly because the system was busy.

Check the system log messages for actual cause and retry.

11522 Error Console ([host name]): Failed to promote replica device %2 to a virtual device.

It is possibly because the system was busy.

Check the system log messages for actual cause and retry.

11524 Error Console ([host name]): Failed to run the server X-Ray.

If a server process cannot be started, it is typically due to one of the following reasons:

• insufficient system resources

• the server process was not stopped properly and was left in an invalid state

• an unexpected OS process failure left the system in a bad state.

This is a very rare occurrence. If this behavior is frequent, look for external factors affecting the server.

If the system resources are low, run the 'top' command to check which process is using the most memory. If the physical memory is below the server recommendation, install more memory on the system. If you suspect the OS to be in a bad state due to an unexpected failure in either hardware or software components, restart the server machine.
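
For example, the following commands give a quick view of memory usage (illustrative only; the exact output format varies by Linux distribution):

   free -m                     # overall physical memory and swap usage
   ps aux --sort=-rss | head   # processes sorted by resident memory, largest first

Within an interactive 'top' session, pressing Shift+M sorts the process list by memory usage.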

11534 Error Console ([host name]): Failed to reset the umap for virtual device %2.

A storage hardware failure has occurred.

Check the storage devices (e.g., power status, controller status). Check connectivity (e.g., cable connectors). With FC switches, even if the connection status light indicates the connection is good, it is not a guarantee; press the connector in to verify. Check the specific storage device using an OS-provided utility, such as 'hdparm'.
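
As an illustration only (the device name /dev/sda is an example; substitute the actual device path), 'hdparm' can be used to query and exercise a suspect disk:

   hdparm -I /dev/sda   # display the drive identification information
   hdparm -t /dev/sda   # perform a simple buffered read timing test

If either command fails or hangs, suspect the device or its connection.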

11535 Error Console ([host name]): Failed to update the replication parameters for virtual device %2 to the server %3 (watermark: %4 MB, time: %5, interval: %6, watermark retry: %7, suspended: %8).

Failure to update replication properties, possibly because the system was busy.

Wait until the system is not as busy and retry.

11537 Error Console ([host name]): Failed to claim physical device %2.

This server version limits the storage capacity.

Check license agreement and keycodes.

11539 Error Console ([host name]): Failed to import physical device %2.

A storage hardware failure has occurred.

Check the storage devices (e.g., power status, controller status). Check connectivity (e.g., cable connectors). With FC switches, even if the connection status light indicates the connection is good, it is not a guarantee; press the connector in to verify. Check the specific storage device using an OS-provided utility, such as 'hdparm'.

11541 Error Console ([host name]): Failed to save event message (ID: %2).

Failed to create an Event message from the console or CLI for replication, snapshot expansion, etc. possibly because the system disk does not have enough space.

Check the system disk status and available space.

11542 Error Console ([host name]): Failed to remove replica device %2.

Failed to delete replica disk possibly due to the system being busy.

Check system status and retry.

11544 Error Console ([host name]): Failed to modify replica device %2.

Failed to expand replica disk possibly due to the system being busy.

Check system status and retry.

11546 Error Console ([host name]): Failed to mark the replication for virtual device %2.

Failure to mark replication during synchronization may be due to a connectivity issue between the primary and target server, or because the system is busy.

Check connectivity and system status. Try again.

11548 Error Console ([host name]): Failed to determine if data was written to virtual device %2.

Failed to check if the virtual device has been updated possibly due to a device error or the system being busy.

Check device status and system status.

11553 Error Console ([host name]): Failed to get login user list.

The list of users could not be retrieved from the system.

Check system status.

11554 Error Console ([host name]): Failed to set failover option <selfCheckInterval: %d sec>.

Failed to set failover options on the primary server, possibly because the failover module stopped or due to a disk error.

Check failover module status.

Check system disk status.

11556 Error Console ([host name]): Failed to start snap copy from virtual device [Device number] to virtual device [Device number].

This may happen if another process that requires the snapshot is performing I/O, such as a backup operation. It may also be due to a storage hardware failure.

Check to see if another process is using the snapshot. See 10004 if storage failure is suspected.

11560 Error Console ([host name]): Failed to get licenses.

License keycode information could not be retrieved.

Check system disk and system status.

11561 Error Console ([host name]): Failed to add license %2.

The license is not valid. Check license keycode validity.

11563 Error Console ([host name]): Failed to remove license %2.

The license is not valid. Check license keycode validity.

11565 Error Console ([host name]): Failed to check licenses -- option mask %2.

The license is not valid. Check license keycode validity.

11567 Error Console ([host name]): Failed to clean up failover server directory %2.

This may be due to a disk error or the system being busy when the failover setup was to be removed.

Check system disk and system status.

11568 Error Console ([host name]): Failed to set (%2) I/O Core for failover -- Failed to create failover configuration.

Failed to notify IOCore of failover setup or removal possibly due to the system being busy.

Reconfigure failover if this happens during failover setup.

11569 Error Console ([host name]): Failed to set [Device number] to Fibre Channel mode [Mode].

This is possibly because the Fibre Channel driver is not properly loaded, or the wrong version of the driver is loaded. IPStor FC target mode requires the FalconStor version of the driver to be used. The driver name should be qla2x00fs.o.

Use lsmod to check that the qla2x00fs driver is loaded. If it is, check to make sure it is the correct revision. The correct revision should be located in the ipstor/lib directory.
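
For example (illustrative commands only), the loaded module and its version can be confirmed with:

   lsmod | grep qla2x00fs   # verify that the FalconStor qla2x00fs module is loaded
   modinfo qla2x00fs        # display the module version, if installed in the standard module path

Compare the reported version against the driver shipped in the ipstor/lib directory.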

11571 Error Console ([host name]): Failed to assign Fibre Channel device %2 to %3; rolled back changes

Failed to assign virtual device to FC target. All intermediary configuration changes were rolled back and configuration remains unchanged.

Check LUN conflict, disk status, system status, Fibre Channel Target module status.

11572 Error Console ([host name]): Failed to assign Fibre Channel device %2 to %3; could not roll back changes.

Failed to assign virtual device to Fibre Channel target. However, the configuration was partially updated.

Check LUN conflict, disk status, system status, FC target module status. You may need to restart the FC target Module to resolve the configuration conflict.

11574 Error Console ([host name]): Failed to unassign Fibre Channel device %2 from %3 and returned %4; rolled back changes.

Failed to unassign virtual device from FC target. All intermediary configuration changes were rolled back. The configuration is unchanged.

Check Fibre Channel Target module status.

11575 Error Console ([host name]): Failed to unassign Fibre Channel device %2 from %3 (not rolled back) and returned %4; could not roll back changes.

Failed to unassign virtual device from Fibre Channel target. However, the configuration is partially updated.

Check Fibre Channel Target module status. You may need to restart the Fibre Channel Target Module to resolve the configuration conflict.

11577 Error Console ([host name]): Failed to get Fibre Channel target information.

This may be due to a problem with the Fibre Channel Target module.

Check Fibre Channel Target module status.

11578 Error Console ([host name]): Failed to get Fibre Channel initiator information.

This could be because the Fibre Channel driver is not properly loaded, or the wrong version of the driver is loaded.

Run 'lsmod' to check that the qla driver is loaded and it is the correct revision located in $ISHOME/lib/modules/<kernel>/scsi.

11581 Error Console ([host name]): Failed to set NAS option %2.

Failed to start the NAS processes. This is typically due to insufficient system resources, an invalid state left by server processes that were not stopped properly, or an unexpected OS process failure that left the system in a bad state. This should happen very rarely. If it occurs frequently, there are probably external factors contributing to the behavior that should be investigated and removed before running the server.

If system resources are low, run the 'top' command to determine which process is using the most memory. If the physical memory is below the server recommendation, install more memory on the system. If the OS is suspected to be in a bad state due to unexpected failure in either hardware or software components, restart the server machine.

11583 Error Console ([host name]): Failed to update Fibre Channel client (%2) WWPNs.

This could be because the Fibre Channel driver is not properly loaded, or the wrong version of the driver is loaded.

Run the 'lsmod' command to make sure the correct qla driver is loaded and located in $ISHOME/lib/modules/<kernel>/scsi.

11585 Error Console ([host name]): Failed to set Fibre Channel option %2.

Fibre Channel option could not be enabled or disabled.

Check system status.

11587 Error Console ([host name]): Failed to demote virtual device %2 to a replica.

Failed to convert a virtual device (a promoted replica) back to a replica.

Check if virtual device is online or if system is busy.

11590 Error Out of disk space to expand snapshot storage for virtual device [Device ID].

There is no more storage left for automatic expansion of the snapshot resource, which just reached the threshold usage.

Add additional storage. Physical storage must be prepared for virtual device use before it can be allocated for snapshot resources.

11591 Error Failed to expand snapshot storage for virtual device [Device ID]: maximum segment exceeded (error code [Return code]).

The virtual drive has an upper limit on the number of physical segments. The drive has been expanded so many times that it exceeded the limit.

Do not expand drives in increments that are too small. Consolidate the segments by mirroring or creating a snapshot copy to another virtual drive with fewer segments before expanding again.

11594 Error Console ([host name]): Failed to set CallHome option %2.

The Email Alert option could not be enabled or disabled.

Check system status.

11598 Error Out of disk space to expand CDP journal storage for %1.

The CDP Journal could not be expanded due to insufficient disk space.

Add more storage.

11599 Error Failed to expand CDP journal storage for %1: maximum segment exceeded (error code %2).

The CDP Journal resource could not be expanded due to the maximum supported segments.

Currently up to 64 segments are supported; to prevent this from happening, create a bigger CDP journal to avoid frequent expansions.

11605 Error Failed to create character device to map TimeMark %1 for virtual device %2.

Failed to map a raw device interface for virtual device to perform backup, snapshot copy, or TimeMark copy.

Check virtual device and snapshot resource status.

11608 Error Console ([host name]): Failed to proceed copy/rollback TimeMark operation with client attached on the target virtual device [Device ID].

The device to be rolled back is still assigned to client hosts. CDP/NSS requires that the device be guaranteed to have no I/O during roll back. Therefore the device cannot be assigned to any hosts.

Unassign the virtual device before rolling back.

11609 Error [Task name] Failed to create TimeMark for virtual device [Device ID] while the last creation/client notification is in progress.

The last snapshot operation, including the notification process, is still in progress. This may be caused by too short an interval between snapshots, or because the snapshot notification is held up by the network or client applications.

Adjust the frequency of TimeMark snapshots. Determine the actual time it takes for snapshot notification to complete, which is application and data activity dependent.

11610 Error Console ([host name]): Failed to create TimeView for virtual device %2 TimeMark %3.

The TimeView resource could not be created for the virtual device possibly due to a device error.

Check to see if the virtual device and snapshot resource are online.

11613 Error Console ([host name]): Failed to enable TimeMark for device %2.

The TimeMark option could not be enabled for the virtual device, possibly due to a device error.

Check to see if the virtual device and snapshot resource are online.

11615 Error Console ([host name]): Failed to disable TimeMark for device %2.

The TimeMark option for the virtual device could not be disabled, possibly because the system is busy.

Retry later.

11618 Error Failed to select TimeMark %1 for virtual device %2: TimeMark %3 has already been selected.

TimeMark is already selected for another operation.

Wait for the other operation to complete.

11619 Error Failed to select TimeMark %1 character device for virtual device %2.

The TimeMark could not be selected for raw device backup, possibly because the system is busy.

Check system status and retry later.

11621 Error Failed to create TimeMark %1 for virtual device %2.

The TimeMark for this virtual device could not be created, possibly due to a device error or a busy system.

Check device status and system status.

11623 Error Failed to delete TimeMark %1 for virtual device %2.

The specified TimeMark for the virtual device could not be removed, possibly due to a device error, a pending operation, or a busy system.

Check device status and system status. Retry later.

11625 Error Failed to copy TimeMark %1 of virtual device %2 as virtual device %3.

TimeMark failed to copy from source virtual device to target virtual device, possibly due to a device error or busy system.

Check device status and system status. Retry later.

11627 Error Failed to roll back to TimeMark timestamp %1 for virtual device %2.

TimeMark rollback for virtual device failed possibly due to a device error or busy system.

Check device status and system status. Retry later.

11631 Error Failed to expand snapshot storage for virtual device %1 (error code %2).

Automatic snapshot resource expansion failed possibly due to a device error, quota reached, out-of-space, or the system being busy.

Check log messages for possible cause.

11632 Error Console ([host name]): Failed to set failover option on secondary server <heartbeatInterval: %2 sec, autoRecoveryInterval: %3 sec>.

Failed to update failover options with auto-recovery mode on secondary server.

Check failover module status.

Check system disk status.

11633 Error Console ([host name]): Failed to set failover option on secondary server <heartbeatInterval: %2 sec, autoRecoveryInterval: disabled>.

Failed to update failover options without auto-recovery mode on secondary server.

Check failover module status.

Check system disk status.

11637 Error Failed to expand CDP journal storage for %1 (error code %2).

The CDP Journal could not be expanded possibly because there is not enough space or there is an error on the disk.

Check disk status, available space and system status.

11638 Error Failed to expand CDP journal storage for %1. The virtual device is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB, which exceeds the limit.

The CDP Journal resource could not be expanded due to the storage quota limit being exceeded.

Increase storage quota for the specified user.

11639 Error The virtual device %1 is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB. Only %5 MB will be added to the CDP Journal.

The CDP Journal resource was expanded with a smaller increment size than usual due to user quota limit.

Increase storage quota for the specified user.

11640 Error Failed to expand snapshot resource for virtual device %1. The virtual device is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB, which exceeds the limit.

The snapshot resource was not expanded because the quota limit was exceeded.

Increase storage quota for the specified user.

11641 Error The virtual device %1 is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB. Only %5 MB will be added to the snapshot resource.

The snapshot resource was expanded with a smaller increment size than usual due to user quota limit.

Increase storage quota for the specified user.

11642 Error Failed to create a temporary TimeView from TimeMark %1 to copy TimeView data for virtual device %2.

TimeMark might not be available to create the temporary TimeView or raw device creation failed.

If TimeMark is still available, try TimeMark copy again.

11643 Error [Task %1] Failed to create TimeMark for virtual device %2 while notification to client %3 for other resource is in still progress.

Snapshot notification to the same client for other virtual devices is still pending.

Retry later.

11644 Error Take TimeView [TimeView name] id [Device ID] offline because the source TimeMark has been deleted.

The TimeMark snapshot which the TimeView is based on has been deleted. The TimeView image therefore is set to OFF-LINE because it is no longer accessible.

Remove the TimeView from the resource.

11645 Error Console ([host name]): Failed to create TimeView: virtual device [Device ID] already have a TimeView.

For each TimeMark snapshot, only one TimeView interface can be created.

None.

11649 Error Failed to convert inquiry string on SCSI device %1

The inquiry string contains invalid information.

Check the device configuration.

11655 Error Bad capacity size for SCSI device %1

Failed to get capacity information from the device.

Check the storage.

11656 Warning Discarded scsi device %1, unsupported Cabinet ID

The Cabinet ID of the device is not supported.

Check storage device definition.

11657 Warning Discarded scsi device %1, missing "%2" vendor in inquiry string.

The disk is not from one of the supported vendors.

Check storage device definition.

11658 Warning SCSI device %1 storage settings are not optimal. Check the storage settings.

Storage settings are not optimal.

Check the storage.

11659 Warning Discarded scsi device %1, exceeded maximum supported LSI LUN %2.

Number of LSI device LUNs exceeds the maximum supported value.

Check storage configuration.

11660 Error Failed to allocate a %1 MB DiskSafe mirror disk in storage pool %2. There is only %3 MB free space left in storage pool.

The DiskSafe mirror disk could not be created due to insufficient storage space.

Add more storage.

11661 Error Failed to expand the DiskSafe mirror disk by %1 MB for user %2. The total size allocated for this user would be %3 MB and this exceeds the user's quota of %4 MB.

The DiskSafe mirror disk could not be expanded due to user quota.

Increase storage quota for the specified user.

11662 Error Failed to create a %1 MB DiskSafe snapshot resource. There is not any storage pool with enough free space.

The Snapshot resource could not be created for DiskSafe mirror disk due to insufficient storage in storage pool assigned for DiskSafe.

Add more storage to the DiskSafe storage pool.

11665 Error Console ([host name]): Failed to enable backup for virtual device %2.

The backup option could not be enabled for the virtual device, possibly due to a device error or because the maximum number of virtual devices that can be enabled for backup has been reached.

Check that the maximum limit of 256 virtual devices that can be enabled for backup has not been reached. Also, check the disk status and system status.

11667 Error Console ([host name]): Failed to disable backup for virtual device %2.

The backup option could not be disabled for the virtual device possibly due to a device error.

Check disk status and system status.

11668 Error Console ([host name]): Failed to stop backup sessions for virtual device %2.

The raw device backup session for the virtual device could not be stopped possibly due to the system being busy.

Check system status.

11672 Error Console ([host name]): Virtual device %2 cannot join snapshot group %3 group id %4.

The virtual device could not be added to the group possibly because a snapshot operation was in progress.

Check if a snapshot operation is pending for the virtual device or group.

Check disk status and system status.

11673 Error Console ([host name]): Virtual device %2 cannot leave snapshot group %3 group id %4.

Virtual device could not be removed from the group possibly because a snapshot operation was in progress.

Check if a snapshot operation is pending for the virtual device or group.

Check disk status and system status.

11676 Error Console ([host name]): Failed to resize NAS file system on virtual device %2.

The NAS file system could not be resized automatically after expansion using system commands.

Try an offline resize operation.

11681 Error Console ([host name]): Failed to resume Cache Resource %2 (ID: %3).

SafeCache usage for the virtual device could not be resumed possibly due to a device error.

Check disk status and system status.

11683 Error Console ([host name]): Failed to suspend cache Resource %2 (ID: %3).

SafeCache usage for the virtual device could not be suspended possibly due to a device error.

Check disk status and system status.

11684 Error Console ([host name]): Failed to reset cache on target device %2 (ID: %3) for %4 copy.

SafeCache could not be reset for the snapshot copy target resource possibly due to the system being busy.

Check if system is busy and retry later.

11686 Error Console ([host name]): Failed to add %2 Resource %3 (ID: %4).

The specified resource could not be created possibly due to a device error.

Check disk status and system status.

11688 Error Console ([host name]): Failed to delete %2 Resource %3 (ID: %4).

The specified resource could not be deleted, possibly due to the system being busy.

Check if system is busy.

11690 Error Console ([host name]): Failed to resume HotZone resource %2 (ID: %3).

HotZone usage could not be resumed possibly due to disk error.

Check system disk status and system status.

11692 Error Console ([host name]): Failed to suspend HotZone resource %2 (ID: %3).

HotZone usage could not be suspended possibly due to disk error.

Check system disk status and system status.

11694 Error Console ([host name]): Failed to update policy for HotZone resource %2 (ID: %3).

The HotZone policy could not be updated possibly due to disk error.

Check system disk and system status.

11695 Error Console ([host name]): Failed to get HotZone statistic information.

HotZone statistics information could not be retrieved from log file.

Check HotZone log, disk status, system status.

11696 Error Console ([host name]): Failed to get HotZone status.

The HotZone status could not be retrieved possibly due to disk error.

Check HotZone device status.

11698 Warning CDP/Safecache marker in device(%1) is full, fail new marker(%2) request for vdev(%3).

CDP/Safecache's marker is full.

Delete old snapshot marker.

11699 Warning CDP/Safecache(%1) is temporarily full, size (%2)MB.

CDP/Safecache is temporarily full.

Increase the flush speed and expand cache size.

11701 Error Console ([host name]): Failed to reinitialize snapshot resource (ID: %2) for virtual device (ID: %3).

The snapshot resource could not be reinitialized possibly due to disk error.

Check if the snapshot resource is online.

Check if the system is busy.

11706 Error Console ([host name]): Failed to shrink snapshot resource for resource %2 (ID: %3).

Shrinking of the snapshot resource failed possibly due to the system being busy.

Check system status and retry.

11707 Warning Deleting TimeMark %1 on virtual device %2 to maintain snapshot resource threshold is initiated.

A TimeMark was deleted after a failed expansion in order to maintain the snapshot resource threshold.

Check disk status, available space. Check if system is busy.

Try manual expansion if it is necessary.

11708 Error Failed to get TimeMark information to roll back to TimeMark %1 for virtual device %2.

TimeMark information could not be retrieved for rollback possibly due to a pending TimeMark deletion operation.

Retry later.

11709 Warning Snapshot marker %1 on CDP journal storage for %2 was deleted.

11711 Error Copying CDP journal data to %1 %2 (ID: %3) failed to start. Error: %4.

CDP Journal data could not be copied possibly due to the system being busy.

Check system status.

11713 Error Copying CDP journal to %1 %2 (ID: %3) failed to complete. Error: %4.

Copying CDP Journal data failed possibly due to the system being busy.

Check system status.

11715 Error Console ([host name]): Failed to suspend CDP Journal Resource %2 (ID: %3).

The CDP Journal for the resource could not be suspended possibly due to the system being busy.

Check system status.

11716 Error Console ([host name]): Failed to get information for license activation.

License registration information could not be retrieved.

Check that the license is registered and that the public key is not missing.

11717 Error Console ([host name]): Failed to activate license (%2).

License registration failed. Check connectivity to the registration server, and check that the file system is not read-only since intermediary files must be created.

11719 Warning This server upgrade is not licensed. Please contact FalconStor Support immediately.

License registration information could not be retrieved.

Contact FalconStor Support.

11722 Error Console (%1): Failed to flush the TimeView cache data for TimeView resource %2.

The snapshot resource or the cache resource may be offline.

Check the snapshot and cache resources status.

11724 Error Failed to convert Snapshot Resource vid %1 due to %2

Snapshot Resource conversion failed.

If the error is due to insufficient space, add more storage. Otherwise, check the system log for more information about the failure.

11728 Error Console (%1): TimeView Data Conversion has failed for virtual device %2 (timestamp %3)%4

TimeView conversion may fail due to insufficient space, I/O error or insufficient memory.

Expand the TimeView if it is due to insufficient space. Check the storage if it is due to I/O error. Check memory usage if it is due to insufficient memory.

11730 Error Console ([host name]): Failed to suspend mirror for virtual device %2.

Mirror synchronization could not be suspended possibly due to the system being busy.

Check system status.

11738 Error Console ([host name]): Failed to update the replication parameters for virtual device %2 to server %3 (compression: %4, encryption: %5, MicroScan: %6)

Replication properties could not be updated.

Check system disk status and system status.

11740 Warning [Task %1] Snapshot creation for %2 %3 will proceed even if the Near-line mirror is out-of-sync on server %4.

A snapshot is going to be created while the mirror is out-of-sync in a Near-line setup.

Synchronize the mirror of the primary disk on the primary server.

11741 Warning [Task %1] Snapshot creation / notification for %2 %3 will proceed even if the Near-line mirroring configuration cannot be retrieved from server %4.

The system tries to connect to the primary server to obtain the client configuration information when taking a snapshot. The snapshot is still created but the client is not notified.

Check connectivity between the primary server and Near-line server.

Check if the primary server is busy.

11742 Warning [Task %1] Snapshot creation / notification for %2 %3 will proceed even if the Near-line mirroring configuration is invalid on server %4.

The system attempts to check the primary server's configuration when a snapshot is taken on the Near-line server. If unsuccessful, the snapshot is still taken, but the data might not be valid.

Check primary disk configuration and status.

11761 Error Console ([host name]): Failed to updated mirror policy.

Virtual device mirroring policy could not be updated possibly due to the system being busy.

Check system status and retry later.

11764 Error Failed to create snapshot image %1 by snapshot marker for virtual device id %2.

The snapshot marker for virtual device could not be created, possibly due to a device error or the system being busy.

Check the device status and the system status.

11765 Error Failed to create snapshot image %1 by snapshot marker for group id %2.

The snapshot marker for group could not be created possibly due to a device error or the system being busy.

Check the device status and the system status.

11766 Warning CDP/Safecache marker is full, failed to create new marker %1 request for vdev %2.

The device has reached 256 TimeMarks, which is the maximum number supported if configured with CDP. Or the device has reached the maximum number (256) of unflushed TimeMarks in SafeCache.

For CDP, make sure to keep fewer than 256 TimeMarks. For SafeCache, make sure fewer than 256 TimeMarks are unflushed. Check for possible storage issues that may be preventing CDP or SafeCache from flushing the data at a reasonable rate.

11767 Warning CDP/Safecache marker is full, failed to create new marker %1 request for group %2.

The group has reached 256 TimeMarks, which is the maximum number supported if configured with CDP. Or the group has reached the maximum number (256) of unflushed TimeMarks in SafeCache.

For CDP, make sure to keep fewer than 256 TimeMarks. For SafeCache, make sure you have fewer than 256 unflushed TimeMarks. Check for a possible storage issue preventing CDP or SafeCache from flushing the data at a reasonable rate.

11768 Warning Backup session closed for vdev %1 since absolute session duration %2 %3 was reached. The start time of the session was %4

The backup session duration time has exceeded the limit.

None

11769 Warning Backup session closed for group %1 since absolute session duration %2 %3 was reached. The start time of the session was %4.

The backup session duration time has exceeded the limit.

None

11770 Error Console ([host name]): Failed to get mutual chap user list.

The list of iSCSI Mutual CHAP Secret users could not be retrieved.

Check system status and retry later.

11771 Error Console ([host name]): Failed to reset mutual chap secret for user %2.

The iSCSI Mutual CHAP Secret for a user could not be reset by root.

Check system status and retry later.

11773 Error Console ([host name]): Failed to update mutual chap secret for user %2.

The iSCSI Mutual CHAP Secret for a user could not be updated by root.

Check system status and retry later.

11775 Error Console ([host name]): Failed to add mutual chap secret user %2.

The iSCSI Mutual CHAP Secret for a user could not be added by root possibly due to disk problem or the system being busy.

Check system disk and system status.

11777 Error Console ([host name]): Failed to delete mutual chap secret user %2.

The iSCSI Mutual CHAP Secret for a user could not be deleted by root.

Check log message for possible cause.

11900 Error Failed to import report request.

There is an invalid parameter specified in the report request.

Check parameters for report generation.

11901 Error Failed to parse report request %1 %2.

Report request parsing failed. Check parameters for report generation.

11902 Error Undefined report type %1.

Report type is invalid. Check parameters for report generation.

11910 Error Failed to create report file %2 (type %1).

Specified report type could not be created possibly because the system was busy, out-of-space, or had a disk error.

Check system log message for possible cause.

13300 Error Failed to authenticate to the primary server -- Failover Module stopped.

The security credentials for the failover operation are corrupted or deleted. This will not happen under normal operating conditions.

Reconfigure the failover set after re-establishing user credentials, e.g., reset the root password of both hosts and then reconfigure failover using the new root credentials.

13301 Error Failed to authenticate to the local server -- Failover Module stopped.

See 13300 for the probable cause and suggested action.

13302 Error Failed to transfer primary static configuration to secondary.

Quorum disk failure. Check failover quorum disk status.

13303 Error Failed to transfer primary dynamic configuration to secondary.

Quorum disk failure. Check failover quorum disk status.

13307 Error Failed to transfer primary authentication information to secondary.

Quorum disk failure. Check failover quorum disk status.

13308 Error Invalid failover configuration detected. Failover will not occur.

The primary configuration file is missing.

Check the network to make sure the configuration file can be transferred.

13309 Error Primary server failed to respond to command from secondary.

Quorum disk or communication failure.

Check failover quorum disk status and network.

13316 Error Failed to add IP address [IP address].

This error should be very rare. A frequent occurrence may indicate a network configuration error, possibly due to system environment corruption. It is also possible that the network adapter failed or is not configured properly or that the network adapter driver has a problem.

Restart the network. If the problem persists, restart the OS or restart the machine (turn it off and then on). If the problem still persists, you may need to reinstall the OS; in this case, make sure you properly save all configuration information before proceeding.
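
A minimal check sequence, assuming a typical Linux configuration (the interface name eth0 is an example only):

   ip addr show              # list configured IP addresses and interface states
   ethtool eth0              # check the link status of the suspect adapter
   service network restart   # restart networking on Red Hat-style distributions

If the address still cannot be added, proceed with the OS or machine restart described above.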

13317 Error Failed to release IP address [IP address].

During failover, the system may be holding the IP address of the failed server for longer than the failover module can wait. This is not a problem and the message can be ignored.

None.

13319 Error Failed to stop IPStor Failover Module. Host may need to reboot.

If an IPStor process cannot be stopped, it is usually due to insufficient system resources, an invalid state left by a server process that was not stopped properly, or an unexpected OS process failure that left the system in a bad state. This is typically very rare. If frequently occurring, external factors may be the cause, which should be investigated and resolved before running the server.

See 11240.

13320 Error Failed to update the configuration files to the primary server [Error].

See 13300 for the probable cause and suggested action.

13700 Error Failed to allocate memory -- Self-Monitor Module stopped.

When any server process cannot be started, it is usually due to insufficient system resources, invalid state left by a process that may not have been stopped properly, or due to an unexpected OS process failure that left the system in a bad state. This should happen very rarely. If frequently occurring, there may be external factors contributing to the behavior that should be investigated and resolved before running the server.

If system resources are low, use top to check the process that is using the most memory. If physical memory is below the IPStor recommendation, install more memory in the system. If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components, restart the server machine to make sure the OS is in a healthy state before trying again.

13701 Error Failed to release IP address [IP address]. Retrying the operation.

See 13317 for the probable cause and suggested action.

13702 Error Failed to add virtual IP address: %1. Retrying the operation.

There may be a network issue preventing the primary server from getting its virtual IP back during failback.

Check network configuration.

13703 Error Failed to stop IPStor Self-Monitor Module.

See 13319 for the probable cause and suggested action.

13704 Error Server module failure detected. Condition: %1.

The secondary server has detected that one module has been stopped on the primary.

Check primary server status.

13710 Critical The Live Trial period has expired for server [Server name]. Please contact FalconStor or its representative to purchase a license.

The live trial grace period has been exceeded.

Contact FalconStor or a representative to obtain proper license.

13711 Critical The following options are not licensed: [IPStor option]. Please contact FalconStor or its representative to purchase a license.

The specific option is not licensed properly.

Contact FalconStor or a representative to obtain proper license.

13800 Error Server failure detected. Failure condition: [Error].

The primary server detected the failure condition as described, which is being reported to the secondary server. The primary server is waiting for the secondary server to decide whether it should take over.

None.

13804 Critical Quorum disk failed to release to secondary.

The virtual drive holding the quorum is no longer available due to the deletion of the first virtual drive when the system was in an inconsistent state.

This is typically very rare, unless the server is in an experimental stage where drives are created and deleted randomly. Call Technical Support if it persists.

13817 Critical Primary server failback was unsuccessful. Failed to update the primary configuration.

The primary server failed to restore from the failover operation due to other conditions.

Check the log for the specific error conditions encountered and correct the situation accordingly.

13818 Critical Quorum disk negotiation disk failed.

The primary server failed to access the quorum disk.

The secondary server will take over anyway. Shut down the primary via power control or an auto shutdown script to avoid conflict.

13820 Warning Failed to retrieve primary server health information.

The secondary server cannot receive a heartbeat from the primary server. By trying to contact other network entities, the secondary server determines whether the primary server is down or whether the secondary server itself is isolated from the network.

Check other messages in the log for more details and a more precise picture of the situation.

13821 Error Failed to contact other entities in network. Takeover is not initiated assuming failure is on this server.

The secondary server failed to receive a heartbeat from the primary and failed to contact any other network entities in the subnet. This network problem prevents the secondary server from taking over the primary server.

Check the secondary server network status.

13822 Critical Secondary will not take over because storage connectivity is not 100%.

When the primary reports a storage connectivity problem, the secondary will try to determine if it has better connectivity. If it fails to connect to all storage devices, it will not take over.

Check the storage connectivity for both the primary and secondary to correct the situation. See 11204 for checking storage.

13823 Warning Partner server failed to acknowledge takeover request in time. This server will forcefully take over the partner.

Failover waiting time period is too short or the partner is not fully operational.

Check failover environment parameter settings and the partner status.

13827 Error Failed to stop quorum updating process. PID. Maybe due to storage device or connection failure.

There may be a storage device or connection failure.

Check storage connectivity.

13828 Warning Almost running out of file handlers (current [Number of handles], max [Number of handles]).

The operating system is running out of resources for file handles.

Determine the appropriate amount of memory required for the current configuration and applications. Check for any process that is leaking memory.
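
As an illustration (standard Linux interfaces; limits vary by system), current file handle usage and limits can be checked with:

   cat /proc/sys/fs/file-nr    # allocated handles, free handles, and the system-wide maximum
   cat /proc/sys/fs/file-max   # system-wide maximum number of file handles
   ulimit -n                   # per-process open file limit for the current shell

A steadily climbing allocated count may indicate a process that is leaking file handles.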

13829 Warning Almost running out of memory (current [Number of KB] K, max [Number of KB]).

The operating system is running out of memory.

See 13828.

13830 Error Get configuration file from storage failed.

There may be a storage device or connection failure.

Check storage connectivity.

13832 Error Server operation is resumed either because the user initiated an action, or the partner server was suspended.

The failed server was forced to come back.

Check the primary and partner server status.

13833 Error Failed to back up file from [source] to [target location].

There may be a storage device or connection failure.

Check storage connectivity.

13834 Error Failed to copy file out from Quorum repository.

There may be a storage device or connection failure on the quorum disk.

Check storage connectivity.

13835 Error Failed to take over primary.

The secondary server is not completely functional.

Check secondary server status.

13836 Error Failed to get configuration files from repository. Check and correct the configuration disk.

There may be a storage device or connection failure on the quorum disk.

Check storage connectivity.

13841 Error Secondary server does not match primary server status.

Takeover is in progress but the primary server is not in DOWN or READY status.

Check primary server status. It may have temporarily been in an inconsistent state; if its status is still not DOWN or READY, check if the sm module is running.

13842 Warning Secondary server will takeover. Primary is still down.

The primary server failed. None.

13843 Error Secondary server failed to get original conf file from repository before failback.

There may be a storage device or connection failure on the quorum disk.

Check storage connectivity.

13844 Error Failed to write to repository.

There may be a storage device or connection failure on the quorum disk.

Check storage connectivity.

13845 Warning Quorum disk failure detected. Secondary is still in takeover mode.

There may be a storage device or connection failure on the quorum disk.

Check storage connectivity.

13848 Warning Primary is already shut down. Secondary will take over immediately.

Failover occurred. None.

13849 Warning One of the heartbeat channels is down: IP address [IP]

Lost heartbeat IP information. Check network connections.

13850 Error Secondary server can not locate quorum disk. Either the configuration is wrong, or the drive is offline.

There may be a storage device or connection failure on the quorum disk.

Check storage connectivity.

13851 Error Secondary server can't take over due to [Reason]

The secondary server cannot take over due to the indicated reason.

Take action based on the reason given.

13853 Error Secondary notified primary to go up because secondary is unable to take over.

The secondary server detected a failure on the primary server, as well as a failure on itself. Therefore, take over of the primary does not occur. This might happen if the primary was booting up.

Check the status of both servers.

13856 Error Secondary server failed to communicate with the primary server via IP.

There is a heartbeat communication issue between failover partners.

Check network connections.

13858 Critical Secondary server failed to communicate with remote mirror.

There was a primary server failure or network communication is broken.

Check server and network connections.

13860 Error Failed to merge configuration file.

This may be because of inconsistent failover node configuration files when merging them after restore.

Check the server configuration.

13861 Error Failed to rename file from %1 to %2.

The file name already exists or the file system is inconsistent or read-only.

Check the file system.

13862 Error Failed to write file %1 to repository.

There might be storage device or connection failure on the quorum disk.

Check storage connectivity.

13863 Critical Primary server is commanded to resume.

Forced primary recovery. Check the status of failover servers.

13864 Critical This server operation will terminate.

Forced server down. Check server status.

13877 Error Secondary server failed to take over.

Secondary server is not in a good state.

Check secondary server.

13878 Error Primary server has invalid failover configuration.

Server configuration is not consistent.

Check failover setup configuration.

13879 Critical Secondary server detected kernel module failure; you may need to reboot server %1.

Unexpected kernel module error happened.

Reboot the secondary server.

13880 Critical Secondary server has detected communication module failure. Failover is not initiated. [Error].

Unexpected error happened in comm module so failover will not occur.

Check server modules status.

13881 Error Secondary server will terminate failover module. error [Error].

Forced fm module to stop. Check server status.

13882 Error Primary server quorum disk may have problem. error [Error].

There might be a storage device or connection failure on the quorum disk.

Check storage connectivity.

13888 Warning Secondary server is temporarily busy.

The secondary server has a heavy load perhaps due to I/O or TimeMark operations.

Check server status.

13895 Critical Partner server failure detected: [Failure] (timestamp [Date Time])

The server detected the specified failure condition on the partner, which can result in a failover.

Check the failover condition to resolve the issue.

13896 Warning Partner server power control device status: %1.

This server regularly checks communication with the power control device on the partner server.

If the status is not OK, check the partner server.

13897 Critical Server failure detected.

The server has detected a failure on the partner, resulting in a failover.

Check the partner server status.

13898 Warning Manual takeover occurred.

A manual takeover has occurred.

Ignore the message if this is an expected action.

13900 Critical This server failed to take over due to %1.

This server is not able to take over the partner due to a power control failure or a missing configuration file.

Check power control status and failover configuration file.

13901 Error Failed to read %1 from Configuration Repository.

This may be due to a storage device or connection failure on the quorum disk.

Check storage connectivity.

13909 Critical Allocation block size mismatch between failover partner(local %1 remote %2)

Allocation block size is not set to the same value on servers in a failover setup.

Set the environment parameter to the same value.

13910 Error Failover status in quorum may not be right on this server: %1.

Failover status could not be updated on the quorum disk due to a storage device or connection failure on the disk.

Check storage connectivity.

13912 Warning Failover is configured without a power control option enabled.

A reliable physical power control option such as IPMI or HP iLO is not implemented in a failover setup.

In order to avoid any outage by single node failure, configure the power control option using a physical power control device.

13913 Critical This server cannot take over the partner due to a missing power control device.

A reliable physical power control option such as IPMI or HP iLO is not implemented in a failover setup.

Either fix the issue on the partner or manually take over after shutting down the partner.

13916 Warning Failover is configured without a power control option enabled.

A reliable physical power control option such as IPMI or HP iLO is not implemented in a failover setup.

In order to avoid any outage by single node failure, configure the power control option using a physical power control device.

13917 Critical This server has no power control device.

A reliable physical power control option such as IPMI or HP iLO is not implemented in a failover setup.

Either fix the issue on the partner or manually take over after shutting down the partner.

13918 Critical Forceful takeover will continue even though power control is not functioning.

Forceful takeover happened without powering off the primary server.

Fix your power control equipment.

13919 Critical Local server failure detected: fsnupd terminated abnormally.

The server fsnupd terminated abnormally. If this is a failover set, the secondary server may be able to take over.

Contact Technical Support for possible cause.

13920 Critical Local server failure detected: ipstorcomm terminated abnormally.

The server ipstorcomm terminated abnormally. If this is a failover set, the secondary server might take over.

Contact Technical Support for possible cause.

13921 Critical Local server failure detected: NAS terminated abnormally.

The server NAS terminated abnormally. If this is a failover set, the secondary server may be able to take over.

Contact Technical Support for possible cause.

13922 Critical Local server failure detected: iscsi terminated abnormally.

The server iscsi terminated abnormally. If this is a failover set, the secondary server may be able to take over.

Contact Technical Support for possible cause.

13923 Critical Local server failure detected: istor terminated abnormally.

The server istor terminated abnormally.

Contact Technical Support for possible cause.

13924 Critical Local server failure detected: permanently stop log module due to too many failures.

The server permanently stopped the log module due to too many failures.

Contact Technical Support for possible cause.

13925 Critical Local server failure detected: log module terminated abnormally.

The server log module terminated abnormally.

Contact Technical Support for possible cause.

13926 Critical Local server failure detected: log module restart due to memory exceed maximum size fail.

The server log module restarted because memory exceeded the maximum threshold.

Contact Technical Support for possible cause.

13927 Critical Local server failure detected: The auth module has permanently stopped due to too many auth module failures.

The auth module has permanently stopped because of too many auth module failures.

Contact Technical Support for possible cause.

13928 Critical Local server failure detected: auth module terminated abnormally.

The connection authentication module has stopped. In a failover setup, the partner server will try to take over this server.

If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure.

13929 Critical Local server failure detected: iscliproxy terminated abnormally.

The CLI proxy module has stopped. In a failover setup, the partner server will try to take over this server.

If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure.

13930 Critical Local server failure detected: ioserver process terminated abnormally.

The IO core module has stopped. In a failover setup, the partner server will try to take over this server.

If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure.

13931 Critical Local server failure detected: ioctl_mgr terminated abnormally.

An IO core thread has stopped. In a failover setup, the partner server will try to take over this server.

If stopping the module was not intentional, check the system log for more information about the reason for the module failure.

13932 Critical Local server failure detected: downstream terminated abnormally.

An IO core thread has stopped. In a failover setup, the partner server will try to take over this server.

If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure.

13933 Critical Local server failure detected: control_from_user terminated abnormally.

An IO core thread has stopped. In a failover setup, the partner server will try to take over this server.

If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure.

13934 Critical Local server failure detected: control_from_kernel terminated abnormally.

An IO core thread has stopped. In a failover setup, the partner server will try to take over this server.

If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure.

13935 Critical Local server failure detected: ioctl_evt terminated abnormally.

An IO core thread has stopped. In a failover setup, the partner server will try to take over this server.

If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure.

13936 Critical Local server failure detected: kfsnbase terminated abnormally.

The downstream management module has stopped. In a failover setup, the partner server will try to take over this server.

If the stopping of the module was not intentional, check the system log to get more information about the reason for the module failure.

13937 Critical Local server failure detected: Fibre Channel link down detected.

A Fibre Channel HBA port has changed to a state of link down. In a failover setup, the partner server will try to take over this server.

Check the cables and make sure the FC GBIC is correctly plugged in. You may need to replace cables or the GBIC.

13938 Critical Local server failure detected: storage connectivity failure.

The server cannot access one or more storage arrays. In a failover setup, the partner server will try to take over this server.

Determine the cause of the storage failure (i.e. FC port link down, storage array down, connectivity failure) and correct the issue.

13939 Critical Local server failure detected: pipe full, restart comm.

The configuration file does not have correct Fibre Channel settings; for example, FC clients are detected but no FC adapter is set to target mode.

Check configuration of FC HBAs.

13940 Critical Local server failure detected: invalid fc configuration in conf file.

The configuration file does not have correct Fibre Channel settings; for example, FC clients are detected but no FC adapter is set to target mode.

Check configuration of FC HBAs.

13941 Critical Local server failure detected: ioctl stuck: pid %1 function %2 seconds %3.

An IOCTL function was not processed in time.

Check the system log for more information about the reason for the function failure.

13942 Warning Secondary server will continue takeover with exception:%1

The partner server took over this server under a special context such as a manual takeover.

Check to see if this context is expected.

13943 Warning Secondary server detect primary server has a flaw during taking over:%1.

The takeover process may not be perfect due to issues with quorum mirror segments.

Check configuration repository and its mirror.

13944 Critical Secondary server detect physical layout mismatch with primary server.

The device physical layout does not match between the failover servers' configurations.

Check storage configuration.

13945 Critical This server has detected ioserver failure.

The IO core module has failed. Check the system log to get more information about the reason for the module failure. You may need to reboot the server.

13946 Error Secondary server failed to prepare file during takeover: %1

The takeover process may not be perfect due to issues with configuration file processing.

Check the system log to get more information about the reason for the failure.

13947 Critical Secondary server will continue manual takeover with exception: %1

The takeover process may not be perfect due to issues with configuration files.

Check the system log to get more information about the reason for the failure.

15000 Error Snapshot copy failed to start because of invalid input arguments.

The source virtual device or the destination virtual device cannot be accessed when creating a snapshot and copying it.

Check source and target virtual devices.

15002 Error Snapshot copy from virtual device id %1 to id %2 failed because it could not open file %3.

The source virtual device or destination virtual device cannot be accessed when creating a snapshot and copying it.

Check source and target virtual devices.

15003 Error Snapshot copy from virtual device id %1 to id %2 failed because it failed to allocate (%3) memory.

Memory is low. Check server memory amount and usage.

15004 Error Snapshot copy from virtual device id %1 to id %2 failed because an error occurred when writing to file %3, errno is %4.

The source virtual device or the destination virtual device cannot be accessed when creating a snapshot and copying it.

Check source and target virtual devices.

15005 Error Snapshot copy from virtual device id %1 to id %2 failed because an error occurred when lseek in file %3, errno is %4.

The source virtual device or the destination virtual device cannot be accessed when creating a snapshot and copying it.

Check source and target virtual devices.

15006 Error Snapshot copy from virtual device id %1 to id %2 failed because an error occurred when reading from file %3, errno is %4.

The source virtual device or the destination virtual device cannot be accessed when creating a snapshot and copying it.

Check source and target virtual devices.

15008 Error Snapshot copy from virtual device id [Device ID] to id [Device ID] might have run out of snapshot reserved area. Please expand the snapshot reserved area.

The snapshot copy operation failed, most likely because the snapshot resource area is too small to maintain the snapshot.

Increase the snapshot resource or create the snapshot copy while the virtual drive is not being actively written to.

15016 Error TimeMark copy failed to start because of invalid input arguments.

The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark.

Check source and target virtual devices.

15018 Error TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because it failed to open file %4.

The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark.

Check source and target virtual devices.

15019 Error TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because it failed to allocate (%4) memory.

Memory is low. Check server memory amount and usage.

15020 Error TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because an error occurred when writing to file %4, errno is %5.

The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark.

Check source and target virtual devices.

15021 Error TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because an error occurred when lseek in file %4, errno is %5.

The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark.

Check source and target virtual devices.

15022 Error TimeMark copy from virtual device id %1 snapshot image %2 to id %3 failed because an error occurred when reading from file %4, errno is %5.

The source virtual device or the destination virtual device cannot be accessed when copying an existing TimeMark.

Check source and target virtual devices.

15024 Warning TimeMark copy from virtual device id [Device ID] snapshot image [TimeMark name] to id [Device ID] might have run out of snapshot reserved area. Please expand the snapshot reserved area.

The TimeMark copy operation failed, most likely because the snapshot resource area is too small to maintain the snapshot.

Increase the snapshot resource or create a TimeMark copy while the virtual drive is not being actively written to.

15032 Error TimeMark rollback failed to start because of invalid input arguments.

The source virtual device or the destination virtual device cannot be accessed.

Check source and target virtual devices.

15034 Error TimeMark rollback for virtual device id %1 to snapshot image %2 failed because it failed to open file %3.

The source virtual device or the destination virtual device cannot be accessed.

Check source and target virtual devices.

15035 Error TimeMark rollback for virtual device id [Device ID] to snapshot image [TimeMark name] failed because it failed to allocate ([Kilobytes]) memory.

The memory resource in the system is running low. The system cannot allocate enough memory to perform the rollback operation.

Stop unnecessary processes or delete some TimeMarks and try again. If this happens frequently, increase the amount of physical memory to an adequate level.

15036 Error TimeMark rollback for virtual device id %1 to snapshot image %2 failed because an error occurred when writing to file %3, errno is %4.

The source virtual device or the destination virtual device cannot be accessed.

Check source and target virtual devices.

15037 Error TimeMark rollback for virtual device id %1 to snapshot image %2 failed because an error occurred when lseek in file %3, errno is %4.

The source virtual device or the destination virtual device cannot be accessed.

Check source and target virtual devices.

15038 Error TimeMark rollback for virtual device id %1 to snapshot image %2 failed because an error occurred when reading from file %3, errno is %4.

The source virtual device or the destination virtual device cannot be accessed.

Check source and target virtual devices.

15040 Error TimeMark rollback for virtual device id [Device ID] to snapshot image [TimeMark name] might have run out of snapshot reserved area. Please expand the snapshot reserved area.

The snapshot resource area is used for the rollback process. If the resource is too low, it will affect the rollback operation.

Expand the snapshot resource to an adequate level.

15041 Error TimeMark rollback for virtual device id %1 to snapshot image %2 failed because an error occurred while getting TimeMark extents.

This might be due to snapshot resource device error.

Check snapshot resource device.

15050 Error Server IO cpl call UPDATE_TimeMark failed on vdev id [Device ID]: Invalid Argument

A TimeMark-related function call returned an error. For example, if you get this error during a TimeMark copy, it is most likely due to insufficient snapshot resource space.

Check the system log. Take action based on the related function call that has failed. For TimeMark copy failure, expand the snapshot resource to an adequate level. Check if TimeMark or Replication successfully completed. If not, manually run TimeMark or Replication after expanding the snapshot resource.

15051 Error Server ioctl call %1 failed on vdev id %2: I/O error (EIO).

The virtual drive is not responsive to IO requested by the upper layer.

Try again after checking devices.

15052 Error Server ioctl call %1 failed on vdev id %2: Not enough memory space (ENOMEM).

The virtual drive is not responsive to the upper layer calls because of low memory condition.

Check system memory.

15053 Error Server ioctl call %1 failed on vdev id %2: No space left on device (ENOSPC).

The virtual drive is not responsive to upper layer calls due to insufficient free space.

Check free space on physical and virtual devices.

15054 Error Server ioctl call %1 failed on vdev id %2: Already existed (EEXIST).

The operation may have already been executed or is in conflict with an existing operation.

Check operation results.

15055 Error Server ioctl call [Device ID] failed on vdev id [Device ID]: Device or resource is busy (EBUSY).

The virtual drive is busy with I/O and not responsive to the upper layer calls.

Try again when the system is less busy, or determine the cause of the high activity and correct the situation if necessary.

15056 Error Server ioctl call %1 failed on vdev id %2: Operation still in progress (EINPROGRESS).

The virtual drive is busy with I/O and not responsive to the upper layer calls.

Try again when the system is less busy, or determine the cause of the high activity and correct the situation.

16002 Error Failed to create TimeMark for group %1.

TimeMark cannot be created on all group members.

Check group members.

16003 Error Failed to delete TimeMarks because they are in rollback state.

TimeMarks are in rollback state.

Try again.

16004 Error Failed to delete TimeMarks because TimeMark operation is in progress to get TimeMark information.

TimeMark operation is in progress.

Try again.

16010 Error Group cache/CDP journal is enabled for virtual device %1, vdev signature is not set for VSS.

Virtual device is not VSS aware.

Select the right virtual device for VSS operation.

16106 Error Failed to update the configuration of the Primary Disk %1 for Near-line Recovery.

Near-line storage device might have a problem.

Check the server connection of the near-line pair.

16107 Error Failed to update the configuration of the Near-line Disk %1 for Near-line Recovery.

Near-line storage device might have a problem.

Check the server connection of near-line pair.

16108 Error Failed to start TimeMark rollback on Near-line Disk %1 for Near-line Recovery.

Near-line storage device might have a problem.

Check the TimeMark status and server connection of the near-line pair.

16109 Error Failed to assign the Primary Server to Near-line Disk %1 to resume the Near-line Mirroring configuration.

The ioctl call may fail because the server is busy or because of an assignment error from Fibre Channel or iSCSI, depending on the protocol.

Check the server status and retry.

16110 Error Failed to update the configuration of the Primary Disk %1 to resume Near-line Mirroring configuration.

Near-line storage device might have a problem.

Check the server connection of the near-line pair.

16111 Error Failed to update the configuration of the Near-line Disk %1 to resume Near-line Mirroring configuration.

Near-line storage device might have a problem.

Check the server connection of the near-line pair.

16120 Error Failed to update the configuration of the Primary Disk %1 for Near-line Replica Recovery.

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16121 Error Failed to update the configuration of the Near-line Disk %1 for Near-line Replica Recovery.

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16122 Error Failed to update the configuration of the Near-line Replica %1 for Near-line Replica Recovery.

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16123 Error Failed to start TimeMark rollback on Near-line Replica %1 for Near-line Replica Recovery.

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16124 Error Failed to update the configuration of the Primary Disk %1 to resume the Near-line Mirroring configuration.

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16125 Error Failed to update the configuration of the Near-line Disk %1 to resume the Near-line Mirroring configuration.

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16126 Error Failed to update the configuration of the Near-line Replica %1 to resume the Near-line Mirroring configuration.

Storage device might have a problem.

Check the server connection of the near-line pair and replica server.

16200 Error Console ([host name]): Failed to modify Fibre Channel client (%2) WWPN from %3 to %4.

There may be duplicate WWPNs.

Check FC WWPNs.

16211 Error Failed to add storage to the thin disk %1 (error code %2).

There may not be enough storage available.

Check storage capacity.

16212 Error Failed to add storage to the thin disk %1. The virtual device is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB, which exceeds the limit.

Quota limit is reached. Check user quota.

16213 Error The virtual device %1 is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB. Only %5 MB will be added to the thin disk.

Quota limit is reached. Check user quota.

16214 Error Out of disk space to add storage to thin disk %1.

There is not enough storage available.

Check storage capacity.

16215 Error Failed to add storage to the thin disk %1: maximum segment exceeded (error code %2).

There is not enough storage available.

Check storage capacity.

16217 Error Console ([host name]): Failed to update the thin disk properties for virtual device %2 (threshold: %3, increment: %4)

Parameter values might be inconsistent.

Check parameters.

16219 Error Console ([host name]): Failed to modify the thin disk size for virtual device %2 to %3 MB (%4 sectors).

Thin disk expansion failed possibly due to a device error or the system being busy.

Check device status and system status; then try again.

16220 Error Console ([host name]): Failed to add storage to the thin disk %2.

There is not enough storage available.

Check storage capacity.

16223 Error Failed to expand TimeView %1 (error code %2)

16224 Error Not enough disk space is available to expand TimeView %1

1) The physical storage for the TimeView Resource has run out of space. 2) Allocation block size is enabled, which may require more space than the actual expansion size.

1) Check the physical amount of space for public and/or storage pools. Add more storage as needed. 2) Check if the allocation block size is enabled. Add more storage as needed.

16225 Error Failed to expand TimeView %1: maximum segment exceeded (error code %2)

Contact Tech Support.

16226 Error Failed to expand TimeView %1. The TimeView is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB, which exceeds the limit.

Contact Tech Support.

16227 Error The TimeView %1 is assigned to user %2. The quota for this user is %3 MB and the total size allocated to this user is %4 MB. Only %5 MB will be added to the TimeView.

Contact Tech Support.

16232 Error Failed to initialize report scheduler configuration.

The system might be busy or disk space is running low.

Check system resource usage and disk usage.

16234 Error Failed to start report scheduler.

The system might be busy and take longer to start.

Check if the CLI proxy server module has started. Restart the comm module if the proxy server module is not started.

16236 Error Failed to stop report scheduler.

The system might be busy and take longer to stop.

Check to see if the CLI proxy server module is stopped.

16238 Error Failed to retrieve report schedule(s).

The system might be busy. Retry later.

16240 Error Failed to add / update report schedule(s).

The system might be busy or disk space is running low.

Check system resource usage and disk usage.

16242 Error Failed to remove report schedule(s).

The system might be busy. Retry later.

16252 Error Failed to initialize statistics log scheduler configuration.

The statistics scheduler thread could not be started, possibly due to an incorrect configuration or the system status.

Check system status.

16254 Error Failed to start statistics log scheduler.

The statistics scheduler thread could not start to collect information.

Check system status.

16256 Error Failed to stop statistics log scheduler.

Statistics scheduler thread could not stop possibly due to the system being busy.

Check system status.

16258 Error Failed to retrieve statistics log schedules.

Statistics schedules could not be retrieved possibly due to the system being busy.

Check system status.

16260 Error Failed to add / update statistics log schedule(s).

Statistics schedules could not be updated possibly due to the system being busy.

Check system status.

16262 Error Failed to remove statistics log schedule(s).

Statistics schedules could not be removed possibly due to the system being busy.

Check system status.

16421 Error Failed to remove TimeView data resource%1

Removing the TimeView data resource failed, possibly because the system was busy.

Check the system status and retry.

17001 Error Rescan replica cannot proceed because a replication is in progress.

Rescan cannot be performed when replication is in progress.

Wait for the process to complete and try again, or change the replication schedule.

17002 Error Rescan replica cannot proceed due to missing replication control area.

There may be a storage problem.

Check the virtual device layout and storage devices for missing segments.

17003 Error Rescan replica cannot proceed due to replication control area failure.

There may be a storage problem.

Check the virtual device layout and storage devices for missing segments.

17004 Error Replication cannot proceed due to replication control area failure.

There may be a storage problem.

Check the virtual device layout and storage devices for missing segments.

17005 Error Replication cannot proceed due to replication control area failure.

There may be a storage problem.

Check the virtual device layout and storage devices for missing segments.

17006 Error Rescan replica cannot proceed due to replication control area failure.

There may be a storage problem.

Check the virtual device layout and storage devices for missing segments.

17011 Error Rescan replica failed due to network transport error.

Rescan for differences requires connecting to the replica server. A network issue can cause rescan to fail.

Check network condition between the servers.

17012 Error Replicating replica failed due to network transport error.

Replication failed due to a network condition.

Check network condition between the servers.

17013 Error Rescan replica failed due to local disk error.

Rescan encountered a disk I/O error from the source disk.

Check the storage device or system in the source server.

17014 Error Replication failed due to local disk error.

Replication encountered a disk I/O error from the source disk.

Check the storage device or system in the source server.

17015 Error Replication failed because local snapshot used up all of the reserved area.

Replication failed because the snapshot from the source drive could not be maintained due to low snapshot resources.

Expand the snapshot resource for the source device.

17016 Error Replication failed because the replica snapshot used up all of the reserved area.

Replication failed because the snapshot from the replica drive could not be maintained due to low snapshot resource space.

Expand the snapshot resource for the replica device.

19007 Warning Failed to rescan failover secondary server after preparing disk on primary. The following physical devices are not found on the failover partner. Primary Server: %1, Secondary Server: %2, Physical Devices SCSI Addresses: %3.

Failed to update the physical devices on the failover secondary server.

Make sure the physical devices are set up properly on the failover partner server and rescan the physical resources to refresh the configuration.

19008 Warning Failed to rescan failover secondary server after importing disk on primary. The following physical devices are not found on the failover partner. Primary Server: %1 , Secondary Server: %2, Physical Devices SCSI Addresses: %3.

Failed to update the physical devices on failover secondary server.

Make sure the physical devices are set up properly on the failover partner server and rescan the physical resources to refresh the configuration.

31003 Error Failed to open file %1. The specified file does not exist.

Try to locate the file. Check if the file exists.

31004 Error Failed to add user %1 to the NAS server.

When adding the username and UID to the file /etc/passwd, one of the following errors occurred:
- nasgrp is not in /etc/group
- username already exists in /etc/passwd
- the file /etc/passwd cannot be updated

Check that the nasgrp group exists with the command "getent group | grep nasgrp". If it does not, add it using the command "groupadd nasgrp".

If the username is new and the group nasgrp already exists, check to make sure the file system does not have an issue by creating a test file under /etc. If the file cannot be created, reboot the server to trigger a file system check.
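
For example, a quick way to run these checks from a shell (the test file name is illustrative only):

   getent group | grep nasgrp                      # verify the nasgrp group exists
   groupadd nasgrp                                 # add it only if the command above returned nothing
   touch /etc/nasgrp.test && rm /etc/nasgrp.test   # confirm /etc can be written to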

31005 Error Failed to allocate memory.

Memory is low. Check system memory usage to make sure enough memory is reserved for user-mode operations especially if you have NAS enabled. Run the command "cat /proc/meminfo" to check if ((MemFree+Buffers+Cached)/MemTotal) is not less than 10%. Investigate to determine the cause of high memory usage.
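
For example, a rough one-line check of the ratio described above, based on the standard /proc/meminfo fields:

   awk '/^MemTotal:/{t=$2} /^MemFree:/{f=$2} /^Buffers:/{b=$2} /^Cached:/{c=$2}
        END {printf "free memory ratio: %.1f%%\n", (f+b+c)/t*100}' /proc/meminfo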

31011 Error IPSTORUMOUNT: Failed to unmount %1.

When unmounting a NAS file system, one of the following errors occurred:
- the mount path is not from /nas
- the umount process cannot be forked
- the NAS file system is busy and cannot be unmounted
- /etc/mtab cannot be locked temporarily

Run "lsof /nas/<resource>" to check the process that opens the device. If the process exists, then manually kill it. If no process opens the device, then you may need to reboot the server.

31013 Error IPSTORMOUNT: Failed to mount %1.

When mounting a NAS file system, one of the following errors occurred:
- failed to get the vdev name by vid from ipstor.conf; this can happen if ipstor.conf cannot be read or does not contain VirtualDevConnection info
- unmount failed (Ref. 31011); an unmount occurs when the mount path is duplicated
- the NAS file system failed to be mounted

Check whether you can open ipstor.conf. Try to create a test file under $ISHOME/etc/$HOSTNAME; if the file cannot be created, written, or read, the file system may be corrupted. Reboot the server to trigger a file system check. Check whether the vdev, vid, and VirtualDevConnection information is correct in ipstor.conf.

Try to manually mount the NAS device to a test folder (for example, /mnt/test); an error will display if the mount fails.
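
For example, a manual mount test (the vbdi device number is illustrative; use the device that backs the NAS resource):

   mkdir -p /mnt/test
   mount /dev/vbdi0 /mnt/test     # any mount error is reported here
   umount /mnt/test               # clean up after the test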

31017 Error Failed to write to file %1. The file system may be inconsistent.

Try to create a test file in the indicated path. If the file cannot be created, written, or read, reboot the server to trigger a file system check.

31020 Error Failed to rename file [File name] to file [File name].

The file system is full or system resources are critically low.

Try removing some unnecessary files (i.e. logs or cores).

31023 Error IPSTORNASMGTD: Failed to create file [File name].

See 31020. See 31020.

31024 Error IPSTORNASMGTD: Failed to lock file [File name].

Some processes exited without unlocking the file.

Restart the server modules.

31025 Error IPSTORNASMGTD: Failed to open file [File name].

One of the configuration files is missing.

Make sure the package is installed properly.

31028 Warning Failed to lock file [File name].

Some processes exited without unlocking the file.

Restart the server modules.

31029 Error Failed to create file [File name].

The file system is full or system resource is critically low.

Try removing some unnecessary files like logs or cores.

31030 Error Failed to create directory [Directory name].

The file system is full or system resource is critically low.

Try removing some unnecessary files like logs or cores.

31031 Error Failed to remove directory [Directory name].

Some other process might be accessing the directory.

Try stopping some running process or exit out of existing logins.

31032 Error Failed to execute program '[Program name]'.

When any server process cannot be started, it is most likely due to insufficient system resources, an invalid state left by a server process that may not have been stopped properly, or an unexpected OS process failure that left the system in a bad state. This should happen very rarely. If it occurs frequently, there may be external factors contributing to the behavior that must be investigated and removed before running the server.

If system resources are low, use top to determine the process using the most memory. If physical memory is below the CDP/NSS recommendation, install more memory in the system. If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components, restart the server to make sure the OS is in a healthy state before trying again.

Check whether there is any core file under $ISHOME/bin that indicates process error.

31034 Warning Local IPStor SAN Client is not running.

The Client is not running properly.

Restart the server modules.

31035 Error Failed to add group [Group name] to the NAS server.

The number of reserved group IDs is used up.

Add additional ranges from Console -> NAS Clients -> Windows Clients -> UID/GID.

31036 Error Failed to delete user [User name] from the NAS server.

User being deleted is currently logged in.

Kill any running process that belongs to an account that you are deleting.

31037 Error Error accessing NAS Resource state file for virtual device [Device number].

System had an unclean shutdown.

No action needed.

31039 Error Failed to rename file [File name] to file [File name].

File system is full. Try removing some unnecessary files like logs or cores.

31040 Error Failed to create the NAS Resource. Failed to allocate SCSI disk device handle - operating system limit reached.

The OS limit has been reached. Refer to the documentation on how to rebuild the kernel to support more SCSI devices.

31041 Error Exceed maximum number of reserved NAS users.

The number of reserved user IDs is used up.

Add additional ranges from Console -> NAS Clients -> Windows Clients -> UID/GID.

31042 Error Exceed maximum number of reserved NAS groups.

The number of reserved group IDs is used up.

Add additional ranges from Console -> NAS Clients -> Windows Clients -> UID/GID.

31043 Error Failed to setup password database.

See 31020. Try removing some unnecessary files (i.e. logs or cores).

31044 Error Failed to make symlink from [File name] to [File name].

See 31020. Try removing some unnecessary files (i.e. logs or cores).

31045 Error Failed to update /etc/passwd.

Some processes exited without unlocking the file.

Restart the server modules.

31046 Error Failed to update /etc/group.

Some processes exited without unlocking the file.

Restart the server modules.

31047 Error Synchronization daemon is not running.

Someone manually stopped the process.

If system resources are low, run 'top' to check the process that is using the most memory. If physical memory is below the server recommendation, install more memory on the system. If the OS is suspected to be in a bad state due to an unexpected failure in either hardware or software components, restart the server machine.
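
For example, to list the largest memory consumers without using the interactive top display (ps is shown here as a non-interactive alternative):

   ps aux --sort=-rss | head -15     # processes sorted by resident memory, largest first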

31048 Error Device [Device number] mount error.

Failed to attach to the SAN device provided by the local client module, or the file system is corrupted.

Make sure all of the physical devices are connected and powered on correctly and restart the server modules. If the Console shows that the NAS resource is attached but not mounted, you might need to reformat this NAS resource. This will remove all data on the drive.

31049 Error Device [Device number] umount error.

Some other process might be accessing the mount point.

Kill any running processes which might be accessing the mount point.

31050 Error Failed to detach device vid [Device number].

The client module is not running properly.

Restart the server modules.

31051 Error Failed to attach device vid [Device number].

Failed to attach the SAN device provided by the local client module or the file system is corrupted.

Restart the server modules.

31054 Error Failed to get my hostname

Failed to get hostname with the function gethostname.

Check that a host exists with that name and that the name is resolvable.

31055 Error SAM: connection failure. Samba authentication server not accessible.

Check that the authentication server is up and running and that the server name is set up correctly from the Console.

31056 Warning Delay mount due to unclean file system on vid [Device number].

During failover, the secondary is waiting for a specific amount of time until the primary unmounts NAS resources gracefully.

None.

31058 Warning Not all disks unmount complete.

A file system check is in progress or the device is not available during failover/failback.

If the file system check is in progress, you can try to stop it by killing the file system repair process.

Check physical device status.

31060 Warning Not all disks mount complete.

A file system check is in progress or the device is not available during failover/failback.

If the file system check is in progress, you can try to stop it by killing the file system repair or checking processes.

Check physical device status.

31061 Error Nas process ipstorsmbd fail.

One of the following processes is not running properly: ipstorclntd, kvbdi, ipstornasmgtd, smbd, nmbd, winbindd, portmap, rpc.mountd, mountd, nfsd

See 31032

See 31032

31062 Error Failed to read from file %1

See 31017. See 31017.

31064 Error Error file system type %1 A wrong file system type is set for the NAS resource in ipstor.conf.

Check the file system type in ipstor.conf.

31066 Error Invalid XML file Failed to get NAS file system block size from ipstor.conf.

Check the file system block size in ipstor.conf.

31067 Error cannot parse dynamic configuration %1

Failed to read the file $ISHOME/etc/$HOSTNAME/ipstor.dat.cache.

See 31017.

31068 Error dynamic configuration does not match %1

Failed to get vdev by vid from the file $ISHOME/etc/$HOSTNAME/ipstor.dat.cache.

Check the mapping of vdev name and corresponding vid in ipstor.dat.cache.

31069 Error Do not destroy file system's superblock of %1

When formatting NAS resources, the superblock could not be removed because the VBDI device could not be opened or written to.

Check whether the device /dev/vbdixx exists. If it exists, run the command "dd if=/dev/vbdixx of=/tmp/test.out bs=512 count=100" to test whether you can read the device. If not, check the physical device status.

31071 Error Missing file %1 Failed to get status of the file $ISHOME/bin/sfsck.

Run the command "stat $ISHOME/bin/sfsck" to see if any error displays.

31072 Error Failed to update CIFS native configuration

When updating the CIFS native configuration, one of the following errors occurred:
- Failed to create the temporary file $ISHOME/etc/$HOSTNAME/.smb.conf.XXXXXX
- Failed to get the CIFS client from $ISHOME/etc/$HOSTNAME/nas.conf

- Failed to rename the file $ISHOME/etc/$HOSTNAME/.smb.conf.XXXXXX to $ISHOME/etc/$HOSTNAME/smb.conf

Try to create a test file in the indicated path. If the file cannot be created, written, or read, reboot the server to trigger a file system check.

31073 Error Failed to update NFS native configuration

When updating the NFS native configuration, one of the following errors occurred:
- Failed to open the file $ISHOME/etc/$HOSTNAME/nas.conf

- Failed to create the temporary file $ISHOME/etc/$HOSTNAME/.exports.

Try to create a test file in the indicated path. If the file cannot be created, written, or read, reboot the server to trigger a file system check.

31074 Error Failed to parse XML file %1

Failed to open the file $ISHOME/etc/$HOSTNAME/nas.conf.

Try to create a test file in the indicated path. If the file cannot be created, written, or read, reboot the server to trigger a file system check.

31075 Error Disk %1 unmount failed during failover

Failover or failback has occurred so NAS resources need to be unmounted.

Reboot the failed server.

31076 Critical Due to storage failure, NAS Secondary Server has to reboot to resume its tasks.

NAS resources cannot be detached or unmounted during failover/failback. The storage failure prevented the file system from flushing the cache. Rebooting the failed server will clean the cache.

Reboot the failed server.

31078 Error Add NAS resource to iocore failed during failover processing %1

When adding a NAS resource to iocore, one of the following errors occurred:
- Failed to open the file $IS_CONF
- NAS option is not enabled

- Failed to open /dev/isdev/kisconf

See 31017

Run the command "stat /dev/isdev/kisconf" to check the file.

31079 Error Missing file system commands %1 of %2

When getting the file system command from $ISHOME/etc/$HOSTNAME/nas.conf, the InfoItem value is not correct.

Check whether all the InfoItem names are correct in nas.conf, for example: InfoItem name="mount". You can compare the file with the nas.conf file on a healthy NAS server.
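
For example, to list the InfoItem entries so they can be compared against a healthy server:

   grep "InfoItem name=" $ISHOME/etc/$HOSTNAME/nas.conf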

50000 Error iSCSI: Missing targetName in login normal session from initiator %1

The iSCSI initiator may not be compatible.

Check the iSCSI initiator on the client side.

50002 Error iSCSI: Login request to nonexistent target %1 from initiator %2

The iSCSI target does not exist any longer.

Check the iSCSI initiator on the client side and the iSCSI configuration on the server. Remove targets from the configuration that do not exist.

50003 Error iSCSI: iSCSI CHAP authentication method rejected. Login request to target %1 from initiator %2

The CHAP settings are not valid.

Check the iSCSI CHAP secret settings on the server and the client sides.

51001 Warning RAID: %1 The physical RAID controller might have some failures.

Check RAID controller configuration.

51002 Error RAID: %1 The physical RAID controller has some failures.

Check RAID controller configuration.

51003 Critical RAID: %1 The physical RAID controller has some failures.

Check RAID controller configuration.

51004 Warning Enclosure: %1 The physical enclosure might have some failures.

Check enclosure configuration.

UNIX SAN Client error codes

Type Text Probable Cause Suggested Action

UNIX Error Failed to add device, %s.

The storage server is not running.

Check the storage server status.

UNIX Error Failed to connect to Server %s:%d, errno=%d.

The storage server is not running.

Check the storage server status.

UNIX Error Failed to attach to device %ld on server %ld, %s.

The storage server is not running.

Check the storage server status.

UNIX Error %s Client is not running! The storage server is not running.

Check the storage server status.

UNIX Error Failed to connect to bridge, %s.

The storage server is not running.

Check the storage server status.

UNIX Error Client %s is not authenticated.

Changed storage server configuration.

Run "ipstorclient monitor" and set up client again.

UNIX Error Failed to authenticate user %s.

Changed storage server configuration.

Run "ipstorclient monitor" and set up client again.

UNIX Error FC_client_HS Failed. Changed storage server configuration.

Run "ipstorclient monitor" and set up client again.

UNIX Error FC_server_HS Failed. Changed storage server configuration.

Run "ipstorclient monitor" and set up client again.

UNIX Error Failed to unmount %s, %s.

The file system device may be busy.

Stop any processes accessing the file system, then run the "umount" command.

UNIX Error FC_complete_HS Failed

Time has expired for IPStor client to authenticate with server.

On the storage server side, run "ipstor restart" to restart server again.

UNIX Error FC_client_HeartBeat Failed

Heartbeat interval has expired.

On the storage server side, run "ipstor restart" to restart server again.

UNIX Error BridgeStart: Failed to open %s, %s.

SAN SCSI driver isn't loaded.

Check if the Intel Pro 100 NIC has been installed on the client machine.

UNIX Information Failed to wait for status, %s.

Child process has exited. None

UNIX Error There is no device assigned to this client! Exiting.

Client does not have any device assigned.

Use the SAN Console to assign devices to the client, then run "ipstorclient restart".

HP-UX Information pclose status %d. Status of closing file descriptor.

None

HP-UX Error No SAN SCSI drivers. SAN SCSI driver is not loaded.

Check if the network driver has been correctly loaded on the client machine.

HP-UX Error Failed to open /dev/sanscsi, %s.

SAN SCSI driver is not loaded.

Check if the network driver has been correctly loaded on the client machine.

HP-UX Error BridgeStart: Failed to open %s, %s.

SAN SCSI driver is not loaded.

Check if the network driver has been correctly loaded on the client machine.

HP-UX Error Maximum number of virtual adapters (%d) is exceeded!

HP client needs dummy NIC cards.

Install another Intel Pro 100 NIC on HP client machine.

AIX Error Bad Magic Number! The shared secret file is corrupted.

Run "ipstorclient monitor" and set up the client again.

AIX Error Failed to read sc, %s. The shared secret file is corrupted.

Run "ipstorclient monitor" and set up the client again.

AIX Error Failed to position file pointer, %s.

The shared secret file is corrupted.

Run "ipstorclient monitor" and set up the client again.

AIX Error Failed to rewind file pointer, %s.

The shared secret file is corrupted.

Run "ipstorclient monitor" and set up the client again.

AIX Error Failed to position file pointer, %s.

The shared secret file is corrupted.

Run "ipstorclient monitor" and set up the client again.

AIX Error BridgeStart: Failed to open %s, %s.

SAN SCSI driver is not loaded.

Run "ipstorclient monitor" again.

Linux Error Failed to open /proc/scsi/ipstor/%d

The IPStor client module isn't loaded.

Run "ipstorclient restart" to load the client module.

Command Line Interface (CLI) error codes

The following table contains command line error codes.

CDP-NSS Command Line Interface Error Messages

Error code Text

0x09020001 Invalid arguments.

0x09020002 Invalid Virtual Device ID.

0x09020003 Invalid Client access mode.

0x09020004 Connecting to $ISPRODUCTSHORT$ server failed.

0x09020005 You are connected to $ISPRODUCTSHORT$ server with read-only privileges.

0x09020006 Connecting to SAN client failed.

0x09020007 Getting SAN client state failed.

0x09020008 The requested Virtual Device is already attached.

0x09020009 Attaching to Virtual Device failed.

0x0902000a Disconnecting from SAN client failed.

0x0902000b Detaching from Virtual Device failed.

0x0902000c Invalid size.

0x0902000d Invalid X Ray options.

0x0902000e Logging in to $ISPRODUCTSHORT$ server failed.

0x0902000f User has already logged out from $ISPRODUCTSHORT$ server.

0x09020010 Invalid client.

Note: Make sure you use the SAN client names that are created on the server. These names may be different from the actual hostname or the ones in /etc/hosts.

0x09020011 Replication policy is not specified.

0x09020012 Memory allocation error.

0x09020013 Failed to get configuration file from server.

0x09020014 Failed to get dynamic configuration from server.

0x09020015 Failed to parse configuration file.

0x09020016 Failed to parse dynamic configuration file.

0x09020017 Failed to connect to the target server.

0x09020018 You are connected to the target Server with readonly privilege.

0x09020019 Failed to get the configuration file from the target server.

0x0902001a Failed to get the dynamic configuration file from target server.

0x0902001b Failed to parse the configuration file from target server.

0x0902001c Failed to parse the dynamic configuration file from the target server.

0x0902001d Invalid source virtual device.

0x0902001e Invalid target virtual device.

0x0902001f Invalid source resource type.

0x09020020 Invalid target resource type.

0x09020021 The virtual device is a replica disk.

0x09020022 The virtual device is a replication primary disk.

0x09020023 Failed to delete virtual device from client.

0x09020024 Failed to delete virtual device.

0x09020025 Failed to delete remote client.

0x09020026 Failed to save the file.

0x09020027 Remote client does not exist.

0x09020028 You have to run login command with valid user id and password or provide server user id and password through the command.

0x09020029 You have to run login command with valid user id and password or provide target server user id and password through this command.

0x0902002a Virtual Device ID %1 is not assigned to the client %2.

0x0902002b The size of the source disk and target disk does not match.

0x0902002c The virtual device is not assigned to the client.

0x0902002d Replication is already suspended.

0x0902002e Replication is not suspended.

0x0902002f Rescanning Devices failed.

0x09020030 The requested Virtual Device is already detached.

0x09020031 $ISPRODUCTSHORT$ server is not added to the client.

0x09021000 CLI_RPC_FAILED.

0x09021001 CLI_RPC_COMMAND_FAILED.

0x09022000 Failed to start a transaction for this command.

0x09022001 Failed to start a transaction on the primary server for this command.

0x09022002 Failed to start a transaction on the target server for this command.

0x09022003 $ISPRODUCTSHORT$ server specified is an invalid IP address.

0x09022004 Failed to resolve $ISPRODUCTSHORT$ server to a valid IP address.

Note: For the CLI to work with a server name instead of an IP address, the server name has to be resolvable on both the client side and the server side. This error can occur, for example, when the server hostname is not in DNS or in the /etc/hosts file.
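
For example, to confirm that the name resolves (run this on both the client and the server; the server name is a placeholder):

   getent hosts <server-name>     # prints the resolved IP address if DNS or /etc/hosts is set up correctly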

0x09022005 Failed to create a connection.

Note: Check that the network interface on the server is not down, to make sure RPC calls go through.

0x09022006 Failed to secure the connection.

0x09022007 User authentication failed.

0x09022008 Failed to login to $ISPRODUCTSHORT$ server.

0x09022009 Failed to get the device statistics from client.

0x0902200a Device is not ready.

0x0902200b Device is not detached.

0x0902200c Failed to get device status from client.

0x0902200d The source virtual device is already a snapcopy source.

0x0902200e The source virtual device is already a snapcopy target.

0x0902200f The target virtual device is already a snapcopy source.

0x09022010 The target virtual device is already a snapcopy target.

0x09022011 The source virtual device is a replica disk.

0x09022012 The target virtual device is a replica disk.

0x09022013 Invalid category for source virtual device.

0x09022014 Invalid category for target virtual device.

0x09022015 The category of source virtual device is different from category of the target virtual device.

0x09022016 The size of the primary disk does not match the size of the replica disk. The minimum size for the expansion is %1 MB in order to synchronize them.

0x09022017 Getting $ISPRODUCTSHORT$ server information failed. It's possible that the server version is prior to version 1.02

0x09022018 The Command Line Interface and the $ISPRODUCTSHORT$ Server are running different software versions:\n\t<CLI version %1 (build %2) and $ISPRODUCTSHORT$ Server version %3 (build %4)>\n Please update these components to the same version in order to use the Command Line Interface.

0x09022019 Invalid client list information.

0x0902201a Invalid resource list information.

0x0902201b Getting report data timeout.

0x0902201c There is no report data.

0x0902201d Failed to open the output file: %1.

0x0902201e Invalid Report Data.

0x0902201f Output file: %1 already exists.

0x09022020 The target server name cannot be resolved on the primary server. Please make sure your DNS is set up properly or use static IP address for the target server.

0x09022021 Failed to promote mirror due to virtual device creation error. The mirror is not recovered.

0x09022022 Invalid physical segment information.

0x09022023 Failed to open file: %1.

0x09022024 Physical segment section not defined.

0x09022025 Some physical segment information are overlapped.

0x09022026 Invalid segment size.

0x09022027 Invalid segment section.

0x09022028 Invalid TimeMark.

0x09022029 The virtual device is in a snapshot group. You have to enable the TimeMark before joining the virtual device to the snapshot group.

0x09022030 The virtual device is in a snapshot group. Please use force option to disable the TimeMark option for this virtual device that is in a snapshot group.

0x09022031 The virtual device is in a snapshot group. All the virtual devices in the same snapshot group have to be unassigned as well. Please use force option to unassign the virtual device or -N (--no-group-client-assignment) option to unassign the virtual device only.

0x09022032 Failed to write to the output file: %1. Please check to see if you have enough space.

0x09022033 The client is currently connected and the virtual device is attached. We recommend you to disconnect the client first before unassigning the virtual device. You must use the <force> option to unassign the virtual device from the client when the client is connected.

0x09022034 Failed to connect to the replication primary server. Please use <force> option to promote the replica.

0x09022035 TimeMark cannot be disabled when the virtual device is in a snapshot group. Please remove the virtual device from the snapshot group first.

0x09022036 The virtual device is in a snapshot group, the individual TimeMark policy for the virtual device cannot be updated. Please specify the group id or group name to update the TimeMark policy for the snapshot group.

0x09022037 Please specify at least one Snapshot property to be updated.

0x09022038 Replica disk does not exist. Therefore, there is no new resource promoted from the replica disk. but the replication configuration is removed from the primary disk.

0x09022039 TimeView virtual device exists for this TimeMark.

0x0902203a After rollback, some of the timemarks will no longer exist and there are TimeView resources created for those timemarks. Please delete the TimeView resources first if you want to rollback the timemark.

0x0902203b There are TimeView virtual devices associated with this virtual device. Please delete the TimeView virtual devices first.

0x0902203c Replica disk does not exist. Only the replication configuration is removed from the primary disk.

0x0902203d Invalid adapter number: %1.

0x0902203e Total number of Snapshot Group reaches the maximum groups: %1.

0x0902203f Total number of Snapshot Group on the target server reaches the maximum groups: %1.

0x09022040 The resource is in a Snapshot Group. Please set the backup properties through the Snapshot Group.

0x09022042 Replication is not configured for this resource.

0x09022043 The resource is in a Snapshot Group. Please set the replication properties through the Snapshot Group.

0x09022044 Please specify at least one replication option to be updated.

0x09022045 Failed to get Server Time information.

0x09022046 Invalid resource type for deleting TimeMark.

0x09022047 Invalid resource type for rolling back TimeMark.

0x0902204 The virtual device is in a Snapshot Group enabled with TimeMark. Please perform the group TimeMark operation.

0x09022048 The Snapshot Group is not enabled for TimeMark. Please perform the TimeMark operation through the virtual device.

0x0902204a TimeView virtual device already exists for this TimeMark.

0x0902204b There is no Snapshot Image created for this Snapshot Group.

0x0902204c The virtual device is in a replication-enabled snapshot group.

0x0902204d The snapshot group is not enabled with replication. If the virtual device in the snapshot group is enabled with replication, please perform the replication operations through the virtual device.

0x0902204e Failed to create connection for failover partner server.

0x0902204f Failed to start transaction for failover partner server.

0x09022050 You are connected to $ISPRODUCTSHORT$ failover server partner with readonly privilege.

0x09022051 Failed to parse the configuration from failover partner server.

0x09022052 Replication feature is not supported on this server: %1.

0x09022053 Backup feature is not supported on this server: %1.

0x09022054 TimeMark feature is not supported on this server: %1.

0x09022055 Snapshot Copy feature is not supported on this server: %1.

0x09022056 Mirroring feature is not supported on this server: %1.

0x09022058 Fibre Channel feature is not supported on this server: %1.

0x09022059 The specified TimeMark is the latest TimeMark on the replica disk, which cannot be deleted.

0x0902205a Unable to get NAS write access.

0x0902205b Failed to parse NAS configuration.

0x0902205c The primary disk is not available. The replication configuration on the primary disk will not be removed.

0x0902205d There are SAN client connected to the resource. You have to disconnect the client(s) first before deleting the resource.

0x0902205e There are active SMB connections associated with this NAS resource. Please disconnect them first or use force option.

0x0902205f Snapshot Group feature is not supported on this server: %1.

0x09022060 NAS feature is not supported on this server: %1.

0x09022061 Timeout while disabling cache resource.

0x09022062 CLI_ERROR_DIR_EXIST

0x09022063 CLI_PARSE_NAS_USER_CONF_FAILED

0x09022064 Invalid NAS User.

0x09022065 The IP address of the replication target server for this configuration has to be in the range of %1.

0x09022066 Local Replication feature is not supported on this server: %1.

0x09022067 The specified replica disk is the same as the primary disk

0x09022068 The batch mode processing is not completed for all the requested virtual devices.

0x09022069 The server \"%1\"is not configured for failover.

0x0902206a Unable to get server name.\nPlease check that the environment variable ISSERVERNAME has been set properly.

0x0902206b Unable to get user name.\nPlease check that the environment variable ISUSERNAME has been set properly.

0x0902206c Unable to get password.\nPlease check that the environment variable ISPASSWORD has been set properly.

0x0902206d Invalid login information format.

0x0902206e File %1 does not exist.

0x0902206f Unable to open configuration file: %1.

0x09022070 Error reading configuration file: %1.

0x09022071 There are virtual devices assigned to this client.

0x09022072 NAS resource is not ready.

0x09022073 Invalid Windows User name.

0x09022074 Invalid NAS authentication mode.

0x09022076 Failed to get server name.

0x09022077 The server is not a failover secondary server.

0x09022078 Failover is already enabled on this server.

0x09022079 Failover is already suspended on this server.

0x09022083 CLI_ERROR_BMR_COMPATIBILITY.

0x09022084 CLI_ERROR_ISCSI_COMPATIBILITY.

0x09022085 This command \"%1\"is not supported for this server version: %2.

0x09022086 Cache group is not supported for this server version: %1.

0x0902208a Snapshot Notification Option is not supported for this server version: %1.

0x0902208e Compression option is not supported for this server version: %1.

0x0902208f Encryption option is not supported for this server version: %1.

0x09022090 Timeout policy is not supported for this server version: %1.

0x09022091 Cache parameter is not supported for this version: %1.

0x09022092 Cache parameter <skip-duplidate-write> is not supported for this version: %1.

0x09022093 Reserving service-enabled Disk Inquiry String feature is not supported for this version: %1.

0x09022094 This is not a valid server configuration to set the server communication information.

0x09022095 The resource is a NAS resource and it is attached. Please unmount and detach the NAS resource first before performing TimeMark rollback.

0x09022096 Invalid iSCSI Target starting lun.

0x09022097 Invalid IPStor user:

0x09022098 iSCSI Initiator %1 is already assigned to other client.

0x09022099 There are no users assigned to this iSCSI client.

0x0902209a Invalid client type for updating the device properties.

0x090220a7 The client has to support at least one client protocol.

0x090220a8 Generic client protocol is not supported on this server.

0x090220a9 Invalid type for client / resource assignment.

0x090220aa Invalid Client Type.

0x090220b0 CLI_NAS_SMB_HOME.

0x090220b1 CLI_SNMP_MAX_TRAPSINK.

0x090220b2 CLI_SNMP_NO_TRAPSINK.

0x090220b3 CLI_SNMP_OUT_INDEX

0x09023000 Invalid BootIP client.

0x09023001 BootIP is not enabled for this client.

0x09023002 BootIP is already enabled for this client.

0x09023003 No BootIP properties were specified.

0x09023004 IP address is needed.

0x09023005 Hardware address is needed.

0x09023006 Invalid <use-static-ip> value.

0x09023007 Invalid <default-boot> value.

0x09023008 Duplicated MAC address.

0x09023009 Duplicated IP address.

0x09023010 This device is a BootIP resource. Disable BootIP before you delete/unassign it.

0x09023011 -S 1 and <ip-address> should be specified at the same time.

0x09023012 BootIP feature is not supported on this server: %1.

0x09023013 DHCP is not enabled on this server; a static IP cannot be used.

0x09023100 Failed to connect to the server. Please make sure the server is running and the version of the server is 4.01 or later.

0x09023101 Use existing TimeMark for replication option is not supported on this server: %1.

0x09023103 Invalid share information for batch mode NAS share creation.

0x09023104 Replica size exceeds the licensed WORM size limit on the target server: %1 GB.

0x09023105 NAS resource will exceed the licensed WORM size limit: %1 GB.

0x09023102 CLI_TARGET_SERVER_NOT_WORM_KERNEL.

0x09023106 Compliance time can only be set when compliance clock option is set.

0x09023107 WORM is not supported by the kernel of this server.

0x09023108 Stop Write option is no longer supported in this version of server: %1.

0x09023109 Invalid replica disk.

0x0902310a The compliance clock between failover servers is more than 5 minutes apart. Please use force option to continue.

0x0902310b The compliance clock between replication servers is more than 5 minutes apart. Please use force option to continue.

0x0902310c Local replication is not supported for WORM resource.

0x0902310d You do not have the license for WORM resource.

0x0902310e Invalid iSCSI user password length (12 - 16).

0x0902310f The login user is not authorized for this operation.

0x09023110 The specified user name already exists.

0x09023111 Invalid user name.

0x09023112 Continuous Replication is not supported.

0x09023113 Replication is still in progress. Please wait for replication to complete before disabling the TimeMark option.

0x09023114 This resource is in a snapshot group. Snapshot notification will be determined at the group level and cannot be updated at the resource level.

0x09023115 This server is configured as a Symmetric failover server. In a Symmetric failover setup, the same target WWPN is used on the secondary server during failover instead of the standby WWPN used in an Asymmetric failover setup. It is neither necessary nor allowed to configure the Fibre Channel client protocol for the same client on the failover partner server. This client is already enabled with the Fibre Channel protocol on the failover partner server. The operation cannot proceed.

0x09023116 Replication protocol is not supported for this version of server: %1.

0x09023117 TCP protocol is not supported for continuous mode replication on this target server. It is supported on a target server of version 5.1 or later.

0x09023118 All Fibre Channel devices must be assigned to "all-to-all" for a Symmetric failover setup.

0x09023119 Invalid CDP journal timestamp.

0x0902311a CDP option is not supported for this server version: %1.

0x0902311b CDP journal is not available.

0x0902311c TimeMark priority is not supported on this server: %1.

0x0902311d TimeMark information update is not supported on this server: %1.

0x0902311e TimeMark information cannot be updated for a replica group.

0x0902311f TimeMark comment cannot be updated for TimeMark group.

0x09023120 TimeMark priority cannot be updated for TimeMark group member.

0x09023121 CDP journal was suspended at the specified timestamp.

0x09023122 This virtual device is still valid for cross mirror setup. Manual swapping is not allowed.

0x09023123 Notification Frequency option is not supported on this version of server: %1

0x09023124 This operation is not supported for cross mirror configuration.

0x09023125 This virtual device is in a TimeMark group. Rollback is currently in progress for one of the group member %1. Please wait until the rollback is completed before starting rollback for this virtual device.

0x09023126 Invalid CDP journal tag.

0x09023127 Clients have to be unassigned before rollback is performed.

0x09023128 The specified data point is not valid for post rollback TimeView creation.

0x09023129 The specified data point is not valid for recurring rollback.

0x09023130 This virtual device is still valid for cross mirror setup. Manual swapping is not allowed.

0x09023131 Group replication schedule has to be suspended first before joining the resources to the group.

0x09023132 Replication schedule of the specified resource has to be suspended first before joining the replication group.

0x09023133 Replication schedule for all the group members has to be suspended first before joining the resources to the group or enabling the group replication.

0x09023134 MicroScan option for an individual resource is not supported on this server: %1.

0x09023135 Source virtual device is not on a FalconStor SED.

0x09023136 Resource has STP enabled.

0x09023137 CLI_ERROR_MULTI_STAGE_REPL_NOT_SUPPORTED.

0x09023138 Suspend / Resume Mirror option is not supported on this server: %1.

0x09023139 Mirror of this resource is already suspended.

0x0902313a Mirror of this resource is already suspended.

0x0902313b This resource is in the replication disaster recovery state, the operation is not allowed.

0x0902313c This group is in the replication disaster recovery state, the operation is not allowed.

0x09024001 The target virtual device is in a SafeCache group or a CDP group. Please remove the resource from the group first if you need to copy the data to the resource.

0x09024002 Mirror Policy feature is not supported on this server: %1.

0x09024003 CLI_ERROR_REPL_TRANSMITTED_INFO_NOT_SUPPORTED.

0x09024004 The virtual device info serial number is not supported for this version of server: %1.

0x09024005 Fast replication synchronization is not supported for this version of server: %1.

0x09024006 Mirror Swap option is already disabled

0x09024007 Mirror Swap option is already enabled.

0x09024008 Disable Mirror Swap option is not supported for this server version: %1.

0x09024101 A server is in a cross-mirror setup.

0x09024102 Virtual device is a Near-line Disk.

0x09024103 Virtual device is a Primary Disk enabled with Near-line Mirror.

0x09024104 Rescan did not find the newly assigned virtual device.

0x09024105 The newly assigned virtual device has been allocated.

0x09024106 The remote client does not have an iSCSI target.

0x09024107 The virtual device is not a Primary Disk enabled with Near-line Mirror.

0x09024108 The virtual device is not a Near-line Disk.

0x09024109 The servers are not Near-line Mirroring partners for the specified virtual device.

0x0902410a There is an error in Near-line Mirroring configuration.

0x0902410b Mirror license is required to perform this operation.

0x0902410c Please swap the Primary Disk with its mirror first

0x0902410d All segments of the Primary Disk are in online state.

0x0902410e Cannot join a Near-line Disk to a group that contains a Near-line Disk with a different Near-line server.

0x09024201 The virtual device has been assigned to a client and the virtual device's userACL does not match the snapshot group's userACL.

0x09024202 Near-line Mirror option is not supported on this server: %1.

0x09024203 The operation is not allowed when Near-line Recovery is initiated for the specified Primary Disk.

0x09024204 The operation is not allowed when Near-line Recovery is initiated for the specified Near-line Disk.

0x09024205 TimeMark rollback is not supported for a Near-line Disk.

0x09024206 The specified resource is a Near-line Disk. Please remove the Near-line Mirroring configuration first.

0x09024207 The specified resource is enabled with Near-line Mirror. Please remove the Near-line Mirroring configuration first.

0x09024208 The specified iSCSI target is assigned to a Near-line server.

0x09024209 The operation is not allowed when Near-line Replica Recovery is initiated for the specified Nearline Replica Disk:

0x09024301 Cannot disable InfiniBand, since there are targets/devices assigned to InfiniBand client.

0x09024302 InfiniBand is not supported in this build.

0x09024303 iSCSI is not enabled.

0x09024304 No InfiniBand license.

0x09024305 Command is not allowed because failover is enabled.

0x09024306 Failed to convert IP address to integer.

0x09024307 The given IP address is not bound to an InfiniBand NIC.

0x09024308 InfiniBand isn't enabled.

0x09024351 Each zone's size cannot be bigger than the HotZone resource's size.

0x09024401 Problem vdev command is not supported by this version of server.

0x09024402 Virtual device signature is not supported by this version of server.

0x09024403 Invalid physical device name.

0x09024404 There are no Fibre Channel devices to perform this operation.

0x09024501 The virtual device specified by <timeview-vid> is not a TimeView.

0x09024502 The TimeView does not belong to the given virtual device or snapshot group.

0x09024601 The CLI command is not allowed because the server is in failover state.

0x09024602 There is a virtual device allocated on the physical device.

0x09024603 The physical device is in a storage pool.

0x09024604 The server isn't the owner of the physical device.

0x09024605 Physical device is online.

0x09024606 Invalid initiator WWPN.

0x09024607 The specified new initiator WWPN is invalid or already exists.

0x09024608 Replacing Fibre Channel client WWPN operation is not supported on this server.

0x09024609 The specified target disk is a thin disk.

0x0902460a Thin provisioning feature is not supported on this server: %1.

0x0902460b Minimum outstanding IOs for mirror policy is not supported on this server: %1

0x0902460c Mirror throughput control policy is not supported on this server: %1.

0x0902460d Replication throughput control policy is not supported on this server: %1.

0x0902460e iSCSI Mutual Chap Secret option is not supported on this server: %1.

0x0902460f Host Apps Info is not supported on this server: %1

0x09024610 The version of the primary server has to be the same or later than the version of the Near-line server for Near-line mirroring setup.

0x09024611 The version of the primary server has to be the same or later than the version of the replica server for replication setup.

0x09024612 Saving persisted timeview data information is not supported in this version of server: %1

0x09024613 Replication using specific TimeMark is not supported in this version of server: %1.

0x09024614 iSCSI mobile user update is not supported in this version of server.

0x09024615 Service Enable Device license is required to perform this operation.

0x09024616 Primary server is configured for symmetric failover. Near-line Recovery has already been triggered for the Primary failover partner server. Please resume the configuration first.

0x09024617 Primary server is configured for symmetric failover. Near-line server client exists on the primary failover partner server. Please remove the client from the primary failover partner server first.

0x09024618 Near-line disk is enabled with mirror. Please remove the mirror from Near-line disk before performing Near-line recovery.

0x09024619 Near-line Resource is not the mirror of the Primary Disk. Please swap the mirror first before performing the Near-line Recovery.

0x0902461a Invalid Near-line client for iSCSI protocol.

0x0902461b Failed to discover device on Near-line server.

0x0902461c Failed to discover device on Near-line server failover partner.

0x0902461d There is not enough space available for virtual header allocation.

0x0902461e Near-line server client does not exist on the Primary server.

0x0902461f Near-line server failover partner client does not exist on the Primary server

0x09024620 Near-line server client properties are not configured for assignment.

0x09024621 Near-line server failover partner client properties are not configured for assignment.

0x09024622 Near-line recovery is not supported on the specified server.

0x09024623 Timeout waiting for sync status for thin disk packing.

0x09024624 Thin disk is out-of-sync for packing.

0x09024625 Timeout waiting for swap status for thin disk packing.

0x09024626 The data copying program is missing.

0x09024627 Thin disk copy is not supported on the specified server.

0x09024628 Global cache resource is not supported on this server: %1.

0x09024629 Thin disk relocation is not supported on the specified server.

0x0902462a Near-line Disk is already configured on Near-line server, but the configuration does not match with primary disk.

0x0902462b Primary Disk is already configured on Primary server, but the configuration does not match with the specified Near-line Server.

0x0902462c The specified Primary Disk is already configured for Near-line Mirroring on the specified Near-line server.

0x0902462d The Primary server is configured for failover, but the failover partner server is not configured properly for Near-line Mirroring.

0x0902462e The Primary server is not configured as client on the Near-line server.

0x0902462f The Primary failover partner server is not configured as client on the Near-line server.

0x09024630 Near-line Disk is not assigned to the Primary server client on Near-line server.

0x09024631 Near-line Disk cannot be found on the specified Near-line server.

0x09024632 service-enabled Device of Near-line Disk cannot be found on the Primary failover partner server.

0x09024633 Failed to get the serial number for the Primary Disk.

0x09024634 Suspend mirror from the Primary first before performing conversion.

0x09024635 Failed to discover device on Near-line primary server.

0x09024636 Invalid Primary resource type.

0x09024637 Invalid Near-line resource type.

0x09024638 CDP Journal is enabled and active for replica group. Please suspend CDP Journal and wait for the data to be flushed.

0x09024639 SafeCache is enabled and active for the replica group. Please suspend SafeCache and wait for the data to be flushed.

0x0902463a CDP Journal is enabled and active for the primary group. Please suspend CDP Journal and wait for the data to be flushed.

0x0902463b SafeCache is enabled and active for the primary group. Please suspend SafeCache and wait for the data to be flushed.

0x0902463c CDP Journal is enabled and active for replica disk. Please suspend CDP Journal and wait for the data to be flushed.

0x0902463d SafeCache is enabled and active for the replica disk. Please suspend SafeCache and wait for the data to be flushed.

0x0902463e CDP Journal is enabled and active for the primary disk. Please suspend CDP Journal and wait for the data to be flushed.

0x0902463f SafeCache is enabled and active for the primary disk. Please suspend SafeCache and wait for the data to be flushed.

0x09024640 Primary disk is enabled with Near-line Mirroring, the operation is not allowed.

0x09024641 Primary disk is a Near-line disk, the operation is not allowed.

0x09024642 Primary disk is a NAS resource. Please unmount and detach the resource first.

0x09024643 HotZone is enabled and active for the primary disk. Please suspend the HotZone first.

0x09024644 There is no member in the group for the operation.

0x09024645 CDR is enabled for the primary disk. Please disable CDR first.

0x09024646 CDR is enabled for the primary group. Please disable CDR first.

0x09024647 The group configuration is invalid. The operation cannot proceed.

0x09024648 Replication configuration between the primary and replica is inconsistent. The operation cannot proceed.

0x09024649 Forceful Role Reversal can only be performed from replica server for disaster recovery when the primary server is not available.

0x0902464a Forceful Role Reversal cannot be performed when the primary server is still available and operational.

0x0902464b Forceful Role Reversal is not supported in this version of server: %1

0x0902464c The replica disk is not loaded for the operation.

0x0902464d Updating umap timestamp for the new primary resource(s) failed.

0x0902464e The operation can only be performed after forceful role reversal.

0x0902464f HotZone is enabled on new replica, repair cannot proceed.

0x09024650 CDR is enabled for the original primary disk. Please disable CDR first.

0x09024651 CDR is enabled for the original primary group. Please disable CDR first.

0x09024652 CLI_MICRSCAN_COMPRESSION_CONFLICT_ON_TARGET.

0x09024653 Snapshot resource cannot be reinitialized when it is accessible.

0x09024654 The option for discardable changes for the timeview is not enabled.

0x09024655 Snapshot resource is offline.

0x09024655 CLI_INVALID_NEARLINE_CONFIG.

0x09024656 CLI_INVALID_NEARLINE_DISK.

0x09024657 The option for discardable changes is not supported for this type of resource.

0x09024658 Failed to enable cache for the TimeView to keep discardable changes.

0x09024659 Failed to enable cache for the TimeView to keep discardable changes, and the TimeView cannot be removed.

0x0902465a There is still cached data not being flushed to the timeview. Please flush the changes first if you do not want to discard the changes before deleting the timeview.

0x0902465b The option for discardable changes for the timeview is not enabled.

0x0902465c This operation can only be performed on failover secondary server in failover state.

0x0902465f The option for discardable TimeView changes is not supported for this version of server: %1.

0x09024665 Primary server user id and password are required for the target server to establish the communication information.

0x09024666 The resource is a Near-line Disk and the Primary Disk is a thin disk. Expansion is not supported for a thin disk.

0x09024667 The options for snapshot resource error handling are not supported.

0x09024668 Failed to connect to the primary server.

0x09024669 There are no iSCSI targets configured on the specified server.

0x0902466a iSCSI initiator connection information is not available on this version of server: %1.

0x0902466b Your password has expired. Please change your password first.

0x0902466c TimeView replication option is not supported on this version of server: %1.

0x0902466e TimeView replication option is not supported on this version of server: %1.

0x0902466f The replica disk of the source resource is invalid for TimeView replication.

0x09024670 TimeMark option is not enabled on the replica disk of the source resource of the TimeView.

0x09024671 The TimeMark of the TimeView is not available on the replica disk of the source resource.

0x09024672 Failed to get the TimeMarks of the replica disk to validate the TimeMark timestamp for TimeView replication.

0x09024673 TimeView replication can only be performed for source resource enabled with remote replication. Local replication is enabled for the source resource. TimeView replication cannot proceed.

0x09024674 TimeView replication option is not enabled for the source resource.

0x09024675 This operation can only be performed for a Near-line disk as a reversed replica.

0x09024676 TimeView resource of the source TimeMark exists on the primary server. TimeView data replication cannot proceed.

0x09024677 TimeView resource of the replica TimeMark exists on the target server. TimeView data replication cannot proceed.

0x09024678 TimeView data exists on the replica TimeMark. Please specify -vf (--force-to-replicate) option to force the replication.

0x09024679 Remote replication is not enabled for the resource for TimeView data replication.

0x0902467a Inquiry page retrieval is not supported for this version of server: %1

0x0902467b TimeView rollback is not supported on this version of server: %1.

0x0902467c TimeView copy is not supported on this version of server: %1.

0x0902467d CDP Journal rollback and TimeView data rollback are mutually exclusive.

0x0902467e There is no TimeView data associated with this TimeMark to perform TimeView copy.

0x0902467f Virtual device MicroScan option is not supported in this version of server: %1.

0x09024680 The specified target device is enabled with global SafeCache. Please disable global SafeCache first if you need to copy data to this resource.

0x09024681 There is no unflushed cache marker.

0x09024682 There is no unflushed cache marker and the cache is not full. Please create a cache marker to flush the data to it first.

0x09024683 There is no TimeView data associated with this TimeMark to perform TimeView rollback.

0x09024684 Sync priority setting is not supported for this version of server: %1

0x09024685 Failed to get the TimeMarks of the source disk to validate the TimeMark timestamp for TimeView replication.

0x09024686 There is no TimeView data for the specified TimeMark on the source resource.

0x09024687 The TimeMark of the TimeView is not available on the source resource.

0x09024688 Failed to parse the configuration file from primary server.

0x09024689 The primary disk of the source resource is invalid.

0x09024690 Failed to get the TimeMarks of the primary disk to validate the TimeMark timestamp for TimeView replication status.

0x09024691 Timeview data replication is in progress for the specified Timemark.

0x09024692 TimeView data of the specified replica TimeMark is invalid.

0x09024693 Physical device cannot be found on failover partner server.

0x09024694 Physical device is already owned by the failover partner server.

0x09024695 Sync CDR replica TimeMark setting is not supported for this version of server: %1.

0x09024696 Preserve CDR primary TimeMark setting is not supported for this version of server: %1.

0x09024697 Specified CDR related parameters without CDR enabled.

0x09024698 It appears that the Near-line disk is still available. Please log in to the Near-line server before removing the configuration.

0x09024699 Keep TimeMarks setting is not supported for this version of server: %1

0x0902469a To keep TimeMarks, the TimeView resources have to be unassigned before rollback is performed.

0x0902469b CDP journal is active. To keep TimeMarks, please suspend the CDP journal and wait for the data to be flushed.

0x0902469c SafeCache is active. To keep TimeMarks, please suspend SafeCache and wait for the data to be flushed.

0x0902469d Group CDP journal is active. To keep TimeMarks, please suspend the Group CDP journal and wait for the data to be flushed.

0x0902469e Group SafeCache is active. To keep TimeMarks, please suspend Group SafeCache and wait for the data to be flushed.

0x09024700 Specified mirror monitoring related parameters without mirror monitoring option enabled.

0x0902469f Specified throughput control related parameters without throughput control option enabled.

0x09024701 Read the partition from inactive path option is not supported for this version of server: %1.

0x09024702 Use report luns option and lun ranges option are mutually exclusive.

0x09024703 Specified discover new devices options while in scan existing devices mode.

0x09024704 BTOS feature is not supported for this version of server: %1.

0x09024705 Select TimeMark with timeview data is not supported for this version of server: %1.

0x09024706 Fibre channel client rescan is not supported for this version of server: %1.

0x09024707 Configuration Repository can not be disabled when failover is enabled.

0x09024708 Configuration Repository is not enabled.

0x09024709 Configuration Repository has already been enabled.

0x0902470a Configuration Repository can not be enabled when failover is enabled.

0x0902470b Only administrators have the privilege for the operation.

0x0902470c TimeView replication is in progress. Please wait until the timeview replication is completed.

0x0902470d Please specify -F to allow forceful role reversal.

0x09024719 Backup is enabled. Recovery cannot proceed.

0x0902471a Replication job queue is not supported in this version of server: %1.

0x0902471b Replication schedule is not allowed for this resource.

0x0902471c Replication schedule is not allowed for this group.

0x0902471d Virtual Device name cannot be renamed for this resource.

0x0902471e I/O latency retrieval is not supported in this version of server: %1.

0x0902471f The new client OS type is not supported on this server.

0x09024720 Group rollback is only supported for SAN resources.

0x09024721 Group rollback is not supported for group with Near-line disks.

0x09024722 No TimeMark is available for the selected CDP journal timestamp.

0x09024723 Group rollback is not supported in this version of server: %1.

0x09024724 The specified virtual device does not have CDP enabled for the journal related options.

0x09021000 RPC call failed. Possible reasons: RPC encoding arguments error; RPC decoding results error; RPC sending error; RPC receiving results error; RPC timeout error (Note: check that the server is not disconnected from the network, where an RPC call timeout can occur after 30 seconds); RPC version mismatch; RPC authentication error; RPC program not available; RPC program version mismatch.

0x80020500 Cannot parse XML configuration

0x800B0100 Cannot allocate memory

0x8023040b Cannot find openssl library or the public key file.

0x80230406 Cannot reach the registration server on the Internet.

0x80230403 Cannot connect to the registration database.

0x80230404 Cannot find the keycode in the registration database.

0x80230405 License registration limit has been reached.

0x80230406 Cannot find host while attempting to register keycode. Note: If the FalconStor license server cannot be reached, make sure the server has Internet access.

0x80230407 Failed to register keycode because system call timed out.

0x80230408 Server is in failover state.

0x80020600 Failed to read config file.

0x8023040c ISHOME isn't defined.

Contact FalconStor Technical Support for any error not listed above.
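Several of the errors above (0x0902206a through 0x0902206c and 0x8023040c) are returned when the CLI cannot read its login environment variables. The following is a minimal sketch of exporting those variables in a shell session before running a CLI command; the installation path, server name, and credentials shown are placeholders, and the CLI command itself is omitted because its name depends on your installation.

    # Minimal sketch: set the environment variables the CDP/NSS CLI reads at login.
    # All values below are placeholders for your own environment.
    export ISHOME=/usr/local/ipstor      # installation directory (placeholder path)
    export ISSERVERNAME=nss-server1      # storage server to manage (placeholder)
    export ISUSERNAME=admin              # login account (placeholder)
    export ISPASSWORD='secret'           # login password (placeholder)
    # Run the CDP/NSS CLI command of your choice after the variables are set.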

Port Usage

This appendix contains information about the ports used by CDP and NSS.

The following ports are used for incoming requests. Communication on each port listed below is one-way, from a source to a destination; the reply to a request is sent back to a dynamic port number. Network firewalls should allow access through these ports for successful communications.

To maintain a high level of security, it is recommended that you disable all unnecessary ports. Although you may temporarily open some ports during initial setup of the CDP/NSS appliance, such as the telnet port (23) and FTP ports (20 and 21), you should shut them down after your work is complete.

The ports are not used unless the associated option is enabled in CDP/NSS. For FalconStor appliances, the ports marked are enabled by default.

Note: Make sure there are no blocked ports and the loopback device access is open.
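For illustration, the commands below show how a few of the management and iSCSI ports in the table might be opened on a Linux firewall. This is a minimal sketch that assumes an iptables-based firewall on the CDP/NSS server; adapt the port list and the tool to your own environment and security policy.

    # Minimal sketch (assumes an iptables-based firewall on the CDP/NSS server).
    iptables -A INPUT -p tcp --dport 11576 -j ACCEPT   # console/CLI configuration requests
    iptables -A INPUT -p tcp --dport 11582 -j ACCEPT   # CLI commands
    iptables -A INPUT -p tcp --dport 11583 -j ACCEPT   # console report requests
    iptables -A INPUT -p tcp --dport 3260 -j ACCEPT    # iSCSI clients
    # Allow replies to established sessions back in on dynamic ports.
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    service iptables save                              # persist the rules (RHEL-style init)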

Port Protocol Open on Used by Description

20 TCP/UDP CDP/NSS Server FTP Client Standard FTP port used for file data transfer

21 TCP/UDP CDP/NSS Server FTP Client Standard FTP port used for sending commands

22 TCP CDP/NSS Server Host Client Standard Secure Shell (SSH) port used for remote sessions

23 TCP/UDP CDP/NSS Server Host Client Standard Telnet port used for remote sessions

25 TCP/UDP CDP/NSS Server SAN Client Standard SMTP port used for Email Alerts

67 UDP CDP/NSS Server SAN Client DHCP port used for iSCSI Boot (BootIP)

68 UDP CDP/NSS Server SAN Client DHCP port used for iSCSI Boot (BootIP)

69 UDP CDP/NSS Server SAN Client TFTP (Trivial File Transfer Protocol) port used for iSCSI Boot (BootIP)

80 TCP
Open on: CDP/NSS Server, FalconStor Management Console, RecoverTrac, SAN Client
Used by: FalconStor web license server
Description: Standard Internet port used for online registration of license keycodes. Registration information is sent back using the HTTP protocol, where a local random port number is used (not hard-coded), just like a typical web-based page. The firewall does not block the random port if the established bit is set to let established traffic in.

81 HTTP CDP/NSS Server SAN Client Standard HTTP port used to access the FalconStor Management Console via Web Start

111 TCP/UDP CDP/NSS Server NFS Client NFS port used for rpcbind RPC program number mapper. The NFS port is assigned via the SUNRPC protocol. The ports vary, so it is not feasible (or convenient) to keep checking them and reprogramming a firewall. Most firewalls have a setting to "Enable NFS" upon which they will change the settings if the ports change.

123 UDP CDP/NSS Server NTP Server Standard Network Time Protocol (NTP) transport layer used to access external time servers

137 UDP
Open on: CDP/NSS Server, RecoverTrac Server
Used by: CIFS Client, RecoverTrac Client (Protected/Recovered)
Description: • ipstornmbd NETBIOS Name Service used for CIFS protocol. • Standard file and print sharing port used to discover system/network settings and also used to shut down the RecoverTrac client. RecoverTrac Server ports are dynamic and based on Windows WNetConnect2() and remote WMI calls. If this port is closed, you need to manually enter the client system/network information and power clients on/off by using IPMI, iLO, or SSH power control for physical machines and Hypervisor commands for virtual machines.

138 UDP
Open on: CDP/NSS Server, RecoverTrac Server
Used by: CIFS Client, RecoverTrac Client (Protected/Recovered)
Description: • ipstornmbd NETBIOS Datagram Service for CIFS protocol. • Standard file and print sharing port used to discover system/network settings and also used to shut down the RecoverTrac client. RecoverTrac Server ports are dynamic and based on Windows WNetConnect2() and remote WMI calls. If this port is closed, you need to manually enter the client system/network information and power clients on/off by using IPMI, iLO, or SSH power control for physical machines and Hypervisor commands for virtual machines.

139 TCP
Open on: CDP/NSS Server, RecoverTrac Server
Used by: CIFS Client, RecoverTrac Client (Protected/Recovered)
Description: • ipstornmbd NETBIOS Session Service for CIFS protocol. • Standard file and print sharing port used to discover system/network settings and also used to shut down the RecoverTrac client. RecoverTrac Server ports are dynamic and based on Windows WNetConnect2() and remote WMI calls. If this port is closed, you need to manually enter the client system/network information and power clients on/off by using IPMI, iLO, or SSH power control for physical machines and Hypervisor commands for virtual machines.

161 UDP CDP/NSS Server SNMP Client Standard Simple Network Management Protocol (SNMP) port used to query CDP/NSS MIBs

199 UDP CDP/NSS Server SNMP Client Standard SNMP multiplexing (SMUX) protocol port used to query Dell OpenManage system MIBs

443 HTTPS CDP/NSS Server FalconStor Web Setup Standard secure HTTP port used to access FalconStor Web Setup

445 TCP RecoverTrac Server RecoverTrac Client (Protected/Recovered) Standard file and print sharing port used to discover system/network settings and also used to shut down the RecoverTrac client. RecoverTrac Server ports are dynamic and based on Windows WNetConnect2() and remote WMI calls. If this port is closed, you need to manually enter the client system/network information and power clients on/off by using IPMI, iLO, or SSH power control for physical machines and Hypervisor commands for virtual machines.

623 UDP CDP/NSS Server Failover Server IPMI power control port used for Alert Standard Format (ASF) Remote Management and also used to power off the failed server in a failover configuration

705 UDP CDP/NSS Server SNMP Client Standard SNMP AgentX port used to query agents, such as Fujitsu ServerView

1311 HTTPS CDP/NSS Server Dell OpenManage Server HTTPS port used for hardware configuration of Dell servers

3260 TCP CDP/NSS Server iSCSI Client or storage • iSCSI port used for communication between iSCSI clients and the server • Used for the iSCSI Boot (BootIP) option • Used to virtualize iSCSI storage on the CDP/NSS Server

4011 UDP CDP/NSS Server SAN Client PXE port for iSCSI Boot (BootIP) option

5001 TCP CDP/NSS Server • SAN Client • CDP/NSS Replica Server istcp port used to test the network connection and measure bandwidth performance

8009 TCP CDP/NSS Server FalconStor Web Setup Standard Apache AJP port used for FalconStor Web Setup

8443 TCP FileSafe Server on CDP/NSS Server FileSafe Client Apache Tomcat SSL communication port used for internal FileSafe commands

11576 TCP
Open on: CDP/NSS Server
Used by: CLI from SAN Client, FalconStor Management Console, IMA, RecoverTrac, HyperTrac, Snapshot Director for VMware
Description: Secure RPC communication port used for sending requests to the configuration management module on the server.

11577 TCP
Open on: CDP/NSS Replication Source Server, CDP/NSS Replica Server
Used by: CDP/NSS Replica Server, CDP/NSS Replication Source Server
Description: Communication port used to send replication data. This port is only open while replication is being performed; otherwise, it is closed.

11580 TCP
Open on: CDP/NSS Server (Primary), CDP/NSS Server (Secondary)
Used by: CDP/NSS Server (Secondary), CDP/NSS Server (Primary)
Description: Communication port used between a pair of failover servers. It is not required for a stand-alone CDP/NSS Server.

11582 TCP
Open on: CDP/NSS Server
Used by: CLI from SAN Client, RecoverTrac, HyperTrac, Snapshot Director for VMware
Description: Communication port used to send CLI commands to the CDP/NSS Server.

11583 TCP CDP/NSS Server FalconStor Management Console Communication port used to send report requests (i.e. report schedules, global replication report, statistics log configuration updates) to the configuration management module on the server.

11588 TCP
Open on: CCM Server on CDP/NSS Server
Used by: CCM Console, FalconStor Management Console
Description: FalconStor Central Client Management (CCM) plugin port used to send CCM internal commands to the server.

11762 TCP
Open on: RecoverTrac, HyperTrac, IMA, DiskSafe, Snapshot Director for VMware
Used by: CDP/NSS Server
Description: ipstorclntd SecureRPC port used to send management requests (i.e. snapshot notification, configuration, information retrieval) to the IMA module on SAN Clients. Snapshot Director for VMware opens this ESX firewall port during installation.

18651 TCP FileSafe Server on CDP/NSS Server FileSafe Client Communication port used for FileSafe data copy

Hypervisor ports (VMware vCenter or Hyper-V) Hypervisor port protocol
Open on: Hypervisor
Used by: RecoverTrac Server
Description: Hypervisor ports used to manage recovery from/to VMs within Hypervisor Servers and Centers. Refer to the appropriate Hypervisor documentation to determine port requirements. RecoverTrac uses VMware vSphere APIs to manage VMware environments and remote WMI/remote VDS to manage Hyper-V environments.
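To confirm from a client host that a required port is reachable through intervening firewalls, a simple TCP probe can be used. The sketch below uses the common nc (netcat) utility, which is not part of CDP/NSS; the server name is a placeholder.

    # Minimal sketch: probe key ports on the storage server from a client machine.
    # Replace nss-server1 with the name or IP address of your CDP/NSS server.
    nc -zv nss-server1 11576    # console/CLI configuration port
    nc -zv nss-server1 11582    # CLI command port
    nc -zv nss-server1 3260     # iSCSI port (for iSCSI clients)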

SMI-S Integration

Large Storage Systems and Storage Area Networks (SANs) are emerging as a prominent and independent layer of IT infrastructure in enterprise-class and midrange computing environments. Examples of applications and functions driving the emergence of new storage technology include:

• Sharing of vast storage resources between multiple systems via networks
• LAN-free backup
• Remote, disaster-tolerant, online mirroring of mission-critical data
• Clustering of fault-tolerant applications and related systems around a single copy of data
• Archiving requirements for sensitive business information
• Distributed database and file systems

The FalconStor® SMI-S Provider for CDP and NSS storage offers CDP and NSS users the ability to centrally manage multi-vendor storage networks for more efficient utilization.

FalconStor CDP and NSS solutions use the SMI-S standard to expose the storage systems they manage to an SMI-S client. The storage systems supported by FalconStor include Fibre Channel disk arrays and SCSI disk arrays. A typical SMI-S client can discover FalconStor devices through this interface, which uses CIM-XML, a WBEM protocol that exchanges Common Information Model (CIM) information as XML over HTTP.

The SMI-S server is included in CDP and NSS versions 6.15 Release 2 and later.

SMI-S Terms and concepts

Storage Management Initiative - Specification (SMI-S): A storage standard developed and maintained by the Storage Networking Industry Association (SNIA). SMI-S enables broad interoperability among heterogeneous storage vendor systems, allowing different classes of hardware and software products supplied by multiple vendors to reliably and seamlessly interoperate for the purpose of monitoring and controlling resources.

The FalconStor SMI-S interface overcomes the deficiencies associated with legacy management systems that deter customers from using more advanced storage management systems.

openPegasus: The FalconStor SMI-S Provider uses an existing open-source CIM Object Manager (CIMOM) called openPegasus for a portable and modular solution. It is an open-source implementation of the DMTF CIM and WBEM standards.

openPegasus is packaged in tog-pegasus-[version].rpm with Red Hat Linux and is automatically installed on CDP and NSS appliances with version 6.15 R2 and later. If it has not been installed on your appliance, you can install it using the following command: rpm -ivh --nodeps tog-pegasus*.rpm
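The commands below are a hedged example of confirming that the openPegasus package is present and that its CIM server is running on a Red Hat-based appliance; init script and service names can vary between releases, so verify them on your system.

    # Minimal sketch (Red Hat-style service management assumed).
    rpm -q tog-pegasus            # confirm the openPegasus package is installed
    service tog-pegasus status    # check whether the CIM server (cimserver) is running
    service tog-pegasus start     # start it if it is not running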

CommandCentral Storage (CCS): The FalconStor SMI-S Provider can be used with Veritas CommandCentral Storage (CCS), which offers a storage resource management solution by providing centralized visibility and control across physical and virtual heterogeneous storage environments. By enabling storage capacity management, centralized monitoring, and application-to-spindle mapping, CommandCentral Storage helps improve storage utilization, optimize resources, increase data availability, and reduce capital and operational costs.

Enable SMI-S

To enable SMI-S, right-click on the server in the FalconStor Management Console and select Properties.

Highlight the SMI-S tab and select the Enable SMI-S checkbox.

By default, the Enable SMI-S checkbox is not selected, which means the SMI-S Provider is disabled. You need to enable SMI-S before a third-party storage resource manager can perform discovery and storage management using FalconStor NSS and CDP.
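Once SMI-S is enabled, you can sanity-check the provider before configuring a third-party manager. The example below uses the cimcli utility that ships with openPegasus to enumerate instances from the falconstor/Default namespace described below; the IP address and credentials are placeholders, and the class name is an assumption about what the provider exposes, so treat this as a sketch rather than a verified procedure.

    # Minimal sketch: query the FalconStor SMI-S provider with cimcli (from openPegasus).
    # Replace the address and credentials with your server's values.
    cimcli niall -l 192.168.1.100 -u root -p password -n falconstor/Default
    # Enumerate instances of a common SMI-S class (assumed to be exposed by the provider):
    cimcli ei CIM_StorageVolume -l 192.168.1.100 -u root -p password -n falconstor/Default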

Use the SMI-S Provider

Launch the CommandCentral Storage console

1. Use a web browser to open the address https://localhost on port 8443 to open the CCS console.

2. Use the default user name (admin) and password (password) to log in to the CommandCentral Storage console for the first time.

The top two panels are the main menu bar of CCS and the bottom right is the main control panel. The main menu bar and the storage section of the main control panel are important to SMI-S usage.

Add FalconStor Storage

To add FalconStor managed storage devices:

1. Navigate to Tools in the main menu bar and then select Configure a New Device in the main control panel.

2. Select Array from the drop-down menu for Device Category and select FalconStor NSS for Device Type and click Next.

The Device Configuration screen displays.

3. Enter the IP address of the server, along with the user name and password, which are the same as the server login account credentials (i.e. the root account or an administration account). For the Interop Namespace field, enter falconstor/Default and accept the defaults for the other fields.

Once the server has been added successfully, a status screen displays.

View FalconStor Devices

To view FalconStor storage devices:

1. Select Managing-->Summary from the main menu bar of the Command Central Storage console.

Alternatively, you can select Managing -->Storage from the main menu bar.

2. In the main control panel select Arrays.

The Virtualization SAN Arrays Summary screen displays the FalconStor storage.

3. Select the corresponding device by clicking on the name.

A summary of the storage device displays.

View Storage Volumes

To view storage volumes:

1. Select the Storage Volumes tab in the sub-menu on the top of main control panel.

A summary of storage volumes displays.

Assigned virtual disks display in Unknown Storage Volumes [Masked to Unknown Host(s)] or (Un)Claimed Storage Volumes while unassigned virtual disks display in Unallocated Storage Volumes [Unmasked].

2. Select an individual volume to view the storage pool it is in, and the physical LUN it relies on.

View LUNs

To view logical unit numbers (LUNs):

1. Select the LUNs tab in the sub-menu on the top of the main control panel.

A summary of CDP/NSS virtual disks displays. Assigned virtual disks display as Unknown LUNs [Masked to Unknown Host(s)] or (Un)claimed LUNs, while unassigned virtual disks display as Unallocated LUNs [Unmasked].

2. Select an individual LUN to view the storage pool it is in, and the physical LUN it relies upon.

View Disks

1. To view disks, select the Disks tab in the sub-menu to view LUN information.

A summary of physical storage displays, along with the individual disks.

2. Select an individual disk to view which storage pool it is in and which storage volume it was created from.

View Masking Information

1. To view masking information, select Connectivity in the sub-menu bar to view a summary of all the FC adapters, ports and storage views.

2. Select individual adapters and ports to view their details.

3. Select an individual view to see the port it is seen from and the storage volume it sees.

RAID Management for VS-Series Appliances

The FalconStor RAID Management Console allows you to discover, configure, and manage storage connected to VS-Series appliances.

A redundant array of independent disks (RAID) consists of a set of physical disks configured according to a specific algorithm. The FalconStor RAID Management Console enables centralized management of RAID controllers, disk drives, RAID arrays, and mapped/unmapped Logical Units for the storage enclosure head and any expansion enclosure(s) connected to it. The console can be accessed from a VS-Series server after you connect to it in the FalconStor Management Console.

The management responsibilities of the RAID Management Console and the FalconStor Management Console are divided as described below.

RAID management information is organized as follows:

• Prepare to use the RAID Management Console - ‘Prepare for RAID management’.

• Launch the RAID Management Console and discover storage - ‘Launch the RAID Management Console’.

• Manage storage arrays in the RAID Management console:
  • ‘Display a storage profile’
  • ‘View enclosures’

  • ‘Manage controller modules’
  • ‘Manage disk drives’
  • ‘Manage RAID arrays’
  • ‘Logical Unit Mapping’

• Monitor configured storage - ‘Monitor storage from the FalconStor Management console’.

Prepare for RAID management

You must complete the following before attempting any RAID management procedures:

1. Connect the FalconStor appliance and storage enclosures according to steps 1 through 4 in the FalconStor Virtual-Storage Appliances (VS/TVS) Hardware QuickStart Guide (QSG) shipped with your appliance.

2. Perform initial system configuration using the FalconStor Web Setup application, as described in the FalconStor CDP/NSS Software QuickStart Guide (also shipped with your appliance).

3. Connect to the VS server in the FalconStor Management Console, logging in as a user with Administrator status.

Preconfigured storage

Preconfigured storage enclosures are shipped with a default RAID 6 configuration that consumes all available resources. In the FalconStor Management console, default devices that have been mapped to the FalconStor host are visible under Physical Resources --> Physical Devices --> SCSI Devices.

In the RAID Management console, these devices are known as Logical Units (LUs) (refer to ‘Logical Unit Mapping’). The FalconStor RAID Management console lets you reconfigure these default devices as needed.

When mapped LUs are available in the FalconStor Management console, you can create SAN Resources. The last digit of the SCSI address (A:C:S:L) corresponds to the LUN number that you choose in the Mapping dialog.

Refer to “Logical Resources” in the CDP/NSS Administration Guide and FalconStor Management Console online help for details on configuring these physical devices as virtual devices and assigning them to clients.

Unconfigured storage

If your storage array has not been preconfigured, you must prepare storage using functions in the RAID Management console before you can create SAN resources in the FalconStor Management console:

• Create RAID arrays (refer to ‘Create a RAID array’).
• Create Logical Units (LUs) on each array (refer to ‘Create a Logical Unit’).
• Map each LU to a Logical Unit Number (LUN) (refer to ‘Define LUN mapping’).

Note: Other devices displayed in this location are not related to storage. PERC 6/i devices are internal devices on the CDP/NSS appliance; the Universal Xport device is a system device housing a driver that provides access to storage.

Launch the RAID Management Console

Right-click the server object and select RAID Management. The main screen, which describes the management categories available in the console, is displayed.

Discover storage

This procedure locates in-band or out-of-band storage.

1. Click the Discover button (upper-right edge of the display).

2. In the Discover Storage dialog, select the discovery method.

• Select Manual (the default) to discover out-of-band storage. Enter a controller IP address and select Discover. The preconfigured controller IP addresses for controller modules on the storage enclosure head (Enclosure 0) are 192.168.0.101 (slot 0) and 192.168.0.102 (slot 1).

Note: Each controller module uses a different IP address to connect to the server. You can use either IP address for the purpose of discovering storage.

• Select Automatic if you do not know the IP address. This option can detect only in-band storage and will require additional time to search the subnet.

A confirmation message is displayed when storage is discovered, listing the storage items found during discovery. Each discovered storage array includes a storage enclosure head and any expansion enclosures that were preconfigured for your system.

After discovery, each storage array profile is listed in the Discover Storage drop-down. Select a profile to display components in the RAID Management console.

You can use the keyboard to navigate through the Discover Storage list. Page Up/Page Down jump between the first and last items in the list; Up and Down cursor arrows scroll through all items in the list.

Action menu: You can also manage storage profiles from the Action menu.

To discover storage, click Add to display the Discover Storage dialog. Continue as described above.

To remove a storage profile, click its checkbox and then click Remove. After you do this, the profile you removed will still exist, but its storage will not be visible from the host server.

Future storage discovery

To discover an additional storage enclosure head or expansion enclosure in the future, select Discover Storage from the drop-down list, then click Discover.

Display a storage profile

After storage has been discovered, select a storage profile from the Discover Storage drop-down list. The console loads the profile using its (valid) IP address and displays the components of the array.

In the navigation pane, the Storage object is selected by default; information at this level includes the storage name and IP address and summary information about all components in the array. From this object, you can configure all controller connection settings (refer to ‘Configure controller connection settings’).

Navigation pane

The navigation pane includes objects for all components in the storage array you selected in the Discover Storage drop-down list. Double-click an object to expand and display the objects below it; double-click again to collapse. When you select any object, related information is displayed in the content pane to the right. Some items include a right-click menu of management functions, while others are devoted to displaying status information.

Status bar The Status Bar at the bottom of the screen identifies - from left to right - the host machine, the storage array name and its WWID, and the date/time of the last update to the storage configuration.

Menu bar Action menu - Click Manage Storage to display a dialog that lets you display a storage profile and discover new storage (equivalent of Discover Storage).

Tools menu - Click Manage Event Log to view or clear the event log for the selected storage profile.

Click Exit to close the RAID Management console and return to the FalconStor Management console.

Tool bar Click Exit to close the RAID Management console and return to the FalconStor Management console.

Click About to display product version and copyright information.

Rename storage

You can change the storage name that is displayed for the Storage object in the navigation pane. To do this:

1. Right-click the Storage object and click Rename.

2. Type a new display name. It can include up to 30 characters consisting of letters, numbers, and certain special characters: “_” (underscore); “-” (hyphen); or “#” (pound sign).

3. Click OK when you are done.

Refresh the display

To refresh the current storage profile, right-click the Storage object and click Refresh. Note that this is not an alternative method for discovering storage.

Configure controller connection settings

After storage has been discovered, you can change the port settings for controller modules on the controller enclosure head as required by your network administrator. To do this:

1. Right-click the Storage object and select Configure Controller Connection. You can also do this from the Controller Modules object or from the object for an individual controller.

2. Select the controller from the drop-down list. The dialog displayed from the object for an individual controller provides settings for that controller only.

3. Set the IP address, subnet mask, and gateway as needed, then click Apply.

Caution: Improper network settings can prevent local or remote clients from accessing storage.

View enclosures

A storage array includes one storage enclosure head (numbered Enclosure 0) and, if connected, expansion enclosures (numbered Enclosure 1 to Enclosure x). Select the Enclosures object to display summary information for components in all enclosures in the selected storage profile.

Individual enclosures

Select a specific storage enclosure object to display quantity and status information for its various components, including batteries, power supply/cooling fan modules, power supplies, fans, and temperature sensors.

Storage enclosure head

Expansion enclosure

Manage controller modules

Each enclosure head (Enclosure 0) has two RAID controller modules. Select the Controller Modules object to display summary information and status for both controllers, as well as a controller image that provides at-a-glance controller status. The controller icon in the navigation pane also indicates status:

• Controller is online.
• Controller needs attention.
• Controller activity is suspended.
• Controller has failed.
• Controller is in service mode.
• Controller slot is empty.

You can configure connection settings for both controllers from this object (refer to ‘Configure controller connection settings’).

RAID controller firmware must be upgraded from time to time (refer to ‘Upgrade RAID controller firmware’).


Individual controller modules

Select a controller object to display detailed information and configure its connection settings. The selected controller is outlined in yellow and will also show controller status.

You can configure connection settings for the selected controller from this object (refer to ‘Configure controller connection settings’).


Manage disk drives

The storage enclosure head and each expansion enclosure have either 12 or 24 drives. The display also includes an image for each enclosure, showing at-a-glance drive status.

Interactive enclosure images

The enclosure image in the content pane provides information about any drive, regardless of the disk object you have selected in the navigation pane. Enclosure 0 always represents the storage enclosure head. Enclosures 1 through x represent expansion enclosures. (When an enclosure has 24 drives, drive images are oriented vertically.) Hover your mouse over a single drive image to display enclosure/slot information and determine whether the drive is assigned or unassigned. Hovering adds a yellow outline to the drive. Slot statuses include:

• Unassigned - available to be assigned to an array.
• Assigned to an array.
• Set as a hot spare and in use to replace a failed disk.
• Set as a hot spare, on standby.
• Unassigned disk removed - empty slot.
• Disk replaced - assigned.
• Disk replaced - unassigned.

The following disk images indicate a disk that is not healthy:

• Previously assigned to an array but was removed.
• Previously assigned to an array but failed.
• Not previously assigned to an array but failed.
• Hot spare failed while in use.
• Hot spare standby failed.


Select the Disk Drives object to display summary and status information for all drives in all enclosures in the selected profile, including layout, status, disk mode, total capacity, and usable capacity, as well as interactive enclosure images (refer to ‘Interactive enclosure images’).


Individual disk drives

In the navigation pane, the icon for an individual disk indicates drive mode and status:

• Assigned, status optimal
• Assigned, status failed
• Assigned, being replaced (rebuild action)
• Unassigned, status optimal
• Unassigned, status failed
• Unassigned, replacing failed drive (rebuild action)
• Hot spare in use, status optimal
• Hot spare in use, status failed
• Hot spare standby, status optimal
• Hot spare standby, status failed

Select an individual disk drive object to display additional details about the drive. The selected drive is outlined in green in the interactive enclosure image.

You can also configure the selected drive to be a global hot spare.


Configure a hot spare

Configuring a disk as a hot spare enables it to replace any failed disk automatically. This option is available for the selected disk only if the disk is unassigned and its status is optimal (normal).

To create a global hot spare, right-click an unassigned disk and select Hot Spare - Set. The procedure starts automatically.

When the procedure is done, the disk icon is changed to standby mode in all interactive enclosure displays (refer to ‘Interactive enclosure images’).
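The eligibility rule above - unassigned and with optimal status - is easy to express in code. The following Python sketch is purely illustrative and assumes a simple in-memory list of disk records rather than any product API:

from dataclasses import dataclass

@dataclass
class Disk:
    slot: int
    assigned: bool
    status: str  # e.g. "optimal", "failed"

def hot_spare_candidates(disks):
    """Return disks that meet the documented hot-spare rule:
    unassigned and with optimal status."""
    return [d for d in disks if not d.assigned and d.status == "optimal"]

disks = [Disk(0, True, "optimal"), Disk(1, False, "optimal"), Disk(2, False, "failed")]
print([d.slot for d in hot_spare_candidates(disks)])  # [1]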

Remove a hot spare

If a hot spare is in standby mode (and not in use), you can remove the hot spare designation. To do this:

Right-click the disk and select Hot Spare - Remove.

When the procedure is done, the disk icon image changes to unassigned in all interactive enclosure displays.


Manage RAID arrays

A RAID array is a collection of disks chosen from all enclosures in the selected storage profile. Select the RAID Arrays object to display summary information about all arrays, including name, status, RAID level, total capacity, total free capacity, and physical disk type. When you select this object, the disks associated with all arrays are outlined in blue in the interactive enclosure image.

From this object, you can create a RAID array, then create Logical Units (LUs) on any array and map them to FalconStor hosts (refer to ‘Create a RAID array’ and ‘Create a Logical Unit’).


Create a RAID array

You can create a RAID array using unassigned disks chosen from all enclosures in the selected storage profile. To do this:

1. Right-click the RAID Arrays object and select Create Array.

2. Type a name for the RAID and select the RAID level.

3. Select physical disks in the interactive enclosure image. Drive status must be Optimal and the disks must be Unassigned (view hover text to determine status). For most effective use of resources, all disks in a RAID array should have the same capacity. If you select a disk with a different capacity than the others you have selected, a warning (Warning: disks differ in capacity) will be displayed. A worked example of this capacity arithmetic appears after this procedure.

As you select disks, the Number of Disks in RAID and RAID Capacity values increase; selected disks show a check mark.

4. Select Create when you are done.

Several messages will be displayed while the RAID is created; a confirmation message will display when the process is complete. The storage profile is updated to include the new array.
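The capacity warning in step 3 follows from the way RAID arrays are typically sized: usable space is based on the smallest member disk, so larger disks contribute only part of their capacity. The Python sketch below illustrates that arithmetic using commonly cited formulas for RAID 0/1/5/6; it is an approximation for planning purposes, not the appliance's own calculation.

def usable_capacity_gb(disk_sizes_gb, raid_level):
    """Rough usable capacity using common RAID formulas, based on the smallest disk."""
    n = len(disk_sizes_gb)
    smallest = min(disk_sizes_gb)
    if raid_level == 0:
        usable = n * smallest
    elif raid_level == 1:
        usable = (n // 2) * smallest        # mirrored pairs
    elif raid_level == 5:
        usable = (n - 1) * smallest         # one disk's worth of parity
    elif raid_level == 6:
        usable = (n - 2) * smallest         # two disks' worth of parity
    else:
        raise ValueError("unsupported RAID level")
    wasted = sum(disk_sizes_gb) - n * smallest
    if wasted:
        print(f"Warning: disks differ in capacity; about {wasted} GB is unused")
    return usable

print(usable_capacity_gb([900, 900, 900, 1800], raid_level=5))  # 2700, with a warning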


Create a Logical Unit

You must define a Logical Unit (LU) on an array in order to map a device to a FalconStor host. To do this:

1. Right-click the RAID Arrays object or the object representing an individual array and select Create Logical Unit.

2. Type the label for the LU; this is the name that will appear in the RAID Management console.

3. If you began the procedure from the RAID Arrays object, select the RAID array on which you want to create the LU from the RAID drop-down list, which shows the current capacity of the selected array.

If you began the procedure from an individual array, the current capacity for that array is already displayed.

4. Enter a capacity for the LU and select GB, TB, or MB from the drop-down list.

5. The Logical Unit Owner (the enclosure controller) is selected by default; do not change this selection.

6. You can assign (map) the LU to the FalconStor host now; the Map LUN option is selected by default. Alternatively, uncheck the option and map the LU later (refer to ‘Unmapped Logical Units’).

7. Select a host from the drop-down list.

8. Choose a LUN designation from the drop-down list of available LUNs.

9. Select Create when you are done.


Several messages will be displayed while the LU is created; a confirmation message will display when the process is complete. The storage profile is updated to include the new LU and you will see it appear in the display.

Individual RAID arrays

In the navigation pane, the icon for an individual array provides at-a-glance array status:

• RAID 0, status optimal
• RAID 0, status degraded (one or more disks have failed)
• RAID 0, status failed
• RAID 1, status optimal
• RAID 1, status degraded (one or more disks have failed)
• RAID 1, status failed
• RAID 5, status optimal
• RAID 5, status degraded (one or more disks have failed)
• RAID 5, status failed
• RAID 6, status optimal
• RAID 6, status degraded (one or more disks have failed)
• RAID 6, status failed


Select a RAID array object to display summary details and status information about physical disks assigned to the array, as well as the mapped Logical Units (LUs) that have been created on the array. When you select an array, the associated disks are outlined in green in the interactive enclosure image.

The following functions are available from the selected array:

• ‘Create a Logical Unit’
• ‘Rename the array’
• ‘Delete the array’
• ‘Check RAID array actions’
• ‘Replace a physical disk’


Rename the array

You can change the name that is displayed for an array in the navigation pane at any time. To do this:

1. Right-click the array object and click Rename.

2. Type a new display name. It can include up to 30 characters consisting of letters, numbers, and certain special characters: “_” (underscore); “-” (hyphen); or “#” (pound sign).

3. Click OK when you are done.

Delete the array

To delete an array, expand the RAID Arrays object until you can see the individual array objects. When you delete an array, all data will be lost and cannot be retrieved.

1. Right-click the array object and select Delete Array.

2. Type yes in the dialog to confirm that you want to delete the array, then select OK.

When the array has been deleted, the storage profile is updated automatically.


Check RAID array actions

LU activities may take some time. Typical actions include:

• Initialization - creating a Logical Unit
• Rebuild - swapping in a hot spare to replace a failed disk
• Copy-back - replacing a failed disk with an unassigned healthy disk, removing the hot spare from the configuration

To check current actions, right-click the object for an individual array and select Check Actions. A message reporting the progress of any pending action will be displayed.

To check actions on another array, select it from the drop-down list.

Click OK to close the dialog.

Replace a physical disk

When a disk has failed in the array, the hot spare takes its place automatically. You need to follow up and replace the failed disk with an unassigned healthy disk, freeing up the hot spare. A failed disk is easily identified in the Disk Drive area of the console.


In the RAID Arrays area of the console, the array icon shows its status as degraded, and the disk status is displayed as failed.


Right-click the array object in the navigation pane and select Replace Physical Disk. The Replace Physical Disk dialog shows the failed disk. In the array image in the dialog, select an unassigned, healthy disk to replace the failed disk. The disk you select will show a green check mark and the disk ACSL will be displayed in the dialog.

Click Replace Disk. A rebuild action will start. While this action is in progress, the icons for the replacement disk and the disk being replaced change to indicate the rebuild.

When the action is done, replacement disk status changes to assigned/optimal.


Logical Units

Double-click an Array object to display the objects for mapped Logical Units (LUs) on the array. Select an LU object to display status, capacity, WWPN, RAID information, ownership, cache, and other information.

The following functions are available from the selected LU:

• ‘Define LUN mapping’
• ‘Remove LUN mapping’
• ‘Rename LU’
• ‘Delete Logical Unit’

Define LUN mapping

If you did not enable LUN mapping when you created a Logical Unit, you can do this at any time. To do this:


1. Right-click the Logical Unit object in the console and select Define LUN mapping. (You can also do this from LUs listed under the Unmapped Logical Units object; refer to ‘Unmapped Logical Units’.)

2. Choose a LUN from the drop-down list of available LUNs and select OK.

Several messages will be displayed while the LUN is assigned and a confirmation message will display when the process is complete. The storage profile is updated.

After you perform a rescan in the FalconStor Management console, you can prepare the new device for assignment to clients. In the console, the last digit of the SCSI address (A:C:S:L) corresponds to the LUN number you selected in the Mapping dialog.
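Because the LUN appears as the last digit of the SCSI address, it is straightforward to cross-check a device's address against the LUN you assigned. The following Python fragment is illustrative only; the A:C:S:L string is assumed to be copied from the console display:

def parse_acsl(acsl: str):
    """Split an 'adapter:channel:scsi-id:lun' address into its four fields."""
    adapter, channel, scsi_id, lun = (int(part) for part in acsl.split(":"))
    return {"adapter": adapter, "channel": channel, "scsi_id": scsi_id, "lun": lun}

address = parse_acsl("3:0:1:5")
print(address["lun"])  # 5 - should match the LUN chosen in the Mapping dialog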


Remove LUN mapping

Removing LUN mapping removes a physical device from the FalconStor console and prevents the server from accessing the device. To do this:

1. Right-click the LU object and select Remove LUN Mapping.

2. Type yes in the dialog to confirm that you want to remove LUN mapping, then select OK.

Several messages will be displayed while the mapping is removed and a confirmation message will display when the process is complete. The storage profile is updated.

You can re-map the LU at a later time, then rescan in the FalconStor Management console to discover the device.

Rename LU

You can change the name that is displayed for an LU in the navigation pane at any time. To do this:

1. Right-click the LU object and click Rename.

2. Type a new display name. It can include up to 30 characters consisting of letters, numbers, and certain special characters: “_” (underscore); “-” (hyphen); or “#” (pound sign).

3. Click OK when you are done.


Delete Logical Unit

To delete an LU, expand the object for an individual RAID array until you can see the individual LU objects. When you delete an LU, all data will be lost and cannot be retrieved.

1. Right-click the LU object and select Delete Logical Unit.

2. Type yes in the dialog to confirm that you want to delete the LU, then select OK.

Several messages will be displayed while the LU is deleted and a confirmation message will display when the process is complete. When the LU has been deleted, the storage profile is updated.


Logical Unit Mapping

Select this object to display current mapping information for all Logical Units created on all RAID arrays, including mapped and unmapped LUs.

The display also includes summary information about the host machine, which represents the controllers on all servers connected to the storage array, such as host type, interface type, and port information.

You can expand this object to display unmapped and mapped LUs.

Unmapped Logical Units

Selecting Unmapped Logical Units displays LUs that have not been mapped to a host machine and are therefore not visible in the FalconStor Management Console.


From this object, you can define LUN mapping for any LU with Optimal status (refer to ‘Define LUN mapping’).

Select an individual unmapped LU to view configuration details.

From this object you can rename the LU (refer to ‘Rename LU’) or define LUN mapping (refer to ‘Define LUN mapping’).


Mapped Logical Units

Display information for mapped LUs from the Host object. Host information includes the host OS, the type of interface on the host controller, and the WWPN and alias for each port.

This screen includes the mapped Logical Units that are visible in the FalconStor console, where the last digit of the SCSI address (A:C:S:L) corresponds to the number in the LUN column of this display - this is the LUN number you selected in the Mapping dialog.


Upgrade RAID controller firmware

When an upgrade to RAID controller firmware is available, FalconStor will send a notification to affected customers. Contact FalconStor Technical Support to complete the following steps to upgrade firmware:

1. Download firmware files as directed by Technical Support.

2. Select Tools --> Upgrade Firmware in the menu bar.

3. To complete Stage 1, browse to the download location and select the firmware file.

If you also want to upgrade non-volatile static random access memory (NVSRAM), browse to the download location again and select the file.

Click Next when you are done.

4. To complete Stage 2, transfer the selected files to a server location specified by Technical Support.

5. To complete Stage 3, download the firmware to controllers.

6. In Stage 4, activate the firmware.


Event log

To display an event log for the selected storage profile, select Tools --> Manage Event Log --> View Event Log in the menu bar.

All events are shown by default; three event types are recorded.

• Informational events that normally occur.
• Warnings related to unusual component conditions.
• Critical errors such as device failure or loss of connectivity.

Filter the event log

• Select an event type in the Events list to display only one event category.
• Click a column heading to sort event types, components, locations, or descriptions.
• Select an item in the Check Component list to display events only for the RAID array, RAID controller modules, physical disks, virtual disks, or miscellaneous events.

Click Quit to close the Event Log.
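If you keep a copy of event records for offline review, the same kind of filtering can be reproduced in a script. This is a hypothetical Python sketch over a list of event records; the field names are assumptions for illustration, not the product's export format:

events = [
    {"type": "critical", "component": "physical disk", "description": "Drive in slot 4 failed"},
    {"type": "warning", "component": "controller", "description": "Battery nearing end of life"},
    {"type": "informational", "component": "raid array", "description": "Initialization complete"},
]

def filter_events(records, event_type=None, component=None):
    """Keep only records matching the requested type and/or component."""
    return [
        r for r in records
        if (event_type is None or r["type"] == event_type)
        and (component is None or r["component"] == component)
    ]

for event in filter_events(events, event_type="critical"):
    print(event["description"])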

Clear the event log

To remove events from the log for the currently displayed storage profile, click Tools --> Manage Event Log --> Clear Event Log in the menu bar, then select OK in the confirmation dialog.


Monitor storage from the FalconStor Management console

While all storage configuration must be performed in the RAID Management console, you can monitor storage status information in the FalconStor Management console from the Enclosures tab, which is available in the right-hand pane when you select the server object. Storage component information includes status of expansion enclosures and their components; you can also display information about the host server, management controllers, and other devices.

Storage information

To display information about storage, make sure the Check Storage Components option is checked.

Choose a storage profile from the drop-down list. Click Refresh to update the display with changes to storage resources that may have been made by another user in the RAID Management console.

If you uncheck this option, information about storage is removed from the display immediately.


Server information

To include information about the host server and other devices, make sure the Host IPMI option is checked. You can display information for as many or as few categories as you like:

• Chassis status
• Management controller (MC) status
• Sensor information
• FRU device information
• LAN Channel information

If you uncheck an option, related information is removed from the display immediately.
