
Networking for Storage Virtualization and EMC RecoverPoint TechBook



Networking for Storage Virtualization and EMC RecoverPoint

Version 7.0

• EMC VPLEX and RecoverPoint Solutions

• Interswitch Link Hops Basics and Examples

• EMC RecoverPoint SAN Connectivity Information

Ron Dharma
Li Jiang


Copyright © 2011 - 2014 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).

Part number H8075.6


Contents

Preface.............................................................................................................................. 7

Chapter 1  Virtualization
    Introduction ....................................................................................... 14
    Benefits of VPLEX and storage virtualization .............................. 15
    Benefits of EMC RecoverPoint ........................................................ 16

Chapter 2  EMC VPLEX
    VPLEX overview ............................................................................... 18
        Product description ................................................................... 18
        VPLEX advantages .................................................................... 18
        SAN switches ............................................................................. 19
        VPLEX limitations ..................................................................... 19
    VPLEX product family ..................................................................... 20
        VPLEX Local ............................................................................... 20
        VPLEX Metro .............................................................................. 21
        VPLEX Geo ................................................................................. 24
    VPLEX hardware ............................................................................... 25
        VPLEX cluster ............................................................................ 25
    VPLEX software ................................................................................ 28
        GeoSynchrony ............................................................................ 28
        VPLEX management ................................................................. 28
        Provisioning with VPLEX ......................................................... 29
        Consistency groups ................................................................... 30
        Cache vaulting ........................................................................... 30
    VPLEX configuration ........................................................................ 31
    Configuring VPLEX with FCoE switches ...................................... 32
        FCoE goal .................................................................................... 32
        Topology overview ................................................................... 33
        VPLEX with FCoE switches configuration steps .................. 35
    VPLEX documentation ..................................................................... 43

Chapter 3  EMC RecoverPoint
    RecoverPoint architecture ................................................................ 46
    RecoverPoint components ............................................................... 48
    Logical topology ............................................................................... 49
    Local and remote replication .......................................................... 51
        RecoverPoint replication phases ............................................. 51
    Intelligent switches .......................................................................... 52
    EMC RecoverPoint with intelligent switch splitters ................... 53
    RecoverPoint documentation .......................................................... 54

Chapter 4  SAN Connectivity and Interswitch Link Hops
    SAN connectivity overview ............................................................ 56
    Simple SAN topologies .................................................................... 57
        Connectrix B AP-7600B simple SAN topology ..................... 57
        Storage Services Module (SSM) blade in a simple SAN topology ... 58
        Node placement in a simple SAN topology — Connectrix B ... 60
        Node placement in a simple SAN topology — Connectrix MDS ... 61
    Complex SAN topologies ................................................................ 62
        Domain count ............................................................................. 62
        Core/edge ................................................................................... 63
        Edge/core/edge ........................................................................ 66
    Interswitch link hops ........................................................................ 72
        Overview .................................................................................... 72
        Example 1 — Two-hop solution .............................................. 73
        Example 2 — Preferred one-hop solution .............................. 74
        Example 3 — Three-hop solution ............................................ 75
        Example 4 — One-hop solution .............................................. 77

Glossary ......................................................................................................................... 79


Figures

1  VPLEX product offerings .............................................................................. 20
2  Basic VPLEX Metro DWDM configuration example ................................ 21
3  Basic VPLEX Metro FCIP configuration example ..................................... 22
4  VPLEX Geo topology over 1/10 GbE and Distance Extension Gear ...... 24
5  Architecture highlights .................................................................................. 26
6  VPLEX Geo topology example ..................................................................... 33
7  FCoE connection for Back-End ports of VPLEX Cluster-2 ....................... 34
8  Steps to configure VPLEX with FCoE switches ......................................... 35
9  ISLs between MDS 9148 and Nexus 5596 switches ................................... 36
10 VFC ports ........................................................................................................ 37
11 Symmetrix VMAX Masking View ............................................................... 41
12 Virtual volume provisioning and exporting process ................................ 42
13 RecoverPoint architecture example ............................................................. 46
14 Intelligent switch running RecoverPoint ................................................... 47
15 Logical topology ............................................................................................ 49
16 Example Back End and Front End VSAN .................................................. 50
17 Connectrix B AP-7600B simple SAN topology .......................................... 58
18 Connectrix MDS SSM blade in simple SAN topology .............................. 59
19 MDS 9513 with a 24-port line card and an SSM blade .............................. 60
20 Core/edge design .......................................................................................... 64
21 Core/edge design with a Connectrix B ...................................................... 65
22 Storage on a single switch ............................................................................ 67
23 Core/edge/core design with Brocade ........................................................ 68
24 Storage on multiple switches ....................................................................... 70
25 Storage on multiple switches with Connectrix B ...................................... 71
26 Four-switch mesh design using MDS 9506s (Example 1) ........................ 74
27 Four-switch mesh design using MDS 9506s (Example 2) ........................ 75
28 Four-switch mesh of ED-48000Bs with an AP-7600B as the intelligent switch (Example 3) ... 76
29 Four-switch mesh of MDS 9506s (Example 4) ........................................... 78


Preface

This EMC Engineering TechBook provides a high-level overview of using a block storage virtualization solution, such as EMC VPLEX, as well as EMC RecoverPoint replication protection. It also describes how RecoverPoint can be integrated into the design of a simple or complex topology. Basic information on interswitch link hops, along with examples showing hop counts in four different fabric configurations, is included.

EMC E-Lab would like to thank all the contributors to this document, including EMC engineers, EMC field personnel, and partners. Your contributions are invaluable.

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes. If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Audience
This TechBook is intended for EMC field personnel, including technology consultants, and for the storage architect, administrator, and operator involved in acquiring, managing, operating, or designing a networked storage environment that contains EMC and host devices.

EMC Support Matrix and E-Lab Interoperability Navigator
For the most up-to-date information, always consult the EMC Support Matrix (ESM), available through the EMC E-Lab Interoperability Navigator (ELN), at http://elabnavigator.EMC.com.


Related documentation
Related documents include several TechBooks and reference manuals. The following documents, including this one, are available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com:

• Backup and Recovery in a SAN TechBook
• Building Secure SANs TechBook
• Extended Distance Technologies TechBook
• Fibre Channel over Ethernet (FCoE): Data Center Bridging (DCB) Concepts and Protocols TechBook
• Fibre Channel over Ethernet (FCoE): Data Center Bridging (DCB) Case Studies TechBook
• Fibre Channel SAN Topologies TechBook
• iSCSI SAN Topologies TechBook
• Networked Storage Concepts and Protocols TechBook
• WAN Optimization Controller Technologies TechBook
• EMC Connectrix SAN Products Data Reference Manual
• Legacy SAN Technologies Reference Manual
• Non-EMC SAN Products Data Reference Manual

◆ EMC Support Matrix, available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com

◆ RSA security solutions documentation, which can be found at http://RSA.com > Content Library

◆ White papers include:

• Workload Resiliency with EMC VPLEX Best Practices Planning

All of the following documentation and release notes can be found at EMC Online Support at https://support.emc.com.

Hardware documents and release notes include those on:

◆ Connectrix B series
◆ Connectrix M series
◆ Connectrix MDS (release notes only)
◆ VNX series
◆ CLARiiON
◆ Celerra
◆ Symmetrix

Software documents include those on:

◆ EMC ControlCenter
◆ RecoverPoint
◆ TimeFinder
◆ PowerPath


The following E-Lab documentation is also available:

◆ Host Connectivity Guides
◆ HBA Guides

For Cisco and Brocade documentation, refer to the vendor’s website.

◆ http://cisco.com

◆ http://brocade.com

Authors of this TechBook
This TechBook was authored by Ron Dharma and Li Jiang, along with other EMC engineers, EMC field personnel, and partners.

Ron Dharma is a Principal Integration Engineer and team lead for the Advance Product Solution group in E-Lab. Prior to joining EMC, Ron was a SCSI software engineer, spending almost 11 years resolving integration issues in multiple SAN components. He dabbled in almost every aspect of the SAN, including storage virtualization, backup and recovery, point-in-time recovery, and distance extension. Ron provided the original information in this document, and works with other contributors to update and expand the content.

Li Jiang is a System Integration Engineer on the E-Lab I2/IOP team, qualifying VPLEX, data migrations, and interoperability between VPLEX and other EMC-supported products, such as Fibre Channel and Fibre Channel over Ethernet switches, network simulators, and servers. Li has contributed to maintaining EMC's Symmetrix storage systems in the E-Lab.

Conventions used in this document

EMC uses the following conventions for special notices:

IMPORTANT

An important notice contains information essential to software or hardware operation.

Note: A note presents information that is important, but not hazard-related.


Typographical conventions
EMC uses the following type style conventions in this document:

Normal          Used in running (nonprocedural) text for:
                • Names of interface elements, such as names of windows, dialog boxes, buttons, fields, and menus
                • Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, and utilities
                • URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, and notifications

Bold            Used in running (nonprocedural) text for names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, and man pages
                Used in procedures for:
                • Names of interface elements, such as names of windows, dialog boxes, buttons, fields, and menus
                • What the user specifically selects, clicks, presses, or types

Italic          Used in all text (including procedures) for:
                • Full titles of publications referenced in text
                • Emphasis, for example, a new term
                • Variables

Courier         Used for:
                • System output, such as an error message or script
                • URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold    Used for specific user input, such as commands

Courier italic  Used in procedures for:
                • Variables on the command line
                • User input variables

< >             Angle brackets enclose parameter or variable values supplied by the user

[ ]             Square brackets enclose optional values

|               Vertical bar indicates alternate selections — the bar means “or”

{ }             Braces enclose content that the user must specify, such as x or y or z

...             Ellipses indicate nonessential information omitted from the example


Where to get help
EMC support, product, and licensing information can be obtained on the EMC Online Support site, as described next.

Note: To open a service request through the EMC Online Support site, you must have a valid support agreement. Contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Product information
For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Online Support site (registration required) at:

https://support.EMC.com

Technical support
EMC offers a variety of support options.

Support by Product — EMC offers consolidated, product-specific information on the Web at:

https://support.EMC.com/products

The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussions, relevant Customer Support Forum entries, and a link to EMC Live Chat.

EMC Live Chat — Open a Chat or instant message session with an EMC Support Engineer.

eLicensing support
To activate your entitlements and obtain your Symmetrix license files, visit the Service Center on https://support.EMC.com, as directed on your License Authorization Code (LAC) letter e-mailed to you.

For help with missing or incorrect entitlements after activation (that is, expected functionality remains unavailable because it is not licensed), contact your EMC Account Representative or Authorized Reseller.

For help with any errors applying license files through Solutions Enabler, contact the EMC Customer Support Center.


If you are missing a LAC letter, or require further instructions on activating your licenses through the Online Support site, contact EMC's worldwide Licensing team at [email protected] or call:

◆ North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.

◆ EMEA: +353 (0) 21 4879862 and follow the voice prompts.

We'd like to hear from you!
Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Send your opinions of this document to:

[email protected]

Your feedback on our TechBooks is important to us! We want our books to be as helpful and relevant as possible. Send us your comments, opinions, and thoughts on this or any other TechBook to:

[email protected]


Chapter 1  Virtualization

This chapter contains the following information on virtualization:

◆ Introduction ........................................................................................ 14
◆ Benefits of VPLEX and storage virtualization ............................... 15
◆ Benefits of EMC RecoverPoint ......................................................... 16


Introduction
You can increase your storage infrastructure flexibility through storage virtualization. To enable simultaneous information access to heterogeneous storage arrays from multiple storage vendors within data centers, use a block storage virtualization solution, such as EMC VPLEX™, described in this TechBook in Chapter 2, ”EMC VPLEX.”

The VPLEX solution complements EMC's virtual storage infrastructure and provides a layer supporting virtual storage between the host computers running the applications of the data center and the storage arrays providing the physical storage used by these applications.

EMC block storage virtualization provides seamless data mobility and lets you manage multiple arrays from a single interface within and across a data center, providing the following benefits:

◆ Mobility — Achieve transparent mobility and access in and across a data center.

◆ Scalability — Start small and grow larger with predictable service levels.

◆ Performance — Improve I/O performance and reduce storage array contention with advanced data caching.

◆ Automation — Automate sharing, balancing, and failover of I/O across data centers.

◆ Resiliency — Mirror across arrays without host impact and increase high availability for critical applications.

EMC RecoverPoint provides cost-effective, continuous remote replication and continuous data protection, enabling on-demand protection and recovery to any point in time. Chapter 3, ”EMC RecoverPoint,” provides more information on RecoverPoint.


Benefits of VPLEX and storage virtualization
The EMC VPLEX™ products are inline, block-based virtualization products that share a common set of features and benefits. Storage virtualization offers numerous benefits, including (but not limited to) the following:

◆ Create an open storage environment capable of easily supporting the introduction of new technologies that are incorporated in the storage virtualizer. Customers can use heterogeneous storage arrays that do not have an interoperable feature set, such as heterogeneous array mirroring, and accomplish the mirroring using a storage virtualizer.

◆ Significantly decrease the amount of planned and unplanned downtime by providing the ability for online migration and data protection across heterogeneous arrays.

◆ Increase storage utilization and decrease the amount of “stranded” heterogeneous storage.

◆ Reduce management complexity through single-pane-of-glass management with simplified policies and procedures for heterogeneous storage arrays.

◆ Improve application, storage, and network performance (in optimized configurations) through load balancing and multipathing.

VPLEX provides the following key benefits:

◆ The combination of a virtualized data center and EMC VPLEX provides customers entirely new ways to solve IT problems and introduce new models of computing.

◆ VPLEX eliminates the boundaries of physical storage and allows information resources to be transparently pooled and shared over distance for new levels of efficiency, control, and choice.

◆ The VPLEX Local and VPLEX Metro products address the fundamental challenges of rapidly relocating applications and large amounts of information on demand within and across data centers, which is a key enabler of the private cloud.

Chapter 2, ”EMC VPLEX,” provides more details on VPLEX.


Benefits of EMC RecoverPoint
RecoverPoint provides the following key services in a data center:

◆ Intelligent fabric splitting — Allows data to be split and replicated in a manner that is transparent to the host. Having the switch intercept host writes and duplicate, or split, those writes to the RecoverPoint appliance allows for data replication with minimal impact to the host.

◆ Heterogeneous replication of data volumes — RecoverPoint is a truly heterogeneous replication solution, enabling the replication of data between different storage array volumes. This enables migration from a virtualized environment to a non-virtualized environment, or replication between different tiers of storage, allowing greater flexibility in the data center.

◆ Continuous data protection (CDP) — Refers to local replication of SAN volumes and enables fine-grained recovery at the block level. The fine granularity provides DVR-like point-in-time snapshots of the user volume.

◆ Continuous remote replication (CRR) via IP — Allows data replication across completely separate fabrics over distances limited only by IP technology. The RecoverPoint appliances transfer data from one appliance to another via TCP/IP, greatly extending the distance between source and target volumes.

◆ Continuous remote replication (CRR) via FC — Allows data replication within a fabric over distances limited only by FC distance extension technology.

◆ Compression and bandwidth reduction — By implementing block-level compression algorithms before sending traffic over the IP network, RecoverPoint significantly reduces the network cost required to meet RPO/RTO expectations.
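As a rough illustration of the compression benefit above, consider a back-of-the-envelope WAN sizing. The change rate and the 2:1 ratio below are illustrative assumptions, not RecoverPoint specifications; achievable compression depends on the data.

```python
# Back-of-the-envelope WAN sizing for remote replication.
# All numbers below are illustrative assumptions, not RecoverPoint specs.

def wan_mbps_needed(change_rate_mbps: float, compression_ratio: float) -> float:
    """WAN bandwidth needed to keep up with a sustained write change rate."""
    return change_rate_mbps / compression_ratio

# 100 Mb/s of sustained host writes, assuming 2:1 block-level compression:
print(wan_mbps_needed(100, 2.0))  # -> 50.0 Mb/s on the wire
```

Halving the required WAN bandwidth in this way is what lets a given link meet a tighter RPO, or a cheaper link meet the same one.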

For more information on RecoverPoint, refer to Chapter 3, ”EMC RecoverPoint.” Chapter 4, ”SAN Connectivity and Interswitch Link Hops,” explains how RecoverPoint can be integrated into the design of simple and complex topologies.


Chapter 2  EMC VPLEX

This chapter contains the following information on EMC VPLEX:

◆ VPLEX overview ................................................................................ 18
◆ VPLEX product family ...................................................................... 20
◆ VPLEX hardware................................................................................ 25
◆ VPLEX software ................................................................................. 28
◆ VPLEX configuration......................................................................... 31
◆ Configuring VPLEX with FCoE switches....................................... 32
◆ VPLEX documentation...................................................................... 43


VPLEX overview
This section contains the following information:

◆ “Product description” on page 18

◆ “VPLEX advantages” on page 18

◆ “SAN switches” on page 19

◆ “VPLEX limitations” on page 19

For complete details about VPLEX, refer to the documentation listed in “VPLEX documentation” on page 43.

Product description
The EMC VPLEX™ family delivers data mobility and availability within, across, and between data centers. VPLEX Local™ provides simplified management and mobility across heterogeneous arrays within a data center. VPLEX Metro™ provides availability and mobility between two VPLEX clusters within a data center and across synchronous distances. VPLEX Geo™ further dissolves the distances between data centers within asynchronous distances.

Note: See “VPLEX product family” on page 20 for more information.

With a unique scale-up and scale-out architecture, VPLEX's advanced data caching and distributed cache coherency provide continuous availability, non-disruptive data mobility, stretching of clusters across distance, workload mobility, automatic sharing, and load balancing, and enable local access within predictable service levels.

Note: For more information, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.

VPLEX advantages
VPLEX addresses three primary IT needs:

◆ Mobility: VPLEX moves applications and data between different storage installations:

• Within the same data center or across a campus (VPLEX Local),


• Within a geographical region (VPLEX Metro),

• Across even greater distances (VPLEX Geo).

◆ Availability: VPLEX creates high-availability storage infrastructure across these same varied geographies with unmatched resiliency.

◆ Collaboration: VPLEX provides efficient real-time data collaboration over distance for big data applications.

Note: For more unique innovations and advantages, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.

SAN switches
All EMC-recommended FC SAN switches are supported, including Brocade, Cisco, and QLogic.

VPLEX limitations
Always refer to the VPLEX Simple Support Matrix for the most up-to-date support information. Refer to the EMC VPLEX Release Notes for the most up-to-date capacity limitations.


VPLEX product family
The VPLEX family consists of the following, each briefly described in this section:

◆ “VPLEX Local” on page 20

◆ “VPLEX Metro” on page 21

◆ “VPLEX Geo” on page 24

Figure 1 shows the VPLEX product offerings.

Figure 1 VPLEX product offerings

VPLEX Local
VPLEX Local consists of a single cluster and is deployed within a single data center.

For more details about VPLEX Local, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.


VPLEX Metro
VPLEX Metro consists of two VPLEX clusters connected by inter-cluster links with not more than 5 ms RTT (round-trip time).

Note: Refer to VPLEX and vendor-specific white papers for confirmation of latency limitations.

For more details about VPLEX Metro, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.

VPLEX Metro topology for DWDM and FCIP
Figure 2 shows an example of a basic VPLEX Metro topology for DWDM.

Figure 2 Basic VPLEX Metro DWDM configuration example


Figure 3 shows an example of a basic VPLEX Metro topology for FCIP.

Figure 3 Basic VPLEX Metro FCIP configuration example

FCIP SCSI write command spoofing, Brocade fast-write, and Cisco write acceleration technologies

In GeoSynchrony version 5.2.1 and later, VPLEX introduced a short-write feature that is not compatible with the Brocade "fast-write" or Cisco "write-acceleration" FCIP features.

VPLEX short-write is enabled by default. Therefore, you must disable the Brocade fast-write or Cisco write-acceleration features prior to upgrading to GeoSynchrony 5.2.1 or later. To enable or disable the VPLEX short-write feature, issue the following command from the VPLEX CLI:

configuration short-write -status on | off


For more detailed information on the VPLEX short-write feature, refer to the EMC VPLEX 5.2.1 release notes, available at http://support.emc.com.

FCIP switches offer compression features that save bandwidth and throughput on the WAN COM links. Due to the nature of FCIP compression and the synchronous transactions of VPLEX Metro, there is an expected minor impact to the bandwidth and throughput for the hosts connected to VPLEX distributed devices. You are therefore required to disable FCIP compression for both Cisco and Brocade.

For more information about how to set up and manage FCIP features, refer to the "FCIP" section of the Cisco CLI Administrator's Guide at http://cisco.com or the Brocade Fabric OS FCIP Administrator's Guide at http://brocade.com.

For more information about DWDM and FCIP technologies, refer to the Extended Distance Technologies TechBook, available at http://elabnavigator.EMC.com.

For more details about VPLEX Metro, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.


VPLEX Geo

VPLEX Geo consists of two VPLEX clusters connected by inter-cluster links with no more than 50 ms RTT.

Note: VPLEX Geo implements asynchronous data transfer between the two sites.

VPLEX Geo topology over 1/10 GbE and Distance Extension Gear

Figure 4 shows a VPLEX Geo topology over 1/10 GbE and Distance Extension Gear.

Figure 4 VPLEX Geo topology over 1/10 GbE and Distance Extension Gear

For more details about VPLEX Geo, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.


VPLEX hardware

For complete details about VPLEX hardware, including the engine, directors, power supplies, and best practices, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com. Also refer to the VPLEX online help, available on the Management Console GUI.

For VPLEX VS1 and VS2 hardware configurations, refer to the EMC VPLEX Configuration Guide, located at EMC Online Support at https://support.emc.com.

This section contains the following information:

◆ "VPLEX cluster"

VPLEX cluster

There are two generations of VS hardware: VS1 and VS2. A VPLEX cluster (either VS1 or VS2) consists of:

◆ 1, 2, or 4 VPLEX engines.

• Each engine contains two directors.

• Dual-engine or quad-engine clusters contain:

– 1 pair of Fibre Channel switches for communication between directors.

– 2 uninterruptible power sources (UPS) for battery power backup of the Fibre Channel switches and the management server.

◆ A management server.

◆ Ethernet or Fibre Channel cabling and respective switching hardware connect the distributed VPLEX hardware components.

◆ I/O modules provide front-end and back-end connectivity between SANs and to remote VPLEX clusters in VPLEX Metro or VPLEX Geo configurations.

Note: In the current release of EMC GeoSynchrony™, VS1 and VS2 hardware cannot co-exist in a cluster, except in a VPLEX Local cluster during a nondisruptive hardware upgrade from VS1 to VS2.


For more details about the VPLEX cluster and architecture, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.

An example of the architectural highlights is shown in Figure 5.

Figure 5 Architecture highlights

For more information and interoperability details, refer to the EMC Simple Support Matrix, EMC VPLEX and GeoSynchrony, available at http://elabnavigator.EMC.com under the Simple Support Matrix tab.

Management server

Both clusters in either a VPLEX Metro or VPLEX Geo configuration can be managed from a single management server.

For more details about the management server, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.


Fibre Channel switches

The Fibre Channel switches provide high availability and redundant connectivity between directors and engines in a dual-engine or quad-engine cluster.

For more details, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.

To begin using the VPLEX cluster, you must provision and export storage so that hosts and applications can use the storage. For more details, refer to the VPLEX Installation and Setup Guide, located at EMC Online Support at https://support.emc.com.


VPLEX software

This section contains the following information:

◆ "GeoSynchrony"

◆ "VPLEX management"

◆ "Provisioning with VPLEX"

◆ "Consistency groups"

◆ "Cache vaulting"

For complete details about VPLEX software, including EMC GeoSynchrony, management interfaces, provisioning with VPLEX, consistency groups, and cache vaulting, refer to the EMC VPLEX GeoSynchrony Product Guide and the EMC VPLEX GeoSynchrony Administration Guide, located at EMC Online Support at https://support.emc.com. Also refer to the VPLEX online help, available on the Management Console GUI.

GeoSynchrony

EMC GeoSynchrony is the operating system running on VPLEX directors on both VS1 and VS2 hardware.

Note: For more information, refer to the EMC VPLEX GeoSynchrony Product Guide and the EMC VPLEX GeoSynchrony Administration Guide, located at EMC Online Support at https://support.emc.com.

VPLEX management

VPLEX includes a selection of management interfaces:

◆ Web-based GUI

Provides an easy-to-use, point-and-click management interface.

◆ VPLEX CLI

The VPLEX command line interface (CLI) supports all VPLEX operations.

◆ VPLEX Element Manager API

Software developers and other users use the API to create scripts to run VPLEX CLI commands.


◆ SNMP support for performance statistics

Supports retrieval of performance-related statistics as published in the VPLEX-MIB.mib.

◆ LDAP/AD support

VPLEX offers Lightweight Directory Access Protocol (LDAP) or Active Directory as an authentication directory service.

◆ Call home

Leading technology that alerts EMC support personnel of warnings in VPLEX.

In VPLEX Metro and VPLEX Geo configurations, both clusters can be managed from either management server.

Inside VPLEX clusters, management traffic traverses a TCP/IP-based private management network.

In VPLEX Metro and VPLEX Geo configurations, management traffic traverses a VPN tunnel between the management servers on both clusters.

For more information on using these interfaces, refer to the EMC VPLEX Management Console Help, the EMC VPLEX GeoSynchrony Product Guide, the EMC VPLEX CLI Guide, and the EMC VPLEX GeoSynchrony Administration Guide, available at EMC Online Support at https://support.emc.com.

Provisioning with VPLEX

VPLEX allows easy storage provisioning among heterogeneous storage arrays. Use the web-based GUI to simplify everyday provisioning or to create complex devices.

There are three ways to provision storage in VPLEX:

◆ Integrated storage provisioning (VPLEX integrated array services-based provisioning)

◆ EZ provisioning

◆ Advanced provisioning

All provisioning features are available in the Unisphere for VPLEX GUI.


For more information on provisioning with VPLEX, refer to the EMC VPLEX GeoSynchrony Product Guide and the EMC VPLEX GeoSynchrony Administration Guide, located at EMC Online Support at https://support.emc.com.

Consistency groups

VPLEX consistency groups aggregate volumes to enable the application of a common set of properties to the entire group.

There are two types of consistency groups:

◆ Synchronous

Aggregate local and distributed volumes on VPLEX Local and VPLEX Metro systems separated by 5 ms or less of latency.

◆ Asynchronous

Aggregate distributed volumes on VPLEX Geo systems separated by 50 ms or less of latency.

For more information on consistency groups, refer to the EMC VPLEX GeoSynchrony Product Guide and the EMC VPLEX GeoSynchrony Administration Guide, located at EMC Online Support at https://support.emc.com.

Cache vaulting

VPLEX uses the individual directors' memory systems to ensure durability of user data and critical system configuration data.

For more information on cache vaulting, refer to the EMC VPLEX GeoSynchrony Product Guide and the EMC VPLEX GeoSynchrony Administration Guide, located at EMC Online Support at https://support.emc.com.


VPLEX configuration

For VPLEX VS1 and VS2 hardware configurations, refer to the EMC VPLEX Configuration Guide, located at EMC Online Support at https://support.emc.com.

For VPLEX cluster configurations, refer to the EMC VPLEX GeoSynchrony Product Guide, located at EMC Online Support at https://support.emc.com.


Configuring VPLEX with FCoE switches

EMC currently supports connecting EMC Fibre Channel over Ethernet (FCoE) arrays (EMC VMAX and EMC VNX) to VPLEX Back-End ports via FCoE switches, such as the Nexus 5596UP/5548UP (UP = unified ports). The solution presented in this section uses the Nexus 5596UP switch to bridge both VMAX and VNX FCoE storage ports and VPLEX FC Back-End ports. Both FC and FCoE ports are zoned together to enable access to the VMAX and VNX FCoE ports from VPLEX.

This section contains the following information:

◆ "FCoE goal"

◆ "Topology overview"

◆ "VPLEX with FCoE switches configuration steps"

FCoE goal

I/O consolidation has long been sought by the IT industry to unify multiple transport protocols (storage, network, management, and heartbeats) in the data center. This section provides a basic VPLEX configuration solution for Fibre Channel over Ethernet (FCoE), which allows the I/O consolidation defined by the FC-BB-5 T11 work group. I/O consolidation, simply defined, is the ability to carry different types of traffic, having different traffic characteristics and handling requirements, over the same physical media.

I/O consolidation's most difficult challenge is to satisfy the requirements of different traffic classes within a single network. Since Fibre Channel (FC) is the dominant storage protocol in the data center, any viable I/O consolidation solution for storage must allow for the FC model to be seamlessly integrated. FCoE meets this requirement in part by encapsulating each Fibre Channel frame inside an Ethernet frame and introducing Data Center Networking that guarantees low-latency, lossless traffic.

The goal of FCoE is to provide I/O consolidation over Ethernet, allowing Fibre Channel and Ethernet networks to share a single, integrated infrastructure, thereby reducing network complexities in the data center.


Assumptions

This use case assumes the Nexus 55xx is configured in switch mode to allow the VPLEX Fibre Channel Back-End ports to connect directly via Universal Ports operating in Fibre Channel mode. Additionally, users can tie their Fibre Channel fabric infrastructures to FCoE storage ports.

Topology overview

This section provides a VPLEX Geo topology diagram followed by a detailed diagram showing an FCoE connection for the Back-End ports of VPLEX Cluster-2.

This example uses VPLEX Geo. However, VPLEX Metro or VPLEX Local can also be used.

The topology shown in Figure 6 is the recommended configuration for connecting storage array FCoE ports to VPLEX.

Figure 6 VPLEX Geo topology example

Figure 6 shows the Back-End ports on VPLEX Cluster-2 connected directly to the Cisco Nexus 5596 FCoE switch via its FC ports. The Universal Ports must be set to FC mode at a maximum of 8 Gb/s. FCoE storage array ports are connected to the Cisco Nexus 5596 via Ethernet ports.


Figure 7 is a detailed diagram showing an FCoE connection for the Back-End ports of VPLEX Cluster-2.

Figure 7 FCoE connection for Back-End ports of VPLEX Cluster-2


VPLEX with FCoE switches configuration steps

This section provides the steps to configure VPLEX with FCoE switches, as shown in Figure 6 and Figure 7. Figure 8 provides an overview of these steps.

Figure 8 Steps to configure VPLEX with FCoE switches

These steps are further detailed in this section.

Step 1 Attach VPLEX FC Back-End (BE) ports directly to Nexus 5596 FC ports

For instructions on modifying a UP port to an FC port, refer to the Nexus 5500 Series NX-OS SAN Switching Configuration Guide at http://www.cisco.com. In Figure 7, Nexus 5596 ports 32 through 48 are configured as Fibre Channel. The VPLEX BE FC ports must be connected to FC ports to support directly connecting the VPLEX BE to the FCoE storage.
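As a sketch under the assumptions of this example (slot 1, ports 32 through 48), converting a block of unified ports to FC mode on the Nexus 5596 might look like the following; note that port-type changes on this platform take effect only after a reload, so verify the procedure in the Cisco guide before applying it:

```
switch# configure terminal
switch(config)# slot 1
switch(config-slot)# port 32-48 type fc    ! convert unified ports 32-48 to Fibre Channel
switch(config-slot)# exit
switch(config)# exit
switch# copy running-config startup-config
switch# reload                             ! required for the port-type change to take effect
```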


Step 2 Add ISLs between MDS 9148 and Nexus 5596

As shown in Figure 9, two ISLs have been added between the two switches to protect against a single link failure.

Figure 9 ISLs between MDS 9148 and Nexus 5596 switches

To configure MDS 9148 ports for ISLs, trunking, or port channeling, refer to the Cisco MDS 9000 Family NX-OS Interfaces Configuration Guide at http://www.cisco.com. Additionally, refer to the Cisco Nexus 5500 Series NX-OS SAN Switching Configuration Guide for creating ISLs, trunking, and port channels on the Nexus 5596.
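As an illustrative sketch (the FC interface numbers and port-channel ID are assumptions, not values from this example), the two ISLs can be aggregated into a SAN port channel on the Nexus 5596 side:

```
switch# configure terminal
switch(config)# interface san-port-channel 10
switch(config-if)# switchport mode E
switch(config)# interface fc1/47-48
switch(config-if)# switchport mode E
switch(config-if)# channel-group 10        ! add both ISL members to the port channel
switch(config-if)# no shutdown
```

On the MDS 9148, the corresponding aggregate is configured as interface port-channel 10 with the matching member ports; see the Cisco guides referenced above for the full procedure.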

Step 3 Create a VFC port for each FCoE array port

Refer to the Cisco Switch Command Reference manual at http://www.cisco.com for detailed information about how to create a virtual Fibre Channel interface.


Figure 10 shows the VFC ports on the switch.

Figure 10 VFC ports

Note that the Ethernet interface you bind the virtual Fibre Channel interface to must be configured as follows:

◆ It must be a trunk port (use the switchport mode trunk command).

◆ The FCoE VLAN that corresponds to the virtual Fibre Channel interface's VSAN must be in the allowed VLAN list.

◆ The FCoE VLAN must not be configured as the native VLAN of the trunk port.

◆ The Ethernet interface must be configured as PortFast (use the spanning-tree port type edge trunk command).
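The requirements above can be sketched as the following NX-OS configuration, where VLAN 200, VSAN 2, Ethernet 1/3, and vfc 3 are assumed example values:

```
switch# configure terminal
switch(config)# vlan 200
switch(config-vlan)# fcoe vsan 2                          ! map FCoE VLAN 200 to VSAN 2
switch(config)# interface ethernet 1/3
switch(config-if)# switchport mode trunk                  ! trunk port requirement
switch(config-if)# switchport trunk allowed vlan 200      ! FCoE VLAN in the allowed list
switch(config-if)# spanning-tree port type edge trunk     ! PortFast requirement
switch(config)# interface vfc 3
switch(config-if)# bind interface ethernet 1/3            ! bind VFC to the Ethernet interface
switch(config-if)# no shutdown
switch(config)# vsan database
switch(config-vsan-db)# vsan 2 interface vfc 3            ! place the VFC in VSAN 2
```

Because VLAN 200 is not the trunk's native VLAN in this sketch, the native-VLAN restriction is also satisfied.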

Step 4 Create a zone

Zone the storage to the VPLEX Back-End ports, following the recommendations in the Implementation and Planning Best Practices for EMC VPLEX Technical Notes located at https://support.emc.com.

The following output shows the zoning configuration for the VPLEX Back-End ports and storage. Note that a redundant path from VPLEX to storage is recommended.

zone name VPLEX_D21A_BE_A1FC00_VMAX_0382_9G0 vsan 2
  pwwn 50:00:14:42:60:12:d0:10
  pwwn 50:00:09:73:00:05:f9:a0

zone name VPLEX_D21A_BE_A1FC00_VMAX_0382_9H0 vsan 2
  pwwn 50:00:14:42:60:12:d0:10
  pwwn 50:00:09:73:00:05:f9:e0

zone name VPLEX_D21A_BE_A1FC01_VMAX_0382_10G0 vsan 2
  pwwn 50:00:14:42:60:12:d0:11
  pwwn 50:00:09:73:00:05:f9:a4

zone name VPLEX_D21A_BE_A1FC01_VMAX_0382_10H0 vsan 2
  pwwn 50:00:14:42:60:12:d0:11
  pwwn 50:00:09:73:00:05:f9:e4

zone name VPLEX_D21B_BE_B1FC01_VMAX_0382_10G0 vsan 2
  pwwn 50:00:14:42:70:12:d0:11
  pwwn 50:00:09:73:00:05:f9:a4

zone name VPLEX_D21B_BE_B1FC01_VMAX_0382_10H0 vsan 2
  pwwn 50:00:14:42:70:12:d0:11
  pwwn 50:00:09:73:00:05:f9:e4

zone name VPLEX_D22A_BE_A1FC00_VMAX_0382_9G0 vsan 2
  pwwn 50:00:14:42:60:12:5b:10
  pwwn 50:00:09:73:00:05:f9:a0

zone name VPLEX_D22A_BE_A1FC00_VMAX_0382_9H0 vsan 2
  pwwn 50:00:14:42:60:12:5b:10
  pwwn 50:00:09:73:00:05:f9:e0

zone name VPLEX_D22A_BE_A1FC01_VMAX_0382_10G0 vsan 2
  pwwn 50:00:14:42:60:12:5b:11
  pwwn 50:00:09:73:00:05:f9:a4

zone name VPLEX_D22A_BE_A1FC01_VMAX_0382_10H0 vsan 2
  pwwn 50:00:14:42:60:12:5b:11
  pwwn 50:00:09:73:00:05:f9:e4

zone name VPLEX_D22B_BE_B1FC00_VMAX_0382_9G0 vsan 2
  pwwn 50:00:14:42:70:12:5b:10
  pwwn 50:00:09:73:00:05:f9:a0

zone name VPLEX_D22B_BE_B1FC00_VMAX_0382_9H0 vsan 2
  pwwn 50:00:14:42:70:12:5b:10
  pwwn 50:00:09:73:00:05:f9:e0

zone name VPLEX_D21B_BE_B1FC00_VMAX_0382_9G0 vsan 2
  pwwn 50:00:14:42:70:12:d0:10
  pwwn 50:00:09:73:00:05:f9:a0

zone name VPLEX_D21B_BE_B1FC00_VMAX_0382_9H0 vsan 2
  pwwn 50:00:14:42:70:12:d0:10
  pwwn 50:00:09:73:00:05:f9:e0

zone name VPLEX_D22B_BE_B1FC01_VMAX_0382_10G0 vsan 2
  pwwn 50:00:14:42:70:12:5b:11
  pwwn 50:00:09:73:00:05:f9:a4

zone name VPLEX_D22B_BE_B1FC01_VMAX_0382_10H0 vsan 2
  pwwn 50:00:14:42:70:12:5b:11
  pwwn 50:00:09:73:00:05:f9:e4

zone name VPLEX_D21A_BE_A1FC00_VNX_13a_SPA0 vsan 2
  pwwn 50:00:14:42:60:12:d0:10
  pwwn 50:06:01:60:46:e4:01:3a

zone name VPLEX_D21A_BE_A1FC00_VNX_13a_SPB0 vsan 2
  pwwn 50:00:14:42:60:12:d0:10
  pwwn 50:06:01:68:46:e4:01:3a

zone name VPLEX_D21A_BE_A1FC01_VNX_13a_SPA1 vsan 2
  pwwn 50:00:14:42:60:12:d0:11
  pwwn 50:06:01:61:46:e4:01:3a

zone name VPLEX_D21A_BE_A1FC01_VNX_13a_SPB1 vsan 2
  pwwn 50:00:14:42:60:12:d0:11
  pwwn 50:06:01:69:46:e4:01:3a

zone name VPLEX_D21B_BE_B1FC00_VNX_13a_SPA0 vsan 2
  pwwn 50:00:14:42:70:12:d0:10
  pwwn 50:06:01:60:46:e4:01:3a

zone name VPLEX_D21B_BE_B1FC00_VNX_13a_SPB0 vsan 2
  pwwn 50:00:14:42:70:12:d0:10
  pwwn 50:06:01:68:46:e4:01:3a

zone name VPLEX_D21B_BE_B1FC01_VNX_13a_SPA1 vsan 2
  pwwn 50:00:14:42:70:12:d0:11
  pwwn 50:06:01:61:46:e4:01:3a

zone name VPLEX_D21B_BE_B1FC01_VNX_13a_SPB1 vsan 2
  pwwn 50:00:14:42:70:12:d0:11
  pwwn 50:06:01:69:46:e4:01:3a

zone name VPLEX_D22A_BE_A1FC00_VNX_13a_SPA0 vsan 2
  pwwn 50:00:14:42:60:12:5b:10
  pwwn 50:06:01:60:46:e4:01:3a

zone name VPLEX_D22A_BE_A1FC00_VNX_13a_SPB0 vsan 2
  pwwn 50:00:14:42:60:12:5b:10
  pwwn 50:06:01:68:46:e4:01:3a

zone name VPLEX_D22A_BE_A1FC01_VNX_13a_SPA1 vsan 2
  pwwn 50:00:14:42:60:12:5b:11
  pwwn 50:06:01:61:46:e4:01:3a

zone name VPLEX_D22A_BE_A1FC01_VNX_13a_SPB1 vsan 2
  pwwn 50:00:14:42:60:12:5b:11
  pwwn 50:06:01:69:46:e4:01:3a

zone name VPLEX_D22B_BE_B1FC00_VNX_13a_SPA0 vsan 2
  pwwn 50:00:14:42:70:12:5b:10
  pwwn 50:06:01:60:46:e4:01:3a

zone name VPLEX_D22B_BE_B1FC00_VNX_13a_SPB0 vsan 2
  pwwn 50:00:14:42:70:12:5b:10
  pwwn 50:06:01:68:46:e4:01:3a

zone name VPLEX_D22B_BE_B1FC01_VNX_13a_SPA1 vsan 2
  pwwn 50:00:14:42:70:12:5b:11
  pwwn 50:06:01:61:46:e4:01:3a

zone name VPLEX_D22B_BE_B1FC01_VNX_13a_SPB1 vsan 2
  pwwn 50:00:14:42:70:12:5b:11
  pwwn 50:06:01:69:46:e4:01:3a
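Once defined, the zones must be collected into a zoneset and activated in VSAN 2. A minimal sketch follows (the zoneset name is an assumption; the final commit line applies only if enhanced zoning is enabled):

```
switch(config)# zoneset name VPLEX_BE_ZS vsan 2
switch(config-zoneset)# member VPLEX_D21A_BE_A1FC00_VMAX_0382_9G0
switch(config-zoneset)# member VPLEX_D21A_BE_A1FC00_VMAX_0382_9H0
! ... one member entry per zone defined above ...
switch(config-zoneset)# exit
switch(config)# zoneset activate name VPLEX_BE_ZS vsan 2
switch(config)# zone commit vsan 2    ! only when enhanced zoning is in use
```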

Step 5 Provision LUNs via FCoE

Figure 11 shows the VMAX Masking View. Devices mapped to the storage FCoE ports will be discovered by VPLEX.

Refer to the EMC Solutions Enabler Symmetrix CLI Command Reference manual, located at https://support.emc.com, for detailed information on how to create a Masking View on VMAX.
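As a hedged Solutions Enabler sketch only (the group and view names, device range, director port, and initiator WWN are illustrative assumptions, not values from this example; argument syntax varies by release), creating a masking view involves an initiator group, a port group, a storage group, and the view itself:

```
symaccess -sid 0382 create -name VPLEX_BE_ig -type initiator -wwn 500014426012d010
symaccess -sid 0382 create -name FCoE_pg -type port -dirport 9G:0
symaccess -sid 0382 create -name VPLEX_sg -type storage devs 0A00:0A0F
symaccess -sid 0382 create view -name VPLEX_BE_view -ig VPLEX_BE_ig -pg FCoE_pg -sg VPLEX_sg
```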


Figure 11 Symmetrix VMAX Masking View


Step 6 Expose virtual volumes to hosts

The virtual volume from VPLEX is the logical storage construct presented to hosts. A virtual volume can be created with restricted access to specified users or applications, providing additional security.

Figure 12 illustrates the virtual volume provisioning and exporting process. Provisioning and exporting storage refers to taking storage from a storage array and making it visible to a host.
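The provision-and-export flow can be sketched in the VPLEX CLI roughly as follows. The object names are illustrative assumptions, and exact command arguments vary by GeoSynchrony release; treat this as an outline and consult the EMC VPLEX CLI Guide for the authoritative syntax:

```
VPlexcli:/> storage-volume claim -d Symm0382_0A00            # claim a discovered storage volume
VPlexcli:/> extent create -d Symm0382_0A00                   # create an extent on the claimed volume
VPlexcli:/> local-device create -g raid-0 -e extent_1 -n dev_vmax_1
VPlexcli:/> virtual-volume create -r dev_vmax_1              # create the virtual volume
VPlexcli:/> export storage-view addvirtualvolume -v host_view -o dev_vmax_1_vol
```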

Figure 12 Virtual volume provisioning and exporting process

For more detailed configuration information, refer to the VPLEX 5.1 documentation portfolio, located at https://support.emc.com.


VPLEX documentation

Numerous VPLEX documents, technical notes, and white papers, some listed below, are available at EMC Online Support at https://support.emc.com, including:

◆ EMC VPLEX GeoSynchrony Product Guide

◆ EMC VPLEX GeoSynchrony CLI Guide

◆ EMC VPLEX GeoSynchrony Configuration Guide

◆ EMC VPLEX GeoSynchrony Administration Guide

◆ EMC VPLEX Hardware Installation Guide

◆ EMC VPLEX with GeoSynchrony Release Notes

◆ VPLEX online help, available on the Management Console GUI

◆ VPLEX Procedure Generator, available at EMC Online Support at https://support.emc.com

◆ EMC Simple Support Matrix, EMC VPLEX and GeoSynchrony, available at http://elabnavigator.EMC.com

For host-specific VPLEX information, see the appropriate Host Connectivity Guides available at EMC Online Support at https://support.emc.com.

For the most up-to-date support information, always refer to the EMC Support Matrix.


EMC RecoverPoint

This chapter contains the following information on EMC RecoverPoint:

◆ "RecoverPoint architecture"

◆ "RecoverPoint components"

◆ "Logical topology"

◆ "Local and remote replication"

◆ "Intelligent switches"

◆ "EMC RecoverPoint with intelligent switch splitters"

◆ "RecoverPoint documentation"

Note: For detailed information on RecoverPoint with Fibre Channel over Ethernet (FCoE), refer to the "Solutions" chapter in the Fibre Channel over Ethernet (FCoE): Data Center Bridging (DCB) Concepts and Protocols TechBook, available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.


RecoverPoint architecture

RecoverPoint uses a combination of hardware appliances configured out of the direct data path ("out of band") and a splitter driver to replicate data from the designated source volume to the configured target volume. RecoverPoint uses intelligent Fibre Channel switches to intercept and "split" incoming writes. The intelligent Fibre Channel switch sends a copy of each incoming write to the RecoverPoint Appliance while passing the original write to its original destination. The RecoverPoint Appliance then sends the "split" write to the proper destination volume, either through TCP/IP or Fibre Channel.

Traffic routed by the intelligent switch does not introduce protocol overhead. All virtualized I/O is routed through ASICs on the intelligent switch or blade. The payload (SCSI data frame) is not affected by this operation, as the ASIC does not read or modify the content but rather reads only the protocol header information to make routing decisions.

Figure 13 shows an example of RecoverPoint architecture.

Figure 13 RecoverPoint architecture example



The splitter residing in the intelligent switch is responsible for duplicating the write traffic and sending the additional write to the RecoverPoint appliance, as shown in Figure 14.

Figure 14 Intelligent switch running RecoverPoint

RecoverPoint offers:

◆ Continuous remote replication (CRR) between LUNs across two SANs

◆ Continuous data protection (CDP) between LUNs on the same SAN

◆ Concurrent local and remote (CLR) data protection between LUNs across two SANs

◆ Heterogeneous operating system and storage array support

◆ The server's I/O adapters can be FC (HBA) or FCoE (CNA)

◆ The storage array can be FC- or FCoE-connected, depending on the splitter strategy

◆ Integration with intelligent fabrics



RecoverPoint components

There are five major components in a RecoverPoint installation:

◆ Virtual Target (VT) — This is the virtual representation of the physical storage that is used as either the source or destination for replicated data.

◆ Virtual Initiator (VI) — This is the virtual representation of the physical host that is issuing I/O to the source volume.

◆ Intelligent Fabric Splitter — This is the intelligent-switch hardware that contains specialized port-level processors (ASICs) to perform virtualization operations on I/O at line speed. This functionality is available from two vendors: Brocade and Cisco.

The Brocade splitter resides on the AP4-18 intelligent blade, which can be installed in a Brocade DCX or Brocade 48000 chassis. It can also run directly on the AP-7600 intelligent switch.

The Cisco splitter resides on the Storage Services Module (SSM) or Multiservice Module intelligent blades, which can be installed in an MDS 9513, 9509, 9506, 9216i, 9216A, or 9222i. It can also run directly on an MDS 9222i without an intelligent blade.

◆ RecoverPoint Appliances (RPA) — These appliances are Linux-based boxes that accept the "split" data and route the data to the appropriate destination volume, using either IP or Fibre Channel. The RPA also acts as the sole management interface to the RecoverPoint installation.

◆ Remote Replication — Two RecoverPoint Appliance (RPA) clusters can be connected via TCP/IP or FC in order to perform replication to a remote location. RPA clusters connected via TCP/IP for remote communication transfer "split" data via IP to the remote cluster. The target cluster's distance from the source is limited only by the physical limitations of TCP/IP.

RPA clusters can also be connected remotely via Fibre Channel. They can reside on the same fabric or on different fabrics, as long as the two clusters can be zoned together. The target cluster's distance from the source is again limited only by the physical limitations of FC. RPA clusters can support distance extension hardware (such as DWDM) to extend the distance between clusters.


Logical topology

There are Virtual Initiators (VI) and Virtual Targets (VT). A host (initiator) or storage (target) is physically connected to the fabric. The VI and VT are logical entities that are accessible from any port on the chassis or in the fabric, as shown in Figure 15.

Figure 15 Logical topology

The exact method of this virtualization differs between the Cisco SANTap solution and the Brocade FAP solution.

The SANTap architecture uses VSANs to physically separate the physical host from the physical storage and bridges the VSANs via virtualization internal to the intelligent switch. When a virtual target (DVT) is created, a duplicate WWN is created and placed in the "Front End" VSAN, that is, the VSAN that encompasses the physical hosts. The physical hosts then communicate with the virtual target. SANTap then also creates virtual initiators (VI), duplicates the host WWNs, and places them in the "Back End" VSAN, which is the VSAN that encompasses the physical storage. The VIs then communicate with the physical storage. SANTap also creates virtual entries representing the splitter itself, and communication between the RPAs and the intelligent switch happens via these virtual entries.


Figure 16 shows example "Back End" and "Front End" VSANs and the duplicate WWNs in both VSANs.

Figure 16 Example Back End and Front End VSANs

The Brocade FAP architecture uses Frame Redirect to channel all host I/O through the corresponding bound VI. When a host is bound to a particular physical target, a virtual target is created with a new WWN. The separation in this case is handled internally by the switch. The physical target remains zoned with the physical storage. Data flows from the physical host to the virtual target, and then from the bound VI to the physical storage.


Local and remote replication

Replication by RecoverPoint is based on a logical entity called a consistency group. SAN-attached storage volumes at the primary and secondary sites, called replication volumes, are assigned to a consistency group to define the set of data to be replicated. An application, such as Microsoft Exchange, typically has its storage resources defined in a single consistency group, so there is a mapping between an application and a consistency group.

EMC RecoverPoint replicates data over any distance:

◆ Within the same site or to a local bunker site some distance away

Note: Refer to "CDP configurations" in the EMC RecoverPoint Administrator's Guide, located at EMC Online Support at https://support.emc.com.

◆ To a remote site

Note: Refer to "CRR configurations" in the EMC RecoverPoint Administrator's Guide, located at EMC Online Support at https://support.emc.com.

◆ To both a local and a remote site simultaneously

Note: Refer to "CLR configurations" in the EMC RecoverPoint Administrator's Guide, located at EMC Online Support at https://support.emc.com.

RecoverPoint replication phases

The three major phases performed by RecoverPoint to guarantee data consistency and availability (RTO and RPO) during replication are:

◆ Write phase (the splitting phase)

◆ Transfer phase

◆ Distribution phase


Intelligent switches

There are several intelligent switches that provide fabric-level I/O splitting for the RecoverPoint instance:

◆ Connectrix AP-7600

◆ Connectrix ED-48000B

◆ Connectrix MDS Storage Services Module (SSM)

For more information on the intelligent switches, refer to the product data information in the EMC Connectrix Products Data TechBook, available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.


EMC RecoverPoint with intelligent switch splittersEMC RecoverPoint is an enterprise-scale solution designed to protectapplication data on heterogeneous SAN-attached servers and storagearrays. RecoverPoint runs on an out-of-path appliance and combinesindustry-leading continuous data protection technology with abandwidth-efficient no data loss replication technology that protectsdata on local and remote sites. EMC RecoverPoint provides local andremote data protection, enabling reliable replication of data over anydistance; that is, locally within the same site or remotely to anothersite using Fibre Channel or an IP network to send the data over aWAN. For long distances, RecoverPoint uses either Fibre Channel orIP network to send data over a WAN.

RecoverPoint uses existing Fibre Channel and Fibre Channel over Ethernet (FCoE) infrastructure to integrate seamlessly with existing host application and data storage subsystems. For detailed information on RecoverPoint with Fibre Channel over Ethernet (FCoE), refer to the “Solutions” chapter in the Fibre Channel over Ethernet (FCoE): Data Center Bridging (DCB) Concepts and Protocols TechBook, available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.

RecoverPoint's fabric splitter leverages intelligent SAN-switch hardware from EMC's Connectrix partners. This section focuses on topology considerations where the data splitter resides on an intelligent SAN switch.

To locate more detailed information about RecoverPoint, refer to “RecoverPoint documentation” on page 54.

RecoverPoint documentation

There are a number of documents for RecoverPoint-related information, all of which can be found at EMC Online Support at https://support.emc.com, including:

◆ Release notes

Information in the release notes includes:

• Support parameters
• Supported configurations
• New features and functions
• SAN compatibility
• Bug fixes
• Expected behaviors
• Technical notes
• Documentation issues
• Upgrade information

◆ EMC RecoverPoint Administrator’s Guide

◆ RecoverPoint maintenance and administration documents

◆ RecoverPoint installation and configuration documents

◆ RecoverPoint-related white papers

◆ EMC Support Matrix, located at http://elabnavigator.emc.com

For detailed information on RecoverPoint with Fibre Channel over Ethernet (FCoE), refer to the “Solutions” chapter in the Fibre Channel over Ethernet (FCoE): Data Center Bridging (DCB) Concepts and Protocols TechBook, available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.

SAN Connectivity and Interswitch Link Hops

This chapter includes SAN connectivity information and describes how RecoverPoint can be integrated into the design of a simple or complex topology.

◆ SAN connectivity overview .............................................................. 56
◆ Simple SAN topologies ..................................................................... 57
◆ Complex SAN topologies ................................................................. 62
◆ Interswitch link hops ......................................................................... 72

SAN connectivity overview

RecoverPoint uses intelligent switches, which allow for seamless connectivity into a Fibre Channel switched fabric.

There are a number of different SAN topologies, classified in two categories, simple and complex, which this chapter discusses further.

RecoverPoint can be integrated into the design of either topology type.

◆ “Simple SAN topologies” on page 57

Note: For more detailed information on simple SAN topologies, refer to the “Simple Fibre Channel SAN topologies” chapter in the Fibre Channel SAN Technologies TechBook, available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.

◆ “Complex SAN topologies” on page 62

Note: For more detailed information on complex SAN topologies, refer to the “Complex Fibre Channel SAN topologies” chapter in the Fibre Channel SAN Technologies TechBook, available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com.

Simple SAN topologies

RecoverPoint depends on the intelligent switches to handle data splitting to the RecoverPoint Appliance.

The intelligent switches can act as stand-alone FC switches as well as Layer 2 FC switches. In small SANs, or with small virtualization implementations, this is one possible topology. There are a few configurations based on the choice of FC switch vendor, each discussed further in this section:

◆ “Connectrix B AP-7600B simple SAN topology” on page 57

◆ “Storage Services Module (SSM) blade in a simple SAN topology” on page 58

◆ “Node placement in a simple SAN topology—Connectrix B” on page 60

◆ “Node placement in a simple SAN topology—Connectrix MDS” on page 61

Connectrix B AP-7600B simple SAN topology

The Connectrix B AP-7600B, when operating as a platform for the RecoverPoint instance, supports F_Port connections. That is, FC-SW nodes, such as hosts and storage, can be connected directly to any one of the 16 ports on the AP-7600B, as shown in Figure 17 on page 58. There is no need to connect through a Layer 2 switch, such as an ED-48000B or DS-4900B, for example.

Note: Figure 17 does not show the CPCs or the AT switches. It only displays the FC connectivity.

Figure 17 Connectrix B AP-7600B simple SAN topology

Storage Services Module (SSM) blade in a simple SAN topology

The Connectrix MDS Storage Services Module (SSM) can be installed in any MDS switch or director with open slots. This includes the MDS 9513, 9509, 9506, and 9216. In Figure 18 on page 59, hosts are connected to the SSM ports and storage is connected to the fixed ports on the MDS 9216i. Hosts may also be connected to the fixed ports; however, since the SSM ports are oversubscribed, they are best suited for host connectivity.

Since the fixed ports are line rate mode (2 Gb on the MDS 9216), storage ports should be connected there to ensure maximum bandwidth.

Figure 18 Connectrix MDS SSM blade in simple SAN topology

Figure 19 on page 60 shows an MDS 9513 with a 24-port line card and an SSM blade. Hosts are connected to the SSM ports and storage is connected to the 24-port line card. The SSM ports are oversubscribed, so they are best suited for host connectivity. The 24-port line card ports are line rate mode (4 Gb), so storage ports should be connected there to ensure maximum bandwidth.

Figure 19 MDS 9513 with a 24-port line card and an SSM blade

Note: Figure 18 on page 59 and Figure 19 do not show the CPCs or the AT switches. They only display the FC connectivity.

Node placement in a simple SAN topology—Connectrix B

The AP-7600B, when used as a single switch, may have both hosts and storage nodes connected directly to any of the ports. It is unlikely that the AP-7600B will be used in this manner because it does not make effective use of the available ports. Since there are only 16 ports, only a mixture of 16 host and storage ports is possible. It is more likely that this intelligent switch will be used in more complex designs.

Node placement in a simple SAN topology—Connectrix MDS

The Connectrix MDS single switch/director model is sometimes known as a collapsed core/edge design. There is a mixture of line rate mode (LRM) ports and oversubscribed mode (OSM) ports. The external ports on an SSM blade are oversubscribed (the same ratio as the standard 32-port line card). The Cisco switch/director can support virtualized and non-virtualized storage simultaneously. Hosts may be connected to any port in the chassis. Storage may be connected to any port in the chassis.

Note: Hosts accessing virtualized storage, and physical storage supplying RecoverPoint, do not have to be connected to the SSM.

The same best practice holds true for host and storage connectivity with or without RecoverPoint. Hosts should be connected to OSM ports whenever possible unless they have high I/O throughput requirements. Storage should be connected to LRM ports whenever possible because a storage port is a shared resource with multiple hosts requiring access.

If there are hosts using solely virtualized storage, then they require the SSM. Therefore, because a dependency exists, connect the host (whenever possible) directly to the SSM. As previously stated, host and storage nodes may be connected to any port on the switch/director.
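The port-placement guidance above can be condensed into a small decision helper. This is only a sketch of the rules stated in this section; the function name and parameters are hypothetical:

```python
# Sketch of the host/storage placement rules above (illustrative only).

def preferred_port_type(device, high_throughput=False,
                        only_virtualized_storage=False):
    """Return the preferred MDS port type per the best practices above:
    storage goes to line rate mode (LRM) ports; hosts go to oversubscribed
    mode (OSM) ports unless they need high I/O throughput; hosts that use
    solely virtualized storage connect to the SSM itself when possible."""
    if device == "storage":
        return "LRM"   # a storage port is shared by many hosts
    if only_virtualized_storage:
        return "SSM"   # the dependency on the SSM already exists
    return "LRM" if high_throughput else "OSM"
```

For example, an ordinary host lands on an OSM port, while an array port always lands on an LRM port.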

Complex SAN topologies

Complex SAN topologies derive their complexity from several possible conditions:

◆ Domain count

◆ Heterogeneous switch vendors in a single SAN infrastructure

◆ Distance extension

◆ Multi-protocol configurations

For the purposes of RecoverPoint, only the first two conditions will be discussed in this section.

Domain count

SANs come in various sizes and configurations. Customers with hundreds of host and storage ports require several FC switches to be linked together through ISLs. Although the size of director-class FC switches has significantly increased, core/edge topologies (as discussed in the “Compound core edge switches” section in the Fibre Channel SAN Technologies TechBook, available through the E-Lab Interoperability Navigator at http://elabnavigator.EMC.com) are common because legacy switches are used in combination with the new high-density directors. These switches can be all departmental switches, all directors, or a combination of both.

A topology with numerous switches will most likely be in one of two designs, each discussed in the next sections:

◆ Core/edge (hosts on an edge switch and storage on a core switch). (See Figure 20 on page 64.)

◆ Edge/core/edge (hosts and storage on an edge switch but on different tiers). (See Figure 21 on page 65.)

Core/edge

Figure 20 on page 64 shows a core/edge design. In this model, hosts are connected to the MDS 9506s (edge tier) and storage is connected to MDS 9513s (core tier). The core tier is the centralized location and connection point for all physical storage in this model. All I/O between the host and storage must flow over the ISLs between the edge and core. It is a one-hop logical infrastructure. (An ISL hop is the link between two switches.)

The SSM blade is located in the core. To minimize latency and increase fabric efficiency, the SSM should be co-located with the storage that it is virtualizing, just as one would do in a Layer 2 SAN.

Note: The RecoverPoint Appliances can be placed anywhere within the fabric and need not be connected directly to the intelligent switch.

Note: Direct connections to the SSM are not required.

◆ ISLs between the switches do not have to be connected to the SSM blade

◆ Hosts do not have to be connected to the SSM blade

◆ Storage should not be connected to the SSM blade

Since the SSM is located inside the MDS 9513 core, internal routing between blades in the chassis is not considered an ISL hop. There is additional latency with the virtualization ASICs; however, there is no protocol overhead associated with routing between blades in a chassis.

Figure 20 Core/edge design

Figure 21 on page 65 shows a core/edge design with a Connectrix B RecoverPoint implementation. The AP-7600B is an external intelligent switch; therefore, it must be linked through ISLs to the core switch.

Note: Physical placement of the RecoverPoint Appliances can be anywherewithin the fabric and need not be connected directly to the intelligent switch.

In this model, hosts are connected to the DS-32B2 (edge tier) and storage is connected to ED-48000B (core tier). The core tier is the centralized location and connection point for all physical storage in this model. All I/O between the host and storage must flow over the ISLs between the edge and core. It is a one-hop infrastructure for

non-virtualized storage. However, all virtualized storage traffic must pass through at least one of the ports on the AP-7600B. Therefore, the I/O from the host must traverse an ISL between the DS-32B2 and the ED-48000B. Then, it must traverse an ISL between the ED-48000B and the AP-7600B. Finally, it must traverse an ISL back from the AP-7600B to the ED-48000B where the I/O is terminated. This is a three-hop logical topology.

Figure 21 Core/edge design with a Connectrix B

Edge/core/edge

Figure 22 on page 67 shows an edge/core/edge design. In this model, hosts are connected to the Connectrix MDS 9506s (edge tier) and storage is connected to MDS 9506s (storage tier). The MDS 9513s act as a connectivity layer. The core tier is the centralized location and connection point for the edge and storage tiers. All I/O between the host and storage must flow over the ISLs between them. This is a two-hop logical topology.

Within the fabric, hosts can access virtualized and non-virtualized storage. The location of the physical storage to be virtualized will determine the location of the SSM blade. There are two possibilities:

◆ Storage to be virtualized is located on a single switch

◆ Storage to be virtualized is located on multiple switches on the storage tier

Note: The RecoverPoint Appliances can be placed anywhere within the fabric and need not be connected directly to the intelligent switch.

Storage on a single switch

If the physical storage is located on one edge switch, place the SSM on the same switch. See the red box in Figure 22 on page 67.

To minimize latency and increase fabric efficiency, the SSM is co-located with the storage that it is virtualizing.

Note: Connections to the SSM are not required.

◆ ISLs between the switches do not have to be connected to the SSM blade

◆ Hosts do not have to be connected to the SSM blade

◆ Storage should not be connected to the SSM blade

Since the SSM is located inside the edge MDS 9506, internal routing between blades in the chassis is not considered an ISL hop. There is additional latency with the virtualization ASICs; however, there is no protocol overhead associated with routing between blades in a chassis.

Figure 22 Storage on a single switch

Figure 23 on page 68 shows a core/edge/core design with Connectrix B. The AP-7600Bs are linked via ISLs to the edge ED-48000B.

Note: The RecoverPoint Appliances can be placed anywhere within the fabric and need not be connected directly to the intelligent switch.

In this model, hosts are connected to the DS-32B2 (edge tier) and storage is connected to an ED-48000B, which is the other edge tier. The

core ED-48000Bs are for connectivity only. All I/O between the host and storage must flow over the ISLs in the core. It is a two-hop infrastructure for non-virtualized storage. However, all virtualized storage traffic must pass through at least one of the ports on the AP-7600B. Therefore, the I/O from the host must traverse an ISL between the DS-32B2 and the ED-48000B. It must then traverse an ISL between the ED-48000B and the next ED-48000B. Then, it must traverse the ISL between the ED-48000B and the AP-7600B. Finally, it must traverse an ISL back from the AP-7600B to the ED-48000B where the I/O is terminated. This is a four-hop design that would require an RPQ.

Figure 23 Core/edge/core design with Brocade

Storage on multiple switches

If the physical storage is spread amongst several edge switches, a single SSM must be located in a centralized location in the fabric. By doing this, one will achieve the highest possible efficiencies. In Figure 24 on page 70, only one side of the dual fabric configuration is shown. Because the physical storage ports are divided among multiple edge switches in the storage tier, the SSM has been placed in the connectivity layer in the core.

Note: The RecoverPoint Appliances can be placed anywhere within the fabric and need not be connected directly to the intelligent switch.

For RecoverPoint

Just as with the core/edge design (or any design), all virtualized traffic must flow through the virtualization ASICs. By locating the SSM in the connectivity layer, RecoverPoint's VIs only need to traverse a single ISL in order to access physical storage. If the SSM were placed in one of the storage tier switches, some or most of the traffic between the SSM and storage would traverse two or more ISLs.

Note: Connections to the SSM are not required.

◆ ISLs between the switches do not have to be connected to the SSM blade

◆ Hosts do not have to be connected to the SSM blade

◆ Storage should not be connected to the SSM blade

Since the SSM is located inside the MDS 9513, internal routing between blades in the chassis is not considered an ISL hop. There is additional latency with the virtualization ASICs; however, there is no protocol overhead associated with routing between blades in a chassis.

Figure 24 Storage on multiple switches

Similarly, in the RecoverPoint solution, the physical storage may be spread amongst several edge switches. An AP-7600B must be located in a centralized location in the fabric. This will achieve the highest possible efficiencies.

In Figure 25 on page 71, only one side of the dual fabric configuration is shown. Because the physical storage ports are divided among multiple edge switches in the storage tier, the AP-7600B has been placed in the connectivity layer in the core.

Note: When RecoverPoint is added to a SAN, not all physical storage has to be virtualized. Multiple storage ports may have LUNs to be virtualized. However, if more than 75% of the LUNs to be virtualized reside on a single switch (for example, Figure 22 on page 67), then locate the AP-7600B on that switch, as in Figure 22.

Figure 25 Storage on multiple switches with Connectrix B

Interswitch link hops

This section contains the following information on interswitch link hops:

◆ “Overview” on page 72

◆ “Example 1 — Two-hop solution” on page 73

◆ “Example 2 — Preferred one-hop solution” on page 74

◆ “Example 3 — Three-hop solution” on page 75

◆ “Example 4 — One-hop solution” on page 77

Overview

An interswitch link hop is the physical link between two switches. The Fibre Channel switch itself is not considered a hop. The number of hops specified in the Switched Fabric Topology table of the EMC Support Matrix states the maximum supported number of hops in a fabric under no-fault conditions. For Class 3 traffic from initiator port to target port, and for Class F traffic among switches, there can be between three and five ISL hops, depending on the vendor. The physical link between host and switch, as well as between storage and switch, is not counted as a hop. If there is a fault in the fabric, more than the number of hops specified in the EMC Support Matrix will be supported temporarily until the fault is resolved. Hop counts in excess of the specified maximum are not supported under normal operating conditions.
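Under these rules, counting hops reduces to a shortest-path computation over the switch-to-switch links: the device attach links and the switches themselves cost nothing. A minimal sketch (the fabric layout and switch names are hypothetical; FSPF computes least-cost paths, which for equal-cost links means the fewest hops):

```python
from collections import deque

def isl_hops(isls, src_switch, dst_switch):
    """Count ISL hops between two switches by breadth-first search.
    Only switch-to-switch links count as hops; the switches themselves,
    and the host/storage attach links, do not."""
    adjacency = {}
    for a, b in isls:
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    seen = {src_switch}
    queue = deque([(src_switch, 0)])
    while queue:
        switch, hops = queue.popleft()
        if switch == dst_switch:
            return hops
        for neighbor in adjacency.get(switch, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None  # switches are not in the same fabric

# A hypothetical edge/core/edge layout: host edge -> core -> storage edge.
fabric = [("edge1", "core"), ("edge2", "core")]
```

A host on edge1 reaching storage on edge2 crosses two ISLs, matching the two-hop edge/core/edge description earlier in this chapter.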

Note: In a fabric topology with RecoverPoint, all virtualized traffic must flow through an ASIC on the AP-7600B or the SSM blade.

In the previous chapters, recommendations were provided for placement in simple and complex fabrics. When an intelligent switch is included in a fabric, traffic patterns across ISLs may change and hop counts may increase, depending on the vendor and switch type used. The diagrams used in this section display hop counts in various fabric configurations.

In a Cisco MDS design, the SSM blade can be inserted in any MDS chassis with a slot. This includes the MDS 9513, 9509, 9506, and 9216. If the SSM is added to an existing fabric, the addition of the blade does not add any hops within the chassis itself. However, depending on the placement of the SSM in the environment, there may be a direct impact on the number of ISL hops.

Note: All virtualized traffic must pass through the SSM. Non-virtualized traffic does not have to pass through the SSM.

Example 1 — Two-hop solution

Figure 26 on page 74 and Figure 27 on page 75 show an example of a four-switch mesh design using MDS 9506s. (Fabric ‘B’ is not shown for simplicity of the diagram. Fabric ‘B’ would be a mirror of Fabric ‘A’.) Figure 26 shows a host attached to sw1 and physical storage attached to sw4. The SSM module has been placed in sw3. The host I/O must follow a path from sw1 to sw3 (first hop) because of the fabric shortest path first (FSPF) algorithm inherent to FC switches. The SSM then re-maps the I/O and sends it to its physical target on sw4 (second hop). This is a two-hop solution.

Figure 26 Four-switch mesh design using MDS 9506s (Example 1)

Example 2 — Preferred one-hop solution

Figure 27 shows the same host attached to sw1 and the same physical storage attached to sw4. The SSM module has been moved to sw4. The host I/O must follow a path from sw1 to sw4. The SSM then re-maps the I/O and sends it to its physical target on the same switch. There is no hop increase when I/O passes between blades in the MDS chassis. This is a preferred one-hop solution.
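The contrast between Examples 1 and 2 comes down to where the path is split at the SSM: virtualized I/O travels host switch to SSM switch to storage switch, and routing to a blade inside the same chassis adds no hop. A minimal sketch under those assumptions (helper names are illustrative; a full mesh puts any two distinct switches one ISL apart):

```python
# Hop cost of virtualized I/O through an SSM in a full four-switch mesh.
# Intra-chassis routing to the SSM blade adds no ISL hop.

def mesh_hops(a, b):
    """ISL hops between two switches in a full mesh."""
    return 0 if a == b else 1

def virtualized_io_hops(host_sw, ssm_sw, storage_sw):
    """Host I/O must reach the SSM first, then the physical storage."""
    return mesh_hops(host_sw, ssm_sw) + mesh_hops(ssm_sw, storage_sw)
```

With the SSM in sw3 (Example 1), the path sw1 to sw3 to sw4 costs two hops; moving the SSM into sw4 alongside the storage (Example 2) collapses this to one hop.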

Figure 27 Four-switch mesh design using MDS 9506s (Example 2)

Example 3 — Three-hop solution

Figure 28 on page 76 shows a four-switch mesh of ED-48000Bs with an AP-7600B as the intelligent switch. Because the AP-7600B is an external switch, it must be linked via an ISL to the existing fabric. Again, the host is attached to sw1 and physical storage attached to sw4. The AP-7600B has been linked via an ISL to sw4. The host I/O must follow a path from sw1 to sw4 (first hop) because of the FSPF (fabric shortest path first) algorithm inherent to FC switches. The I/O is then passed from sw4 to the AP-7600B (second hop).

The virtualization ASIC re-maps the I/O and passes it from the AP-7600B to sw4 where the physical storage resides (third hop). This is a three-hop solution.

Note: The number of hops and traffic patterns would be identical in an MDS configuration if an MDS 9216 with an SSM blade hosting the RecoverPoint instance were attached to an existing fabric of MDS directors.

Figure 28 Four-switch mesh of ED-48000Bs with an AP-7600B as the intelligent switch (Example 3)

With both mesh and complex fabric designs, it is not always possible to maintain a one-hop solution. With higher domain-count fabrics, there is a likelihood of increased hops. This is true of a virtualized and non-virtualized environment. It is also likely, with the addition of intelligent switches or blades, that some hosts will have more hops than others due to the placement of the physical storage in relation to the location of the intelligent switch.

Example 4 — One-hop solution

Figure 29 on page 78 shows a four-switch mesh of MDS 9506s. Storage has been spread across the sw2, sw3, and sw4 switches. The SSM blade is located in sw4. (Again, Fabric ‘B’ is not shown for simplicity.) Both Host A and Host B are accessing virtualized storage.

All I/O must pass through the SSM. I/O initially passes from sw1 to sw4 (first hop) because of the fabric shortest path first (FSPF) algorithm inherent to FC switches. The SSM re-maps the I/O. The final destination of the I/O may be the EMC storage on sw4.

If the I/O terminates there, it is considered a one-hop solution. However, the final destination may be the EMC storage on sw2 or the non-EMC storage on sw3. In this case, it is a two-hop solution. Host C is not accessing virtualized storage. Its physical target is the EMC storage array on sw2. The I/O passes from sw1 to sw2 because of the FSPF algorithm inherent to FC switches. This is always a one-hop solution.

Figure 29 Four-switch mesh of MDS 9506s (Example 4)

Glossary

This glossary contains terms related to EMC products and EMC networked storage concepts.

A

access control A service that allows or prohibits access to a resource. Storage management products implement access control to allow or prohibit specific users. Storage platform products implement access control, often called LUN Masking, to allow or prohibit access to volumes by Initiators (HBAs). See also “persistent binding” and “zoning.”

active domain ID The domain ID actively being used by a switch. It is assigned to a switch by the principal switch.

active zone set The active zone set is the zone set definition currently in effect and enforced by the fabric or other entity (for example, the name server). Only one zone set at a time can be active.

agent An autonomous agent is a system situated within (and is part of) an environment that senses that environment, and acts on it over time in pursuit of its own agenda. Storage management software centralizes the control and monitoring of highly distributed storage infrastructure. The centralizing part of the software management system can depend on agents that are installed on the distributed parts of the infrastructure. For example, an agent (software component) can be installed on each of the hosts (servers) in an environment to allow the centralizing software to control and monitor the hosts.

alarm An SNMP message notifying an operator of a network problem.

any-to-any port connectivity A characteristic of a Fibre Channel switch that allows any port on the switch to communicate with any other port on the same switch.

application Application software is a defined subclass of computer software that employs the capabilities of a computer directly for a task the user wants to perform. This is in contrast to system software, which integrates various capabilities of a computer but typically does not directly apply them to tasks that benefit users. The term application refers to both the application software and its implementation, often in the context of an information processing system (for example, a payroll application, an airline reservation application, or a network application). Typically an application is installed “on top of” an operating system such as Windows or Linux, and contains a user interface.

application-specific integrated circuit (ASIC) A circuit designed for a specific purpose, such as implementing lower-layer Fibre Channel protocols (FC-1 and FC-0). ASICs contrast with general-purpose devices such as memory chips or microprocessors, which can be used in many different applications.

arbitration The process of selecting one respondent from a collection of several candidates that request service concurrently.

ASIC family Different switch hardware platforms that utilize the same port ASIC can be grouped into collections known as an ASIC family. For example, in the Fuji ASIC family, the ED-64M and ED-140M run different microprocessors, but both utilize the same port ASIC to provide Fibre Channel connectivity and are therefore in the same ASIC family. For interoperability concerns, it is useful to understand to which ASIC family a switch belongs.

ASCII ASCII (American Standard Code for Information Interchange), generally pronounced [aeski], is a character encoding based on the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that work with text. Most modern character encodings, which support many more characters, have a historical basis in ASCII.

audit log A log containing summaries of actions taken by a Connectrix Management software user that creates an audit trail of changes. Adding, modifying, or deleting user or product administration values creates a record in the audit log that includes the date and time.

authentication Verification of the identity of a process or person.

B

backpressure The effect on the environment leading up to the point of restriction. See “congestion.”

BB_Credit See “buffer-to-buffer credit.”

beaconing Repeated transmission of a beacon light and message until an error is corrected or bypassed. Typically used by a piece of equipment when an individual Field Replaceable Unit (FRU) needs replacement. Beaconing helps the field engineer locate the specific defective component. Some equipment management software systems, such as Connectrix Manager, offer beaconing capability.

BER See “bit error rate.”

bidirectional In Fibre Channel, the capability to simultaneously communicate at maximum speeds in both directions over a link.

bit error rate Ratio of received bits that contain errors to the total of all bits transmitted.

blade server A consolidation of independent servers and switch technology in the same chassis.

blocked port Devices communicating with a blocked port are prevented from logging in to the Fibre Channel switch containing the port or communicating with other devices attached to the switch. A blocked port continuously transmits the offline sequence (OLS).

bridge A device that provides a translation service between two network segments utilizing different communication protocols. EMC supports and sells bridges that convert iSCSI storage commands from a NIC-attached server to Fibre Channel commands for a storage platform.

broadcast Sends a transmission to all ports in a network. Typically used in IP networks. Not typically used in Fibre Channel networks.


broadcast frames Data packet, also known as a broadcast packet, whose destination address specifies all computers on a network. See also “multicast.”

buffer Storage area for data in transit. Buffers compensate for differences in link speeds and link congestion between devices.

buffer-to-buffer credit The number of receive buffers allocated by a receiving FC_Port to a transmitting FC_Port. The value is negotiated between Fibre Channel ports during link initialization. Each time a port transmits a frame it decrements this credit value. Each time a port receives an R_Rdy frame it increments this credit value. If the credit value is decremented to zero, the transmitter stops sending any new frames until the receiver has transmitted an R_Rdy frame. Buffer-to-buffer credit is particularly important in SRDF and MirrorView distance extension solutions.
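The credit accounting described in this entry can be illustrated with a toy model: the transmitter starts with the negotiated credit, spends one credit per frame, and regains one per R_RDY. This is a simplified sketch of the bookkeeping only (real ports implement it in hardware, and the class name is invented for illustration):

```python
class BBCreditPort:
    """Toy model of buffer-to-buffer credit accounting on a transmitting port."""

    def __init__(self, negotiated_credit):
        # Set during link initialization from the receiver's buffer count.
        self.credit = negotiated_credit

    def can_transmit(self):
        # With zero credits the transmitter must hold all new frames.
        return self.credit > 0

    def transmit_frame(self):
        if not self.can_transmit():
            raise RuntimeError("no BB_Credit: waiting for R_RDY")
        self.credit -= 1  # one receive buffer is now occupied

    def receive_r_rdy(self):
        self.credit += 1  # the receiver freed a buffer

port = BBCreditPort(negotiated_credit=2)
port.transmit_frame()
port.transmit_frame()
assert not port.can_transmit()   # credit exhausted, transmitter stalls
port.receive_r_rdy()             # receiver returns a buffer
assert port.can_transmit()       # transmission may resume
```

On long links, many frames are in flight before the first R_RDY can return, which is why distance-extension solutions such as those mentioned above need larger negotiated credit values to keep the link busy.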

C

Call Home A product feature that allows the Connectrix service processor to automatically dial out to a support center and report system problems. The support center server accepts calls from the Connectrix service processor, logs reported events, and can notify one or more support center representatives. Telephone numbers and other information are configured through the Windows NT dial-up networking application. The Call Home function can be enabled and disabled through the Connectrix Product Manager.

channel With Open Systems, a channel is a point-to-point link that transports data from one point to another on the communication path, typically with the high throughput and low latency generally required by storage systems. With Mainframe environments, a channel refers to the server side of the server-storage communication path, analogous to the HBA in Open Systems.

Class 2 Fibre Channel class of service In Class 2 service, the fabric and destination N_Ports provide connectionless service with notification of delivery or nondelivery between the two N_Ports. Historically, Class 2 service is not widely used in Fibre Channel systems.

Class 3 Fibre Channel class of service Class 3 service provides a connectionless service without notification of delivery between N_Ports. (This is also known as datagram service.) The transmission and routing of Class 3 frames is the same as for Class 2 frames. Class 3 is the dominant class of communication used in Fibre Channel for moving data between servers and storage, and may be referred to as “ship and pray.”

Class F Fibre Channel class of service Class F service is used for all switch-to-switch communication in a multiswitch fabric environment. It is nearly identical to Class 2 from a flow control point of view.

community A relationship between an SNMP agent and a set of SNMP managers that defines authentication, access control, and proxy characteristics.

community name A name that represents an SNMP community that the agent software recognizes as a valid source for SNMP requests. An SNMP management program that sends an SNMP request to an agent program must identify the request with a community name that the agent recognizes, or the agent discards the message as an authentication failure. The agent counts these failures and reports the count to the manager program upon request, or sends an authentication failure trap message to the manager program.

community profile Information that specifies which management objects are available to what management domain or SNMP community name.

congestion Occurs at the point of restriction. See “backpressure.”

connectionless Nondedicated link. Typically used to describe a link between nodes that allows the switch to forward Class 2 or Class 3 frames as resources (ports) allow. Contrast with the dedicated bandwidth that is required in a Class 1 Fibre Channel service point-to-point link.

Connectivity Unit A hardware component that contains hardware (and possibly software) that provides Fibre Channel connectivity across a fabric. Connectrix switches are examples of Connectivity Units. This is a term popularized by the Fibre Alliance MIB, sometimes abbreviated to connunit.

Connectrix management software The software application that implements the management user interface for all managed Fibre Channel products, typically the Connectrix-M product line. Connectrix Management software is a client/server application with the server running on the Connectrix service processor, and clients running remotely or on the service processor.


Connectrix service processor An optional 1U server shipped with the Connectrix-M product line to run the Connectrix Management server software and EMC remote support application software.

Control Unit In mainframe environments, a Control Unit controls access to storage. It is analogous to a Target in Open Systems environments.

core switch Occupies central locations within the interconnections of a fabric. Generally provides the primary data paths across the fabric and the direct connections to storage devices. Connectrix directors are typically installed as core switches, but may be located anywhere in the fabric.

credit A numeric value that relates to the number of available BB_Credits on a Fibre Channel port. See “buffer-to-buffer credit.”

D

DASD Direct Access Storage Device.

default Pertaining to an attribute, value, or option that is assumed when none is explicitly specified.

default zone A zone containing all attached devices that are not members of any active zone. Typically the default zone is disabled in a Connectrix M environment, which prevents newly installed servers and storage from communicating until they have been provisioned.

Dense Wavelength Division Multiplexing (DWDM) A process that carries different data channels at different wavelengths over one pair of fiber-optic links. A conventional fiber-optic system carries only one channel over a single wavelength traveling through a single fiber.

destination ID A field in a Fibre Channel header that specifies the destination address for a frame. The Fibre Channel header also contains a Source ID (SID). The FCID for a port contains both the SID and the DID.

device A piece of equipment, such as a server, switch, or storage system.

dialog box A user interface element of a software product typically implemented as a pop-up window containing informational messages and fields for modification. Facilitates a dialog between the user and the application. Dialog box is often used interchangeably with window.


DID An acronym used to refer to either Domain ID or Destination ID. This ambiguity can create confusion. As a result, E-Lab recommends this acronym be used to apply to Domain ID; Destination ID can be abbreviated to FCID.

director An enterprise-class Fibre Channel switch, such as the Connectrix ED-140M, MDS 9509, or ED-48000B. Directors deliver high availability, failure ride-through, and repair under power to ensure maximum uptime for business-critical applications. Major assemblies, such as power supplies, fan modules, switch controller cards, switching elements, and port modules, are all hot-swappable.

The term director may also refer to a board-level module in the Symmetrix that provides the interface between host channels (through an associated adapter module in the Symmetrix) and Symmetrix disk devices. (This description is presented here only to clarify a term used in other EMC documents.)

DNS See “domain name service name.”

domain ID A byte-wide field in the three-byte Fibre Channel address that uniquely identifies a switch in a fabric. The three fields in an FCID are domain, area, and port. A distinct Domain ID is requested from the principal switch. The principal switch allocates one Domain ID to each switch in the fabric. A user may be able to set a Preferred ID, which can be requested of the Principal switch, or set an Insistent Domain ID. If two switches insist on the same DID, one or both switches will segment from the fabric.

domain name service name Host or node name for a system that is translated to an IP address through a name server. All DNS names have a host name component and, if fully qualified, a domain component, such as host1.abcd.com. In this example, host1 is the host name.

dual-attached host A host that has two (or more) connections to a set of devices.

E

E_D_TOV A time-out period within which each data frame in a Fibre Channel sequence transmits. This avoids time-out errors at the destination Nx_Port. This function facilitates high-speed recovery from dropped frames. Typically this value is 2 seconds.


E_Port Expansion Port, a port type in a Fibre Channel switch that attaches to another E_Port on a second Fibre Channel switch, forming an Interswitch Link (ISL). This link typically conforms to the FC-SW standards developed by the T11 committee, but might not support heterogeneous interoperability.

edge switch Occupies the periphery of the fabric, generally providing the direct connections to host servers and management workstations. No two edge switches can be connected by interswitch links (ISLs). Connectrix departmental switches are typically installed as edge switches in a multiswitch fabric, but may be located anywhere in the fabric.

Embedded Web Server A management interface embedded in the switch’s code that offers features similar to (but not as robust as) the Connectrix Manager and Product Manager.

error detect time out value Defines the time the switch waits for an expected response before declaring an error condition. The error detect time out value (E_D_TOV) can be set within a range of two-tenths of a second to one second using the Connectrix switch Product Manager.

error message An indication that an error has been detected. See also “information message” and “warning message.”

Ethernet A baseband LAN that allows multiple station access to the transmission medium at will, without prior coordination, and which avoids or resolves contention.

event log A record of significant events that have occurred on a Connectrix switch, such as FRU failures, degraded operation, and port problems.

expansion port See “E_Port.”

explicit fabric login In order to join a fabric, an N_Port must log in to the fabric (an operation referred to as an FLOGI). Typically this is an explicit operation performed by the N_Port communicating with the F_Port of the switch, and is called an explicit fabric login. Some legacy Fibre Channel ports do not perform explicit login, so switch vendors perform login for such ports, creating an implicit login. Typically, logins are explicit.


F

FA Fibre Adapter, another name for a Symmetrix Fibre Channel director.

F_Port Fabric Port, a port type on a Fibre Channel switch. An F_Port attaches to an N_Port through a point-to-point full-duplex link connection. A G_Port automatically becomes an F_Port or an E_Port depending on the port initialization process.

fabric One or more switching devices that interconnect Fibre Channel N_Ports and route Fibre Channel frames based on destination IDs in the frame headers. A fabric provides discovery, path provisioning, and state change management services for a Fibre Channel environment.

fabric element Any active switch or director in the fabric.

fabric login Process used by N_Ports to establish their operating parameters, including class of service, speed, and buffer-to-buffer credit value.

fabric port A port type (F_Port) on a Fibre Channel switch that attaches to an N_Port through a point-to-point full-duplex link connection. An N_Port is typically a host (HBA) or a storage device like Symmetrix, VNX series, or CLARiiON.

fabric shortest path first (FSPF) A routing algorithm implemented by Fibre Channel switches in a fabric. The algorithm seeks to minimize the number of hops traversed as a Fibre Channel frame travels from its source to its destination.

fabric tree A hierarchical list in Connectrix Manager of all fabrics currently known to the Connectrix service processor. The tree includes all members of the fabrics, listed by WWN or nickname.

failover The process of detecting a failure on an active Connectrix switch FRU and the automatic transition of functions to a backup FRU.

fan-in/fan-out Term used to describe the server:storage ratio, where a graphic representation of a 1:n (fan-in) or n:1 (fan-out) logical topology looks like a hand-held fan, with the wide end toward n. By convention, fan-out refers to the number of server ports that share a single storage port. Fan-out consolidates a large number of server ports on a fewer number of storage ports. Fan-in refers to the number of storage ports that a single server port uses. Fan-in enlarges the storage capacity used by a server. A fan-in or fan-out rate is often referred to as just the n part of the ratio; for example, a 16:1 fan-out is also called a fan-out rate of 16; in this case 16 server ports are sharing a single storage port.

FCP See “Fibre Channel Protocol.”

FC-SW The Fibre Channel fabric standard. The standard is developed by the T11 organization, whose documentation can be found at T11.org. EMC actively participates in T11. T11 is a committee within the InterNational Committee for Information Technology Standards (INCITS).

fiber optics The branch of optical technology concerned with the transmission of radiant power through fibers made of transparent materials such as glass, fused silica, and plastic.

Either a single discrete fiber or a non-spatially aligned fiber bundle can be used for each information channel. Such fibers are often called optical fibers to differentiate them from fibers used in non-communication applications.

fibre A general term used to cover all physical media types supported by the Fibre Channel specification, such as optical fiber, twisted pair, and coaxial cable.

Fibre Channel The general name of an integrated set of ANSI standards that define new protocols for flexible information transfer. Logically, Fibre Channel is a high-performance serial data channel.

Fibre Channel Protocol A standard Fibre Channel FC-4 level protocol used to run SCSI over Fibre Channel.

Fibre Channel switch modules The embedded switch modules in the backplane of the blade server. See “blade server.”

firmware The program code (embedded software) that resides and executes on a connectivity device, such as a Connectrix switch, a Symmetrix Fibre Channel director, or a host bus adapter (HBA).

F_Port Fabric Port, a physical interface within the fabric. An F_Port attaches to an N_Port through a point-to-point full-duplex link connection.

frame A set of fields making up a unit of transmission. Each field is made of bytes. The typical Fibre Channel frame consists of these fields: start-of-frame, header, data field, CRC, and end-of-frame. The maximum frame size is 2148 bytes.


frame header Control information placed before the data field when encapsulating data for network transmission. The header provides the source and destination IDs of the frame.

FRU Field-replaceable unit, a hardware component that can be replaced as an entire unit. The Connectrix switch Product Manager can display status for the FRUs installed in the unit.

FSPF Fabric Shortest Path First, an algorithm used for routing traffic. This means that, between the source and destination, only the paths that have the fewest physical hops will be used for frame delivery.

G

gateway address In TCP/IP, a device that connects two systems that use the same or different protocols.

gigabyte (GB) A unit of measure for storage size, loosely one billion (10⁹) bytes. One gigabyte actually equals 1,073,741,824 bytes.

G_Port A port type on a Fibre Channel switch capable of acting either as an F_Port or an E_Port, depending on the port type at the other end of the link.

GUI Graphical user interface.

H

HBA See “host bus adapter.”

hexadecimal Pertaining to a numbering system with a base of 16; valid numbers use the digits 0 through 9 and characters A through F (which represent the numbers 10 through 15).

high availability A performance feature characterized by hardware component redundancy and hot-swappability (enabling non-disruptive maintenance). High-availability systems maximize system uptime while providing superior reliability, availability, and serviceability.

hop A hop refers to the number of Interswitch Links (ISLs) a Fibre Channel frame must traverse to go from its source to its destination. Good design practice encourages three hops or less to minimize congestion and performance management complexities.

host bus adapter A bus card in a host system that allows the host system to connect to the storage system. Typically the HBA communicates with the host over a PCI or PCI Express bus and has a single Fibre Channel link to the fabric. The HBA contains an embedded microprocessor with on-board firmware, one or more ASICs, and a Small Form Factor Pluggable module (SFP) to connect to the Fibre Channel link.

I

I/O See “input/output.”

in-band management Transmission of monitoring and control functions over the Fibre Channel interface. You can also perform these functions out-of-band, typically by use of Ethernet, to manage Fibre Channel devices.

information message A message telling a user that a function is performing normally or has completed normally. User acknowledgement might or might not be required, depending on the message. See also “error message” and “warning message.”

input/output (1) Pertaining to a device whose parts can perform an input process and an output process at the same time. (2) Pertaining to a functional unit or channel involved in an input process, output process, or both (concurrently or not), and to the data involved in such a process. (3) Pertaining to input, output, or both.

interface (1) A shared boundary between two functional units, defined by functional characteristics, signal characteristics, or other characteristics as appropriate. The concept includes the specification of the connection of two devices having different functions. (2) Hardware, software, or both, that links systems, programs, or devices.

Internet Protocol See “IP.”

interoperability The ability to communicate, execute programs, or transfer data between various functional units over a network. Also refers to a Fibre Channel fabric that contains switches from more than one vendor.


interswitch link (ISL) A physical E_Port connection between any two switches in a Fibre Channel fabric. An ISL forms a hop in a fabric.

IP Internet Protocol, the TCP/IP standard protocol that defines the datagram as the unit of information passed across an internet and provides the basis for connectionless, best-effort packet delivery service. IP includes the ICMP control and error message protocol as an integral part.

IP address A unique string of numbers that identifies a device on a network. The address consists of four groups (quadrants) of numbers delimited by periods. (This is called dotted-decimal notation.) All resources on the network must have an IP address. A valid IP address is in the form nnn.nnn.nnn.nnn, where each nnn is a decimal in the range 0 to 255.
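The "each nnn is a decimal in the range 0 to 255" rule translates directly into a check. A minimal sketch (the function name is illustrative; Python's standard ipaddress module performs the same validation more strictly):

```python
def is_valid_ipv4(address: str) -> bool:
    """True if address is dotted-decimal notation: four fields, each 0-255."""
    fields = address.split(".")
    if len(fields) != 4:
        return False
    return all(field.isdigit() and 0 <= int(field) <= 255 for field in fields)

assert is_valid_ipv4("10.32.4.255")
assert not is_valid_ipv4("10.32.4.256")   # field out of range
assert not is_valid_ipv4("10.32.4")       # too few fields
```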

ISL Interswitch link, a physical E_Port connection between any two switches in a Fibre Channel fabric.

K

kilobyte (K) A unit of measure for storage size, loosely one thousand bytes. One kilobyte actually equals 1,024 bytes.
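The "loosely one thousand, actually 1,024" pattern holds at each step of the binary ladder this glossary uses for kilobyte, megabyte, and gigabyte; a quick arithmetic check:

```python
# Binary storage units, as defined in this glossary.
KILOBYTE = 2**10   # 1,024 bytes ("loosely one thousand")
MEGABYTE = 2**20   # 1,048,576 bytes ("loosely one million")
GIGABYTE = 2**30   # 1,073,741,824 bytes ("loosely one billion")

assert KILOBYTE == 1024
assert MEGABYTE == KILOBYTE * 1024 == 1_048_576
assert GIGABYTE == MEGABYTE * 1024 == 1_073_741_824
```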

L

laser A device that produces optical radiation using a population inversion to provide light amplification by stimulated emission of radiation and (generally) an optical resonant cavity to provide positive feedback. Laser radiation can be highly coherent temporally, spatially, or both.

LED Light-emitting diode.

link The physical connection between two devices on a switched fabric.

link incident A problem detected on a fiber-optic link; for example, loss of light or invalid sequences.

load balancing The ability to distribute traffic over all network ports that are the same distance from the destination address by assigning different paths to different messages. Increases effective network bandwidth. EMC PowerPath software provides load-balancing services for server I/O.


logical volume A named unit of storage consisting of a logically contiguous set of disk sectors.

Logical Unit Number (LUN) A number, assigned to a storage volume, that (in combination with the storage device node’s World Wide Port Name (WWPN)) represents a unique identifier for a logical volume on a storage area network.

M

MAC address Media Access Control address, the hardware address of a device connected to a shared network.

managed product A hardware product that can be managed using the Connectrix Product Manager. For example, a Connectrix switch is a managed product.

management session Exists when a user logs in to the Connectrix Management software and successfully connects to the product server. The user must specify the network address of the product server at login time.

media The disk surface on which data is stored.

media access control See “MAC address.”

megabyte (MB) A unit of measure for storage size, loosely one million (10⁶) bytes. One megabyte actually equals 1,048,576 bytes.

MIB Management Information Base, a related set of objects (variables) containing information about a managed device and accessed through SNMP from a network management station.

multicast Multicast is used when multiple copies of data are to be sent to designated, multiple destinations.

multiswitch fabric A Fibre Channel fabric created by linking more than one switch or director together to allow communication. See also “ISL.”

multiswitch linking Port-to-port connections between two switches.

N

name server (dNS) A service known as the distributed Name Server provided by a Fibre Channel fabric that provides device discovery, path provisioning, and state change notification services to the N_Ports in the fabric. The service is implemented in a distributed fashion; for example, each switch in a fabric participates in providing the service. The service is addressed by the N_Ports through a Well Known Address.

network address A name or address that identifies a managed product, such as a Connectrix switch, or a Connectrix service processor on a TCP/IP network. The network address can be either an IP address in dotted decimal notation, or a Domain Name Service (DNS) name as administered on a customer network. All DNS names have a host name component and (if fully qualified) a domain component, such as host1.emc.com. In this example, host1 is the host name and emc.com is the domain component.

nickname A user-defined name representing a specific WWxN, typically used in a Connectrix-M management environment. The analog in the Connectrix-B and MDS environments is alias.

node The point at which one or more functional units connect to the network.

N_Port Node Port, a Fibre Channel port implemented by an end device (node) that can attach to an F_Port or directly to another N_Port through a point-to-point link connection. HBAs and storage systems implement N_Ports that connect to the fabric.

NVRAM Nonvolatile random access memory.

O

offline sequence (OLS) The OLS Primitive Sequence is transmitted to indicate that the FC_Port transmitting the Sequence is:

a. initiating the Link Initialization Protocol,

b. receiving and recognizing NOS, or

c. entering the offline state.

OLS See “offline sequence (OLS).”

operating mode Regulates what other types of switches can share a multiswitch fabric with the switch under consideration.


operating system Software that controls the execution of programs and that may provide such services as resource allocation, scheduling, input/output control, and data management. Although operating systems are predominantly software, partial hardware implementations are possible.

optical cable A fiber, multiple fibers, or a fiber bundle in a structure built to meet optical, mechanical, and environmental specifications.

OS See “operating system.”

out-of-band management Transmission of monitoring and control functions outside of the Fibre Channel interface, typically over Ethernet.

oversubscription The ratio of bandwidth required to bandwidth available. When all ports, associated pair-wise in any random fashion, cannot sustain full duplex at full line rate, the switch is oversubscribed.

P

parameter A characteristic element with a variable value that is given a constant value for a specified application. Also, a user-specified value for an item in a menu; a value that the system provides when a menu is interpreted; data passed between programs or procedures.

password (1) A value used in authentication, or a value used to establish membership in a group having specific privileges. (2) A unique string of characters known to the computer system and to a user who must specify it to gain full or limited access to a system and to the information stored within it.

path In a network, any route between any two nodes.

persistent binding Use of server-level access control configuration information to persistently bind a server device name to a specific Fibre Channel storage volume or logical unit number, through a specific HBA and storage port WWN. The address of a persistently bound device does not shift if a storage target fails to recover during a power cycle. This function is the responsibility of the HBA device driver.

port (1) An access point for data entry or exit. (2) A receptacle on a device to which a cable for another device is attached.


port card Field replaceable hardware component that provides the connectionfor fiber cables and performs specific device-dependent logicfunctions.

port name A symbolic name that the user defines for a particular port throughthe Product Manager.

preferred domain ID An ID configured by the fabric administrator. During the fabricbuild process a switch requests permission from the principalswitch to use its preferred domain ID. The principal switch candeny this request by providing an alternate domain ID only ifthere is a conflict for the requested Domain ID. Typically aprincipal switch grants the non-principal switch its requestedPreferred Domain ID.

principal downstream ISL The ISL to which each switch will forward frames originating from the principal switch.

principal ISL The ISL that frames destined to, or coming from, the principal switch in the fabric will use. An example is an RDI frame.

principal switch In a multiswitch fabric, the switch that allocates domain IDs to itself and to all other switches in the fabric. There is always one principal switch in a fabric. If a switch is not connected to any other switches, it acts as its own principal switch.

principal upstream ISL The ISL to which each switch will forward frames destined for the principal switch. The principal switch does not have any upstream ISLs.

product (1) Connectivity Product, a generic name for a switch, director, or any other Fibre Channel product. (2) Managed Product, a generic hardware product that can be managed by the Product Manager (a Connectrix switch is a managed product). Note distinction from the definition for “device.”

Product Manager A software component of Connectrix Manager software, such as a Connectrix switch product manager, that implements the management user interface for a specific product. When a product instance is opened from the Connectrix Manager software products view, the corresponding product manager is invoked. The product manager is also known as an Element Manager.

product name A user-configurable identifier assigned to a Managed Product. Typically, this name is stored on the product itself. For a Connectrix switch, the Product Name can also be accessed by an SNMP Manager as the System Name. The Product Name should align with the host name component of a Network Address.

products view The top-level display in the Connectrix Management software user interface that displays icons of Managed Products.

protocol (1) A set of semantic and syntactic rules that determines the behavior of functional units in achieving communication. (2) A specification for the format and relative timing of information exchanged between communicating parties.

R

R_A_TOV See “resource allocation time out value.”

remote access link The ability to communicate with a data processing facility through a remote data link.

remote notification The system can be programmed to notify remote sites of certain classes of events.

remote user workstation A workstation, such as a PC, using Connectrix Management software and Product Manager software that can access the Connectrix service processor over a LAN connection. A user at a remote workstation can perform all of the management and monitoring tasks available to a local user on the Connectrix service processor.

resource allocation time out value A value used to time out operations that depend on a maximum time that an exchange can be delayed in a fabric and still be delivered. The resource allocation time-out value (R_A_TOV) can be set within a range of two-tenths of a second to 120 seconds using the Connectrix switch product manager. The typical value is 10 seconds.

S

SAN See “storage area network (SAN).”

segmentation A non-connection between two switches. Numerous reasons exist for an operational ISL to segment, including interop mode incompatibility, zoning conflicts, and domain overlaps.

segmented E_Port E_Port that has ceased to function as an E_Port within a multiswitch fabric due to an incompatibility between the fabrics that it joins.

service processor See “Connectrix service processor.”

session See “management session.”

single attached host A host that only has a single connection to a set of devices.

small form factor pluggable (SFP) An optical module implementing a shortwave or longwave optical transceiver.

SMTP Simple Mail Transfer Protocol, a TCP/IP protocol that allows users to create, send, and receive text messages. SMTP protocols specify how messages are passed across a link from one system to another. They do not specify how the mail application accepts, presents, or stores the mail.

SNMP Simple Network Management Protocol, a TCP/IP protocol that generally uses the User Datagram Protocol (UDP) to exchange messages between a management information base (MIB) and a management client residing on a network.

storage area network (SAN) A network linking servers or workstations to disk arrays, tape backup systems, and other devices, typically over Fibre Channel and consisting of multiple fabrics.

subnet mask Used by a computer to determine whether another computer with which it needs to communicate is located on a local or remote network. The network mask depends upon the class of networks to which the computer is connecting. The mask indicates which digits to look at in a longer network address and allows the router to avoid handling the entire address. Subnet masking allows routers to move the packets more quickly. Typically, a subnet may represent all the machines at one geographic location, in one building, or on the same local area network.
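
The local-versus-remote decision described above can be sketched with Python's standard ipaddress module (the addresses and mask below are hypothetical examples):

```python
import ipaddress

def is_local(host_ip, peer_ip, mask):
    """Apply the subnet mask to the host address to get its network;
    the peer is local if it falls inside that same network."""
    net = ipaddress.ip_network(f"{host_ip}/{mask}", strict=False)
    return ipaddress.ip_address(peer_ip) in net

# Same /24 network: no router hop is needed.
print(is_local("192.168.1.10", "192.168.1.77", "255.255.255.0"))  # True
# Different network portion: traffic must go through a router.
print(is_local("192.168.1.10", "192.168.2.77", "255.255.255.0"))  # False
```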

switch priority Value configured into each switch in a fabric that determines its relative likelihood of becoming the fabric’s principal switch.

T

TCP/IP Transmission Control Protocol/Internet Protocol. TCP/IP refers to the protocols that are used on the Internet and most computer networks. TCP refers to the Transport layer that provides flow control and connection services. IP refers to the Internet Protocol level where addressing and routing are implemented.

toggle To change the state of a feature/function that has only two states. For example, if a feature/function is enabled, toggling changes the state to disabled.

topology Logical and/or physical arrangement of switches on a network.

trap An asynchronous (unsolicited) notification of an event originating on an SNMP-managed device and directed to a centralized SNMP Network Management Station.

U

unblocked port Devices communicating with an unblocked port can log in to a Connectrix switch or a similar product and communicate with devices attached to any other unblocked port if the devices are in the same zone.

Unicast Unicast routing provides one or more optimal paths between any two switches that make up the fabric. (This is used to send a single copy of the data to designated destinations.)

upper layer protocol (ULP) The protocol user of FC-4, including IPI, SCSI, IP, and SBCCS. In a device driver, ULP typically refers to the operations that are managed by the class level of the driver, not the port level.

URL Uniform Resource Locator, the addressing system used by the World Wide Web. It describes the location of a file or server anywhere on the Internet.

V

virtual switch A Fibre Channel switch function that allows users to subdivide a physical switch into multiple virtual switches. Each virtual switch consists of a subset of ports on the physical switch and has all the properties of a Fibre Channel switch. Multiple virtual switches can be connected through ISLs to form a virtual fabric or VSAN.

virtual storage area network (VSAN) An allocation of switch ports that can span multiple physical switches and forms a virtual fabric. A single physical switch can sometimes host more than one VSAN.

volume A general term referring to an addressable, logically contiguous storage space providing block I/O services.

VSAN Virtual Storage Area Network.

W

warning message An indication that a possible error has been detected. See also “error message” and “information message.”

World Wide Name (WWN) A unique identifier, even on global networks. The WWN is a 64-bit number (XX:XX:XX:XX:XX:XX:XX:XX). The WWN contains an OUI, which uniquely identifies the equipment manufacturer. OUIs are administered by the Institute of Electrical and Electronics Engineers (IEEE). The Fibre Channel environment uses two types of WWNs: a World Wide Node Name (WWNN) and a World Wide Port Name (WWPN). Typically the WWPN is used for zoning (path provisioning function).
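
As a simplified sketch of the layout described above (the WWPN shown is hypothetical), the OUI can be pulled out of a colon-separated WWN string. Note this assumes the common NAA type 1 format, where the OUI occupies the third through fifth bytes; other NAA formats place the OUI differently:

```python
def wwn_oui(wwn):
    """Return the 24-bit OUI from a colon-separated WWN, assuming
    the NAA type 1 layout (2-byte header, then the 48-bit IEEE
    address whose first 3 bytes are the manufacturer's OUI)."""
    octets = wwn.split(":")
    return ":".join(octets[2:5])

# Hypothetical WWPN; 00:60:69 would be the manufacturer's OUI.
print(wwn_oui("10:00:00:60:69:51:0e:8b"))  # 00:60:69
```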

Z

zone An information object implemented by the distributed Nameserver (dNS) of a Fibre Channel switch. A zone contains a set of members which are permitted to discover and communicate with one another. The members can be identified by a WWPN or port ID. EMC recommends the use of WWPNs in zone management.
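
The membership rule above can be modeled as a simple check: two devices may communicate only if some zone contains both of them. The zone name and WWPNs below are hypothetical:

```python
# A zone is a named set of members; WWPNs are used here,
# per the EMC recommendation in the definition above.
zones = {
    "Zone_HostA_Array1": {"10:00:00:00:c9:2b:41:22",
                          "50:06:01:60:41:e0:3b:11"},
}

def can_communicate(wwpn_a, wwpn_b, zones):
    """True if at least one zone contains both WWPNs."""
    return any(wwpn_a in z and wwpn_b in z for z in zones.values())

print(can_communicate("10:00:00:00:c9:2b:41:22",
                      "50:06:01:60:41:e0:3b:11", zones))  # True
```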

zone set An information object implemented by the distributed Nameserver (dNS) of a Fibre Channel switch. A Zone Set contains a set of Zones. A Zone Set is activated against a fabric, and only one Zone Set can be active in a fabric.

zonie A storage administrator who spends a large percentage of his workday zoning a Fibre Channel network and provisioning storage.

zoning Zoning allows an administrator to group several devices by function or by location. All devices connected to a connectivity product, such as a Connectrix switch, may be configured into one or more zones.
