

MirrorView Knowledgebook: Releases 30 - 31.5

A Detailed Review

Abstract

This white paper is a comprehensive guide for MirrorView™

functionality, operations, and best practices. It

discusses the specifics of the synchronous (MirrorView/S) and asynchronous (MirrorView/A) products, and

compares them to help users determine how each is best deployed.

August 2011


Copyright © 2006, 2007, 2008, 2009, 2010, 2011 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is

subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION

MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE

INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED

WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable

software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part Number H2417.7


Table of Contents

Executive summary
Introduction
  Audience
  Terminology
Remote replication overview
  Recovery objectives
    Recovery point objective
    Recovery time objective
  Replication models
    Synchronous replication model
    Asynchronous replication model(s)
MirrorView family
  MirrorView configurations
    Bidirectional mirroring
    System level fan-in/fan-out
    LUN level fan-out
  MirrorView over iSCSI
  MirrorView ports and connections
  Common operations and settings
    Synchronization
    Promoting a secondary image
    Fracture
    Recovery policy
    Minimum required images
  Interoperability
    Storage-system based replication software
    Virtual LUNs
    Block operating environment
  Mirroring thin LUNs
MirrorView/Synchronous
  Data protection mechanisms
    Fracture log
    Write intent log
    Fracture log persistence
  Synchronization
    Thin LUN secondary images
  Link requirements
  Limits
MirrorView/Asynchronous
  Data protection mechanisms
    Tracking and transfer maps
    Delta set
    Gold copy (protective snapshot)
    Reserved LUNs
  Initial synchronization
  Link requirements
  Limits
Storage-system-based consistency groups
  Benefits of storage-system-based consistency groups
    MirrorView/S
    MirrorView/A
  Managing storage-system-based consistency groups
  Consistency group promote options
MirrorView management
  Required storage system software
  Management topology
    Management network connectivity
    Unisphere domains
  Management software
    MirrorView Wizard
    Mirror LUN properties
    Remote mirror configuration
    Consistency group configuration
    Storage Group configuration
    Write intent log configuration (MirrorView/S)
    Reserved LUNs (MirrorView/A)
    Advanced dialogs
  Unisphere Analyzer
MirrorView failure recovery scenarios
  Primary component failure and recovery
    Path failure between server and primary image
    Primary image LUN failure
    Storage processor (SP) controlling primary image failure
    Primary image storage system failure
  Secondary component failure and recovery
    Secondary image LUN failure
    Storage processor (SP) controlling secondary image failure
    Secondary image storage system (or link) failure
  Full reserved LUN (MirrorView/A only)
    On the primary storage system
    On the secondary storage system
Using SnapView with MirrorView
  LUN eligibility
  Limit dependencies
  Instant restore
  Clone state "Clone Remote Mirror Synching"
Automation and reporting software
  VMware vSphere and Site Recovery Manager (SRM)
  EMC Replication Enabler for Microsoft Exchange Server 2010
  EMC Data Protection Advisor (DPA)
  MirrorView/Cluster Enabler (MV/CE)
  Replication Manager
  AutoStart
  VNX for File and Celerra
  Sybase Mirror Activator
Disk drive technologies and advanced data services
iSCSI connections
Conclusion
References
  White papers
  Product documentation
Appendix A: Remote mirror conditions and image states
  Mirror states
  Secondary image states
  Image condition
Appendix B: Consistency group states and conditions
  Consistency group states
  Consistency group conditions
    MirrorView/S
    MirrorView/A
Appendix C: Storage-system failure scenarios
Appendix D: Data protection roles


Executive summary

EMC® MirrorView™ software offers two complementary storage-system-based remote mirroring products:

MirrorView/Synchronous (MirrorView/S) and MirrorView/Asynchronous (MirrorView/A). For the VNX

series, both MirrorView products are included in the Remote Protection Suite, the Total Protection Pack,

and the Total Efficiency Pack. For CX4 series and prior systems, the synchronous and asynchronous

products are licensed separately or can be purchased as the MirrorView bundle. These products are

designed as block-based storage-system disaster recovery (DR) solutions for mirroring local production

data to a remote/disaster recovery site. They provide end-to-end data protection by replicating the contents

of a primary volume to a secondary volume that resides on a different VNX series or CLARiiON series

storage system. MirrorView data protection is defined as end-to-end because, in addition to performing

replication, it protects the secondary volume from tampering or corruption by only making the volume

available for server access when initiated through MirrorView.

MirrorView/S is a synchronous product that mirrors data in real time between local and remote storage

systems. MirrorView/A is an asynchronous product that offers extended-distance replication based on a

periodic incremental-update model. It periodically updates the remote copy of the data with all the changes

that occurred on the primary copy since the last update.

MirrorView also offers consistency groups, a unique consistency technology for the midrange market that

replicates write-order dependent volumes. Using this technology, MirrorView maintains write ordering

across secondary volumes in the event of an interruption of service to one, some, or all of the write-order

dependent volumes.

Data replication for both MirrorView/A and MirrorView/S is supported over VNX series and CLARiiON’s

native Fibre Channel (FC) and iSCSI ports. Several distance extension technologies, such as Dense Wavelength

Division Multiplexing (DWDM) and FC over Internet Protocol (IP), are supported for use with MirrorView.

Consult the EMC Support Matrix (ESM) for supported products.

Business requirements determine the structure of a DR solution. The business will decide how much data

loss is tolerable, if any, and how soon the data must be accessible again in the event of a disaster. It is

common for businesses to classify their applications and employ a DR strategy for each application,

balancing service levels with cost.

MirrorView/S offers a zero data loss option, while MirrorView/A offers an alternative when minutes of

data loss may be tolerable. Consistency groups enable both MirrorView products to offer application

restart. That is, applications with write-order dependent volumes, like databases, can resume operating

on the secondary image with little to no maintenance required. This is compared to application recovery,

which typically entails restoring from an older copy, such as a backup, and applying changes to bring the

application to its state before the disaster.

RecoverPoint (including RecoverPoint/SE) is an appliance-based DR solution that supports both local and

remote replication. A RecoverPoint/SE license is also included in the Remote Protection Suite, the Total

Protection Pack, and the Total Efficiency Pack.

RecoverPoint performs write journaling; as a result several very granular points of recovery are available. It

supports local replication with full binary copies. It supports remote replication with zero data loss

synchronous and asynchronous options. You should consider RecoverPoint when you need a solution that:

Provides very granular recovery points – updates are made every few seconds as opposed to minutes

Supports heterogeneous and/or non-EMC storage systems. (RecoverPoint/SE supports the platform with which it is sold. For example, if it is purchased with a VNX, it supports local and remote replication with VNX platforms.)

Can switch between synchronous and asynchronous replication on the fly

Supports a recovery point objective (RPO) of seconds

Requires a large number of replicas (100s to 1,000s)


RecoverPoint offers many other benefits such as supporting WAN optimization features like compression

and deduplication. For more information, see the white paper EMC RecoverPoint Family Overview on

Powerlink®.

SAN Copy™

is another storage-system-based remote replication product. It is designed as a multipurpose

replication product for migrations, data mobility, content distribution, and DR. However, SAN Copy does

not provide the complete end-to-end protection that MirrorView provides; instead it provides an efficient

replication mechanism that can be used as part of a DR strategy. SAN Copy is made available at no

additional cost to VNX customers.

SAN Copy typically plays a complementary role with other software to form a robust DR solution.

Applications such as Replication Manager can be used in conjunction with SAN Copy to manage

application integration and consistency across volumes. Table 1 is a comparison of MirrorView,

RecoverPoint, and SAN Copy.

Table 1. MirrorView, RecoverPoint, and SAN Copy use case comparison

Storage system support
  MirrorView: Replication between VNX and/or CLARiiON primary and secondary systems
  RecoverPoint: Replication between VNX, CLARiiON, Symmetrix®, and non-EMC systems
  SAN Copy: Replication between VNX, CLARiiON, Symmetrix, and non-EMC systems

Software location
  MirrorView: Storage-system-based
  RecoverPoint: Appliance-based (splitter software supported on VNX or CLARiiON SP)
  SAN Copy: Storage-system-based

Replication modes
  MirrorView: Sync and async supported at the storage-system level; either sync or async at the LUN level
  RecoverPoint: Replication mode at the LUN level can be dynamically switched between sync and async
  SAN Copy: Full copy or incremental copy

Site-level fan-out/fan-in
  MirrorView: 4:1
  RecoverPoint: 1:2 cascade (stretched CDP + CRR)
  SAN Copy: 1:100

LUN-level fan-out
  MirrorView: 1 primary to 1 secondary async; 2 secondaries sync (1:1 with consistency groups)
  RecoverPoint: 1 primary to 1 secondary async
  SAN Copy: 1 source copied to up to 100 targets

Data protection
  MirrorView: Access to secondary volumes controlled by MirrorView; SnapView can be used to access a replica of the secondary image
  RecoverPoint: Access to secondary volumes controlled by RecoverPoint
  SAN Copy: Remote copy is available for server access; SnapView recommended for data access

Consistency across volumes
  MirrorView: Native consistency group support
  RecoverPoint: Native consistency group support
  SAN Copy: Consistency managed by the user (example: hot backup mode) or another application (example: Replication Manager)

Space reclamation (thick-to-thin replication)
  MirrorView: Limited support – thin attributes are not maintained in many recovery scenarios
  RecoverPoint: Supported
  SAN Copy: Supported in remote pull configuration – ideal for space reclamation during migration

Introduction

This white paper provides an overview of MirrorView software and discusses key software features, functions, and best practices. Also refer to the "References" section of this white paper for related information, including help files, administrator guides, and white papers.


Audience

This white paper is intended for systems integrators, systems administrators, customers, and members of

the EMC and partners’ professional services community. It is assumed that the audience is familiar with

VNX series and CLARiiON storage systems.

Terminology

Terminology, operations, and object statuses are the same for both MirrorView products. The following is a list of frequently used terms and conditions. A more comprehensive list is available in the product documentation, and a complete list of mirror and image states and conditions appears in "Appendix A: Remote mirror conditions and image states."

Primary image – The LUN that contains production data and whose contents are replicated to the secondary image. Also referred to as the primary.

Secondary image – A LUN that contains a mirror of the primary image LUN. Also referred to as the

secondary. This LUN must reside on a different storage system than the primary image.

Image condition – The condition of a secondary image provides additional information about the

status of updates for the image. Values include normal, administratively fractured, system

fractured, queued to be synchronized, or synchronizing.

State – Remote mirror states and image states. The remote mirror states are: Active and Attention.

The image states are: Synchronized, Consistent, Synchronizing, and Out-of-Sync.

Consistency group – A set of mirrors that are managed as a single entity and whose secondary images

always remain in a consistent and recoverable state with respect to their primary image and each other.

Consistency Group State – Indicates the current state of the consistency group: Synchronized,

Consistent, Synchronizing, Out-of-Sync, Scrambled, Empty, Incomplete, or Local.

Consistency Group Condition – Displays more detailed information about the consistency group,

including whether the group is active, inactive, admin fractured, system fractured, waiting on

admin, or invalid.

Fracture – A condition in which I/O is not mirrored to the secondary image; this can be caused when

you initiate the fracture (Admin Fracture) or when the system determines that the secondary image is

unreachable (System Fracture).

Promote – The operation by which the administrator changes an image’s role from secondary to

primary. As part of this operation, the previous primary image becomes a secondary image. If the

previous primary image is unavailable when you promote the secondary image (perhaps because the

primary site suffered a disaster), the software does not include it as a secondary image in the new

mirror. A secondary image can be promoted if it is in either the Synchronized state or the Consistent

state. An image cannot be promoted if it is out-of-sync or synchronizing.

Remote replication overview

Critical business information must always be available. To protect this information, a DR plan must be in place to safeguard against any disaster that could make the data at the primary site unavailable.

Recovery objectives

Recovery objectives are service levels that must be met to minimize the loss of information and revenue in

the event of a disaster. The criticality of business applications and information defines the recovery

objectives. The terms commonly used to define the recovery objectives are recovery point objective (RPO)

and recovery time objective (RTO).



Recovery point objective

Recovery point objective (RPO) defines the amount of acceptable data loss in the event of a disaster. The

business requirement for RPO is typically expressed as duration of time. For instance, application A may

have zero tolerance for loss of data in the event of a disaster. This is typical for financial applications where

all completed transactions must be recovered. Application B may be able to sustain the loss of minutes or hours' worth of data. RPO determines the required update frequency of the remote site. The rate of change

of the information determines how much data needs to be transferred. This, combined with the RPO, has a

significant impact on the distance, protocol, and bandwidth of the link between remote sites.
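As a rough illustration of how the rate of change and the RPO together drive link sizing, the following sketch uses assumed, hypothetical numbers (not figures from this paper) to estimate the data moved per update cycle and the sustained link throughput required:

    # Rough sizing sketch with assumed numbers: on average, the replication link must
    # move data at least as fast as the primary volumes change, and the RPO bounds how
    # long each periodic update cycle may take.
    changed_gb_per_hour = 20.0    # assumed average rate of changed data on the primary
    rpo_minutes = 30.0            # assumed tolerable data loss for this application

    max_update_period_min = rpo_minutes                            # each cycle must fit within the RPO
    data_per_cycle_gb = changed_gb_per_hour * max_update_period_min / 60.0
    sustained_link_mbps = changed_gb_per_hour * 8 * 1024 / 3600.0  # megabits per second

    print(f"data transferred per update cycle: about {data_per_cycle_gb:.1f} GB")
    print(f"sustained link throughput needed: about {sustained_link_mbps:.0f} Mb/s")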

Recovery time objective

RTO is defined as the amount of time required to bring the business application back online after a disaster

occurs. Mission-critical applications may be required to be back online in seconds, without any noticeable

impact to the end users. For other applications, a delay of a few minutes or hours may be tolerable.

Figure 1 shows an RPO and RTO timeline. Stringent RPO and RTO requirements can add cost to a DR

solution. Therefore, it is important to distinguish between absolutely critical business applications and all

other applications. Every business application may have different values for RPO and RTO, based on the

criticality of the application.

Figure 1. RPO and RTO timeline

Replication models

There are a number of replication solutions available to replicate data from the primary (production) to the

secondary (remote) site. These replication solutions can be broadly categorized as synchronous and

asynchronous.

Synchronous replication model

In a synchronous replication model, each server write on the primary side is written concurrently to the

secondary site. The primary benefit of this model is that its RPO is zero, since the transfer of each I/O to

the secondary occurs before acknowledgement is sent to the server. Figure 2 depicts the data flow of

synchronous replication. In case of a disaster at the primary site, data at the secondary site is exactly the

same as data at the primary site at the time of disaster.



Figure 2. Synchronous replication data flow

The main consideration of the synchronous model is that each server I/O is affected by the time it takes to

replicate the data to the remote storage system. Therefore, the distance of the replication may be limited by

the application’s tolerance for latency. Secondly, the link between the primary and remote systems must be

able to handle peak workload bandwidths, which can add cost to the link.
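To make the latency trade-off concrete, the following minimal sketch (illustrative only, not MirrorView code, with assumed timings) simulates the ordering of one synchronous mirrored write, where the acknowledgement to the server is held until the secondary confirms the transfer:

    import time

    LOCAL_WRITE_MS = 0.3   # assumed service time for the write on the primary system
    LINK_RTT_MS = 5.0      # assumed round-trip latency to the secondary system

    def synchronous_write():
        """Return the response time (in ms) the server would observe for one write."""
        start = time.perf_counter()
        time.sleep(LOCAL_WRITE_MS / 1000.0)  # steps 1-2: write received and staged on the primary
        time.sleep(LINK_RTT_MS / 1000.0)     # step 3: data sent to the secondary; wait for its acknowledgement
        # step 4: only now is the write acknowledged back to the server
        return (time.perf_counter() - start) * 1000.0

    print(f"server-visible response time: {synchronous_write():.1f} ms")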

Asynchronous replication model(s)

There are a few general approaches to asynchronous replication and each implementation may vary to some

degree. At the highest level, all asynchronous replication models decouple the remote replication of the I/O

from the acknowledgement to the server. The primary benefit of this is that it allows longer distance

replication because application write response time is not dependent on the latency of the link. The trade-off is that RPO will be greater than zero.

In classic asynchronous models, writes are sent to the remote system as they are received from the server.

Acknowledgement to the server is not held for a response from the secondary, so if writes are coming into

the primary faster than they can be sent to the remote system, multiple I/Os may be queued up on the

source system awaiting transfer.

Figure 3. Asynchronous (periodic update) replication data flow

Periodic update models, as shown in Figure 3, track changes on the primary side and then apply those

changes to the secondary at a user-determined frequency. By tracking data areas that change, it is possible

to send less data if writes occur to the same areas of the volume between updates. Updates are smoothed out over the entire update period, regardless of bursts of writes to the source LUN, allowing for a low-bandwidth, low-cost link between sites.
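The following conceptual sketch (not the actual MirrorView implementation) shows how a tracking map of fixed-size extents lets a periodic-update model transfer only the latest contents of the regions that changed, however many times they were rewritten during the period:

    EXTENT_KB = 128            # assumed tracking granularity
    changed_extents = set()    # extents dirtied since the last update cycle started

    def record_write(offset_kb, length_kb):
        """Mark every extent touched by a server write as changed."""
        first = offset_kb // EXTENT_KB
        last = (offset_kb + length_kb - 1) // EXTENT_KB
        changed_extents.update(range(first, last + 1))

    def start_update_cycle():
        """Freeze the tracking map into a transfer set and start tracking anew."""
        transfer_set = sorted(changed_extents)
        changed_extents.clear()
        return transfer_set

    # Two overlapping writes to the same region cost a single extent at the next update.
    record_write(0, 64)
    record_write(32, 64)
    print(len(start_update_cycle()), "extent(s) queued for transfer")   # prints 1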

MirrorView family

Since it is storage-system-based software, MirrorView does not consume server resources to perform

replication. It can take advantage of Fibre Channel SANs, IP extended SANs, and TCP/IP networks for the

transfer medium. Two complementary products are offered in the MirrorView family.

MirrorView/Synchronous (MirrorView/S) for synchronous replication

MirrorView/Asynchronous (MirrorView/A) for asynchronous replication, which uses the periodic

update model

Both products run on all available storage system platforms, including the VNX, CX4, CX3, and CX series,

and the AX4-5 FC. MirrorView is not available on the VNXe series, CX300, or the AX4-5 iSCSI. For

maximum protection of information, both products have advanced features, including:

Bidirectional mirroring. Any one storage system can host both primary and secondary images as long

as the primary and secondary images within any one mirror reside on different storage systems.



The ability to have both synchronous and asynchronous mirrors on the same storage system.

Consistency groups for maintaining data consistency across write-order dependent volumes.

Mirroring between supported VNX and CLARiiON storage system models.

The ability to mirror over Fibre Channel and iSCSI front-end ports. VNX, CX4 and CX3 series storage

systems with FC and iSCSI front-end ports can mirror over Fibre Channel ports to some storage

systems, while mirroring over iSCSI front-end ports to other storage systems.

MirrorView configurations

MirrorView allows for a large variety of topologies and configurations. In all configurations, the primary

and secondary images must have the same server-visible capacity (user capacity), because they are allowed

to reverse roles for failover and failback.

Servers have access to all of the capacity of a traditional LUN. You set thick LUN and thin LUN user

capacity during the LUN creation process. You set a metaLUN’s server-visible capacity using the user

capacity parameter. When mirroring between LUN types, the user capacities of the two LUNs in the mirror

(whether it is traditional-LUN capacity, metaLUN-user capacity, thin LUN, or thick LUN) must be the

same.

The following sections describe configuration options between primary and secondary storage systems and

primary and secondary images.

Bidirectional mirroring

Storage systems running MirrorView can simultaneously house primary volumes for some applications and

secondary volumes for others. Each mirrored relationship can run in either Synchronous or Asynchronous

mode. In such instances, system sizing considerations should be well thought out in case either site has to

run both primary and secondary applications during an outage.

Figure 4. Bidirectional mirroring

System level fan-in/fan-out

MirrorView supports connections to up to four other MirrorView systems. Each system can have primary

images and/or secondary images with any of the systems with which it has a MirrorView connection. Each

one of the primary/secondary pairs can be synchronously or asynchronously mirrored allowing for specific

levels of protection at the application level.



Figure 5. MirrorView 4:1 fan-out/fan-in ratio

LUN level fan-out

MirrorView/S allows for a 1:2 fan-out ratio. Each primary volume mirrored synchronously can have one or

two secondary images. The secondary images are managed independently, but must reside on separate

storage systems. No secondary image can reside on the same storage system as its primary image. When

using consistency groups, MirrorView/S’s LUN fan-out ratio is 1:1.

Figure 6. MirrorView/S 1:2 fan-out

MirrorView/A allows for one secondary image per primary image as depicted in Figure 7.


Figure 7. MirrorView/A 1:1 fan-out

MirrorView over iSCSI

MirrorView over iSCSI is supported between FC/iSCSI systems. FC/iSCSI storage systems include all

VNX series, CX4 series systems and the CX3-10 through CX3-40. The storage systems must also be



running a minimum of release 26 FLARE code. It is possible to have iSCSI connections between FC/iSCSI

systems and CX300i and CX500i storage systems, but those connections are only valid for SAN Copy use.

The connection type (Fibre Channel or iSCSI) is determined for each storage system pair. A system can

mirror over iSCSI to one or more storage systems while mirroring over Fibre Channel to other storage

systems. Any combination of iSCSI and Fibre Channel MirrorView connections is possible within the 4:1

storage system fan-out ratio discussed in the "System level fan-in/fan-out" section.

The MirrorView Wizard supports both Fibre Channel and iSCSI connections. If Fibre Channel and iSCSI

connectivity are both present, you can simply choose which connection type to use in the wizard. If

connectivity exists over the preferred link, the wizard performs all setup and configuration steps. The

MirrorView Wizard is discussed in detail in the "MirrorView management" section of this paper.

MirrorView ports are discussed in the "MirrorView ports and connections" section. Additional information for advanced users is available in the "iSCSI connections" section.

Although VNXe storage systems support iSCSI, replication is not achieved using MirrorView or SAN

Copy. With the use of Replication Manager, the native remote replication capability of VNXe systems can

be used to replicate iSCSI LUNs and file shares between other VNXe systems or between VNXe and VNX

series systems. The technote Replication VNXe iSCSI to VNX using Replication Manager is available on

PowerLink for more information.

MirrorView ports and connections

MirrorView operates over discrete paths between SPs of the primary system and the secondary system. All

MirrorView traffic goes through one port of each connection type (FC and/or iSCSI) per SP. For systems

that only have FC ports, MirrorView traffic goes through one FC port of each SP. For FC/iSCSI systems,

there is one FC port and one iSCSI port available for MirrorView traffic. A path must exist between the

MirrorView ports of SPA of the primary and SPA of the secondary system. The same relationship must be

established for SP B.

The MirrorView ports are automatically assigned when the system is initialized. Systems shipped from the

factory with FC and/or iSCSI I/O modules will arrive with their MirrorView ports persisted. MirrorView

port designations do not change, even if additional I/O modules are added at a later time. For systems that

add the first FC or iSCSI I/O module after the initial configuration, the MirrorView port for that protocol is

designated as the new I/O module is persisted. For instance, when adding the first iSCSI I/O module to a

system that has only FC connectivity, the MirrorView port is assigned automatically as the iSCSI I/O

module is persisted.

VNX series systems assign the lowest logical port number of each supported type (FC or iSCSI) for use by

MirrorView. For the VNX5100, VNX5300, and VNX5500, logical ports SPA 0 and SPB 0 will be the FC MirrorView port and correspond to physical port 2 of the embedded mezzanine ports. The iSCSI MirrorView port will be assigned once an iSCSI I/O module is persisted in I/O module slot 0 or 1.

Connectivity is highly customizable for VNX 5700 and VNX 7500 systems. Out of a total of 5 I/O module

slots, up to 4 can be available for protocol assignment by the user. There is a minimum of one SAS I/O

module in slot 0 that is used for back-end SAS connectivity to disk enclosures. As the first FC I/O module

or iSCSI I/O module is persisted, the MirrorView port is assigned as the lowest logical port number of each

type. The physical location will depend on which slot the I/O module is installed in. For example, if a unified VNX5700 is ordered from the factory with FC and iSCSI I/O modules for SAN host attachment, the port assignment would be the following:

Table 2. I/O module slot and MirrorView port assignments of a sample VNX5700 configuration

  Slot A0/B0 (SAS): Connection to disk enclosures
  Slot A1/B1 (FC): Physical port 0 (logical ports A0/B0) is the FC MirrorView port
  Slot A2/B2 (iSCSI): Physical port 0 (logical ports A4/B4) is the iSCSI MirrorView port
  Slot A3/B3 (Empty): Open slot for future expansion
  Slot A4/B4 (FC): FC X-Blade connections for NAS

Slot 0 is allocated to the SAS I/O module that is required for back-end disk connectivity. Slot 1 contains the first FC I/O module, which has the lowest-numbered FC ports (SPA 0/SPB 0); these are designated as the MirrorView ports. Slot 2 has the first iSCSI module, and thus contains the lowest-numbered iSCSI ports. Physical port 0 on that I/O module corresponds to logical ports SPA4/SPB4, because the FC I/O module has four ports (SPx0-SPx3). Therefore SPA4/SPB4 are the iSCSI MirrorView ports.

The CX4 series also offers flexibility in the number and type of front-end ports on each system. The

MirrorView port is assigned to the highest-numbered port of each type. Like VNX systems, MirrorView

ports based are automatically designated on the initial port configuration. The MirrorView ports will not

change, even if additional ports are added at a later time.

For example, the base configuration for a CX4-960 is four Fibre Channel and two iSCSI front-end (FE)

ports per SP. This represents ports 0-3 for Fibre Channel and ports 4-5 for iSCSI. If the customer orders the

base configuration, the Fibre Channel MirrorView port is port 3 and the iSCSI MirrorView port is port 5.

If an additional four Fibre Channel ports and/or an additional two iSCSI ports are added to the system at a

later time, the MirrorView ports remain the same, port 3 and port 5. Table 3 lists the base FE port

configurations for the CX4 series and the MirrorView ports.

Table 3. CX4 series base configuration front-end port numbering and MirrorView ports

Model FC ports iSCSI ports FC MV port iSCSI MV port

CX4-960 0-3 4-5 3 5

CX4-480 0-3 4-5 3 5

CX4-240 0-1 2-3 1 3

CX4-120 0-1 2-3 1 3

If a CX4 system is ordered from the factory with a configuration other than the base configuration, then the

MirrorView port assignment is based on that specific configuration. For example, if a CX4-960 is ordered

with eight Fibre Channel FE ports (ports 0-7) and two iSCSI FE ports (ports 8-9), then the MirrorView

ports will be port 7 for Fibre Channel and port 9 for iSCSI. These MirrorView port assignments will not

change, even if additional ports are added to the system at a later time.
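The assignment rules described above can be summarized in a short sketch (illustrative only; the port lists are assumed rather than queried from a live system). VNX systems take the lowest logical port of each type, while CX4 systems take the highest-numbered port of each type present in the initial configuration:

    # Assumed front-end port layout of a CX4-960 base configuration (see Table 3).
    front_end_ports = {"fc": [0, 1, 2, 3], "iscsi": [4, 5]}

    def mirrorview_ports(ports_by_type, family):
        """Pick the MirrorView port per protocol: lowest logical port for VNX, highest for CX4."""
        pick = min if family == "vnx" else max
        return {ptype: pick(ports) for ptype, ports in ports_by_type.items() if ports}

    print(mirrorview_ports(front_end_ports, "cx4"))   # {'fc': 3, 'iscsi': 5}, matching Table 3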

The VNX and CX4 series offer several FE connectivity options. A 4-port I/O module is available for 8

Gb/s FC connectivity. 4-port 1 Gb/s and 2-port 10 Gb/s I/O modules are available for iSCSI connectivity. A

2-port FCoE module is also available for host connectivity, but is not supported for use by MirrorView.

I/O modules in CX4 series systems can be upgraded to newer and faster I/O modules. 4 Gb/s FC (not

offered for VNX series) I/O modules can be upgraded to 8 Gb/s modules. Also, 1 Gb/s iSCSI I/O modules

can be upgraded to 10 Gb/s I/O modules. This allows users to upgrade connectivity options for

MirrorView.

For the CX3 series, CX series, and AX4-5 FC, MirrorView operates over one predetermined Fibre Channel

port and/or one predetermined iSCSI port per storage processor (SP). The MirrorView port assignments are

always the same for these systems. Table 4 lists the MirrorView port assignments for each CX3 series

system and the AX4-5 FC.

Table 4. Front-end port numbering and MirrorView port by system type

Model FC MV port iSCSI MV port

CX3-80, CX700, CX600 FC 3 --


CX3-40*, CX3-20*, CX500, CX400 FC 1 --

CX3-20, CX3-40 FC/iSCSI 5 3

CX3-10 FC/iSCSI 3 1

AX4-5 FC 1 --

*The MirrorView port (port 1) is the same across all CX3-20 and CX3-40 Fibre Channel-only models.

You can use Unisphere to view MirrorView ports. In Release 30 (Unisphere 1.0), you can see MirrorView

ports in the System > Hardware view. For Release 31 (Unisphere 1.1 and higher), you can also see

MirrorView ports in the <Storage System Name> > Settings > Network Settings for Block view. The MirrorView port is displayed with a unique icon, as shown in Figure 8. In this case, the system is a VNX5300, so the FC MirrorView ports are physically located on "Onboard Port 2" of

each SP. These ports equate to Logical Port 0 (A-0, B-0). Physical ports 0 and 1 are SAS backend ports.

The lower arrow points to a port in I/O module slot 0, where a 4-port iSCSI module is installed. Physical

port 0 is the lowest numbered iSCSI port in the I/O module. In this case, it equates to logical port 4.

Figure 8. Physical and Logical Port assignments viewed in Unisphere.

The same information is available through the Navisphere Secure CLI ioportconfig -list command. "Port usage" is one of the fields returned for each port. If the port is used for host I/O, port usage is "Normal". If the port is used by MirrorView, port usage is "Special". Figure 9 shows the output of the ioportconfig -list command for the system shown in Figure 8. (The display of the output has been manipulated to better fit the figure, but the values are unchanged.)

The information about each I/O module on SP A is as follows:

  SP ID: A

  I/O Module Slot: Onboard
  I/O Module Type: Multi Protocol
  I/O Module State: Present
  I/O Module Substate: Good
  I/O Module Power state: N/A
  I/O Carrier: No

  I/O Module Slot: 0
  I/O Module Type: iSCSI
  I/O Module State: Present
  I/O Module Substate: Good
  I/O Module Power state: On
  I/O Carrier: No

The information listed about each port on these I/O modules is as follows:

  Physical Port ID: 2
  Port Role: FE
  Logical Port ID: 0
  Port Usage: Special
  Port Type: Fibre Channel
  Port State: Enabled
  Port Substate: Good
  Is Persisted: Yes

  Physical Port ID: 0
  Port Role: FE
  Logical Port ID: 4
  Port Usage: Special
  Port Type: iSCSI
  Port State: Enabled
  Port Substate: Good
  Is Persisted: Yes

Figure 9. Output of the naviseccli ioportconfig -list command

Host attach protocols are unrelated to the protocol used for MirrorView replication. Therefore, the protocol

used to attach the hosts to the primary/secondary volumes can be either Fibre Channel, FCoE, or iSCSI,

regardless of the protocol used by MirrorView.

Ports used by MirrorView can be shared with server traffic; adding traffic over this port, however, may

impact the performance of MirrorView and vice versa. Therefore, when possible, EMC recommends you

dedicate a port to MirrorView. The MirrorView port cannot be shared with either SAN Copy or the

RecoverPoint array-resident splitter. RecoverPoint only supports FC connections to the storage system.

SAN Copy and the array resident RecoverPoint splitter can run on any FC front-end port(s) other than the

MirrorView ports. RecoverPoint host-based or SAN-based splitters can share the FC MirrorView port as all

I/O is considered host traffic.

Connections should exist between SPA MirrorView ports and SPB MirrorView ports, but they should not

be cross connected. For Fibre Channel SANs, place the SP A MirrorView port and SP B MirrorView port

in separate zones. If there are servers zoned across SPs and storage systems, create multiple zones for each

server initiator so that SP A and SP B MirrorView ports remain separated. Furthermore, iSCSI connections

should be made separately between SPA MirrorView ports and SPB MirrorView ports. However, this is

handled by the MirrorView Wizard.

EMC also recommends that for high-availability configurations, connectivity for SP A and SP B reside in

separate physical networks, whether it is a Fibre Channel SAN or a TCP/IP network. This is illustrated in

Figure 10, which depicts a VNX 5700 mirrored to a VNX 5700 storage system, "Site 1" and "Site 2." The

Site 1 and Site 2 systems have both Fibre Channel and iSCSI connectivity between them. These systems

are used in several figures and examples throughout this paper.

Figure 10. Example of MirrorView port zoning



SP ownership of the mirror is determined by the ownership of the primary image. Secondary images have

the same SP ownership on the secondary storage system as the primary image has on the primary storage

system. Therefore, connections for both SPs must be defined to allow failover from one SP to the other.

Failover is discussed in detail in the section "MirrorView failure recovery scenarios."

Individual SP port connectivity can be verified in the Connectivity Status dialog box in Unisphere. The

Connectivity Status dialog box shows all initiators logged in to the system’s front-end ports. There are

separate tabs for viewing initiator records of hosts, mirrored storage systems, and SAN Copy connected

storage systems. The Connectivity Status dialog box for the Site 1 storage system (using the system names

shown in the above figure) is illustrated in more detail in Figure 11.

The first two entries in the dialog box show that Site 2’s Fibre Channel SPA-0 and SPB-0 ports are logged

in and registered to Site 1's FC MirrorView ports SPA-0 and SPB-0. The second two entries show that there are also iSCSI connections between the iSCSI MirrorView ports. Ports A4/B4 of Site 2 are logged into

ports A4/B4 of Site 1.

Figure 11. Connectivity Status dialog box

The Initiator Name column shows the 8-byte WWN and 8-byte WWPN combination for Fibre Channel

initiators and the IQN number for iSCSI initiators logged in to the Site 1 system. You can compare these

with the initiator ports of the Site 2 system. You can verify the WWPN and IQN numbers in the Settings

for Block view within Unisphere as shown in Figure 8.

After connectivity is established, MirrorView connections can be created to specify which connected arrays

have mirrored relationships. MirrorView connections are automatically created when connectivity exists

and you create the first mirror between storage systems using the MirrorView Wizard (launched from the

Replicas page in Unisphere).

You can also create MirrorView connections in the Unisphere Manage Mirror Connections (more

commonly called the MirrorView Connections) dialog box or in the Command Line Interface (CLI). You

can open the Manage Mirror Connections dialog box in Unisphere’s Data Protection > Mirrors and

Replications > LUN Mirrors page by selecting Manage Mirror Connections in the Protection task

menu. The following Navisphere CLI commands enable and disable MirrorView connections:

naviseccli mirror -enablepath <SP hostname or IP of the peer system> -connectiontype fibre|iSCSI

naviseccli mirror -disablepath <SP hostname or IP of the peer system> -connectiontype fibre|iSCSI


(For simplicity, the required credentials and SP name/IP address for naviseccli are not shown in these commands.)
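For illustration, a complete command directed at a hypothetical primary SP at 10.1.1.10, enabling a Fibre Channel MirrorView connection to a peer SP at 10.2.2.10, might look like the following (the addresses and credentials are placeholders; adjust the global switches to your environment):

    naviseccli -h 10.1.1.10 -User admin -Password mypassword -Scope 0 mirror -enablepath 10.2.2.10 -connectiontype fibre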

Figure 12 shows the Manage Mirror Connections dialog box for the storage system named Site 1, which

has a MirrorView connection configured with the Site 2 storage system. The systems shown in Figure 11

are both zoned in a Fibre Channel SAN and have iSCSI connections defined. By looking at the Type

column and the Logged-In Status columns, you can see that the MirrorView ports of SPA and SPB are

logged in to one another.

Figure 12. Manage Mirror Connections dialog box

If both connectivity types exist, users can switch between Fibre Channel and iSCSI mirror connections with

relatively little impact. Mirrors will be able to incrementally synchronize on the new link type after the

change. A full synchronization is not required.

At the time the change is made, MirrorView/A mirrors must be "Admin Fractured", while MirrorView/S mirrors will fracture automatically. An explicit Synchronize command is needed to make MirrorView/A resume replication. MirrorView/S will synchronize on the new link if the Automatic Recovery Policy is set. If the mirror is set to Manual Recovery, an explicit Synchronize command is required to resync the mirrors.

It should be noted that if the MirrorView connection is over iSCSI, iSCSI connections are created prior to enabling the MirrorView connection. iSCSI connections establish the initiator/target relationship and

login behavior between the iSCSI MirrorView ports. However, iSCSI connections are automatically

created for the user when the first MirrorView connection is created by the wizard and/or the Manage

Mirror Connections ("MirrorView Connections") dialog box. Therefore, there are a very limited number of instances where the user will explicitly configure iSCSI connections for MirrorView. Explicit iSCSI connection management is discussed in the "iSCSI connections" section of this paper.

Common operations and settings

The following sections outline aspects of common operations for MirrorView/S and MirrorView/A. More specific information for either product is included in the product-specific sections where necessary.

Synchronization

Synchronization is a copy operation MirrorView performs to copy the contents of the primary image to the

secondary image. This is necessary to establish newly created mirrors or to re-establish existing mirrors

after an interruption.


Initial synchronization is used for new mirrors to create a baseline copy of the primary image on the

secondary image. In almost all cases, when a secondary image is added to a mirror, initial synchronization

is a requirement. In the special case where the primary and secondary mirror LUNs are newly created and

have not been assigned to a server, the user has the option of deselecting the initial sync required option,

which is selected by default. In this case, MirrorView software considers the primary and secondary mirror

to be synchronized. Use this option with caution and only under restricted circumstances. If there is any doubt as to whether the primary LUN has been modified, use the default setting of initial sync required.

Primary images remain online during the initial synchronization process. During the synchronization,

secondary images are in the synchronizing state. Until the initial synchronization is complete, the

secondary image is not in a usable state. If the synchronization is interrupted, the secondary image will be in the out-of-sync state, indicating the secondary data is not in a consistent state.

After the initial synchronization, MirrorView/S and MirrorView/A have different synchronization

behaviors due to the nature of their replication models. MirrorView/S synchronizes only after an

interruption, because under normal conditions all writes are sent to the secondary as they occur. After the

initial synchronization, MirrorView/A images do not return to the synchronizing state under normal

operating conditions. Secondary images are in the updating condition while incremental updates are in

progress.

Synchronization rate

Synchronization rate is a mirror property that sets the relative value (Low, Medium, or High) for the priority of completing updates. The setting affects initial synchronizations, MirrorView/S

resynchronizations, and MirrorView/A updates. Medium is the default setting, but can be changed at any

time.

Low and Medium settings have low impact on storage system resources such as CPU and disk drives, but

may take longer to complete a transfer. Synchronizations at the high setting will complete faster, but may

affect performance of other operations such as server I/O or other storage system software.

Promoting a secondary image

A secondary image is promoted to the role of primary when it is necessary to run production applications at

the DR site. This may be in response to an actual disaster at the primary site, part of a migration strategy, or

simply for testing purposes. A secondary image can be promoted if it is in the consistent or synchronized

state. The definitions for consistent and synchronized image states are:

Synchronized – The secondary image is identical to the primary image. This state persists only

until the next write to the primary image, at which time the image state becomes consistent. After

promoting an image in the synchronized state, replication is reversed and synchronization is not

needed.

Consistent – The secondary image is identical to either the current primary image or to some

previous instance of the primary image. This means that the secondary image is available for

recovery when promoted. After promoting a secondary image in the consistent state, a full

synchronization is needed from the new primary image to the new secondary image (old primary).

When a promote occurs, the secondary image is placed in a new mirror as a primary image. The new mirror

has the same mirror name, but a different mirror ID. The existing mirror is either maintained or destroyed

depending on the nature of the failure and whether in-band communication remains between the primary

and secondary images. Note that it is not necessary or recommended to fracture a mirror prior to promoting

a secondary image.

Also note that the image state is dictated by the storage system that has the primary image. If using

Navisphere CLI to check the secondary image state, direct the command to the primary storage system


whenever possible. If the primary storage system is not available, the image state can be obtained from the

secondary storage system.

Table 5. Mirror attributes before and after promote

Mirror attribute     Before promote     After promote

Mirror name          XYZ                XYZ
Mirror ID            aaa                bbb
Primary image        LUN xxx            LUN yyy
Secondary image      LUN yyy            LUN xxx

The promote options based on these conditions are:

Normal promote – The storage system executes a normal promote when the promote command is issued

to a secondary image in the synchronized state and an active link exists between the primary and

secondary images. Images in the synchronized state are exact, block-for-block copies of the primary image.

When promoted, the secondary image becomes the primary image for the new mirror and the original

primary becomes the secondary image. The old mirror is destroyed, so the user is left with a mirror of the

same name, whose images have opposite roles than the original mirror. I/O can then be directed to the new

primary image, and no synchronization is required.

Force promote – Force promote is used when a normal promote fails. A normal promote may fail for a

variety of reasons, including that the secondary image is in the consistent state, the mirror is fractured,

and/or there is no active link between images. Active primary images often have corresponding secondary

images in the consistent state. Images in the consistent state are in a recoverable/usable state but are not

exact copies of the primary. Therefore, if the original primary is placed in the new mirror as a secondary, a

full synchronization is necessary.

If there is an active link, force promote places the original primary in the new mirror as a secondary image.

The original mirror is destroyed, so there is only one mirror with the original mirror name. The user would

then issue a synchronize command to start the full synchronization.

If the secondary image state is synchronized or consistent when the promote is issued, but the link between

them is not active, force promote does not place the original primary as a secondary of the new mirror.

Instead, the original secondary is placed in a new mirror as a primary, but with no secondary image. The

original primary remains in the original mirror, but there is no secondary image. The mirrors have the same

name, but different mirror IDs. Both primaries remain available for host access, and when the link is back

up, secondary images can be added to one or both images.

Local only promote - Like force promote, this option promotes the secondary without adding the original

primary as a secondary image. The original mirror and the new mirror exist with primary images. However,

local only promote also allows this type of promote with an active link. Use this option to recover changes

made since the last update to the original primary. This option is available for individual MirrorView/A

mirrors, MirrorView/A consistency groups, and MirrorView/S consistency groups. It is not available for

individual MirrorView/S mirrors.
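To summarize the behavior described above, the following Python sketch models how the promote type interacts with the image state and the link. It is purely illustrative; the function and state names are assumptions for this example and are not an EMC API.

    # Illustrative model of MirrorView promote behavior (not an EMC API).
    def promote(image_state, link_active, promote_type="normal"):
        """Return a description of the promote outcome.

        image_state  -- "synchronized" or "consistent"
        link_active  -- True if the link between primary and secondary is up
        promote_type -- "normal", "force", or "local_only"
        """
        if promote_type == "normal":
            if image_state == "synchronized" and link_active:
                # Roles swap within one surviving mirror; no sync needed.
                return "roles reversed, old mirror destroyed, no synchronization required"
            raise RuntimeError("normal promote failed; consider force or local-only promote")

        if promote_type == "force":
            if link_active:
                # The old primary becomes the secondary of the new mirror.
                return "roles reversed; issue a synchronize command for a full synchronization"
            # No link: two mirrors share the same name, each with only a primary image.
            return "secondary promoted into a new mirror with no secondary image"

        if promote_type == "local_only":
            # The original primary is never added as a secondary, even with an active link.
            return "secondary promoted into a new mirror; original mirror keeps its primary"

        raise ValueError("unknown promote type")

For example, promote("consistent", True, "force") returns the case that requires a full synchronization, which matches the force promote description above.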

Promoting with MirrorView/S

When promoting with MirrorView/S, if the image state and link conditions are met for a normal promote, a

normal promote is executed. If the conditions for normal promote are not met, MirrorView/S will execute a

force promote.

MirrorView/Synchronous uses the Quiesce Threshold setting to determine when an image moves from the

consistent state to the synchronized state. The Quiesce Threshold setting defines the amount of time, in

seconds, MirrorView waits before transitioning the secondary image from the consistent state to the

synchronized state when there is no write activity. It should be set to the amount of time it takes a server to

flush its buffers to disk when write activity is halted. The Quiesce Threshold is a property of the mirror and

can be changed at any time. Its default value is 60 seconds.


To perform a controlled failover to the secondary site and fail back without conducting a full

synchronization, you must:

1. Quiesce I/O to the primary image.

2. Wait for the secondary image to become "Synchronized."

3. Promote the secondary image.

4. Resume I/O to the new primary image (the previous secondary).

Promoting with MirrorView/A

A secondary image will be in the synchronized state if I/O is stopped and a subsequent update is started

and completed before I/O resumes to the primary image. This occurs in a controlled failover case, such as

a data migration or a DR site test.

MirrorView/A secondaries are usually in the consistent state when the primary is active. A delta between

primary and secondary images is expected in an asynchronous model. Therefore, users should expect to

promote a secondary in the consistent state in the event of a disaster.

The following steps outline how to conduct a normal promote to fail over to the secondary site and fail

back to the primary without requiring a full synchronization:

1. Quiesce I/O to the primary image (unmounting is recommended).

2. Perform an update, so that the secondary becomes "synchronized."

3. Promote the secondary image.

4. Resume I/O to the new primary (previous secondary) image.

In cases where a MirrorView/A update is in progress at the time of a site failure, the secondary can still be

recovered. This is possible because MirrorView/A maintains a protective snapshot (gold copy) of the

secondary image. When the promote command is issued to the secondary, MirrorView/A rolls the contents

of the gold copy back to the secondary image, and makes the secondary image available for access. The

rollback only occurs if the image is promoted. In cases where the primary is recovered and the secondary is

not promoted, the update will be allowed to continue. The gold copy process is discussed in detail in the

"Data protection mechanisms" section of this paper.

Fracture

A fracture stops MirrorView replication from the primary image to the secondary mirror image.

Administrative (Admin) fractures are initiated by the user when they wish to suspend replication as

opposed to a system fracture, which is initiated by MirrorView software when there is a communication

failure between the primary and secondary systems.

For MirrorView/S, this means that writes continue to the primary image, but are not replicated to the

secondary. Replication can resume when the user issues the synchronize command. For MirrorView/A, the

current update (if any) stops, and no further updates start until a synchronize request is issued. The last

consistent copy remains in place on the secondary image if the mirror was updating.

Although not required, Admin fracture may be used to suspend replication while conducting preventive

maintenance, such as software updates to connectivity elements and/or storage arrays. It is common to have

components restart and/or run in degraded mode for some portion of the maintenance. During this time,

users may choose to minimize the number of concurrent operations in the environment. After the

maintenance is complete, a synchronization can be conducted to resume steady state operations.

Recovery policy

The recovery policy determines MirrorView’s behavior in recovering from link failure(s). The policy is

designated when the mirror is created but can be changed at any time. There are two settings.


Auto Recovery – Option to have synchronization start as soon as a system-fractured secondary image

is reachable. Does not require human intervention to resume replication and is the default setting.

Manual Recovery – Option to have MirrorView wait for a synchronization request from the user

when a system-fractured secondary image is reachable. Intervention may be desired for several

reasons. Users may choose to control link usage in a degraded link situation and/or to coordinate the

synchronization with other operations like starting snapshots, fracturing clones, and so on.

MirrorView will automatically set the recovery policy to Manual after a promote is executed. The policy is

set to manual to give users control over full synchronization when it is required.

Minimum required images

Minimum required images specifies the number of secondary images that must be active for the mirror

not to enter the Attention State. Possible values are 0, 1, or 2 for MirrorView/S and 0 or 1 for

MirrorView/A. The default value is 0. Mirroring continues while in the attention state if there are any

active secondaries. Users can set up alerts through Navisphere’s Event Monitor feature (Event Code

0x71050129) to alert them if a mirror enters the Attention State.
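As a simple illustration of this setting, the Attention State can be thought of as a comparison between the number of active secondary images and the configured minimum. The sketch below is illustrative only; the function name and arguments are assumptions, not product code.

    # Sketch: a mirror enters the Attention State when its active secondaries
    # drop below the configured minimum (0, 1, or 2 for MV/S; 0 or 1 for MV/A).
    def in_attention_state(active_secondaries, minimum_required):
        return active_secondaries < minimum_required

    # Minimum required images = 1 and the only secondary is fractured: Attention State.
    assert in_attention_state(active_secondaries=0, minimum_required=1) is True
    # Mirroring continues without the Attention State while the minimum is met.
    assert in_attention_state(active_secondaries=1, minimum_required=1) is False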

Interoperability

MirrorView can be used with many storage-system-based replication and migration products. The following

sections address common interoperability questions at a high level. For each case, there are documents that

provide more specific and detailed implementation guidelines. Those documents should always be

consulted when designing a solution.

Storage-system based replication software

MirrorView can be used with other storage system based replication software. For example, SnapView

snapshots and clones are often used for local replication of MirrorView images. The tables below

summarize storage-system-based replication software interoperability with MirrorView. For a

comprehensive description of interoperability, see the EMC VNX Open Systems Configuration Guide and

the EMC CLARiiON Open Systems Configuration Guide for CX4 Series, CX3 Series, CX Series, and AX4-5

Series Storage Systems on EMC Powerlink.

Table 6. Storage system replication software interoperability

                                  ------------------------ Potential usages ------------------------
LUN type                          Make a       Make a      Use as source     Use as source    Use as source for
                                  snapshot     clone       for MirrorView?   for full         incremental
                                  of it?       of it?                        SAN Copy?        SAN Copy?

LUN, metaLUN, thick LUN, or       Yes          Yes         Yes (MV/A or      Yes              Yes
thin LUN not yet replicated 1,2                            MV/S, not both)

Clone                             Yes          No          No                Yes              Yes

Snapshot                          No           No          No                Yes              No

MV primary                        Yes          Yes 3       No                Yes              Yes

MV secondary                      Yes          Yes 3       No                No               Yes

Notes:

1. Includes any LUN used as a destination for either a full or incremental SAN Copy session.

2. Thin LUNs may only be remotely replicated to storage systems running release 29 or later.

3. For AX4-5, you cannot make a clone of a mirror.


Virtual LUNs

Pool LUNs

Virtual LUNs can be deployed as RAID group (traditional) LUNs or pool-based (thick and thin) LUNs.

RAID group LUNs offer a high degree of control of data placement and are used for environments that

demand high performance. Pool-based LUNs offer intelligent and automated data placement, auto tiering,

and capacity on demand. Any or all LUN types can coexist in a storage system. Virtual LUN technology

allows each LUN type to be migrated to different physical disks and/or LUN types, while remaining online

for applications. For detailed information on pool-based LUNs, refer to the EMC VNX Virtual Provisioning

and EMC CLARiiON Virtual Provisioning white papers on Powerlink.

Pool-based LUNs consist of thick LUNs and thin LUNs. Pools offer automatic implementation of best

practices and automation of basic capacity-allocation tasks. The consumed capacity of a pool LUN defines

the amount of physical capacity that has been allocated to the LUN. User capacity represents the server-visible capacity of a pool LUN. Thick LUNs consume their total user capacity plus a minimal amount for

metadata from the pool when they are created. Therefore, thick LUNs are the pool-based equivalent of

traditional RAID-group LUNs. Thick LUNs are intended for applications with higher performance

requirements, while offering the ease-of-management features that pool-based LUNs offer. Thick LUNs are

available in release 30, and are supported by all VNX block-based and CLARiiON replication products.

Thin LUNs use a capacity-on-demand model for allocating physical disk capacity. Subscribed capacity is

the total user capacity for all LUNs in a pool. Subscribed capacity can exceed the total physical capacity of

the pool when thin LUNs are deployed. When creating a mirror of a thin LUN, the user capacity is used to

determine which LUNs are eligible for source/destination pairs. When replicating from a thin-LUN primary

to a thin-LUN secondary, only the consumed capacity of the thin LUN is replicated.

It is possible to replicate a thin LUN to a thick LUN, traditional LUN, or a metaLUN. The user capacity for

a thin LUN must be equal to the user capacity of the target LUN. When a thin LUN is replicated with

MirrorView to other LUN types, the thin LUN initially retains its thin properties. Zeros are written to the

destination for areas of the thin LUN that have yet to be allocated. However, the thin LUN becomes fully

consumed if a failback performs a full synchronization from the secondary image to the (thin) primary

image.

It is also possible to mirror a traditional LUN or metaLUN to a thin LUN. In this case, the thin LUN

becomes fully consumed when a full synchronization is performed from the primary image to the (thin)

secondary image. You can prevent this at the initial synchronization; this is described in the

―Synchronization‖ section. However, a full sync may not be avoidable in some failover and failback cases.

MetaLUNs

MetaLUNs have two capacity attributes: total capacity and user capacity. Total capacity is the maximum

capacity of the metaLUN in its current configuration. Additional capacity can be added to a metaLUN to

increase its total capacity. User capacity is the amount of the total capacity that is presented to the server.

Users have control of how much of the total capacity is presented to the server as user capacity. When the

total capacity is increased, user capacity also can be increased. MirrorView looks at user capacity to

determine which LUNs can be in a mirrored relationship.
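As noted above, eligibility for a mirrored pair is based on user capacity rather than consumed or total capacity. The short sketch below illustrates that check; the dictionary field names are assumptions for this example only.

    # Sketch: MirrorView pairs LUNs by comparing user (server-visible) capacity.
    # Consumed capacity (thin LUNs) and total capacity (metaLUNs) are not compared.
    def eligible_mirror_pair(primary, secondary):
        return primary["user_capacity_gb"] == secondary["user_capacity_gb"]

    thin_primary = {"user_capacity_gb": 500, "consumed_capacity_gb": 120}
    metalun_secondary = {"user_capacity_gb": 500, "total_capacity_gb": 750}
    assert eligible_mirror_pair(thin_primary, metalun_secondary)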

LUN shrink and LUN expansion

LUN shrink and LUN expansion operations are supported on all LUN types. LUN shrink is available for

Windows 2008 servers and requires Solutions Enabler to coordinate file system activities with the

reduction in LUN capacity. LUN expansion is performed via metaLUNs for RAID-group-based LUNs. In

release 30 and higher, pool-based LUNs can be expanded on the fly. Any increase in pool LUN capacity is

available almost instantaneously. For thick LUNs, expansion only involves assigning additional 1 GB slices

to the LUN. For thin LUN expansion, the system simply increases the LUN’s user capacity while

consumed capacity is unaffected by the operation.


When expanding a LUN that is using MirrorView, SnapView, or SAN Copy, user capacity cannot be

increased until the replication software is removed from the LUN. Since metaLUNs allow for separate

control of total capacity and user capacity, the total capacity can be increased while the replica is in place.

For example, assume that a user wants to stripe-expand a LUN using metaLUNs. When a LUN is stripe

expanded, data is restriped over added disks before the additional user capacity is available. You can

minimize the time the LUN is not being mirrored by allowing the striping operation to complete before

removing the mirror from the LUN to increase user capacity. Increasing metaLUN user capacity is a nearly

instantaneous process.

Once user capacity is increased, the mirror can be re-created on the LUN. To use the same secondary

image, user capacity has to be increased to the same user capacity as the primary image.

If I/O continues while the mirror is removed, a full synchronization is required when the mirror is re-created. A full synchronization can be avoided if I/O can be quiesced while the mirror is removed and user

capacity is increased. The following steps are recommended to avoid a full synchronization.

1. Quiesce I/O and remove either the host or LUNs from the storage group. You must make sure that there is no I/O or discovery of the new capacity while there is no mirror in place.

2. Ensure the secondary image is in the synchronized state as reported by the primary storage system. For MV/S, wait for the image to become synchronized once I/O is quiesced. For MirrorView/A, perform an update after I/O is quiesced.

3. Remove the secondary image and destroy the mirror.

4. Increase user capacity of the primary and secondary images to the exact same value.

5. Re-create the mirror and clear the Initial Sync Required option when adding the secondary image.

6. Add hosts or LUNs back into the storage group for discovery of new capacity and resume I/O.

For more information on LUN expansion with metaLUNs, see the white paper EMC CLARiiON

MetaLUNs: Concepts, Operations, and Management on EMC.com and Powerlink.

LUN migration

LUN migration can be used with MirrorView images under certain conditions. LUN migration is supported

to and from any LUN type, therefore, if requirements are met, it is possible to leave existing mirrors in

place during the migration. Table 7 outlines the conditions under which MirrorView LUNs can and cannot

be migrated. For more information on LUN migration, see Unisphere Help and the white paper EMC

Virtual LUN Technology – A Detailed Review on EMC.com and Powerlink.

Table 7. LUN migration rules for mirror images

LUN type                       Migrate to same size LUN     Migrate to larger LUN

Primary image                  Can be migrated 2            Remove secondary image(s) and destroy
                                                            mirror before migration

Secondary image                Can be migrated 2            Remove secondary image from mirror

Write intent log LUN (MV/S)    Cannot be migrated

Reserved LUN (MV/A)            Cannot be migrated

Notes:

1. A mirror cannot be created on a LUN that is migrating.


2. In release 26 and higher, a migration can be started on a MV/S primary or secondary if the secondary is

synchronizing. In prior versions, migration was only possible if the secondary image was in the consistent

or synchronized state.

Block operating environment

EMC recommends that storage systems replicating LUNs between one another run the latest release of the

base operating environment. We recommend this for both the VNX for Block Operating Environment for

the VNX series and the FLARE operating environment for legacy CLARiiON systems. When mirrored

systems are running the same version of the operating environment, it’s noted as "Preferred" in the table

below. Release 31.5 is the latest release for VNX series systems. Release 30 is the latest release for all CX4

models. Release 26 is the latest release for all CX3 models, the CX700, and CX500. Release 23 is the

current release for AX4-5 systems.

Depending on the model types used, it is not always possible to run the latest version of the operating

environment. For example, release 26 is the latest release for the CX3 series. It is also possible for a new

storage system to be added to an environment with older systems that cannot be upgraded immediately.

Therefore, EMC supports running MirrorView between storage systems whose releases are one version

apart. This relationship is designated as "Supported" in the table.

If storage systems cannot be kept within one release of each other during an upgrade, mirrors should be fractured prior to the upgrade. A partial synchronization can be performed once the storage systems are on interoperable versions. If it is not possible to upgrade older systems, the mirrors should remain fractured until upgrades have been performed on both storage systems. MirrorView interoperability is provided in Table 8 ("N" stands for "not supported").

Table 8. Software version interoperability

                        Secondary
Primary                 R31, 31.5    R30 (CX4     R29 (CX4     R28 (CX4     R26 (CX3       R24 (CX3       R23 (AX4-5
                        (VNX series) series)      series)      series)      series, CX700, series, CX700, FC)
                                                                            CX500)         CX500)

R31, 31.5 (VNX)         Preferred    Preferred    Supported    N            Preferred      N              Preferred
R30 (CX4 series)        Preferred    Preferred    Supported    N            Supported      Supported      Supported
R29 (CX4 series)        Supported    Supported    Preferred    Supported    Supported      Supported      Preferred
R28 (CX4 series)        N            N            Supported    Preferred    Preferred      Supported      Preferred
R26 (CX3 series,        Preferred    Supported    Supported    Preferred    Preferred      Supported      Preferred
CX700, CX500)
R24 (CX3 series,        N            Supported    Supported    Supported    Supported      Preferred      Supported
CX700, CX500)
R23 (AX4-5 FC)          Preferred    Supported    Preferred    Preferred    Preferred      Supported      Preferred

Under normal conditions, when software versions are within one release of one another, it is not necessary

to fracture the mirrors to perform a code upgrade. While upgrading storage systems containing secondary

images, mirrors will system fracture while the owning SP of the secondary image reboots. When the

owning SP of a primary image reboots, an SP trespass is performed. MirrorView/S will continue

replicating. MirrorView/A updates are automatically held by the system until the upgrade is complete.

Mirroring thin LUNs

When mirroring a thin LUN to another thin LUN, only consumed capacity is replicated between storage

systems. This is most beneficial for initial synchronizations. For traditional LUNs, all data, from the first

logical block address to the last, is replicated in an initial sync. Since thin LUNs track the location of

written data, only the data itself is sent across the link. No unallocated space is replicated. Steady state replication

remains similar to mirroring traditional LUNs since only new writes are written from the primary storage


system to the secondary system. Figure 13 shows an async mirror (LUN 269) that has thin LUNs as

primary and secondary image LUNs.

Figure 13. Mirroring thin LUNs asynchronously

Figure 13 shows the Mirrors and Consistency Group page in Unisphere. As you can see, if you select

Mirror of ESX5 LUN 269 in the upper pane, details about the primary and secondary images are listed in

the lower pane. You can also see the LUN Properties dialog boxes for the primary image, LUN 269, and

the secondary image (the image of ESX5 LUN 269). The user capacities, as you might expect, are the

same. Consumed Capacity is 8 GB for the primary image, and 5 GB for the secondary image. This

indicates that changes made to the primary images have not been replicated to the secondary image yet.

The secondary image’s Consistent state also indicates that changes have been made to the primary since

the last update. Having the secondary image LUN lag behind the primary image LUN is expected for

asynchronous replication with MirrorView/A.

In the case of a synchronous mirror, it is expected that the consumed capacities of the primary and

secondary image remain the same. Figure 14 illustrates this point. All changes to the primary are made to

the secondary before acknowledgement is provided to the server.


Figure 14. Mirroring thin LUN synchronously

MirrorView/Synchronous

MirrorView/Synchronous (MirrorView/S) follows the replication model outlined in the "Replication models" section for synchronous replication. MirrorView software writes new data to both the primary and secondary volumes before the write is acknowledged to the server. The following sections describe the MirrorView/S software.

Data protection mechanisms

MirrorView/S has mechanisms to protect against data loss on the primary and/or secondary image. The fracture log

protects primarily against loss of communication with the secondary image. The write intent log protects

primarily against interruptions to the primary image. Use of the write intent log is optional. Both of these

structures exist to enable partial synchronizations in the event of interruptions to the primary and secondary

images.

Fracture log

The fracture log is a bitmap held in the memory of the storage processor that owns the primary image. It

indicates which physical areas of the primary have been updated since communication was interrupted with

the secondary.

The fracture log is automatically invoked when the secondary image of a mirror is lost for any reason and

becomes fractured. The mirror is fractured (system fractured) by MirrorView software if the secondary is

not available or it can be administratively fractured through Navisphere Manager or CLI. MirrorView/S

system fractures an image if an outstanding I/O to the secondary is not acknowledged within 10 seconds.

While fractured, the primary pings the secondary every 10 seconds to determine if communication has been

restored.


The fracture log tracks changes on the primary image for as long as the secondary image is unreachable. It

is a bitmap that represents areas of the primary image with regions called extents. The amount of data

represented by an extent depends on the size of the mirror images. Since the fracture log is a bitmap and

tracks changed areas of the primary image, it is not possible to run out of fracture log capacity. It may be

necessary, depending on the length of the outage and the amount of write activity, to resynchronize the

entire volume.

When the secondary LUN returns to service, the secondary image must be synchronized with the primary.

This is accomplished by reading those areas of the primary addressed by the fracture log and writing them

to the secondary image. This activity occurs in parallel with any writes coming into the primary and

mirrored to the secondary. Bits in the fracture log are cleared once the area of the primary marked by an

extent is copied to the secondary. This ability to perform a partial synchronization can result in significant

time savings.
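The sketch below captures the fracture log concept: an in-memory bitmap of extents that makes a partial resynchronization possible. The extent size and class structure are illustrative assumptions, not the actual implementation.

    # Sketch: fracture log as a bitmap of extents over the primary image.
    class FractureLog:
        def __init__(self, extent_size_mb):
            self.extent_size_mb = extent_size_mb
            self.dirty = set()   # extents written since the fracture

        def record_write(self, offset_mb):
            # Mark the extent containing this write. Repeated writes to the same
            # extent cost nothing extra, so the log can never run out of capacity.
            self.dirty.add(offset_mb // self.extent_size_mb)

        def resynchronize(self, copy_extent):
            # Copy only the marked extents to the secondary, clearing bits as we go.
            for extent in sorted(self.dirty):
                copy_extent(extent)
            self.dirty.clear()

    log = FractureLog(extent_size_mb=16)
    log.record_write(32)    # marks extent 2
    log.record_write(40)    # same extent 2; no additional bookkeeping
    log.record_write(200)   # marks extent 12
    log.resynchronize(copy_extent=lambda e: None)   # copies only extents 2 and 12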

By default, the fracture log is stored in memory. Therefore, it would be possible for a full resynchronization

to be required if a secondary image is fractured and an interruption in service occurs on the primary SP.

The write intent log protects against such scenarios.

Write intent log

The write intent log is a record stored in persistent memory (disk) on the storage system on which the

primary LUN resides. The write intent log consists of two 128 MB LUNs, one assigned to each SP in the

storage system. Each 128 MB LUN services all the mirrors owned by that SP that have the write intent log

enabled. When the write intent log LUNs are assigned to the SP, they become private LUNs and are under

the control of MirrorView software. Write intent log LUNs must reside on RAID-group-based LUNs. If all

eligible disks in a storage system are deployed in pools, then the write intent log LUNs may be placed on

the system drives (first five drives in the enclosure).

During normal operation, the write intent log tracks in-flight writes to both the primary and secondary

images in a mirror relationship. Much like the fracture log, the write intent log is a bitmap composed of

extents indicating where data is written. However, the write intent log is always active and the fracture log

is only enabled when the mirror is fractured.

When in use, MirrorView makes an entry in the write intent log of its intent to update the primary and

secondary images at a particular location, and then proceeds with the attempted update. After both images

respond that data has been written (governed by normal LUN access mechanisms, for example, written to

write cache), it clears previous write intent log entries. For performance reasons, the write intent log is not

cleared immediately following the acknowledgement from the primary and secondary images; it will be

cleared while subsequent write intent log operations are performed.

In a recovery situation, the write intent log can be used to determine which extents must be synchronized

from the primary storage system to the secondary system. For instance, if a single SP becomes unavailable

(for example during a reboot or failure), there may be in-flight writes that were sent to the secondary, but

not acknowledged before the outage. These writes will remain marked in the write intent log.

Then server software, such as PowerPath®, trespasses the LUN to the peer SP. The remaining SP directly

accesses the unavailable SP’s write intent log and recovers the recent modification history. The SP then

resends the data marked by the extents in the write intent log. This allows for only a partial

resynchronization, rather than a full resynchronization because it ensures that any writes in process at the

time of the failure are acknowledged by the secondary image. If the entire array becomes unavailable (for

example, due to a power failure), then the write intent log is used similarly to facilitate a partial

resynchronization from primary to secondary, once the primary array is recovered.
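The following sketch shows the general intent-log pattern described here: mark the extent before the mirrored write, clear the mark lazily after both images acknowledge, and resend only marked extents after an outage. It is illustrative pseudologic, not the MirrorView implementation, and the names are assumptions.

    # Sketch: write intent log marks an extent before the mirrored write and
    # lazily clears marks after both images acknowledge.
    class WriteIntentLog:
        def __init__(self):
            self.marked = set()     # persistent record of in-flight extents
            self.to_clear = set()   # cleared later, piggybacked on future log I/O

        def mirrored_write(self, extent, write_primary, write_secondary):
            self.marked.add(extent)     # 1. record the intent on disk
            write_primary(extent)       # 2. write both images
            write_secondary(extent)
            self.to_clear.add(extent)   # 3. clear during subsequent log operations

        def recover(self, resend_extent):
            # After an SP outage, resend only the extents still marked in flight,
            # which yields a partial rather than a full resynchronization.
            for extent in sorted(self.marked):
                resend_extent(extent)
            self.marked.clear()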

Use of the write intent log is optional on CX3 series and lower models. On these systems, the maximum

number of mirrors that can use the write intent log is less than the maximum number of configurable

mirrors. Details are provided in the "Limits" section.


There were also performance considerations when using the write intent log on these systems. However,

releases 26 and later have optimizations that improve performance when using the write intent log. The

optimizations are implemented in SP write cache. In versions prior to release 26, performance would

degrade if the storage system write cache neared full. In release 26, there is no impact at lower workloads

and only a slight impact (≤10%) on response time at high workloads. Therefore, use of the write intent log

should be maximized. If FAST Cache is enabled at the system level, it should be disabled for write intent

log LUNs. Optimal performance is achieved when write intent log LUNs are fronted only by SP write

cache.

The write intent log is automatically used on all mirrors on VNX series and CX4 series systems. Both

series of systems allow all mirrors to have the write intent log allocated up to the maximum allowable

number of mirrors on the system. The performance improvements previously made in release 26 allow

write intent log use with negligible performance impacts under normal operating conditions. Therefore, the

write intent log is configured on all mirrors by default.

The storage system that owns the primary image controls the write intent log for the mirror. If a VNX

series and/or CX4 systems are mirrored to a storage system that does not support as many write intent log

enabled mirrors (for example, the CX3 series), all of the mirrors can use the write intent log if the primary

image resides on the CX4.

As secondary images are promoted, the write intent log is enabled for as many mirrors as the secondary

storage system supports. When this number is exceeded, subsequent mirrors do not use the write intent log.

Fracture log persistence

The fracture log resides in SP memory. By itself, it cannot withstand outages, such as an unplanned SP

reboot (for example physically removing an SP during normal operation). However, mirrors with the write

intent log enabled are able to reconstruct the fracture log in SP memory if there is an outage. Since the data

structures in the write intent log achieve a persistent record of the areas on the primary image that have

been written to, when the write intent log is enabled, it also provides fracture log persistence— but only for

those mirrors with the write intent log enabled. In this case, users get the benefit of partial

resynchronization after an unplanned outage.

For example, if a mirror is fractured when a failure occurs on the primary system, a partial

resynchronization is still possible if the write intent log is enabled. With fracture log persistence, changes to

regions on the primary after the mirror is fractured are captured on persistent media, and can be read by the

storage processors when they are recovered. Without the benefit of fracture log persistence, changes to the

primary since the mirror was fractured are lost and a full resynchronization will be necessary. If the SP is

rebooted in a controlled manner through Unisphere, the fracture log is preserved by sending it to the other

SP before the reboot.

Fracture log persistence, along with improved performance in release 26, is why the write intent log should

be enabled on as many mirrors as possible. In cases where more mirrors exist than can be write intent log

enabled, prioritize mirrors based on preference for resynchronization in the instance of an unplanned

outage.

Synchronization

There are full synchronizations and partial synchronizations. Initial synchronization is a full

synchronization. Partial synchronization occurs after a connectivity failure to the secondary is resolved.

Synchronizations can be initiated on all secondary images either by the user or by MirrorView software

when the Automatic recovery policy is set.

The number of concurrently running synchronizations depends on the image synchronization rates used.

Since images with synchronization rates set to high have the most impact on host I/O, the storage system

will not run as many of these concurrently as it would images with medium or low synchronization rates.


Images waiting to be synchronized remain in a queued-to-be-synchronized image condition until some of

the current synchronizations are completed. As each secondary image finishes its synchronization, the

storage system determines if the next image queued to be synchronized can be started. If all images have

the same synchronization rate, the next image in the queue starts its synchronization each time a

synchronization completes. If there is a mix of sync rates, then the number of images started is based on the

resources made available. For example, if a synchronization with a high sync rate finishes, the storage

system can start several synchronizations at medium or low.

The synchronization progress is tracked in the fracture log. Bits of the fracture log are cleared as the areas

they represent are copied to the secondary. You can view progress in Navisphere in the Mirror Properties

dialog box by clicking the Secondary Image tab.

Thin LUN secondary images

Before starting the synchronization of the mirror or consistency group, MirrorView/S determines if the

secondary pool has enough capacity to accept all of the synchronization data. For example, in the case of an

individual mirror, if there is 25 GB of synchronization data to be sent, MirrorView/S verifies that there is at

least 25 GB of free capacity in the secondary image LUN storage pool.

For example, assume a consistency group has three mirrors. The first two mirrors have secondary image

LUNs in Pool A and have 20 GB and 40 GB of sync data, respectively. The third has its secondary image

LUN in Pool B on the same array and has 30 GB of sync data. MirrorView/S will verify that Pool A has at

least 60 GB of free capacity and that Pool B has at least 30 GB of free capacity before starting the

synchronization.
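A minimal sketch of this capacity check, aggregating synchronization data per destination pool before the group is allowed to start (illustrative only; the field names are assumptions):

    # Sketch: verify each secondary pool can hold the aggregate sync data for all
    # consistency group members whose secondary image LUNs live in that pool.
    from collections import defaultdict

    def can_start_sync(members, pool_free_gb):
        needed = defaultdict(float)
        for member in members:
            needed[member["secondary_pool"]] += member["sync_data_gb"]
        return all(pool_free_gb[pool] >= gb for pool, gb in needed.items())

    members = [
        {"secondary_pool": "Pool A", "sync_data_gb": 20},
        {"secondary_pool": "Pool A", "sync_data_gb": 40},
        {"secondary_pool": "Pool B", "sync_data_gb": 30},
    ]
    # Pool A needs at least 60 GB free and Pool B at least 30 GB free.
    print(can_start_sync(members, {"Pool A": 80, "Pool B": 25}))   # False -> fracture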

If enough capacity does not exist in a secondary image LUN storage pool, the mirror and/or consistency

group becomes administratively fractured. An error will be returned to the user and an event will be logged

in the SP event log. The events are:

0x71058269 – Synchronization cannot start due to insufficient space in the remote system’s pool

0x710589B4 – Consistency group cannot sync due to insufficient pool space for at least one of its members

Checking available pool capacity does not guarantee that all of the synchronizations will be accommodated

by the secondary LUN’s pool. It does ensure that the capacity is available at the beginning of the

synchronization; however, it is possible for other thin LUN activity in the same pool to consume pool

capacity while the synchronization is in progress.

Link requirements

MirrorView/S is qualified for a large variety of SAN topologies that include high-speed Fibre Channel

SANs, distance extension solutions including longwave optics, FC/IP converters, and native iSCSI. For

comprehensive guidance on supported topologies, see the Topology Resource Center available within the

E-Lab Interoperability Navigator. Within the Resource Center there are techbooks for Fibre Channel,

iSCSI, Extended Distance Technologies and more.

It is expected that bandwidth and latency requirements will be governed by the application, since each write

I/O traverses the link. Under these conditions, links are typically high bandwidth lines with low latency.

There are no bandwidth requirements enforced by MirrorView software. However, there are functional

requirements for latency, since MirrorView/S will fracture a secondary image if it has an I/O outstanding to

the secondary for 10 seconds.

The graph in Figure 15 illustrates the effect of latency on transaction response time plotted against the write

component of the transactions. An OLTP simulator was run, where each OLTP transaction consisted of 21

reads and nine writes. The load was generated using 8-KB I/O over 12 data LUNs and 1 log LUN on a

CX3-80. The link is 1000 Mb with compression and fast write enabled through an FC/IP device.


Individual results vary for transaction response times in an end-user environment depending on a wide

variety of factors, including application concurrency, server hardware and software, connectivity

components, storage system model, number of disks, and so on. The graph does not represent any

MirrorView or system limits, but simply illustrates the effects of latency with all other things equal across

the configuration.

Figure 15. Effects of link latency on transaction response times

When going over short distances with low latency lines, performance differences between using native

iSCSI connectivity and using FC/IP devices should not be very great. As distance and latency are

increased, FC/IP devices that offer advanced features such as compression and "fast write" will have

performance advantages over native iSCSI connections.

Limits

Table 9 lists the MirrorView/S limits for VNX series and CX4 series systems. For the latest and most comprehensive limits, consult the EMC MirrorView/Synchronous release notes.

(Figure 15 test configuration: 8 KB I/O over a 1000 Mb LAN through a McData 1620 with Fast Write and compression enabled; the x-axis shows network delay.)


Table 9. VNX series and CX4 series MirrorView/S storage system limits

Parameter                                      VNX7500,       VNX5700,     VNX5500,     VNX5300, VNX5100,
                                               CX4-960        CX4-480      CX4-240      CX4-120

Maximum allowed mirror images per              1024 (R30+)    512          256          128
storage system 1                               512 (pre-R30)

Maximum allowed mirrors per storage            1024 (R30+)    512          256          128
system with write intent logs                  512 (pre-R30)

Maximum synchronous consistency groups 2       64             64           64           64

Maximum mirrors per synchronous                64             64           32           32
consistency group 3

Notes:

1. Images include primary and secondary images.

2. The max number of consistency groups per storage system for VNX series and CX4 series systems is

shared between MirrorView/S and MirrorView/A. The total for the shared limit is 64, and can be

divided however the user wishes between MirrorView/S and MirrorView/A, provided that the max

combined number of Consistency Groups does not exceed 64, and also provided that the max number

of Consistency Groups for MirrorView/A does not exceed 50.

3. Release 29 increased the mirrors-per-group limits from 32 (960/480) and 16 (240/120) in release 28.

Table 10 lists the MirrorView/S limits for the AX4-5, CX3 series, and CX400-CX700.

Table 10. CX3, CX, and AX4 series MirrorView/S storage system limits

Parameter                                      CX3-80, CX3-40,   CX600    CX3-20,    AX4-5, CX3-10,
                                               CX700                      CX500      CX400

Maximum allowed mirror images per              200               100      100        50
storage system 1,2

Maximum allowed mirrors per storage            100               50       50         25
system with write intent logs

Maximum synchronous consistency groups         16                16       8          8

Maximum mirrors per synchronous                16                16       8          8
consistency group

Notes:

1. Images include primary and secondary images.

2. In release 24 and later, MirrorView/S and SnapView clone limits are independent of each other. In

previous releases, MV/S images and SnapView clones were counted together against a storage system

image count limit. Note that the highest release available for the AX4-5 is release 23 and highest

release for the CX400 and CX600 is release 19.

MirrorView/Asynchronous

MirrorView/A uses a periodic update model for transferring write data to the secondary image. Between

updates, areas written to are tracked by MirrorView/A. During an update, the data represented by these

areas is transferred to the secondary image. The set of data that is transferred is referred to as a delta set.


A delta set may be smaller than the aggregate number of writes that occurred to the primary image, since

areas in a bitmap are marked to denote a change occurred to a particular location of the image. If more than

one write occurs to the same area, only the current data is sent when the update is started. SnapView

technology is used to preserve a point-in-time copy of the delta set during the update.

MirrorView/A is designed to enable distance solutions over lower bandwidth and higher latency lines than

would be possible with a synchronous solution. Solutions are often implemented over T3 type lines or the

equivalent bandwidth within a larger link. Replication distances can range from 10s to 1000s of miles.

RPO is determined by the frequency and duration of updates. There are three methods for defining the

frequency of updates.

Manual – Each update is initiated by the user

Start of Last Update – MirrorView starts updates at user defined intervals

End of Last Update – MirrorView starts updates after a user defined rest period from the completion of

the last update

User defined intervals are specified in minutes, with an input range of 0 – 40,320 (28 days). However,

typical update intervals range from 15 minutes to hours.
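As a rough illustration of how the timer-based methods differ, the sketch below computes when the next update would begin, assuming the update duration is known. It is not product code; the function and argument names are assumptions.

    # Sketch: next update start time for the two timer-based update methods.
    def next_update_start(method, last_start_min, last_duration_min, interval_min):
        """All times are minutes from an arbitrary starting point."""
        if method == "start_of_last_update":
            # The interval is measured from when the previous update began.
            return last_start_min + interval_min
        if method == "end_of_last_update":
            # The rest period is measured from when the previous update finished.
            return last_start_min + last_duration_min + interval_min
        raise ValueError("manual updates are started explicitly by the user")

    # Updates start at t=0, take 10 minutes, interval/rest period of 30 minutes.
    print(next_update_start("start_of_last_update", 0, 10, 30))   # 30
    print(next_update_start("end_of_last_update", 0, 10, 30))     # 40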

Data protection mechanisms

MirrorView/A uses SnapView technology for data protection on the primary and secondary storage

systems. On the primary storage system, SnapView tracks changes on the primary image and creates a

point-in-time image that is the source for the delta set transfer.

On the secondary storage system, SnapView creates a protective copy of the secondary image during the

update. The protective copy, referred to as the gold copy, ensures that there is a usable copy on the

secondary storage systems at all times.

All of MirrorView’s use of SnapView is autonomous and does not need to be managed by the user. Therefore,

a SnapView license is not required to use MirrorView/A.

Tracking and transfer maps

MirrorView/A uses two bitmaps on the primary image. For each update, one bitmap (the tracking map)

tracks changes between updates, and the other bitmap (the transfer map) tracks the progress of the update

when transferring to the secondary. Each bitmap is used in a revolving fashion, first as the tracking map

between updates and then as the transfer map for the subsequent update.

The tracking and transfer maps are persistently stored on a reserved LUN on a per-mirror basis. As soon as

a MirrorView/A mirror is created on a primary image, a reserved LUN is assigned to the image to store the

tracking and transfer maps. The reserved LUN is assigned to the source for as long as the mirror exists.

The bits within the tracking and transfer maps represent chunks of the primary image. Chunks are a fixed

size, so the number of chunks varies depending on the size of the LUN. Changes to the primary image are

tracked at a 2 KB granularity.

This granularity allows smaller inter-site links (T1/T3) to be used efficiently by transferring a data set

closer in size to the amount of data changed. MirrorView/A can also coalesce writes for adjacent LUN

locations into one write, up to 64 KB, when transferring it to the secondary. For example, two separate

4 KB writes that occur on contiguous block locations on the LUN can be transferred as one 8 KB write

from the primary to the secondary.
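The sketch below illustrates the coalescing idea: contiguous dirty 2 KB chunks are merged into transfers of up to 64 KB. It is a simplified illustration; the real transfer logic is not exposed, and the function is an assumption for this example.

    # Sketch: coalesce adjacent dirty 2 KB chunks into transfers of up to 64 KB.
    CHUNK_KB = 2
    MAX_TRANSFER_KB = 64

    def coalesce(dirty_chunks):
        """dirty_chunks: non-empty, sorted chunk indexes changed since the last update."""
        transfers, run = [], [dirty_chunks[0]]
        for chunk in dirty_chunks[1:]:
            contiguous = chunk == run[-1] + 1
            if contiguous and len(run) * CHUNK_KB < MAX_TRANSFER_KB:
                run.append(chunk)
            else:
                transfers.append(run)
                run = [chunk]
        transfers.append(run)
        return transfers

    # Two contiguous 4 KB writes (chunks 10-13) become a single 8 KB transfer.
    print(coalesce([10, 11, 12, 13, 40]))   # [[10, 11, 12, 13], [40]]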

Between periodic updates, only the bitmap acting as the tracking map is active. When a server’s write I/O

changes a region for the first time since the last update, a bit is set in the tracking map indicating the

change. Any changed regions are transferred to the secondary site during the next periodic update.


The transfer bitmap keeps track of the progress of the transfer of data from the primary to the secondary

image. At the start of an update, the tracking map and transfer map switch roles, so that the tracking map

becomes the transfer map for the update. It already contains the set bits indicating where changes occurred

since the last update, so the same information is used to monitor the delivery of the data to the secondary

image. At the same time, the existing transfer map becomes the new tracking map for the next update. Each

time an update is started, this switching of roles occurs.

As the areas of the LUN represented by the set bits in the transfer map are sent, the bits are cleared from the

transfer map. This way, at the end of a successful update, the transfer map is cleared of any set bits.

During the transfer, if a server write is initiated to the primary image for a location not yet transferred, the

data area will be copied into the reserved LUN before accepting the write. This preserves the point-in-time

image of the changed data from the time the update was initiated. This only occurs the first time a write is

initiated into the primary for a location that has not been transferred. This is referred to as copy-on-first-write protection. If a write comes into the primary for an area already transferred, no copy-on-first-write occurs. The area is marked only in the tracking map and is transferred in the next update cycle.
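To tie these mechanisms together, here is a compact sketch of the two-bitmap role swap and the copy-on-first-write decision during a transfer. It is purely illustrative; the class and method names are assumptions, not the MirrorView/A implementation.

    # Sketch: the tracking and transfer bitmaps swap roles at the start of each update.
    class AsyncMirrorMaps:
        def __init__(self):
            self.tracking = set()   # chunks written since the last update
            self.transfer = set()   # chunks still to be sent in the current update

        def host_write(self, chunk, save_point_in_time):
            if chunk in self.transfer:
                # Not yet transferred: preserve the old data in the reserved LUN
                # before accepting the write (copy on first write).
                save_point_in_time(chunk)
            # Always note the change for the next update cycle.
            self.tracking.add(chunk)

        def start_update(self):
            # Roles switch: the tracking map becomes the transfer map for this update,
            # and the old transfer map (cleared by the previous update) tracks new writes.
            self.tracking, self.transfer = self.transfer, self.tracking

        def send_chunk(self, chunk, send):
            send(chunk)
            self.transfer.discard(chunk)   # bits clear as each marked area is sent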

Delta set

A delta set is a virtual object that represents the regions on the primary mirror LUN that need to be

transferred to the secondary mirror LUN. A SnapView snapshot, managed by MirrorView/A, is used as the

source of the transfer to maintain a point-in-time image of the delta set for the duration of the transfer.

Ultimately, the size of the delta set will depend on the number of writes between updates and the locality of

those writes on the LUN. Some applications that are random in nature may have to replicate most of the

writes that occurred between updates. Other applications, which have high locality, may write to the same

location several times between updates and result in small delta sets. Results can vary greatly by

application and implementation.

For example, Figure 16 shows expected delta set behavior for a consistent workload. The delta set size

peaks before the first update. While the update progresses, I/O continues to the primary volume, so some

data lag exists throughout the update process. Delta set peaks remain consistent in this test because of the

steady workload.

Figure 16. Data lag trend for Oracle 9i RAC test over T3 (50 ms latency)



The data for this test was generated using an Oracle Database. Replication was conducted over a T3 line

with a 50 ms delay to simulate long-distance replication. The bottleneck for update bandwidth in this test

was the T3 link.

Gold copy (protective snapshot)

A gold copy is a virtual point-in-time copy (snapshot) of the secondary image that is initiated before the

update begins. It ensures that there is a known good copy on the secondary during an update. The gold copy

is managed entirely by MirrorView/A software and is not directly available for server access. Users can

take additional snapshots of the secondary image for a usable point-in-time copy of the secondary.

If the update from primary to secondary is interrupted due to a catastrophic failure at the primary site,

MirrorView/A uses the gold copy to roll back the partial update on the secondary LUN if the secondary is

promoted for use. If the secondary is not promoted, the gold copy is maintained so that the update may

complete when the interruption is rectified.

Reserved LUNs

Reserved LUNs are a resource used by SnapView snapshots to store copy-on-first-write data. In cases

where SnapView technology is used for point-in-time management, such as in incremental SAN Copy and

MirrorView/A, the reserved LUNs also store the tracking and transfer bitmaps necessary to manage the

updates.

Reserved LUN resources must be allocated on the primary and secondary storage systems when using

MirrorView/A. Reserved LUNs on the primary side are used for storing the bitmaps associated with delta

sets and for storing any copy-on-first-write data that is created during the update. On the secondary side,

they are used for storing copy-on-first-write data to maintain the gold copy.

When sizing the reserved LUNs for the primary image, consider how much data will change during an

update. MirrorView/A maintains optimized copy on first write protection during an update, where only

those blocks that haven’t been transferred to the secondary are copy on first write protected. Additional

consideration needs to be given if there are user initiated SnapView snapshots on the primary image. For

standard snapshots, the entire volume is copy-on-first-write protected for as long as the snapshot exists. So

it is expected that capacity requirements for user initiated snapshots are greater than those needed by

MirrorView/A.

When sizing the reserved LUNs for the secondary image, the rate of change must be known. Since the

primary image sends only the areas that change, every update write from the primary storage system is copy-on-first-write protected on the secondary. It is best to assume no locality and provide enough

reserved LUN capacity to, at the minimum, accommodate the largest number of aggregate writes for any

one period. The aggregate number of writes can easily be measured by OS-based tools such as I/O Stat and

Perfmon or array-based tools such as Navisphere Analyzer. If MirrorView/A is being considered for an

application that is not yet running, a rule of thumb of 20 percent of the Primary image capacity is

commonly used.

Additional space beyond the maximum aggregate writes should also be considered in case there is an

interruption to the update process. If I/O continues to the primary image during the interruption, the

subsequent delta set after the interruption will be larger than over a normal period. Some additional

considerations for MirrorView/A reserved LUN configuration are:

Place the reserved LUNs on different physical drives than the primary images. Since it is possible

to have tracking and/or copy on first write activity associated with a server write, it is best not to

have all of the activity queued up for the same set of disks. Often times, several reserved LUNs are

configured on their own RAID group or pool.

Use high performance SAS or FC drives for the reserved LUNs. It’s common to have several

reserved LUNs on the same drives. The tracking and copy on first write activity is a critical

component for write response time, so it is important to avoid I/O contention on these drives. Flash

drives do not provide a significant performance advantage over SAS or FC drives when used for

reserved LUNs. Therefore, we recommend that you place reserved LUNs on write-cache enabled


LUNs on SAS or FC drives. For more information, see the "Disk drive technologies" section of

this white paper.

Use the same configuration on the primary and secondary side. Some end users consider cutting

costs on the DR side by using fewer drives or lower-cost drive technology. However, the reserved

LUNs play an important role in copy-on-first-write protecting the secondary images during an

update. It’s also expected that in a recovery situation, the applications will run with similar

performance when failed over to the DR site.

For a more comprehensive look at reserved LUN guidelines for MirrorView and other replication

applications, see the white paper CLARiiON Reserved LUN Pool Configuration Considerations on EMC

Powerlink.
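Pulling the sizing guidance in this section together, the back-of-the-envelope sketch below estimates secondary-side reserved LUN capacity per mirror. The 20 percent figure is the rule of thumb quoted above; the 1.5x headroom factor and the function itself are illustrative assumptions, not an EMC formula.

    # Sketch: rough secondary-side reserved LUN capacity estimate for one mirror.
    def reserved_lun_estimate_gb(primary_capacity_gb,
                                 max_writes_per_period_gb=None,
                                 headroom=1.5):
        if max_writes_per_period_gb is None:
            # Application not yet running: fall back to the 20% rule of thumb.
            return 0.20 * primary_capacity_gb
        # Assume no locality (every update write is copy-on-first-write protected)
        # and add headroom for a larger delta set after an interrupted update.
        return max_writes_per_period_gb * headroom

    print(reserved_lun_estimate_gb(1000))                               # 200.0 GB rule of thumb
    print(reserved_lun_estimate_gb(1000, max_writes_per_period_gb=60))  # 90.0 GB measured + headroom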

Initial synchronization

The transfer map is used to track initial synchronization. When an initial sync is required, all of the bits in

the transfer map are set to indicate transfer. The bits of the transfer map are cleared as the areas they

represent are copied to the secondary.

All mirrors can synchronize simultaneously. Resources are shared among active mirrors to evenly

distribute bandwidth during the synchronization period.

There is no gold copy (so no protective snapshot) during the initial sync, since there is no usable copy of

data until the initial sync is complete. If the initial sync is interrupted, the secondary will be in an out-of-sync state and cannot be promoted.

MirrorView/A is intended to operate over lower bandwidth lines. In implementations where the delta set

would consume most or all of the link capacity, it can be difficult to perform the initial synchronization.

This is prevalent in implementations with large volumes (multiple TBs) and relatively small change rates

(GBs) over the update period.

Often in these scenarios, the secondary system is brought to the same site as the primary for the initial

synchronization and then shipped to the remote site, where an incremental update is performed. The general

process to do this is:

1. With primary and secondary on the local SAN, create mirrors, add secondary images, and perform

initial synchronization.

2. Admin fracture the mirrors. An incremental update is optional before the fracture to get the very last

possible changes.

3. Remove the secondary system from the Unisphere/Navisphere domain.

4. Transport secondary and install at final site including any zoning and IP address changes. Changing IP

addresses and resulting SP reboot do not affect the MirrorView configuration.

5. Bring the secondary either back into the Unisphere/Navisphere domain or manage it with the Multi-

Domain Management feature of the UI if in a different domain.

6. Re-establish the MirrorView connections and synchronize the mirrors.

In Release 24 and later, MirrorView/A uses the Bulk Copy feature of SAN Copy for the initial

synchronization. The bulk copy process is an advanced feature of SAN Copy that minimizes SnapView

operations during the initial synchronization. It also provides checkpointing, so that initial

synchronizations can resume if a link outage occurs.

The Bulk Copy process is completely managed by MirrorView/A software, so there are no changes to user

operations. MirrorView/A performs a bulk copy and an immediate incremental update as part of the initial

synchronization process. Users may see improved performance during the initial synchronization when

compared with prior versions.


Link requirements

MirrorView/A does not have stringent bandwidth and latency requirements. A minimum bandwidth of 128 KB/s is required, which is very likely to be much lower than any practical implementation would require.

MirrorView/A will system fracture during an update if it does not receive acknowledgement from the secondary in five to six minutes. Since server I/O is decoupled from the transfer of data from the primary to the secondary, it is not as important to system fracture within the same time as a synchronous solution.

MirrorView/A is commonly replicated over TCP/IP links to enable long distance replication. Table 11 shows nominal bit rates and bandwidths for commonly used link types. When determining the required link type, also factor in TCP/IP overhead, which can lower the effective payload by about 20 percent (a transfer-time sketch follows Table 11).

Table 11. Nominal link speeds

Link type    Nominal bit rate (Mb/s)    MB/s    GB/h
T1                  1.5                 0.18    0.63
T3                 45                   5.4     18.9
100 Mb            100                   11.9    41.9
OC3               155                   18.5    65.0
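As a rough illustration of how the nominal rates in Table 11 combine with the 20 percent TCP/IP overhead mentioned above, the following Python sketch estimates how long a given delta set would take to transfer. The overhead factor and the example delta set size are planning assumptions, not measurements.

    NOMINAL_MBPS = {"T1": 1.5, "T3": 45, "100 Mb": 100, "OC3": 155}   # nominal megabits/s

    def transfer_hours(delta_set_gb, link="T3", tcpip_overhead=0.20):
        """Hours to move one delta set over the chosen link, derated for TCP/IP overhead."""
        payload_mb_per_s = NOMINAL_MBPS[link] * (1 - tcpip_overhead) / 8   # MB/s of payload
        payload_gb_per_hr = payload_mb_per_s * 3600 / 1024                 # GB per hour
        return delta_set_gb / payload_gb_per_hr

    print(round(transfer_hours(30, "T3"), 1))   # ~1.9 hours for a 30 GB delta set over T3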

As distance increases, advanced features such as compression and support of large TCP window sizes can

often help to optimize link utilization. These features are offered on FC/IP conversion devices, and they are

not offered on the native iSCSI ports. So for very long distance replication, like coast to coast of the

continental U.S. (~3,000 miles), users may opt to employ FC/IP devices, depending on their workload as

compared to the link’s bandwidth and latency.

Limits

Table 12 lists the MirrorView/A limits for VNX series and CX4 series storage systems. For the CX4 series, Release 29 offers a significant increase in scalability compared to Release 28.

Table 12. VNX series and CX4 series MirrorView/A limits

Parameter                               VNX7500/VNX5700    VNX5500/VNX5300    VNX5100
                                        CX4-960/CX4-480    CX4-240/CX4-120
Maximum images per storage system (1)         256                256             128
Maximum async consistency groups (2)           64                 64              64
Mirrors per async consistency group            64                 32              32

Notes:

1. MirrorView/A source limits are shared with SAN Copy incremental-session and SnapView snapshot

source limits.

2. The maximum number of consistency groups per storage system for VNX series and CX4 series systems is shared between MirrorView/S and MirrorView/A. The total for the shared limit is 64, and it can be divided however the user wishes between MirrorView/S and MirrorView/A.
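The shared consistency group limit in note 2 can be sanity-checked with a few lines of Python; this is only an illustration of the arithmetic, with the 64-group shared limit taken from the note and the per-type caps taken from Tables 12 and 13.

    SHARED_CG_LIMIT = 64   # shared between MirrorView/S and MirrorView/A per note 2

    def cg_split_is_valid(sync_groups, async_groups, async_cap=64):
        """True if the proposed split fits the shared limit and the per-type async cap."""
        return (sync_groups + async_groups) <= SHARED_CG_LIMIT and async_groups <= async_cap

    print(cg_split_is_valid(20, 44))                 # True:  64 total
    print(cg_split_is_valid(30, 40))                 # False: 70 exceeds the shared limit
    print(cg_split_is_valid(10, 50, async_cap=50))   # True for a release 28 CX4 (Table 13 cap)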


Table 13. CX4 MirrorView limits for release 28

Parameter                               CX4-960/CX4-480    CX4-240/CX4-120
Maximum images per storage system (1)         100                100
Maximum async consistency groups (2)           50                 50
Mirrors per async consistency group            16                  8

Notes:

1. MirrorView/A source limits are shared with SAN Copy incremental-session and SnapView snapshot

source limits.

2. The maximum number of consistency groups per storage system for CX4 series systems is shared between MirrorView/S and MirrorView/A. The combined total can be divided however the user wishes, provided that it does not exceed 64 and that the number of MirrorView/A consistency groups does not exceed 50.

Table 14 lists the MirrorView/A limits for the AX4-5, CX3 series, and CX400-CX700. For the most recent limits guidelines, consult the EMC MirrorView/Asynchronous Release Notes.

Table 14. CX3, CX, and AX4 series MirrorView/A system limits

Parameter                               CX3-80, CX3-40,    CX3-20, CX500,    AX4-5,
                                        CX700, CX600       CX400             CX3-10
Maximum images per storage system             100                 50             25
Maximum async consistency groups               16                  8              8
Mirrors per async consistency group            16                  8              8

Note: MirrorView/A source limits are shared with SAN Copy incremental-session and SnapView snapshot

source limits.

Storage-system-based consistency groups

MirrorView (both /S and /A) includes the storage-system-based consistency groups feature. A storage-system-based consistency group is a collection of mirrors that function together as a unit within a storage system. All operations, such as synchronization, promote, and fracture, occur on all the members of the consistency group. Once a mirror is part of a consistency group, most operations on individual members are prohibited. This is to ensure that operations are atomically performed across all the member mirrors.

The members of a consistency group can span the storage processors, but all of the member mirrors must be on the same storage system. A consistency group cannot have mirrors that span storage systems. In addition, although consistency groups are supported for both synchronous and asynchronous mirrors, all of the mirrors in each group must be the same type of mirror.

Benefits of storage-system-based consistency groups

Managing all write-order-dependent volumes as one entity ensures that there is a restartable copy of the application on the secondary system. The copies are restartable because all the volumes are as current as the solution's RPO will allow. Other replication strategies may yield a recoverable copy of data at the DR site. Recoverable copies typically require more operations to bring them back online. Consider the example of a DBMS application. When using log shipping in the event of a disaster, data files are restored to a previous point in time (for example, a backup) and then made current by applying logs replicated more frequently.


Restartable copies offer shorter RTOs than recoverable copies in terms of access to current data. A trade-

off is that more data may be replicated to constitute restartable copies as both logs and data files are being

replicated on a regular basis, as opposed to only replicating the logs.

Consistency groups protect against data corruption in the event of partial failures, for example on one SP,

LUN, or disk. With partial failures, it is possible for the data set at the secondary site to become out of

order or corrupt.

For example, assume an interruption prevents only one SP from communicating with its secondary volume.

If the log volumes reside on the interrupted SP and the database volumes reside on the other SP, it is

possible for updates to be written to the database secondaries, but not the log secondaries.

The following sections discuss specific benefits of consistency groups as they pertain to MirrorView/S and

MirrorView/A.

MirrorView/S

For MirrorView/S, consistency groups are crucial to performing consistent fracture operations. Figure 17

shows a database where SP A owns the log components and SP B owns the data files. Without consistency

groups, an interruption in communication on SP A invokes the fracture log on the log LUNs, and then I/O

continues to the log LUNs and the data LUNs. Since there is no outage on SP B, I/O to the data LUNs

would still be mirrored to the secondary. If an outage occurs on the primary system during this time, there

would not be a usable copy because data would be ahead of the logs at the secondary site.

Figure 17. Partial fracture with no consistency groups

Figure 18 shows the same scenario with consistency groups. If one member of the group fractures, then all

of the members of the group fracture, and data integrity is preserved across the set of secondary images.

At the time of the interruption, MirrorView fractures all of the mirrors in the consistency group due to the

communication failure of those mirrors on SP A. This allows I/O to continue to the primary volumes, but

not to the secondary volumes. While MirrorView performs the fracture operation, it briefly holds write I/Os

to members of the consistency group until that particular member is fractured. After each member is

fractured, I/O is allowed to continue to that volume.


Figure 18. Consistency group fracture with partial interruption

For a database application, as the fracture log is invoked on the log LUN, the storage system acknowledges

the log write to the server. The server then issues the subsequent write to the data volume. If the data

volume write request is issued to the storage system before it is fractured, the storage system holds the I/O

until the fracture of the data LUN can occur. Once fractured, the storage system acknowledges the write to

the server.

MirrorView/A

In addition to managing fractures, consistency groups for MirrorView/A play a key role in the update

process. At the start of an update, a point-in-time copy is created that serves as the source for the point-in-

time transfer. When there is a set of write-order-dependent volumes, the point-in-time copy must be started

simultaneously across all volumes.

As shown in Figure 19, at the start of an update, I/O is held by the system for each mirror until the point-in-

time session has been established for that volume. Once the point-in-time session is created, I/O resumes.

Similarly, before updates are applied to the secondary images, a gold copy is created for all secondary

mirrors. These gold copies remain in place until members of the consistency group complete the transfer of

the delta set. This ensures that changes are applied to all the secondary images or none of them, and that

there is always a consistent point-in-time image available to promote. If the secondary images are promoted

with gold copies in place, MirrorView rolls the contents of the gold copies back to the secondary images

and makes them available for direct server access as primary images. The rollback process has instant

restore capabilities, which means that users have access to their data while the rollback process runs in the

background.


Figure 19. MirrorView/A storage-system-based consistency group atomic operations

Managing storage-system-based consistency groups

You can manage consistency groups using Unisphere or Navisphere CLI. Figure 20 illustrates how to perform operations such as synchronization, fracture, and so forth from Unisphere.

Figure 20. Using Unisphere to manage MirrorView/A storage-system-based consistency groups


To automate consistency group operations, you can use Navisphere CLI. Figure 21 shows the Naviseccli CLI syntax; an illustrative scripting sketch follows the figure.

Figure 21. Using Navisphere CLI to manage MirrorView storage-system-based consistency groups
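For scripted automation, a thin wrapper around Navisphere Secure CLI can be built in a scripting language. The Python sketch below is illustrative only: the exact mirror -sync group subcommands, switches, and output formats should be verified against the EMC VNX for Block Command Line Interface (CLI) Reference, and the SP address, credentials, and group name shown are hypothetical.

    import subprocess

    def mirror_sync_group_op(sp_address, user, password, scope, group_name, operation):
        """Run a MirrorView/S consistency group operation via naviseccli.

        operation is a group subcommand such as 'syncgroup' or 'fracturegroup'
        (assumed names; confirm against the CLI Reference before use).
        """
        cmd = [
            "naviseccli", "-h", sp_address,
            "-user", user, "-password", password, "-scope", scope,
            "mirror", "-sync", f"-{operation}", "-name", group_name,
            "-o",   # suppress the confirmation prompt (assumed switch)
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    # Hypothetical example:
    # print(mirror_sync_group_op("10.1.1.10", "admin", "secret", "0", "ExchangeCG", "syncgroup"))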

Consistency group promote options

Like individual mirrors, consistency groups have conditions (normal, admin fractured) and states (synchronized and consistent). Consistency group conditions and states depend on the conditions and states of the member mirrors. For a consistency group to be in the synchronized state, all of its members must be in the synchronized state. The group will transition to synchronized after all of the mirrors in the group have transitioned to this state. A complete list of consistency group conditions and states is available in Appendix B for reference.

If using Navisphere Secure CLI, verify the state of the consistency group with the storage system that owns

the primary images. If the storage system that owns the primary images is not available, the state of the

consistency group can be obtained from the secondary storage system.

Consistency groups can be promoted if they are in the synchronized or consistent state. The various promote options are as follows:

Normal promote – When promoting the secondary images of a group, MirrorView determines if connectivity exists between the primary and secondary storage systems, and then checks to see if all the mirrors within the group are synchronized. If these conditions are met, the promote proceeds and the current set of primaries become secondaries of the group. If there is no connectivity, or MirrorView determines that any of the primaries do not match the secondaries, then the promote will fail and the user can select from the other promote options listed below. Unlike individual mirrors, a new consistency group is not created for the promote. Therefore, after a normal promote, the consistency group ID is the same before and after the promote.

Force promote – Force promote is used if the conditions for normal promote are not met. It is a "brute force" promote that ignores most errors preventing a normal promote, such as the secondary image state being consistent, the mirror being in a fractured state (admin or system fracture), a failed link, and so forth. In cases where connectivity still exists between the primary and secondary storage systems, the secondary images are placed as primary images of the same consistency group. The original primaries are placed in the group as secondary images. A full synchronization is then required from the new primary (old secondary) images to the new secondary (old primary) images. If connectivity does not exist, force promote acts like a Local Only promote.

Local only promote – This promotes the consistency group on the secondary site without adding the

primary images as secondary images of the mirrors. A new group is created on the secondary storage

system, and the original secondary images are placed in the new consistency group as primary images.

The primary images on the primary storage system remain in the original consistency group. Both

consistency groups will be in the Local Only state. Although this option is not available for individual

MirrorView/S mirrors, it is available for MirrorView/S consistency groups.

Cancel – Do not proceed with the promote.
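The choice between these options can be summarized in a few lines of logic; the Python sketch below is a simplified illustration of the preconditions described above, not the actual MirrorView state machine, and the state names are deliberately reduced to two values.

    def viable_promote_options(link_up: bool, image_state: str):
        """image_state is 'synchronized' or 'consistent' (simplified model)."""
        options = ["cancel"]
        if link_up and image_state == "synchronized":
            options.append("normal")       # primaries and secondaries match; group ID is kept
        if image_state in ("synchronized", "consistent"):
            options.append("force")        # ignores most errors; full resync of old primaries
            options.append("local only")   # new group on the secondary system only
        return options

    print(viable_promote_options(link_up=False, image_state="consistent"))
    # ['cancel', 'force', 'local only']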


MirrorView management

Required storage system software

MirrorView runs within the VNX Operating Environment for Block on VNX systems. The MirrorView/A and MirrorView/S enablers are included in the Remote Protection Suite, the Total Protection Pack, and the Total Efficiency Pack. MirrorView runs within the FLARE operating environment on CLARiiON and Unified NS series systems. MirrorView/A and MirrorView/S are separately licensed products for these systems. Figure 22 shows the presence of various enablers in the Storage System Properties dialog box.

Figure 22. Storage System Properties (Software tab)

Once the enabler is installed, all relevant menus and dialogs become available to the user within Unisphere, and the storage systems will respond to feature-specific CLI commands.

Management topology

Management network connectivity

Network connectivity requirements for configuring MirrorView are determined by Unisphere and/or CLI. Unisphere and Navisphere CLI communicate with the storage systems using ports 80/443 or, alternately, 2162/2163. Furthermore, network connectivity between systems on these ports is required to set up and configure mirrors. Once configured, MirrorView communicates in-band, via the data path, between the storage systems; any communication needed, such as facilitating a promote operation, occurs in-band.

For example, assume a user stops I/O to a primary volume that is replicated with MirrorView/S until the

secondary image is in the synchronized state. A command can then be issued to the secondary storage

system to promote the secondary image. This command is sent through the secondary storage system’s

management LAN port. MirrorView on the secondary system then communicates in-band with MirrorView

on the primary system to reverse the image roles. The process could then be reversed by issuing the

promote command to the current secondary (former primary) storage system.

Although MirrorView communicates in-band after initial configuration, we recommend that

communication between storage system network ports always be available. That way, you can recover from

all failure scenarios (like force promote) without having to first reconfigure the management LAN to allow

connectivity.
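A quick way to confirm that the recommended management connectivity is in place is to test the ports listed above from the management station. The Python sketch below does exactly that; the host names are placeholders for your SP management addresses.

    import socket

    def port_open(host, port, timeout=3.0):
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for sp in ("primary-spa.example.com", "secondary-spa.example.com"):   # placeholder names
        for port in (80, 443, 2162, 2163):
            print(f"{sp}:{port} reachable: {port_open(sp, port)}")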


Unisphere domains

MirrorView can be used between systems in the same Unisphere domain and/or with systems that are not in

the same Unisphere domain. Unisphere has a multi-domain management feature that allows management of

discrete Unisphere domains in one user interface. Figure 23 shows Unisphere configured to manage

multiple domains.

Figure 23. Unisphere 1.1.25 multidomain management

Each domain is mutually exclusive and has its own set of users and user credentials. MirrorView can be

configured between systems in different domains provided the user has the proper rights in each of the

system’s domains that they wish to configure.

VNX series systems running VNX OE for Block release 31.5 can have multiple VNX series systems in a

single domain. All VNX series systems in the domain must be running release 31.5 or later. Unisphere

client 1.1.25 was released in support of Release 31.5, and adds multi-domain management for VNX series

and prior generation systems. Unisphere client 1.1.25 can manage VNX series systems running VNX OE

for Block R31.5 and R31 as well as legacy systems like CLARiiON (FLARE release 23 and higher) and

Celerra running DART 6.0.

VNX OE for Block release 31 (Unisphere 1.1) supports only one system per domain; however, Unisphere client 1.1.25 can be used to manage several of these systems in a multi-domain management configuration. Therefore, users are not required to upgrade the VNX Operating Environment to take advantage of multi-domain management. For more information regarding Unisphere Domains, see the white paper Multi-Domain Management with VNX Storage Systems on Powerlink.

VNX series and legacy CLARiiON/NS series domains have slightly different user role definitions. Some of

the user roles are data protection specific. The data protection roles allow storage administrators to give

users specific privileges for managing replicas, without giving them the rights for other storage operations.

Examples include allowing application administrators or service providers to start and refresh local copies

(SnapView), remote copies (SAN Copy), and DR copies (MirrorView), and perform restore and recovery

operations. The VNX Data Protection roles are listed below with the analogous CLARiiON/NS series roles

in parentheses:

Local Data Protection (Local Replication) – Allows the user to perform basic operations for snapshots and clones, such as session start/activate and clone sync/fracture.


Data Protection (Replication) – Allows the user to perform Local Replication operations, plus

basic operations for SAN Copy sessions and MirrorView mirrors.

Data Recovery (Replication/Recovery) – Allows the user to perform Local Replication and

Replication operations, plus operations such as restoring replicas back to production LUN (Snap

rollback, clone reverse sync, MirrorView promote).

None of these roles allow users to create new replication objects such as snapshots, clones, SAN Copy

sessions, or mirrors, nor do they allow users to delete objects. Users can only control existing replica

objects with these roles. All three roles have view-only privileges in the domain for object types outside of

their control, so that they can understand the context of the replica object. A detailed list of management rights for each role is listed in "Appendix B: Consistency group states and conditions Protection Roles."

Data protection user roles are created and managed in the same manner as all other user roles. However,

when data protection roles are employed, all of the storage systems in the domain must be running a

minimum of release 29 software for legacy CLARiiON/NS systems. Data protection role users can be assigned a local, global, or LDAP scope in the same manner as traditional user roles.

Management software

Unisphere was introduced in release 30 as the replacement for Navisphere Manager. It offers users a task-based approach to management while still offering many of the object-based menus available in Navisphere. A menu of tasks is presented to the user. From this menu, users can launch dialog boxes to perform specific tasks or to launch a wizard. One wizard in this suite is the MirrorView Wizard. Figure 24 shows the Navisphere Task Bar with the Replicas tab selected.

Figure 24. MirrorView Wizard launch from the task pane

The MirrorView Wizard uses a task-oriented approach to guide users through the process of creating

mirrors. Along with creating the mirrors, the wizard automatically performs any needed setup operations


such as creating iSCSI connections and MirrorView connections, and creating/assigning write intent log

and reserved LUNs.

Explicit dialog boxes for creating mirror connections, creating mirrors, adding secondary images, and so

forth are still available for advanced users. Advanced users who have very specific requirements or have a

set of conditions that are not covered by the wizard may use this approach.

Navisphere CLI offers a command line approach. All configuration options are available through the CLI

and all security features for authentication, authorization, privacy, and audit are shared with Navisphere

Manager.

For a comprehensive description of management operations, the following documents and white papers are

available on EMC.com and Powerlink except where noted:

Unisphere Manager Help (Powerlink only)

EMC VNX for Block Command Line Interface (CLI) Reference

Security Configuration Guide for VNX for Block white paper

CLARiiON Security Configuration Guide white paper

The following sections discuss the features, benefits, and assumptions of the MirrorView Wizard, object-

based management, and CLI.

MirrorView Wizard

The MirrorView Wizard uses a server-centric approach. This ease-of-use feature changes the traditional

approach to storage management, which is to manage the environment from the storage system out to the

server. The basic steps for the wizards are select a server > select a storage system > select a LUN(s) >

perform an action. So the first assumption is that a server will be attached and a LUN assigned prior to

using the MirrorView Wizard.

Before using the wizard, physical connections must be established between the two storage systems. This

includes proper LAN preparation, SAN zoning, and Unisphere Domain configuration, tasks that are not

included in the MirrorView Wizard.

In the wizard, the user is first guided through the LUN selection process by selecting the desired server and

storage system. One or many LUNs may be selected. Then, the wizard performs the following discrete

operations based on user input and existing conditions:

1. Select the remote storage system.

2. If necessary, create iSCSI connections.

3. Select the MirrorView connection type if both iSCSI and Fibre Channel exist.

4. If necessary, create a MirrorView connection.

5. Select a mirror-type: synchronous or asynchronous.

6. Select or create a storage pool for the secondary image(s). The wizard’s preference is to create the

same LUN type as the primary image, be it RAID-group-based or pool-based.

7. Bind secondary image LUN(s) on the remote storage system.

8. Configure the remote mirror.

9. If multiple source LUNs are selected, perform consistency group configuration.

10. Assign the mirror LUN to a secondary server.

11. Configure the write intent log (in MirrorView/S) or the reserved LUN (in MirrorView/A).

Write intent log LUNs are supported on RAID groups only.


Reserved LUNs are created as thick LUNs by default. Placing reserved LUNs in the same pool as server-visible LUNs or mirror images is avoided whenever possible. More information is provided in the "Reserved LUNs (MirrorView/A)" section of this paper.

Other than selecting the source LUN, remote storage system, and the secondary image storage pool, the

wizard requires minimal input from the user. To simplify the process, there is a default naming convention

for objects created by the wizard, based on the storage system and source LUN names. Users can edit these

names in the wizard on the fly.

Mirror LUN properties

The wizard binds a secondary LUN in all instances. The wizard sets all secondary LUN properties to match

those properties of the source LUN, including LUN type. The default name for the mirror LUN is Image Of <primary_server_name><primary LUN name>. The default name may be changed simply by modifying the default name in the Secondary LUN Name field in the wizard, as shown in Figure 25.

Figure 25. Mirror LUN name

Remote mirror configuration

The remote mirror wizard creates and configures the remote mirror with minimal input from the user. In

fact, there are only two or three parameters the user may alter based upon the type of mirror chosen, as

shown in Table 15.


Table 15. Remote mirror user configurable properties

Mirror property          Default value                                                                Mirror type
Sync rate                Medium                                                                       Both
Mirror name              "Mirror of <primary server> <primary LUN>"                                   Both; see Figure 25
Update interval          4 hours                                                                      Asynchronous only
Consistency group name   "Group of <primary server> <primary LUN1> <primary LUN2> <primary LUN3>…"    Both; only when multiple LUNs are selected

Additional parameters are automatically configured by the wizard with no user input, as listed in Table 16.

In addition, the contents of the primary LUN are automatically copied to the secondary LUN upon

completion of the wizard.

Table 16. Default remote mirror properties

Mirror property           Default value    Mirror type
Minimum required images   0                Both
Quiesce threshold         60 (seconds)     Synchronous only
Use write intent log      Selected         Synchronous only
Recovery policy           Automatic        Both

Assumption – Hidden property values are not commonly changed by customers.

Implication – Customers that require specific settings for these hidden properties may not change

them using the wizard.

Alternative – Modify the property value after mirror creation with the wizard or use the traditional,

object-based method to configure a mirror.

Consistency group configuration

The MirrorView Wizard automatically configures a consistency group when you select multiple primary

LUNs. The consistency group default name is listed in Table 15 and may be modified by the user in the

wizard. The wizard automatically creates the consistency group with the selected name and adds each

remote mirror to the group.

Storage Group configuration

The MirrorView Wizard optionally allows the user to assign the secondary LUN to a server, for example,

for DR preparedness. The wizard automatically performs the storage group operations if the user selects a

secondary server as shown in Figure 26.


Figure 26. Select the server for a secondary image

The specific steps performed by the wizard vary based on the existence of a storage group for the selected

server.

The server is already configured in a storage group – The wizard places the newly bound LUNs in the

server’s storage group.

The server is not configured in a storage group – The wizard creates a storage group for the server, places the server in the storage group, and places the newly bound LUNs in the storage group. In this case, the wizard automatically creates a storage group named "SG_<server name>." If you need a specific storage group name, you may change it using the traditional Storage Group Properties dialog box.

Write intent log configuration (MirrorView/S)

The MirrorView Wizard checks for the existence of the write intent log LUNs on the system and will bind

them on both storage systems appropriately if needed, based on the following policy.

The RAID group used for each write intent log LUN is determined by the following criteria:

Does not contain the source LUN

Contains the fewest server-visible LUNs

Contains the fewest clones and mirror images


Contains the most free space

Contains the fewest LUNs

The Default Owner property is assigned based on the RAID group from which the write intent log LUN is bound:

For even-numbered RAID groups, the default owner is SP-A.

For odd-numbered RAID groups, the default owner is SP-B.

The naming convention used is Virtual Disk sequence_number, where the sequence number is generated based on the existing LUN names in the storage system. Sequence number generation uses the following rules:

The number starts one past the largest number found at the end of existing LUN names. For example, if the existing LUNs in the storage system have names LUN 0 to LUN 10 and LUN 50 to LUN 59, then the sequence number starts at 60, giving the first LUN the name "Virtual Disk 60."

Each time the wizard is run, the above method is used to find a new starting sequence number, if necessary.

If no trailing numbers exist in the LUN names (for example, if all LUN names were changed to not include a number), the sequence numbering starts at 0.
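The numbering rule is easy to reproduce; the Python sketch below reimplements it for illustration, assuming the behavior is exactly as described above.

    import re

    def next_sequence_number(existing_lun_names):
        """Start one past the largest trailing number in existing LUN names, else 0."""
        trailing = [int(m.group(1)) for name in existing_lun_names
                    if (m := re.search(r"(\d+)\s*$", name))]
        return max(trailing) + 1 if trailing else 0

    names = [f"LUN {i}" for i in range(0, 11)] + [f"LUN {i}" for i in range(50, 60)]
    print(f"Virtual Disk {next_sequence_number(names)}")   # Virtual Disk 60, as in the example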

Reserved LUNs (MirrorView/A)

Reserved LUNs are required for both the primary and secondary LUNs in an asynchronous mirror

relationship. The MirrorView Wizard automatically configures the reserved LUNs on both storage systems

with no input from the user.

The following attributes are used to create the LUNs:

Reserved LUN capacity for the primary and secondary images is set at 20 percent of the image

LUN size.

Two Reserved LUNs are bound for each primary and secondary LUN selected in the wizard.

The total overhead storage space is divided evenly among the number of Reserved LUNs.

By default, reserved LUNs are configured as thick LUNs using this algorithm:

1. It configures thick LUNs in pools that do not contain server-visible LUNs or mirror

images, and contain the least number of reserved LUNs.

2. If the conditions in step 1 cannot be met, the system tries to create a new storage pool

(three- or five-disk RAID 5 or two-disk RAID 1/0) for the thick LUNs. It then configures

the reserved LUNs on this pool.

3. If no disks are available to create a new storage pool, it places the reserved LUNs in

an existing pool that has enough capacity.

4. If there is not an existing pool with enough capacity, it evaluates RAID groups using

steps 1 to 4.

5. If it cannot find an acceptable RAID group, the process fails.

Reserved LUNs created in pools typically alternate SP ownership. By default two reserved LUNs are

created per source, one owned by SP A and the other by SP B. When created on a RAID group, the Default

Owner property is assigned based on the RAID group from which the reserved LUN is bound:

For even-numbered RAID groups, the Default Owner is SP-A.

For odd-numbered RAID groups, the Default Owner is SP-B.

The naming convention used is the same as for write intent log LUNs: Virtual Disk sequence_number

For example, assume a user wants to create a mirror of LUN 12, which has a capacity of 200 GB. Existing LUNs on both systems range in LUN number and name from LUN 0 – LUN 50. The wizard will configure the reserved LUN pool on the primary storage system with two 20 GB LUNs and the secondary storage system with two 20 GB LUNs. The reserved LUN names on both arrays will be "Virtual Disk 51" through "Virtual Disk 54".

The wizard always creates two new reserved LUNs for each source LUN and never uses unallocated LUNs in the pool. Two reserved LUNs for each source LUN allow the reserved snapshots to expand over time.
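Putting the sizing and naming rules together, the Python sketch below reproduces the example above for one storage system. The 20 percent fraction, the two-LUN split, and the naming follow the behavior described in this section; edge cases are not modeled and the helper is illustrative only.

    def wizard_reserved_luns(image_gb, start_sequence, luns_per_image=2, fraction=0.20):
        """Return (name, size_gb) pairs for one image's reserved LUNs."""
        per_lun_gb = image_gb * fraction / luns_per_image
        return [(f"Virtual Disk {start_sequence + i}", per_lun_gb)
                for i in range(luns_per_image)]

    # 200 GB image; existing LUN names end at LUN 50, so numbering starts at 51.
    print(wizard_reserved_luns(image_gb=200, start_sequence=51))
    # [('Virtual Disk 51', 20.0), ('Virtual Disk 52', 20.0)]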

Advanced dialogs

Dialogs to perform specific tasks such as creating a mirror, adding a secondary, and so on are available in

Unisphere. These dialogs can be launched from the task menu pane or from the right-click object menu of a

specific object. Additional configuration options are available in these dialogs. They can be used to create

mirrors as well as change the mirror properties of any mirror regardless of which method was used to create

it.

Figure 27 shows the right-click menu from which mirror-related operations can be launched. Other

MirrorView object menus are available on the LUN and remote mirror objects in the tree structure.

Figure 27. MirrorView task menu and right-click menu options in Unisphere

Creating a mirror in this manner is a two-step operation. First, create the mirror while selecting the primary

image LUN. Then, add a secondary image to the mirror. Figure 28 shows the object-based dialogs for

creating a sync mirror and adding a secondary image. Here users have the option to specify values other

than default for Minimum Required Images, Quiesce Threshold, use of the write intent log, initial sync

required, and Recovery Policy.


Figure 28. Create Remote Mirror and Add Secondary Image dialog boxes

Figure 29 shows the object menus for creating an asynchronous mirror and adding a secondary image. For

asynchronous mirrors, users can configure minimum required images, initial sync required, recovery

policy, and the update type in the object dialog boxes, where these items are shown in the wizard.

Figure 29. Create Remote Mirror and Add Secondary Image dialog boxes

Consistency groups have similar dialogs as shown in Figure 30. Note that for asynchronous mirrors, you

can designate the Recovery Policy and Synchronization rate at the consistency group level. Values set at

the consistency group level override the same settings for individual mirrors in the consistency group. For

synchronous mirrors, you set the recovery policy at the consistency group level, but you set the

synchronization rate at the individual mirror level.


Figure 30. Create Group dialog boxes for synchronous and asynchronous mirrors

Unisphere Analyzer

Unisphere Analyzer is optional performance-monitoring software that operates within Unisphere. It provides detailed performance statistics for disks, RAID groups, LUNs, metaLUNs, thin LUNs, SnapView sessions, MirrorView/A mirrors, front-end ports, and SPs. Users can view the statistics in real time or log the metrics in a Unisphere Analyzer archive (.nar) file for later viewing. There are no specific statistics for MirrorView/S. The following performance metrics are available for MirrorView/A mirrors:

Total Bandwidth – Total update bandwidth, from the primary image to the secondary image, of an

async mirror.

Total Throughput – Writes per second, from the primary image to the secondary image, of an

async mirror.

Average Transfer Size – Average write size of the MirrorView/A update.

Time Lag (minutes) – The time difference between the point in time of the secondary image and

the primary image.

Data Lag (MB) – The amount of writes that have been tracked on the primary image, but have yet

to be sent to the secondary image.

Cycle Count – The number of updates that completed within the Unisphere Analyzer polling period.

Average Cycle Time – The average time duration of the MirrorView/A updates that have

completed within the Unisphere Analyzer polling period.
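These metrics can be combined into a rough recovery-point estimate. The Python sketch below is a back-of-the-envelope illustration with sample numbers, not a parser for Analyzer output; the worst-case formula and the example durations are assumptions for planning.

    def average_cycle_time(cycle_durations_min):
        """Mean duration of the MirrorView/A updates completed in a polling period."""
        return sum(cycle_durations_min) / len(cycle_durations_min)

    def worst_case_rpo_minutes(update_interval_min, avg_cycle_min):
        # A write landing just after a point-in-time copy waits for the next update
        # to start and then for that delta set to finish transferring.
        return update_interval_min + avg_cycle_min

    cycles = [11.5, 12.0, 13.2]   # sample per-update durations in minutes
    print(round(worst_case_rpo_minutes(15, average_cycle_time(cycles)), 1))   # ~27.2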

Figure 31 shows sample charts for total bandwidth and time lag. There are two mirrors in the charts, Edrive and Fdrive. The Edrive mirror is set to start updates 15 minutes after the start of the previous update. The Fdrive mirror is set to update 20 minutes after the start of each update.


Figure 31. Unisphere Analyzer charts for total bandwidth and time lag

MirrorView/A and MirrorView/S replicated data is included in the front-end port statistics for the secondary storage system; it is not included in the front-end port statistics for the primary storage system. If the MirrorView port for the secondary storage system is dedicated to MirrorView, then all traffic shown by Analyzer will be MirrorView activity. However, it is important to remember that if the port is also used for

servers or other host traffic, the traffic shown by Analyzer will include all of that traffic, not just

MirrorView traffic.


MirrorView failure recovery scenarios

When a failure occurs during normal operations, MirrorView and the base operating environment implement several actions to facilitate recovery. MirrorView tries to achieve three goals in the recovery of failures: minimize the duration of data unavailability, preserve data integrity, and when possible, complete a recovery transparent to the user.

The following sections describe various failure scenarios of primary components and secondary components. In these scenarios, failures may occur in the path between the production server and the primary storage system, to the primary or secondary image LUN itself2, to a single SP, or to an entire storage system. For the purpose of these discussions, the term primary storage system refers to the storage system containing the primary image and secondary storage system refers to the storage system containing the secondary image. "Appendix C: Storage-system failure scenarios" contains much of the same information in a table format for quick reference.

Primary component failure and recovery

Failures in the production server to storage system path, primary image(s), or primary storage system affect application performance and availability. The nature of the failure may dictate that little corrective action is needed. It may also dictate that the secondary image be promoted and used as the production image until the primary site (or image) can be repaired. The following sections describe various failure scenarios within the primary storage system and its components.

Path failure between server and primary image

If a server loses its I/O path(s) to a storage processor, LUNs are trespassed to the alternate SP. This is

typically enacted by server-based path management software such as PowerPath. With SP ownership

changed, the primary storage system will send a request to the secondary storage system to trespass the

secondary image. This way, SP ownership is consistent between the mirrored pairs on the primary and

secondary storage systems. Between updates, MirrorView/A may not trespass the secondary volume until

the start of the next update. Once trespassed, the update proceeds.

Primary image LUN failure

VNX series and legacy CLARiiON/NS systems (Release 26 and later) have internal I/O redirection features

that make them more resilient to certain types of LUN failures than previous releases. When a failure

prevents an SP from accessing its LUNs on the back end, I/O is internally redirected to the other SP,

provided that the other SP can access the LUNs. The benefit is that back-end failures affecting just one SP

(for example an LCC failure) do not result in a LUN trespass by the host failover software. Once the failure

is corrected, the original SP resumes control of back-end I/O to the LUN.

Masking the back-end failure internally provides a faster and less complex failover and recovery process.

Server I/O and mirroring traffic remain balanced across the connectivity environment. Also, there is no

interruption to the mirroring process, which reduces the number of fractures and resynchronizations.

If the LUN failure occurs in such a way that neither SP can access the LUNs, then the mirror will become

fractured and server I/O to the LUN is rejected until the LUN becomes available again. During this process,

the user may choose to promote the secondary image, and redirect server I/O to the secondary image.

If the secondary image state is consistent and/or fractured at the point in time of the failure on the primary

image, a full resynchronization of the originally-primary-now-secondary image would be required, since

there would be no other way to guarantee byte-for-byte synchronicity.

For MirrorView/A, if the primary image fails during an update and the user promotes the secondary image,

any changes to the secondary are rolled back from the protective snapshot. If the user fixes the error on the

primary without promoting the secondary, the update continues from the point it was interrupted.

2 Many media failures are themselves automatically detected and corrected by lower-level CLARiiON

software, so this scenario is probably the least likely.


Storage processor (SP) controlling primary image failure

If the SP owning the primary image fails, the MirrorView LUN trespasses to the surviving SP on the

primary storage system per direction of the server failover software. At the same time, MirrorView on the

primary system will issue a trespass command to the secondary to trespass the secondary image.

For MirrorView/S, the write intent log is used to avoid a full resynchronization. Mirrors not using the write

intent log will have to do a full synchronization. For MirrorView/A, a full synchronization is not required

since tracking/transfer data is stored on persistent media.

Figure 32. Before the SP failure: Both images are controlled by SP A

Figure 33. Following the SP failure: Both images are trespassed to SP B

Primary image storage system failure

If the storage system containing the primary image fails, server I/O to the LUN will fail until the storage

system becomes available again. As with the primary image LUN failure, the user may choose to promote


the secondary image, and redirect server I/O to this image. However, if the secondary image is promoted

while communication to the original primary storage system is unavailable, it will not be possible to add

the original primary image in the new mirror as a secondary image. Instead, it is necessary to add3 the

original primary device subsequently (when that storage system comes back online) to the new mirror and

then perform a full synchronization. Alternatively, a different LUN on the original primary array can be

added as the secondary image for the newly promoted primary (as shown in Figure 35), which likewise

would require a full synchronization.

Figure 34. Before the primary storage system failure: The primary image (on the primary system) has mirror ID X

3 Before adding the original primary image to the new mirror, the old mirror must be force destroyed. In this case, the force destroy option is used because changes were made to the mirror while the primary system was unavailable.


Figure 35. Following the primary storage system failure: The newly promoted primary image (on the secondary system) has mirror ID Y

Secondary component failure and recovery

Loss of secondary components reduces or eliminates DR protection of the mirror. When a primary image determines that it is not possible to communicate with a secondary image, it marks the secondary as unreachable and discontinues attempts to write to the secondary. However, the secondary image remains a member of the mirror.

Secondary image LUN failure

The I/O redirection capability (as described in the ―Primary image LUN failure‖ section) allows mirroring

to continue if the LUN failure only affects the current owning SP. The primary image would continue

mirroring to the secondary image on the original SP. If the secondary image LUN fails in such a way that

affects both SPs, then the image is admin fractured. At that point, the administrator can choose whether to

repair the LUN or remove it from the MirrorView relationship altogether. If the LUN is repaired, it can be

resynchronized once it is available for use. If another LUN is selected to serve as the secondary image, it

must first be fully synchronized.

Storage processor (SP) controlling secondary image failure

If the SP owning the secondary image fails, it is system fractured until the SP comes back online. Once the

SP comes back online, the mirror is automatically resynchronized—provided that the auto recovery policy

is selected. If auto restore is not enabled, then the administrator must intervene by explicitly starting the

synchronization4. Additionally, if the same SP on the primary storage system also fails—resulting in the

trespassing of the primary LUN to the peer SP and, likewise, the trespassing of the secondary LUN to its

peer SP—the secondary image is automatically resynchronized (again, provided auto-restore is enabled).

Secondary image storage system (or link) failure

If the storage system containing the secondary image fails, the secondary image is system fractured.

Similarly, if the link between the primary and secondary storage system fails, the secondary image is also

system fractured. In either event, once the issue has been resolved and the secondary storage system is back

in communication with the primary storage system, the secondary image is automatically resynchronized if

auto recovery policy is enabled.

4 Users are given the option to have the resynchronization occur automatically by enabling auto restore. If

they prefer, they can manually initiate this themselves. This is a property of the mirror, and can be changed

at any time.


Figure 36. A failure of the link between storage systems or of secondary storage systems temporarily suspends mirroring

Full reserved LUN (MirrorView/A only)

On the primary storage system

If the reserved LUN space for a mirror fills up during an update, MirrorView software aborts the update

operation and the mirror transitions into an admin-fractured state. For the MirrorView/A session, the point-

in-time view of the primary mirror LUN created at the start of the update operation is stopped while the

information in the tracking map is maintained. In this condition, the user has an option of adding more

reserved LUNs and manually starting the MirrorView/A session update operation. The contents of the

tracking map are persistently maintained. Starting the update operation will result in incremental update of

the secondary mirror with the changes on the primary mirror since the last successful update.

On the secondary storage system

The amount of reserved LUN capacity required on the secondary side equals the amount of changes applied

to the secondary. For example, if 20 percent of the contents of the primary LUN changes, the reserved

LUNs must have a minimum capacity equal to the 20 percent change.

If, during the update, the reserved LUNs on the secondary system are full, the MirrorView/A session will

transition into an admin-fractured state. The point-in-time view of the secondary mirror LUN in the form of

a gold copy is maintained. User intervention is required to allocate more storage to the reserved LUNs and

restart the update operation. The update will continue from the point it was interrupted.

Using SnapView with MirrorView

SnapView is storage-system-based replication software for local replicas. SnapView is licensed separately from MirrorView/S and MirrorView/A. For VNX series platforms, SnapView is included in the Local Protection Suite, Total Protection Pack, and the Total Efficiency Pack. The license can be purchased separately for legacy CLARiiON/NS systems. SnapView supports both pointer-based replicas, called snapshots, and full binary copies, called clones. SnapView benefits users by providing business continuance copies for local rapid recovery and parallel processing operations such as backup, reporting, or testing.


When used with MirrorView, SnapView can provide local replicas of primary and/or secondary images. It

allows for secondary access to data at either the primary location, secondary location, or both, without

taking production data offline.

Figure 37. Clones and snapshots of primary and secondary images

For in-depth information on SnapView architecture and basic operations, see the following white papers on

EMC.com and Powerlink:

EMC CLARiiON SnapView Clones – A Detailed Review

EMC SnapView SnapShots and Snap Sessions Knowledgebook

SnapView replicas of MirrorView primary images follow standard operating guidelines as described in the

above white papers. SnapView replicas of MirrorView secondary images have to be produced very

carefully because secondary image states can change frequently.

All SnapView replicas of secondary images should represent points in time where the secondary image is

in the consistent or synchronized state. During MirrorView synchronizations, the secondary image LUN is

not in a usable state. Secondary images are in the synchronizing state during MirrorView/S

synchronizations and the updating state during MirrorView/A updates. You should not start SnapView

snapshots or fracture clones while the secondary images are in either state.

You can use the Manual MirrorView recovery policy to coordinate SnapView activities with fractured

mirrors. The Manual option requires user intervention to start synchronizations after a system fracture.

Snapshots can be started or clones fractured before initiating the mirror synchronization.

SnapView and MirrorView consistency features are also compatible. Use SnapView's "consistent start" and "consistent fracture" operations when creating SnapView replicas of a MirrorView/S consistency

group. SnapView consistent operations cannot be performed on the consistency group, but can be

conducted across all members of the group. Consistent operations can also be used with MV/A, but are not

required if you are starting sessions or fracturing clones between updates. Replication Manager 5.1 and

later versions have support for creating SnapView replicas of MirrorView/S and MirrorView/A primary

images and MirrorView/S secondary images.

Snapshots of MirrorView image LUNs are available on all storage systems that support both SnapView

snapshots and MirrorView. Clone of a mirror is available on all VNX series systems and legacy systems

running release 24 or later. The AX4-5 currently runs release 23, so it does not offer this option.

LUN eligibility

Any mirror image can be a clone source; however, a clone image of another source cannot serve as a

mirror image. The LUN can become a clone source or mirror image in any order. For example, an existing


clone source could be added to a mirror as a secondary image. Alternately, a mirror secondary image could

also be made a clone source.

A snapshot can be taken of any MirrorView image. Snapshot images cannot serve as mirror images. In

other words, you cannot mirror a snap.

Limit dependencies

Clone and MirrorView/S limits are independent of one another. The number of mirror images and clones

for each LUN and each storage system are counted separately against the LUN and storage system limits.

While system limits for clones vary with storage system type, the number of clones per source LUN

remains eight for all storage system models.

MirrorView/A images count toward the LUN-level snap-session limit of eight. The LUN-level snap-session

limit is the same for all storage system models. MirrorView/A uses a reserved snap session on both the

primary and secondary images to track changes and maintain point-in-time copies. Therefore, users can

create up to seven SnapView snap sessions on a MirrorView/A primary or secondary image. There are no

such limit dependencies between MirrorView/S images and SnapView snapshots.

Instant restore

Both snapshots and clones offer instant restore capability. For clones, the reverse synchronization operation

is used for instant restore. For snap sessions, the rollback feature is used. In both cases, the source LUN is

instantly available and will appear to the host as the point in time that the SnapView replica represents. In

the background, SnapView moves the data from the SnapView replica back to the source LUN.

When used with MirrorView, instant restores can only be performed on primary image LUNs. During an

instant restore to a primary image, the secondary image must be fractured. If the mirror is part of a

consistency group, then the group must be fractured. The software enforces this to prevent MirrorView

from presenting a consistent secondary image state while the data itself isn’t usable. Once the restore

process completes, a partial resync can be performed to move the changes to the secondary image.

Secondary image LUNs can be restored from a SnapView replica only after they are promoted to the role

of primary image.

Promoting a secondary image while the primary image is being restored is possible with the Local Only

promote option. This option is available for individual MirrorView/A mirrors and consistency groups for

both MirrorView/A and MirrorView/S. It is not available for individual MirrorView/S mirrors. The local

only promote option places the secondary image LUN in a new mirror as the primary image. The original

primary image LUN cannot be demoted to secondary while the restore process is active.

Finally, in the case of MirrorView/A only, an instant restore cannot be performed on a primary image if the

image is rolling back. An image is rolling back if it was recently promoted and MirrorView/A is

recovering it from the gold copy. It is possible to restore the primary image once the rollback process is

complete.
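The restore sequence described above can be outlined as follows. This is a hedged sketch of the workflow only; the mirror, group, and replica objects and their method names are hypothetical and do not correspond to an actual EMC API.

```python
# Hypothetical outline of the instant-restore-to-primary sequence above.
# Object and method names are illustrative; only the ordering of the steps
# (fracture, restore, partial resync) comes from the text.

def instant_restore_primary(mirror, snap_replica, consistency_group=None):
    """Restore a MirrorView primary image from a SnapView replica."""
    # 1. The secondary image must be fractured first; if the mirror belongs
    #    to a consistency group, the whole group is fractured instead.
    if consistency_group is not None:
        consistency_group.fracture()
    else:
        mirror.fracture_secondary()

    # 2. Start the instant restore: reverse synchronization for a clone,
    #    rollback for a snap session. The primary LUN is immediately usable
    #    at the replica's point in time while data copies back in the
    #    background.
    snap_replica.restore_to_source()
    snap_replica.wait_until_restore_complete()

    # 3. Once the restore completes, a partial resynchronization pushes the
    #    restored changes out to the secondary image.
    if consistency_group is not None:
        consistency_group.synchronize()
    else:
        mirror.synchronize()
```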

Clone state “Clone Remote Mirror Synching”

The Remote Mirror Synching clone state was added to help users coordinate mirror-sync and

clone-sync-and-fracture operations. During mirror sync operations the secondary image LUN is not in a

usable state. This is true for initial syncs, MirrorView/S partial syncs, and MirrorView/A updates when the

secondary image is in the synchronizing or updating states.

Clones that are in-sync during a mirror sync will be in the Clone Remote Mirror Synching state as shown

in Figure 38. This clone state indicates that the source LUN, also serving as a secondary image, is not in a

consistent state and should not be fractured.


Figure 38. Remote Mirror Syncing Clone state

Automation and reporting software
Several EMC and third-party software products are qualified to work with MirrorView to automate high-

availability and DR solutions. The following products are currently qualified and available for use with

MirrorView. Always check the EMC Support Matrix for the latest interoperability information.

VMware vSphere and Site Recovery Manager (SRM)

Both VMFS volumes and RDM volumes can be replicated with MirrorView. Some of the major

implementation guidelines for MirrorView and other replication applications are listed below.

RDM volumes must be in Physical compatibility mode when used with storage-system-based

replication.

When replicating an entire VMFS volume that contains a number of virtual disks on a single LUN,

the granularity of replication is the entire LUN with all of its virtual disks.

When making copies of VMFS volumes that span multiple LUNs, you must use the consistency

technology of SnapView/MirrorView.

For VMFS-3 and RDM volumes, the replica can be presented to the same ESX server but not to

the same virtual machine.

VMware Site Recovery Manager (SRM) provides a framework for remote site recovery automation. SRM

works with storage vendor-provided Storage Replication Adapters (SRAs) to automate site recovery using

the storage vendor’s native replication software. The VNX series/CLARiiON SRA supports SRM

automation with MirrorView/A and MirrorView/S.

Please consult the E-Lab™ Navigator and/or the following references on Powerlink for detailed

implementation information:

Using EMC VNX Storage with VMware vSphere techbook

Using EMC CLARiiON Storage with VMware vSphere and VMware Infrastructure

EMC MirrorView Adapter for VMware Site Recovery Manager Release Notes

MirrorView Insight for VMware (MVIV) is a new tool bundled with the MirrorView Site Recovery

Adapter (SRA) that complements the Site Recovery Manager framework by providing failback capability

(experimental support only). MVIV also provides detailed mapping of VMware file systems and their

replication relationships. For more details, refer to the MirrorView Insight for VMware (MVIV) Technical

Note available on Powerlink.


EMC Replication Enabler for Microsoft Exchange Server 2010

The EMC Replication Enabler for Microsoft Exchange Server 2010 is a free software utility that integrates

EMC RecoverPoint and RecoverPoint/SE synchronous remote replication and MirrorView/S replication

with the Exchange Server 2010 database availability group (DAG) architecture. This allows customers to use

the same, consistent, array-based replication solutions for Exchange 2010 and other applications in the

environment. For more information, see the Replication Enabler for Microsoft Exchange Server 2010

Release Notes on EMC Powerlink.

EMC Data Protection Advisor (DPA)

EMC Data Protection Advisor helps users ensure their replication environment remains accurate and

effective in the face of constant change. It provides a scalable, enterprise-level solution for replication

analysis to ensure mission-critical applications are protected and recoverable. It discovers selected clients’

applications, databases, and file systems, and maps them to physical storage devices. It maps all copies of

the primary data including snapshots, clones, and remote synchronous and asynchronous replicas. After

discovery, it correlates client, application, storage mapping, and other metadata to identify incomplete and

inconsistent replicas that would result in a failed recovery. It then provides an intuitive graphical map of the

relationship between host and storage. It represents any recoverability gaps and exposures in easy-to-

understand reports and views that can be used by administrators to resolve issues. For VNX series and

legacy CLARiiON/NS series systems, DPA supports MirrorView/A, MirrorView/S, SnapView, SAN

Copy, and RecoverPoint (CDP, CRR, CLR). Application integration includes Oracle, SQL Server,

PostgreSQL, Exchange, and file systems.

DPA eliminates the need for manual data collection, and collects data more frequently than would be

practical manually. With this information, it can actively alert administrators to any issues as well as reduce

the time it takes to conduct an audit by as much as 95 percent. This can reduce costs by eliminating the

need for third-party consulting services to perform these tasks. For more information on Data Protection

Advisor, the following white papers are available on Powerlink:

EMC Data Protection Advisor for Replication Analysis – A Detailed Review

Increasing Recoverability of Critical Data with EMC Data Protection Advisor and Replication

Analysis

MirrorView/Cluster Enabler (MV/CE)

EMC MirrorView/Cluster Enabler is a replication automation product from EMC. MV/CE is host-based

software that integrates with Microsoft Failover Cluster software to allow geographically dispersed cluster

nodes to be deployed across MirrorView links. MV/CE seamlessly manages all storage system processes necessary to facilitate

cluster node failover. MirrorView/A and MirrorView/S replication is supported.

MV/CE offers reduced RTO by allowing the Windows operating system and applications like Microsoft

Exchange and Microsoft SQL Server to automate resource failover between sites. MV/CE can also be used in

Hypervisor environments in host clustering configurations for Hyper-V and guest clustering configurations for

Hyper-V and VMware. For more information, please see the following resources on Powerlink:

EMC MirrorView/Cluster Enabler (CE) – A Detailed Review white paper

Cluster Enabler Product Guide

MirrorView/Cluster Enabler Release Notes

MV/CE 4.1 supports VNX Series systems, CLARiiON/NS systems running FLARE releases 26-30, and AX4

systems running Release 23.

Replication Manager

Replication Manager (RM) is replica automation software that creates point-in-time replicas of databases

and file systems residing on VNX, CLARiiON, Symmetrix, or Celerra® storage systems. Replicas can be

used for repurposing, backup, and recovery, or as part of a DR plan. RM provides a single interface for


managing local and remote replicas across supported storage systems. Replication Manager supports

SnapView snapshot and clone replicas of MirrorView/S and MirrorView/A primary and secondary images.

For more information see the Replication Manager Product Guide on Powerlink.

AutoStart

AutoStart™ is high-availability software that enables clustering of enterprise applications. It safeguards

critical applications by automating the failover of applications such as ERP, database, e-mail, and web

applications. It also monitors application health and performance, provides alerting, and

reprovisions/restarts services as necessary in response to failures.

AutoStart is heterogeneous software that provides a consistent solution across a variety of operating

systems and platforms. AutoStart 5.3 adds support for applications being replicated by MirrorView/S and

MirrorView/A. This enables automated application failover between nodes across MirrorView links.

For more detailed product information, consult the EMC AutoStart Administrator’s Guide. For supported

environment information, consult the EMC Information Protection Software Compatibility Guide.

VNX for File and Celerra

LUNs allocated to VNX for File or the legacy Celerra line of network-attached storage products can be

replicated with MirrorView/S. The following are high-level guidelines for using MirrorView/S with

Celerra. More detailed information is included in the technical modules Using MirrorView/Synchronous

with VNX for File for Disaster Recovery and Using MirrorView/Synchronous with Celerra for Disaster

Recovery, found on Powerlink.

All LUNs dedicated to VNX for File or Celerra must be managed within a consistency group.

Consistency groups are discussed in the "Storage-system-based consistency groups" section.

As part of the baseline configuration, control LUNs 0, 1, and 4 must be included in the

consistency group. The remainder of the LUNs in the consistency group can be allocated as user

LUNs.

The write intent log must be allocated. See more in the "Write intent log" section.

Existing configurations can implement MirrorView/S provided they can meet the first guideline.
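A minimal check of these guidelines could look like the following sketch. It assumes only what the guidelines state (all replicated file LUNs in one consistency group that also contains control LUNs 0, 1, and 4); the function and its inputs are hypothetical.

```python
# Minimal sketch, assuming the guidelines above: every VNX for File / Celerra
# LUN replicated with MirrorView/S belongs to one consistency group, and that
# group also contains control LUNs 0, 1, and 4. Names here are hypothetical.

REQUIRED_CONTROL_LUNS = {0, 1, 4}


def validate_file_consistency_group(group_members: set,
                                    replicated_file_luns: set) -> list:
    """Return a list of guideline violations (an empty list means compliant)."""
    problems = []
    missing_control = REQUIRED_CONTROL_LUNS - group_members
    if missing_control:
        problems.append(f"control LUNs missing from the group: {sorted(missing_control)}")
    outside_group = replicated_file_luns - group_members
    if outside_group:
        problems.append(f"file LUNs mirrored outside the group: {sorted(outside_group)}")
    return problems
```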

Sybase Mirror Activator

Sybase Mirror Activator is a Sybase software solution that allows users to combine the benefits of

transaction replication and storage-based replication to deliver business continuity and DR for mission-

critical applications.

The combination of Sybase Mirror Activator and MirrorView/S provides a zero data loss solution that

significantly reduces RTO and improves return on investment through continued access to standby resources.

This solution is available for use with MirrorView/S. Sybase Mirror Activator is able to maintain an active

copy of the database at the DR site by having the replication server apply log files as they are replicated by

MirrorView/S.

For more information about this solution, see the EMC MirrorView/Synchronous with Sybase Mirror

Activator white paper on Powerlink.

Disk drive technologies and advanced data services
EMC midrange storage systems offer the widest range of disk drive technologies in the industry. Multiple

price points and performance profiles are needed to meet a diverse set of end-user application needs. Disk

drive offerings are categorized into three tiers based on price-performance attributes:


Tier 0 – Flash drives are built on solid-state memory technology. Flash drives are applicable

for very high I/O per second environments with low response time requirements. Flash drives can

offer many times the IOPS performance of other drive technologies, along with high reliability.

They also offer low power consumption per I/O and per GB as there are no moving parts.

Tier 1 – Serial Attached SCSI (SAS) and Fibre Channel (FC) drives. SAS and FC drives consist

of traditional rotating drives, and are offered in 10k and 15k rpm varieties. They offer high I/O and

bandwidth performance with high reliability and high capacity.

Tier 2 – Near Line SAS (NL-SAS) and Serial Advanced Technology Attach II (SATA II) drives.

NL-SAS and SATA II drives are best for applications with very high capacity requirements and

with duty cycles less than 100 percent. They are traditional rotating drives that are available in

7200 rpm and low-power 5400 rpm offerings. They have a lower IOPS profile than the other drive

offerings, but can offer comparable sequential stream bandwidth performance.

For more information on disk drive technologies available for midrange storage systems, see the white

paper An Introduction to EMC CLARiiON and Celerra Unified Platform Storage Device Technology on

Powerlink.

EMC midrange systems also offer the widest array of storage-system-based data services in the midrange

market. Fully Automated Storage Tiering for Virtual Pools (FAST VP) and FAST Cache work at the sub-

LUN level to optimize data placement and performance based on current activity. Compression allows

users to maximize capacity utilization by storing data in a smaller footprint than would normally be

possible.

When LUNs are deployed in RAID groups, they have a homogeneous drive type over the entire address

space. Pools can be built with homogeneous or heterogeneous drive types. For heterogeneous pools,

FAST VP allows optimization of data placement on heterogeneous drive types at a sub-LUN level.

FAST Cache is available for both RAID group and pool LUNs. FAST Cache can boost performance of a

storage system’s most active data by moving the most active components to Flash drives. It allows a

relatively small allotment of Flash drives (up to 2 TB on the VNX7500 and CX4-960) to boost performance

of the most active data in the entire storage system. Data movement via FAST VP and FAST Cache is

largely decided by the activity of the production application relative to other application activity on the

storage system.

Compression is an option for data that is relatively inactive but still requires five 9s of availability. It is

enabled at the LUN level and runs as a background process to reduce the data footprint of a given LUN.

New data is written uncompressed. The compression feature monitors the amount of uncompressed data

and processes it once enough has been written to surpass a system-defined threshold.

Storage-system-based replication is supported with any LUN type and in conjunction with any feature such

as FAST VP, FAST Cache, or compression. FAST VP and FAST Cache perform their analysis on a per-

array basis. When used with remote replication, any FAST VP data relocation or FAST Cache promotions

occur independently at each location. I/O performed by a replication product such as MirrorView or SAN

Copy is factored into the I/O analysis of the source/primary and target/secondary LUNs.

For example, consider the case where a synchronous mirror is set up between two thick LUNs. Both the

primary and secondary images reside in heterogeneous pools with FAST VP. FAST Cache is enabled on

the primary image LUN. The application workload is 80 percent reads and 20 percent writes. On the

primary volume, FAST VP and FAST Cache manage any data movement considering both the read and

write I/Os. FAST VP on the secondary storage system factors in only the write I/Os. Whether or not data is

moved between storage tiers via FAST VP depends on the other I/O activity on each storage system.
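To make the example concrete, the arithmetic below shows which portion of the workload each system's FAST analysis would see. The IOPS figure is invented for illustration; only the 80/20 read/write split and the fact that the secondary sees replicated writes come from the text.

```python
# Illustrative arithmetic for the 80/20 example above: the primary system's
# FAST VP / FAST Cache analysis sees the full host workload, while the
# secondary system's FAST VP only sees the writes that MirrorView forwards.
# The 1,000 IOPS figure is hypothetical.

host_iops = 1000
read_ratio, write_ratio = 0.80, 0.20

primary_observed_iops = host_iops                   # host reads + writes
secondary_observed_iops = host_iops * write_ratio   # replicated writes only

print(f"Primary FAST input  : {primary_observed_iops:.0f} IOPS")
print(f"Secondary FAST input: {secondary_observed_iops:.0f} IOPS")
# Whether either array actually relocates data still depends on how this
# activity compares with everything else running on that array.
```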

EMC recommends that you disable FAST Cache on MirrorView secondary image LUNs for best

performance. FAST Cache typically doesn’t improve the performance of secondary image LUNs, so fewer

system resources are used if it’s disabled. If using storage pools, secondary image LUNs should be placed


in a storage pool with FAST Cache disabled. If the secondary image LUNs are promoted, FAST Cache can

be enabled for the pool containing the newly promoted LUNs.

Compression analyzes data at the LUN level. Replication I/O can contribute to the uncompressed data

threshold used to trigger the compression process. For instance, if a mirror secondary or SAN Copy

destination is compressed, updates to those volumes will be written uncompressed. When enough data is

written, the compression process will start automatically. If both the source and target are compressed, their

respective compression processes run independently.
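The threshold-driven behavior can be sketched as below. The class, method names, and the 10 GB figure are purely illustrative assumptions; the actual threshold is internal to the storage system.

```python
# Hedged sketch of the behavior described above: new writes (host or
# replication) land uncompressed, and a background compression pass starts
# once the uncompressed amount crosses a system-defined threshold. The class
# and the threshold value are illustrative, not the real implementation.

class CompressedLun:
    SYSTEM_THRESHOLD_GB = 10.0   # hypothetical; the real threshold is internal

    def __init__(self):
        self.uncompressed_gb = 0.0

    def write(self, gb: float) -> None:
        """Host or replication writes accumulate as uncompressed data."""
        self.uncompressed_gb += gb
        if self.uncompressed_gb >= self.SYSTEM_THRESHOLD_GB:
            self.start_background_compression()

    def start_background_compression(self) -> None:
        # A source LUN and a compressed replication target each run their own
        # compression pass independently; here we just reset the counter.
        self.uncompressed_gb = 0.0
```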

When choosing drive technologies for a DR site, users must consider if production applications will run at

the DR site for significant periods of time. If so, similar drive types and configurations should be used at

the DR location. Keep in mind that after promotion, FAST and FAST Cache will begin to react to the

production load running on that system. For instance, after promotion, FAST Cache will have a warming

period as the system analyzes the production workload. For more information on FAST VP and FAST

Cache, please see the following white papers on Powerlink:

EMC CLARiiON, Celerra Unified, and VNX FAST Cache – A Detailed Review

EMC FAST VP for Unified Storage Systems – A Detailed Review

EMC VNX Virtual Provisioning – Applied Technology

EMC CLARiiON Data Compression – A Detailed Review

The remainder of this section describes how drive types factor into MirrorView

implementation from the perspective of server I/O and replication I/O. Replication I/O is I/O generated by

MirrorView, like synchronizations. The principles are the same whether the data was explicitly placed

on a drive type or it was automatically moved via FAST and/or FAST Cache.

Flash drives excel in transactional I/O profiles when compared to other drives, particularly when there is a

significant random read component. Random reads typically cannot be serviced by read-ahead algorithms

of the drive itself or by the storage system read cache. Therefore, the latency of a random read is directly

related to the seek time of the disk drive. For rotating drives, this is the mechanical movement of the disk

head to the desired area. Flash drives can perform this operation at solid-state memory speeds, which can

be an order of magnitude faster when compared to rotating drives.

Generally speaking, there is no major advantage in using Flash drives purely to improve replication I/O

performance, because replication I/O involves a high percentage of writes. However, applications that

benefit from Flash drives will continue to do so while using MirrorView for DR.

When using MirrorView/S, Flash drives still provide the same advantage to the production application for

random reads. Minimum write response times will probably be dictated by the replication over the link to

the secondary image.

Write intent log LUNs should always have write cache enabled, as their I/O profile is almost exclusively writes and write cache is required to take advantage of SP cache optimizations. Write intent log LUNs

use an optimization that keeps active data in SP cache. If placed on Flash drives, users must enable write

cache on those LUNs. There is no advantage to placing the write intent log LUNs on Flash drives over Tier

1 drives when write cache is enabled. For optimal performance, it is recommended that FAST Cache be

disabled on write intent log LUNs.

While using MirrorView/A, Flash drives continue to provide the same read-performance benefits to the

production application. In addition, when primary image LUNs are placed on flash, the impact of copy-on-

first-write operations on production volumes can be reduced. Part of the copy-on-first-write operation is a

read from the source LUN and a write to the reserved LUN. The read performance advantage of Flash

drives can provide benefits to this portion of the copy-on-first-write operation, thereby reducing response

time impacts. Therefore, when source LUNs reside on Flash drives, copy-on-first-write activity has

minimal impact on the application.


The I/O profile for reserved LUNs consists largely of writes. Therefore, there is little to no advantage

to explicitly placing reserved LUNs on flash drives. We recommend that you use write-cache enabled

LUNs on Tier 1 drives for reserved LUNs. Reserved LUNs can be RAID-group-based LUNs and/or thick

LUNs.

Writes to reserved LUNs can be random in nature, particularly when there are several reserved LUNs on a

particular set of drives. Therefore, we do not recommend Tier 2 drives for reserved LUNs. In the case of

heterogeneous pools, it is possible that some percentage of a reserved LUN resides on Tier 2 drives. If the

reserved LUN is allocated for a long period of time, like for MirrorView/A primary images, FAST VP can

process the reserved LUN for tiering. For reserved LUN allocations of shorter duration, such as short-term

snapshots, the LUN may not be allocated long enough for FAST VP or FAST Cache to assist. In these

cases, the user may choose to explicitly place reserved LUNs on the preferred drive type by using RAID-

group-based LUNs or specifying tier placement in the pool.

iSCSI connections
iSCSI connections define initiator/target pairs for iSCSI remote replication between storage systems.

Similar to zoning in a Fibre Channel environment, they dictate which targets the initiators log in to. Since

all necessary iSCSI connections are automatically created with the MirrorView Wizard or the MirrorView

Connections dialog box, it is rarely necessary to explicitly manage iSCSI connections for

MirrorView.

Release 29 adds support for VLAN tagging, which allows you to configure several virtual ports on each

physical iSCSI port. Enabling VLAN tagging is optional on a per-port basis. VNX series systems support

up to eight virtual ports per 10 Gb/s and 1 Gb/s iSCSI port. CX4 and NS series systems support up to eight virtual ports per 10 Gb/s port and up to two virtual ports per 1 Gb/s port. The next figure shows the iSCSI

port properties for a port with VLANs enabled and two virtual ports configured.

Figure 39. iSCSI Port Properties with VLAN tagging enabled
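The per-port limits quoted above can be captured in a small validation sketch. The limits themselves come from the text; the lookup table and helper function are hypothetical.

```python
# Minimal sketch of the virtual-port limits quoted above. The numbers come
# from the text; the lookup table and helper are hypothetical.

MAX_VIRTUAL_PORTS = {
    ("vnx", "10Gb/s"): 8,
    ("vnx", "1Gb/s"): 8,
    ("cx4_ns", "10Gb/s"): 8,
    ("cx4_ns", "1Gb/s"): 2,
}


def can_add_virtual_port(platform: str, link_speed: str, configured: int) -> bool:
    """Return True if another virtual port fits on this physical iSCSI port."""
    return configured < MAX_VIRTUAL_PORTS[(platform, link_speed)]


assert can_add_virtual_port("cx4_ns", "1Gb/s", configured=1)
assert not can_add_virtual_port("cx4_ns", "1Gb/s", configured=2)
```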


iSCSI connections are defined per physical front-end port for ports without VLAN tagging enabled or per

virtual port for ports with VLAN tagging enabled. For each MirrorView connection to another storage

system, an iSCSI connection will be defined. iSCSI connections must be made from the primary storage system to the secondary system and from the secondary system back to the primary system. This is required

because mirroring can be bidirectional between systems and primary/secondary images can reverse roles,

resulting in a reversal of replication direction.

Figure 40 shows a connection defined for the storage system "Site 1". This connection allows the MirrorView ports (A-4, B-4) to log in to target ports of the "Site 2" system. The IP addresses listed in the

Target Portal column represent the MirrorView ports of the Site 2 system. The connections shown were

automatically generated by MirrorView when the MirrorView connection was enabled.

Figure 40. iSCSI Connections Between Storage Systems dialog box

When MirrorView automatically creates the iSCSI connection between storage systems, it will try to use a

non-routed (that is, same subnet) path between storage systems. If there are several non-routed options

available, it will choose the option with the lowest virtual port number.

If you do not want to use the default selection criteria for iSCSI connections, connections can be created

manually. To do so, use the Add Connection dialog box, which is shown in Figure 41. In this dialog,

initiator/target pairs and CHAP authentication methods are identified. The dialog can be launched in the UI by navigating to the dashboard, right-clicking the storage system of interest, going to iSCSI > Connections Between Storage Systems, and clicking the Add… button. Checkboxes are used to choose

which front-end ports or virtual ports should log in to the target port. The target port is selected using

the drop-down menu. One or more ports can be assigned to log in to each target port. When manually

creating iSCSI connections, be sure to check the MirrorView (/A or /S) release notes for setting

recommendations.

When creating iSCSI connections, the user has three options for CHAP authentication:

Shared – A general set of CHAP credentials that can be used by any or all of the connections of a

storage system

Connection Specific – CHAP credentials that are defined specifically for the connection being

created

None – No authentication
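The choices exposed by the Add Connection dialog (initiator ports, a target port, and one of the three CHAP modes above) can be modeled as in the sketch below. The data structure is illustrative only and is not the Unisphere or naviseccli interface.

```python
# Hypothetical model of an iSCSI connection definition: which initiator
# (front-end or virtual) ports log in to a target port, and which CHAP mode
# is used. None of these names are the actual Unisphere or naviseccli API.

from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class ChapMode(Enum):
    SHARED = "shared"                    # storage-system-wide credentials
    CONNECTION_SPECIFIC = "connection"   # credentials just for this connection
    NONE = "none"                        # no authentication


@dataclass
class IscsiConnection:
    initiator_ports: List[str]           # e.g. ["A-4", "B-4"] or virtual ports
    target_portal: str                   # target port on the remote system
    chap_mode: ChapMode = ChapMode.CONNECTION_SPECIFIC
    chap_username: Optional[str] = None
    chap_secret: Optional[str] = None


# When MirrorView creates the connection automatically, the result resembles
# connection-specific CHAP with system-generated credentials.
auto_connection = IscsiConnection(
    initiator_ports=["A-4", "B-4"],
    target_portal="10.0.0.2:3260",       # hypothetical address
    chap_username="system-generated",
    chap_secret="system-generated",
)
```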


Figure 41. Add Connection dialog box

Connection-specific CHAP credentials are used when an iSCSI connection is created by the MirrorView

Wizard or MirrorView Connections dialog box. Figure 40 shows the connection name and username format that the system generates automatically from the storage system names. A CHAP

username and secret are generated by the system when the connection is created.

System-generated CHAP secrets are unique for each connection created. Corresponding entries in the target

system’s CHAP management are made for the user by the system when the iSCSI connection is created.

Users that must define their own CHAP username and secrets can edit the system-generated entries by

using the Modify button on the iSCSI Connections Between Storage Systems dialog box (Figure 40).

Corresponding changes must be made to the target system’s CHAP management (Figure 43).

The need for CHAP credentials is determined by the target system. There is an option in the iSCSI port

properties page for requiring initiator authentication. Figure 42 shows the iSCSI Port Properties page,

which is launched from the storage system right-click menu by going to <Storage System Name> Settings

> Network > Settings for Block, selecting the desired port, and clicking Properties. The option for

requiring initiator authentication is at the bottom of the dialog box.


Figure 42. iSCSI Port Properties dialog box

When initiator authentication is required, initiators must be defined within the target system’s CHAP

management dialog. Figure 43 shows the iSCSI CHAP Management dialog box for the Site 2 system. The

entries shown are the system-generated entries created for the MirrorView ports of the Site 1 storage

system. Initiator and CHAP user/secret entries can be manually added from this dialog; however, it’s rare

that manual entries are required for MirrorView ports, since the entries are automatically created as part of

creating the iSCSI connection.


Figure 43. iSCSI CHAP Management dialog box

When iSCSI connections are created by the system, CHAP credentials are always entered regardless of

whether they are required by the target system. This way, if the user did not initially have initiator

authentication required, the credentials are already in place if they were to enable it later.

Conclusion
MirrorView offers two products that can satisfy a wide range of recovery point requirements.

MirrorView/S offers a zero data loss solution typically implemented over high bandwidth, low latency

lines. MirrorView/A offers a periodic update solution with recovery points in minutes to hours.

MirrorView/A is typically implemented over long distances on low bandwidth, higher latency lines.

Both products offer a consistency group feature for managing write ordering over a set of volumes.

Consistency groups can lower RTO by maintaining restartable copies of a data set on the remote storage

system. Ease of management is achieved with the ability to manage all remote mirrors in the consistency

group as a single unit.

Several solution automation products, such as VMware SRM, MirrorView/CE, and Replication Manager,

are available to simplify disaster recovery solutions for specific applications and platforms. They rely on

MirrorView to perform the replication operations because of its flexibility, reliability, and performance.

References

White papers
The following papers are available on EMC.com and Powerlink, EMC’s customer- and partner-only

extranet:

EMC VNX Virtual Provisioning

Using EMC VNX Storage with VMware vSphere techbook

EMC CLARiiON Reserved LUN Pool Configuration Considerations

CLARiiON Integration with VMware ESX

Navisphere Task Bar Explained

EMC CLARiiON SnapView Clones

EMC SnapView SnapShots and Snap Sessions Knowledgebook

Product documentation
VNX Open Systems Configuration Guide

EMC CLARiiON Open Systems Configuration Guide for CX4 Series, CX3 Series, CX Series, and AX4-

5 Series Storage Systems

Unisphere Help

MirrorView/Synchronous Release Notes

MirrorView/Asynchronous Release Notes

Using MirrorView/Synchronous with VNX for File for Disaster Recovery

Using MirrorView/Synchronous with Celerra for Disaster Recovery technical module

EMC VNX for Block Command Line Interface (CLI) Reference

MirrorView/Synchronous Command Line Interface (CLI) Reference

MirrorView/Asynchronous Command Line Interface (CLI) Reference


E-Lab Navigator


Appendix A: Remote mirror conditions and image states

Mirror states

State Meaning

Active The remote mirror is running normally.

Attention The mirror’s secondary image is fractured, and the mirror is

configured to generate an alert in this case. The mirror

continues to accept server I/O in this state. The event code

ID for a mirror moving into the Attention state is

0x71050129.

Secondary image states

State Meaning

Synchronized

The secondary image is identical to the primary image. This

state persists only until the next write to the primary image,

at which time the image state becomes Consistent.

Consistent The secondary image is identical to either the current

primary image or to some previous instance of the primary

image. This means that the secondary image is available for

recovery when you promote it.

Synchronizing The software is applying changes to the secondary

image to mirror the primary image, but the current

contents of the secondary are not known and are not

usable for recovery.

Out-of-Sync The secondary image requires synchronization with the

primary image. The image is unusable for recovery.

Rolling Back (MirrorView/A only) A successful promotion occurred where there was an

unfinished update to the secondary image. This state persists

until the Rollback operation completes.
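The table above can be restated as a small sketch showing which secondary image states are usable for recovery. The enum simply encodes the table; it is not a product interface.

```python
# Illustrative restatement of the secondary image states above. The enum and
# helper are not a product API; they only encode what the table says about
# which states are usable for recovery.

from enum import Enum


class SecondaryImageState(Enum):
    SYNCHRONIZED = "Synchronized"
    CONSISTENT = "Consistent"
    SYNCHRONIZING = "Synchronizing"
    OUT_OF_SYNC = "Out-of-Sync"
    ROLLING_BACK = "Rolling Back"        # MirrorView/A only


def usable_for_recovery(state: SecondaryImageState) -> bool:
    """Only Synchronized and Consistent secondary images can be promoted for recovery."""
    return state in (SecondaryImageState.SYNCHRONIZED,
                     SecondaryImageState.CONSISTENT)
```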


Image condition
Along with an image state, an image will have an image condition that provides more information.

Condition Meaning

Normal The normal processing state.

Admin Fractured The administrator has fractured the mirror, or a media failure

(such as a failed sector or a bad block) has occurred. An

administrator must initiate a synchronization.

System Fractured The mirror is system fractured.

Synchronizing MirrorView/A or MirrorView/S is performing an initial

synchronization.

MirrorView/S is synchronizing after a fracture.

Updating (MirrorView/A) Mirror is performing periodic update.

Waiting on admin The mirror is no longer system fractured. It is a temporary

condition for automatic recovery mirrors. For manual

recovery mirrors, the administrator can now initiate the

synchronization command.

Queued to be Synchronized

(MirrorView/S)

Synchronize command has been received, but the maximum

number of synchronizations is already in progress. The mirror will

synchronize when one of the in-process synchronizations

completes. See the release notes for maximum

synchronizations per SP limits.


Appendix B: Consistency group states and conditions

Consistency group states
State Meaning

Synchronized All the secondary images are in the Synchronized state.

Consistent All the secondary images are either in the Synchronized or

Consistent state, and at least one is in the Consistent state.

Quasi Consistent (MirrorView/A) A new member that is not consistent with existing members

is added to the consistency group, which automatically starts

an update. After the update completes, the consistency group

is again consistent.

Synchronizing At least one mirror in the group is in the Synchronizing state,

and no member is in the Out-of-Sync state.

Out-of-Sync The group may be fractured, waiting for synchronization

(either automatic or via the administrator), or in the

synchronization queue. Administrative action may be

required to return the consistency group to having a

recoverable secondary group.

Scrambled There is a mixture of primary and secondary images in the

consistency group. During a promote, it is common for the

group to be in the scrambled state.

Empty The consistency group has no members.

Incomplete Some, but not all, of the secondary images are missing, or

mirrors are missing. This is usually due to a failure during

group promotion. The group may also be scrambled.

Local Only The consistency group contains only primary images. No

mirrors in the group have a secondary image.

Rolling Back (MirrorView/A) A successful promotion occurred where there was an

unfinished update to the group. This state persists until the

Rollback operation completes.

Consistency group conditions

MirrorView/S

Condition Meaning

Active The normal processing state.

Inactive The group is not accepting I/O; this is normal during group

promotion.

Admin Fractured The administrator has fractured the group, or a media failure

(such as a failed sector or a bad block) has occurred. An

administrator must initiate a group synchronization.

System Fractured Some or all of the members are system fractured.

Waiting on sync The group is waiting on a Synchronize command.

Consistency groups with the manual recovery option require

the user to initiate the synchronization. After a system fracture, this is only a temporary condition for automatic recovery groups before the system initiates a synchronization.

Invalid Temporary condition while a group fracture is in progress;

the group is partially fractured. If the state is incomplete, the


condition is irrelevant and is thus set to Invalid. If this condition

persists, try fracturing the group and synchronizing it.

MirrorView/A

Condition Meaning

Normal The normal processing state.

Initializing The group is performing an initial synchronization.

Updating The group is performing a periodic update.

Admin Fractured The administrator has fractured the group, or a media failure

(such as a failed sector or a bad block) has occurred. An

administrator must initiate a group synchronization.

System Fractured Some or all of the members are system fractured.

Waiting on admin The group is no longer system fractured. It is a temporary

condition for automatic recovery groups. For manual

recovery groups, the administrator can now initiate the group

synchronization.


Appendix C: Storage-system failure scenarios
The following tables detail storage-system failure scenarios during various states or conditions. Primary

image failures are covered first, followed by secondary image failures.

Storage-system failure scenarios

Primary image failure

Event State/Condition Action

Primary storage system totally

fails.

Any Option 1:

Administrator promotes a secondary LUN on a secondary storage system. After application data

recovery, applications on a standby server can start up

and access required data.

Note: Any writes that are in progress when the primary storage system fails may not propagate to the secondary

storage system. Also, if the remote image was fractured

at the time of the failure, any writes since the fracture

will not have propagated.

Option 2:

Repair and reboot the primary storage system, then synchronize the secondary LUN(s) using the write

intent log. If the write intent log is disabled, then a full

synchronization is necessary.

Primary storage system’s

secondary SP fails.

Any Repair the failed SP to restore high-availability

protection for the data. Access to the mirrored LUN

data is uninterrupted.

Primary storage system’s

controlling SP fails.

Server attempts I/O request to

the LUN.

The I/O request fails, and software on the server retries

the request to the secondary SP. This results in a trespass to the secondary SP, which then synchronizes

the remote LUN (if necessary) based on the write intent

log. The I/O request is coordinated with the

synchronization.

Primary storage system’s

controlling SP fails and reboots. No server I/O attempted while the

SP is unavailable.

Any All primary LUNs are checked for a current

relationship with their secondary LUNs.

Primary LUN is in-sync with

secondary LUN.

No action

Primary LUN is synchronizing

with secondary LUN.

If a full synchronization is interrupted, then it continues

from the last checkpoint.

If a fracture log-based synchronization is interrupted,

then a full synchronization is necessary unless the

mirror is using the write intent log, in which case the

partial synchronization can be resumed.

In either case, if auto sync is enabled, then

synchronization starts automatically. Otherwise, an

administrator must start the synchronization.

Primary LUN is out-of-sync

with secondary LUN.

A full synchronization is necessary.

If auto sync is enabled, then synchronization starts

automatically. Otherwise, an administrator must start the synchronization.


Primary image failure

Event State/Condition Action

Primary LUN is consistent with

secondary LUN.

For a primary LUN using the write intent log, regions

in the log marked potentially out-of-sync are read from

the primary LUN and written to the secondary LUN.

For a primary LUN not using the write intent log, the

secondary LUN is marked out-of-sync and a full

synchronization is necessary. If auto sync is enabled,

then synchronization starts automatically. Otherwise,

an administrator must start the synchronization.

Path failure - Primary LUN

trespass from controlling SP to peer SP

LUN is consistent or in-sync. I/O is quiesced during the trespass. The fracture log

(MV/S) or Reserved LUNs (MV/A) are transferred to the peer SP.

Primary LUN is synchronizing with secondary LUN.

Synchronization continues on the new controlling SP.

Secondary LUN is fractured. Fracture log or Reserved LUNs transfer from the previous controlling SP to the secondary SP.

Back end failure Any If I/O can be redirected to the other SP, host access and mirroring continue on original SP.

Media error: data write An error is returned to the server. Any defined secondary images are administratively fractured.

Media error: write intent log modification (MV/S)

Disable use of write intent log for the mirror. Log the error and alert an administrator to review. The mirror is

admin fractured.

Reserved LUNs full (MV/A) MV/A Update Current point-in-time image is stopped. Mirror

becomes admin fractured. User must add capacity to

RLP and resume update. Update will include all

changes made to the primary up to the restart of the update.
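The recovery actions for a controlling-SP failure above reduce to a simple decision on the image state and the write intent log, sketched below. The function is illustrative only.

```python
# Hedged sketch of the table rows above for a controlling-SP failure/reboot:
# whether a full or partial synchronization is needed depends on the image
# state and on whether the write intent log (WIL) was in use. Illustrative only.

def sync_needed_after_sp_failure(image_state: str, write_intent_log: bool) -> str:
    """Return the kind of synchronization the table calls for."""
    if image_state == "in-sync":
        return "none"
    if image_state == "out-of-sync":
        return "full"
    if image_state == "consistent":
        # The WIL marks potentially out-of-sync regions, so only those regions
        # are copied; without it, a full synchronization is required.
        return "partial (from write intent log)" if write_intent_log else "full"
    if image_state == "synchronizing":
        # An interrupted full sync resumes from its last checkpoint; an
        # interrupted fracture-log sync needs the WIL to resume partially.
        return "partial (resumed)" if write_intent_log else "full"
    raise ValueError(f"unknown image state: {image_state!r}")
```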

Secondary image failure

Event State/Condition Action

Secondary storage system totally

fails

Any Repair and reboot the secondary storage system.

The secondary LUN is fractured. To rejoin the mirror,

synchronization (either full or based on a fracture log)

of the secondary LUN is necessary for all states except

in-sync.

Secondary storage system’s SP

fails

Any No action

Repair the failed SP to restore high-availability protection for the data.

Secondary storage system’s controlling SP fails or reboots

Primary storage system detects the failure and fractures the secondary LUN.

If the failed SP is unavailable for an extended period,

an administrator should manually trespass the primary

LUN to its secondary SP so mirroring can continue.

When the failed SP returns to service, synchronization

is necessary (unless the LUN is in-sync). If the LUN is

consistent or synchronizing, the system uses the

fracture log to perform synchronization. Otherwise, a full synchronization is necessary.

LUN trespass from the controlling SP to the secondary SP

No action

This is the expected result when the primary storage

system trespasses for any reason.


Secondary image failure (continued)

Event State/Condition Action

Back end failure Any If I/O can be redirected to the other SP, host access and

mirroring continue on original SP.

Media error: data write Secondary image will be marked as administratively

fractured. When repairs are complete, administrator

starts synchronization.

Reserved LUNs Full (MV/A) MV/A Update Mirror becomes admin fractured. Point-in-time image

of secondary is maintained. User must add more

capacity to RLP and resume update. Update will continue from where it was interrupted.

Appendix D: Data protection roles
The following table provides detailed control capabilities for each data protection role.

The three Yes/No columns below correspond, left to right, to the following roles:

VNX series: Local Data Protection | Data Protection | Data Recovery

Legacy CLARiiON/NS series: Local Replication | Replication | Replication/Recovery

SnapView

Start a (consistent) snap session Yes Yes Yes

Stop a (consistent) snap session Yes Yes Yes

Activate a session to a snapshot LUN Yes Yes Yes

Deactivate a session from a snapshot LUN Yes Yes Yes

Synchronize a clone Yes Yes Yes

Fracture a clone Yes Yes Yes

Roll back a snap session No No Yes

Reverse synchronize a clone No No Yes

MirrorView

Synchronize a mirror / consistency group No Yes Yes

Fracture a mirror / consistency group No No Yes

Control the update parameters of an async mirror No Yes Yes

Modify the update frequency of an async mirror No Yes Yes

Throttle a mirror / consistency group No Yes Yes

Promote a sync or async secondary mirror / consistency group No No Yes

SAN Copy

Start a session No Yes Yes

Stop a session No Yes Yes

Pause a session No Yes Yes

Resume a session No Yes Yes

Mark a session No Yes Yes

Unmark a session No Yes Yes

Verify a session No Yes Yes

Throttle a session No Yes Yes
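A few rows of this table, encoded using the VNX series role names, are shown below purely as an illustration of how the roles differ; this is not a product API.

```python
# Illustrative encoding of a few rows from the table above, using the VNX
# series role names. Not a product API; it only restates the table.

ALLOWED_ROLES = {
    "start_snap_session":     {"Local Data Protection", "Data Protection", "Data Recovery"},
    "roll_back_snap_session": {"Data Recovery"},
    "synchronize_mirror":     {"Data Protection", "Data Recovery"},
    "fracture_mirror":        {"Data Recovery"},
    "promote_secondary":      {"Data Recovery"},
    "start_san_copy_session": {"Data Protection", "Data Recovery"},
}


def role_can(role: str, operation: str) -> bool:
    """Check whether a data protection role may perform an operation."""
    return role in ALLOWED_ROLES.get(operation, set())


assert role_can("Data Recovery", "promote_secondary")
assert not role_can("Data Protection", "fracture_mirror")
```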