
Storage Virtualizations


8/8/2019 Storage Virtualizations

http://slidepdf.com/reader/full/storage-virtualizations 1/66

Copyright © 2007 EMC Corporation. Do not Copy - All Rights Reserved.

EMC Storage Virtualization Foundations - 1

 © 2007 EMC Corporation. All rights reserved.

EMC Storage Virtualization Foundations

Welcome to EMC Storage Virtualization Foundations.

The AUDIO portion of this course is supplemental to the material and is not a replacement for the student notes accompanying this course.

EMC recommends downloading the Student Resource Guide from the Supporting Materials tab, and reading the notes in their entirety.

Copyright © 2007 EMC Corporation. All rights reserved.

These materials may not be copied without EMC's written consent.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC, VMWare, Rainfinity, Invista, and RecoverPoint are trademarks of EMC Corporation.

All other trademarks used herein are the property of their respective owners.


Course Objectives 

Upon completion of this course, you will be able to:

Define a Virtual Infrastructure

List VMware product differences

Explain the concept and benefits of Storage Virtualization

Identify benefits, features, and advantages of an Invista solution

Cite basic concepts of File Level Virtualization

Describe Rainfinity features, functions and benefits

Identify key features of a RecoverPoint solution

The objectives for this course are shown here. Please take a moment to read them.


VMware Virtualization 

Upon completion of this module, you will be able to:

Define a Virtual Infrastructure

List VMware product differences

The objectives for this module are shown here. Please take a moment to read them.


Virtualization Technologies 

[Slide graphic: the three virtualization technologies side by side – server virtualization with VMware Infrastructure 3 (formerly ESX Server), showing stacks of APP/OS virtual machines on physical servers; file virtualization with EMC Rainfinity Global File Virtualization, showing a global namespace over an IP network in front of a NAS storage pool of EMC and NetApp file servers; and block-storage virtualization with EMC Invista, which runs in the storage network and presents virtual volumes on top of physical storage.]

Complementing virtualization services are the virtualization technologies – server, file, and block. We all know these EMC virtualization technologies well – VMware, Rainfinity, and Invista. Take the time to learn about these technologies if you’re not familiar with them. There are occasions when one of these technologies will be an obvious recommendation to address a customer problem.


Server Virtualization: Flexible Infrastructure

[Slide graphic: six APP/OS virtual machines running across three physical servers.]

Virtual Servers: each application sees its own logical server, independent of physical servers.

Virtual Storage: each application sees its own logical storage, independent of physical storage.

Benefits of Virtual Servers: run multiple applications on a single server, increase server utilization, and move applications nondisruptively.

Next, let's move on to virtual servers and virtual storage, and consider the benefits to applications from seeing their own logical server and storage resources. These are the two areas in which EMC is focused on delivering virtualization technology.

This slide shows the principle of server virtualization in one graphic. There are three physical servers with six copies of the operating system and applications running on top of them. Each of these operating systems believes it has its own dedicated server.

The result is the ability to run multiple applications on the same server, even if an application believes it must have its own server in order to function correctly, and increased server utilization. And the EMC technology we review even moves applications across servers non-disruptively!


What is a Virtual Infrastructure? 

Virtual Infrastructure allows dynamic mapping of compute, storage, and network resources to business applications.

In traditional data centers, there is a tight relationship among particular computers, disk drives, and network ports and the applications they support. VMware’s Virtual Infrastructure allows us to break those bonds. We can dynamically move resources where they are needed, and move processing where it makes most sense. VMware detaches the operating system and its applications from the hardware they run on.

VMware Infrastructure 3 is a suite of software to optimize and manage the virtual infrastructure.


VMware Products 

Category            | Product                                                                                                    | Use case
Datacenter          | VMware Infrastructure 3 (ESX Server, Virtual SMP, VirtualCenter, VMotion, VMware HA, VMware DRS, and VCB)  | Application lifecycle management; Datacenter operations
Developer           | VMware Workstation                                                                                         | Application lifecycle management; Field Operations
Developer           | VMTN Subscription                                                                                          | Application lifecycle management
Enterprise Desktop  | VMware ACE                                                                                                 | Desktop security
FREE Virtualization | VMware Player                                                                                              | Run, share, evaluate pre-built applications and beta software in VMs
FREE Virtualization | VMware Server                                                                                              | Test/dev, evaluate software, server provisioning
FREE Virtualization | P2V Assistant                                                                                              | Server consolidation

This table categorizes various VMware products and identifies the typical use case for each product listed.


VMware Infrastructure 3 

A software suite for optimizing and managing IT environments through virtualization.

Consists of the following software:
– ESX Server
– Virtual SMP
– VirtualCenter
– VMotion
– VMware HA (High Availability)
– VMware DRS (Distributed Resource Scheduler)
– VMware Consolidated Backup (VCB)

VMware Infrastructure 3 is a suite of software that provides virtualization, management, resource optimization, application availability and operational capabilities.

VMware Infrastructure 3 consists of the following products:
– VMware ESX Server: platform on which virtual machines run
– VMware Virtual SMP: multi-processor support (up to 4) for virtual machines
– VMware VirtualCenter: centralized management tool for ESX Servers and virtual machines
– VMware VMotion: migration of virtual machines while they are powered on
– VMware HA: VirtualCenter’s high availability feature for virtual machines
– VMware DRS: VirtualCenter’s feature for dynamic balancing and allocation of resources for virtual machines
– VMware Consolidated Backup: centralized backup software for virtual machines


VMware ESX Server 

Virtual Machine Platform for the Datacenter

A virtual machine platform that installs on “bare metal”
VMkernel has complete control over hardware resources
Uses a high-performance filesystem, VMFS-3
Supports dynamic allocation of computing resources
Enables VMs to use up to 4 physical processors with Virtual SMP

VMware ESX Server is the virtual infrastructure platform for datacenter environments. ESX Server uses a bare-metal hypervisor architecture to provide the optimal performance and scalability for server applications running in virtual machines.

With ESX Server, the VMware virtualization layer runs directly on the x86 hardware with AMD or Intel processors, to give the virtual machines the most direct access to the host’s resources. Because the VMware virtualization layer is in complete control of the host’s hardware, it can provide fine-grained resource allocations to each virtual machine. Precise amounts of host processor, memory, network I/O and disk I/O resources can be granted to each virtual machine, and those allocations can be dynamically adjusted as workloads and service levels change.

The ESX Server virtualization layer is a very thin, special-purpose kernel entirely dedicated to execution of virtual machines. It lacks much of the “surface area” found in conventional operating systems, such as user logins, network stacks and remote access.

ESX Server gives you the flexibility to run it on large x86 servers in a scale-up environment or installed on many smaller servers in a scale-out strategy. ESX Server is designed to run on host systems ranging from dual-processor blades to 16-way NUMA servers. Each ESX Server host can run up to 128 virtual processors concurrently, sharing up to 64 GB of memory. With Virtual SMP, ESX Server lets you configure 2- or 4-way virtual machines for larger workloads that require more than one processor.
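The share-based allocation idea described above can be sketched as a toy proportional-share calculation. This is an illustration only, not VMware code; the VM names and share values are hypothetical.

```python
# Illustrative sketch: proportional-share CPU allocation, the scheme a
# hypervisor like ESX Server uses to divide host CPU among virtual machines.
# VM names and numbers are hypothetical.

def allocate_cpu(host_mhz, vms):
    """Split host CPU among VMs in proportion to their configured shares."""
    total_shares = sum(vm["shares"] for vm in vms)
    return {vm["name"]: host_mhz * vm["shares"] / total_shares for vm in vms}

grants = allocate_cpu(6000, [
    {"name": "web", "shares": 2000},   # higher-priority workload
    {"name": "test", "shares": 1000},  # lower priority
])
print(grants)  # "web" is granted twice the CPU of "test"
```

In the real product, shares are combined with reservations (guaranteed minimums) and limits (caps), and the scheduler re-evaluates grants continuously as workloads change; this sketch shows only the proportional core of that behavior.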


ESX Server Architecture 

Under ESX Server, applications running within virtual machines access CPU, memory, disk, and their network interfaces without direct access to the underlying hardware. The ESX Server’s virtualization layer intercepts these requests and presents them to the physical hardware.

The service console supports administrative functions for the ESX Server. The service console is based on a modified version of Red Hat Enterprise Linux 3 (Update 6). Users of ESX Server who use the command line find that Red Hat Linux experience, or experience with other versions of Unix-family operating systems, can be very helpful to them.

The VMkernel always assumes that it is running on top of valid, properly functioning x86 hardware. Hardware failures, such as the death of any physical CPU, can cause ESX Server to fail. If you are concerned about the reliability of your server hardware, the best approach is to cluster virtual machines between ESX Servers.

ESX 3 is supported on Intel (Xeon and above) and AMD Opteron (32-bit mode) processors. ESX 3 offers experimental support for a number of 64-bit guest operating systems. Refer to the Systems Compatibility Guide for a complete list.


VMware VirtualCenter 

Create and manage inventory of hosts and virtual machines
Provision virtual machines from templates
Migrate running VMs across hosts (VMotion)
Balance virtual machine workloads across hosts (VMware DRS)
Manage virtual machines for high availability and disaster recovery (VMware HA)

VMware VirtualCenter is VMware’s tool for managing your virtual infrastructure. VirtualCenter gives you a “single pane of glass” view of your entire virtual infrastructure, spanning all ESX Servers and virtual machines hosted on those servers.

Provisioning new server virtual machines with VirtualCenter is a quick operation, and VirtualCenter lets you create a library of standardized virtual machine templates so your newly provisioned systems always conform to your datacenter requirements.

VirtualCenter delivers a feature called VMotion that lets you migrate running virtual machines between servers so you can perform hardware maintenance and shift servers with minimal downtime.

Other VirtualCenter features include VMware DRS, which helps you balance virtual machine workloads across hosts, and VMware HA, which helps you manage virtual machines for high availability and disaster recovery.


VirtualCenter Components 

The VirtualCenter environment consists of the following software components:

– VirtualCenter Server: service used to centrally administer ESX Servers; it directs actions to be taken on the virtual machines and the ESX Servers
– VMware License Server: server-based licensing for VirtualCenter and ESX Server functionality
– VI Client: GUI-based interface for configuring and managing ESX Servers and virtual machines
– Web Access: web-based interface for managing virtual machines
– VirtualCenter database: main repository of VirtualCenter information, including configuration and performance data
– VirtualCenter Agents: processes that run on the ESX Servers, used to receive tasks initiated by VirtualCenter


ESX Server 3.0 and VC 2.0 Architecture 

The VI Client and Web Client are the user interfaces used to access the VirtualCenter Server or ESX Server directly. The Web Client provides a browser-based interface for managing VMs. Web Client requires that Web Access run on the VirtualCenter Server, or ESX Server, or both.

Two services exist on the ESX Server that are responsible for coordinating and launching tasks received from VirtualCenter or client interfaces. The VirtualCenter Server sends task requests to the VirtualCenter agent, vpxa, which then forwards them to hostd. hostd is a background process that launches the task to be performed.
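The management path just described can be illustrated with a minimal sketch. The function bodies and the task string are invented for illustration; only the chain itself (VirtualCenter sends to vpxa, vpxa forwards to hostd, hostd executes) reflects the text above.

```python
# Minimal sketch (assumed names) of the management path:
# VirtualCenter -> vpxa (agent on the host) -> hostd (executes the task).

def hostd(task):
    # hostd is the background process that actually launches the task.
    return f"hostd executed: {task}"

def vpxa(task):
    # The agent does not run tasks itself; it forwards them to hostd.
    return hostd(task)

def virtualcenter_send(task):
    # VirtualCenter directs actions by sending task requests to the agent.
    return vpxa(task)

print(virtualcenter_send("PowerOnVM web01"))
```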


Module Summary 

Key points covered in this module:

Virtual Infrastructure allows dynamic mapping of compute, storage, and network resources to business applications
VMware ESX Server is the virtual infrastructure platform for datacenter environments
VMware VirtualCenter is VMware’s tool for managing your virtual infrastructure

These are the key points covered in this module. Please take a moment to review them.


Rainfinity Overview 

Upon completion of this module, you will be able to:

Describe basic concepts of File Level Virtualization

Identify and describe Rainfinity features and functions

Explain the operation of Rainfinity Virtualization

List the benefits of a Rainfinity solution

The objectives for this module are shown here. Please take a moment to read them.


File-Level Virtualization Basics 

Before File-Level Virtualization:
– Every NAS device is an independent entity, physically and logically
– Underutilized storage resources
– Downtime caused by data migrations

After File-Level Virtualization:
– Break dependencies between end-user access and data location
– Storage utilization is optimized
– Non-disruptive migrations

[Slide graphic: NAS devices/platforms on an IP network, shown before and after file-level virtualization.]

EMC Rainfinity Global File Virtualization virtualizes NAS environments, making them simple to manage. It does this by dynamically moving information without disruption to clients or applications. Rainfinity Global File Virtualization achieves this through its unique network file-virtualization capabilities – an out-of-band file system virtualization that enables non-disruptive data movement in multi-vendor NAS environments.


Virtualization: One Big Virtual Box 

Consolidation of Data
– First approach: consolidate small file servers into a large file server
– Next approach: purchase a larger file server, and add to it
– Current approach: virtualize

Virtualization
– Increases data mobility by providing location independence of file systems from the users and applications that utilize them
– This provides a layer of transparency to users

Consolidation of Data
– Allows customers to have more boxes
– Looks like One Big Virtual Box

An old way to solve the file virtualization problem was to consolidate smaller file servers into a single large file server. Another approach was to use an even larger file server – one that was easier to manage and maintain. However, at some level of scale, it's impossible or doesn't make sense to keep finding a bigger box. Of course, it is possible to perform lots of acrobatics on the client side using auto-mounters, name servers and load balancers. But as system size mushroomed, those solutions ultimately proved unwieldy and impossible to maintain.

That's when the current batch of file virtualization tools became attractive. They allow customers to have more boxes, but make it look like it's just one big box. That simplifies life for storage users. What virtualization does is increase data mobility by providing location independence of file systems and files from the applications and people who use them.


Rainfinity 

Rainfinity is a dedicated hardware/software solution that manages file-oriented (NFS/CIFS) storage access
Provides transparent data mobility
Enables file storage virtualization using industry-standard protocols and mechanisms in a heterogeneous environment
GFV is the abstraction of file-based storage over an IP network
– Physical storage location is transparent to users and applications
– File-based storage systems are seen as a logical pool of resources
– Provides constant access to data while moving NFS/CIFS data

Rainfinity supports the management of file-oriented data and their servers. File-oriented data is data that is accessed by CIFS or NFS. Rainfinity is a dedicated hardware/software platform solution. Rainfinity allows clients to access the data it is managing, and does so transparently to the user. Virtualization is an abstraction of the logical and physical paths to data. The client is unaware of where the data physically resides. The management of the namespace can be accomplished by industry-standard mechanisms such as a Distributed File System (DFS) in a Windows environment, and NIS/Automount and LDAP in a UNIX environment. Rainfinity does not create its own namespace; it integrates with these existing industry namespaces.

Rainfinity GFV provides for constant access to data. This means file-sharing data can be moved from one file server to another while clients are reading and writing to that data.

GFV provides a layer of transparency to users and applications. With Rainfinity, data is mirrored and appears as a single source to users and applications. Namespace transparency maps the logical name to the physical location after it has been moved, so users and applications are redirected to the new location without reconfiguring the physical path names. GFV simplifies storage management by bringing location transparency to users and applications accessing storage.
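The namespace transparency idea can be sketched as a simple lookup table. This is a toy illustration under assumed names, not Rainfinity code: clients resolve a logical path through a namespace table (the role DFS or NIS/Automount plays), so relocating data only requires updating the table entry, never the clients.

```python
# Hypothetical sketch of namespace transparency: a logical -> physical
# mapping table stands in for DFS or an automount map. Paths and server
# names are invented for illustration.

namespace = {"/corp/engineering": "server1:/vol/eng"}  # logical -> physical

def resolve(logical_path):
    """Clients always ask for the logical name, never the physical one."""
    return namespace[logical_path]

before = resolve("/corp/engineering")
namespace["/corp/engineering"] = "server2:/vol/eng"    # data relocated
after = resolve("/corp/engineering")
print(before, "->", after)  # same logical name, new physical location
```

The point of the sketch is that the client-visible name never changes; only the table behind it does, which is why the move can be non-disruptive.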


EMC Rainfinity Global File Virtualization 

Networked Storage Virtualization for NAS, CAS and File Servers

Makes NAS and CAS environments simple to manage
Easier to move information and resources without disruption

[Slide graphic: the Rainfinity Global File Virtualization appliance on an IP network, providing a global namespace over a NAS storage pool of file servers – EMC Celerra, NetApp FAS, and EMC Centera.]

EMC Rainfinity Global File Virtualization delivers these benefits by virtualizing NAS environments, making them simple to manage, and dynamically moving information without disruption to clients or applications.

Rainfinity Global File Virtualization achieves this through its unique network file-virtualization capabilities, which are delivered through a 2U rack-mountable appliance.


Rainfinity Hardware 

GFV-4
– Customized Linux operating system (version 2.6 kernel)
– 64-bit hardware (processor); single or clustered unit
– 4 GB SDRAM DIMM for single configuration
– 12 GB SDRAM DIMM for enterprise configurations
– CPU: dual Intel Xeon processors, 3.6 GHz or higher
– L2 cache: 1 MB
– Keyboard/mouse: PS/2

GFV-5
– Customized Linux operating system (version 2.6 kernel)
– 64-bit hardware (processor); single or clustered unit
– CIFS license: 4 GB single-rank registered 667-MHz SDRAM DIMM
– CIFS, NFS or Enterprise license: 16 GB dual-rank registered 667-MHz SDRAM DIMM
– CPU: dual Intel Xeon processors, 3.0 GHz or higher
– L2 cache: 4 MB
– Keyboard/mouse: USB

GFV-4:
All versions of the Rainfinity software require a 64-bit processor. It ships as a stand-alone unit or can be clustered for high availability. Rainfinity uses a customized operating system based on the Linux 2.6 kernel. The Rainfinity appliance is based on HP ProLiant DL380 G4 hardware, in a 2U rack-mount form factor with sliding rails. The SmartArray 6i storage controller buffers all writes to disk so that in the event of a critical full-system failure, important state is saved even during abrupt disk or power failure.

GFV-5 only:
The Global File Virtualization appliance chassis is based on Intel processor-based hardware. The appliance includes dual Intel Xeon processors, 4 MB of L2 cache and a 1333-MHz front-side bus. The memory available is either 4 GB or 16 GB. The 4 GB (8 x 512 MB) single-rank registered 667-MHz SDRAM DIMM (Dual Inline Memory Module) configuration is available with the standard-configuration CIFS license only. The 16 GB (8 x 2 GB) dual-rank registered 667-MHz SDRAM DIMM configuration is available with CIFS, NFS and the Enterprise configuration license. Hot-swappable fans are included.


Front of Appliance 

GFV-4
The following describes the front of the appliance:
a) CD-ROM: CD-ROM drive for doing full CD system upgrades
b) & c) Disk: two hot-swappable SCSI 146.8 GB hard drives, configured for RAID-1 disk mirroring with the SmartArray 6i SCSI controller

GFV-5
Shown here is a front view of the appliance. The CD-ROM drive is used for full CD system upgrades. Two hot-swappable SAS (Serial Attached SCSI) 146.8 GB hard drives, configured for RAID-1 disk mirroring with a hardware RAID controller, are shown.


Rainfinity Software 

Ships with 3 separate CDs:
– Rainfinity code
– Windows Proxy Service (required to move CIFS data)
– SID Translator (moves local groups from source to destination server)

Minimal setup:
– Network config: IP address, netmask, default gateway IP, DNS, hostname
– Date/time: date, time, NTP time services, time zone

Rainfinity ships with three disks: the Rainfinity code, the Windows Proxy Service, and a SID Translator.

The Windows Proxy service is required to move CIFS data. Windows servers require that some operations be performed by native Windows clients. Rainfinity connects to a computer running the Windows Proxy and uses it to perform statistic collection and administration tasks.

Rainfinity is able to translate Security IDs (SIDs) in the security properties of the files and directories involved in a CIFS transaction. This capability may be used to assist data migration projects in which the data's group or user association changes from the source to the destination – for example, when the Access Control List (ACL) is defined in terms of local groups on the source file server. When the data is migrated to the destination server, the ACL should be defined in terms of corresponding local groups on the destination server. The rules governing such translation are defined in the SID translation tables.

The first step is initial Rainfinity setup. This includes, at a minimum: logging in and setting the date/time, port configuration, and basic network settings. When a Rainfinity appliance ships to a customer location, the software has been installed and tested. In some cases, it might be necessary to reconfigure or re-install the system to customer specifications. Upon first-time login, the rssetup script receives the login request and acts as an interactive, menu-based interface for the user.
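The SID translation described above amounts to a table lookup applied to each ACL entry during migration. The sketch below is hypothetical – the SIDs and the table are invented, and it is not an actual Rainfinity translation table – but it shows the shape of the rule: source-server SIDs found in the table are rewritten to their destination-server counterparts, and everything else passes through unchanged.

```python
# Illustrative sketch (assumed data) of SID translation during a CIFS
# migration: ACL entries defined against local groups on the source server
# are rewritten to the corresponding groups on the destination server.

sid_table = {  # source-server SID -> destination-server SID (hypothetical)
    "S-1-5-21-111-513": "S-1-5-21-222-513",
}

def translate_acl(acl):
    # SIDs without a translation rule (e.g. well-known SIDs) pass through.
    return [sid_table.get(sid, sid) for sid in acl]

acl = ["S-1-5-21-111-513", "S-1-5-18"]
print(translate_acl(acl))
```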


Protecting Critical Files: Global File Virtualization - 1

[Slide graphic: a Rainfinity appliance sits between the namespace services (DFS/AD for Windows; NIS, LDAP and Automount for UNIX) and the file servers. Step 1: data mobility – Rainfinity is triggered by the administrator through the Global Namespace Manager. Step 2: redirection – the global namespace is updated, with activity recorded in the event log.]

This is a representation of a NAS environment. The heterogeneous back-end file servers and NAS devices are representedbelow – Rainfinity operates at the protocol level, CIFS and NFS, so a broad range of support is available.

The namespace is represented in the top left. This shows that clients are accessing data through a logical view thenamespace provides the underlying physical location to the logical map. Rainfinity supports industry standard namespaces

such as DFS and Automount. Rainfinity can also work with custom login scripts.

The IP Client network is shown here connecting clients to the Network Attached Storage. Rainfinity installs by plugginginto the network switch. There are no changes required to client mount points. Rainfinity installs in the network but is not inthe data path. When you install Rainfinity you set up a separate VLAN in the network. Clients continue to access storagewith no disruption.

When there is some data relocation that needs to take place for cost or optimization reasons the ports associated with theinvolved file servers are associated with the Rainfinity VLAN. Rainfinity is now in the data path for these file servers andcan ensure client access to the data even though it is being dynamically relocated.

Rainfinity treats a data relocation as a transaction, with rollback and pause/restart capability whether a directory, volume, or file system is being relocated. Any updates during the transaction are synchronized across both the original source and the new destination. If Rainfinity were removed from the network in the middle of a transaction, there is no data integrity risk: all updates are reflected on the source, and the clients are still mounting the source. You can plug Rainfinity back in and the transaction resumes.

Once the data relocation is complete, Rainfinity updates the global namespace (DFS for Windows, Automount for UNIX, or login scripts or a homegrown namespace solution). The namespace in turn updates the clients. Rainfinity leverages industry standard approaches such as Microsoft DFS, so there is no need for additional agents to be deployed on all clients.

The new authoritative copy of the data is at the new location. The original source is a point-in-time copy as of the end of the transaction, reflecting any updates made up to that point. Updating client mappings takes time, however, so Rainfinity remains in the data path and redirects client access to the new location. Over time the number of sessions to redirect decreases as new sessions mount directly to the new location.
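The transactional relocation flow described in these notes can be sketched as a small state machine. This is a simplified illustration only, not Rainfinity's actual implementation; all names are hypothetical:

```python
# Hypothetical sketch of a Rainfinity-style relocation transaction:
# copy data, mirror client updates to source and destination, then
# update the namespace and redirect clients until sessions remount.

class RelocationTransaction:
    def __init__(self, source, destination):
        self.source = source          # dict standing in for the source server
        self.destination = destination
        self.state = "idle"

    def start(self):
        self.state = "copying"
        self.destination.update(self.source)   # initial bulk copy

    def client_write(self, path, data):
        # During the transaction, writes go to both sides, so removing
        # the appliance mid-move leaves the source fully up to date.
        self.source[path] = data
        if self.state in ("copying", "redirecting"):
            self.destination[path] = data

    def complete_copy(self):
        self.state = "redirecting"    # namespace updated; old sessions redirected

    def rollback(self):
        self.state = "idle"           # source remains authoritative

    def finish(self):
        self.state = "complete"       # all sessions remapped; go out-of-band

src = {"/data/a.txt": "v1"}
dst = {}
txn = RelocationTransaction(src, dst)
txn.start()
txn.client_write("/data/a.txt", "v2")  # update arriving during the move
txn.complete_copy()
txn.finish()
print(src["/data/a.txt"], dst["/data/a.txt"])  # both reflect v2
```

The key property mirrored here is the one the notes emphasize: at every step the source holds all updates, so rollback or appliance removal never loses data.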


Protecting Critical Files: Global File Virtualization - 2

[Slide diagram: the same environment as the previous slide, adding Step 3, Transaction Complete: without downtime.]

When all of the client sessions have been remapped to the new location, Rainfinity completes the transaction and the NAS devices move out of the Rainfinity VLAN. Rainfinity is now out of the data path.

Rainfinity has an autocomplete feature that provides a policy to control transaction completion. This can be based on the percentage of clients remapped. You can set up tiers of users and have different policies for each: key operational clients or applications may have to be 100% remapped, for instance, while other department users can have lower percentages. Rainfinity can also terminate client sessions to perform a remap/remount based on idle time thresholds.
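A completion policy of this kind can be illustrated roughly as follows. The tier names, thresholds, and idle cutoff are made-up examples, not product settings:

```python
# Hypothetical sketch of an autocomplete policy: the transaction may
# complete only when each tier's remap percentage meets its threshold,
# and idle sessions past a cutoff are candidates for forced remount.

from dataclasses import dataclass

@dataclass
class Session:
    tier: str
    remapped: bool
    idle_seconds: int

# Example per-tier thresholds: critical apps must be fully remapped.
THRESHOLDS = {"critical": 1.0, "department": 0.8}
IDLE_CUTOFF = 3600  # terminate sessions idle longer than this (seconds)

def may_complete(sessions):
    for tier, threshold in THRESHOLDS.items():
        tier_sessions = [s for s in sessions if s.tier == tier]
        if not tier_sessions:
            continue
        done = sum(s.remapped for s in tier_sessions) / len(tier_sessions)
        if done < threshold:
            return False
    return True

def sessions_to_terminate(sessions):
    return [s for s in sessions
            if not s.remapped and s.idle_seconds > IDLE_CUTOFF]

sessions = [
    Session("critical", True, 10),
    Session("department", True, 5),
    Session("department", False, 7200),  # idle; candidate for termination
]
print(may_complete(sessions))            # department tier at 50% < 80%
print(len(sessions_to_terminate(sessions)))
```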

Rainfinity virtualizes an environment 100% of the time, with the namespace providing a logical abstraction layer. The Rainfinity appliance selectively virtualizes traffic on the wire based on particular optimization or relocation events that need to take place.

Questions that might come up:

Can Rainfinity handle simultaneous transactions? Yes, you can define multiple transactions. Rainfinity only runs one active move transaction at a time, and the rest are queued, but it can perform the redirection simultaneously for multiple transactions.


Protecting Critical Files: Global File Virtualization - 3

[Slide diagram: the Rainfinity File Management appliance with the namespace, Admin console, and Event Log. Step 4, Policy-based File Archiving: Rainfinity archives files. Step 5, End-user Retrieval: the user accesses a stub file, which retrieves the archived file.]

The last animation regarding the migration process is displayed here; please take a moment to review.


Rainfinity Applications

Capacity Management
Performance Management
Tiered Storage Management
Global Namespace Management
Migration and Consolidation
Synchronous Replication
File Management

[Slide diagram: these applications run on the common Rainfinity Platform.]

Rainfinity application software provides the functionality of the system. The first three applications listed here drive storage optimization by visualizing usage trends and exceptions. The Global Namespace Management application manages existing namespaces in the environment. The other applications move data between storage devices. The next few slides highlight the major features of Rainfinity, although there is much more.


Rainfinity: In Band and Out of Band 

Rainfinity combines the advantages of both architectures:

– The Rainfinity product stays out-of-band until needed for data mobility
No scaling problems that in-band appliances have
No performance penalty of in-band appliances
Works with industry standard mechanisms like DFS and Automounter

– When a file must be moved, the file server goes in-band to prevent the user disruption typical of out-of-band devices
Monitors sessions and connections
Proxies as needed
Keeps source and destination in synch as long as needed
Tracks client access to source and destination

Rainfinity is designed to be installed between file servers and clients on the network. To achieve this, Rainfinity functions as an Ethernet switch.

This functionality as a Layer 2 switch enables Rainfinity to see and process traffic between clients and file servers with minimal modification to your existing networking. Rainfinity is aware of file-sharing protocols. It is this application-layer intelligence that allows Rainfinity to move data without interrupting client access.

When Rainfinity is performing a move, the two file servers involved in the move must be on the private side (or server segment). In this case, Rainfinity is said to be in-band for these file servers, and those file servers are also referred to as in-band.

When Rainfinity is not performing a move or redirecting access to certain file servers, those file servers may be moved to the public (or client-side) LAN segment. In this case, Rainfinity is said to be out-of-band for these file servers, and those file servers are also referred to as out-of-band.


Tiered Storage Management Overview 

Tiered storage management allows for the efficient placement of data based on capacity, performance, price, and service-level agreements

Primary Storage: Production data repository that makes up the primary top tier of storage in terms of performance and availability

Secondary Storage: Tier or tiers where, based on policy, designated files are stored. It has a lower cost and performance profile than primary storage.

Policy Engine: Intelligence that classifies and migrates files based on pre-established policy

[Slide diagram: NAS clients access primary storage through a Rainfinity GFV cluster; a policy engine (RFM) migrates designated files to secondary storage.]

Tiered storage management, or file archival, allows for the efficient placement of data based on capacity and service-level agreements. File placement can increase efficiency by intelligently placing data in optimal tiers of performance and price, thereby lowering the overall average cost of storage.

Enterprises can use lower cost storage, such as ATA drives or tape, to store less critical data at a fraction of the cost of high performance storage. Intelligent software that automatically classifies and migrates data based on policy is required to make file placement practical and feasible.

Policy engines (intelligent software running on a separate server) migrate data from one storage tier to another based upon configured policies. These policies can be based upon the size of the data, the length of time since the last access, or a particular file extension type. One of the primary features announced with Rainfinity GFV version 7.0 is Rainfinity File Management, or RFM. RFM provides the policy engine functionality and currently supports NetApp to Centera archival and retrieval.
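A policy of this kind can be sketched as a simple filter over file attributes. This is an illustrative example only; the attribute names and thresholds are hypothetical, not RFM's actual policy syntax:

```python
# Hypothetical sketch of a tiering policy engine: select files for
# migration to secondary storage by size, days since last access,
# or file extension, as described in the notes above.

import time
from dataclasses import dataclass

DAY = 86400

@dataclass
class FileInfo:
    path: str
    size_bytes: int
    last_access: float  # epoch seconds

def should_archive(f, now, min_size=100 * 2**20, max_idle_days=180,
                   extensions=(".log", ".bak")):
    if f.size_bytes >= min_size:
        return True                                   # too big for primary
    if (now - f.last_access) / DAY >= max_idle_days:
        return True                                   # stale
    return f.path.endswith(extensions)                # archival extension

now = time.time()
files = [
    FileInfo("/vol/db/huge.dat", 500 * 2**20, now),        # large: archive
    FileInfo("/vol/home/old.doc", 2**20, now - 365 * DAY), # stale: archive
    FileInfo("/vol/app/trace.log", 2**10, now),            # extension: archive
    FileInfo("/vol/home/new.doc", 2**20, now),             # keep on primary
]
to_archive = [f.path for f in files if should_archive(f, now)]
print(to_archive)
```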


Advantages of the Rainfinity Global File Virtualization Architecture 

Supports heterogeneous, multi-vendor environments

No client or server software required

Operates transparently to clients and applications

Leverages industry-standard namespace

Fixes issues in heterogeneous environments

– Identifies and rebalances poor utilization and performance

– Offers seamless migration during code upgrades

– Dynamically moves data between platforms


Rainfinity Global File Virtualization leverages industry-standard global namespaces with a scalable, transparent file-protocol switching capability. There are many advantages to this approach: it limits risk and performance concerns, and it leverages the continuing investments being made by large vendors and standards bodies, whether your namespace is Microsoft DFS or Automount. Rainfinity Global File Virtualization can also work with existing environments in which a standard namespace is not deployed, such as login scripts or a "homegrown" solution.

Rainfinity Global File Virtualization provides complete transparency: not just transparency of file access, but transparency to the environment. This solution does not require mount-point changes or the deployment of agents on clients or servers.

Virtualization should leverage the investments you have already made in your storage infrastructure and management tools. Check for standards support and vendor certifications, and make sure your existing management tools and data-protection policies are not adversely impacted.


Why Rainfinity Global File Virtualization? 

Built for the enterprise:
– Most scalable
– Safest
– Easiest to deploy

Industry standards-based

Enterprise service and support
– EMC's world class technical service organization and 24x7 global hardware and software support


If you're looking to streamline the operations of your file-server and NAS environments, Rainfinity Global File Virtualization delivers:

Optimized utilization of storage resources

Accelerated storage consolidation

Simplified management

Increased protection of critical files

It does this by simplifying capacity management through non-disruptive data movement and namespace-management updates, maintaining a virtual file system of your physical file-serving and NAS resources.


Module Summary 

Key points covered in this module:

Virtualization is the newest approach to consolidating many servers into one

Rainfinity is a dedicated hardware/software solution that manages file-oriented (NFS/CIFS) storage access

Both GFV-4 and GFV-5 include mirrored 146.8 GB drives; they differ slightly in hardware and processing power

Rainfinity combines applications that monitor and move data between storage devices

GFV streamlines operations of your file-server and NAS environments through non-disruptive data movement, leveraging the customer's existing environment

These are the key points covered in this module. Please take a moment to review them.


Invista Overview 

Upon completion of this module, you will be able to:

Understand the concept and benefits of storage virtualization

Identify the benefits, features, and advantages of an Invista solution

List the hardware and software components of Invista and how they work together to achieve storage virtualization

The objectives for this module are shown here. Please take a moment to read them.


EMC Invista 

Network-based Storage Virtualization

Performance architecture
– Leverages next-generation "intelligent" SAN switches for high performance
– Designed to work in enterprise-class environments

Provides advanced functionality
– Dynamic volume mobility
– Network-based volume management
– Heterogeneous point-in-time copies (clones)

Enterprise management
– EMC ControlCenter integration
– EMC Replication Manager integration

Supports heterogeneous environments
– Works with EMC and third-party storage

[Slide diagram: Invista runs in the storage network; a Control Path Cluster and Data Path Controllers present virtual volumes to hosts and virtual initiators to the physical storage.]

Invista is a SAN-based storage-virtualization solution. Its architecture leverages new intelligent SAN-switch hardware from EMC's Connectrix partners that enables new levels of scalability and functionality.

Unlike other storage-virtualization products, Invista is not appliance-based. This enables it to deliver consistent, scalable performance across a heterogeneous storage environment, even with highly random-I/O applications. Because Invista uses the processing capabilities of intelligent switches, it eliminates the latency and bandwidth issues associated with an "in-band" appliance approach. By using purpose-built switches with port-level processing, this "split-path" architecture delivers wire-speed performance with negligible latency.

EMC's unique network-based approach to storage virtualization enables certain key functionalities, such as the ability to move active applications to different tiers of storage non-disruptively, and the ability to leverage clones across a heterogeneous storage environment. These functions work uniformly across qualified hosts and heterogeneous storage arrays.

In addition to integrating discovery and monitoring functions for virtual volumes into EMC ControlCenter, Invista can also be easily managed from a graphical user interface (GUI) or a command-line interface (CLI).


Invista’s Advanced Software Functionality 

Dynamic Volume Mobility
– Move and change primary volumes while the application remains online

Network-based Volume Management
– Pool storage and manage volumes at the network level

Heterogeneous Point-in-Time Copies
– Create local copies of data for testing and repurposing across multiple types of storage

The next-generation hardware, combined with powerful Invista software, enables some unique capabilities such as Dynamic Volume Mobility, Network-based Volume Management, and the ability to create heterogeneous point-in-time copies.

Dynamic Volume Mobility allows administrators to move primary volumes between heterogeneous storage arrays while the application remains online. This enabler of information lifecycle management allows you to move applications non-disruptively to the appropriate storage tier, based on application requirements and service levels.

Network-based volume management is the basis for what many people think of as "virtualization." Invista enables you to create and configure virtual volumes from a heterogeneous storage pool and present them out to hosts. It makes sense for the network to be the control point for this: abstracting and aggregating the back-end storage, configuring it, and making it available to all of the connected hosts.

Invista also gives you the ability to create clones of virtual volumes. This allows you to extend the use of clones to areas where their use may previously have been impossible due to compatibility issues. For example, you can now create a clone from a high-tier, primary storage array and extend it to a lower-tier, lower-cost storage array. This provides another local-replication option in your tiered storage environment.


Key Invista Benefits 

Support for EMC and third-party arrays

– Leverages existing investments in storage capacity and resources

Delivers Information Lifecycle Management

– Enables data movement across multiple storage tiers

Reduces complexity

– Single interface for managing all tiers of storage

Increases operational efficiency by simplifying:

– Movement of data to optimize performance

– Provisioning of storage among multiple vendor arrays

Invista provides support for EMC and third-party arrays, which allows an enterprise to leverage its existing investments in storage capacity and resources.

Invista also supports Information Lifecycle Management by enabling data movement across multiple storage tiers, and reduces management complexity by establishing a single interface for managing all tiers of storage.

Finally, it increases operational efficiency by simplifying the movement of data to optimize performance and the provisioning of storage among multiple vendor arrays.


Invista Hardware Components 

[Slide diagram: production hosts on an IP network connect through a Layer 2 SAN to the Data Path Controller (DPC, an intelligent switch), which connects through another Layer 2 SAN to heterogeneous storage arrays. A Control Path Cluster (CPC) is linked to the DPC by an Allied Telesys Ethernet switch; together these components form an Invista instance.]

In this illustration, the components of an Invista instance are detailed. Note that the illustration is a simplified representation of the Invista components; it does not show the redundant components needed for fault tolerance that are mandatory in Invista production configurations.

Invista includes these major components: the Control Path Cluster (CPC), Data Path Controller (DPC), and an Ethernet switch.

The Control Path Cluster does not contain user data. Instead, it stores the Invista configuration parameters and performs management functions for an Invista instance. These functions include configuring and managing virtual storage.

The Data Path Controller accepts data and control requests from hosts to perform on the virtual storage. It is the component that maps data read and write operations between the hosts (front end) and storage arrays (back end). The DPC gets its configuration from the CPC.

The Ethernet switch connects the CPC and DPC; configuration and control information is passed between the CPC and DPC via the Ethernet connections.


Theory of Operation: CPC 

The CPC stores information about the physical and virtual storage, including:
– Storage Elements dedicated to Invista
– Imported Storage Elements and associated storage elements
– Virtual Volumes and associated imported storage elements
– Virtual Frames and which hosts and virtual volumes belong to them
– Clone Groups and which storage volumes belong to them

The CPC downloads information to the DPC

The CPC stores the following configuration metadata about the Invista instance:

The storage element (back-end array volume) information. These volumes have been assigned to the Invista instance. The back-end volumes must be allocated exclusively for Invista usage by the administrator of the storage arrays.

The imported storage element information. Imported storage elements are simply storage elements that have been "imported" into the Invista instance. This identifies storage array capacity that Invista intends to use for creating virtual volumes.

The virtual volume information. This includes the virtual volume name, storage volume identification (ID), and the imported storage element used to create the virtual volume.

The virtual frame information. A virtual frame identifies one or more virtual volumes and the host allowed to access them.

The clone group information, including a data (source) volume and clone (copy) volumes.

The CPC downloads the appropriate information to the DPC.
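The configuration metadata listed above can be pictured as a small data model. This is a hypothetical sketch; the class names and fields are illustrative, not Invista's actual schema:

```python
# Hypothetical sketch of the CPC's configuration metadata: storage
# elements are imported, virtual volumes are carved from imported
# elements, and virtual frames grant hosts access to virtual volumes.

from dataclasses import dataclass, field

@dataclass
class StorageElement:          # back-end array volume assigned to Invista
    element_id: str
    capacity_gb: int
    imported: bool = False

@dataclass
class VirtualVolume:
    name: str
    storage_volume_id: str
    backing_element: StorageElement

@dataclass
class VirtualFrame:            # maps a host to the volumes it may access
    host: str
    volumes: list = field(default_factory=list)

@dataclass
class CloneGroup:
    source: VirtualVolume
    clones: list = field(default_factory=list)

se = StorageElement("SE-001", 100)
se.imported = True                       # importing makes it usable
vv = VirtualVolume("vv_app1", "SV-42", se)
frame = VirtualFrame("host-a", [vv])

def download_to_dpc(frames):
    # The CPC pushes mapping info to the DPC: which host sees which
    # virtual volume, and which back-end element backs that volume.
    return {(f.host, v.name): v.backing_element.element_id
            for f in frames for v in f.volumes}

print(download_to_dpc([frame]))          # {('host-a', 'vv_app1'): 'SE-001'}
```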


Theory of Operation: DPC 

Receives part of its configuration from the CPC

Examines all incoming/outgoing FC frames
– Read/write frames are mapped to the appropriate virtual target or initiator and FC port
– Control frames are passed between the requesting host and the CPC

Serves as a:
– Virtual target for hosts
– Virtual initiator for storage arrays

The Data Path Controller (DPC) resides in the intelligent switch component of Invista. It receives part of its configuration from the CPC.

The DPC is the center of all traffic in Invista. Each FC frame generated by hosts and storage arrays is examined and forwarded to the appropriate device.

To the host, the DPC is a virtual target. To a storage array, the DPC is a virtual initiator.


Separation of Data and Control Path Operations 

Data frames: Read, Write

Control frames: Inquiry, Read Capacity, Report LUNs, Test Unit Ready, Request Sense, Reservation, Group Reservation, Persistent Reservation, Format, Verify, Rezero Unit

[Slide diagram: incoming I/O streams are parsed by the Data Path Controller (DPC), which redirects data frames to the array as a processed I/O stream and passes control operations to the Control Path Cluster (CPC), where Invista Virtualization Services run above the SAL (Switch Abstraction Layer) and SAL-Agent.]

When a command arrives at the DPC, there are two places where processing can occur.

The data path processors are the port-level ASICs that handle the incoming I/O from the host and do the remapping to the back-end storage. In typical operations, 95%+ of I/O is handled by the DPCs. Whatever cannot be handled by the DPCs is termed an exception. An exception might be a SCSI inquiry about the device, or an I/O for which the DPC does not have mapping information. These exceptions are handled by the Control Path Cluster.

The CPC is also where the storage application actually runs. When the system starts up, the CPC loads the mapping tables for the virtual volumes into the DPCs.
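The split-path dispatch described above can be sketched as follows. This is a simplified illustration; the frame fields and mapping-table layout are hypothetical:

```python
# Hypothetical sketch of split-path frame handling: read/write frames
# with a known mapping are remapped on the fast path by the DPC;
# control frames and unmapped I/O are exceptions forwarded to the CPC.

# Mapping table downloaded from the CPC at startup:
# virtual volume -> (back-end array, base physical LBA).
MAPPING = {"vv_app1": ("array_7", 0x1000)}

def handle_frame(frame):
    op = frame["op"]
    if op in ("read", "write"):
        base = MAPPING.get(frame["volume"])
        if base is None:
            return ("exception_to_cpc", frame)      # no mapping loaded yet
        array, offset = base
        return ("fast_path", array, offset + frame["lba"])
    # Inquiry, Report LUNs, reservations, etc. go to the control path.
    return ("exception_to_cpc", frame)

print(handle_frame({"op": "read", "volume": "vv_app1", "lba": 8}))
print(handle_frame({"op": "inquiry", "volume": "vv_app1"}))
```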


Invista Logical Topology 

[Slide diagram: hosts connect to virtual targets on the DPC; virtual volumes are mapped through virtual initiators to the back-end arrays.]

The logical view of Invista is displayed here.

Virtual targets are abstract entities created by designating specific ports on the switch to be used as front-end ports. On each port designated as front-end, a virtual target is created that becomes visible in the NameServer on the switch. Each virtual target has a unique World Wide Port Name (WWPN). Invista uses virtual targets to map virtual volumes to logical devices on the back-end arrays. Each virtual volume presented to a host is mapped to a logical device on a back-end array.

Virtual initiators are also abstract entities, created when the intelligent switch is imported.


High-Availability Configuration

Mirrored SAN
– 2 separate SANs, dual-HBA hosts
– Supports nondisruptive code upgrade to virtualization components
– Provides HA for switch configurations through fault isolation
– Invista LUNs can be exposed on both fabrics

CPC uses 2 Control Path Nodes
– Active/Active cluster
– LUN ownership model follows CP nodes

Multiple DPCs
– Failover LUNs across DPCs
– Support for switch upgrades (hardware and firmware)

[Slide diagram: hosts connect through a Layer 2 A/B-fabric SAN to two DPCs in the Invista core, which share one CPC and connect through a second Layer 2 A/B-fabric SAN to the storage arrays.]

In this illustration, each host has two HBAs. Each HBA is cabled into a unique front-end Layer 2 fabric. Each Layer 2 fabric is cabled into a separate DPC. The DPCs share the same CPC for redundancy and the ability to share volume mapping in case a component of one of the paths is out of order. Each DPC is cabled into a Layer 2 back-end SAN, which is cabled into one port in the back-end arrays.

In this example, only one CPC is needed because high availability is implemented within the CPC.


Invista Sharing an Existing SAN 

[Slide diagram: an Invista instance (DPC A and DPC B, with a Layer 2 SAN to heterogeneous storage arrays) coexists with traditional fabrics A and B. Hosts C, D, and E connect directly to Invista; hosts A and F connect only to the L2 SAN; host B connects to both.]

The diagram shows how an Invista configuration may look when it is coexisting with a traditional SAN. DPC A and the fabric A switches are cabled together and are managed as one fabric. DPC B and fabric B are configured in the same manner.

In this scenario, hosts C, D, and E are directly connected to the Invista environment. Hosts A and F are directly attached to the L2 SAN environment.

Host B has one connection to the Invista instance and another to the L2 SAN. Hosts may be connected in this manner for a number of reasons: they may not be taking part in the virtualized environment, or they may be being prepared for volume migration to the virtualized environment.

Hosts A, C, D, E, and F can be separated from Invista by zoning the HBA to the array port. By not zoning the HBA to a virtual target, the host bypasses Invista. However, physical connectivity to the DPC is preserved in case the host is migrated in the future.


Invista Element Manager 

In this example, the Storage tree folder has been expanded. Next, the underlying Storage Elements folder is expanded, and then the Imported folder. The tree view displays a list of all imported storage elements, and the properties screen displays the properties of all imported storage elements.


Invista’s Advanced Software Functionality 

Network-based Volume Management
– Pool storage and manage volumes at the network level

Dynamic Volume Mobility
– Move and change primary volumes while application remains online

Heterogeneous Point-in-Time Copies
– Create local copies of data for testing and repurposing across multiple types of storage

Remote Replication
– RecoverPoint and Invista integration
– Remote replication of Invista volumes

The initial version of Invista includes the advanced software functionality best suited for storage virtualization: network-based volume management, dynamic volume mobility, and heterogeneous point-in-time copies (clones). Each of these applications is controlled using Element Manager or the INVCLI command line.

Remote replication is achieved by integrating Invista with EMC RecoverPoint.


Volume Management 

Simplify volume presentation and management
– Create, delete, and change functionality
– Provides front-end LUN masking and mapping of storage volumes to the host

Centralized volume management and control
– Element Manager provides a single interface for volume management

Reduce volume management complexity
– Single interface to allocate and reallocate storage resources

[Slide diagram: storage elements from back-end arrays are combined into virtual volumes, including a concatenated volume.]

Invista provides a robust volume management capability.

This slide shows how the storage elements from the backend arrays (shown in yellow, red, green, and

blue) are created.Several storage elements can be concatenated together to form a single virtual volume that can then be

configured to the host. A concatenated volume is shown in the example.

A virtual volume can consist of the entire storage element, as in the case of the red virtual volume, or a

virtual volume can be a smaller chunk of the storage element, as shown by the green virtual volume.

Use Element Manager to create, delete, and modify virtual volumes. Element Manager is also used to

configure or un-configure virtual volumes to a host.

By using Element Manager, administrators have one tool that provides all the capabilities needed to manage the Invista instance.
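The extent mapping just described, whole storage elements concatenated into one virtual volume, or a virtual volume carved from a smaller chunk of a single element, can be sketched in a few lines of Python. This is an illustrative model only; the class and method names are assumptions and do not reflect Invista's actual implementation.

```python
class StorageElement:
    """A contiguous block range exported by a back-end array (illustrative)."""
    def __init__(self, name, size_blocks):
        self.name = name
        self.size_blocks = size_blocks

class VirtualVolume:
    """A virtual volume built by concatenating slices of storage elements."""
    def __init__(self):
        # Each extent is (element, start_block, length) taken from that element.
        self.extents = []

    def add_extent(self, element, start, length):
        assert start + length <= element.size_blocks
        self.extents.append((element, start, length))

    @property
    def size_blocks(self):
        return sum(length for _, _, length in self.extents)

    def resolve(self, lba):
        """Map a virtual LBA to (element name, physical block)."""
        offset = lba
        for element, start, length in self.extents:
            if offset < length:
                return element.name, start + offset
            offset -= length
        raise ValueError("LBA beyond end of virtual volume")

# A concatenated volume spanning two elements: the whole red element plus
# a smaller chunk of the green element, as in the slide's example.
red = StorageElement("red", 1000)
green = StorageElement("green", 1000)
concat = VirtualVolume()
concat.add_extent(red, 0, 1000)      # entire red storage element
concat.add_extent(green, 0, 500)     # first half of green
print(concat.size_blocks)            # 1500
print(concat.resolve(1200))          # ('green', 200)
```

Running the sketch shows how a virtual LBA on the concatenated volume resolves to a physical block on one of the backing elements, which is exactly the translation a network-based volume manager performs on every I/O.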

8/8/2019 Storage Virtualizations

http://slidepdf.com/reader/full/storage-virtualizations 46/66

Copyright © 2007 EMC Corporation. Do not Copy - All Rights Reserved.

EMC Storage Virtualization Foundations - 46

 © 2007 EMC Corporation. All rights reserved. EMC Storage Virtualization Foundations - 46

Heterogeneous Point-in-Time Copies: Cloning 

Key Features

– Can clone a virtual volume to another virtual volume of the same size across heterogeneous array types

– High performance data copy

Use Cases
– Heterogeneous backup and recovery

– Testing, development, training

– Parallel processing, reporting, queries

Integrated management
– Replication Manager

– EMC Control Center

– Microsoft VSS

This slide illustrates how a virtual volume (shown in blue) can be cloned to other virtual volumes of

the same size. In this example, there are three clones shown in yellow, red, and green.

Invista permits users to create one or more full copies of a virtual volume. This functionality is performed by Invista, not hosts or arrays, and does not require host CPU cycles. Administrators can use the Element Manager console or the Invista CLI to control cloning operations.

Active clones are managed as a “Clone Group”, which consists of a source volume and one or more

clone volumes. Clones can be built on volumes that span heterogeneous arrays. Invista cloning requires that the source and clone volumes be the same size.

Clones created with Invista can be used for backups, restoring data, testing, report creation, etc.
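The clone-group behavior described above (a source volume plus one or more same-size clones, copied in full) can be modeled in a short sketch. The class and method names here are assumptions made for illustration, not an Invista API.

```python
class CloneGroup:
    """Illustrative model of a clone group: one source volume and one or
    more clone volumes (names and structure are assumptions)."""
    def __init__(self, source_name, size_blocks):
        self.source_name = source_name
        self.source = [0] * size_blocks      # block contents of the source
        self.clones = {}

    def add_clone(self, clone_name, size_blocks):
        # Invista requires source and clone volumes to be the same size.
        if size_blocks != len(self.source):
            raise ValueError("clone must be the same size as the source")
        self.clones[clone_name] = [0] * size_blocks

    def synchronize(self):
        # Full-copy clone: each clone becomes a block-for-block copy of the
        # source. In the real product this copy is performed by Invista
        # itself, not by host CPU cycles.
        for name in self.clones:
            self.clones[name] = list(self.source)

group = CloneGroup("blue", 4)
group.source = [10, 20, 30, 40]
group.add_clone("yellow", 4)
group.synchronize()
print(group.clones["yellow"])   # [10, 20, 30, 40]
```

The same-size check mirrors the requirement stated above: attempting to add a clone of a different size fails before any copying begins.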


Dynamic Volume Mobility 

Key Features

– Non-disruptive, high-speed movement of data across homogeneous arrays

Use Cases

– Reduce planned application downtime

Roll out application to production

Move legacy applications to lower tiers of storage

– Reduce migration costs

Perform lease roll-overs or technology refreshes faster

– Increase ability to meet service levels

Match storage AND host capacity to application performance requirements

Integral component of Information Lifecycle Management

With Invista, data mobility refers to the high speed non-disruptive movement of data from one virtual

volume to another virtual volume. The source and destination arrays must be available to Invista. The

move is transparent to the host. There is no requirement to reboot or take other action due to the

migration. The host “sees” the same virtual volume before and after the data has been moved, regardless of the storage array containing the data. In this example, the data on the green volume is being moved to the blue volume.

Data mobility is a valuable tool for any situation in which the customer needs to move data without

impacting the application. For example, it is useful when the lease expires on a storage array and the

data must be retained. In this case, the data can be moved to the new array while the application is

running.
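Non-disruptive data mobility of this kind is commonly implemented as an iterative copy that re-copies any blocks the application writes while the migration is running. The following is a generic sketch under that assumption; it is not a description of Invista internals, and every name in it is invented for illustration.

```python
class WriteLog:
    """Tracks blocks the application writes while a migration pass runs
    (an illustrative stand-in for the virtualization layer's change tracking)."""
    def __init__(self):
        self.pending = set()

    def record(self, block):
        self.pending.add(block)

    def drain(self):
        drained, self.pending = self.pending, set()
        return drained

def migrate(source, dest, write_log):
    """Generic iterative-copy sketch of non-disruptive migration: copy every
    block, then re-copy blocks dirtied by application writes during the
    pass, until no dirty blocks remain."""
    dirty = set(range(len(source)))
    while dirty:
        for block in sorted(dirty):
            dest[block] = source[block]
        # Application writes that landed mid-pass must be copied again.
        dirty = write_log.drain()
    # dest now matches source; the virtual volume's extent map can be
    # switched to the new array transparently, with no host reboot.

src, dst, log = [7, 8, 9, 10], [0, 0, 0, 0], WriteLog()
migrate(src, dst, log)
print(dst)   # [7, 8, 9, 10]
```

Because the host addresses only the virtual volume, the final map switch is the only step the host could observe, and it changes nothing the host can see.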


Module Summary 

Key points covered in this module:

Storage virtualization concepts

Invista design and benefits

Major hardware components of Invista

Invista:

– Management interfaces

– Services (functionality)

– Theory of operations
– Configuration strategies

These are the key points covered in this module. Please take a moment to review them.


RecoverPoint 

Upon completion of this module, you will be able to:

Identify key features of a RecoverPoint solution

Discuss the logical and physical components of the RecoverPoint solution

Describe the architecture of RecoverPoint for CRR and CDP implementations

Discuss the methods of write splitting used by

RecoverPoint

The objectives for this module are shown here. Please take a moment to read them.


EMC RecoverPoint 

A single, unified solution for all storage arrays

Recover data at a local or remote site to any point in time

Simplified management and reduced cost across your data center

Continuous replication to a remote site without impacting performance, while retaining write-order fidelity

Application servers

Database servers

Messaging servers

File and print servers

Disk Subsystems

RecoverPoint Appliance

Tape Library

Local CDP Journals

SAN 

The EMC RecoverPoint data protection solution is a comprehensive solution for your entire data

center, providing local continuous data protection (CDP) and continuous remote replication (CRR).

RecoverPoint is a single, unified solution to protect and/or replicate data from all your current and future storage arrays. It allows you to recover data at a local or remote site to any point in time. It simplifies management, reduces cost across the data center, and ensures continuous replication to a remote site without impacting performance, while retaining write-order fidelity.


The RecoverPoint Solution 

RecoverPoint is a network-based platform that sits outside the primary data path, providing enterprise-class performance, scalability, and availability

Storage Array Storage Array

Oracle dB Exchange SQL

Source Site

Journal Volume

RecoverPoint Appliance

Key: FC, IP

Local Recovery (CDP)

Host/Fabric Write Splitter

Storage Array

Remote Site

Journal Volume

Oracle dB Exchange SQL

Storage Array

Remote Recovery (CRR)

I/O Bookmarking

Target Side Processing

RecoverPoint Appliance

The RecoverPoint solution provides heterogeneous, bi-directional, and asynchronous replication across

an IP WAN infrastructure. This allows it to be placed into existing or new environments constrained

by long distances, high latency, or low bandwidth requirements.

Additionally, it significantly reduces infrastructure operational costs because dedicated bandwidth,

expensive Fibre Channel extension gear, and multiple complex solutions for different heterogeneous

arrays are no longer absolutely required. Moreover, RecoverPoint provides recovery capabilities at the

local or remote site and between different array models and/or types.


Data-Replication Challenges 

Heterogeneous Environments

Application-consistent recovery

Corruption protection

Application response time

Heterogeneous storage

Existing infrastructure

Disaster-recovery testing

Communications cost

Local site

SAN SAN 

Oracle Exchange SQL

Remote site

SAN 

Oracle Exchange SQL

RecoverPoint tackles the challenges noted in this slide with the following features:

Integration with existing (heterogeneous) storage arrays, switches, and server environments - no

“rip and replace”.

Intelligent use of bandwidth and data compression.

A policy-driven engine that supports multiple applications with different data-protection

requirements.

True bi-directional local and remote support, enabling flexible protection and recovery schemes

that can be tailored to business processes.


RecoverPoint Bandwidth Reduction Technology 

Delta Differentials
– Maintains write-order fidelity

– Tracks and transmits only the changed bytes

Hot Spot Identification
– Checks for multiple writes to the same address

– Transfers the most recent write only for each block

Algorithmic Compression
– EMC RecoverPoint’s advanced compression techniques

– Application specific I/O tracking

RecoverPoint offers three types of bandwidth reduction features that significantly reduce the WAN bandwidth used across the network. The first is Delta Differentials, which sends only the changed data across the WAN. The next is Hot Spot Identification, which sends only the last write for each block across the WAN. The third is Compression, which takes the reduced data and compresses it before sending it across the WAN. The RPA at the remote site then performs the decompression of the data.
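The hot-spot and compression steps can be illustrated concretely. The sketch below keeps only the most recent write per block and then compresses the surviving data, using zlib as a stand-in for RecoverPoint's proprietary compression; the function names and data layout are invented for illustration.

```python
import zlib

def reduce_and_compress(writes):
    """Illustrative bandwidth reduction: hot-spot identification keeps only
    the most recent write to each block, then the surviving data is
    compressed before crossing the WAN."""
    latest = {}
    for seq, (block, data) in enumerate(writes):
        latest[block] = (seq, data)          # later writes replace earlier ones
    # Preserve write order among the surviving writes.
    survivors = sorted(latest.items(), key=lambda kv: kv[1][0])
    payload = b"".join(data for _, (_, data) in survivors)
    return [block for block, _ in survivors], zlib.compress(payload)

writes = [(7, b"AAAA"), (3, b"BBBB"), (7, b"CCCC")]  # block 7 written twice
blocks, wire = reduce_and_compress(writes)
print(blocks)                  # [3, 7] -- only the last write to block 7 survives
print(zlib.decompress(wire))   # b'BBBBCCCC'
```

Even in this toy case, one of three writes never crosses the WAN, and what remains is compressed on top of that, which is the layered effect the slide describes.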


RecoverPoint Family of Products 

RecoverPoint

– Integrated replication and data protection solution
– Host-based or fabric-based splitter

– Heterogeneous server and storage support

RecoverPoint/SE

– Replication or data protection solution for Windows

– Host-based Splitter

– Support for CLARiiON Arrays only

– Can be upgraded to RecoverPoint w/o data loss

RecoverPoint is an out-of-band, block level replication product for a heterogeneous server and storage

environment.

RecoverPoint CDP (Continuous Data Protection) provides local synchronous replication between LUNs that reside in one or more arrays at the same site.

RecoverPoint CRR (Continuous Remote Replication) provides remote asynchronous replication

between two sites for LUNs that reside in one or more arrays. RecoverPoint CDP and RecoverPoint

CRR feature bi-directional replication and an any-point-in-time recovery capability which allows the

target LUNs to be rolled back to a previous point in time and used for read/write operations, without affecting the ongoing replication or data protection.

RecoverPoint/SE is a version of RecoverPoint targeted at Windows-based, CLARiiON-only

environments. RecoverPoint/SE CDP supports the local synchronous replication of up to 4TB of data

between LUNs that reside inside the same CLARiiON array. RecoverPoint/SE CRR supports the

remote asynchronous replication of up to 4TB of data between LUNs that reside in ONE CLARiiON array at one site to ONE CLARiiON array that resides at the other site.


EMC RecoverPoint CDP Overview 

Continuous Data Protection

Event Based Recovery

Space-efficient protection

Storage Array Storage Array

Oracle dB Exchange SQL

Local Site

Local CDP Journal

RecoverPoint Appliance

RecoverPoint Continuous Data Protection is a licensable solution that provides true CDP with real-time data recovery at the local site. It provides the same level of recovery as Continuous Remote

Replication - just to a local site.


RecoverPoint Local-Protection Process (CDP) 

1. Data is “split” and sent to the RecoverPoint appliance in one of two ways

2a. Host splitter

2b. Cisco SANTap fabric splitter

3. Writes are acknowledged back from the RecoverPoint appliance

4. The appliance writes data to the Journal volume, along with timestamp and application-specific bookmarks

5. Write-order-consistent data is distributed to the replica volumes

Production volumes Replica volumes Journal volume

This slide describes the data flow from the application host to the production volumes, and how the

RecoverPoint appliance accesses the data as part of the CDP process.

An application server issues a write to a LUN that is being protected by RecoverPoint. This write is “split,” then sent to the RecoverPoint appliance in one of two ways. One is through a host splitter,

which is installed as a driver on the host. The splitter looks at the destination for the write packet. If it

is to a LUN that RecoverPoint is protecting, the splitter will send a copy of the write packet to the

RecoverPoint appliance.

The other is through an intelligent fabric switch, such as the Connectrix MDS 9000 with the SSM

module running SANTap. The switch will intercept all writes to LUNs being protected by

RecoverPoint, and will send a copy of that write to the RecoverPoint appliance.

In either case, the original write travels through its normal path to the production LUN. When the copy of the write is received by the RecoverPoint appliance, it is acknowledged back. This acknowledgement is received by the splitter, where it is held until the acknowledgement is received back from the production LUN. Once both acknowledgements are received, the acknowledgement is sent back to the host, and I/O continues normally.

Once the appliance has acknowledged the write, it will move the data into the local journal volume,

along with a timestamp and any application-, event-, or user-generated bookmarks for the write. Once

the data is safely in the journal, it is then distributed to the target volumes, with care taken to ensure

that write order is preserved during this distribution.
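The split/journal/distribute flow above can be condensed into a small model. Everything here, the class names, the ack convention, and the journal entry layout, is an illustrative assumption rather than the product's real data structures.

```python
import time

class Journal:
    """Illustrative journal volume: each entry carries the write plus a
    timestamp and an optional bookmark."""
    def __init__(self):
        self.entries = []

    def append(self, block, data, bookmark=None):
        self.entries.append((time.time(), bookmark, block, data))

    def distribute(self, replica):
        # Write-order-consistent distribution: entries are applied to the
        # replica volume in the order the writes arrived.
        for _, _, block, data in self.entries:
            replica[block] = data

def split_write(block, data, production, journal, bookmark=None):
    """Steps 1-4 in miniature: the splitter sends the write down its normal
    path to the production LUN and a copy to the appliance's journal."""
    production[block] = data                 # normal path to production LUN
    journal.append(block, data, bookmark)    # copy to RecoverPoint appliance
    return "ack"   # the host sees the ack only after both paths complete

production, replica = [None] * 4, [None] * 4
journal = Journal()
split_write(0, "a", production, journal)
split_write(1, "b", production, journal, bookmark="before-upgrade")
journal.distribute(replica)
print(replica)   # ['a', 'b', None, None]
```

The bookmark argument mirrors the application-, event-, or user-generated bookmarks described above: a label riding alongside a write that later identifies a consistent recovery point.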


EMC RecoverPoint CRR Overview 

Continuous Remote Replication
– Bi-directional replication across any distance

– Application bookmarks for consistent recovery

WAN Bandwidth Reduction/Compression

– Delivers up to 15X bandwidth reduction

– Dramatic operational cost savings

Storage Array Storage Array

Oracle dB Exchange SQL

Local Site

Local Journal

Appliance

Key: FC, IP

Appliance

Storage Array

Remote Site

Remote Journal

Oracle dB Exchange SQL

Storage Array

Instant Recovery at Remote Site
– Small-aperture snapshots with fine resolution

– Application bookmarks ensure consistent recovery

Target Side Processing
– Immediate access to data at any time

– Very effective tool for DR, R&D, and QA testing

RecoverPoint Continuous Remote Replication provides heterogeneous bi-directional replication across

an IP WAN infrastructure.

It provides WAN Reduction and Compression, which can reduce the amount of bandwidth used by anywhere from 3 to 15 times.

RecoverPoint is the only solution to provide recovery capabilities at the remote site. It uses a small

aperture snapshot and the remote Journal to capture application events, and provides consistent

recovery at the remote site.

RecoverPoint Continuous Remote Replication allows the use of Target Side Processing. Target Side

Processing is a feature that enables immediate access to the replicated volume at any point in time.

The remote volume can be mounted to any host at the remote site for qualification, testing or remote

recovery.


RecoverPoint Remote-Protection Process (CRR) 

Local site

1. Data is “split” and sent to the RecoverPoint appliance in one of two ways

2a. Host splitter

2b. Cisco SANTap

3. Writes are acknowledged back from the RecoverPoint appliance

4. Appliance functions

• FC-IP conversion

• Replication

• Data reduction & compression

• Monitoring and management

5. Data is sequenced, checksummed, compressed, and replicated to the remote RPAs over IP (either asynchronous, or synchronous and bi-directional)

Remote site Journal volume

6. Data is received, uncompressed, sequenced, and checksummed

7. Data is written to the Journal volume

8. Consistent data is distributed to the remote volumes

This slide describes the data flow from the application host to the production volumes, and how theRecoverPoint appliance accesses the data as part of the CRR process.

An application server issues a write to a LUN that is being protected by RecoverPoint. This write is

“split,” then sent to the RecoverPoint appliance exactly the same as is done in a CDP deployment. From this point, the original write travels through its normal path to the production LUN. When the copy of the write is received by the RecoverPoint appliance, it is immediately acknowledged back from the local RecoverPoint appliance, unless synchronous remote replication is in effect. If synchronous replication is in effect, the acknowledgement is delayed until the write has been received at the remote site. Once the acknowledgement is issued, it is processed by the splitter, where it is held until the acknowledgement is received back from the production LUN. Once both acknowledgements are received, it is sent back to the host, and I/O continues normally.

Once the appliance receives the write, it will bundle this write up with others into a package.

Redundant blocks are eliminated from the package, and the remaining writes are sequenced and stored

with their corresponding timestamp and bookmark information. The package is then compressed, and

a checksum is generated for the package.

The package is then scheduled for delivery across the IP network to the remote appliance. Once the

package is received there, the remote appliance verifies the checksum to ensure the package was not

corrupted in the transmission. The data is then uncompressed and written to the journal volume. Once

the data has been written to the journal volume, it is distributed to the remote volumes, ensuring that

write-order sequence is preserved.
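The packaging steps just described (eliminate redundant blocks, sequence, compress, checksum, verify on receipt) can be sketched with standard-library tools. The wire format below is invented for illustration; zlib compression and CRC-32 stand in for whatever the appliances actually use.

```python
import json
import zlib

def build_package(writes):
    """Illustrative CRR packaging: redundant writes to the same block
    collapse to the most recent one, survivors keep their sequence order,
    and the result is compressed and checksummed."""
    latest = {}
    for seq, (block, data) in enumerate(writes):
        latest[block] = (seq, data)
    ordered = sorted((seq, block, data) for block, (seq, data) in latest.items())
    body = zlib.compress(
        json.dumps([[block, data] for _, block, data in ordered]).encode())
    return body, zlib.crc32(body)           # package + checksum

def receive_package(body, checksum):
    """Remote appliance side: verify the checksum, then uncompress so the
    writes can be journaled and distributed in order."""
    if zlib.crc32(body) != checksum:
        raise IOError("package corrupted in transmission")
    return json.loads(zlib.decompress(body).decode())

# Block 5 is written twice; only the newer write survives, after block 9.
body, crc = build_package([(5, "old"), (9, "x"), (5, "new")])
print(receive_package(body, crc))   # [[9, 'x'], [5, 'new']]
```

A corrupted package fails the checksum comparison before any data is decompressed, matching the verification step the notes describe for the remote appliance.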


Host-based Write Splitting 

Requires RecoverPoint Splitter Driver
– Sits before SCSI and multi-pathing drivers

Application write goes to host’s designated storage volume

RPSD splits application write, directs to the RPA

Minimal footprint on host

Support varies per Operating System
– Check the Support Matrix

RPA

RPSD

Writes split at host

Splitters are components that split each application write, and send a copy to the Appliance.

Host-based splitters require the RecoverPoint Splitter Driver (RPSD) to be installed on the host. This

driver performs the actual I/O splitting from the host, and sends one copy to the RPA and one to the host’s normal storage volume.

The RPSD can be managed with a utility program, which provides functionality such as unmount, mount, quiesce, and host log data gathering.


Replication Concepts 

Replication Pairs

– Contains a replication volume at each site
– Associates which replication volumes will replicate with each other

Consistency Group

– Contains all replication pairs used by an application

– Replication type (CDP/CRR)

– Contains replication policy information

Journal Volume
– Tracks data changes, time ordering, and block location

– Keeps bookmarks for recovery

– Repository for live data updates

A replication pair contains two replication volumes, source and target, that will have data replicated

between them. Since a consistency group can maintain consistency of data across multiple volumes,

multiple replication pairs can be created to form a single consistency group.

A consistency group requires a minimum of one replication pair. Consistency

and write order fidelity is maintained across all replication pairs contained in the same consistency

group. The consistency group also contains at least one Journal volume on each site to hold consistent

point-in-time images for the specific consistency group. All policy information, such as RPO and

RTO, is associated with a specific consistency group, allowing for multiple policies to be maintained

in the replication environment. Each group also provides the ability to compress the data for a single consistency group beyond the compression already present in the solution. It also allows for

specific bandwidth to be allocated for the consistency group.


Consistency Group Structure 

Site 1 CG1
Site 2 CG1

R1

R2

R3 R3’

R2’

R1’

H

Consistency Group

jvol Replication Pair

Replication Pair

Replication Pair

jvol

A Consistency Group is a logical grouping of replication pairs that must be consistent across each

other. The need for consistency across these volumes could be due to the volumes being used by the

same application. A Consistency Group has at least one replication pair, and each site has a Journal

volume.

A Consistency Group is also used to determine replication direction and policies on a set of replication

volumes. Each consistency group is an independent entity and can have different replication direction

and policies than other consistency groups. This allows for synchronous and asynchronous replication,

as well as bi-directional replication, to exist in the same environment.
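The consistency-group structure described in these two slides (replication pairs, a journal volume per site, per-group replication direction and policy) maps naturally onto a small data model. Names and fields below are assumptions made for illustration only.

```python
class ReplicationPair:
    """A source volume and its target at the other site (illustrative)."""
    def __init__(self, source, target):
        self.source, self.target = source, target

class ConsistencyGroup:
    """Sketch of the structure described above: at least one replication
    pair, a journal volume per site, and per-group replication policy."""
    def __init__(self, name, mode, pairs, policy=None):
        if not pairs:
            raise ValueError("a consistency group needs at least one pair")
        self.name = name
        self.mode = mode                   # "CDP" or "CRR"
        self.pairs = list(pairs)
        self.journals = {"site1": [], "site2": []}   # one journal per site
        self.policy = policy or {}         # e.g. RPO, compression, bandwidth

# CG1 from the diagram: three pairs replicating from site 1 to site 2.
cg = ConsistencyGroup(
    "CG1", "CRR",
    [ReplicationPair("R1", "R1'"), ReplicationPair("R2", "R2'"),
     ReplicationPair("R3", "R3'")],
    policy={"rpo_seconds": 60, "compression": True},
)
print(len(cg.pairs))   # 3
```

Because each group carries its own mode and policy, two instances of this class with different settings coexist independently, which is how synchronous, asynchronous, and bi-directional replication can share one environment.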


RecoverPoint GUI Management Console 

The Management Console is accessible via an Internet browser from a computer that is connected to

the appliance management network (the latest Java plug-in is required). To manage a RecoverPoint

cluster, open an Internet browser and connect to the RecoverPoint Appliance Management IP address

for one of the sites. All configuration and monitoring of the replication environment can be performed through the Management Console.

The system status section provides a basic visual representation of the environment. It groups each

part of the replication into “types” instead of displaying each individual component. Each site contains

Host, Switch, Storage, and RecoverPoint Appliance types. The appliances provide proactive monitoring

of all major connectivity in the replication environment. If an error occurs or connectivity is lost to a

specific component, a visual alert indicator is displayed on the representation and the status of the

component type on the right of the system status portion displays “Error”.


Module Summary 

Key points covered in this module:

The EMC RecoverPoint data protection solution provideslocal continuous data protection (CDP) and continuousremote replication (CRR)

RecoverPoint provides heterogeneous server andstorage support

RecoverPoint CRR provides remote asynchronousreplication between two sites for LUNs that reside in oneor more arrays

These are the key points covered in this module. Please take a moment to review them.


Course Summary 

Key points covered in this course:

Virtual Infrastructures

VMWare product differences

Concepts and benefits of Storage Virtualization

Benefits, features, and advantages of an Invista Solution

File Level Virtualization basic concepts

Rainfinity features, functions and benefits

Key features of a RecoverPoint solution

These are the key points covered in this training. Please take a moment to review them.

This concludes the training.