
EMC Storage Virtualization Foundations


Document release 2009


Page 1: EMC Storage Virtualization Foundations

Copyright © 2009 EMC Corporation. Do not Copy - All Rights Reserved.

EMC Storage Virtualization Foundations - 1

© 2009 EMC Corporation. All rights reserved.

EMC Storage Virtualization Foundations

Welcome to EMC Storage Virtualization Foundations.

Copyright © 2009 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, EMC ControlCenter, AdvantEdge, AlphaStor, ApplicationXtender, Avamar, Captiva, Catalog Solution, Celerra, Centera, CentraStar, ClaimPack, ClaimsEditor, ClaimsEditor Professional, CLARalert, CLARiiON, ClientPak, CodeLink, Connectrix, Co-StandbyServer, Dantz, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, EmailXaminer, EmailXtender, EmailXtract, enVision, eRoom, Event Explorer, FLARE, FormWare, HighRoad, InputAccel, InputAccel Express, Invista, ISIS, Max Retriever, Navisphere, NetWorker, nLayers, OpenScale, PixTools, Powerlink, PowerPath, Rainfinity, RepliStor, ResourcePak, Retrospect, RSA, RSA Secured, RSA Security, SecurID, SecurWorld, Smarts, SnapShotServer, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, VSAM-Assist, WebXtender, where information lives, xPression, xPresso, Xtender, Xtender Solutions; and EMC OnCourse, EMC Proven, EMC Snap, EMC Storage Administrator, Acartus, Access Logix, ArchiveXtender, Authentic Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, CLARevent, Codebook Correlation Technology, Common Information Model, CopyCross, CopyPoint, DatabaseXtender, Digital Mailroom, Direct Matrix, EDM, E-Lab, eInput, Enginuity, FarPoint, FirstPass, Fortress, Global File Virtualization, Graphic Visualization, InfoMover, Infoscape, MediaStor, MirrorView, Mozy, MozyEnterprise, MozyHome, MozyPro, NetWin, OnAlert, PowerSnap, QuickScan, RepliCare, SafeLine, SAN Advisor, SAN Copy, SAN Manager, SDMS, SnapImage, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, UltraFlex, UltraPoint, UltraScale, Viewlets, VisualSRM are trademarks of EMC Corporation.

All other trademarks used herein are the property of their respective owners.

Page 2: EMC Storage Virtualization Foundations

Course Objectives

Upon completion of this course, you will be able to:

Define a virtual infrastructure

List VMware product differences

Cite basic concepts of file-level virtualization

Describe Rainfinity features, functions, and benefits

Identify benefits, features, and advantages of an Invista solution

Explain the concept and benefits of storage virtualization

The objectives for this course are shown here. Please take a moment to read them.

Page 3: EMC Storage Virtualization Foundations

Virtualization Technologies

*Formerly ESX Server

[Slide graphic: three panels, each showing stacks of APP/OS boxes running on shared resources]

Server virtualization with VMware Infrastructure

File virtualization with EMC Rainfinity File Virtualization
– Appliance
– IP network, Global Namespace, NAS storage pool
– File servers: EMC, NetApp

Block-storage virtualization with EMC Invista
– Invista runs in the storage network
– Virtual volumes
– Physical storage

Complementing virtualization services are the virtualization technologies: server, file, and block. We all know these EMC virtualization technologies well: VMware, Rainfinity, and Invista. Take the time to learn about these technologies if you are not familiar with them. There are occasions when one of these technologies is an obvious recommendation to address a customer problem.

Page 4: EMC Storage Virtualization Foundations

VMware vSphere 4 Virtualization Overview

Upon completion of this module, you will be able to:

Define virtualization concepts

Describe VMware vSphere infrastructure and components

Describe the architecture of an ESX/ESXi host

Describe the architecture of a Virtual Machine (VM)

Describe vCenter Server and its usage

The objectives for this module are shown here. Please take a moment to read them.

Page 5: EMC Storage Virtualization Foundations

What is Virtualization?

Virtualization allows you to run multiple operating systems and applications on a single computer

Virtualization allows consolidation of many servers into a single physical computer

Two implementation solutions:
– Hypervisor
– Hosted

Virtualization is a technology that enables consolidation of many servers into a single physical computer, allowing multiple operating systems to run simultaneously on a single piece of hardware. The virtualization layer is a thin layer of software installed between the hardware and the operating system (OS). It dynamically partitions physical resources such as CPU, memory, and I/O devices among the machines running concurrently in the virtual environment. The task of the virtualization software is to provide an independent operating environment for each operating system and to maintain logical separation of physical resources. Virtualization is often compared, incorrectly, with simulation, emulation, and terminal services. It is important to understand how virtualization differs from these technologies.

Simulation provides the look and feel of an environment but does not represent the physical environment. Examples of simulation are virtual reality games and the training programs (such as flight simulators) used to train pilots. Emulation is the ability of a software program or hardware device to imitate another software program or hardware device. Emulation is predominantly used for training, demonstration, and testing when the original hardware and software are not available. Terminal services allow remote access to a server by many users simultaneously. Citrix MetaFrame is an example of terminal services.

In contrast to these technologies, virtualization allows multiple operating systems to be hosted on a single computer, giving users and applications within each operating system access to all the computing resources they need as if they owned the entire computer system.

There are two implementation solutions for virtualization on x86 (the CPU architecture standard implemented by Intel, AMD, Cyrix, and others).

A hosted virtualization solution installs a thin virtualization layer on top of an operating system as an application; that layer provides virtual resources for the guest operating systems to run on. Examples of hosted virtualization solutions are VMware Server, Workstation, ACE, and VMware Player.

A hypervisor virtualization solution is a thin layer of software installed directly on the hardware (bare metal) that creates virtual resources for the guest operating systems. Examples of hypervisor virtualization solutions are ESX and ESXi.
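The partitioning role described above can be sketched as a toy model. This is purely illustrative; the class and method names are invented for this sketch and are not a real VMware API:

```python
# Toy model of a hypervisor partitioning fixed physical resources
# among guest VMs (illustrative only; not a real VMware API).

class Hypervisor:
    def __init__(self, cpus, memory_gb):
        self.free_cpus = cpus            # physical CPU cores still available
        self.free_memory_gb = memory_gb  # physical memory still available
        self.vms = {}                    # VM name -> (cpus, memory_gb)

    def power_on(self, name, cpus, memory_gb):
        """Admit a VM only if enough physical resources remain,
        preserving logical separation between guests."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            raise RuntimeError(f"insufficient resources for {name}")
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.vms[name] = (cpus, memory_gb)

    def power_off(self, name):
        """Return a VM's share of resources to the physical pool."""
        cpus, mem = self.vms.pop(name)
        self.free_cpus += cpus
        self.free_memory_gb += mem

host = Hypervisor(cpus=8, memory_gb=64)
host.power_on("web01", cpus=2, memory_gb=8)
host.power_on("db01", cpus=4, memory_gb=32)
print(host.free_cpus, host.free_memory_gb)   # 2 24
```

Each guest sees only the resources it was granted; the pool accounting is what keeps one VM's demands from leaking into another's environment.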

Page 6: EMC Storage Virtualization Foundations

VMware vSphere 4 Infrastructure

Network Management
Storage Management and Replication
Virtual Appliance
Live Migration

Firewall; Anti-Virus; Intrusion Prevention and Detection

Dynamic Resource Sizing

Virtualization enables consolidation of service offerings to end users in a most cost-effective manner. Flexibility, scalability, availability, and manageability are the key benefits of virtualization, providing an optimal computing infrastructure. Services in a datacenter are provisioned for internal users in a virtual environment, which can be referred to as an internal cloud, allowing seamless access to resources such as CPU, storage, and network. Extending services to external users with an external cloud is also a key requirement in a virtualized environment.

VMware has adopted a new approach with the release of the vSphere 4 suite of products to better integrate the various components and create a unified cloud infrastructure (internal and external). Infrastructure services are the segment of vSphere that focuses on the physical components of a computing environment. Infrastructure Services consist of three components: vCompute, vStorage, and vNetwork.

vCompute virtualizes server resources (CPU, memory, BIOS, chipsets, etc.) using a hypervisor, which creates an aggregated pool of resources for operating systems (OS) and applications to use. The OS/applications are installed within a virtual object known as a Virtual Machine (VM). The VM draws resources from the resource pool so that various OS/applications can run on the server with complete logical isolation. ESX/ESXi hosts provide the physical resources used to run virtual machines. They are bare-metal, efficient, and reliable hypervisors that provide a virtualization layer by abstracting the processor, memory, storage, network, and other hardware components. ESX is managed with a built-in service console or the vSphere command line interface (vCLI). ESXi is lightweight (32 MB footprint) software and is managed with a BIOS-like direct console or vCLI. VMware gives ESXi away free so customers can experience the usefulness of virtualization technologies.

The vStorage service addresses virtualization of storage devices such as SCSI, FC, iSCSI, and NFS storage systems. The ESX/ESXi server virtualizes storage into a logical storage unit known as a datastore. Three types of datastores are supported by an ESX/ESXi host: Virtual Machine File System (VMFS), Raw Device Mapping (RDM), and Network File System (NFS). vStorage services also provide thin provisioning to improve the utilization of expensive storage systems. With thin provisioning, the OS can see and use a large disk even though the physical storage allocated to it is smaller. With the thin provisioning approach, customers can better utilize and manage expensive storage systems.

The vNetwork service addresses virtualization of network connectivity. The vNetwork service supports virtual distributed switches (VDS) as well as virtual switches. A virtual switch provides network connectivity for VM to VM, VM to external host, or VMkernel services such as NFS and iSCSI to network-attached storage. It also provides connectivity to the service console for management of the ESX server.
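The thin-provisioning idea above can be sketched in a few lines: the guest sees the full logical disk size, but physical space is consumed only when blocks are actually written. This is a minimal conceptual illustration, not VMware's implementation; the block size and class names are assumptions:

```python
# Minimal sketch of a thin-provisioned virtual disk: the guest sees
# the full logical size, but physical blocks are allocated only on
# first write (conceptual only; not VMware's actual implementation).

BLOCK_SIZE = 1024 * 1024   # 1 MiB blocks, an assumed granularity

class ThinDisk:
    def __init__(self, logical_size_blocks):
        self.logical_size_blocks = logical_size_blocks  # size the OS sees
        self.blocks = {}  # block index -> data; only written blocks exist

    def write(self, block_index, data):
        if not 0 <= block_index < self.logical_size_blocks:
            raise IndexError("write past end of virtual disk")
        self.blocks[block_index] = data   # physical allocation happens here

    def read(self, block_index):
        # Unwritten blocks read back as zeros, like a sparse file
        return self.blocks.get(block_index, b"\x00" * BLOCK_SIZE)

    @property
    def allocated_bytes(self):
        # Physical consumption, as opposed to the advertised logical size
        return len(self.blocks) * BLOCK_SIZE

disk = ThinDisk(logical_size_blocks=10240)          # guest sees 10 GiB
disk.write(0, b"boot" + b"\x00" * (BLOCK_SIZE - 4))
print(disk.allocated_bytes)                          # 1048576: only 1 MiB used
```

The gap between `logical_size_blocks * BLOCK_SIZE` and `allocated_bytes` is exactly the capacity a thin-provisioned datastore lets you defer buying.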

Page 7: EMC Storage Virtualization Foundations

VMware vSphere 4 Infrastructure (Cont)


Application services provide availability, security, and scalability for the virtual IT environment. Application services focus on the logical aspects of the infrastructure (availability, security, and scalability).

To achieve business goals for availability, VMware vSphere has introduced VMotion, HA, FT, and DRS. VMotion technology allows live movement of a VM from one server to another as long as both servers are in the same cluster and share the same datastore. This solution is particularly useful for providing availability during hardware maintenance windows or for moving VMs off a failing server. VMotion can also be used for load balancing when there is performance degradation on an ESX host. High Availability (HA) is a technology that lets you create a cluster without any vendor-specific clustering solution. It provides the functionality of a cluster, enabling the OS/applications to restart on a secondary server if the primary server fails. Fault Tolerance (FT) provides data protection and business continuity. FT technology ensures zero data loss for an application. FT is based on vLockstep technology, where the primary and secondary virtual machines maintain the same data and instruction state at any given point in time.

vSphere provides greater security and addresses compliance requirements. vShield Zones maintain network fencing for applications in a virtualized, shared-resource-pool environment. This creates trust and network segmentation of users and data accessibility. VMsafe is a set of Application Programming Interfaces (APIs) that allow third-party vendors to write software to protect virtual machines in terms of access to memory, CPU, networking, process execution, and storage resources. This feature is planned for release in the future. Network/application firewalls, antivirus protection, Intrusion Detection (ID), and Intrusion Prevention (IP) are some of the other features available in vSphere.

VMware vSphere 4 is highly scalable. A VM can now use up to 8 vCPUs and 255 GB of memory. New virtual hardware can be added to a VM while the VM is up and running. For example, you can add more CPU, memory, Ethernet cards, or hard disks to a virtual machine in the ON state. Disk capacity can also be increased while the VM is up and running. The core component of vSphere is the ESX/ESXi server. The next slide details the architecture of the ESX/ESXi server.

Page 8: EMC Storage Virtualization Foundations

Introduction to ESX

x86 Architecture

ESX is a hypervisor (also known as VMkernel) which installs on bare-metal x86 hardware to create a virtual platform. As you can see, the bottom portion of the slide depicts the physical hardware components (CPU, memory, storage, and network) on which the ESX is installed and creates a virtual platform.

The hypervisor performs certain tasks (scheduling CPU resources, memory management, and all low-level I/O jobs) on the physical hardware components based on requests from the Virtual Machine Monitor (VMM). The VMM is a process that runs within the VMkernel for each Virtual Machine (VM) running on an ESX server. It provides the abstraction of hardware to the guest OS that runs inside the VM. The VM presents a virtual hardware environment with virtual CPU, memory, network/storage controllers, and disks.

A Virtual Machine provides a complete system environment for the guest OS, which uses it as if it were a physical system, although the VM is just a set of configuration, state, and storage files that represent a virtual chipset (Intel 440BX), BIOS, VGA, CPU, RAM, disk/network controllers, and disks.
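Because the virtual hardware is described by plain configuration files, it can be read straight from them. The sketch below parses a simplified `.vmx`-style file of `key = "value"` lines to recover the hardware a VM presents; the sample content and helper name are invented for illustration, and real configuration files contain many more settings:

```python
# Parse a simplified .vmx-style VM configuration (key = "value" per line)
# to see the virtual hardware a VM presents. Illustrative sketch only;
# the sample values below are hypothetical.

SAMPLE_VMX = '''
displayName = "web01"
memsize = "4096"
numvcpus = "2"
guestOS = "rhel5-64"
scsi0:0.fileName = "web01.vmdk"
'''

def parse_vmx(text):
    """Return the key/value pairs from a .vmx-style configuration."""
    config = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blanks and anything that isn't key = value
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip().strip('"')
    return config

vm = parse_vmx(SAMPLE_VMX)
print(vm["displayName"], vm["memsize"], vm["numvcpus"])   # web01 4096 2
```

The point is that "virtual hardware" here is nothing more exotic than text: change `memsize` in the file and the guest boots with a different amount of RAM.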

VMware ESX ships in two versions: ESX and ESXi. We will learn about new features of ESX and ESXi hosts in the next slide.

Page 9: EMC Storage Virtualization Foundations

ESX and ESXi Architecture

Here we learn about the ESX and ESXi servers. The major differentiator between ESX and ESXi is the service console: ESX has a service console, whereas ESXi does not. In vSphere, the service console is encapsulated in a VM, which is the first VM to be installed on an ESX server.

The new features of vSphere ESX and ESXi 4.0:

64-bit system architecture – Both the ESX and ESXi hypervisors now support 64-bit CPUs (Intel and AMD).

Improved performance and scalability – ESX 4 has greater transaction I/O processing capacity and a more scalable architecture, supporting up to 64 CPU cores and 1 TB RAM per host.

Network and storage optimization – Improved paravirtualized SCSI and iSCSI device drivers, and support for 10 GigE networking with Jumbo Frames. Support for IPv6 is a new feature as well.

Better support for storage and network consolidation – Through support for dynamic storage/file system expansion, thin provisioning, and distributed virtual switches (allowing network aggregation across hosts).

DirectPath I/O – VMware DirectPath I/O provides the ability to give a guest OS direct access to a physical network or storage controller.

Improved security – With the VMsafe API and Trusted Platform Module (TPM).

Many more management options – Including CLI options, virtual management appliances, programming interfaces, and clients.

Page 10: EMC Storage Virtualization Foundations

Virtual Machine (VM)

Set of virtual hardware where OS/applications run
– Virtual hardware (version 7)

Viewed as a set of files

Provisioning methods
– Cloning
– Template

Virtual appliances

Flexibility
– Hot-pluggable device
– Virtual disk size increase
– vApp

Scalability
– 8-way vCPU, 256 GB RAM

VMDirectPath for VM

Virtualized Hardware

Operating System

Applications

Virtual Machine (VM)

CPU NIC Memory HDD CD

The Virtual Machine (VM) is a virtual hardware representation on which the OS and its applications run. An OS running in a VM is called a guest operating system. VMware provides a list of guest operating systems supported on its platform. It is always good practice to check the latest compatibility document on the VMware website for the latest operating system and hardware device support. The new VM hardware version 7 is compatible with ESX/ESXi 4.0 hosts and includes many new features:

New storage virtual devices
− SAS virtual devices (Serial Attached SCSI)
− IDE virtual devices

VMXNET Generation 3
8-way Virtual SMP
256 GB RAM
Enhanced VMotion Capability (EVC)
Virtual Machine Hot Plug Support (memory, CPU, and devices without VM shutdown)

A Virtual Machine is a discrete set of files. The following files constitute a Virtual Machine:
Configuration file (VM_name.vmx)
Virtual disk characteristics (VM_name.vmdk)
Preallocated virtual disk (VM_name-flat.vmdk)
VM BIOS (VM_name.nvram)
Swap file (VM_name.vswp)
Virtual Machine snapshot (VM_name.vmsd)
Log file (vmware.log)
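Since a VM is simply a handful of files with known extensions, locating the files that belong to one VM is a matter of matching names. A small sketch under that assumption (the directory listing and file names below are hypothetical):

```python
# Group a directory listing into the files that make up one VM, keyed
# by the extensions listed above. File names here are hypothetical.

VM_EXTENSIONS = {".vmx", ".vmdk", ".nvram", ".vswp", ".vmsd", ".log"}

def vm_files(listing, vm_name):
    """Return the files in `listing` that belong to `vm_name`."""
    out = []
    for name in listing:
        stem, _, ext = name.rpartition(".")
        if ("." + ext) in VM_EXTENSIONS and stem.startswith(vm_name):
            out.append(name)
        elif name == "vmware.log":   # the log file is not name-prefixed
            out.append(name)
    return sorted(out)

listing = ["web01.vmx", "web01.vmdk", "web01-flat.vmdk",
           "web01.nvram", "web01.vswp", "web01.vmsd",
           "vmware.log", "db01.vmx"]
print(vm_files(listing, "web01"))   # 7 files; db01.vmx excluded
```

This is also why cloning is cheap in principle: copy this file set, rename it, and you have a second VM.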

Because a VM is a set of discrete files, it is portable: it can be cloned, or used as a template to provision multiple VMs from a source VM. VMware, along with other companies, has developed the Open Virtualization Format (OVF) to simplify VM deployment and packaging. OVF is a format for packaging an OS, preconfigured applications, and other software assembled for a specific purpose. For example, web, database, and application servers can be preconfigured and packaged into an appliance using the OVF format.

VMware virtual machines are highly flexible and allow hot-pluggable devices and virtual disk size increases while the VM is up and running. You can add CPU, memory, and other I/O devices to the VM, and increase the size of a virtual disk, while the VM is up and running. The maximum memory size of a VM has been increased to 256 GB to meet application requirements. A VM can also take advantage of virtual SMP (symmetric multiprocessing), up to 8 virtual CPUs, for CPU-intensive workloads.

The VMDirectPath for Virtual Machines feature is primarily targeted at applications that can benefit from direct access by the guest OS to I/O devices. If the guest OS is enabled with this feature, other virtualization features such as VMotion, hardware independence, and sharing of physical I/O devices will not be available to the VM using VMDirectPath.

Page 11: EMC Storage Virtualization Foundations

vSphere Components

VMware vSphere Client

VMware vSphere Web Access

VMware vSphere vStorage VMFS

VMware Virtual SMP

Let us look in more detail at some important components of vSphere.

VMware vSphere Client – The interface to vCenter and the ESX/ESXi servers. It allows users to remotely connect to a vCenter or ESX/ESXi server and perform tasks depending on their user rights.

VMware vSphere Web Access – Allows an internet browser to manage virtual machines and provides access to the remote console.

VMware vSphere vStorage VMFS – This is a cluster file system that allows simultaneous read and write access to a storage device. It is the default file system for virtual machines. It allows live migration of running virtual machines from one physical server to another, and allows a failed virtual machine to be restarted on a healthy physical server, as long as that server participates in the cluster configuration. VMFS is optimized to store large files such as virtual disks and the memory image of a server. It can grow to a maximum size of 64 TB.

VMware Virtual SMP – This component allows a virtual machine (VM) to use multiple physical CPUs simultaneously. It is very useful for CPU-intensive applications. It is a licensed component and requires a separate license to activate.

Page 12: EMC Storage Virtualization Foundations

VMware vCenter Components

VMware vCenter architecture is a collection of services and interfaces. It is organized into the following types of services and interfaces:

Core Services: These are management services that perform the following functions:
VM provisioning
Host and VM configuration
Resource and inventory management
Statistics and logging
Alarms and event management
Task scheduling

Distributed services: This provides services such as VMotion, Distributed Resource Scheduler (DRS), and High Availability (HA).

Plug-ins: Other components are installed separately from the base products and plug in to the vCenter Server, such as VMware Update Manager and VMware Converter.

Active Directory Interface: Provides client login and authentication for domain users.

vSphere Application Programming Interface (API): Allows third parties to develop custom applications that use vCenter.

ESX/ESXi Management: The vCenter server installs an agent (vpxa) on each ESX/ESXi host for communication with the host agent (hostd).

Database Interface: This provides access to the database. vCenter comes bundled with the SQL Express database for small installations (up to five hosts and fifty VMs). However, it is a best practice to use an external database instance for production and enterprise use. The supported databases are MS SQL 2005/2008 and Oracle 10g and 11g. For a complete list of database requirements and compatibility, please check the vCenter installation guide.

The next slide provides more details about vCenter server functionality.

Page 13: EMC Storage Virtualization Foundations

VMware vCenter Server

vSphere Client

Via Web Browser

The vCenter server provides centralized management, configuration, provisioning, and automation of the virtual IT environment.

VMware vSphere vCenter Server is a suite of products that allows you to configure, manage, and monitor the VMware virtual infrastructure through the vSphere Client or the vSphere web-based client. vCenter Server is installed on a Windows server, physical or virtual (VM); it cannot be installed on other operating systems such as Linux or UNIX. VMware add-on components such as VMotion, High Availability (HA), and Distributed Resource Scheduler (DRS) are installed and managed through VMware vCenter Server.

Multiple vCenter servers can be managed by a single vSphere Client, providing a consolidated view of multiple vCenter servers. vCenter (VC) servers are interconnected in Linked Mode, which allows administrators to share roles and licenses across multiple VC servers. Linked Mode uses MS ADAM (Microsoft Active Directory Application Mode).

Host Profiles in a VC server are policy-based rules that enforce compliance along with configuration settings for network, storage, and security on multiple hosts. This simplifies host configuration management.

vServices simplify deployment and ongoing management of multi-tiered applications running on multiple VMs by encapsulating them into a single vService entity.

Licensing is an application in the vCenter server suite that centralizes the reporting and management of license keys in the VC 4.0 environment.

vCenter also provides performance charts with a single view of CPU, disk, memory, and network usage. These aggregated views can be drilled down to detailed views of resources, such as a datastore, or of events. Alarms can be set for events, and low-level hardware and host events can be displayed rapidly for fault isolation. vCenter also increases visibility of the virtual infrastructure through reports and topology maps, and provides detailed resource usage statistics for CPU and memory at both the VM and resource pool level.

VI Update Service allows remote upgrade of older virtual infrastructure, rollback, and post-installation scripts.

Page 14: EMC Storage Virtualization Foundations

Module Summary

Key points covered in this module:

Virtualization concepts

VMware vSphere infrastructure architecture

Architecture of ESX/ESXi host

Virtual Machine definition

List of VMware vSphere components

VMware vSphere vCenter Server framework

These are the key points covered in this module. Please take a moment to review them.

Page 15: EMC Storage Virtualization Foundations

EMC Rainfinity

Upon completion of this module, you will be able to:

Describe basic concepts of file-level virtualization

Identify Rainfinity terminology

Describe Rainfinity theory of operations

Describe Rainfinity features and functions

Identify Rainfinity platforms

List the benefits of a Rainfinity solution

The objectives for this module are shown here. Please take a moment to read them.

Page 16: EMC Storage Virtualization Foundations

File Level Virtualization Basics

After File Level Virtualization

Break dependencies between end-user access and data location
Storage utilization is optimized
Non-disruptive data migrations

NAS devices/platforms

IP network

NAS devices/platforms

IP network

Every NAS device is an independent entity, physically and logically
Underutilized storage resources
Downtime caused by data migrations

Before File Level Virtualization

EMC Rainfinity virtualizes NAS environments by dynamically moving information without disruption to clients or applications, making those environments simple to manage. Rainfinity is an out-of-band file system virtualization solution that enables non-disruptive data movement in multi-vendor NAS environments.

Page 17: EMC Storage Virtualization Foundations

Rainfinity Overview

Rainfinity is a dedicated hardware/software solution that manages file-oriented (NFS/CIFS) storage access

Provides transparent data mobility

Acts as a bridge between clients and storage servers

Manages data and servers in heterogeneous environments
– Microsoft Windows®
– UNIX / Linux

Assists with:
– Data migration and consolidation
– Storage optimization
– Global Namespace Management
– Tiered storage management

Rainfinity is a dedicated hardware/software platform solution that supports the management of file-oriented data and their servers. File-oriented data is data that is accessed by CIFS or NFS. Rainfinity allows clients to transparently access data that it is managing. Virtualization is an abstraction of the logical and physical paths to data. The client is unaware where the data physically resides. The management of the namespace can be accomplished by industry-standard mechanisms such as a Distributed File System (DFS) in a Windows environment, and NIS/Automount and LDAP in a UNIX environment. Rainfinity does not create its own namespace; it integrates with these existing industry namespaces.

Rainfinity assists administrators with file-oriented storage optimization, consolidation, and disaster recovery. It leverages existing industry namespaces and allows for a single point of namespace management. As data becomes less valuable over time, it can be moved from one storage tier to a less expensive tier while the data remains available on disk rather than on tape. Freeing up expensive storage enables a new project or application to come online. Consequently, tiered storage results in much more effective storage utilization. These management functions work in a heterogeneous environment. In other words, Rainfinity supports Microsoft Windows® as well as UNIX and Linux sites.

Rainfinity is designed to be installed as a bridge between file servers and clients on the network. To achieve this, Rainfinity functions as a Layer 2 switch. This functionality enables Rainfinity to see and process traffic between clients and file servers with minimal modification to the existing network.
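The namespace-based abstraction described above can be pictured with a toy lookup table. This is an illustrative sketch only: the class and method names are invented, not a Rainfinity or DFS API. The point is that clients resolve a stable logical path, so when data moves, only the table changes.

```python
# Illustrative sketch of a global namespace: a mapping from logical
# share paths to their current physical location. Hypothetical names,
# in the spirit of the DFS/automount integration described above.

class GlobalNamespace:
    """Maps logical share paths to their current physical location."""

    def __init__(self):
        self._map = {}  # logical path -> (server, export)

    def publish(self, logical_path, server, export):
        self._map[logical_path] = (server, export)

    def resolve(self, logical_path):
        # Clients always ask the namespace rather than a hard-coded
        # server, so data can move without client reconfiguration.
        return self._map[logical_path]

    def relocate(self, logical_path, new_server, new_export):
        # After a migration completes, only this table changes;
        # the logical path the clients use stays the same.
        self._map[logical_path] = (new_server, new_export)


ns = GlobalNamespace()
ns.publish("/corp/engineering", "nas1", "/vol/eng")
before = ns.resolve("/corp/engineering")
ns.relocate("/corp/engineering", "nas2", "/vol/eng_new")
after = ns.resolve("/corp/engineering")
```

Here the migration changes `before` from `nas1` to the new `nas2` location while `/corp/engineering`, the only path clients ever see, is untouched.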


File Virtualization Appliance (FVA)

Abstracts file-based storage access over IP-based networks
– Physical storage location is transparent to users and applications
– File-based storage systems are seen as a logical pool of resources
– Provides constant access to data while moving NFS and/or CIFS data

Simplifies multiprotocol data migration
– Replicates files and preserves multiprotocol attributes

(Figure: clients connected through the FVA, acting as a Layer 2 switch/bridge, to a NAS storage pool of Celerra, Centera, NetApp, and file servers)

File Virtualization Appliance, or FVA, focuses on abstracting the file-level storage access over IP-based networks. FVA is the term used to describe the values and technologies that extend file-based storage systems to appear as a logical pool of resources from which you can freely allocate capacity wherever and whenever it is needed.

NAS and file serving devices are typically implemented with a single file system per device. When a file system grows to the limit of a device’s physical disk capacity, the data must be repopulated to another device with more capacity, and clients must be remounted/remapped to the new location. The file system must be taken offline during this process, disrupting users and applications, because active data cannot change physical location. When there are multiple devices, the storage administrator must manage each device independently as “islands” of storage capacity.

FVA provides a layer of transparency to users and applications. Distribution transparency masks the distributed nature of data while it is being repopulated, and allows users and applications full read/write access. FVA simplifies multiprotocol data migration. FVA’s data mobility engine replicates data and metadata, maps permissions, synchronizes access and changes, and redirects client access to the authoritative copy of the data—all for multiprotocol mixed access as well as NFS-only or CIFS-only access.


Rainfinity Basic Operations

Bi-modal
– Out-of-band mode
  - Stays out-of-band until data mobility is needed
  - Avoids the network performance impact of in-band appliances
  - Can still monitor servers
– In-band mode
  - File servers go in-band to prevent the user disruption typical of out-of-band devices
  - Can monitor protocol connections and sessions
  - Keeps source and destination in sync as long as needed
  - Tracks client access to source and destination

Monitors storage system performance: CPU, I/O, capacity

Performs actual data movement during migration

Integrates to provide a single point of namespace management
– DFS, Automounter, and standalone deployment

When Rainfinity is doing a move, the two file servers involved in the move must be on the private, or server side. In this case, Rainfinity is said to be in-band, and those file servers are also referred to as in-band. When Rainfinity is not doing a move or redirecting access to certain file servers, those file servers may be moved to the public, or client side. In this case, Rainfinity is said to be out-of-band for these file servers, and those file servers are also referred to as out-of-band. Rainfinity is aware of file-sharing protocols. It is this application-layer intelligence that allows Rainfinity to move data without interrupting client access.

Not only can Rainfinity move data transparently, it can also be implemented as a data gathering device. A key to the Rainfinity technology suite is that the appliance has multiple network interfaces and essentially performs like a bridge—bringing hosts in- and out-of-band as required to move data accordingly.
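As a rough mental model (not product code), this bi-modal behavior can be pictured as a set of file servers that the appliance tracks as in-band while a move involves them. The names below are hypothetical.

```python
# Hedged sketch: the appliance brings both servers involved in a move
# onto its private (server) side, i.e. in-band, and releases them back
# to the public side when redirection is no longer needed.

class BridgeAppliance:
    def __init__(self):
        self.in_band = set()  # servers currently on the private side

    def start_move(self, source, destination):
        # Both file servers involved in a move must be in-band.
        self.in_band.update({source, destination})

    def finish_move(self, source, destination):
        # Once client sessions are remapped, the servers return to
        # the public side; the appliance is out-of-band for them.
        self.in_band.difference_update({source, destination})

    def is_in_band(self, server):
        return server in self.in_band


bridge = BridgeAppliance()
bridge.start_move("nas1", "nas2")
during = bridge.is_in_band("nas1")
bridge.finish_move("nas1", "nas2")
after = bridge.is_in_band("nas1")
```

In this sketch `during` is true and `after` is false, mirroring how servers are only in-band for the duration of a move.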

Rainfinity also leverages existing namespaces into a single point of management. For example, Rainfinity can integrate with a customer’s DFS, NIS/Automount, and LDAP namespace environments to provide a single view of the data.


Rainfinity FVA Theory of Operation

(Figure: the Rainfinity FVA appliance sitting between clients and NAS devices, integrated with the DFS/AD, Automount, NIS, and LDAP namespaces)

1. Data mobility – Rainfinity is triggered
2. Redirection – Global namespace updated

Rainfinity installs by plugging into the network switch. There are no changes required to client mount points. Rainfinity installs in the network but is not in the data path. When you install Rainfinity, you set up a separate VLAN in the network. Clients continue to access storage with no disruption.

When data migration must take place for cost or optimization reasons, the ports associated with the involved file servers are associated with the private VLAN of Rainfinity. Rainfinity is now in the data path for these file servers and can ensure client access to the data even though it is being dynamically relocated.

Any updates during the migration are synchronized across both the original source and the new destination. If Rainfinity is removed from the network in the middle of a transaction, there is no data integrity risk. All updates are reflected on the source. The clients are still mounting the source. You can plug Rainfinity back in and the transaction resumes.
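The synchronization guarantee just described can be sketched as follows. This is a simplified, assumed model: every client write during a migration is applied to the source first and mirrored to the destination, so the source always remains authoritative even if the appliance is removed mid-transaction.

```python
# Simplified model of in-flight migration consistency. The dicts stand
# in for file servers; names and shapes are invented for illustration.

def apply_write(path, data, source, destination, migrating):
    """Apply a client write; mirror it to the destination during a move."""
    source[path] = data           # the source always gets the update
    if migrating:
        destination[path] = data  # kept in sync while the move is active


src, dst = {}, {}
apply_write("/home/report.doc", b"v1", src, dst, migrating=True)
apply_write("/home/scratch.tmp", b"v2", src, dst, migrating=False)
```

After these two calls the source holds both files, while the destination holds only the write made during the migration window.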

Once the data relocation is complete, Rainfinity updates the global namespace (DFS for Windows, Automount for UNIX, login scripts, or homegrown namespace solutions). The namespace in turn updates the clients with the new location of the data. The original source reflects a point-in-time copy at the end of the transaction and reflects updates made up to that point. Updating client mappings takes time, however, so Rainfinity remains in the data path and redirects client access to the new location. Over time, the number of sessions to redirect decreases as new sessions mount directly to the new location.


Rainfinity FVA Theory of Operation (Cont)

(Figure: the same environment as on the previous slide)

1. Data mobility – Rainfinity is triggered
2. Redirection – Global namespace updated
3. Transaction complete without downtime

When all of the client sessions have been remapped to the new location, Rainfinity completes the migration and the servers move out of the Rainfinity private VLAN into the public VLAN. Rainfinity is now out of the data path.

Rainfinity virtualizes an environment 100% of the time with the namespace providing a logical abstraction layer. Rainfinity selectively virtualizes traffic on the wire based on particular optimization or relocation events that need to take place.

Rainfinity can also handle multiple transactions: only one move transaction is active at a time, and the others are queued. Rainfinity can, however, perform redirection simultaneously for multiple transactions.


File Management Appliance (FMA)

Hardware/software appliance solution

FMA archives and recalls files based on configured rules

Full function
– File archival and recall
– Rule and policy creation and preview
– Scheduling
– Orphan file management
– Stub recovery

(Figure: NAS clients and a file server, with FMA and FMA-HA appliances archiving to Centera and NetApp secondary storage)

File Management Appliance is a hardware and software appliance solution. FMA provides archival and retrieval functionality in a NAS environment. The archive decision is based on configured rules. After archival, a stub file resides on the primary storage and points to the archived file on the secondary storage. All of the data necessary to retrieve a file resides in the stub itself, not in a database on the FMA appliance.

The archival and recall functions reside on the FMA appliance. The FMA-High Availability (FMA-HA) appliance complements an existing FMA by adding high availability and load balancing capabilities when recalling archived data to primary storage. The FMA-HA can be used for recall only and cannot be used for archiving, or any other FMA function.

FMA provides full archiving functionality. These features include archival and recall, rule and policy creation and preview, scheduling, orphan file management and stub file recovery.
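The stub concept can be illustrated with a small, self-contained sketch. The JSON field names here are invented for the example; the point is that the stub itself carries everything needed to locate the archived file, with no lookup in an appliance-side database.

```python
import json

# Illustrative stub-and-recall sketch: the stub left on primary storage
# is a small record pointing at the archived copy on secondary storage.
# Field names are hypothetical, not FMA's actual stub format.

def make_stub(archive_server, archive_path, size):
    """Build the stub text that replaces the file on primary storage."""
    return json.dumps({"archive_server": archive_server,
                       "archive_path": archive_path,
                       "size": size})

def recall(stub_text, secondary_storage):
    """Follow the pointer inside the stub to fetch the archived data."""
    stub = json.loads(stub_text)
    return secondary_storage[stub["archive_path"]]


secondary = {"/archive/0001": b"report contents"}  # stand-in for Centera/NetApp
stub = make_stub("centera1", "/archive/0001", 15)
data = recall(stub, secondary)
```

Because the recall function needs nothing beyond the stub and the secondary store, losing the appliance does not strand the archived data, which matches the design point made above.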


Tiered Storage Management with FMA

Allows for the efficient placement of data, based on capacity and service-level agreements
– Primary Storage: top tier of storage in terms of performance and availability
– Secondary Storage: second level of storage, with lower cost and performance than primary storage
– Policy Engine: intelligence that classifies and migrates files based on pre-established policy

(Figure: the policy engine on the FMA migrating files from primary storage to secondary storage on behalf of NAS clients)

Tiered storage management, or file archival, allows for efficient placement of data based on capacity and service-level agreements. Intelligently placing data in optimal tiers of performance and price lowers the average cost of storage. Enterprises can use lower-cost storage, such as ATA drives or tape, to store less critical data at a fraction of the cost of high-performance storage. Intelligent software that automatically classifies and migrates data based on policy makes this file placement feasible.

Based on configured policies, policy engines migrate data from one storage tier to another. These policies can be based on the size of the data, the length of time since the last access, or the file extension type. The File Management Appliance, or FMA, provides the policy engine functionality that supports the archival process. FMA is used by companies that need to conform to government regulations, policies, and the Information Lifecycle Management process.
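A toy policy engine in the spirit of the rules just described (size, time since last access, file extension) might look like the following. The rule structure is an assumption for illustration, not FMA's actual rule format.

```python
import time

# Hypothetical archival policy check: a file qualifies if it exceeds a
# size threshold, has been idle past a cutoff, or matches an extension.

def should_archive(file_info, policy, now=None):
    """Return True if this file qualifies for migration to a lower tier."""
    now = now if now is not None else time.time()
    if file_info["size"] >= policy["min_size"]:
        return True
    if now - file_info["last_access"] >= policy["max_idle_seconds"]:
        return True
    return any(file_info["name"].endswith(ext)
               for ext in policy["extensions"])


policy = {"min_size": 10 * 1024**2,          # archive files of 10 MB or more
          "max_idle_seconds": 180 * 86400,   # ...or idle for ~6 months
          "extensions": [".iso", ".bak"]}    # ...or with these extensions
```

A real engine would scan the file system on a schedule and archive each qualifying file, leaving a stub behind; this sketch only shows the classification step.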


Rainfinity FMA Theory of Operation

(Figure: the Rainfinity FMA integrated with the DFS/AD, Automount, NIS, and LDAP namespaces)

1. Policy-based file archiving – Rainfinity archives files
2. End-user retrieval – Accessing the stub file retrieves the file

Once data is migrated from primary to secondary storage, a stub (or tag) file exists on the primary storage to direct access to the actual location of the data (secondary storage). When a write occurs to the data, it is typically fully recalled to the primary storage. When a read occurs, the data can be read from the secondary storage, or it can be partially or fully migrated from secondary to primary storage, depending on the storage technology used. FPolicy, the NetApp file archival application interface, requires a full recall of the data on reads and writes.


Rainfinity Hardware

(Figure: front views of the FVA5/FMA5 and FVA6/FMA6 appliances, showing the hot-swap disk drives)

Shown here are the front views of two different Rainfinity boxes. The FVA and FMA appliances use the same hardware and run the same customized Linux Operating System. The FVA5 and the FMA5 appliances are based on DELL 2950 G5 hardware. FVA6 and FMA6 are based on DELL 2950 G6 hardware.

The following is a description of the front of the appliance:

The CD-ROM is used for full CD system upgrades and fresh installs. There are two hot-swappable SCSI hard drives on the G5 hardware, and three hot-swappable drives on the G6 hardware.


Rainfinity Hardware (Cont)

(Figure: back view of the G5 and G6 hardware, showing the hot-swap power supplies, the clustering Ethernet interfaces Eth0–Eth1, and the bridging Ethernet interfaces Eth2–Eth13)

This slide displays the back view of the G5 and G6 Rainfinity boxes seen on the previous slide. The back of the box looks identical for either the G5 or the G6. Notice the two on-board copper ports used for clustering appliances in an active-standby configuration. Please take a moment to review the slide.


FVA GUI and Applications

(Figure: the FVA GUI, showing the Capacity Management, Performance Management, Migration and Consolidation, and Rainfinity Platform applications)

Rainfinity FVA application software provides the functionality of the system. The GUI shown here can be used, along with the CLI, to access the features provided by Rainfinity. The first two applications, Capacity Management and Performance Management, drive storage optimization by visualizing usage trends and exceptions. The next application, Migration and Consolidation, moves data between storage devices. The Rainfinity Platform feature is used to add servers and set up proxy servers, among other things.


Why Rainfinity?

Built for the Enterprise:
– Most scalable
– Safest
– Easiest to deploy

Industry standards based

Enterprise service and support
– EMC’s world-class technical service organization and 24x7 global hardware and software support


To streamline the operations of your file server and NAS environments, the Rainfinity File Virtualization Appliance delivers optimized utilization of storage resources, accelerated storage consolidation, simplified management, and increased protection of critical files.

It does this by simplifying capacity management through non-disruptive data movement and namespace management updates, maintaining a virtual file system of your physical file serving and NAS resources.


Rainfinity Benefits

Supports heterogeneous, multi-vendor environments

No client or server software required

Data consolidation

Leverages industry-standard namespace

Increases data mobility

Operates transparently to clients and applications

Supports multiprotocol moves (CIFS and NFS)


Rainfinity leverages industry-standard global namespaces with a scalable, transparent, file-protocol switching capability. There are many advantages to this approach. It not only limits risk and performance concerns, but also leverages the continuing investment made by large vendors and standards bodies, whether the namespace is DFS or Automount. Rainfinity can also work with existing environments in which a standard namespace is not deployed, such as login scripts.

Rainfinity is a virtualization solution that provides complete transparency during consolidation and data migration, not just transparency of file access, but transparency to the environment. This solution does not require mount-point changes or the deployment of agents on clients or servers. It supports migration of data that is accessed by both NFS and CIFS clients.

Virtualization increases data mobility by providing location independence of files and file systems from the applications and the people who use them.


Module Summary

Key points covered in this module:

Virtualization is the newest approach for consolidating many servers into one

Rainfinity is a dedicated hardware/software solution that manages file-oriented (NFS/CIFS) storage access

Rainfinity combines applications that monitor and move data between storage devices

FVA streamlines operations of file server and NAS environments through non-disruptive data movement, and leverages the customer’s existing environment

These are the key points covered in this module. Please take a moment to review them.


EMC Invista

Upon completion of this module, you will be able to:

Understand the concept and benefits of storage virtualization

Identify the benefits, features, and competitive advantages of an Invista solution

List the hardware and software components of Invista and how they work together to achieve storage virtualization

Understand the advanced software functionality enabled by an Invista solution

Cite how to integrate Invista into an existing SAN and how to design a highly available Invista configuration

Identify the new features and functionality included in the latest release of Invista


The objectives for this module are shown here. Please take a moment to read them.


EMC Invista

Performance architecture
– Leverages next-generation “intelligent” SAN switches for high performance
– Designed to work in enterprise-class environments

Provides advanced functionality
– Dynamic volume mobility
– Network-based volume management
– Heterogeneous point-in-time copies (clones)

Enterprise management
– EMC ControlCenter integration
– EMC Replication Manager integration

Supports heterogeneous environments
– Works with EMC and third-party storage

(Figure: network-based storage virtualization. The Invista Control Path Cluster and Data Path Controllers run in the storage network, presenting virtual volumes and virtual initiators in front of the physical storage)

Invista is a SAN-based storage virtualization solution. Its architecture leverages new intelligent SAN switch hardware from EMC’s Connectrix partners that enables higher levels of scalability and functionality.

Unlike other storage virtualization products, Invista is not appliance-based. Invista delivers consistent, scalable performance across a heterogeneous storage environment, even with highly random I/O applications. Because Invista uses the processing capabilities of intelligent switches, it eliminates the latency and bandwidth issues associated with an “in-band” appliance approach. By using purpose-built switches with port-level processing, this “split-path” architecture delivers wire-speed performance with negligible latency.

EMC’s unique network-based approach to storage virtualization enables key functionalities, such as the ability to move active applications to different tiers of storage non-disruptively, and the ability to leverage clones across a heterogeneous storage environment. These functions work uniformly across qualified hosts and heterogeneous storage arrays.

In addition to integrating discovery and monitoring functions for virtual volumes into EMC ControlCenter, Invista can also be easily managed from a GUI or a command-line interface (CLI).

Invista supports the five major operating systems and storage arrays from EMC, IBM, Hitachi Data Systems, and Hewlett-Packard.


Advanced Software Functionality

Move data non-disruptively
– Move and change primary volumes while the application remains online

Network-based volume management
– Pool storage and manage volumes at the network level

Heterogeneous point-in-time copies
– Create local copies of data for testing and repurposing across multiple types of storage

Heterogeneous remote replication of virtual volumes
– Create remote copies of data for disaster recovery and business continuity

Continuous data protection of virtual volumes
– Point-in-time recovery and application checkpoints

The next-generation hardware, combined with powerful Invista software, enables some unique capabilities, including:

Dynamic volume mobility allows administrators to move primary volumes between heterogeneous storage arrays while the application remains online. This enabler of Information Lifecycle Management allows you to move applications non-disruptively to the appropriate storage tier, based on application requirements and service levels.

Network-based volume management is the basis for what is commonly considered “virtualization.” Invista enables you to create and configure virtual volumes from a heterogeneous storage pool and present them to hosts. It makes sense for the network to be the control point for this—abstracting and aggregating the back-end storage, configuring it, and making it available to all of the connected hosts.

Invista creates clones of virtual volumes. This allows you to extend the use of clones to areas where it was previously impossible due to compatibility issues. For example, you can now create a clone from a high-tier, primary storage array and extend it to a lower-tier, lower-cost storage array. This gives you another local replication option in your tiered storage environment.

Invista integration with EMC RecoverPoint software provides a disaster-recovery capability by enabling IT managers to employ virtualization technologies across multiple sites. RecoverPoint provides continuous data protection by enabling recovery of data from any point-in-time backup or application checkpoint.


Key Invista Benefits

Support for EMC and third-party arrays
– Leverages existing investments in storage capacity and resources

Delivers Information Lifecycle Management
– Enables data movement across multiple storage tiers

Reduces complexity
– Single interface for managing all tiers of storage

Increases operational efficiency by simplifying:
– Movement of data to optimize performance
– Provisioning of storage among multiple vendor arrays

EMC Invista provides support for EMC and third-party arrays, which allows an enterprise to leverage existing investments in storage capacity and resources.

Invista also supports Information Lifecycle Management by enabling data movement across multiple storage tiers.

Invista reduces management complexity by establishing a single interface for managing all tiers of storage.

Finally, Invista increases operational efficiency by simplifying both the movement of data to optimize performance, and the provisioning of storage among multiple vendor arrays.


Invista 2.x Hardware Architecture

Distributed Control Path Clusters (CPCs) with dual Fibre Channel links for cluster interconnect and failover support

IP packet filtering is used to restrict communication to Invista components

Dual Data Path Controllers (DPCs) for redundancy

Metadata stored on an array connected via the SAN

(Figure: the CPC, an Intel host with SP A and SP B, connected by dual FC links for cluster interconnect; DPC-1 and DPC-2 on Fabric A and Fabric B; an IP network; and the metadata store)

The illustration shows the hardware components of an Invista version 2 deployment. The CPC is implemented as a two-node host cluster running on an Intel-based host. Cluster interconnect between the two nodes is implemented using a pair of dedicated point-to-point FC links called CMI links.

In addition to allowing distributed deployment, the CPC platforms use their local hard drives for Bootstrap PSM, and storage space for diagnostic information. They also eliminate the need for standby power supplies, as each SP has its own redundant power supply.

The metadata store resides on SAN volumes which the CPC nodes access via the mirrored fabrics. In Invista terminology, the metadata store is referred to as an Invista Configuration Repository Volume, or ICRV.

A functional IP link between CPC nodes and the DPCs is critical to the operation of the Invista instance. This is provided by a private IP network of two Allied Telesis or two Cisco Catalyst switches—a standard feature of an Invista installation.

Invista V2.0 Patch 2 introduced the IP packet filtering configuration. The packet filtering configuration is the preferred IP deployment for Invista. Previous versions of Invista only supported firewalled configurations. In firewalled deployments, each Invista component is assigned an IP address on a private network behind the firewall, and NAT (Network Address Translation) is used to translate an external IP address to an IP address on the private network.

In packet filtering deployments, customers supply IP addresses for the components of the Invista instance. Packet filtering rules added to the IP switch configuration restrict communication to the Invista components. Packet filtering preserves the network quality provided by a firewall, while eliminating the need for NAT and the limitations imposed by NAT.


ICRV: Metadata Store (SAN Volume)

What is Invista metadata? In general, it is all configuration- and operation-related data that is not production data on the storage elements, including information about:
– The Invista configuration
– Virtual Frames
– HBAs, hosts (front end)
– Storage elements, storage arrays (back end)
– Virtual volumes and mappings to storage elements
– Meta-volume relationships
– Clone data or CPLs (Clone Private LUNs)
– The Invista code itself

Metadata is critical to the operation of Invista. Invista version 2 makes the metadata repository highly available via SAN provisioning.

Invista metadata contains information about the Invista configuration. This includes information about Virtual Frames, HBAs and hosts, storage elements and arrays, virtual volumes and their mappings to storage elements, meta-volume relationships, and the Clone Private LUNs, or CPLs. Also included in the metadata is the Invista code itself.

In general, metadata is all configuration- and operation-related data that is not the customer’s production data on the storage elements.


Theory of Operation – CPC

Runs storage and management applications used to configure and control the Invista instance

The CPC stores information about physical and virtual storage, including:
– Storage elements dedicated to Invista
– Imported storage elements and associated storage elements
– Virtual volumes and associated imported storage elements
– Virtual Frames and the hosts and virtual volumes that belong to them
– Clone Groups and the storage volumes that belong to them

The CPC downloads information to the DPC

Invista management applications run on the CPC. These applications are the Invista Element Manager GUI and the Invista CLI, both of which can be used on a remote platform to monitor and manage the Invista instance.

The CPC stores the following configuration metadata about the Invista instance on the SAN attached ICRV.

Storage element (back-end array volume) information – These volumes have been assigned to the Invista instance. The back-end volumes must be allocated exclusively for Invista usage by the administrator of the storage arrays.

Imported storage element information – Imported storage elements are simply storage elements that have been “imported” into the Invista instance. This identifies storage array capacity that Invista intends to use for creating virtual volumes.

Virtual volume information – Includes the virtual volume name, the storage volume identification (ID), and the imported storage element used to create the virtual volume.

Virtual Frame information – Identifies one or more virtual volumes and the host allowed to access them.

Clone Group information – Includes a data (source) volume and clone (copy) volumes.
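The metadata categories above can be pictured as a small data model. The class and field names below are illustrative, not Invista's actual schema; the sketch only shows how a Virtual Frame ties a host to the virtual volumes it is allowed to access.

```python
from dataclasses import dataclass, field

# Hypothetical model of two metadata categories described above:
# virtual volumes (backed by imported storage elements) and Virtual
# Frames (the host plus the volumes it may access).

@dataclass
class VirtualVolume:
    name: str
    storage_elements: list  # imported back-end volumes backing it

@dataclass
class VirtualFrame:
    host: str
    volumes: list = field(default_factory=list)

    def can_access(self, volume_name):
        # A host may only reach volumes listed in its Virtual Frame.
        return any(v.name == volume_name for v in self.volumes)


vv = VirtualVolume("vv01", ["symm_0123:dev_1A", "clar_77:lun_5"])
frame = VirtualFrame("host-a", [vv])
```

In this toy model, `frame.can_access("vv01")` holds while any volume not in the frame is invisible to the host, echoing the access-control role the metadata plays.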

Page 38: EMC Storage Virtualization Foundations

Theory of Operation – DPC

Currently supported DPCs: Brocade AP-7600B, Brocade PB-48K-AP, and Cisco SSM module

Receives part of its configuration from the CPC

Examines all incoming/outgoing FC frames
– Read/write frames are mapped to the appropriate virtual target or initiator and FC port
– Control frames are passed between the requesting host and the CPC

Serves as a virtual target for hosts

Serves as a virtual initiator for storage arrays

Brocade AP-7600B

Brocade PB-48K-AP (Scimitar)

Cisco SSM Module

The Data Path Controller (DPC) resides in the intelligent switch component of Invista. It receives part of its configuration from the CPC.

The DPC is the center of all traffic in Invista and is located in the path of all host I/O. The DPC examines each read/write Fibre Channel frame generated by hosts and storage arrays and forwards it to the appropriate device based on the physical-to-virtual mapping stored in the metadata. Control frames, or Fibre Channel frames that are not read/write operations, are forwarded to the CPC for processing.
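The split between read/write frames and control frames can be sketched in pseudologic. Note the heavy caveat: the real DPC performs this dispatch in port-level ASICs at wire speed, and the mapping table shown here is a made-up stand-in for the metadata downloaded from the CPC.

```python
# Hypothetical sketch of DPC frame dispatch: read/write frames are remapped
# using the physical-to-virtual mapping; anything else is a control frame
# (or an unmapped I/O) and is forwarded to the CPC. Illustrative only.

READ_WRITE_OPS = {"READ", "WRITE"}

# virtual volume -> (back-end target port, back-end LUN), loaded from the CPC
mapping = {"vv-001": ("array1_port3", "lun5")}

def dispatch(frame):
    op, volume = frame["op"], frame["volume"]
    if op in READ_WRITE_OPS and volume in mapping:
        port, lun = mapping[volume]
        return ("fast_path", port, lun)       # handled at wire speed in the DPC
    return ("forward_to_cpc", op, volume)     # control frame, or no mapping yet

fast = dispatch({"op": "WRITE", "volume": "vv-001"})
control = dispatch({"op": "INQUIRY", "volume": "vv-001"})
```

In this sketch a WRITE to a mapped volume stays on the fast path, while an INQUIRY (a control frame) is punted to the CPC, mirroring the exception handling described on the next slide.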

To the host, the DPC is a virtual target. To a storage array, the DPC is a virtual initiator.

The three currently supported DPCs are shown on the slide. The Brocade AP-7600B is a departmental switch. The Brocade PB-48K-AP and Cisco SSM module are blades that fit into a slot in large enterprise switches.

Page 39: EMC Storage Virtualization Foundations

Separation of Data and Control Path Operations

Data Frame
- Read
- Write

Control Frame
- Inquiry (page 80, 83)
- Read Capacity
- Report LUNs
- Test Unit Ready
- Request Sense
- Reservation
- Group Reservation
- Persistent Reservation
- Format
- Verify
- Rezero Unit

[Diagram: I/O streams enter the Data Path Controller (DPC), which parses and redirects frames; the processed I/O stream continues to the array, while control operations pass to the Control Path Cluster (CPC) running Invista Virtualization Services over the SAL (Switch Abstraction Layer) and SAL-Agent.]

The illustration shows how I/O packets are processed by the Invista intelligent switch. When a command arrives at the DPC, there are two places where processing occurs.

The data path processors are the port-level ASICs that handle the incoming I/O from the host and do the remapping to the back-end storage. In typical operations, more than 95% of I/O is handled by the DPCs. Whatever cannot be handled by the DPCs is termed an exception. An exception might be a SCSI inquiry about the device, or an I/O for which the DPC does not have mapping information. These exceptions are handled by the CPC.

The CPC is where the storage application actually runs. When the system starts up, the CPC loads the mapping tables for the virtual volumes into the DPCs.

Page 40: EMC Storage Virtualization Foundations

Invista Differentiators

Uses intelligent switches and directors with port-level real-time processing

Full wire speed

Sustains high performance across applications with highly random I/O

All data is safely written to attached storage before writes are acknowledged to the host, ensuring a consistent image of data on attached storage

No risk of lost data due to failure of virtualization system

Metadata stored on SAN for additional protection

Invista differentiators include:

Uses intelligent switches and directors with port-level real-time processing. By placing the virtualization function on the switch, Invista utilizes a split-path architecture that leverages the dedicated port processing of intelligent switches to perform virtualization functions in real time. Because the virtualization function happens at “ASIC speed,” the caching used in first-generation virtualization solutions is unnecessary. Read and write operations are performed as if hosts were talking directly to physically attached arrays. This preserves investment in the processing power of the attached storage arrays. It also eliminates the bottlenecks and data integrity risk imposed by putting a caching virtualization controller in front of a storage controller.

Virtualization is done at full wire speed. Therefore, no performance degradation occurs to I/O that is sent over the Fibre Channel SAN.

Sustains high performance across applications with highly random I/O.

All data is safely written to attached storage before writes are acknowledged to the host, ensuring a consistent image of data on attached storage.

No risk of lost data due to failure of the virtualization system, because of the redundant architecture of the Invista components. A typical Invista configuration includes at least two DPCs and CPCs along with redundant connections to the host and array ports.

Metadata is stored on ICRVs that have dual paths from the CPCs through the SAN for additional protection.

Page 41: EMC Storage Virtualization Foundations

Invista Logical Topology

Virtual Targets

Virtual Initiators

Virtual Volumes

DPC

The illustration shows the logical view of Invista.

Virtual Targets are abstract entities that are created by designating specific ports on the switch to be used as front-end ports. On each port designated as front end, a Virtual Target is created that becomes visible in the NameServer on the switch. Each Virtual Target has a unique World Wide Port Name (WWPN). Invista uses virtual targets to map Virtual Volumes to logical devices on the back-end arrays. Each Virtual Volume presented to a host is mapped to a logical device on a back-end array.

Virtual initiators are also abstract entities, created when the intelligent switch is imported. The number of virtual initiators created is switch dependent. A Cisco SSM module creates nine virtual initiators per SSM blade; however, only eight are usable. A Brocade AP-7600B or PB-48K-AP creates 16 virtual initiators (one per port). Like virtual targets, each virtual initiator has a unique WWPN. Invista initiates I/O to the back-end storage arrays (targets) using the virtual initiators.
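The per-switch virtual initiator counts above can be tabulated as follows. This is just a restatement of the text as a lookup table; the structure and function names are invented for the example.

```python
# Virtual initiator counts per supported DPC type, as described above.
# The dictionary layout and helper are illustrative, not an Invista API.

VIRTUAL_INITIATORS = {
    "Cisco SSM":         {"created": 9,  "usable": 8},   # 9 per blade, 8 usable
    "Brocade AP-7600B":  {"created": 16, "usable": 16},  # one per port
    "Brocade PB-48K-AP": {"created": 16, "usable": 16},  # one per port
}

def usable_initiators(switch_type):
    """Return how many virtual initiators can actually carry back-end I/O."""
    return VIRTUAL_INITIATORS[switch_type]["usable"]
```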

Page 42: EMC Storage Virtualization Foundations

High Availability Configuration

Mirrored SAN
– Two separate SANs, dual-HBA hosts
– Supports non-disruptive code upgrade to virtualization components
– Provides HA for switch configurations through fault isolation
– Invista LUNs can be exposed on both fabrics

CPC uses 2 Control Path Nodes
– Active-Active cluster
– LUN ownership model follows CPC nodes

Multiple DPCs
– Failover LUNs across DPCs
– Support for switch upgrades (hardware and firmware)

[Diagram: dual-HBA hosts connect through front-end Layer 2 SAN fabrics (A/B) to the Invista core (CPC and two DPCs), which connects through back-end Layer 2 SANs to the storage arrays.]

In the illustration, each host has two HBAs. Each HBA is cabled into a unique front-end Layer 2 fabric, and each Layer 2 fabric is cabled into a separate DPC. The DPCs share the same CPC for redundancy and the ability to share volume mapping in case a component on one of the paths fails. Each DPC is cabled into a Layer 2 back-end SAN, which is cabled into one port on the back-end arrays.

In the example, “CPC” refers to the dual CPC node cluster present in the standard Invista 2.x configuration.

Page 43: EMC Storage Virtualization Foundations

Invista Sharing an Existing SAN

[Diagram: hosts A through F connect to an Invista instance (DPC A and DPC B, attached to Fabric A and Fabric B) and/or to an existing L2 SAN, in front of heterogeneous storage arrays.]

The diagram shows how an Invista configuration may look when it coexists with a traditional SAN. DPC A and the Fabric A switches are cabled together and are managed as one fabric. DPC B and Fabric B are configured in the same manner.

In this scenario, hosts C, D, and E are directly connected to the Invista environment. Hosts A and F are directly attached to the L2 SAN environment.

Host B has one connection to the Invista instance and another to the L2 SAN. Hosts may be connected in this manner for a number of reasons. They may not be taking part in the virtualized environment or they may be preparing for volume migration to the virtualized environment.

Hosts A, C, D, E, and F can be kept separate from Invista by zoning the HBA directly to the array port. By not zoning the HBA to a virtual target, the host bypasses Invista. However, physical connectivity to the DPC is preserved in case the host is migrated in the future.

Page 44: EMC Storage Virtualization Foundations

Invista Element Manager

The illustration shows the main Invista Element Manager window. To start Element Manager, first log in to a computer that has IP connectivity to the Invista CPCs. The computer must be running a supported browser and Java Runtime Environment (JRE). The Invista Element Manager requires a current JRE version, which is downloadable from java.sun.com. Start the browser and ensure that any pop-up blockers are either disabled or configured to allow the Invista GUI to launch. Enter the IP address of either Invista SP.

When the Network Address Translation (NAT) Configuration dialog box appears, leave the box checked (this is the default option), and choose OK.

The software displays the Invista login dialog box that requests the username and password.

Once logged in, the main Element Manager window appears. From this window, the Storage Admin or Operator can perform or view Invista operations.

Page 45: EMC Storage Virtualization Foundations

Volume Management

Simplify volume presentation and management

– Create, delete, and change functionality

– Provides front-end LUN masking and mapping of storage volumes to the host

– Single HBA driver for all arrays

Centralized volume management and control

– Single Invista console to manage virtual volumes, clones, and mobility jobs

Reduce management complexity of heterogeneous storage

– Single management interface to allocate and reallocate storage resources

Virtual Volumes

Concatenated volume

Invista provides a robust volume management capability.

The illustration shows how virtual volumes are created from the storage elements on the back-end arrays, shown in yellow, red, green, and blue.

Several storage elements can be concatenated together to form a single virtual volume that can then be configured to the host. A concatenated volume is shown in the example.

A virtual volume can consist of an entire storage element, as in the case of the red virtual volume, or it can be a smaller chunk of a storage element, as shown by the green virtual volume.
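The address translation behind a concatenated volume can be sketched as follows: extents from several storage elements are stacked end to end, and a virtual block address resolves to a specific element and an offset within it. The function names and extent sizes are invented for illustration.

```python
# Hypothetical sketch of address translation for a concatenated virtual
# volume. Extent sizes and names are made up for the example.

def build_extent_map(extents):
    """extents: list of (element_name, size_in_blocks) in concatenation order."""
    table, start = [], 0
    for name, size in extents:
        table.append((start, start + size, name))   # [start, end) -> element
        start += size
    return table, start                             # (map, total virtual size)

def resolve(table, lba):
    """Map a virtual block address to (storage element, offset within it)."""
    for start, end, name in table:
        if start <= lba < end:
            return name, lba - start
    raise ValueError("LBA beyond end of virtual volume")

# Three storage elements concatenated into one virtual volume
table, size = build_extent_map([("yellow", 1000), ("red", 500), ("green", 250)])
```

With these sizes, virtual LBA 1200 falls 200 blocks into the "red" extent, which is the kind of remapping the DPC performs per frame.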

Use Element Manager to create, delete, and modify virtual volumes. Element Manager is also used to configure or un-configure virtual volumes to a host.

By using Element Manager, administrators have a tool that can provide all the capabilities needed to manage the Invista instance.

Page 46: EMC Storage Virtualization Foundations

Heterogeneous Point-in-Time Copies: Cloning

Key features

– Can clone a virtual volume to another virtual volume of the same size across heterogeneous array types

– High performance data copy

Use cases – Heterogeneous backup and recovery – Testing, development, training – Parallel processing, reporting, queries

Integrated management – Replication Manager– EMC ControlCenter– Microsoft VSS

The illustration shows how a virtual volume, shown in blue, can be cloned to other virtual volumes of the same size. In the example, there are three clones shown in yellow, red, and green.

Invista permits users to create one or more full copies of a virtual volume. This functionality is performed by Invista, not hosts or arrays, and does not require host CPU cycles. Administrators can use the Element Manager console or the Invista CLI to control cloning operations.

Active clones are managed as a “Clone Group,” which consists of a source volume and one or more clone volumes. Clones can be built on volumes that span heterogeneous arrays. Invista cloning requires that the source and clone volumes be the same size.
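The same-size rule could be enforced with a simple check like the one below. This is illustrative only; the structure and error type are not Invista's.

```python
# Illustrative sketch of the Clone Group rule described above: a group is
# one source volume plus clone volumes, each exactly the source's size.

class CloneGroupError(ValueError):
    pass

def add_clone(group, clone_size_blocks):
    if clone_size_blocks != group["source_size"]:
        raise CloneGroupError("clone must match the source volume's size")
    group["clones"].append(clone_size_blocks)

group = {"source_size": 2048, "clones": []}
add_clone(group, 2048)   # same size as the source: accepted
```

A clone of any other size would be rejected before the copy starts, regardless of which array the clone volume lives on.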

Clones created with Invista can be used for backups, restoring data, testing, report creation, etc.

Page 47: EMC Storage Virtualization Foundations

Dynamic Volume Mobility

Key features

– Non-disruptive, high-speed movement of data across heterogeneous arrays

Use cases
– Reduce planned application downtime
  Roll out application to production
  Move legacy applications to lower tiers of storage
– Reduce migration costs
  Perform lease roll-overs or technology refreshes faster
– Increase ability to meet service levels
  Match storage AND host capacity to application performance requirements
  Integral component of Information Lifecycle Management

In Invista, data mobility refers to the high speed non-disruptive movement of data from one virtual volume to another. The source and destination arrays must be available to Invista. The move is transparent to the host. There is no requirement to reboot or take other action due to the migration. The host “sees” the same virtual volume before and after the data has been moved, regardless of the storage array containing the data. In the example, the data on the green volume is being moved to the blue volume.
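Conceptually, a non-disruptive move copies the data in the background and then atomically retargets the virtual volume's back-end mapping, so the host keeps seeing the same virtual volume throughout. The sketch below is a simplification under that assumption, not Invista's actual migration algorithm (in particular, it omits mirroring of in-flight host writes during the copy).

```python
# Simplified sketch of non-disruptive volume mobility: copy the source's
# contents to the destination, then atomically swap the virtual volume's
# back-end mapping. Illustrative only; real migration must also handle
# host writes that arrive while the copy is in progress.

def migrate(mapping, volume, source, destination):
    # 1. Background copy of the data
    destination["blocks"] = list(source["blocks"])
    # 2. Atomic cutover: the virtual volume now points at the new array
    mapping[volume] = destination["name"]
    return mapping

mapping = {"vv-001": "green_array"}
source = {"name": "green_array", "blocks": [1, 2, 3]}
dest = {"name": "blue_array", "blocks": []}
migrate(mapping, "vv-001", source, dest)
```

After the cutover the host still addresses "vv-001"; only the mapping behind it has changed, which is why no reboot or application action is needed.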

Data mobility is a valuable tool any time the customer needs to move data without impacting the application. For example, it is useful when a lease expires on a storage array and the data needs to be retained. In this case, the data can be moved to the new array while the application is running.

Page 48: EMC Storage Virtualization Foundations


Remote Replication: Virtual to Virtual Volumes

RecoverPoint performs remote replication at the network level, enabling virtual-volume-to-virtual-volume replication

Leading-edge compression reduces bandwidth requirements 3x to 15x, lowering monthly connectivity charges

[Diagram: two sites connected over a WAN. At each site, applications attach through a SAN to an Invista Control Path Cluster and Data Path Controllers in front of physical storage; RecoverPoint appliances with local and remote CDP journals replicate the active volume to the remote volume.]

RecoverPoint is a network-based remote-replication product. It provides:
– Disaster recovery capability for Invista
– Virtual-volume-to-virtual-volume replication, enabling IT managers to employ virtualization technologies across multiple sites for disaster tolerance and enhanced availability
– Synchronous or asynchronous bi-directional replication between sites

RecoverPoint utilizes leading-edge compression technology to reduce bandwidth requirements (by 3x to 15x) to save on leased-line costs. In addition, it incorporates continuous data protection (CDP) to provide protection against data corruption and to ensure data consistency across application volumes at remote sites.
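The bandwidth saving can be put in concrete terms. The change rate below is an invented example figure; only the 3x-15x compression range comes from the text.

```python
# Illustrative arithmetic for the 3x-15x compression claim above: the WAN
# bandwidth needed to replicate a given write change rate shrinks by the
# compression ratio. The 300 Mb/s change rate is a made-up example.

def wan_bandwidth_mbps(change_rate_mbps, compression_ratio):
    return change_rate_mbps / compression_ratio

low_end = wan_bandwidth_mbps(300, 3)    # 3x compression -> 100 Mb/s on the WAN
high_end = wan_bandwidth_mbps(300, 15)  # 15x compression -> 20 Mb/s on the WAN
```

So for a 300 Mb/s change rate, the replication link needs between 20 and 100 Mb/s rather than the full 300 Mb/s, which is where the connectivity savings come from.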

Page 49: EMC Storage Virtualization Foundations

Remote Replication: Virtual to Physical Volumes

RecoverPoint supports heterogeneous storage, enabling virtual to non-virtual replication

– Lowers cost of remote replication of virtual volumes

[Diagram: virtual-to-physical replication. Applications at the production site attach through a SAN to an Invista Control Path Cluster and Data Path Controllers; RecoverPoint replicates the active virtual volume over the WAN, through local and remote CDP journals, to a remote volume on plain physical storage at the target site.]

Unlike some products from EMC’s competitors, RecoverPoint supports heterogeneous storage, as well as virtual-to-virtual or virtual-to-non-virtual deployments. With Invista and RecoverPoint, you have the option of remotely replicating from a virtual environment to Tier 1 or Tier 2 storage in a traditional storage configuration. You do not need to purchase a second Invista instance at the remote site to obtain disaster tolerance, which reduces the costs and complexity at your remote site.

Page 50: EMC Storage Virtualization Foundations

Module Summary

Key points covered in this module:

Storage virtualization concepts

Invista design and benefits

Major hardware components of Invista

Invista management interfaces

Invista services (functionality)

Invista theory of operations

Invista configuration strategies

These are the key points covered in this module. Please take a moment to review them.

Page 51: EMC Storage Virtualization Foundations

Course Summary

Key points covered in this course:

Virtual infrastructure

VMware product differences

File-level virtualization basic concepts

Rainfinity features, functions, and benefits

Benefits, features, and advantages of an Invista solution

Concepts and benefits of storage virtualization

These are the key points covered in this training. Please take a moment to review them.

This concludes the training. Please proceed to the Course Completion slide to take the assessment.