
FAST VP for EMC® Symmetrix® VMAX®

Theory and Best Practices for Planning and Performance

Technical Notes P/N 300-012-014

REV A04

June 2012

This technical notes document contains information on these topics:

- Executive summary

- Introduction and overview

- Fully Automated Storage Tiering

- FAST and FAST VP comparison

- Theory of operation

- Performance considerations

- Product and feature interoperability

- Planning and design considerations

- Summary and conclusion

- Appendix: Best practices quick reference


Executive summary

The EMC® Symmetrix® VMAX® Family with Enginuity™ incorporates a scalable fabric-interconnect design that allows a storage array to seamlessly grow from an entry-level configuration to a 4 PB system. Symmetrix VMAX Series arrays provide predictable, self-optimizing performance and enable organizations to scale out on demand in private cloud environments.

VMAX Series arrays automate storage operations to exceed business requirements in virtualized environments, with management tools that integrate with virtualized servers and reduce administration time in private cloud infrastructures. Customers are able to achieve always-on availability with maximum security, fully nondisruptive operations, and multi-site migration, recovery, and restart to prevent application downtime.

Information infrastructure must continuously adapt to changing business requirements. EMC Symmetrix Fully Automated Storage Tiering for Virtual Pools (FAST VP) automates tiered storage strategies in Virtual Provisioning™ environments by moving workloads between Symmetrix tiers as performance characteristics change over time. FAST VP performs data movements that improve performance and reduce costs, all while maintaining vital service levels.

Introduction and overview

EMC Symmetrix VMAX FAST VP for Virtual Provisioning environments automates the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array. FAST VP proactively monitors workloads at both the LUN and sub-LUN level in order to identify busy data that would benefit from being moved to higher-performing drives. FAST VP also identifies less-busy data that could be moved to higher-capacity drives without existing performance being affected. This promotion/demotion activity is based on policies that associate a storage group to multiple drive technologies, or RAID protection schemes, by way of thin storage pools, as well as the performance requirements of the application contained within the storage group. Data movement executed during this activity is performed nondisruptively, without affecting business continuity and data availability.


Audience

This technical notes document is intended for anyone who needs to understand FAST VP theory, best practices, and associated recommendations as necessary to achieve the best performance for FAST VP configurations. This document is specifically targeted to EMC customers, sales, and field technical staff who are either running FAST VP or are considering FAST VP for future implementation.

Significant portions of this document assume a base knowledge regarding the implementation and management of FAST VP. For information regarding the implementation and management of FAST VP in Virtual Provisioning environments, refer to the Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays Technical Note (P/N 300-012-015).

Fully Automated Storage Tiering

Fully Automated Storage Tiering (FAST) automates the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array, or to an external array using Federated Tiered Storage (FTS).

The primary benefits of FAST include:

- Elimination of manually tiering applications when performance objectives change over time.

- Automating the process of identifying data that can benefit from Enterprise Flash Drives or that can be kept on higher-capacity, less-expensive SATA drives without impacting performance.

- Improving application performance at the same cost, or providing the same application performance at lower cost. Cost is defined as acquisition (both hardware and software), space/energy, and management expense.

- Optimizing and prioritizing business applications, allowing customers to dynamically allocate resources within a single array.

- Delivering greater flexibility in meeting different price/performance ratios throughout the lifecycle of the information stored.

Due to advances in drive technology, and the need for storage consolidation, the number of drive types supported by Symmetrix arrays has grown significantly. These drives span a range of storage service specializations and cost characteristics that differ greatly.


Several differences exist between the four drive technologies supported by the Symmetrix VMAX Series arrays: Enterprise Flash Drive (EFD), Fibre Channel (FC), Serial Attached SCSI (SAS), and SATA. The primary differences are:

- Response time

- Cost per unit of storage capacity

- Cost per unit of storage request processing

At one extreme are EFDs, which have a very low response time, but with a high cost per unit of storage capacity. At the other extreme are SATA drives, which have a low cost per unit of storage capacity, but high response times and high cost per unit of storage request processing. Between these two extremes lie Fibre Channel and SAS drives.

Based on the nature of the differences that exist between these four drive types, the following observations can be made regarding the most suited workload type for each drive.

- Enterprise Flash Drives: EFDs are more suited for workloads that have a high back-end random read storage request density. Such workloads take advantage of both the low response time provided by the drive, and the low cost per unit of storage request processing, without requiring a lot of storage capacity.

- SATA drives: SATA drives are suited toward workloads that have a low back-end storage request density.

- Fibre Channel/SAS drives: Fibre Channel and SAS drives are the best drive type for workloads with a back-end storage request density that is not consistently high or low.

This disparity in suitable workloads presents both an opportunity and a challenge for storage administrators.

To the degree it can be arranged for storage workloads to be served by the best suited drive technology, the opportunity exists to improve application performance, reduce hardware acquisition expenses, and reduce operating expenses (including energy costs and space consumption).

The challenge, however, lies in how to realize these benefits without introducing additional administrative overhead and complexity.

The approach taken with FAST is to automate the process of identifying which regions of storage should reside on a given drive technology, and to automatically and nondisruptively move storage between tiers to optimize storage resource usage accordingly. This also needs to be done while taking into account optional constraints on tier capacity usage that may be imposed on specific groups of storage devices.

FAST and FAST VP comparison

EMC Symmetrix VMAX FAST and FAST VP automate the identification of data volumes for the purposes of relocating application data across different performance/capacity tiers within an array, or to an external array using Federated Tiered Storage (FTS). While the administration procedures used with FAST VP are very similar to those available with FAST, there is a major difference: The storage pools used by FAST VP are thin storage pools.

FAST operates on non-thin, or disk-group-provisioned, Symmetrix volumes. Data movements executed between tiers are performed at the full-volume level. FAST VP operates on Virtual Provisioning thin devices. As such, data movements executed can be performed at the sub-LUN level, and a single thin device may have extents allocated across multiple thin pools within the array, or on an external array using FTS.

Note: For more information on Virtual Provisioning, refer to the Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning Technical Note available at http://powerlink.emc.com.

Note: For more information on Federated Tiered Storage, refer to the Design and Implementation Best Practices for EMC Symmetrix Federated Tiered Storage (FTS) technical note available at http://powerlink.emc.com.

Because FAST and FAST VP support different device types, non-thin and thin, respectively, they both can operate simultaneously within a single array. Aside from some shared configuration parameters, the management and operation of each can be considered separately.

Note: For more information on FAST, refer to the Implementing Fully Automated Storage Tiering (FAST) for EMC Symmetrix VMAX Series Arrays technical note available at http://powerlink.emc.com.

The goal of FAST and FAST VP is to optimize the performance and cost-efficiency of configurations containing mixed drive technologies. While FAST monitors and moves storage in units of entire logical devices, FAST VP monitors data access with much finer granularity. This allows FAST VP to determine the most appropriate tier (based on optimizing performance and cost efficiency) for each 120-track region of storage. In this way, the ability of FAST VP to monitor and move data with much finer granularity greatly enhances the value proposition of automated tiering.

FAST VP builds upon and extends the existing capabilities of Virtual

Provisioning and FAST to provide the user with enhanced Symmetrix

tiering options. The Virtual Provisioning underpinnings of FAST VP

allow FAST VP to combine the core benefits of Virtual Provisioning

(wide striping and thin provisioning) with the benefits of automated

tiering.

FAST VP more closely aligns storage access workloads with the best-suited drive technology than is possible if all regions of a given device must be mapped to the same tier. At any given time, the hot regions of a thin device managed by FAST VP may be mapped to an EFD tier, and the warm parts may be mapped to an FC tier, while the cold parts may be mapped to a SATA tier. By more effectively exploiting drive technology specializations, FAST VP delivers better performance and greater cost efficiency than FAST.

FAST VP also better adapts to shifting workload locality of reference, changes in tier-allocation limits, or storage-group priority. This is due to the fact that the workload may be adjusted by moving less data. This further contributes to making FAST VP more effective at exploiting drive specializations and also enhances some of the operational advantages of FAST. This includes the ability to nondisruptively adjust the quality of storage service (response time and throughput) provided to a storage group.

Theory of operation

There are two components of FAST VP: Symmetrix Enginuity and the FAST controller. Symmetrix Enginuity is the storage operating environment that controls components within the array. The FAST controller is a service that runs on the service processor.


Figure 1. FAST VP components

When FAST VP is active, both components participate in the execution of two algorithms (the intelligent tiering algorithm and the allocation compliance algorithm) to determine appropriate data placement.

The intelligent tiering algorithm uses performance data collected by Enginuity, as well as supporting calculations performed by the FAST controller, to issue data movement requests to the Virtual LUN (VLUN) VP data movement engine.

The allocation compliance algorithm enforces the upper limits of storage capacity that can be used in each tier by a given storage group by also issuing data movement requests to the VLUN VP data movement engine.

Performance time windows can be defined to specify when the FAST VP controller should collect performance data, upon which analysis is performed to determine the appropriate tier for devices. By default, this occurs 24 hours a day.

Defined data movement windows determine when to execute the data

movements necessary to move data between tiers.

Data movements performed by Enginuity are achieved by moving allocated extents between tiers. The size of data movement can be as small as 12 tracks, representing a single allocated thin device extent, but more typically it is a unit known as an extent group, which is 120 tracks.
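To put these granularities in context, the following arithmetic assumes the 64 KB track size used by Symmetrix VMAX Series arrays; it is a worked example only, not output from any EMC tool.

```python
# Worked example of FAST VP movement granularities, assuming the
# 64 KB track size of Symmetrix VMAX Series arrays.
TRACK_KB = 64

extent_tracks = 12          # one thin device extent
extent_group_tracks = 120   # one extent group (10 contiguous extents)

extent_kb = extent_tracks * TRACK_KB              # 768 KB
extent_group_kb = extent_group_tracks * TRACK_KB  # 7,680 KB

print(f"Thin device extent: {extent_kb} KB")
print(f"Extent group: {extent_group_kb} KB ({extent_group_kb / 1024:.1f} MB)")
```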

The following sections further describe each of these algorithms.

Intelligent tiering algorithm

The goal of the intelligent tiering algorithm is to use the performance metrics collected at the sub-LUN level to determine which tier each extent group should reside in, and to submit the needed data movements to the VLUN VP data movement engine. The determination of which extent groups need to be moved is performed by a task that runs within the Symmetrix array.

The intelligent tiering algorithm is structured into two components: a main component that executes within Enginuity and a secondary, supporting component that executes within the FAST controller on the service processor.

The main component assesses whether extent groups need to be moved

in order to optimize the use of the FAST VP storage tiers. If so, the

required data movement requests are issued to the VLUN VP data

movement engine.

When determining the appropriate tier for each extent group, the main component makes use of both the FAST VP metrics, previously discussed, and supporting calculations performed by the secondary component on the service processor.

The intelligent tiering algorithm runs continuously during open data movement windows when FAST is enabled and the FAST VP operating mode is Automatic. As such, performance-related data movements can occur continuously during an open data movement window.

Allocation compliance algorithm

The goal of the allocation compliance algorithm is to detect and correct

situations where the allocated capacity for a particular storage group

within a thin storage tier exceeds the maximum capacity allowed by the

associated FAST policy.


A storage group is considered to be in compliance with its associated FAST policy when the configured capacity of the thin devices in the storage group is located on tiers defined in the policy, and when the usage of each tier is within the upper usage limits specified in the policy.

Compliance violations may occur for multiple reasons, including:

- New extent allocations performed for thin devices managed by FAST VP

- Changes made to the upper usage limits for a VP tier in a FAST policy

- Adding thin devices to a storage group that are themselves out of compliance

- Manual VLUN VP migrations of thin devices

The compliance algorithm attempts to minimize the amount of movement performed to correct compliance that may, in turn, generate movements performed by the intelligent tiering algorithm. This is done by coordinating the movement requests with the analysis performed by the intelligent tiering algorithm to determine the most appropriate extents to be moved, and the most appropriate tier to be moved to, when correcting compliance violations.

When FAST is enabled and the FAST VP operating mode is Automatic, the compliance algorithm runs every 10 minutes during open data movement windows.
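As a rough illustration of this compliance test, the sketch below flags tiers where a storage group's allocated capacity exceeds its policy limits. The data structures, tier names, and function are hypothetical; this is not the FAST controller's internal logic.

```python
def compliance_violations(sg_configured_gb, tier_usage_gb, policy_limits_pct):
    """Return the per-tier capacity (GB) exceeding the policy's upper limits.

    sg_configured_gb:  configured (logical) capacity of the storage group
    tier_usage_gb:     {tier: GB of the group currently allocated there}
    policy_limits_pct: {tier: upper usage limit as a percent of the
                        group's configured capacity}
    """
    violations = {}
    for tier, used_gb in tier_usage_gb.items():
        limit_gb = sg_configured_gb * policy_limits_pct.get(tier, 0) / 100.0
        if used_gb > limit_gb:
            violations[tier] = used_gb - limit_gb  # GB to move elsewhere
    return violations

# Example: a 1,000 GB storage group under a 10/50/100 policy.
print(compliance_violations(
    1000,
    {"EFD": 150, "FC": 400, "SATA": 200},
    {"EFD": 10, "FC": 50, "SATA": 100},
))  # -> {'EFD': 50.0}: 50 GB on EFD exceeds the 10 percent limit
```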

Data movement

Data movements executed by FAST VP are performed by the VLUN VP data movement engine, and involve moving thin device extents between thin pools within the array. Extents are moved by way of a move process only; extents are not swapped between pools.

The movement of extents, or extent groups, does not change the thin device binding information. The thin device still remains bound to the pool it was originally bound to. New allocations for the thin device, as the result of host writes, will continue to come from the bound pool, unless VP allocation by FAST VP is enabled.

To complete a move, the following must hold true (a checklist sketch follows the note below):

- The FAST VP operating mode must be Automatic.

- The VP data movement window must be open.

- The thin device affected must not be pinned.

- There must be sufficient unallocated space in the thin pools included in the destination tier to accommodate the data being moved.

- The destination tier must contain at least one thin pool that has not exceeded the pool reserved capacity (PRC).

Note: If the selected destination tier contains only pools that have reached the PRC limit, then an alternate tier may be considered by the movement task.
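The checklist above reduces to a single boolean test. The function and parameter names below are illustrative only, not Solutions Enabler or Enginuity identifiers.

```python
def can_move_extent_group(operating_mode, movement_window_open,
                          device_pinned, dest_unallocated_gb, required_gb,
                          dest_pool_under_prc):
    # Sketch of the move preconditions listed above.
    return (operating_mode == "Automatic"
            and movement_window_open
            and not device_pinned
            and dest_unallocated_gb >= required_gb  # space in destination tier
            and dest_pool_under_prc)                # a pool below its PRC

print(can_move_extent_group("Automatic", True, False, 50.0, 7.5, True))  # True
print(can_move_extent_group("Automatic", True, True, 50.0, 7.5, True))   # False
```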

Other movement considerations include:

- Only extents that are allocated will be moved.

- No back-end configuration changes are performed during a FAST VP data movement, and, as such, no configuration locks are held during the process.

- As swaps are not performed, there is no requirement for any swap space, such as DRVs, to facilitate data movement.

Data movement time windows are used to specify date and time ranges when data movements are allowed, or not allowed, to be performed. FAST VP data movements run as low-priority tasks on the Symmetrix back end. Data movement windows can be planned so as to minimize impact on the performance of other more critical workloads.

By default, the data movement time window excludes all data movement 24 hours a day, 7 days a week, 365 days a year. There are two types of data movement that can occur under FAST VP: those generated by the intelligent tiering algorithm and those generated by the allocation compliance algorithm. Both types of data movement only occur during user-defined data movement windows.

Movements related to the intelligent tiering algorithm are requested and executed by Enginuity. These data movements are governed by the workload on each extent group, but may only be executed within the constraints of the associated FAST policy. Therefore, a performance movement cannot cause a storage group to become non-compliant with its FAST policy.

Allocation compliance related movements are generated by the FAST

controller and executed by Enginuity. These movements bring the

capacity of the storage group back within the boundaries specified by the

associated policy. Performance information from the intelligent tiering

algorithm is used to determine more appropriate sub-extents to move

when restoring compliance.

When a compliance violation exists, the algorithm generates a data movement request to return the allocations within the required limits. This request explicitly indicates which thin device extents should be moved, and the specific thin pools they should be moved to.

Performance considerations

The Enginuity operating environment collects and maintains performance data that is used by FAST VP. The FAST controller analyzes the performance data to determine the best location to place the thin device data on the defined VP tiers within the Symmetrix array.

Performance metrics

When collecting performance data at the LUN and sub-LUN level for use by FAST VP, Enginuity only collects statistics related to Symmetrix back-end activity that is the result of host I/O. The metrics collected are:

- Read miss

- Write

- Prefetch (sequential read)

The read miss metric accounts for each DA read operation that is performed. Reads to areas of a thin device that have not had space allocated in a thin pool are not counted. Also, read hits, which are serviced from cache, are not considered.

Write operations are counted in terms of the number of distinct DA operations that are performed. The metric accounts for a write when it is destaged; write hits to cache are not considered.

Writes related to specific RAID protection schemes are also not counted. In the case of RAID 1 protected devices, the write I/O is only counted for one of the mirrors. In the case of RAID 5 and RAID 6 protected devices, parity reads and writes are not counted.

Prefetch operations are accounted for in terms of the number of distinct DA operations performed to prefetch data spanning a FAST VP extent. This metric considers each DA read operation performed as a front-end prefetch operation.

Workload related to internal copy operations, such as drive rebuilds, clone operations, VLUN migrations, or even FAST VP data movements, is not included in the FAST VP metrics.

These FAST VP performance metrics provide a measure of activity that assigns greater weight to more recent I/O requests, but are also influenced by less recent activity. By default, based on a workload analysis period of 24 hours, an I/O that has just been received is weighted two times more heavily than an I/O received 24 hours previously.

Note: Performance metrics are only collected during user-defined performance time windows. The times during which metrics are not being collected do not contribute to reducing the weight assigned to those metrics already collected.
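One way to model this weighting is an exponential decay whose half-life equals the workload analysis period, which reproduces the 2:1 ratio described above. This is a sketch of the behavior only; the actual smoothing function Enginuity applies is internal and may differ.

```python
def decayed_weight(age_hours, analysis_period_hours=24.0):
    # Relative weight of an I/O observed `age_hours` ago, modeled as an
    # exponential decay with a half-life of one workload analysis period.
    return 0.5 ** (age_hours / analysis_period_hours)

print(decayed_weight(0))   # 1.0  -> an I/O received just now
print(decayed_weight(24))  # 0.5  -> half the weight 24 hours later
print(decayed_weight(48))  # 0.25
```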

The metrics collected at the sub-LUN level for thin devices under FAST VP control contain measurements that allow FAST VP to make separate data movement requests for each 120-track unit of storage that makes up the thin device. This unit of storage consists of 10 contiguous thin device extents and is known as an extent group.

In order to maintain the sub-LUN-level metrics collected by Enginuity,

the Symmetrix array allocates one cache slot for each thin device that is

under FAST VP control.

When managing metadevices, cache slots are allocated for both the

metahead and for each of the metamembers.

Note: Each cache slot on a Symmetrix VMAX Series array is one track in size.

FAST VP tuning

FAST VP provides a number of parameters that can be used to tune the performance of FAST VP and to control the aggressiveness of the data movements. These parameters can be used to nondisruptively adjust the amount of tier storage that a given storage group is allowed to use, or to adjust the manner in which storage groups using the same tier compete with each other for space.

FAST VP relocation rate

The FAST VP relocation rate (FRR) is a quality of service (QoS) setting for FAST VP and affects the aggressiveness of data movement requests generated by FAST VP. This aggressiveness is measured as the amount of data that is requested to be moved at any given time, and the priority given to moving the data between pools.

Note: The rate at which data is moved between pools can also be controlled

by means of the Symmetrix Quality of Service VLUN setting.

Pool reserved capacity

The pool reserved capacity (PRC) reserves a percentage of each pool included in a VP tier for non-FAST VP activities. The purpose of this is to ensure that FAST VP data movements do not fill a thin pool, and subsequently cause a new extent allocation (a result of a host write) to fail.

The PRC can be set both system-wide and for each individual pool. By default, the system-wide setting is applied to all thin pools that have been included in VP tier definitions. However, this can be overridden for each individual pool by using the pool-level setting.

When the percentage of unallocated space in a thin pool is equal to the PRC, FAST VP no longer performs data movements into that pool. However, data movements may continue to occur out of the pool to other pools. When the percentage of unallocated space becomes greater than the PRC, FAST VP can begin performing data movements into that pool again.
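The gating rule reduces to a simple percentage comparison, sketched below with hypothetical pool figures.

```python
def movements_into_pool_allowed(pool_capacity_gb, allocated_gb, prc_pct):
    # FAST VP stops moving data INTO a pool once the pool's unallocated
    # space falls to the PRC percentage (sketch of the rule above).
    unallocated_pct = 100.0 * (pool_capacity_gb - allocated_gb) / pool_capacity_gb
    return unallocated_pct > prc_pct

# A 10,000 GB pool with the default 10 percent PRC:
print(movements_into_pool_allowed(10000, 9100, 10))  # False: only 9% unallocated
print(movements_into_pool_allowed(10000, 8500, 10))  # True: 15% unallocated
```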

FAST VP time windows

FAST VP utilizes time windows to define certain behaviors regarding performance data collection and data movement. There are two possible window types:

- Performance time window

- Data movement time window

The performance time windows are used to specify when performance

metrics should be collected by Enginuity.

The data movement time windows define when to perform the data relocations necessary to move data between tiers. Separate data movement windows can be defined for full LUN movement, performed by FAST and Optimizer, and sub-LUN data movement performed by FAST VP.

Both performance time windows and data movement windows may be defined as inclusion or exclusion windows. An inclusion time window indicates that the action should be performed during the defined time window. An exclusion time window indicates that the action should be performed outside the defined time window.

Performance time window

The performance time windows are used to identify the business cycle for the Symmetrix array. They specify date and time ranges (past or future) when performance samples should be collected, or not collected, for the purposes of FAST VP performance analysis. The intent of defining performance time windows is to distinguish periods of time when the Symmetrix array is idle from periods when it is active, and to only include performance data collected during the active periods.

By default, performance metrics are collected 24 hours a day, 7 days a week, 365 days a year.

Data movement time window

Data movement time windows are used to specify date and time ranges when data movements are allowed, or not allowed, to be performed. FAST VP data movements run as low-priority tasks on the Symmetrix back end. By default, data movement is prevented 24 hours a day, 7 days a week, 365 days a year.

Storage group priority

When a storage group is associated with a FAST policy, a priority value must be assigned to the storage group. This priority value can be between 1 and 3, with 1 being the highest priority. The default is 2.

When multiple storage groups share the same policy, the priority value is used when the data contained in the storage groups is competing for the same resources in one of the associated tiers. Storage groups with a higher priority are given preference when deciding which data needs to be moved to another tier.

Product and feature interoperability

FAST VP is fully interoperable with all Symmetrix replication technologies: EMC SRDF®, EMC TimeFinder®/Clone, TimeFinder/Snap, and Open Replicator. Any active replication on a Symmetrix device remains intact while data from that device is being moved. Similarly, all incremental relationships are maintained for the moved or swapped devices. FAST VP also operates alongside Symmetrix features, such as Symmetrix Optimizer, Dynamic Cache Partitioning, and Auto-provisioning Groups.

SRDF

Thin SRDF devices, R1 or R2, can be associated with a FAST policy. Extents of SRDF devices can be moved between tiers while the devices are being actively replicated, in either synchronous or asynchronous mode.

TimeFinder/Clone

Both the source and target devices of a TimeFinder/Clone session can be managed by FAST VP. However, the source and target are managed independently, and, as such, may end up with different extent allocations across tiers.


TimeFinder/Snap

The source device in a TimeFinder/Snap session can be managed by FAST VP. However, target device VDEVs are not managed by FAST VP.

TimeFinder VP Snap

The source device in a TimeFinder VP Snap session can be managed by FAST VP. Target devices may also be managed by FAST VP; however, extent allocations that are shared by multiple target devices will not be moved.

Open Replicator for Symmetrix

The control device in an Open Replicator session, push or pull, can have

extents moved by FAST VP.

Virtual Provisioning

All thin devices, whether under FAST VP control or not, may only be bound to a single thin pool. All host-write-generated allocations, or user-requested pre-allocations, are performed to this pool. FAST VP data movements do not change the binding information for a thin device.

It is possible to change the binding information for a thin device without changing any of the current extent allocations for the device. However, when rebinding a device that is under FAST VP control, the thin pool the device is being re-bound to must belong to one of the VP tiers contained in the policy that the device is associated with.

Virtual Provisioning space reclamation

Space reclamation may be run against a thin device under FAST VP control. However, during the space reclamation process, sub-LUN performance metrics are not updated, and data movements are not performed.

Note: If FAST VP is actively moving extents of a device, a request to reclaim space on that device will fail. Prior to issuing the space reclamation task, the device should first be pinned. This suspends any active FAST VP data movements for the device and allows the request to succeed.

Virtual Provisioning T10 unmap

Unmap commands can be issued to thin devices under FAST VP control. The T10 SCSI unmap command for thin devices advises a target thin device that a range of blocks is no longer in use. If this range covers a full thin device extent, that extent can be deallocated and the free space returned to the pool.

If the unmap command range covers only some tracks in an extent, those tracks are marked Never Written by Host (NWBH). The extent is not deallocated; however, those tracks will not have to be retrieved from disk should a read request be performed. Instead, the Symmetrix array immediately returns all zeros.
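The two outcomes can be sketched as follows for a single 12-track thin device extent. The function is illustrative only, not Enginuity logic.

```python
def unmap_extent_range(extent_tracks, unmapped_tracks):
    # Sketch of the rule above: unmapping every track in an extent frees
    # the extent back to the pool; unmapping a subset only marks those
    # tracks Never Written by Host (NWBH), so reads of them return zeros
    # without touching disk.
    if unmapped_tracks >= extent_tracks:
        return "deallocate extent; free space returned to pool"
    return f"mark {unmapped_tracks} of {extent_tracks} tracks NWBH; extent kept"

print(unmap_extent_range(12, 12))  # full extent -> deallocated
print(unmap_extent_range(12, 5))   # partial     -> NWBH only
```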

Virtual Provisioning pool management

Data devices may be added to or removed from a thin pool that is included in a FAST VP tier. Data movements related to FAST VP, into or out of the thin pool, continue while the data devices are being modified.

In the case of adding data devices to a thin pool, automated pool rebalancing may be run. Similarly, when disabling and removing data devices from the pool, they drain their allocated tracks to other enabled data devices in the pool.

While both data device draining and automated pool rebalancing may be active in a thin pool that is included in a VP tier, both of these processes may affect performance of FAST VP data movements.

Virtual LUN VP mobility

A thin device under FAST VP control may be migrated using VLUN VP. Such a migration results in all allocated extents of the device being moved to a single thin pool.

While the migration is in progress, no FAST VP-related data movements are performed. However, once the migration is complete, all allocated extents of the thin device will be available to be retiered.

To prevent the migrated device from being retiered by FAST VP immediately following the migration, it is recommended that the device first be pinned. To re-enable FAST VP-related data movements, the device can later be unpinned.

FAST

Both FAST and FAST VP may coexist within a single Symmetrix array. FAST only performs full device movements of non-thin devices. As such, there is no impact to FAST VP's management of thin devices.

Both FAST and FAST VP share some configuration parameters. These are:

- Workload Analysis Period (WAP)

- Initial Analysis Period (IAP)

- Performance time windows

Symmetrix Optimizer

Symmetrix Optimizer operates only on non-thin devices. As a result, there is no impact on FAST VP's management of thin devices.

Both Optimizer and FAST VP share some configuration parameters. These are:

- Workload Analysis Period (WAP)

- Initial Analysis Period (IAP)

- Performance time windows

Dynamic Cache Partitioning (DCP)

Dynamic Cache Partitioning can be used to isolate storage handling of different applications. As data movements use the same cache partition as the application, movements of data on behalf of one application do not affect the performance of applications that are not sharing the same cache partition.

Auto-provisioning Groups

Storage groups created for the purposes of Auto-provisioning may also be used for FAST VP. However, while a device may be contained in multiple storage groups for the purposes of Auto-provisioning, it may only be contained in one storage group that is associated with a FAST policy (DP or VP).

Should a storage group contain a mix of device types, thin and non-thin, only the devices matching the type of FAST policy it is associated with are managed by FAST.

If it is intended that both device types in an Auto-provisioning storage group are to be managed by FAST and FAST VP, respectively, then separate storage groups need to be created. A storage group with the non-thin devices may then be associated with a policy containing DP tiers. A separate storage group containing the thin devices will be associated with a policy containing VP tiers.

Planning and design considerations

The following sections detail best practice recommendations for planning the implementation of a FAST VP environment.

The best practices documented are based on features available in Enginuity 5876.82.57, Solutions Enabler 7.4, Symmetrix Management Console 7.3.3, and Unisphere® for VMAX 1.0.

FAST VP configuration parameters

FAST VP has multiple configuration parameters that control its behavior. These include settings to determine the effect of past workloads on data analysis, quality of service for data movements, and pool space to be reserved for non-FAST VP activities. Also, performance collection and data movement time windows can be defined.

The following sections describe best practice recommendations for each of these configuration parameters.

Note: For more information on each of these configuration parameters, refer

to the Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP)

for EMC Symmetrix VMAX Series Arrays technical note available at

http://powerlink.emc.com.

Performance time window

The performance time windows specify date and time ranges when performance metrics should be collected, or not collected, for the purposes of FAST VP performance analysis. By default, performance metrics are collected 24 hours a day, every day. Time windows may be defined, however, to include only certain days or times of day, as well as to exclude other time periods.

As a best practice, the default performance window should be left unchanged. However, if there are extended periods of time when the workloads managed by FAST VP are not active, these time periods should be excluded.

Note: The performance time window is applied system-wide. If multiple applications are active on the array, but active at different times, then the default performance time window behavior should be left unchanged.

Data movement time window

Data movement time windows are used to specify date and time ranges when data movements are allowed, or not allowed, to be performed.

The best practice recommendation is that, at a minimum, the data movement window should allow data movements for the same period of time that the performance time windows allow data collection. This allows FAST VP to react more quickly and more dynamically to any changes in workload that occur on the array.

Unless there are specific time periods to avoid data movements, during a backup window, for example, it may be appropriate to set the data movement window to allow FAST VP to perform movements 24 hours a day, every day.

Note: If there is a concern about possible impact of data movements occurring during a production workload, then the FAST VP Relocation Rate can be used to minimize this impact.

Workload Analysis Period

The Workload Analysis Period (WAP) determines the degree to which FAST VP metrics are influenced by recent host activity, as well as less-recent host activity, that takes place while the performance time window is considered open.

The longer the time defined in the WAP, the greater the weight assigned to less-recent host activity.

The best practice recommendation for the WAP is to use the default

value of 168 hours (1 week).

Initial Analysis Period

The Initial Analysis Period (IAP) defines the minimum amount of time a thin device should be under FAST VP management before any performance-related data movements should be applied.

This parameter should be set to a long enough value so as to allow sufficient data samples for FAST VP to establish a good characterization of the typical workload on that device.

At the initial deployment of FAST VP, it may make sense to set the IAP to 168 hours (1 week) to ensure that a typical weekly workload cycle is seen. However, once FAST VP data movement has begun, you may reduce the IAP to 24 hours (1 day), which allows newly associated devices to benefit from FAST VP movement recommendations more quickly.

FAST VP Relocation Rate

The FAST VP Relocation Rate (FRR) is a quality of service (QoS) setting

for FAST VP. The FRR affects the aggressiveness of data movement

requests generated by FAST VP. This aggressiveness is measured as the

amount of data that is requested to be moved at any given time, and the

priority given to moving the data between pools.

Setting the FRR to the most aggressive value, 1, causes FAST VP to attempt to move the most data it can, as quickly as it can. Depending on the amount of data to be moved, an FRR of 1 is more likely to cause impact to host I/O response times, due to the additional back-end overhead being generated by the FAST VP data movements. However, the distribution of data across tiers is completed in a shorter period of time.

Setting the FRR to the least aggressive value, 10, causes FAST VP to greatly reduce the amount of data that is moved, and the pace at which it is moved. This setting causes no impact to host response time, but the final distribution of data takes longer.

Figure 2 shows the same workload, 1,500 IOPS of type OLTP2, being run in an environment containing two FAST VP tiers, Fibre Channel (FC) and Enterprise Flash (EFD). The same test was carried out with three separate relocation rates: 1, 5, and 8.

With an FRR of 1, an initial increase in response time is seen at the two-hour mark, when FAST VP data movement began, while no increase in response time is seen when the relocation rate is set to 8. However, the steady state for the response time is reached in a much shorter period of time with the lower, more aggressive, setting.

Figure 2. Example workload with varying relocation rates

The default value for the FRR is 5. However, the best practice recommendation for the initial deployment of FAST VP is to start with a more conservative value for the relocation rate, perhaps 7 or 8. The reason for this is that when FAST VP is first enabled, the amount of data to be moved is likely to be greater, compared to when FAST VP has been running for some time.

At a later date, when it is seen that the amount of data movement between tiers is less, the FRR can be set to a more aggressive level, possibly 2 or 3. This allows FAST VP to adjust to small changes in workload more quickly.

Pool Reserved Capacity

The Pool Reserved Capacity (PRC) reserves a percentage of each pool

included in a VP tier for non-FAST VP activities. When the percentage of

unallocated space in a thin pool is equal to the PRC, FAST VP no longer

performs data movements into that pool.

The PRC can be set both as a system-wide setting and for each individual pool. If the PRC has not been set for a pool, or the PRC for the pool has been set to NONE, then the system-wide setting is used.

For the system-wide setting, the best practice recommendation is to use

the default value of 10 percent.

For individual pools, if thin devices are bound to the pool, the best practice recommendation is to set the PRC based on the lowest allocation warning level for that thin pool. For example, if a warning is triggered when a thin pool has reached an allocation of 80 percent of its capacity, then the PRC should be set to 20 percent. This ensures that the remaining 20 percent of the pool is only used for new host-generated allocations, and not FAST VP data movements.

If no thin devices are bound to the pool, or are going to be bound, then the PRC should be set to the lowest possible value, 1 percent.

VP Allocation by FAST Policy

The VP allocation by FAST policy feature allows new allocations to come

from any of the thin pools included in the FAST VP policy that the thin

device is associated with.

With this feature enabled, FAST VP attempts to allocate new writes in the most appropriate tier first, based on available performance metrics. If no performance metrics are available, the allocation is attempted in the pool the device is bound to.

If the pool initially chosen to allocate the data is full, FAST VP then looks to other pools contained within the FAST VP policy and allocates from there. As long as there is space available in at least one of the pools within the policy, all new extent allocations will be successful.
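This allocation order can be sketched as a priority search over the pools in the policy. The pool names, the `free_gb` map, and the function itself are hypothetical stand-ins for the real pool state, not Enginuity internals.

```python
def choose_allocation_pool(policy_pools, bound_pool, free_gb, best_by_metrics=None):
    # Sketch of the order described above: the tier FAST VP judges most
    # appropriate (when metrics exist) is tried first, then the bound
    # pool, then any other pool in the policy with free space.
    ordered = [p for p in [best_by_metrics, bound_pool] + policy_pools
               if p is not None]
    candidates = list(dict.fromkeys(ordered))  # de-duplicate, keep order
    for pool in candidates:
        if free_gb.get(pool, 0) > 0:
            return pool
    return None  # every pool in the policy is full

pools = ["EFD_pool", "FC_pool", "SATA_pool"]
print(choose_allocation_pool(pools, "FC_pool",
                             {"EFD_pool": 0, "FC_pool": 0, "SATA_pool": 120}))
# -> 'SATA_pool': the bound FC pool is full, so another policy pool is used
```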

The allocation by policy feature is enabled at the Symmetrix array level and applies to all allocations for all devices managed by FAST VP. The feature is either enabled or disabled (the default setting is disabled). When disabled, new allocations only come from the pool the thin device is bound to.

As a best practice, it is recommended that VP allocation by FAST policy be enabled.

Note: For more information on the decision-making process of the VP allocation by FAST policy feature, refer to the Advanced FAST VP features section of the Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays technical note.

FAST VP tier configuration

A Symmetrix storage tier is a specification of a set of resources of the same disk technology type (EFD, FC/SAS, SATA, or FTS storage) combined with a given RAID protection type (RAID 1, RAID 5, RAID 6, or Unprotected), and the same emulation (FBA or CKD).

Note: The unprotected RAID type may only be applied to a tier residing on an FTS-connected storage array.

FAST VP tiers can contain between one and four thin storage pools. Each thin pool must contain data devices configured on the same drive technology and emulation (and the same rotational speed, in the case of FC, SAS, and SATA drives). However, two or more thin pools containing data devices configured on rotating drives of different speeds may be combined in a single VP tier.

Drive-size considerations

Drive size is not a factor when adding data devices to a thin pool. For example, data devices configured on 300 GB FC 15k drives can coexist in a pool with data devices configured on 600 GB FC 15k drives.

However, when planning to have data devices on different drive sizes exist in the same storage tier, it is recommended to create two separate pools, one for each drive size, and then combine those two pools into a single tier.

Note: For more information on best practices for configuring thin pools, refer to the Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning Technical Note available at http://powerlink.emc.com.

External tiers

When creating a tier on an external array using FTS, it is recommended that the eDisks be configured on a single array. The eDisks should also be configured on similar drives within that external array.

External tiers can only be configured on storage configured for external provisioning. Encapsulated storage is not supported by FAST VP.

Note: For more information on best practices for configuring eDisks, refer to the Design and Implementation Best Practices for EMC Symmetrix Federated Tiered Storage (FTS) technical note available at http://powerlink.emc.com.

FAST VP policy configuration

A FAST VP policy groups between one and three VP tiers and assigns an upper usage limit for each storage tier. The upper limit specifies the maximum amount of the capacity of a storage group associated with the policy that can reside on that particular tier. The upper capacity usage limit for each storage tier is specified as a percentage of the configured, logical capacity of the associated storage group.

Figure 3. Storage tier, policy, storage group association

The usage limit for each tier must be between 1 percent and 100 percent. When combined, the upper usage limits for all thin storage tiers in the policy must total at least 100 percent, but may be greater than 100 percent.
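These two rules can be checked mechanically, as in the sketch below; the tier names and the function are illustrative only.

```python
def valid_fast_vp_policy(limits_pct):
    # Check the policy rules above: one to three tiers, each limit between
    # 1 and 100 percent, and the limits summing to at least 100 percent.
    return (1 <= len(limits_pct) <= 3
            and all(1 <= pct <= 100 for pct in limits_pct.values())
            and sum(limits_pct.values()) >= 100)

print(valid_fast_vp_policy({"EFD": 10, "FC": 50, "SATA": 100}))  # True (160%)
print(valid_fast_vp_policy({"EFD": 10, "FC": 50}))               # False (only 60%)
```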

Tier ranking

FAST VP ranks tiers within a policy based on known performance models. For internal tiers, FAST VP orders the three available technologies in terms of performance capabilities in descending order: EFD, Fibre Channel/SAS, and SATA.

FAST VP has no mechanism for determining the potential performance

of an external, FTS tier. As a result, any external tier included in a policy

is considered to be the lowest tier in that policy.

Tier usage limits

Creating a policy with a total upper usage limit greater than 100 percent allows flexibility with the configuration of a storage group, whereby data may be moved between tiers without necessarily having to move a corresponding amount of other data within the same storage group.

The ideal FAST VP policy would be to specify 100 percent for each of the included tiers. Such a policy would provide the greatest amount of flexibility to an associated storage group, as it would allow 100 percent of the storage group's capacity to be promoted or demoted to any tier within the policy.

While ideal, operationally it may not be appropriate to deploy the 100/100/100 policy. There may be reasons to limit access to a particular tier within the array.

As an example, it may be appropriate to limit the amount of a storage group's capacity that can be placed on EFD. This may be used to prevent one single storage group, or application, from consuming all of the EFD resources. In this case, a policy containing just a small percentage for EFD would be recommended.

Similarly, it may be appropriate to restrict the amount of SATA capacity a storage group may utilize. Some applications, which can become inactive from time to time, may require a minimum level of performance when they become active again. For such applications, a policy excluding the SATA tier could be appropriate.

The best way to determine appropriate policies for a FAST VP implementation is to examine the workload skew for the application data to be managed by FAST VP. The workload skew defines an asymmetry in data usage over time. This means a small percentage of the data on the array may be servicing the majority of the workload on the array. One tool that provides insight into this workload skew is Tier Advisor.

Tier Advisor

Tier Advisor is a utility available to EMC technical staff that estimates the performance and cost of mixing drives of different technology types (EFD, FC, and SATA) within Symmetrix VMAX Series storage arrays. Tier Advisor can examine performance data collected from Symmetrix, VNX®, or CLARiiON® storage arrays and determine the workload skew at the full LUN level. It can also estimate the workload skew at the sub-LUN level.

With this information, Tier Advisor can model an optimal storage array configuration by enabling the ability to interactively experiment with different storage tiers and storage policies, until achieving the desired cost and performance preferences.

Thin device binding

Each thin device, whether under FAST VP control or not, may only be bound to a single thin pool. By default, all host-write-generated allocations, or user-requested pre-allocations, are performed from this pool.


From an ease-of-management and reporting perspective, it is recommended that all thin devices be bound to a single pool within the Symmetrix array.

Note: FAST VP data movements do not change the binding information for a thin device.

In determining the appropriate pool, both performance requirements and capacity management should be taken into consideration, as well as the use of thin device preallocation and the system write pending limit.

Note: Unless there is a very specific performance need, it is not recommended to bind thin devices, under FAST VP control, to the EFD tier.

Performance consideration

With VP allocation by FAST policy enabled, FAST VP attempts to allocate new extents in the most appropriate tier. This is based on available performance metrics for the thin device being allocated. Should performance metrics not be available, the allocation comes from the pool the thin device is bound to.

As such, the best practice recommendation is to bind all thin devices to a pool within the second highest tier in the policy.

In a three-tier configuration (EFD, FC, and SATA), this would imply the Fibre Channel tier. In an FC and SATA configuration, this would imply the SATA tier. Once the data has been written, and space has been allocated in the pool, FAST VP then makes the decision to promote or demote the data as appropriate.
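That guideline amounts to picking the second entry in the policy's performance ranking, as the sketch below shows. The function is illustrative only.

```python
def recommended_bind_tier(policy_tiers_best_first):
    # Bind to the second highest tier in the policy, per the guideline
    # above, or to the only tier if the policy contains just one.
    if len(policy_tiers_best_first) >= 2:
        return policy_tiers_best_first[1]
    return policy_tiers_best_first[0]

print(recommended_bind_tier(["EFD", "FC", "SATA"]))  # FC
print(recommended_bind_tier(["FC", "SATA"]))         # SATA
```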

Capacity management consideration

Binding all thin devices to a single pool ultimately causes that single pool to be oversubscribed. If the allocation by policy feature is not enabled, this could potentially lead to issues as the pool fills up. Host writes to unallocated areas of a thin device will fail if there is insufficient space in the bound pool.

In an EFD, FC, SATA configuration, if the FC tier is significantly smaller than the SATA tier, binding all thin devices to the SATA tier reduces the likelihood of the bound pool filling up.

If performance needs require the thin devices to be bound to FC, the Pool Reserved Capacity, the FAST VP policy configuration, or a combination of both, can be used to alleviate a potential pool-full condition.

Note: Enabling VP allocation by FAST policy on the Symmetrix VMAX Series array greatly alleviates the capacity-management consideration.

Preallocation

A way to avoid writes to a thin device failing due to a pool being fully allocated is to preallocate the thin device when it is bound. However, performance requirements of newly written data should be considered prior to using preallocation.

When FAST VP performs data movements, only allocated extents are moved. This applies not only to extents allocated as the result of a host write, but also to extents that have been preallocated. These preallocated extents will be moved even if no data has yet been written to them.

Note: When moving preallocated, but unwritten, extents, no data is actually moved. The pointer for the extent is simply redirected to the pool in the target tier.

Preallocated, but unwritten, extents show as inactive and, as a result, are demoted to the lowest tier included in the associated FAST VP policy. When these extents are eventually written to, the write performance will be that of the tier they have been demoted to.

A best practice recommendation is to not preallocate thin devices managed by FAST VP. Preallocation should only be used selectively for those devices that can never tolerate a write failure due to a full pool.

System write pending limit

FAST VP is designed to throttle back data movements, both promotions and demotions, as the write pending count approaches the system write pending limit. This throttling gives an even higher priority to host I/O to ensure that tracks marked as write pending are destaged appropriately.

Note: By default, the system write pending limit on a Symmetrix VMAX Series array is set to 75 percent of the available cache.

If the write pending count reaches 60 percent of the write pending limit, FAST VP data movements stop. As the write pending count decreases below this level, data movements automatically restart.
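
As a worked illustration of these thresholds, the sketch below computes the point at which movements would stop for a hypothetical cache size. The function name and the use of slot counts are assumptions; the 75 percent and 60 percent figures come from the text.

    def fast_vp_movement_allowed(write_pending_count: int, cache_slots: int) -> bool:
        """True if FAST VP data movements would still run (illustrative)."""
        wp_limit = 0.75 * cache_slots        # default system write pending limit
        stop_threshold = 0.60 * wp_limit     # i.e. 45 percent of total cache
        return write_pending_count < stop_threshold

    # Example with a hypothetical cache of 1,000,000 slots:
    print(fast_vp_movement_allowed(400_000, 1_000_000))  # True  -- movements continue
    print(fast_vp_movement_allowed(450_000, 1_000_000))  # False -- movements stopped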

A very busy workload running on SATA disks, with the disks at, or near, 100 percent utilization and causing a high write pending count, can therefore prevent FAST VP from promoting active extents to the FC and/or EFD tiers. In an environment with a high write workload, it is recommended to bind thin devices to a pool in the FC tier.


Migration

When performing a migration to thin devices, it is possible that the thin devices may become fully allocated as a result of the migration. As such, the thin devices being migrated to should be bound to a pool that has sufficient capacity to contain the full capacity of each of the devices.

Virtual Provisioning zero space reclamation can be used following the migration to deallocate zero data copied during the migration. Alternatively, the front-end zero detection capabilities of SRDF and Open Replicator can be used during the migration.

Rebinding a thin device

It is possible to change the binding information for a thin device without moving any of the current extent allocations for the device. This is done by a process called rebinding.

Rebinding a thin device increases the subscription level of the pool the device is bound to, and decreases the subscription level of the pool it was previously bound to. However, the allocation levels in both pools remain unchanged.

Note: When rebinding a device that is under FAST VP control, the thin pool the device is being rebound to must belong to one of the VP tiers contained in the policy the device is associated with.
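
The bookkeeping effect of a rebind can be modeled as follows. This is a hypothetical sketch for illustration: subscription counters change, while per-pool allocation counters are deliberately left untouched.

    from dataclasses import dataclass

    @dataclass
    class Pool:
        subscribed_gb: float = 0.0
        allocated_gb: float = 0.0   # unaffected by rebinding

    @dataclass
    class BoundDevice:
        size_gb: float
        bound_pool: str

    def rebind(device: BoundDevice, pools: dict, new_pool: str) -> None:
        """Rebind a thin device: move its subscription, not its extents."""
        pools[device.bound_pool].subscribed_gb -= device.size_gb  # old pool drops
        pools[new_pool].subscribed_gb += device.size_gb           # new pool rises
        device.bound_pool = new_pool
        # Note: no extents are moved and no allocated_gb values change.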

Thin pool oversubscription

Thin pool oversubscription allows an organization to present larger-than-needed devices to hosts and applications without having to purchase enough physical drives to fully allocate all of the space represented by the thin devices.

Note: For more information on oversubscription, refer to the Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning technical note available at http://powerlink.emc.com.

In a FAST VP configuration, the capacity available for thin devices is now spread across multiple pools. Even if there is no plan to oversubscribe the available capacity in the Symmetrix array, there typically is insufficient space in a single tier to accommodate the configured thin device capacity.

Following the previous recommendation, binding all thin devices being managed by FAST VP to a single thin pool inherently causes that pool to be oversubscribed. As a result, the oversubscription limit for that pool needs to be set to a value greater than 100 percent.


Available capacity

To determine at what level to set the oversubscription limit, the total capacity available to devices under the control of FAST VP should first be calculated.

If the VP allocation by FAST policy feature is enabled, then 100 percent of the capacity of each pool within the policy is available.

If the allocation by policy feature is not enabled, the following points should be considered:

- For pools with thin devices bound, 100 percent of the capacity is available.

- For pools with no thin devices bound, 100 percent of the pool, less the capacity reserved by the PRC, is available.

Note: The pool reserved capacity value only affects FAST VP movements into the pool; the PRC does not affect the ability of thin devices to allocate extents within the pool.

Over-subscription limits

After determining the capacity available to FAST VP, an oversubscription limit can then be calculated for the pool the devices are going to be bound to.

To ensure the configured capacity of the array is not oversubscribed, the limit can be calculated by dividing the available capacity of all the pools by the capacity of the thin pool being used for binding. This value is then multiplied by 100 to get a percentage.

For example, consider a configuration with a 1 TB EFD pool, a 5 TB FC pool, and a 10 TB SATA pool. The total available capacity is 16 TB. If all thin devices are bound to FC, the oversubscription limit could be set to 320 percent: (16/5) * 100.
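
The calculation can be expressed as a small helper. The function below is an illustrative sketch, not an EMC tool; it also folds in the PRC rule from the "Available capacity" section for the case where allocation by policy is disabled.

    def oversubscription_limit_pct(bound_pool, pool_capacities, prc_pct=None,
                                   allocation_by_policy=True):
        """Oversubscription limit (%) for the pool thin devices are bound to."""
        prc_pct = prc_pct or {}
        available = 0.0
        for pool, capacity in pool_capacities.items():
            if allocation_by_policy or pool == bound_pool:
                available += capacity  # 100 percent of the pool is available
            else:
                # Unbound pools contribute capacity less the PRC reservation.
                available += capacity * (1 - prc_pct.get(pool, 0.0) / 100.0)
        return available / pool_capacities[bound_pool] * 100.0

    # The example from the text: 1 TB EFD, 5 TB FC, 10 TB SATA, bound to FC.
    pools = {"EFD": 1, "FC": 5, "SATA": 10}  # capacities in TB
    print(oversubscription_limit_pct("FC", pools))  # 320.0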

This value can be set higher if the intention is to initially oversubscribe the configured physical capacity of the array, and then add storage on an as-needed basis.

For thin pools where devices will never be bound, the subscription limit can be set to 0 percent. This prevents any accidental binding or migration to that pool.

RAID protection considerations

When designing a Virtual Provisioning configuration, and particularly when choosing a corresponding RAID protection strategy, both device-level performance and availability implications should be carefully considered.

Note: For more information on these considerations, refer to the Best Practices for Fast, Simple Capacity Allocation with EMC Symmetrix Virtual Provisioning technical note available at http://powerlink.emc.com.

FAST VP does not change these considerations and recommendations from a performance perspective. What FAST VP does change, however, is that a single thin device can now have its data spread across multiple tiers of varying RAID protection and drive technology within the array.

Because of this, the availability of an individual thin device is no longer based solely on the availability characteristics of the thin pool the device is bound to. Instead, availability is based on the characteristics of the tier with the lowest availability.

While performance and availability requirements ultimately determine the configuration of each tier within the Symmetrix array, it is recommended, as a best practice, to choose RAID 1 or RAID 5 protection on EFDs. The faster rebuild times of EFDs provide higher availability for these protection schemes on that tier.

It is also recommended to use either RAID 1 or RAID 6 on the SATA tier, due to the slower rebuild times of SATA drives (compared to EFD and FC) and the increased chance of a dual-drive failure leading to data unavailability with RAID 5 protection.

For the FC tier, RAID 1 is the recommended protection level. Mirrored data devices in FC pools provide a higher level of performance than both RAID 5 and RAID 6, particularly for write workloads. The availability of RAID 1, with regard to a dual-drive failure, is also greater than that of RAID 5. To obtain the best availability with RAID 1 on FC, the use of lower-capacity drives is recommended.

Note: For new environments being configured in anticipation of implementing FAST VP, or for existing environments having additional tiers added, it is highly recommended that EMC representatives be engaged to assist in determining the appropriate RAID protection schemes for each tier.

Drive configuration

For most customer configurations, the best practice configuration guidelines recommend an even, balanced distribution of physical disks across the disk adapters (DAs) and DA CPUs.

This is of particular relevance for EFDs, which are each able to support thousands of I/Os per second and are, therefore, able to create a load on the DA equivalent to that of multiple hard disk drives.

An even distribution of I/O across all DA CPUs is optimal to maximize their capability, and the overall capability of the Symmetrix array.

However, there are scenarios where it is not appropriate to evenly distribute disk resources, such as when a customer requires partitioning and isolation of disk resources to separate customer environments and workloads within the VMAX Series array.

Generally, it is appropriate to configure more, smaller drives rather than fewer, larger drives for both the EFD and FC tiers. Doing so spreads the I/O load as widely as possible across the Symmetrix back end.

For example, on a 2-engine VMAX Series array with 16 DAs, if the EFD raw capacity requirement is 3.2 TB, it would be more optimal to configure 16 x 200 GB EFDs than 8 x 400 GB EFDs, as each DA could then have one EFD configured on it.
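
The arithmetic behind this example is simple enough to script. The helper below is a hypothetical illustration of the trade-off, not a sizing tool:

    import math

    def drive_layout(raw_capacity_gb: float, drive_size_gb: float, num_das: int):
        """Number of drives needed and how widely they spread across DAs."""
        drives = math.ceil(raw_capacity_gb / drive_size_gb)
        return drives, drives / num_das

    # 3.2 TB raw EFD on a 2-engine array with 16 DAs:
    for size in (200, 400):
        drives, per_da = drive_layout(3200, size, 16)
        print(f"{drives} x {size} GB EFDs -> {per_da:.2f} EFDs per DA")
    # 16 x 200 GB spreads one EFD to every DA; 8 x 400 GB loads only half the DAs.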

Storage group priority

When a storage group is associated with a FAST policy, a priority value must be assigned to the storage group. This priority value can be between 1 and 3, with 1 being the highest priority. The default is 2.

When multiple storage groups are associated with FAST VP policies, the priority value is used when the data contained in the storage groups is competing for the same resources on one of the FAST VP tiers. Storage groups with a higher priority are given preference when deciding which data needs to be moved to another tier.

The best practice recommendation is to use the default priority of 2 for all storage groups associated with FAST VP policies. These values may then be modified if, for example, a high-priority application is not getting sufficient resources from a higher tier.

SRDF

FAST VP has no restrictions in its ability to manage SRDF devices; however, it must be considered that data movements are restricted to the array upon which FAST VP is operating. By default, there is no coordination of data movements; FAST VP acts independently on both the local and remote arrays. However, enabling FAST VP SRDF coordination allows R1 performance metrics to be used when making promotion and demotion decisions for data belonging to an R2 device.

Note: For more information on the operation of FAST VP SRDF coordination, refer to the Advanced FAST VP features section of the Implementing Fully Automated Storage Tiering for Virtual Pools (FAST VP) for EMC Symmetrix VMAX Series Arrays technical note available at http://powerlink.emc.com.

As a general best practice, FAST VP should be employed for both R1 and R2 devices, particularly if the remote R2 array has also been configured with tiered storage capacity. Where possible, similar FAST VP tiers and policies should be configured at each site.

When available, FAST VP SRDF coordination should be enabled for each storage group containing SRDF devices that is associated with a FAST VP policy.

Each SRDF configuration, however, presents its own unique behaviors and workloads. Prior to implementing FAST VP in an SRDF environment, the information in the following sections should be considered.

Note: The following sections assume that SRDF is implemented with all Symmetrix arrays configured for Virtual Provisioning (all SRDF devices are thin devices) and installed with the minimum Enginuity version capable of running FAST VP (5875.135.91).

FAST VP behavior

For SRDF R1 devices, FAST VP promotes and demotes extent groups based on the read and write activity experienced on the R1 devices. By default, however, the SRDF R2 devices typically experience only write activity during normal operations. As a result, FAST VP is likely to promote only those R2 device extents that are experiencing writes.

If there are R1 device extents that experience only read activity, and no writes, then the corresponding extents on the R2 devices will see no I/O activity. This will likely lead to these R2 device extents being demoted to the SATA tier, assuming it was included in the FAST VP policy.

SRDF coordination changes this default behavior by periodically transmitting the FAST VP performance metrics collected for R1 devices across the SRDF link to the corresponding R2 devices. On the R2 device, the R1 performance metrics are merged with the actual R2 metrics. Movement decisions made by FAST VP for these R2 devices are then based on the merged performance metrics. In this way, read activity on the R1 device can be reflected on the R2.
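
A small sketch makes the merge easier to picture. The structure, field names, and the merge rule below are assumptions for illustration; the real merge logic is internal to Enginuity and the FAST controller.

    from dataclasses import dataclass

    @dataclass
    class ExtentMetrics:
        reads_per_hr: float
        writes_per_hr: float

    def merge_r2_metrics(r1: ExtentMetrics, r2: ExtentMetrics) -> ExtentMetrics:
        """Merge R1 metrics (sent over the SRDF link) with local R2 metrics."""
        return ExtentMetrics(
            # R2 extents see almost no reads, so R1 read activity dominates here.
            reads_per_hr=r1.reads_per_hr + r2.reads_per_hr,
            writes_per_hr=max(r1.writes_per_hr, r2.writes_per_hr),
        )

    # A read-heavy R1 extent keeps its R2 counterpart "hot" after the merge,
    # so it is promoted rather than demoted to SATA.
    print(merge_r2_metrics(ExtentMetrics(900, 50), ExtentMetrics(0, 50)))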

SRDF operating mode

EMC best practices, for both synchronous and asynchronous modes of SRDF operation, recommend implementing a balanced configuration on both the R1 and R2 Symmetrix arrays. Ideally, data on each array would be located on devices configured with the same RAID protection type, on the same drive type.

As FAST VP operates independently on each array, and also promotes and demotes data at the sub-LUN level, there is no guarantee that such a balance can be maintained.

In SRDF synchronous (SRDF/S) mode, host writes are transferred synchronously from R1 to R2. These writes are only acknowledged to the host when the data has been received into cache on the remote R2 array. The writes to cache are then destaged asynchronously to disk on the R2 array.

Note: For more information on SRDF/S, see the EMC Solutions Enabler Symmetrix SRDF Family CLI Product Guide available at http://powerlink.emc.com.

In an unbalanced configuration, where the R2 data resides on a lower-performing tier than on the R1, performance impact may be seen at the host if the number of write pendings builds up and writes to cache are delayed on the R2 array.

With FAST VP, this typically does not cause a problem, as the promotions that occur on the R2 side are the result of write activity. Areas of the thin devices under heavy write workload are likely to be promoted to, and maintained on, the higher-performing tiers on the R2 array.

To avoid potential problems with the configuration becoming unbalanced, it is recommended that FAST VP SRDF coordination be enabled in SRDF/S environments.

In SRDF asynchronous (SRDF/A) mode, host writes are transferred asynchronously in predefined time periods, or delta sets. At any given time, there are three delta sets in effect: the capture set, the transmit/receive set, and the apply set.

Note: For more information on SRDF/A, see the EMC Solutions Enabler Symmetrix SRDF Family CLI Product Guide available at http://powerlink.emc.com.

A balanced SRDF configuration is more important for SRDF/A, as data cannot transition from one delta set to the next until the apply set has completed destaging to disk. If the data resides on a lower-performing tier on the R2 array, compared to the R1, then the SRDF/A cycle time may elongate and eventually cause the SRDF/A session to drop.


As with SRDF/S mode, in most environments this may not be a large issue, as the data under write workload is promoted to, and maintained on, the higher-performing tiers.

To avoid potential problems with the configuration becoming unbalanced, it is strongly recommended that FAST VP SRDF coordination be enabled in SRDF/A environments.

SRDF/A Delta Set Extension (DSE) should be considered to prevent SRDF/A sessions from dropping in situations where writes propagated to the R2 array are being destaged to a lower tier, potentially causing an elongation of the SRDF/A cycle time.

Note: For more information on SRDF/A DSE, see the Best Practices for EMC SRDF/A Delta Set Extension technical note available at http://powerlink.emc.com.

SRDF failover

As FAST VP works independently on both the R1 and R2 arrays, it should be expected that the data layout will be different on each side if SRDF coordination is not enabled. When an SRDF failover operation is performed, and host applications are brought up on the R2 devices, it should then also be expected that the performance characteristics on the R2 will be different from those on the R1.

In this situation, it takes FAST VP some period of time to adjust to the change in workload and start promotion and demotion activities based on the mixed read-and-write workload.

To better prepare for a failover, and to shorten the time needed to adjust to the new workload, FAST VP SRDF coordination should be enabled.

Note: FAST VP performance metrics are only transmitted from R1 to R2. When failed over, with applications running on the R2 devices, performance metrics will not be sent from R2 to R1.

Personality swap

An SRDF personality swap follows a pattern of behavior similar to an SRDF failover scenario: applications are brought up on the devices that were formerly R2 devices.

In order for performance metrics to be transmitted from the new R1 devices to the corresponding R2 devices, SRDF coordination needs to be enabled on the storage groups containing the R1 devices.

SRDF coordination can be enabled in advance on a storage group containing R2 devices. However, the setting only takes effect when a personality swap is performed, converting the R2 devices to R1.

SRDF bi-directional

Best practice recommendations change slightly in a bi-directional SRDF environment. In this case, each Symmetrix array has both R1 and R2 devices configured.

With SRDF coordination enabled, it is possible that data belonging to an R2 device could displace data belonging to an R1 device on a higher tier. This may happen if the R2 device's corresponding R1 device has a higher workload than the local R1 device.

In this scenario, it is recommended to reserve the EFD tier for R1 device usage. This can be done by excluding any EFD tiers from the policies associated with the R2 devices.

Should a failover be performed, the EFD tier can then be added dynamically to the policy associated with the R2 devices. Alternatively, the storage group containing the R2 devices could be reassociated with a policy containing EFD.

Note: This recommendation assumes that the lower tiers are of sufficient capacity and I/O capability to handle the expected SRDF write workload on the R2 devices.

EFD considerations

If there is a difference in the configuration of the EFD tier on the remote array, then it is recommended not to include the EFD tier in the FAST VP policies on the R2 array. Examples of a configuration difference include either fewer EFDs or no EFDs. Similarly, if the EFD configuration on the R2 array does not follow the best practice guideline of being balanced across DAs, do not include the EFD tier on the R2 side.

Note: This recommendation assumes that the lower tiers are of sufficient capacity and I/O capability to handle the expected SRDF write workload on the R2 devices.

Summary and conclusion

EMC Symmetrix VMAX Family with Enginuity incorporates a scalable fabric-interconnect design that allows a storage array to seamlessly grow from an entry-level configuration to a 4 PB system. Symmetrix VMAX arrays provide predictable, self-optimizing performance and enable organizations to scale out on demand in private cloud environments.


Information infrastructure must continuously adapt to changing business requirements. FAST VP automates tiered storage strategies in Virtual Provisioning environments by easily moving workloads between Symmetrix tiers as performance characteristics change over time. FAST VP performs data movements that improve performance and reduce costs, all while maintaining vital service levels.

EMC Symmetrix VMAX FAST VP for Virtual Provisioning environments automates the identification of data volumes for the purpose of relocating application data across different performance/capacity tiers within an array. FAST VP proactively monitors workloads at both the LUN and sub-LUN level in order to identify busy data that would benefit from being moved to higher-performing drives. FAST VP also identifies less-busy data that could be moved to higher-capacity drives without affecting existing performance.

Promotion and demotion activity is based on policies that associate a storage group with multiple drive technologies, or RAID protection schemes, by way of thin storage pools, as well as on the performance requirements of the application contained within the storage group. Data movement executed during this activity is performed nondisruptively, without affecting business continuity and data availability.

There are two components of FAST VP: Symmetrix Enginuity and the FAST controller. Enginuity is the storage operating environment that controls components within the array. The FAST controller is a service that runs on the service processor. FAST VP uses two distinct algorithms, one performance-oriented and one capacity-allocation-oriented, to determine the appropriate tier a device should belong to. The intelligent tiering algorithm considers the performance metrics of all thin devices under FAST VP control and determines the appropriate tier for each extent group. The allocation compliance algorithm is used to enforce the per-tier storage capacity usage limits.

Data movements executed by FAST VP are performed by the VLUN VP data movement engine, and involve moving thin device extents between thin pools within the array. Extents are moved by means of a move process only; extents are not swapped between pools.

Performance data for use by FAST VP is collected and maintained by Enginuity. This data is then analyzed by the FAST controller, and guidelines are generated for the placement of thin device data on the defined VP tiers within the array. When collecting performance data at the LUN and sub-LUN level for use by FAST VP, Enginuity only collects statistics related to Symmetrix back-end activity that is the result of host I/O.


FAST VP provides a number of parameters that can be used to tune its performance and to control the aggressiveness of data movements. These parameters can be used to nondisruptively adjust the amount of tier storage that a given storage group is allowed to use, or to adjust the manner in which storage groups using the same tier compete with each other for space.

FAST VP is fully interoperable with all Symmetrix replication technologies: EMC SRDF, EMC TimeFinder/Clone, TimeFinder/Snap, and Open Replicator. Any active replication on a Symmetrix device remains intact while data from that device is being moved. Similarly, all incremental relationships are maintained for the moved or swapped devices. FAST VP also operates alongside Symmetrix features such as Symmetrix Optimizer, Dynamic Cache Partitioning, and Auto-provisioning Groups.


Appendix: Best practices quick reference

The following provides a quick reference to the general best practice recommendations for planning the implementation of a FAST VP environment.

The best practices documented here are based on features available in Enginuity 5876.82.57, Solutions Enabler 7.4, Symmetrix Management Console 7.3.3, and Unisphere for VMAX 1.0.

For more details on these recommendations, and other considerations, see "Planning and design considerations."

FAST VP configuration parameters

FAST VP includes multiple configuration parameters that control its behavior. The following sections describe best practice recommendations for each of these configuration parameters.

Performance time window

Use the default performance time window to collect performance metrics 24 hours a day, every day.

Data movement time window

Create a d ata movement window to allow data movement for the same

period of time that the performance time windows allow data collection.

Workload Analysis Period (WAP)

Use the default workload analysis period of 168 hours (7 days).

Initial Analysis Period (IAP)

At the initial deployment of FAST VP, set the initial analysis period to 168 hours to ensure that, at a minimum, a typical weekly workload cycle is seen. After this period has passed, the value can be dropped to 24 hours.

FAST VP Relocation Rate

For the initial deployment of FAST VP, start with a conservative value for the relocation rate, perhaps 7 or 8. At a later date, the FRR can be gradually lowered to a more aggressive level, possibly 2 or 3.

Pool Reserved Capacity (PRC)

For individual pools with bound thin devices, set the PRC based on the lowest allocation warning level for that thin pool.

For pools with no bound thin devices, set the PRC to 1 percent.

VP Allocation by FAST Policy

VP allocation by FAST policy should be enabled.


FAST VP policy configuration

The ideal FAST VP policy would be 100 percent EFD, 100 percent FC, and 100 percent SATA. While ideal, it may not be operationally appropriate to deploy the 100/100/100 policy; there may be reasons to limit access to a particular tier within the array.

The best way to determine the appropriate policies for a FAST VP implementation is to examine the workload skew for the application data to be managed by FAST VP. This can be done by using a tool such as Tier Advisor.

Thin device binding

If no advance knowledge of the expected workload on newly written data is available, the best practice recommendation is to bind all thin devices to a pool within the second-highest tier. In a three-tier configuration, this would imply the FC tier.

VP allocation by FAST policy should be enabled.

It is not recommended to bind thin devices, under FAST VP control, to the EFD tier.

A best practice recommendation is to not preallocate thin devices managed by FAST VP.

RAID protection considerations

For EFD, choose RAID 5 protection.

For FC, choose RAID 1 protection.

For SATA, choose RAID 6 protection.

Drive configuration

Where possible, balance physical drives evenly across DAs. Configure more, smaller EFDs, rather than fewer, larger EFDs, to spread the I/O load as widely as possible.

Storage group priority

The best practice recommendation is to use the default priority of 2 for all storage groups associated with FAST VP policies.

SRDF

As a general best practice, FAST VP should be employed for both R1 and R2 devices, particularly if the remote R2 array has also been configured with tiered storage capacity. Similar FAST VP tiers and policies should be configured at each site.

FAST VP SRDF coordination should be enabled for storage groups containing SRDF R1 devices.


Copyright © 2012 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

All other trademarks used herein are the property of their respective owners.