
Advanced Storage Concepts


Page 1: Advanced Storage Concepts

Copyright © 2013 EMC Corporation. All rights reserved.

This module focuses on the benefits of and process for migrating a LUN, and the procedures for expanding Pool LUNs and Classic LUNs. It covers the functionality, benefits, and configuration of FAST VP and FAST Cache.


Page 2: Advanced Storage Concepts


This lesson covers the benefits of and process for migrating a LUN, and provides an overview of the procedures for expanding Pool LUNs and Classic LUNs. It also shows how to extend a volume on a Windows Server 2012 host.


Page 3: Advanced Storage Concepts


LUN Migration is an important feature for storage system tuning. LUN Migration moves data from a source LUN to a destination LUN of the same or larger size within a single storage system. The migration is accomplished without disruption to applications running on the host. LUN Migration can enhance performance or increase disk utilization for changing business needs and applications by allowing the user to change LUN type and characteristics, such as RAID type or size, while production volumes remain online. LUNs can be moved between Pools, between RAID Groups, or between Pools and RAID Groups.

When a Thin LUN is migrated to another Thin LUN, only the consumed space is copied. When a Thick LUN or Classic LUN is migrated to a Thin LUN, the space reclamation feature is invoked and only the consumed capacity is copied.


Page 4: Advanced Storage Concepts


LUN migration allows data to be moved from one LUN to another, regardless of RAID type, disk type, LUN type, speed and number of disks in the RAID group or Pool, with some restrictions. The process involved in the migration is the same across all VNX systems.

The LUNs used for migration may not be private LUNs, nor may they be in the process of binding, expanding or migrating.

Either LUN, or both LUNs, may be metaLUNs, but neither LUN may be a component LUN of a metaLUN.

The destination LUN may not be part of a SnapView or MirrorView operation. This includes Clone Private LUNs, Write Intent Log LUNs, and Reserved LUN Pool LUNs.

Note that the destination LUN must be at least as large as the source LUN; it may be larger.


Page 5: Advanced Storage Concepts


The LUN Migration feature is transparent to any host accessing the Source LUN, though there may be a performance impact.

Copying of data proceeds while the Source LUN is available for read/write access, and the copy process may be terminated at any time. Once all data is copied, the Destination LUN assumes the full identity of the Source LUN, and the Source LUN is destroyed as a security measure. If the migration is terminated before completion, the source LUN will remain accessible to the host, and the destination will be destroyed as a security measure.

The host accessing the Source LUN sees no identity change, though of course the LUN size may have changed. In that case, a host utility, such as Microsoft Windows diskpart, can be used to make the increased space available to the host OS.


Page 6: Advanced Storage Concepts


In the systems drop-down list on the menu bar, select a system. Select Storage > LUNs > LUNs. Navigate to the LUN that will be the source LUN for the migration operation, right-click it, and select Migrate.


Page 7: Advanced Storage Concepts


The “Start Migration” menu option configures and starts the LUN migration operation.

If the destination LUN does not belong to the SP that owns the source LUN, the destination LUN will be trespassed to the SP that owns the source LUN before the migration starts.

Source LUN Name: Identifies the source LUN that will be participating in the migration operation.

Source LUN ID: Identifies the source LUN ID (iSCSI) or the WWN (Fibre Channel).

Source LUN Capacity: Identifies the capacity of the source LUN, for example, 5 GB.

Migration Rate: Sets the rate at which the data will be copied - valid values are Low, Medium, High, or ASAP.

Available Destination LUNs: Lists LUNs that are available to be destination LUNs. These LUNs must be the same size as or larger than the source LUN.
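
The equivalent operation is available from NaviSecCLI. A minimal sketch, assuming SP address 10.0.0.1 and LUN numbers 5 (source) and 12 (destination); all values here are illustrative and login credentials are omitted:

    naviseccli -h 10.0.0.1 migrate -start -source 5 -dest 12 -rate medium

The -rate option accepts the same values as the dialog: low, medium, high, or asap.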


Page 8: Advanced Storage Concepts


Once the user chooses the destination LUN and starts the migration, the user is asked to confirm and continue; click “Yes” on each screen.

Launch the “LUN Properties” window and select the “Migration” tab to display the state and status of the operation. Use the dropdown menu for the “Migration Rate” to view and select available options. These changes can be applied as the migration continues.

The LUN Migration Summary option displays a summary of all the currently active migrations for a particular storage system, or for all storage systems within the domain. This information includes the storage system hosting the LUN, the Source and Destination LUNs, the state of the migration, and information about the rate and progress of the migration.
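
The same status view, and changes to an in-flight migration, are available from NaviSecCLI. A sketch using the illustrative values from above:

    naviseccli -h 10.0.0.1 migrate -list
    naviseccli -h 10.0.0.1 migrate -modify -source 5 -rate high
    naviseccli -h 10.0.0.1 migrate -cancel -source 5

migrate -list reports the state, rate, and percent complete of each active migration; migrate -cancel terminates the session and, as described earlier, the destination LUN is destroyed.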


Page 9: Advanced Storage Concepts


After migration, the destination LUN has taken on the identity of the source LUN. The RAID Type reflects the destination LUN type. SP ownership reverts to the SP that owned the source LUN before the migration started.

Note: User Capacity has been updated to reflect the larger LUN size. The source LUN has been destroyed.


Page 10: Advanced Storage Concepts


When the FAST Cache feature is being used, ensure FAST Cache is OFF on LUNs being migrated. This prevents the migration’s I/O from consuming capacity in the FAST Cache that may otherwise benefit workload I/O.

When migrating into or between FAST VP pool-based LUNs, the initial allocation of the LUN and the allocation policy have an important effect on its performance and capacity utilization. The tiering policy setting (Highest, Auto, Lowest) determines which tier within the pool the source LUN’s data will be allocated to first. Be sure to set the policy needed to ensure the expected starting performance for all the source LUN’s data. As much of the source LUN’s capacity as possible will be allocated to the appropriate tier. Once the migration is complete, the user can adjust the tiering policy.

The migration rate is lower when the source or destination LUN is a Virtual Provisioning thin LUN. The exact transfer rate when the source LUN is a thin LUN is difficult to predict: the decrease depends on how sparsely the thin LUN is populated with user data, and on how sequential the stored data is. A densely populated LUN with highly sequential data increases the transfer rate; random data and sparsely populated LUNs decrease it.

ASAP-priority LUN migrations with normal cache settings should be used with caution, as they may adversely affect system performance. EMC recommends running migrations at High priority unless migration time is critical.


Page 11: Advanced Storage Concepts


VNX has the capability to expand a Pool LUN. Pool LUN expansion can be done with a few simple clicks, and the expanded capacity is immediately available.

For a thick LUN, the pool must have enough storage for the expansion to succeed, whereas for a thin LUN the storage does not need to be available. It is important to note that the storage administrator cannot expand a pool LUN if it is part of a data protection or LUN-migration operation.

During the expansion process, the host is able to process I/O to the LUN, and access any existing data.

Right-click the base LUN and select Expand. When the “LUN Expand Storage Dialog” opens, set the amount of space needed for the LUN.
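
Pool LUN expansion can also be performed with NaviSecCLI. A minimal sketch, assuming pool LUN 7 is grown to a total of 15 GB (the LUN number and size are illustrative):

    naviseccli -h 10.0.0.1 lun -expand -l 7 -capacity 15 -sq gb

Here -sq is the size qualifier (GB in this example), and -capacity is the new total user capacity.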


Page 12: Advanced Storage Concepts


The VNX metaLUN feature allows Classic LUNs (Pool LUNs cannot be used for metaLUNs) to be aggregated in order to increase the size or performance of the base LUN. The base LUN, which can be a regular LUN or a metaLUN, is the LUN that will be expanded by the addition of other LUNs. The LUNs that make up a metaLUN are called component LUNs.

A RAID Group is limited to 16 disks, and that places an upper limit on the size of a Classic LUN, and the performance which may be achieved by a single Classic LUN. MetaLUNs allow an increase of available bandwidth or throughput, or LUN capacity, by adding hard disks. MetaLUNs are functionally similar to volumes created with host volume managers, but with some important distinctions.

To create a volume manager stripe, all component LUNs must be made available to the host, and each will have a unique address. Only a single LUN, with a single address, is presented to the host with metaLUNs.

If a volume is to be replicated with VNX replication products (SnapView, VNX Snapshot, MirrorView and SAN Copy), a usable image requires consistent handling of fracture and session start operations on all member LUNs at the same time. MetaLUNs simplify replication by presenting a single object to the replication software. This also makes it easier to share the volume across multiple hosts – an action that volume managers will not allow.

The use of a host striped volume manager has the effect of multithreading requests that consist of more than one volume stripe segment, which increases concurrency to the storage system. MetaLUNs have no multithreading effect, since the multiplexing of the component LUNs is done on the storage system.


Page 13: Advanced Storage Concepts


The slide shows some of the possible scenarios and guidelines for metaLUN expansion.

Concatenation simply adds space to the end of an existing LUN, and performs no copying or moving of data; as a result, the expansion process is immediate, with no storage system performance impact. The disadvantage of concatenated expansion is that it does not distribute data evenly across all the physical disks that make up the metaLUN, so performance will not improve significantly. This solution is appropriate for scenarios where additional space has to be added quickly, and where performance is a lower priority.

EMC strongly recommends not expanding LUN capacity by concatenating LUNs of different RAID types. This should be done only in an emergency, when the user needs to add capacity to a LUN and there are no LUNs of the same RAID type, nor the disk capacity to create new ones. Concatenating metaLUN components with a variety of RAID types could impact the performance of the resulting metaLUN. Once the LUN is expanded, the user cannot change the RAID type of any of its components without destroying the metaLUN. Destroying a metaLUN destroys all LUNs in the metaLUN, and therefore causes data loss.

Creating a striped metaLUN from an empty Base LUN also involves no data movement, so is also immediate. The structure that is created will stripe new data across all physical LUNs, so will exhibit good performance. If the Base LUN is populated, data will be restriped across all component LUNs, and this may take an appreciable length of time. The time taken will be dependent on a number of factors, among which are the size of the Base LUN, expansion priority, and the utilization of storage system resources. Overall system performance may be affected during the expansion. As a result of the increase in back-end activity associated with restriping, expanding more than one LUN per RAID Group at the same time is not recommended.


Page 14: Advanced Storage Concepts


The Storage Expansion wizard is supported for classic LUNs only.

The RAID Group LUN Expansion Wizard lets the user dynamically expand the capacity of new or existing LUNs. The user can add additional LUNs to a metaLUN to increase its capacity even further. The wizard preserves the expanded LUN's data: the user does not have to unbind the LUN, and thereby lose all of its data, in order to expand it. Once a metaLUN is created, it acts like a standard LUN. The user can expand it, add it to a Storage Group, view its properties, and destroy it.

For existing metaLUNs, only the last component of the metaLUN can be expanded. If the user clicks a component other than the last one and selects Add LUNs, the software displays an error message.

A metaLUN can span multiple RAID Groups and, depending on expansion type (concatenate or stripe), the LUNs in a metaLUN can be different sizes and RAID Types.

The software allows only four expansions per storage system to be running at the same time. Any additional requests for expansion are added to a queue, and when one expansion completes, the first one in the queue begins.

In the systems drop-down list on the menu bar, select a storage system.

Right-click the base LUN and select Expand. When the “Expand Storage Wizard Dialog” opens, follow the steps.

Another option is available from the task list, under Wizards: select RAID Group LUN Expansion Wizard.

Follow the steps in the wizard, and when available, click the Learn more links for additional information.
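
For scripted expansion, the metalun command set in NaviSecCLI covers the same operations as the wizard. A hedged sketch, assuming base LUN 3 is striped with component LUNs 20 and 21; the LUN numbers are illustrative, and the -type flag (C for concatenate, S for stripe) should be verified against the CLI reference for the release in use:

    naviseccli -h 10.0.0.1 metalun -expand -base 3 -lus 20 21 -type S
    naviseccli -h 10.0.0.1 metalun -list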


Page 15: Advanced Storage Concepts


The example shows a Windows Server 2012 (Server Manager > File and Storage Services > Volumes) view of a pool-based LUN before and after LUN expansion.

1. The LUN is originally presented to the Windows host and assigned the mount point drive “E”, with 10.0 GB allocated.

2. After expansion, the LUN shows the expanded volume (5.00 GB Unallocated).


Page 16: Advanced Storage Concepts


3. Right-click Volume E: and select Extend Volume to claim the newly added space.

4. The volume now shows 15.0 GB allocated.

Operating-system support for Pool LUN expansion is critical in order to make use of the expanded LUN. Each operating system responds differently to an expanded LUN.
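
The volume extension shown in steps 3 and 4 can also be scripted with the Windows diskpart utility. A minimal sketch for the drive E: example:

    diskpart
    DISKPART> select volume E
    DISKPART> extend

With no arguments, extend claims all contiguous unallocated space following the selected volume.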


Page 17: Advanced Storage Concepts


This lesson covered the benefits of and process for migrating a LUN, and provided an overview of the procedures for expanding Pool LUNs and Classic LUNs. It also showed how to extend a volume on a Windows Server 2012 host.


Page 18: Advanced Storage Concepts


This lesson covers the functionality, benefits and configuration of FAST VP.


Page 19: Advanced Storage Concepts


VNX FAST VP, or Fully Automated Storage Tiering for Virtual Pools, tracks data in a Pool at a granularity of 256 MB – a slice – and ranks slices according to their level of activity and how recently that activity took place. Slices that are heavily and frequently accessed are moved to the highest tier of storage, typically Flash drives, while the data that is accessed least is moved to lower-performing but higher-capacity storage – typically NL-SAS drives. This sub-LUN granularity makes the process more efficient and enhances the benefit achieved from the addition of Flash drives.

The ranking process is automatic, and requires no user intervention. When FAST VP is implemented, the storage system measures, analyzes, and implements a dynamic storage-tiering policy in a faster and more efficient way than a human analyst. Relocation of slices occurs according to a schedule which is user-configurable, but which defaults to a daily relocation. Users can also start a manual relocation if desired. FAST VP depends for its operation on tiers of disks – up to 3 are allowed, and a minimum of 2 are needed for meaningful FAST VP operation. The tiers relate to the disk type in use. Note that no distinction is made between 10k rpm and 15k rpm SAS disks, and it is therefore recommended that disk speeds not be mixed in a tier.

NOTE: VNX systems running MCx code use 256 MB slices. Other VNX models, such as the VNX5700 and VNX7500, use 1 GB slices.


Page 20: Advanced Storage Concepts


FAST VP uses a number of mechanisms to optimize performance and efficiency. It removes the need for manual, resource intensive, LUN migrations, while still providing the performance levels required by the most active dataset. It can lower the Total Cost of Ownership (TCO) and increase performance by intelligently managing data placement.

Another process that can be performed is the rebalance. Upon the expansion of a storage pool, the system recognizes the newly added space and initiates an auto-tiering data relocation operation.

Applications that exhibit skew, and have workloads that are fairly stable over time will benefit from the addition of FAST VP.

The VNX series of storage systems delivers high value by providing a unified approach to auto-tiering for file and block data. Both block and file data can use virtual pools and FAST VP. This provides compelling value for users who want to optimize the use of high-performance drives across their environment.


Page 21: Advanced Storage Concepts


FAST VP enables the user to create storage pools with heterogeneous device classes and place the data on the class of devices or tier that is most appropriate for the block of data. Pools allocate and store data in 256 MB slices (earlier versions used 1 GB slices) which can be migrated or relocated, allowing FAST VP to reorganize LUNs onto different tiers of the Pool. This relocation is transparent to the hosts accessing the LUNs.

For example, when a LUN is first created it may have a very high read/write workload with I/Os queued to it continuously. The user wants that LUN to have the best response time possible in order to maximize productivity of the process that relies on this storage. Over time, that LUN may become less active or stop being used and another LUN may become the focus of the operation. VNX systems configured with EMC’s FAST VP software would automatically relocate inactive slices to a lower storage tier, freeing up the more expensive storage devices for the newly created and more active slices.

The administrator can use FAST VP with LUNs regardless of whether those LUNs are also in use by other VNX software features, such as Data Compression, SnapView, MirrorView, RecoverPoint, and so on.

The tiers from highest to lowest are Flash, SAS, and NL-SAS, described in FAST VP as Extreme Performance, Performance, and Capacity respectively. FAST VP differentiates each of the tiers by drive type, but it does not take rotational speed into consideration. EMC strongly encourages users to avoid mixing rotational speeds per drive type in a given pool. FAST VP is not supported for RAID Groups because all the disks in a RAID Group, unlike those in a Pool, must be of the same type (all Flash, all SAS, or all NL-SAS). The lowest-performing disks in a RAID Group determine the RAID Group’s overall performance.


Page 22: Advanced Storage Concepts


During storage pool creation, the user can select RAID protection on a per-tier basis. Each tier has a single RAID type, and once the RAID configuration is set for that tier in the pool, it cannot be changed. The table above shows the RAID configurations that are supported for each tier.

The drives used in a Pool can be configured in many ways – supported RAID types are RAID 1/0, RAID 5, and RAID 6. For each of those RAID types, there are recommended configurations. These recommended configurations balance performance, protection, and data efficiency. The configurations shown on the slide are those recommended for the supported RAID types. Note that, though each tier may have a different RAID type, any single tier may have only 1 RAID type associated with it, and that type cannot be changed once configured.


Page 23: Advanced Storage Concepts


When creating storage pools on a VNX, data can use three different categories of media in a single pool. These categories, referred to as storage tiers, provide various levels of performance and capacity through several drive types. The available tiers when creating a storage pool are:

• Extreme Performance

• Performance

• Capacity

FAST VP differentiates each of these tiers by drive type, but it does not take rotational speed into consideration. EMC strongly encourages that the user avoid mixing rotational speeds per drive type in a given pool. If multiple rotational-speed drives exist in the array, the user should implement multiple pools as well. FAST VP can leverage one, two, or all three storage tiers in a single pool. Each tier offers unique advantages in terms of performance and cost.


Page 24: Advanced Storage Concepts


To create a Heterogeneous Pool select the storage system in the systems drop-down list on the menu bar. Select Storage > Storage Configuration > Storage Pools. In Pools, click Create.


Page 25: Advanced Storage Concepts


In the General tab, under Storage Pool Parameters, select "Pool.” The user can create pools that use multiple RAID types, one RAID type per tier, to satisfy multiple tiering requirements within a pool. To do this:

• The pool must contain multiple disk types

• When creating the pool, select the RAID type for each tier

For the Extreme performance tier, there are 2 types of disks that can be used: FAST Cache optimized Flash drives and FAST VP optimized Flash drives. A RAID Group created by FAST VP can use only one type, though both types can appear in the tier. If both types of drive are present, the drive selection dialog shows them separately.

When the user expands an existing pool by adding additional drives, the system selects the same RAID type that was used when the pool was created.

When the user expands an existing pool by adding a new disk-type tier, the user needs to select a RAID type that is valid for the new disk type. For example, best practices suggest using RAID 6 for NL-SAS drives, and RAID 6, 5, or 1/0 for other drives.
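
Pool creation can also be driven from NaviSecCLI. A hedged sketch, assuming five SAS disks in enclosure 0_0 configured as RAID 5; the disk IDs and pool name are illustrative, and per-tier RAID selection for a multi-tier pool may require additional options depending on the release:

    naviseccli -h 10.0.0.1 storagepool -create -disks 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8 -rtype r_5 -name Pool_0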


Page 26: Advanced Storage Concepts


FAST VP policies are available for storage systems with the FAST VP enabler installed. The FAST VP feature automatically migrates data between storage tiers, and within storage tiers, to provide the lowest total cost of ownership. FAST VP Pools are configured with different types of disks (Flash, SAS, and NL-SAS), and the storage system software continually tracks the usage of the data stored on LUNs in the Pools. Using these LUN statistics, the FAST VP feature relocates data (at a granularity of 256 MB) to the storage tier that is best suited for the data, based on the policy. Relocation within a tier ensures that hot spots are reduced or eliminated.

Use the “Highest Available Tier” policy when quick response times are a priority.

A small portion of a large data set may be responsible for most of the I/O activity in a system. FAST VP allows a small percentage of the “hot” data to be moved to higher tiers while keeping the rest of the data in the lower tiers. The “Auto Tier” policy automatically relocates data to the most appropriate tier based on the activity level of each data slice.

“Start High, then Auto Tier” is the recommended policy for newly created pools, because it combines the advantages of the “Highest Available Tier” and “Auto Tier” policies.

Use the “Lowest Available Tier” policy when cost effectiveness is the highest priority. With this policy, data is initially placed on the lowest available tier with capacity.

The user can set all LUN-level policies except the “No Data Movement” policy both during and after LUN creation. The “No Data Movement” policy is only available after LUN creation. If a LUN is configured with this policy, no slices provisioned to the LUN are relocated across tiers.
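
The per-LUN tiering policy can also be set from NaviSecCLI. A hedged sketch, assuming pool LUN 7; the policy keywords shown are as recalled and should be checked against the CLI reference for the release in use:

    naviseccli -h 10.0.0.1 lun -modify -l 7 -tieringPolicy autoTier
    naviseccli -h 10.0.0.1 lun -modify -l 7 -tieringPolicy noMovement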


Page 27: Advanced Storage Concepts


Provided the FAST enabler is present, select the Tiering tab from the Storage Pool Properties window to display the status and configuration options.

Scheduled means FAST VP relocation is scheduled for the Pool. Data relocation for the pool will be performed based on the FAST schedule in the Manage Auto-Tiering dialog. If a tier fills to 90% capacity, data will be moved to another tier.

The Relocation Schedule button launches the Manage Auto-Tiering dialog when pressed.

Data Relocation Status has several states: Ready means no relocations are in progress for this pool; Relocating means relocations are in progress for this pool; and Paused means relocations are paused for this pool.

Data to Move Down is the total amount of data (in GB) to move down from one tier to another; Data to Move Up is the total amount of data (in GB) to move up from one tier to another; Data to Move Within is the amount of data (in GB) that will be relocated inside the tier based on I/O access.

Estimated time for data relocation is the estimated time (in hours) required to complete data relocation.

Note: If the FAST enabler is not installed, certain information will not be displayed.

Tier Details shows information for each tier in the Pool. The example Pool has 2 tiers, SAS (Performance) and NL-SAS (Capacity).

Tier Name is the Name of the tier assigned by provider or lower level software.


Page 28: Advanced Storage Concepts


The Manage Auto-Tiering option available from Unisphere allows users to view and configure various options.

The Data Relocation Rate controls how aggressively all scheduled data relocations are performed on the system when they occur. The rate settings are High, Medium (the default), and Low. A Low setting has little impact on production I/O, but means that tiering operations take longer to make a full pass through all the pools with tiering enabled. The High setting has the opposite effect: relocation operations proceed at a much faster pace, though FAST VP will not consume so much of the storage system's resources that server I/Os time out – operations are throttled by the storage system.

The Data Relocation Schedule, if enabled, controls the system FAST VP schedule. When it is not checked, the scheduling controls are grayed out and no data relocations are started by the scheduler. Even if the system FAST VP scheduler is disabled, data relocations at the pool level may be started manually. The schedule controls allow configuring the days of the week, the time of day to start data relocation, and the data relocation duration (hours selection 0-23; minutes selection of 0, 15, 30, and 45, though any minute value can be set through the CLI).

The default schedule is determined by the provider and will be read by Unisphere. Changes that are applied to the schedule are persistent.

The scheduled days use the same start time and duration.


Page 29: Advanced Storage Concepts


Unisphere or Navisphere Secure CLI lets the user schedule the days of the week, start time, and durations for data relocation for all participating tiered Pools in the storage system. Unisphere or Navisphere Secure CLI also lets the user initiate a manual data relocation at any time. To ensure that up-to-date statistics and settings are accounted for properly prior to a manual relocation, FAST VP analyzes all statistics gathered independently of its regularly scheduled hourly analysis before starting the relocation.

FAST VP scheduling involves defining the timetable and duration for initiating Analysis and Relocation tasks for Pools enabled for tiering. Schedules can be configured to run daily, weekly, or as a single iteration. A default schedule is configured when the FAST enabler is installed.

Relocation tasks are controlled by a single schedule, and affect all Pools configured for tiering.
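
The schedule and manual relocations map to the autotiering command set in NaviSecCLI. A hedged sketch; the option names are as recalled and should be verified against the release's CLI guide:

    naviseccli -h 10.0.0.1 autotiering -info -state -schedule
    naviseccli -h 10.0.0.1 autotiering -relocation -start -rate medium
    naviseccli -h 10.0.0.1 autotiering -relocation -pause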


Page 30: Advanced Storage Concepts


The Start Data Relocation dialog displays all of the pools that were selected and the action that is about to take place. If FAST is Paused, this dialog will contain a message alerting the user that FAST is in a Paused state and that relocations will resume once FAST is resumed (provided that the selected window for the relocations did not expire in the meantime). If one or more Pools are already actively relocating data, it will be noted in the confirmation message.

Data Relocation Rates are High, Medium, and Low. The default setting of the Data Relocation Rate is determined by the Data Relocation Rate defined in the Manage FAST dialog.

Data Relocation Duration is 8 hours (default). Hours selection has a valid range of 0-23. Minutes selection is limited to 0, 15, 30, and 45.

When the “Stop Data Relocation” menu item is selected, a confirmation dialog is displayed noting all of the pools that were selected and the action that is about to take place. If one or more pools are not actively relocating data, it will be noted in the confirmation message.


Page 31: Advanced Storage Concepts


The Tiering Summary pane can be configured from the Customize menu on the Dashboard. The pane displays information about the status of tiering. This view is available for all arrays, regardless of whether the FAST enabler is installed. When the enabler is not installed, no FAST data is displayed; instead, a message alerts the user that this feature is not supported on the system.

Relocation Status: Indicates the tiering relocation status. Can be Enabled or Paused.

Pools with data to be moved: the number of Pools that have data queued up to move between tiers. This is a hot link that takes the user to the Pools table under Storage > Storage Configuration > Storage Pools.

Scheduled Pools: the number of tiered pools associated with the FAST schedule. This is also a hot link that takes the user to Storage > Storage Configuration > Storage Pools.

Active Pool Relocations: the number of pools with active data relocations running. This is also a hot link that takes the user to Storage > Storage Configuration > Storage Pools.

Additional information includes the quantity of data to be moved up (GB), the quantity of data to be moved down (GB), the estimated time to perform the relocation, the relocation rate, and data to be moved within a tier if the tier has been expanded.


Page 32: Advanced Storage Concepts


This lesson covered the functionality, benefits and configuration of FAST VP.


Page 33: Advanced Storage Concepts


This lesson covers functionality, benefits, and configuration of EMC FAST Cache.


Page 34: Advanced Storage Concepts


FAST Cache uses Flash drives to improve read and write performance for frequently accessed data on the Pools or LUNs for which the feature is enabled. FAST Cache consists of a storage pool of Flash disks configured to function as FAST Cache. FAST Cache exploits the locality of reference of the data set: a data set with high locality of reference (data areas that are frequently accessed) is a good candidate for FAST Cache. By promoting the data set to the FAST Cache, the storage system services any subsequent requests for this data faster from the Flash disks that make up the FAST Cache, thus reducing the load on the disks in the LUNs that contain the data (the underlying disks). Data is flushed out of the cache when it is no longer accessed as frequently as other data, per the Least Recently Used algorithm.

FAST Cache consists of one or more pairs of mirrored disks (RAID 1) and provides both read and write caching. For reads, the FAST Cache driver copies data off the disks being accessed into the FAST Cache. For writes, FAST Cache effectively buffers the data waiting to be written to disk. In both cases, the workload is off-loaded from slow rotating disks to the faster Flash disks in FAST Cache. FAST Cache should be disabled for Write Intent Log (WIL) LUNs or Clone Private LUNs (CPLs). Enabling FAST Cache for these LUNs is a misallocation of the FAST Cache and may reduce the effectiveness of FAST Cache for other LUNs. FAST Cache can be enabled on Classic LUNs and Pools once the FAST Cache enabler is installed.


Page 35: Advanced Storage Concepts


FAST Cache improves application performance, especially for workloads with frequent and unpredictable large increases in I/O activity. The part of an application’s working dataset that is frequently accessed is copied to the FAST Cache, so the application receives an immediate performance boost. FAST Cache provides low latency and high I/O performance without requiring a large number of Flash disks, and it can be expanded while I/O to and from the storage system is occurring. Applications such as file services and OLTP (online transaction processing) have data sets that can benefit from FAST Cache. The performance boost provided by FAST Cache varies with the workload and the cache size.

Another important benefit is improved total cost of ownership (TCO) of the system. FAST Cache copies the hot or active subsets of data to Flash drives in chunks. With many, if not most, of the IOPS served by FAST Cache, the user can fill the remainder of their storage needs with low-cost, high-capacity disk drives. This ratio of a small amount of Flash paired with a lot of disk offers the best performance ($/IOPS) at the lowest cost ($/GB) with optimal power efficiency (IOPS/kWh).

Use FAST Cache and FAST VP together to yield high performance and TCO from the storage system. For example, use FAST Cache optimized Flash drives to create FAST Cache, and use FAST VP for pools consisting of SAS and NL-SAS disk drives. From a performance point of view, FAST Cache provides an immediate performance benefit to bursty data, while FAST VP moves more active data to SAS drives and less active data to NL-SAS drives. From a TCO perspective, FAST Cache can service active data with fewer Flash drives, while FAST VP optimizes disk utilization and efficiency with SAS and NL-SAS drives.


Page 36: Advanced Storage Concepts


FAST Cache requires the FAST Cache enabler. To create FAST Cache, the user needs at least 2 FAST Cache optimized drives in the system, which will be configured in RAID 1 mirrored pairs. Once the enabler is installed, the system uses the following main components to process and execute FAST Cache:

• Policy Engine – Manages the flow of I/O through FAST Cache. When a chunk of data on a LUN is accessed frequently, it is copied temporarily to FAST Cache (FAST Cache optimized drives). The Policy Engine also maintains statistical information about the data access patterns. The policies defined by the Policy Engine are system-defined and cannot be changed by the user.

• Memory Map – Tracks extent usage and ownership at a 64 KB chunk granularity. The Memory Map maintains information on the state of the 64 KB chunks of storage and their contents in FAST Cache. A copy of the Memory Map is stored in DRAM, so when the FAST Cache enabler is installed, SP memory is dynamically allocated to the FAST Cache Memory Map. The size of the Memory Map increases linearly with the size of the FAST Cache being created. A copy of the Memory Map is also mirrored to the Flash disks to maintain data integrity and high availability of data.


Page 37: Advanced Storage Concepts


FAST Cache is configured in Read/Write mode on Flash drives configured as RAID 1 mirrored pairs. In read mode, the application gets the acknowledgement for an IO operation once it has been serviced by the FAST Cache. FAST Cache algorithms are designed such that the workload is spread evenly across all the flash drives that have been used for creating FAST Cache.

During normal operation, a promotion to FAST Cache is initiated after the Policy Engine determines that a 64 KB block of data is being accessed frequently. To be considered, the 64 KB block of data must be accessed by reads and/or writes multiple times within a short period of time.

A FAST Cache flush is the process in which a FAST Cache page is copied to the HDDs and the page is freed for use. The least recently used (LRU) algorithm determines which data blocks to flush to make room for new promotions.

FAST Cache contains a cleaning process which proactively copies dirty pages to the underlying physical devices during times of minimal backend activity.


Page 38: Advanced Storage Concepts


The user can perform two FAST Cache operations: initializing (creating) and destroying. The initialize operation starts after the user chooses the cache configuration and clicks the Create button on the storage-system FAST Cache tab. Before the initializing operation starts, the cache state is N/A. At the beginning of the initializing operation, the cache state becomes Disabled, and then it changes to Enabling. During the Enabling state, the operation state is Running, and the percentage of the initializing operation that is complete is displayed. When the initializing operation is complete, the cache state is Enabled. The cache stays in the Enabled state until a failure occurs or the user chooses to destroy the cache.

FAST Cache must use FAST Cache optimized Flash drives, and cannot use FAST VP optimized drives.

The destroy operation starts when the user clicks the Destroy button on the storage-system FAST Cache tab. Before the destroy operation starts, the cache state is Enabled or Faulted. During the destroy operation, the cache state is Destroying and the percentage of the destroy operation that is complete is displayed. At the end of the destroy operation, the cache state becomes Disabled, and when the operation is complete, the cache state is N/A. The cache stays in the N/A state until the user recreates the cache.

The FAST Cache has two failure states: Enabled (Degraded) and Disabled (Faulted). The cache is in the Enabled (Degraded) state when it is enabled and accepting I/O, but one of its disks has failed or been removed, and the cache is transitioning into a read-only mode. The cache is in the Disabled (Faulted) state when it is disabled and no longer accepting I/O because multiple disks in the cache have failed or been removed.
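
The initialize and destroy operations are also exposed through NaviSecCLI. A minimal sketch, assuming four FAST Cache optimized Flash drives in enclosure 0_0 (the disk IDs are illustrative):

    naviseccli -h 10.0.0.1 cache -fast -create -disks 0_0_0 0_0_1 0_0_2 0_0_3 -mode rw -rtype r_1
    naviseccli -h 10.0.0.1 cache -fast -info
    naviseccli -h 10.0.0.1 cache -fast -destroy

cache -fast -info reports the cache states described above (Enabling, Enabled, Destroying, and so on).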


Page 39: Advanced Storage Concepts


If a sufficient number of Flash drives are not available to enable FAST Cache, Unisphere displays an error message, and FAST Cache cannot be created. The bottom portion of the screen shows the Flash drives that will be used for creating FAST Cache. The user can choose the drives manually by selecting the Manual option. To change the size of FAST Cache after it is configured, the user must destroy and recreate FAST Cache. This requires FAST Cache to flush all dirty pages currently contained in FAST Cache. When FAST Cache is created again, it must repopulate its data (warm-up period).


Page 40: Advanced Storage Concepts


The FAST Cache option will only be available if the FAST Cache enabler is installed on the storage system.

When a Classic LUN is created, as shown in the example on the top left, FAST Cache is enabled by default (as is Read and Write Cache).

If the Classic LUN has already been created as shown in the example on the bottom left, and FAST Cache has not been enabled for the LUN, the Cache tab in the LUN Properties window can be used to configure FAST Cache.

Note that checking the Enable Caching checkbox checks all boxes below it (SP Read Cache, SP Write Cache, FAST Cache).

Enabling FAST Cache for Pool LUNs differs from that for Classic LUNs in that FAST Cache is configured at the Pool level only, as shown in the examples on the right. In other words, all LUNs created in the Pool will have FAST Cache enabled or disabled collectively, depending on the state of the FAST Cache Enabled box.

The FAST Cache Enabled box will be enabled by default if the FAST Cache enabler was installed before the Pool was created. If the Pool was created prior to installing the FAST Cache enabler, FAST Cache is disabled on the Pool by default. To enable FAST Cache on the Pool, launch the Storage Pool Properties window and select the Enabled box under FAST Cache as shown in the example on the bottom right.


Page 41: Advanced Storage Concepts


The user can display FAST Cache properties in any Unisphere table (for example, the LUNs table) by right-clicking the table header and selecting Choose Columns. The user can also click the Tools icon at the top-right corner of the table and select Choose Columns. This opens a dialog box where the user can select FAST Cache. The FAST Cache property is then displayed for every entry in the table.


Page 42: Advanced Storage Concepts


The management functions described in the previous slides can also be executed using NaviSecCLI. In the syntax used above, an ellipsis (...) indicates that additional CLI options are required.
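
For example, enabling or disabling FAST Cache on an existing pool maps to storagepool -modify. A hedged sketch with an illustrative pool ID; the -fastcache option to storagepool -list should be verified against the CLI reference:

    naviseccli -h 10.0.0.1 storagepool -modify -id 0 -fastcache on
    naviseccli -h 10.0.0.1 storagepool -list -id 0 -fastcache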


Page 43: Advanced Storage Concepts


The table shows the FAST Cache maximum configuration options. The maximum FAST Cache size in the last column depends on the drive count in the second column (Flash Disk Capacity). For example, a VNX5400 can have up to 10 drives of 100 GB or up to 10 drives of 200 GB.


Page 44: Advanced Storage Concepts


FAST Cache enables the system cache to be expanded by using flash drives as an additional tier of cache. This allows the storage system to provide Flash drive performance to the most heavily accessed chunks of data.

FAST Cache absorbs I/O bursts from applications, thereby reducing the load on backend hard disks. Because SP cache can be flushed to Flash drives more quickly than to magnetic drives, there is less performance pressure on storage system resources. This helps in improving the TCO of the storage solution.

FAST Cache can be managed through Unisphere in an easy and intuitive manner. FAST Cache is not a good fit for all types of workloads. The application I/O profile should be analyzed to determine the potential performance benefits.

FAST Cache works in a complementary way with FAST VP technology. Both technologies help place data segments on the most appropriate storage tier based on their usage pattern.


Page 45: Advanced Storage Concepts


This lesson covered the functionality, benefits, and configuration of EMC FAST Cache.


Page 46: Advanced Storage Concepts


Listed are the key points covered in this module.
