
Adaptive Optimization for HPE 3PAR StoreServ Storage
Hybrid storage on HPE 3PAR StoreServ

Technical white paper

Contents

Executive summary
Storage tiers: Opportunity and challenge
Brief overview of volume mapping
  Physical disks
  Logical disks
  Common provisioning groups
  CPGs and workloads
  Virtual volumes
Requirements
HPE 3PAR Adaptive Optimization Software
  CPG as tiers in AO configuration
  Data locality
  Tiering analysis algorithm
  Average tier access rate densities
  Design tradeoff: Granularity of data movement
  HPE 3PAR Adaptive Flash Cache
  HPE 3PAR Thin Deduplication
Implementing Adaptive Optimization
  Sizing for Adaptive Optimization
  Configuring AO using IOPS and Capacity Targets
  Configuring AO using region I/O density reports
  Scheduling AO Policies
  StartAO Output
  Using AO configuration with VVsets
  Adaptive Optimization with Remote Copy
Use case
  Accelerating workloads by adding SSDs
  Lowering cost per GB by configuring a three-tier configuration with SSD, FC, and NL
Summary


Executive summary
Now more than ever, IT managers are under pressure to deliver the service levels necessary for a wide variety of mission-critical applications at the lowest possible cost. The introduction of solid-state drive (SSD) technology has created enormous demand for an optimization solution capable of leveraging this drive class to improve service levels without raising costs. Traditional approaches to service-level optimization have not been successful in meeting this need due to limitations such as:

• High cost of placing entire volumes onto SSDs

• Inability to scale sub-volume optimization to accommodate large, mission-critical data centers

• Insufficient administrative controls and override mechanisms

• Inability to move data without impacting service levels

• Unproven technologies that introduce unnecessary risk

New opportunities exist to optimize the cost and performance of storage arrays, thanks to the availability of a wide range of storage media such as SSDs, high-performance hard disk drives (HDDs), and high-capacity HDDs. But these opportunities come with the challenge of doing it effectively and without increasing administrative burdens.

HPE 3PAR Adaptive Optimization (AO) enables the HPE 3PAR StoreServ to act as a hybrid storage array where the StoreServ can blend flash-based SSDs and HDDs to provide performance at an affordable price. HPE 3PAR Adaptive Optimization Software delivers the next generation in autonomic storage tiering by taking a fine-grained, highly automated approach to service-level improvement. Using the massively parallel, widely striped, and highly granular HPE 3PAR architecture as a foundation, Adaptive Optimization leverages the proven sub-volume data movement engine built into the HPE 3PAR Operating System Software. The result is highly reliable, non-disruptive, cost-efficient sub-volume storage tiering that delivers the right quality of service (QoS) at the lowest transactional cost.

Adaptive Optimization enables enterprise and cloud data centers to improve service levels effortlessly, on a large scale and for a lower total cost than other solutions. This approach enables data centers using HPE 3PAR systems to meet enterprise and cloud performance demands within a smaller footprint and lower storage equipment cost than by using only Fibre Channel (FC) storage. This level of cost savings is made possible in part by application-specific thresholds and comprehensive support for thin and fat volumes as well as volume copies. Adaptive Optimization enables IT managers to react swiftly to changing business needs while delivering service-level improvement over the entire application lifecycle—autonomically and non-disruptively.

With this highly autonomic technology, IT managers can now achieve non-disruptive, cost- and performance-efficient storage within even the largest and most demanding enterprise and cloud environments.

This white paper describes the technology that adaptively optimizes storage on HPE 3PAR StoreServ Storage, explains some of the tradeoffs, and illustrates its effectiveness with performance results.

Storage tiers: Opportunity and challenge
Modern storage arrays support multiple tiers of storage media with a wide range of performance, cost, and capacity characteristics—ranging from inexpensive Serial ATA (SATA) HDDs that can sustain only about 75 IOPS to more expensive flash memory-based SSDs that can sustain thousands of IOPS. Volume RAID and layout choices enable additional performance, cost, and capacity options. This wide range of cost, capacity, and performance characteristics is both an opportunity and a challenge.


Figure 1. Autonomic tiering with HPE 3PAR StoreServ

The opportunity is that the performance of a system can be maintained while the cost of the system can be improved (lowered) by placing the data correctly on different tiers: moving the most active data to the fastest (and most expensive) tier and the idle data to the slowest (and least expensive) tier. The challenge, of course, is to do this in a way that minimizes the burden on storage administrators while also providing them with appropriate controls.

Currently, data placement on different tiers is a task usually performed by storage administrators—and their decisions are often based not on application demands but on the price paid by the users. If they don’t use careful analysis, they may allocate storage based on available space rather than on performance requirements. At times, HDDs with the largest capacity may also have the highest number of accesses. However, the largest HDDs are often the slowest HDDs. This can create significant performance bottlenecks.

There is an obvious analogy with CPU memory hierarchies. Although the basic idea is the same (use the smallest, fastest, most expensive resource for the busiest data), the implementation tradeoffs are different for storage arrays. While deep CPU memory hierarchies (first-, second-, and third-level caches; main memory; and finally paging store) are ubiquitous and have mature design and implementation techniques, storage arrays typically have only a single cache level (the cache on disk drives usually acts more like a buffer than a cache).


Brief overview of volume mapping
Before you can understand HPE 3PAR Adaptive Optimization, it is important to understand volume mapping on HPE 3PAR StoreServ Storage, as illustrated in figure 2.

Figure 2. HPE 3PAR volume management

The HPE 3PAR Operating System has a logical volume manager that provides the virtual volume abstraction. It comprises several layers, with each layer being created from elements of the layer below.

Physical disks
Every physical disk (PD) that is admitted into the system is divided into 1 GB chunklets. A chunklet is the most basic element of data storage of the HPE 3PAR OS. These chunklets form the basis of the RAID sets; depending on the sparing algorithm and system configuration, some chunklets are allocated as spares.

Logical disks
The logical disk (LD) layer is where the RAID functionality occurs. Multiple chunklet RAID sets, from different PDs, are striped together to form an LD. All chunklets belonging to a given LD will be from the same drive type; an LD can consist entirely of nearline (NL), FC, or SSD chunklets.

There are three types of logical disks:

1. User (USR) logical disks provide user storage space to fully provisioned virtual volumes.

2. Shared data (SD) logical disks provide the storage space for snapshots, virtual copies, thinly provisioned virtual volumes (TPVVs), or thinly deduped virtual volumes (TDVVs).

3. Shared administration (SA) logical disks provide the storage space for snapshot and TPVV administration. They contain the bitmaps pointing to which pages of which SD LD are in use.

The LDs are divided into regions, which are contiguous 128 MB blocks. The space for the virtual volumes is allocated across these regions.


Common provisioning groups
The next layer is the common provisioning group (CPG), which defines the LD creation characteristics, such as RAID type, set size, and disk type for chunklet selection, plus total space warning and limit points. A CPG is a virtual pool of LDs that allows volumes to share resources and to allocate space on demand. A thin provisioned volume created from a CPG automatically allocates space on demand by mapping new regions from the LDs associated with the CPG. For fully provisioned volumes, new LDs of the requested size are created and regions are mapped from these LDs.

CPGs and workloads
HPE 3PAR StoreServ performs efficiently for any type of workload, and different workloads can be mixed on the same array. These different workloads may need different service levels for their data. For example, for high-performance mission-critical workloads, it may be best to create volumes with RAID 5 protection on SSD or RAID 1 protection on fast class [FC or serial-attached SCSI (SAS) performance HDDs]. For less-demanding projects, RAID 5 on FC drives or RAID 6 on NL drives may suffice. For each of these workloads, you can create a CPG to serve as the template for creating virtual volumes (VVs) for the workload. VVs can be moved between CPGs with the HPE 3PAR Dynamic Optimization (DO) software command tunevv, thereby changing their underlying physical disk layout and hence their service level.

Virtual volumes
The top layer is the VV, which is the only data layer visible to hosts. VVs draw their resources from CPGs, and the volumes are exported as virtual logical unit numbers (VLUNs) to hosts.

A VV is classified by its type of provisioning, which can be one of the following:

• Full: Fully provisioned VV, either with no snapshot space or with statically allocated snapshot space

• TPVV: Thin provisioned VV, with space for the base volume allocated from the associated user CPG and snapshot space allocated from the associated snapshot CPG (if any)

• TDVV: Thin deduped VV, with space for the base volume allocated from the associated user CPG (SSD tier only) and snapshot space allocated from the associated snapshot CPG (if any). Note: TDVVs must reside entirely on an SSD tier and hence cannot be combined with AO

• CPVV: Commonly provisioned VV. The space for this VV is fully provisioned from the associated user CPG and the snapshot space allocated from the associated snapshot CPG

TPVVs and TDVVs associated with the same CPG share the same LDs and draw space from that pool as needed, allocating space on demand in small increments for each controller node. As the volumes that draw space from the CPG require additional storage, the HPE 3PAR OS automatically increases the logical disk storage until either all available storage is consumed or, if specified, the CPG reaches the user-defined growth limit, which restricts the CPG’s maximum size. The size limit for an individual VV is 16 TB.

Figure 2 illustrates the volume mapping for both non-tiered as well as tiered (adaptively optimized) volumes. For non-tiered VVs, each type of space (user, snap) is mapped to LD regions within a single CPG, and therefore, all space is in a single tier. For tiered VVs, each type of space can be mapped to regions from different CPGs.

Finally, remember that although this mapping from VVs to VV spaces to LDs to chunklets is complex, the user is not exposed to this complexity because the system software automatically creates the mappings.

The remainder of this white paper describes how Adaptive Optimization tiering is implemented and the benefits that can be expected.

Requirements HPE 3PAR Adaptive Optimization is a licensed feature of HPE 3PAR OS and is supported on all HPE 3PAR StoreServ Storage systems. For more information on HPE 3PAR Adaptive Optimization licensing, contact your HPE representative or authorized HPE partner. Creating and managing AO requires HPE 3PAR StoreServ Management Console (MC) 4.5 or later, or SSMC version 2.1 or later. To use the command line, you must install HPE 3PAR Command Line Interface 3.1.2 MU2 or later. Certain features and reports of HPE 3PAR Adaptive Optimization described in this paper are only available from HPE 3PAR SSMC 4.5 or later and HPE 3PAR OS 3.1.3 MU1 and later. The ability to specify minimum and maximum sizes of each tier is supported in HPE 3PAR OS 3.2.2 and later.


HPE 3PAR Adaptive Optimization Software

CPG as tiers in AO configuration
CPGs must be created before an AO configuration because CPGs are used as the tiers within AO. An AO configuration can have a maximum of three tiers, and a minimum of two tiers is required to define a new AO configuration. A CPG can be part of only one AO configuration, so every AO configuration needs a different set of CPGs. Virtual volumes that are not part of an AO configuration have all their regions mapped to LDs belonging to a single CPG. In the case of AO, however, VVs will have regions mapped to LDs from different CPGs (tiers). Data placement for a particular region is decided based on statistics collected and analyzed by AO for each region.

HPE 3PAR Adaptive Optimization leverages data collected by the on-node HPE 3PAR System Reporter (SR). The on-node SR periodically collects detailed performance and space data that AO uses to:

• Analyze the data to determine the volume regions that should be moved between tiers

• Instruct the array to move the regions from one CPG (tier) to another

• Provide the user with reports that show the impact of Adaptive Optimization

Refer to the HPE 3PAR System Reporter white paper for more details.

Data locality
Sub-LUN tiering solutions such as AO provide high value when there is a lot of data locality. Locality of data means that a limited area of the data address space used by a server or application receives a large percentage of the I/O requests compared to the rest of the data space. This is the result of how a server's applications use the data on the LUNs.

A relatively small portion of the data receives a lot of I/O and is hot, while a larger portion of the data space receives very few or no I/O and is considered cold. AO is not responsible for causing locality of data. Sub-LUN tiering solutions move small pieces of a LUN up and down between tiers based on how hot or cold the data pieces are. In the case of AO, the pieces are 128 MB regions.

Typically, over 80 percent of IOPS is served by less than 20 percent of addressable capacity. If such a LUN is part of a single tier, then that tier should be capable of handling the maximum IOPS requirement even when most of the capacity will not be accessed as frequently. With Adaptive Optimization, the LUN will be spread over multiple tiers; a small amount of capacity that is accessed heavily will be moved to the fastest tier and any capacity that is lightly accessed will be moved to the slowest and cheapest tier. Due to high data locality, the fastest tier can be small as compared to the other tiers and still provide the required IOPS. Figure 3 gives an example of how I/O will be distributed on a LUN that has a single tier and of a LUN that is spread across multiple tiers.

An application that does not have high data locality will have IOPS spread throughout the LUN and a small fast tier will not be a good fit; capacity allocated from the slower tiers will also have a similar access rate as the fastest tier. Such an application will not perform optimally with a tiered LUN. It should be deployed on a LUN created using a single tier.

The section about region I/O density reports will explain in detail how to find out if a particular group of applications is suitable for sub-LUN tiering.
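As an illustration only (not part of the product), the short Python sketch below shows how per-region I/O counters of the kind AO collects could be used to gauge data locality: it finds what fraction of 128 MB regions serves 80 percent of the I/O. The sample counters are hypothetical.

# Estimate data locality from per-region I/O counts (hypothetical sample data).
region_iops = [950, 720, 510, 40, 12, 8, 5, 3, 2, 0, 0, 0]  # I/O per minute, one value per 128 MB region
total_io = sum(region_iops)
sorted_iops = sorted(region_iops, reverse=True)

cumulative = 0.0
for i, io in enumerate(sorted_iops, start=1):
    cumulative += io
    if cumulative / total_io >= 0.80:        # capacity share serving 80% of the I/O
        pct_capacity = 100.0 * i / len(sorted_iops)
        print(f"{pct_capacity:.0f}% of regions serve 80% of the I/O")
        break

With this sample data, 25 percent of the regions serve 80 percent of the I/O; the smaller that fraction, the better the workload fits sub-LUN tiering.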


Figure 3. Data locality and I/O distribution

Tiering analysis algorithm
The tiering analysis algorithm that selects regions to move from one tier to another considers several factors, described in the following sections.

Space available in the tiers
When AO runs, it will first consider the available space in the tiers. If a tier has a CPG warning limit or a tier maximum value configured and the tier space exceeds the lower of these limits, AO will move regions to reduce space consumption below the limit. (Note: If the warning limit for any CPG is exceeded, the array will generate a warning alert.) If space is available in a faster tier, AO chooses the busiest regions to move to that tier. Similarly, if space is available in a slower tier, it chooses the idlest regions to move to that tier. The average tier service times and average tier access rates are ignored when data is being moved because the size limits of a tier take precedence.

Average tier service times
Normally, HPE 3PAR Adaptive Optimization tries to move busier regions in a slow tier into a higher performance tier. However, if a higher performance tier gets overloaded (too busy), performance for regions in that tier may actually be lower than for regions in a slower tier. In order to prevent this, the algorithm does not move any regions from a slower to a faster tier if the faster tier's average service time is not lower than the slower tier's average service time by a certain factor (a parameter called svctFactor). There is an important exception to this rule because service times are only significant if there is sufficient IOPS load on the tier. If the IOPS load on the destination tier is below another value (a parameter called minDstIops), then we do not compare the destination tier's average service time with the source tier's average service time. Instead, we use an absolute threshold (a parameter called maxSvctms).

Average tier access rate densities
When not limited, as described earlier, by lack of space in tiers or by high average tier service times, Adaptive Optimization computes the average tier access rate densities (a measure of how busy the regions in a tier are on average, calculated in units of IOPS per gigabyte per minute). It also compares them with the access rate densities of individual regions in each tier. It uses this data to decide whether a region should be moved to a different tier or remain in its current tier.

We first consider the algorithm for selecting regions to move from a slower to a faster tier. For a region to be considered busy enough to move from a slower to a faster tier, its average access rate density or accr(region) must satisfy these two conditions:

First, the region must be sufficiently busy compared to other regions in the source tier:

accr(region) > srcAvgFactorUp(Mode) * accr(srcTier)

Where accr(srcTier) is the average access rate density of the source (slower) tier and srcAvgFactorUp(Mode) is a tuning parameter that depends on the mode configuration parameter. Note that by selecting different values of srcAvgFactorUp for performance, balanced, or cost mode, HPE 3PAR Adaptive Optimization can control how aggressive the algorithm is in moving regions up to faster tiers. Performance mode will be more aggressive in moving regions up to faster tiers and keeping cooling regions in a higher tier. Cost mode will be more aggressive in moving regions down to lower tiers and keeping warm regions in a lower tier.


The values used in this algorithm may change from release to release, but in 3.2.1 MU3 the values used for srcAvgFactorUp are:

• srcAvgFactorUp(Performance mode) = 1.0

• srcAvgFactorUp(Balanced mode) = 1.5

• srcAvgFactorUp(Cost mode) = 2.0

Applying these values to the algorithm above, you can see that in Performance mode, accr(region) must be greater than 1.0 times the source tier accr in order to be moved to a higher tier. In Balanced mode, accr(region) must be at least 1.5 times the source tier accr (50% higher) to be moved; this higher threshold means fewer regions will be considered for movement to a higher tier. In Cost mode, accr(region) must be at least 2 times the source tier accr (100% higher) to be considered for a move. In short, Performance mode considers more regions for movement to a higher tier, while Cost mode considers fewer.

Second, the region must meet one of two conditions: it must be sufficiently busy compared with other regions in the destination tier, or it must be exceptionally busy compared with the source tier regions. This second condition is added to cover the case in which a very small number of extremely busy regions are moved to the fast tier, but then the average access rate density of the fast tier creates too high a barrier for other busy regions to move to the fast tier:

accr(region) > minimum((dstAvgFactorUp(Mode) * accr(dstTier)), (dstAvgMaxUp(Mode) * accr(srcTier)))

The algorithm for moving idle regions down from faster to slower tiers is similar in spirit—but instead of checking for access rate densities greater than some value, the algorithm checks for access rate densities less than some value:

accr(region) < srcAvgFactorDown(Mode) * accr(srcTier)

accr(region) < maximum((dstAvgFactorDown(Mode) * accr(dstTier)), (dstAvgMinDown(Mode) * accr(srcTier)))

The choice of mode (Performance, Balanced, or Cost) allows more or fewer regions to be considered for moves. Performance mode allows fewer regions to be considered for moves to lower tiers, while Cost mode allows more regions to be considered for moves to lower tiers. Balanced mode, as its name implies, is in the middle.

AO makes a special case for regions that are completely idle (accr(region) = 0). These regions are moved directly to the lowest tier, even when performance mode is selected.
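A minimal Python restatement of these move-up and move-down tests follows, purely as an illustration of the logic described above. The srcAvgFactorUp values come from the text; every other numeric value is a placeholder chosen for the example, not a value used by the HPE 3PAR OS, and the space and service-time checks described earlier are assumed to have already been passed.

# Illustrative restatement of the AO region-move tests.
# Only srcAvgFactorUp values are from the paper (3.2.1 MU3); all other values are placeholders.
PARAMS = {
    "Performance": {"srcAvgFactorUp": 1.0},
    "Balanced":    {"srcAvgFactorUp": 1.5},
    "Cost":        {"srcAvgFactorUp": 2.0},
}

def should_move_up(accr_region, accr_src_tier, accr_dst_tier, mode,
                   dst_avg_factor_up=1.0, dst_avg_max_up=4.0):      # placeholder values
    """Return True if a region is busy enough to move from a slower to a faster tier."""
    busy_vs_source = accr_region > PARAMS[mode]["srcAvgFactorUp"] * accr_src_tier
    busy_vs_dest = accr_region > min(dst_avg_factor_up * accr_dst_tier,
                                     dst_avg_max_up * accr_src_tier)
    return busy_vs_source and busy_vs_dest

def should_move_down(accr_region, accr_src_tier, accr_dst_tier,
                     src_avg_factor_down=0.5, dst_avg_factor_down=1.0,
                     dst_avg_min_down=0.25):                         # placeholder values
    """Return True if a region is idle enough to move from a faster to a slower tier."""
    if accr_region == 0:
        return True  # completely idle regions go straight to the lowest tier
    idle_vs_source = accr_region < src_avg_factor_down * accr_src_tier
    idle_vs_dest = accr_region < max(dst_avg_factor_down * accr_dst_tier,
                                     dst_avg_min_down * accr_src_tier)
    return idle_vs_source and idle_vs_dest

# Example: in Balanced mode, a region at 12 IOPS/GB/min in a source tier averaging 5
# passes the first test (12 > 1.5 * 5) and is then checked against the destination tier.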

Minimum and maximum space within a tier
HPE 3PAR InForm OS version 3.2.2 introduces the ability to manage the space AO consumes directly in each tier. The previous discussion illustrates how AO monitors the system and moves data between tiers in response to available space and region I/O density. This can result in little or no data in a given tier. In cases where tier 0 is an SSD tier and all SSD space is dedicated to AO, it is desirable to maximize the SSD space utilization. HPE 3PAR InForm OS version 3.2.2 introduces new options to the createaocfg, setaocfg, and startao commands to support a user-specified minimum or maximum size for each tier.

The new command options are -t0min, -t1min, and -t2min to set a minimum size for a tier and -t0max, -t1max, and -t2max to set a maximum size for a tier. Specifying a minimum size target for tier 0 of 400 GB (setaocfg -t0min 400G), for example, will instruct AO to utilize a minimum of 400 GB in tier 0. In this case, AO will move regions into tier 0 that qualify according to the region density data and additional regions, if needed, will also be moved into tier 0 to consume at least 400 GB in this tier. The additional regions will be selected according to the region density data, but with a relaxed threshold to meet the minimum space target.

The space specified with the min or max parameters can be specified in MB (default), GB (g or G), or TB (t or T). Specifying a size of 0 instructs AO not to enforce a minimum or maximum size target. Note the maximum parameters serve the same function as the CPG warning limit. If both are specified, the lower value will be enforced.

Special consideration must be given when using these new parameters in multiple AO configurations simultaneously or in more than one tier in a single AO configuration.

Consider two AO configurations each with a tier 0 minimum of 750 GB configured and only 1 TB of space available to tier 0. If each AO configuration is run sequentially, the first AO configuration to run will meet its minimum configuration of 750 GB leaving only 250 GB available to the second AO configuration. If both AO configurations are run together both AO configurations will consume tier 0 space at a similar rate until the space is exhausted. It is likely that neither AO configuration will meet its minimum configuration of 750 GB.


Consider a second case of a single AO configuration with tier minimums of 750 GB configured for each of the three tiers. If only 1 TB of VV space is consumed in the AO configuration, all three minimums cannot be met at the same time, and AO will use the configured AO mode to guide region moves. When performance mode is configured, regions will be moved to tier 0 first, leaving tier 2, and perhaps tier 1, below their minimums. Likewise, when cost mode is configured, regions will be moved to tier 2 first, leaving tier 0, and perhaps tier 1, below their minimums. Balanced mode is a compromise between performance mode and cost mode.

Example
A new array is purchased that includes eight 480 GB SSDs. The goal is to use all the space from all eight SSDs as tier 0 in an AO configuration. If the CPG holding the SSDs is configured to use RAID 5 (3+1), for example, the total usable space in this tier is about 2.6 TB. Executing the command setaocfg -t0min 2.6T <aocfg name> will cause AO to use all the SSD space.
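A back-of-the-envelope version of that capacity calculation is sketched below. The roughly 444 GiB of formatted capacity assumed per 480 GB SSD is an illustrative assumption; the authoritative usable-space figure should always be read from the array itself.

# Rough usable-capacity estimate for 8 x 480 GB SSDs in a RAID 5 (3+1) CPG.
# The ~444 GiB formatted capacity per drive is an assumption for illustration.
drives = 8
formatted_gib_per_drive = 444          # assumed formatted capacity of a 480 GB SSD
raid5_efficiency = 3 / 4               # RAID 5 (3+1): three data chunklets per parity chunklet

usable_gib = drives * formatted_gib_per_drive * raid5_efficiency
print(f"approx. usable space: {usable_gib / 1024:.1f} TiB")   # ~2.6 TiB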

Consult the 3.2.2 CLI Reference Manual or CLI help output for more details on command syntax.

Design tradeoff: Granularity of data movement
The volume space to LD mapping has a granularity of either 128 MB (user and snapshot data) or 32 MB (admin metadata), and that is naturally the granularity at which data is moved between tiers. Is that the optimal granularity? On the one hand, fine-grain data movement is better because a smaller region of busy data can be moved to high-performance storage without being forced to bring along additional idle data adjacent to it. On the other hand, a fine-grain mapping imposes a larger overhead because HPE 3PAR Adaptive Optimization needs to track the performance of a larger number of regions, maintain larger numbers of mappings, and perform more data movement operations. Larger regions also take more advantage of spatial locality (the blocks near a busy block are more likely to be busy in the near future than a distant block). A thorough analysis shows that using 128 MB regions for user data is the best granularity for moving data between tiers with Adaptive Optimization.
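To get a feel for the bookkeeping side of this tradeoff, the small sketch below counts how many regions would have to be tracked for a hypothetical 100 TiB of user space at a few candidate granularities; the 100 TiB figure is an assumption chosen purely for illustration.

# Number of regions AO must track for a hypothetical 100 TiB of user space
# at different region sizes (128 MB is the granularity AO actually uses).
user_space_mb = 100 * 1024 * 1024          # 100 TiB expressed in MiB
for region_mb in (32, 128, 1024):
    regions = user_space_mb // region_mb
    print(f"{region_mb:>5} MB regions -> {regions:,} regions to track")

At 128 MB that is roughly 800,000 regions for 100 TiB; halving or quartering the region size multiplies the tracking and data-movement overhead accordingly.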

HPE 3PAR Adaptive Flash Cache
HPE 3PAR OS 3.2.1 introduced a new feature called HPE 3PAR Adaptive Flash Cache (AFC), which provides read cache extensions by leveraging HPE 3PAR first-in-class virtualization technologies. This functionality allows dedicating a portion of SSD capacity as an augmentation of the HPE 3PAR StoreServ data read cache, reducing application response time for read-intensive I/O workloads. This feature can coexist with Adaptive Optimization and in fact is complementary to AO.

To understand how much of the existing SSD capacity should be allocated to AFC, refer to the HPE 3PAR Adaptive Flash Cache technical white paper. If customers already have Adaptive Optimization configured and no available space on the SSD tier, they may set a warning limit on the SSD tier CPG to free up some space to then allocate to AFC. If the array is running HPE 3PAR InForm OS 3.2.2 or later, space can be freed using the AO tier size limit options (e.g., -t0max) of the createaocfg, setaocfg, and startao commands. AFC will allocate the same amount of space on all node pairs, with a minimum of 128 GB per node pair, increasing in 16 GB increments.

HPE 3PAR Thin Deduplication
HPE 3PAR OS 3.2.1 MU1 introduces a new feature called HPE 3PAR Thin Deduplication, which allows provisioning of TDVVs to an SSD tier. While the feature can coexist on the same system where Adaptive Optimization is running and configured, TDVVs cannot be provisioned on a CPG that is part of an AO configuration, and the system will prevent creating an AO configuration if a CPG has TDVVs associated with it. A system can be configured with a shared pool of SSDs that may be used for sub-tiering (AO), cache augmentation (AFC), and provisioning of TDVVs, TPVVs, or full VVs.


Implementing Adaptive Optimization

Sizing for Adaptive Optimization
It's important to size correctly when setting up an HPE 3PAR array with Adaptive Optimization. Traditional approaches start with sizing the storage capacity of each tier. This approach fits nicely with easily available capacity information but fails to consider the performance needed to access that capacity. Sizing an array for Adaptive Optimization begins with performance; once the performance sizing is met, adjustments can be made to address capacity needs. Considering both performance and capacity ensures that performance and space concerns are addressed in the sizing exercise.

Performance sizing for Adaptive Optimization is best accomplished using region density reports. These reports show the suitability of a VV, or of all VVs in a CPG, for use with Adaptive Optimization. This approach requires that the VVs and workload already exist and have been measured. First, we will consider an AO sizing approach for when region density reports are not available; AO sizing using region density reports is covered later in the section Configuring AO using region I/O density reports. The following sizing approach may require adjustments after AO is implemented if the hot data does not fit on the SSD and FC tiers. This sizing approach targets the NL tier to perform 0% of the IOPS; observing significant IOPS to the NL tier after AO is implemented indicates adjustments are needed.

Default CPG
When new data (new VVs or new user space for a thin volume) is created, it will be written in the default CPG defined at volume creation time. The 3PAR AO best practice is to create VVs in the FC tier when there is an FC tier, and in the SSD tier in a 2-tier SSD+NL AO configuration. Adaptive Optimization will not migrate regions of data to other tiers until the next time the AO configuration is executed. It is, therefore, important that the default CPG have enough performance and capacity to accommodate the performance and capacity requirements of new applications that are provisioned to the system. It is a best practice to size the solution assuming the NL tier will contribute zero percent of the IOPS required from the solution.

Configuring AO using IOPS and Capacity Targets
Region density reports provide measurement data from a running system to use in sizing a 3PAR array using AO. When this data is not available, it may be necessary to size the array using IOPS and capacity targets. An IOPS target is an estimate of the total I/O operations per second the array may be asked to perform and is used to size the performance of the array. A capacity target is the amount of storage space required by the applications; it is necessary, but of secondary importance to performance, when sizing an array. An example of IOPS and capacity targets is to size an array for use with AO that needs to perform 125,000 IOPS with 450 TB of capacity.

AO Sizing for 2-Tier: FC+NL
Sizing an AO configuration for any set of tiers begins with performance. In all AO sizing, the NL tier is sized to handle 0% of the IOPS, which makes sizing a 2-tier FC+NL AO configuration straightforward. Begin by sizing the FC tier to handle 100% of the IOPS and add NL drives to meet the capacity target.

When sizing the FC AO tier, use a guideline of 150 IOPS per 10k FC drive and 200 IOPS per 15k drive. A target of 50,000 IOPS and 450 TB raw capacity, for example, can be sized by dividing the IOPS goal by 150 IOPS per 10k FC drive to determine how many 10k FC drives are required. In this case, 333 (50,000/150) 10k FC drives will meet the target of 50,000 IOPS. Choosing 600 GB 10k FC drives will result in the FC tier of an AO configuration performing the target 50,000 IOPS and providing 200 TB of raw capacity. The additional storage needed to meet the target of 450 TB can be provided by adding 63 x 4 TB NL drives.
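For readers who want to reproduce this arithmetic, here is a minimal Python sketch that uses only the guideline figures quoted above; drive capacities and rounding are illustrative, and this is not an official HPE sizing tool.

import math

# 2-tier FC+NL sizing from the guideline figures above (illustrative only).
iops_target = 50_000
capacity_target_tb = 450

iops_per_fc_10k = 150                   # guideline IOPS per 10k FC drive
fc_drive_tb = 0.6                       # 600 GB 10k FC drives
nl_drive_tb = 4.0                       # 4 TB NL drives

fc_drives = math.ceil(iops_target / iops_per_fc_10k)   # 334 when rounded up; the text rounds to 333
fc_capacity_tb = fc_drives * fc_drive_tb
nl_drives = math.ceil((capacity_target_tb - fc_capacity_tb) / nl_drive_tb)

print(f"FC tier: {fc_drives} drives ({fc_capacity_tb:.0f} TB raw)")
print(f"NL tier: {nl_drives} drives to reach {capacity_target_tb} TB raw")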

AO Sizing for 2-Tier: SSD+FC
Sizing an AO configuration with a 2-tier configuration using SSD and FC will also begin by performance sizing the FC tier. The same guidelines of 150 IOPS per 10k rpm FC drive and 200 IOPS per 15k rpm drive are used to start. A target of 100,000 IOPS and 425 TB raw capacity, for example, can be sized by dividing the IOPS goal of 100,000 by 150 IOPS per 10k FC drive to determine how many 10k FC drives are required. In this example, 667 (100,000/150) 10k FC drives will meet the target of 100,000 IOPS.

Next we choose a target SSD configuration and calculate how many FC drives can be removed from the configuration to remain at the target IOPS level. IOPS guidelines for SSDs can be used like the FC IOPS guidelines and traded off to maintain the target IOPS while changing the mix of drives. Use an AO configuration IOPS guideline of 1100 IOPS for the 480 GB MLC SSDs and 4000 IOPS for the 1920 GB and 3840 GB SSDs.


Choosing a configuration of 48 x 480 GB SSDs will result in an SSD tier capable of 52,800 (48 * 1100) IOPS. Reduce the number of drives in the FC tier by the same 52,800 IOPS to keep the same target IOPS of 100,000. The above configuration included 10k rpm FC drives, so we use the 150 IOPS per drive guideline to calculate the number of 10k FC drives we can remove from the configuration. The result is that 352 (52,800 SSD IOPS / 150 IOPS per 10k FC drive) 10k FC drives can be removed from the configuration to trade the FC IOPS for the SSD IOPS. The 2-tier AO configuration to provide 100,000 IOPS has now been sized with 48 x 480 GB SSDs and 315 (667-352) 10k FC drives.

The last step is to make adjustments to meet the capacity target of 425 TB by adding FC drives. The configuration so far includes 48 x 480 GB SSDs providing 23 TB raw. Choosing 315 of the 1.2 TB 10k FC drives will provide 377 TB of raw capacity and a total of 400 TB (23 TB SSD + 377 TB FC) raw capacity. 25 TB of additional raw capacity is required to meet the target of 425 TB raw and can be achieved by adding 20 x 1.2 TB 10k FC drives. The AO configuration sized to meet 100,000 IOPS and 425 TB raw capacity using a 2-tier SSD+FC configuration is 48 x 480 GB SSDs and 335 (315+20) 1.2 TB 10k FC drives.
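The SSD-for-FC trade-off can be sketched the same way; again, this only restates the guideline arithmetic above (with the small rounding differences that come from exact drive capacities) and is not an official sizing tool.

import math

# 2-tier SSD+FC sizing: trade FC IOPS for SSD IOPS using the guideline figures above.
iops_target = 100_000
capacity_target_tb = 425

iops_per_fc_10k, fc_drive_tb = 150, 1.2        # 1.2 TB 10k FC drives
iops_per_ssd_480, ssd_drive_tb = 1_100, 0.48   # 480 GB SSD guideline

fc_drives = math.ceil(iops_target / iops_per_fc_10k)   # ~667 drives if FC carried all the IOPS
ssd_drives = 48
ssd_iops = ssd_drives * iops_per_ssd_480                # 52,800 IOPS
fc_drives -= ssd_iops // iops_per_fc_10k                # remove 352 FC drives

raw_tb = ssd_drives * ssd_drive_tb + fc_drives * fc_drive_tb
extra_fc = math.ceil((capacity_target_tb - raw_tb) / fc_drive_tb)
print(f"{ssd_drives} SSDs + {fc_drives + extra_fc} FC drives "
      f"for {iops_target} IOPS and >= {capacity_target_tb} TB raw")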

AO Sizing for 3-Tier: SSD+FC+NL
Sizing an AO configuration for a 3-tier configuration using SSD, FC, and NL tiers begins, like the other examples, with performance sizing of the FC tier. The sizing process for a 3-tier SSD+FC+NL AO configuration follows the prior discussion of sizing a 2-tier SSD+FC AO configuration with one minor modification. Sizing the FC tier and trading SSD IOPS for FC IOPS remains the same. The only difference is the last step, where the configuration is adjusted to meet the capacity target by adding NL drives instead of FC drives.

Using a target of 125,000 IOPS and 450 TB raw as an example, start by sizing the FC tier using the previous guidelines of 150 IOPS per 10k rpm FC drive and 200 IOPS per 15k rpm FC drive. Using 15k rpm FC drives in this example, 625 drives are needed to meet the target of 125,000 IOPS. Adding 64 x 480 GB SSDs will add 70,400 IOPS, allowing the SSDs to replace the IOPS of 352 15k rpm FC drives (70,400/200 = 352).

The AO configuration after sizing the FC and SSD tiers is made up of 64 x 480 GB SSDs holding 31 TB raw and 273 (625-352) 15k rpm FC drives. If we choose 600 GB 15k rpm FC drives, this represents 164 TB of raw space and, when adding the 31 TB raw from the SSD tier, reaches a total of 195 TB raw. The NL tier will be sized to provide 0% of the IOPS target and enough capacity to fill the gap from 195 TB raw in the SSD and FC tiers to the target of 450 TB raw. The NL tier must provide 255 TB raw (450-195) capacity, which can be achieved with 64 x 4 TB NL drives. In this example, a 3-tier AO configuration is sized to provide a target of 125,000 IOPS and 450 TB raw with 64 x 480 GB SSDs in tier 0, 273 x 600 GB 15k rpm FC drives in tier 1, and 64 x 4 TB NL drives in tier 2.

AO Sizing for 2-Tier: SSD+NL
A 2-tier AO configuration using SSD and NL tiers is sized following the same process as the earlier example of a 2-tier configuration with FC and NL, with some additional considerations. One of these considerations is the difference in average service times of the SSD and NL tiers. This difference is much greater than the average service time difference of any other pair of tiers in the AO configurations previously discussed. As a result, users may be more aware of tiering when heavily accessed data ends up on the NL tier. It is always important to understand how your data is accessed when setting up an AO configuration, but a 2-tier configuration with SSD and NL will magnify any deficiencies in the configuration.

Adaptive Flash Cache (AFC) may be helpful to minimize the impact of the average service time differences for some workloads. AFC uses SSD to extend the read cache for small block (64k or less) random reads. SSD space will need to be balanced between AO and AFC, but for some workloads this is a very effective mix of 3PAR technologies. Refer to the Adaptive Flash Cache White Paper for more details.

In addition to considering AFC, it is recommended to create VVs that will be part of a 2-tier (SSD+NL) AO configuration in the SSD CPG. This will cause new space allocations (new data written) to be allocated first to the SSD tier and later moved according to the AO policy.

AO sizing of a 2-tier configuration using SSD and NL tiers begins with performance of the SSD tier. In this example the target is 140,000 IOPS and 350 TB raw space. Calculate the number of SSDs required to meet the IOPS target using the AO configuration guidelines of 1100 IOPS per 480 GB SSD or 4000 IOPS per 1920 GB or 3840 GB SSD. The 3840 GB SSDs can meet the target of 140,000 IOPS with 36 drives providing 138 TB raw capacity. An additional 212 TB raw capacity is needed to meet the capacity target of 350 TB and can be provided with 36 x 6 TB NL drives. The 2-tier SSD+NL AO configuration to meet the target of 140,000 IOPS and 350 TB raw capacity is 36 x 3.84 TB SSDs and 36 x 6 TB NL drives.
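The same guideline arithmetic applies here; the short sketch below simply checks the figures used in this example and is illustrative only.

import math

# 2-tier SSD+NL sizing check using the guideline figures above.
iops_target, capacity_target_tb = 140_000, 350
iops_per_ssd, ssd_drive_tb = 4_000, 3.84    # 3840 GB SSD guideline
nl_drive_tb = 6.0

ssd_drives = 36                              # 36 x 4000 = 144,000 IOPS >= 140,000 target
ssd_tb = ssd_drives * ssd_drive_tb           # ~138 TB raw
nl_drives = math.ceil((capacity_target_tb - ssd_tb) / nl_drive_tb)
print(f"{ssd_drives} SSDs ({ssd_tb:.0f} TB raw) + {nl_drives} x 6 TB NL drives")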


Configuring AO using region I/O density reports
Region I/O density reports are used to identify applications and volumes that are suitable for AO. Starting with HPE 3PAR OS 3.1.3 MU1, a System Reporter license is not needed to produce region I/O density reports. The cumulative region I/O density report is most useful when setting up AO, and the regular bar chart-based region I/O density report is useful for insight into how AO has performed over time. These reports can be run via the command line or from the HPE 3PAR SSMC.

Cumulative region I/O density reports
These reports give a good indication of the locality of data for a particular CPG over a defined interval of time. They can be generated against CPGs and AO configurations; when setting up a configuration for the first time, reports should be generated against single CPGs, as they will help identify CPGs that are good candidates for an AO configuration. There are two types of cumulative reports: percentage-based and numbers-based. The percentage-type report has percent capacity on the X-axis and percent access rate on the Y-axis, whereas the numbers-type report has total capacity on the X-axis and total access rate on the Y-axis. Figure 4 gives an example of the percentage-type cumulative region I/O density report.

Figure 4. Cumulative region I/O density report–percentage

A CPG is a possible candidate for AO if its curve is in the top left corner. Such a CPG serves most of its I/O from a small amount of addressable capacity. The report in figure 4 has two visible curves, both in the top left corner, so both are possible candidates for AO. The red curve shows that almost 90 percent of the I/O for that CPG is serviced by 5 percent of the capacity. Similarly, the green curve shows that almost 90 percent of the I/O for that CPG is serviced by 1 percent of the capacity.

From this report, it seems that it would help to add the two CPGs to an AO configuration and let AO move this hot capacity to SSD. But this is a normalized report based on percentages, so it does not show the actual IOPS or capacity numbers. We first need to find out how much capacity is hot and how much total I/O this small amount of capacity is serving. This information is given by the numbers-type cumulative region I/O density report.

Figure 5 gives an example of the numbers-type report. Here capacity in GiB is on the X-axis and access rate in IO/min is on the Y-axis. In this report, we see that it will be useful to move the FastClass.cpg into an AO configuration that uses SSDs for tier 0. However, OS_Images, which was in the top left corner in figure 4, receives very little I/O, so it would not be beneficial to move this CPG into an AO configuration.


Figure 5. Cumulative region I/O density report—numbers

Region I/O density reports
Region I/O density is an indication of how busy a region of data is. A region I/O density report is a set of histograms with I/O rate buckets. The space GiB histogram shows the capacity in GiB for all regions in each I/O rate bucket. The IO/min histogram shows the total maximum IOs/min for the regions in each I/O rate bucket. The example results in figure 6 describe region I/O density after HPE 3PAR Adaptive Optimization has run for a while.

Both charts are histograms, with the X-axis showing the I/O rate density buckets; the busiest regions are to the right and the idlest regions are to the left. The chart on the left shows on the Y-axis the capacity for all the regions in each bucket, while the chart on the right shows on the Y-axis the total maximum IO’s/min for the regions in each bucket. As shown in the charts, the FC tier (tier 1) occupies very little space but absorbs most of the I/O accesses, whereas the nearline tier (tier 2) occupies most of the space but absorbs almost no accesses at all. This environment is a good fit for Adaptive Optimization.

Figure 6. Region I/O density report for AO configuration with two tiers
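As a rough illustration of how such histograms are built, the sketch below buckets hypothetical per-region access rates into I/O rate buckets and sums capacity and I/O per bucket. The bucket edges and sample data are invented for the example and do not reflect the actual System Reporter bucket boundaries.

from collections import defaultdict

# Build the two region I/O density histograms (space and I/O per bucket)
# from hypothetical per-region access rates; bucket edges are illustrative.
REGION_MB = 128
buckets = [0, 1, 4, 16, 64, 256, 1024]                      # IO/min bucket lower edges (illustrative)
region_io_per_min = [0, 0, 0, 2, 3, 7, 25, 90, 400, 1500]   # hypothetical regions

space_gib = defaultdict(float)
io_per_min = defaultdict(int)
for io in region_io_per_min:
    edge = max(b for b in buckets if io >= b)                # bucket this region falls into
    space_gib[edge] += REGION_MB / 1024
    io_per_min[edge] += io

for edge in buckets:
    print(f">= {edge:>4} IO/min: {space_gib[edge]:.3f} GiB, {io_per_min[edge]} IO/min total")

Plotting the first column of the output gives the space histogram and the second column gives the IO/min histogram, analogous to the left and right charts in figure 6.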

There is also a cumulative histogram that adds up all the bucket values from left to right. Figure 7 shows the cumulative region I/O density report for the same AO configuration as shown in figure 6.


Figure 7. Cumulative Region I/O density report for AO configuration with two tiers

Using these charts together, we can get a view into how densely I/O values are grouped across the data space and determine how large different tiers of storage should be. These reports are most useful when run against an AO configuration as they display the distribution of space and I/O across all tiers in the AO configuration.

This report, when run daily, offers insight into the data locality of the workload. Too much "blue" in the buckets on the right side of the IO/min report indicates that cold regions are heating up every day, and perhaps that the workload has too little data locality. In the example in figures 6 and 7, the "hot" NL data consumes about 250 GB and 1000 IO/min, which is not a problem. If this I/O grew significantly higher, however, it could indicate a data locality concern with some of the VVs in the configuration.

From HPE 3PAR OS version 3.1.3 onwards, you can display a region density report for each VV in an AO configuration or CPG. This report has two use cases: to find which VVs are best suited for AO and to find whether certain VVs need a different AO policy. Certain VVs could have a different I/O profile than the rest of the VVs in the Adaptive Optimization configuration (AOCFG). You might find that some VVs are not well suited for Adaptive Optimization because they do not have enough locality of data. Using the per-VV region density report, you can find such VVs and move them to a CPG outside the AO configuration using Dynamic Optimization. Figure 8 shows the region density reports for some of the VVs in an AO configuration with two tiers. As shown in the chart, the volumes using the FC tier receive more I/O but consume very little space, whereas the volumes using the NL tier receive very little I/O but consume most of the space.

Figure 8. Region I/O density report for AO configuration with individual VVs


Creating and managing AO
Simple administration is an important design goal, which makes it tempting to automate Adaptive Optimization completely so that the administrator need not configure anything. However, analysis indicates that some controls are, in fact, desirable for administrative simplicity. Since HPE 3PAR StoreServ Storage is typically used for multiple applications—often for multiple customers—HPE allows administrators to create multiple Adaptive Optimization configurations so that they can use different configurations for different applications or customers. Figure 9 shows the configuration settings for an Adaptive Optimization configuration.

Figure 9. Configuration settings

An AO configuration is made up of up to three tiers, each corresponding to a CPG. In figure 9, an SSD CPG is used for tier 0, a RAID 1 FC CPG is used for tier 1, and a RAID 5 FC CPG is used for tier 2. The warning and limit values displayed in figure 9 have to be configured for each CPG individually by editing the CPG. More details on how to edit the warning and limit for a CPG are available in the HPE 3PAR SSMC user guide.

Make sure to define tier 0 to be on a higher performance level than tier 1, which in turn should be higher performance than tier 2. For example, you may choose RAID 1 with SSDs for tier 0, RAID 5 with FC drives for tier 1, and RAID 6 with NL or SATA drives for tier 2.

Best practice is to begin your Adaptive Optimization configuration with your application CPG as tier 1; for example, tier 1 could be a CPG using your FC or SAS physical disks. This allows you to add a higher tier (such as tier 0 using SSDs), a lower tier (such as tier 2 using NL), or both at a later date by adding a new CPG. You could also build the tiers from CPGs using RAID 1, RAID 5, and RAID 6. The main point is that you should begin with the middle tier (tier 1) when configuring Adaptive Optimization with your application.

It is also important to specify the schedule for when a configuration will move data across tiers, along with the measurement duration preceding the execution time. This allows the administrator to schedule data movement at times when its additional overhead is acceptable (for example, non-peak hours). You can also set a schedule for when Adaptive Optimization should stop working before the next measurement period.

The following data movement modes are available:

1. Performance mode—biases the tiering algorithm to move more data into faster tiers.

2. Cost mode—biases the tiering algorithm to move more data into the slower tiers.

3. Balanced mode—is a balance between performance and cost.

The mode configuration parameter does not change the basic flow of the tiering analysis algorithm, but rather it changes certain tuning parameters that the algorithm uses.
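As an illustrative sketch only, the mode of an existing AO configuration could be changed from the CLI. This assumes that the setaocfg command accepts a -mode option with these values and uses a placeholder configuration name; verify the option in the HPE 3PAR Command Line Interface reference guide:

setaocfg -mode Cost AOCFG_PROD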

If the SSD tier is used only for AO, it is recommended to disable the raw space alert for the SSD tier. AO manages the amount of space used on each tier and can fill more than 90 percent of a small SSD tier, which will generate alerts about raw space availability. These alerts can be ignored if the SSD tier is used only by AO, and they can be disabled using the following command:

setsys -param RawSpaceAlertSSD 0


In addition to disabling the raw space alerts when the entire SSD tier is used only for AO, a tier minimum value can be set. HPE 3PAR OS 3.2.2 and later include the ability to set tier minimum and maximum values. These options were designed to address the use case of an environment where all SSD space is dedicated to AO. Use this option by first calculating the usable space in the SSD tier available to AO. A configuration of 8 x 480 GB MLC SSDs in a RAID 5 CPG, for example, results in about 2.6 TB of usable space available to AO. Specify this as the tier minimum for AO with a command such as setaocfg -t0min 2.6T CPG_SSD_r5. Tier minimum and maximum values may also be set with the createaocfg and startao commands.

Scheduling AO Policies

After creating the AO configuration, it has to be scheduled to run at a regular interval. Figure 10 shows how to create the scheduled task for an AO configuration. The maximum run time specifies how long AO may spend moving regions once it is started. The measurement interval specifies the duration over which the AO configuration is analyzed for data movement. The schedule specifies when to run the AO task; the task can recur once, daily, multiple times daily, or on an advanced schedule. Figure 11 shows details of the scheduled tasks.
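An equivalent schedule can also be created from the CLI with the createsched command. The following is a sketch only; the AO configuration name, measurement window, run-time limit, start time, and task name are illustrative:

createsched "startao -btsecs -2h -maxrunh 4 AOCFG_PROD" "0 22 * * *" ao_nightly_prod

This would create a task named ao_nightly_prod that runs every day at 22:00, analyzes the preceding two hours of activity, and limits the run to four hours.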

Figure 10. Schedule AO configuration

Figure 11. 3PAR Activity View


The AO Space Moved Report shown in figure 12 provides details about the capacity migrated across tiers. This report can be used to verify the effectiveness of the AO configuration and should be checked regularly.

Figure 12. AO Space Moved Report

The space moved report shows the movement of data between tiers. When AO is first implemented, the amount of data moved in each AO run will start at some level; in one example, AO moved about 500 GB of data per day between tier 1 and tier 0. Over time, the space moved by AO should decline and stabilize in a lower range for data that is a good fit for AO. In the same example, the space moved declined from 500 GB to 25 GB–40 GB per day after a few weeks. Space moved reports that show large moves every time AO runs, or large regular spikes in data moved, may indicate that the data locality is not a good fit for AO. Large moves of cold data from the lowest tier to a higher tier are cause for investigation into the application's data locality.

Freeing unused space from CPGs using compact operation

The startao command has a -compact auto option that runs compactcpg only if one or more of the following conditions are met (otherwise, compactcpg is not run):

1. There is unused space in the CPG and the current allocated space is above the warning limit.

2. The unused space in the CPG is more than a certain fraction (25 percent) of the CPG space. This is the total unused space across user, snapshot, and admin space.

3. The space available to grow the CPG (i.e., free chunklets for the CPG) is less than four times the CPG growth increment. This can be examined by comparing the LDFree output of showspace -cpg with showcpg -sdg, as shown in the example following this list.
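As a hedged illustration (the CPG name is a placeholder), the two outputs can be compared with commands such as:

showspace -cpg CPG_FC_r5
showcpg -sdg CPG_FC_r5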

When run as part of AO, compactcpg has a limited scope by design to minimize overhead. If a full compaction is desired, compactcpg can be scheduled separately from startao, as sketched below.
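A minimal sketch of such a separate schedule, assuming an illustrative CPG name, schedule, and task name, and following the createsched usage shown later in this paper:

createsched "compactcpg CPG_NL_r6" "0 3 * * 0" weekly_compact_nl

This would run compactcpg against CPG_NL_r6 every Sunday at 03:00, independently of the AO schedule.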

StartAO Output

Scheduling an AO policy causes the startao command to execute. It is recommended to monitor the effectiveness of AO using the space moved reports as just discussed. The startao command output also includes information that can help in understanding AO. The output can be found in the Schedules tab of SSMC and is provided as command line output when run from the CLI.

The first two lines in the startao output identify the time period that will be analyzed, sometimes called the measurement period. Region moves will be based on the activity of the system during this time interval. If you choose a measurement period that does not represent the desired tuning timeframe, the results may be undesirable.

Start sample: 2015-11-05 16:30:00 (1446766200)

End sample: 2015-11-05 18:30:00 (1446773400)

The analysis phase is next. During this phase, data representing the specified measurement period is gathered from On Node System Reporter and the necessary calculations are made. All data needed to select regions to be moved, such as the Average Access Rate Density of a Region (ACCR), is computed during this phase. When the calculations are complete, AO checks the data against the -min_iops value optionally set in the schedule. The -min_iops parameter, discussed in the later section titled “Using min_iops option,” defaults to 50. If the system was not sufficiently busy during the measurement period, AO stops after the analysis phase and no region moves are scheduled.


StartAO has now been initialized, and the needed data has been collected or calculated. The next phase schedules region moves. The first region moves considered by AO are based on available space: AO checks the CPGs in the AO configuration for free space, and if one or two of the CPGs do not have sufficient free space, AO attempts to move regions between tiers to create it. If sufficient free space is unavailable on all tiers, AO stops. This is also the phase where AO considers any tier minimum or maximum values as described earlier in the section titled “Minimum and Maximum Space within a Tier.” AO will schedule region moves, if necessary, to satisfy any minimum or maximum settings.

When space considerations are complete, a series of iterations begins to consider region moves based on data analyzed from the measurement period. Each iteration begins with a line in the output that looks like this:

Starting genMoves iteration: #

Where # is the iteration number starting with 0.

The next line in each iteration identifies the tiers and the AO mode used in considering moves.

Starting genMoves 1 0 busiest Performance

Where the numbers 1 and 0 are the tiers (tier 1 and tier 0), busiest is the criterion, and Performance is the AO mode.

Each iteration will move a maximum of about 16 GiB of regions between tiers. The space moved in any iteration will be reported in the startao output with a line like the following:

Criteria: busiest Move from tier1 to tier 0 16448.0 MiB

This line reports the move criterion (busiest in this example), the tiers involved, and the space to be moved. If the space to be moved is not zero, the next several lines of the output list the commands that schedule the region moves. This process repeats until AO finds no more regions to move or it reaches an ending condition. In some configurations, a large number of regions need to be moved to satisfy the AO configuration parameters, so ending conditions provide a reasonable limit on the duration of a single run of AO. Ending conditions include reaching the time limit set in the AO schedule, or in the -maxrunh parameter when run from the CLI. Other ending conditions are based on the total amount of data moved. The total space moved for an AO run can be calculated by summing the space reported on all “Criteria:” lines in the startao output. In a recent example, there were over 300 “Criteria:” lines reporting a total of more than 4 TB of space moved.

The last phase of the AO run is compaction. The goal of this phase is to free up regions which have been moved to other tiers. The approach is to reclaim space when conditions are right, but not to provide a full space reclaim for all CPGs.

Using min_iops option

From HPE 3PAR OS version 3.1.3 onwards, a new option, -min_iops, is available for the startao command. If this option is used, AO will not execute the region moves if the average LD IOPS for the AOCFG during the measurement interval is less than the <min_iops> value specified with this option. If -min_iops is not specified, the default value is 50. The -min_iops option can be used to prevent movement of normally busy regions to slower tiers if the application associated with an AOCFG is down or inactive (i.e., total LD IOPS for the AOCFG is less than <min_iops>) during the measurement interval. This ensures that when the application resumes normal operation, its busy regions are still in the faster tiers.
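As a sketch only (the AOCFG name, threshold, and measurement window are illustrative), the option can be combined with the other startao parameters shown in this paper:

startao -btsecs -2h -min_iops 200 AOCFG_PROD

With this command, AO analyzes the preceding two hours and skips region moves entirely if the AOCFG averaged fewer than 200 LD IOPS over that window.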

Removing a tier from an AO configuration

A CPG is no longer tracked by AO after it is removed from the AO configuration, and any existing data in that CPG will not be moved to other CPGs. Hence, it is important to move all data out of a tier before it is removed from the AO configuration. The easiest way to drain the tier is to set the CPG warning level to 1 or to set the tier maximum value to 1. This hints to AO not to move any new data to this CPG and to move existing data from this tier to the other CPGs. Note that setting the tier maximum to 0 disables space considerations for that tier by AO and does not cause all data to be removed from the tier. Setting the maximum value to 1, meaning 1 MB, minimizes the data on the tier.
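A minimal sketch of draining tier 2, assuming a -t2max option that parallels the -t0min option shown earlier; the option name, size syntax, and configuration name are assumptions to be verified in the HPE 3PAR Command Line Interface reference guide:

setaocfg -t2max 1M AO_CONFIG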

Configuring the SSD pool to be used by multiple CPGs

The SSD tier may be used by CPGs that are not part of an AO configuration. In this case, it is a best practice to set warning limits on the SSD CPGs used by AO, or tier limits when running 3.2.2 or later. This helps prevent the system from allocating all available free space to the AO tier 0 CPG.

For example, if a system has 10 TB of usable SSD capacity, the user can create an AO_tier0_SSDr5 CPG and an App1_SSDr5 CPG, and set a warning limit on the AO CPG (or a tier maximum in the AO configuration) of 5 TB so that no more than 5 TB of usable SSD is allocated to that AO configuration. For the other CPG, App1_SSDr5, users can decide to set warnings or limits on the CPG or its VVs, depending on how they want to manage the shared pool of space and when they want to be alerted that the pool is running short of space.
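As a hedged sketch of the CPG warning approach, assuming the setcpg growth-warning option -sdgw (confirm the exact option in the HPE 3PAR Command Line Interface reference guide):

setcpg -sdgw 5T AO_tier0_SSDr5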


Using AO configuration with VVsets

Starting with HPE 3PAR OS 3.2.1, a customer can configure AO region moves at a more granular level than all of the volumes within a CPG. This is done by scheduling region moves at the level of a virtual volume set (VVset), which serves as a proxy for an application.

This option is useful if a customer wants to have multiple volumes within the same AO Configuration CPG and schedule data movements at different times with different schedules and modes.

A VVset is an autonomic group object that is a collection of one or more virtual volumes. VVsets help to simplify the administration of VVs and reduce human error. An operation like exporting a VVset to a host will export all member volumes of the VVset, and adding a volume to an existing VVset will automatically export the new volume to the host or host set the VVset is exported to.

VVsets have a number of use cases beyond reducing the administration of their volume members, such as enabling simultaneous point-in-time snapshots of a group of volumes with a single command. There is no requirement that volumes in a VVset have to be exported to hosts for an AO configuration to work at the VVset level.

An environment may have workloads with two distinct I/O profiles: profile_1 during business hours and profile_2 outside business hours.

For profile_2, only a subset of volumes is used, and these volumes can be logically assigned to a VVset called AO_Profile_2.
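As a hedged sketch (the member volume names are placeholders; confirm the createvvset syntax in the HPE 3PAR Command Line Interface reference guide), such a VVset could be created and populated with:

createvvset AO_Profile_2 batch_vol.0 batch_vol.1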

In order to make the volume layout ready for out-of-business-hours activity, one can schedule an Adaptive Optimization run that targets only the volumes belonging to VVset AO_Profile_2, without impacting other volumes that are part of the AOCFG:

• 8:00 PM—Run AO against AO_Profile_2 using the measurement window from last night (12:00 AM to 5:00 AM). Maximum run time of 4 hours.

• 5:00 AM—Run AO on the entire AO configuration to prepare for daytime activity, using the measurement window from the previous day (9:00 AM to 7:00 PM). Maximum run time of 4 hours.

Alternatively, instead of running AO against all volumes, it is possible to create a VVset AO_Profile_1 and run AO only against that VVset. Without this feature, two different AO configurations, and hence different sets of CPGs, would have been needed.

A new option, -vv, which takes a comma-separated list of VV names or set patterns, has been added to three commands: startao, srrgiodensity, and sraomoves.

If the -vv option is used with startao, the command runs AO on the specified AOCFG but applies only to the matching VVs. This allows a user to run AO on separate applications using the same AOCFG.

For srrgiodensity, the -vv option can be used to filter the volumes in the report when used along with the -withvv option.

The -vv option for sraomoves allows the user to see the region moves associated with individual VVs. The min_iops option described in the previous sections can be used at a more granular level when implementing AO with VVsets.

This option is configurable via command line only; for more information refer to the HPE 3PAR Command Line Interface reference guide.

The following example schedules AO moves every day at 23:00; this command only moves regions belonging to VVs in VVset AO_Profile_2.

createsched "startao -btsecs -1h -vv set:AO_Profile_2 AOCFG" "0 23 * * *" AO_out_of_business_hours


Adaptive Optimization with Remote Copy

Although Adaptive Optimization coexists and works with replicated volumes, it is important to take the following considerations into account when using AO on destination volumes for HPE 3PAR Remote Copy. The I/O pattern on the Remote Copy source volumes is different from the I/O pattern on the Remote Copy target volumes; hence, AO will move regions differently on the primary volumes than on the destination volumes.

• With synchronous replication and asynchronous streaming replication, the target volumes will receive all the write I/O requests; however, the target volume will see none of the read requests on the source volume. AO will only see the write I/O on the target volumes and will move the regions accordingly. If the application is read intensive, then the hot regions will be moved to tier 0 on the source array but will not be moved to tier 0 on the target array. Upon failover to the remote array, the performance of the application may be impacted as the region that was hot and in tier 0 earlier (on the source array) may not be in tier 0. This scenario also applies to HPE 3PAR Peer Persistence.

• With periodic asynchronous replication mode, write I/O operations are batched and sent to the target volumes periodically; hence, the I/O pattern on target volumes is very different from that on source volumes. If AO is configured on these volumes, the resulting sub-tiering will be different from the sub-tiering done on the source volumes.

Considering the above scenarios, the data layout on the target volumes may differ on the secondary system, and in case of a failover, performance levels may differ as well. In these cases, a good way to rebalance the hottest data to the upper tiers is to manually run the AO policy about 30 minutes after failover, so that the system has enough I/O statistics for the new I/O profile to identify the right regions to move.

Use case: Accelerating workloads by adding SSDs

In a two-tier configuration with SSD and fast-class drives, AO can provide performance acceleration by lowering the average service times. In the example provided, a customer had an array with 120 FC drives sustaining a back-end load of over 30,250 IOPS. Each drive was serving around 250 IOPS with a service time of over 20 ms. Figure 13 shows the IOPS and service times reported by the statport command. The customer engaged Hewlett Packard Enterprise to help resolve the performance problem.

Figure 13. Back-end IOPS and service time before implementing AO

From the output of the HPE 3PAR StoreServ CLI command statport, it was clear that the array was receiving far more I/O than it was sized for. Moreover, the customer had not yet completed the migration activity and expected significantly more load on the array. The customer had two options: add more FC drives, or analyze whether adding SSD drives and enabling AO would help. Figure 14 shows the region I/O density report for the FC CPG.


Figure 14. Region I/O density report

The customer decided to add 24 SSD drives and use AO. Figure 15 shows the statpd report after enabling AO: the SSD drives are now serving around 1,000 IOPS each and the FC drives around 200 IOPS each. Figure 16 shows how service times dropped from over 20 ms to less than 10 ms.

Figure 15. PD IOPS after implementing AO


Figure 16. Service time reduction as result of AO

By enabling AO, the customer was able to reduce the IOPS on the FC drives and thereby improve response times. The I/O profile also had good locality of data, which helped AO place the hot regions in the SSD tier.

Lowering cost per GB by configuring a three-tier configuration with SSD, FC, and NL

This section describes the real benefits that a customer obtained from using HPE 3PAR Adaptive Optimization. The customer had a system with 96 300 GB 15k rpm FC drives and 48 1 TB 7.2k rpm NL drives, with 52 physical servers connected and running VMware® with more than 250 virtual machines (VMs).

The workload was mixed (development and QA, databases, file servers, and more), and the customer needed more space to accommodate many more VMs that were scheduled to be moved onto the array. However, they faced a performance issue: they had difficulty managing their two tiers (FC and NL) in a way that kept the busier workloads on the FC disks. Even though the NL disks had substantially less performance capability (because there were fewer NL disks and they were much slower), they had larger overall capacity.

As a result, more workloads were allocated to the NL disks, which tended to be busier while incurring long latencies. The customer considered two options: purchase an additional 96 FC drives, or purchase an additional 48 NL drives plus 16 SSD drives and use HPE 3PAR Adaptive Optimization to migrate busy regions onto the SSD drives. They chose the latter and were pleased with the results (illustrated in figure 17).


Figure 17. Improved performance after Adaptive Optimization

Before HPE 3PAR Adaptive Optimization, as shown in the charts, the NL drives—even though there are fewer of them—incurred a greater aggregate IOPS load than the FC drives and consequently had very poor latency (~40 ms) compared with the FC drives (~10 ms). After HPE 3PAR Adaptive Optimization had run for a little while, as shown in figure 17, the IOPS load on the NL drives dropped substantially and the load was transferred mostly to the SSD drives.

HPE 3PAR Adaptive Optimization moved approximately 33 percent of the IOPS workload to the SSD drives even though this involved moving only one percent of the space. Back-end performance improved in two ways: the 33 percent of the IOPS serviced by the SSD drives saw very good latencies (~2 ms), and the latencies of the NL drives improved (from ~40 ms to ~15 ms). The front-end performance also improved significantly, as most of the frequently accessed data was serviced by the SSD tier. Moreover, the investment in the 16 SSD drives permits the customer to add even more NL drives in the future, because the SSD drives retain both space and performance headroom.



Summary

HPE 3PAR Adaptive Optimization is a powerful tool for identifying how to configure multiple tiers of storage devices for high performance. Its management features can deliver results with reduced effort. As in all matters concerning performance, your results may vary, but proper focus and use of HPE 3PAR Adaptive Optimization can deliver significant improvements in device utilization and total throughput.

Learn more at hpe.com/storage/3PARStoreServ

© Copyright 2012–2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.

VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.

4AA4-0867ENW, January 2016, Rev. 4