
Driving Business Value on Power Systems with Solid State Drives

April 2009

By Lotus Douglas, Qunying Gao, Lilian Romero, Linton Ward, and David Whitworth

IBM Systems and Technology Group

Sunil Kamath IBM Software Group, Information Management

Jim Olson IBM Integrated Technology Delivery


Executive Summary

Solid State Drives (SSDs) offer a number of advantages over traditional hard disk drives (HDDs). With no seek time or rotational delays, SSDs can deliver substantially better I/O performance than HDDs. Capable of driving tens of thousands of I/O operations per second (IOPS), as opposed to hundreds for HDDs, SSDs break through the performance bottlenecks of I/O-bound applications. Applications that require hundreds of HDDs can meet their I/O performance requirements with far fewer SSDs, resulting in energy, space, and cost savings.

To demonstrate the benefits of SSDs, we ran experiments comparing SSDs with HDDs. The experiments showed a significant performance advantage for SSDs, which resulted in a substantial reduction in the number of drives needed to meet the desired level of performance. Fewer drives translate into a smaller physical footprint, reduced energy consumption, and less hardware to maintain. The experiments also showed better application response times for SSDs, which leads to increased productivity and higher customer satisfaction.

Solid state drive technology was introduced more than three decades ago. Until recently, however, the high cost per gigabyte and limited capacity of SSDs restricted deployment of these drives to niche markets or military applications. Recent advances in SSD technology and economies of scale have driven down the cost of SSDs, making them a viable storage option for many I/O-intensive enterprise applications.

While the cost of SSDs is trending downward, the $/GB for SSDs is still substantially higher than that of HDDs. It is not cost-effective or necessary to replace all HDDs with SSDs. For instance, infrequently accessed (cold) data can reside on lower cost HDDs while frequently accessed (hot) data can be moved to SSDs for maximum performance. The appropriate mix of SSDs and HDDs should be used to strike a proper balance between performance and cost.

This paper provides information to enable you to integrate SSDs into your storage infrastructure so that you can immediately take advantage of SSDs to improve your application performance and increase productivity. We describe how to deploy SSDs in a tiered storage environment, allowing you to combine your existing storage with SSDs for maximum performance and minimum cost. The paper also discusses IBM tools and services available to assist you in deploying and managing a storage solution with SSDs.


Leveraging SSDs in Tiered Storage Pools

Many storage environments have grown to support a diversity of needs and evolved into disparate technologies that have led to storage sprawl. In a large-scale storage infrastructure this yields a sub-optimal storage design that can be improved by focusing on the analysis and management of data access characteristics. Tiered storage is an approach that utilizes different types of storage throughout the storage infrastructure: a mix of higher-performing, higher-cost storage with lower-performing, lower-cost storage, with data placed according to specific characteristics such as performance needs, age, and the importance of data availability. Properly balancing these tiers leads to the solution with minimal cost and best performance. The focus of this paper is on active, mission-critical data. Typically this is regarded as Tier 1 storage. SSDs can be considered a new Tier 0 for the fastest active data.

Figure 1: Tiered Storage Environment

An example of an existing storage environment is shown in Figure 1. This design results in significantly increased costs for maintaining and supporting the infrastructure. In addition to the immediate effect of this distribution, growth continues at an increased rate in the higher-cost area of Tier 1. Thus, as growth occurs, the distribution of data will continue to shift in a non-optimal direction unless there is careful planning and discipline in deployment.

[Figure 1 plots cost per gigabyte against performance across four tiers: Tier 0 (ultra-high performance), Tier 1 (high performance, mission critical), Tier 2 (medium performance, non-mission critical), and Tier 3 (low performance, archival/tape).]


Typically, an optimal design keeps the active operational data in Tier 0 and Tier 1 and leverages Tiers 2 and 3 for less active data. An example is shown in Figure 2. The benefits of a tiered storage approach are simple: they are all cost related. This approach saves significant cost on the storage itself, as lower-tier storage is less expensive. Beyond that, there are environmental savings, such as reductions in energy, footprint, and cooling.

Figure 2: Storage Pyramid

How to implement Tiered Storage

There are three areas critical to implementing, maintaining, and leveraging a tiered storage solution: software tools for identification and reporting across all components of the tiered storage solution, virtualization to enable control and allocation, and offerings designed to align with your specific needs for IT governance.

Tivoli Storage Productivity Center (formerly TotalStorage Productivity Center) is a prime example of the software needed to identify data for implementation and management. It provides the capability to zero in on data characteristics that can be used to make data placement choices at implementation and in steady state. SAN Volume Controller enables virtualization for your storage environment. Virtualization is critical to maintaining a tiered storage solution because it gives your administrators the ability to relocate data between storage tiers without impacting applications and customer service levels. Virtualization allows you to leverage the tiered storage solution to provide the flexibility required for a dynamic infrastructure.

The IBM Novus Intelligent Storage Service Catalog (ISSC) offering is a single framework aimed at providing storage optimization through more efficient provisioning, better analytics of the storage environment, and proper alignment of data to storage tiers. The intellectual capital that comprises ISSC is IBM's Intelligent Storage Service Request (ISSR), Process Excellence, and Storage Enterprise Resource Planner (SERP). Through detailed interviews with the client, IBM is able to obtain a detailed understanding of the customer's business requirements. ISSR promotes "right-tiering" and "right-sizing" of storage provisioning based on these business requirements, acting as a front-end interface for storage requests.

[Figure 2 depicts the storage pyramid and a typical distribution of data across tiers: Tier 0, ultra-high-performance applications (1-3%); Tier 1, mission-critical, revenue-generating applications (15-20%); Tier 2, meeting QoS for non-mission-critical applications (20-25%); and Tier 3, archives and long-term retention (50-60%).]


Upon receipt of the ISSR, Process Excellence is utilized by the storage administrator to ensure that proper process and procedure are followed at all times, eliminating costly errors and the unknown challenges created by a lack of standardization. In addition, Novus's SERP software solution can provide very specific data characteristic information that, when combined with the customer discussions, results in a method of more effectively deploying and managing a tiered storage solution.

Leveraging SSDs for a High Value Database

Improving the response time of some database environments can yield a substantial benefit to business results. While a tiered storage strategy focuses on reducing operational costs, some environments can also leverage the improved I/O performance that SSDs provide. Beyond improved performance, implied benefits such as infrastructure simplification, ease of storage management, and a reduced need for fine-tuning skills are paramount and result in substantial IT efficiency and reduced costs.

Storage management, performance, and cost are big issues in the database world. Database workloads, both transactional and data warehousing, typically require many HDDs for I/O performance, both IOPS and bandwidth. Traditional enterprise HDDs, including 15K RPM models, are limited by the rate of head movement and deliver random I/O performance of approximately 150-175 IOPS with a latency of about 5-7 ms, and sequential scan bandwidth of about 30-60 MB/sec, for most database workloads. Write-intensive batch jobs are under pressure to complete within an ever-shrinking time window, reducing the available up-time of transactional database systems. In addition, maintenance jobs such as backup, restore, and database crash recovery, which can place heavy pressure on I/O, are also time critical and important to keeping a database system highly operational. Backup operations tend to drive high levels of sequential I/O, while recovery processes drive high levels of random I/O.

In many customer environments, to maintain the high IOPS rate required to service applications with reasonable response times, less data is placed on each HDD, resulting in poor IOPS per gigabyte of available storage capacity. This implies that a lot of HDD capacity (greater than 50% in most cases) is wasted or under-utilized, and the situation has only worsened with higher density HDDs.

SSDs offer game-changing performance for database applications by removing the limitations traditional rotating disks impose on database design. This will revolutionize database architectural design by removing the traditional I/O bottleneck. SSDs eliminate the need for a large number of under-utilized (short-stroked) HDDs to meet the heavy I/O demands of database applications.
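To make the drive-count implication concrete, here is a minimal back-of-envelope sketch in plain AIX shell with awk. The 100,000 IOPS target is a hypothetical workload, not a measurement; the per-drive rates are rounded from the HDD figures above and the roughly 7,000 IOPS per SSD observed in the experiments later in this paper.

    # Hypothetical sizing: drives needed to sustain a target random-I/O rate
    awk 'BEGIN {
        target = 100000          # required random IOPS (assumed workload)
        hdd = 160; ssd = 7000    # approximate sustainable IOPS per drive
        printf "HDDs needed: %d\n", int((target + hdd - 1) / hdd)   # 625
        printf "SSDs needed: %d\n", int((target + ssd - 1) / ssd)   # 15
    }'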

Customer Scenarios that can Benefit from SSDs

A broad spectrum of industries, from the financial sector to consumer services to government, with varied or common business challenges can benefit from SSD technology. At a fundamental level, these businesses rely on improved responsiveness from their critical transactional, Customer Relationship Management (CRM), or data warehousing solutions, which enables them to service their clients faster and react to changes and new opportunities more rapidly, resulting in improved profitability and increased revenue.


With an explosion of data volumes and a need to convert them into trusted information with speed, SSDs help enable IT to address the critical storage challenges that satisfy business needs. The following business scenarios represent a few cases where SSD technology can deliver significant value.

• Customer retention through superior service. Enterprises that empower their customer support representatives to service their clients' needs in real time earn better customer loyalty.

• A 360-degree view of customer relationships that enables businesses to respond to market needs and identify new opportunities more rapidly.

• Real-time, fast fraud detection that enables enterprises spanning financial, insurance, and consumer services organizations to improve profitability and deliver better customer value.

• Faster reporting and business analytics capabilities that empower organizations to manage risk efficiently.

• Faster order processing systems, where the speed of transaction processing leads to increased revenue and customer satisfaction.

To illustrate the types of workloads that benefit from SSD technology, two scenarios from real-world customer applications were chosen. Figure 3 shows a workload profile from a large enterprise in the consumer products industry running its global, mission-critical SAP R/3 workload with DB2™ on Power Systems. Figure 4 illustrates a workload profile from a global financial institution running DB2 on Power Systems that services tens of thousands of transactions per second.

The SAP R/3 workload is an 8 TB DB2 database hosted on a single IBM System Storage DS8100 disk system with 14 TB of usable capacity. The database is over-provisioned by nearly 75%, primarily due to the need for IOPS from physical disk spindles. Even so, as can be noted in Figure 3, the CPU still spends about 30-40% of its time waiting on I/O. These workloads can benefit from migrating the storage from HDDs to SSDs within the DS8100, which will reduce I/O wait, improve SAP transaction response time, and save on storage costs by eliminating the need to over-provision storage.

Figure 3: CPU and I/O profile of SAP R/3 workload with DB2 on Power Systems


The next scenario is from a large, global financial industry company. Figure 4 illustrates the CPU profile of a 200-gigabyte DB2 database servicing tens of thousands of transactions per second with hundreds of concurrent users. In this environment, the DB2 database was provisioned with a single DS4800 controller with two terabytes of RAID storage. This represents ten times more capacity than required, provisioned to handle the I/O performance and therefore the overall transaction response times. As can be noted from Figure 4, the CPU still spends about 20% of its time waiting on I/O. This scenario is another example where migrating to SSDs can consolidate drives by up to 10x while further improving transaction response times and handling large volumes of concurrent users.

Figure 4: CPU and I/O profile of a DB2 workload on Power Systems handling a large volume of transactions

Quantifying Benefits of SSDs using an OLTP Workload

On-line Transaction Processing (OLTP) applications are characterized by large numbers of users concurrently executing transactions against a database. OLTP applications typically have a significant amount of random I/O and a high number of physical reads with the goal of ensuring consistently low response times. Typical OLTP applications include banking applications, order processing systems, and airline reservation systems. OLTP transactions spend a great deal of time waiting on I/O. The I/O wait time is considerably longer for HDDs than SSDs due to delays inherent to HDD mechanical parts. SSDs are ideal for OLTP workloads since they do not have any seek or rotational delays and can process I/O faster.

An SSD solution for OLTP applications can offer the following benefits:

• a substantial reduction in the number of drives required
• increased I/O and throughput performance
• a substantial reduction in response time
• a reduction in energy consumption
• a reduced lab space requirement

To characterize the benefits of SSDs for transactional workloads, an in-house OLTP database application was chosen. For our experiments, the application characteristics were 60% random reads and about 40% random writes. The system configuration was as follows:



Server: Power 550 Express Model 8204-E8A with 128 GB of memory
OS: AIX™ 6.1 TL2
Database: DB2 9.5 FP3

In total, three experiments were conducted using different types of storage. For the base configuration, the entire database was placed on HDDs, using a total of 800 drives in order to meet the response time requirements. The database was built using RAID5, with the tables generating the highest number of IOPS (hot data) spread across 720 FC HDDs and the remaining tables (cold data) spread across 80 SAS HDDs. The configuration is shown in Figure 5.

Figure 5: Base Configuration Using 800 HDDs

In the second experiment, a total of 116 drives were used, consisting of a mix of 36 SSDs and 80 HDDs. The hot database tables were placed on the SSDs and the cold tables remained on the 80 SAS HDDs. The 36 SSDs were placed in six EXP12S drawers, each attached to a PCI-X DDR 1.5 GB Cache SAS RAID Adapter. A total of six 5+P RAID5 arrays were created on the SSDs. With this many SSDs on one adapter, the cache on the RAID adapters can become a performance bottleneck for some workloads, so the RAID adapter cache was disabled for this experiment. The response times of these SSDs are fast enough that the database still saw very good response times on this workload even with the adapter's cache disabled. The configuration is depicted in Figure 6.



Figure 6: Mixed SSD – HDD Configuration

In the third and final experiment, a total of 116 drives were also used. The hot data was placed on 36 HDDs, the same number of drives as the SSDs in the mixed storage configuration. Because of the price and performance differential, we do not expect that customers would do a one-to-one substitution of HDDs with SSDs; the experiment was designed to show a direct performance comparison between SSDs and HDDs. So, unlike the 800-HDD experiment, we did not "short stroke" the HDDs in order to achieve better I/O performance. RAID5 was used for this experiment as well. The configuration is shown in Figure 7.

Figure 7: HDD Configuration with same Storage Footprint as SSD

The end goal of all the experiments was to compare response times, throughput, space usage, and energy consumption for SSDs versus HDDs. The experiments were performed by executing a number of different types of OLTP transactions against the database and collecting performance statistics to understand the behavior of the drives.

[Figure 6 shows the mixed configuration: 36 SSDs (hot data) on 6 SAS adapters, plus 80 x 15K RPM SAS HDDs (cold data) on 4 FC adapters. Figure 7 shows the HDD configuration with the same storage footprint: 36 x 15K RPM SAS HDDs (hot data) on 6 SAS adapters, plus the same 80 x 15K RPM SAS HDDs (cold data) on 4 FC adapters.]



Results of Experiments

800 HDDs vs. 116 Drives (mix of 36 SSDs and 80 HDDs)

For the base configuration with 800 HDDs, the system CPU was 67% busy while the remaining 33% was spent waiting for I/Os to complete. The IOPS per drive for the drives holding the hot tables maxed out at about 220. In comparison, for the configuration using SSDs for the hot tables, the CPU utilization reached over 80% and the IOPS per drive was over 7000. With SSDs, I/Os were serviced much faster, resulting in better storage and application response times. In addition, more of the CPU was freed up to do useful work instead of waiting for I/O. As a result, system throughput increased. Figure 8 shows the database transaction response times and relative system throughput for the 800-HDD and the mixed SSD-HDD configurations. The configuration with SSDs achieved a 1.72X improvement in application response times and a 1.65X improvement in transaction throughput over the 800-HDD configuration.


Figure 8: SSD vs. HDD Database Transaction RT and Throughput

The average drive read response times for the 800-HDD and mixed SSD-HDD configurations are shown in Figure 9. The drive read response time improved by 3X when the hot tables were moved to the SSDs (1.7 ms for SSDs vs. 5.3 ms for HDDs).

Figure 9: SSD-HDD RT Comparison




The efficiency of the drives was measured in terms of transaction throughput per drive. For the 800-HDD configuration the throughput per drive was 4.2, and for the SSD configuration it was 137.5. This means that each SSD performed 33 times more work than an HDD, as shown in Figure 10. This disparity in throughput per drive is due to the SSDs being much faster than HDDs: many more HDDs are required to achieve the same throughput as a small number of SSDs. Even with a 20:1 ratio of HDDs to SSDs, the 800-HDD configuration was still bottlenecked by I/O and could achieve only half the throughput of the mixed SSD-HDD configuration.

Figure 10: SSD-HDD Throughput Comparison

Moving the hot tables to SSDs reduced the number of physical drives required from 720 to 36. This reduction resulted in space and energy savings. Figure 11 shows that energy usage is about 90% lower for SSDs. The energy usage was measured at peak throughput for all the storage components; the system energy and AC cooling energy were not included in the measurement. Figure 12 shows that the space reduction between HDDs and SSDs is about 84%. The space was calculated using the total space used by the storage subsystem, including the controllers, drive enclosures, and drives.

Figure 11: SSD-HDD Energy Usage Comparison
Figure 12: SSD-HDD Space Usage Comparison



116 Drives (36 SSDs + 80 HDDs) vs. 116 Drives (36 HDDs + 80 HDDs)

In this experiment the number of drives remained the same: both the SSD configuration and the HDD configuration used 36 drives for the hot data and 80 drives for the cold data. The purpose of the experiment was to make a direct comparison of HDD and SSD performance in a demanding, I/O-intensive environment.

Focusing the analysis on the most interesting subset of the 116-drive comparison, the 36 HDDs versus the 36 SSDs that both hold the hot data (the tables with the highest IOPS), the following observations were made. For the 36-HDD measurement, the CPU utilization was only 5%; the remaining 95% was spent either waiting on I/Os to complete or idle. The drive read response time was 6.8 ms and the IOPS per drive was about 170. In comparison, the read response time and IOPS per drive for the 36-SSD measurement were 1.7 ms and 7000, respectively. Figure 13 shows both the relative response times and the drive performance comparisons using 116 disks.

Figure 13: 116 Drives (36 SSDs + 80 HDDs vs. 36 HDDs + 80 HDDs): RT and Drive Performance Comparisons

Determining Whether an AIX Application Might Benefit from SSDs

AIX provides performance tools that can be used to determine if a configuration has hot data that would perform better if moved to SSDs. The most valuable tools for assessing data hot spots are the AIX tools iostat and filemon. In addition, database vendors also provide tools to analyze hot data.

[Figure 13 charts: SSD vs. HDD relative drive performance (transactions per drive and IOPS per drive, 40-42X better for SSD) and relative read response times (4X better for SSD).]


In order to demonstrate the capabilities of these tools, we will compare iostat and filemon data from the 800-drive HDD run and the 116-drive mixed SSD-HDD run. The data will show the I/O performance improvement gained from using SSDs.

Identifying Hot Disks

The iostat tool can provide a good first-level I/O analysis because it gives a high-level, real-time view of overall storage performance and is simple to run. To isolate the hot data, look for data subjected to a high rate of random, small-block I/O per GB on the HDDs. Running the command "iostat -t" provides CPU utilization details. If there is no I/O wait time, then SSDs will not improve system performance. As shown in Table 1, there was a substantial amount of I/O wait time in the 800-HDD experiment, so there is a big potential for performance improvement from using SSDs.

Storage Configuration                         % iowait
800-HDD experiment                            33.2
116-drive mixed SSD-HDD experiment             1.9

Table 1: SSD and HDD iowait Output
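For example, CPU utilization (including % iowait) can be sampled at a 10-second interval over one minute with standard iostat interval and count arguments:

    # Report tty/CPU utilization every 10 seconds, 6 times
    iostat -t 10 6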

Running iostat with the "-D" flag, as shown in Tables 2 and 3, provides detailed output per logical disk (hdisk), including read and write response times. In order to focus on the hdisks with the hot data, only those that contain the hot data and the database logs (hdisk320-321) are shown below.

There are several things to notice here:

1. The total system storage I/O requests, or transfers per second (tps), are shown at the top of each report. The tps is the total of reads per second (rps) and writes per second (wps). Note that the tps on the SSD run is about double the tps on the HDD run.

2. HDDs max out at about 200 IOPS, so look for hdisks that do over 200 IOPS (or tps) per physical drive (a quick awk filter for this follows the list).

   • For the 800-HDD configuration, each hdisk consists of 30 physical drives: hdisk178 - hdisk201 are RAID5 arrays, each with 30 x 15K RPM HDDs.

   • For the 116-drive mixed SSD-HDD configuration, each hdisk consists of 6 physical drives: hdisk202 - hdisk207 are RAID5 arrays, each with 6 SSDs.

   • Each write to a RAID5 array causes 4 drive I/Os (2 reads and 2 writes).

3. "%tm act" shows the percentage of time there is at least one I/O request outstanding to that hdisk. Look for hdisks that are at least 99% busy.

4. The read and write avg serv times indicate the average service time per transfer.
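As a quick first pass over a long report, the sketch below flags busy hdisks. It assumes the single-line-per-disk column layout shown in Tables 2 and 3 (device name in column 1, tps in column 4); the threshold is illustrative and should be scaled by the number of physical drives behind each hdisk (for example, 200 IOPS x 30 drives = 6000 for the RAID5 arrays above).

    # Flag hdisks whose transfer rate exceeds the per-array threshold t
    iostat -D 10 1 | awk -v t=6000 '$1 ~ /^hdisk/ && $4+0 > t {print $1, "tps =", $4}'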


            Kbps        tps        Kb_read     Kb_wrtn
Physical    326144      77196.3    1892352     1372352

Disks:      xfers                                           read serv (msec)        write serv (msec)
            %tm_act  bps     tps      bread   bwrtn   rps   avg    min    max   wps   avg    min    max

hdisk178 100 16.1M 3929.7 10.6M 5.5M 2592.7 6 0.1 250.3 1337 2.5 0.2 261.8

hdisk179 100 15.7M 3844.9 10.3M 5.4M 2522.5 5.9 0.1 223.4 1322 3.2 0.2 275.1

hdisk180 99.9 8.8M 2148.8 5.6M 3.2M 1379 4.3 0.1 199.6 769.7 3.7 0.2 123.4

hdisk181 100 9.1M 2216.9 5.8M 3.2M 1423.7 4.1 0.1 214.5 793.2 2 0.2 122.7

hdisk182 99.6 9.1M 2230.8 5.9M 3.2M 1444.4 4.2 0.1 205.1 786.4 2.8 0.2 230.4

hdisk183 100 9.2M 2234 5.9M 3.3M 1433.7 4.1 0.1 220.8 800.3 3.3 0.2 122.7

hdisk184 100 15.7M 3833.4 9.7M 6.0M 2368.7 7.6 0.1 542 1465 2.5 0.2 448.8

hdisk185 100 15.7M 3842 9.7M 6.0M 2380.2 7.3 0.1 330.1 1462 3.3 0.2 280

hdisk186 99.9 9.0M 2193 5.6M 3.3M 1375.8 4.9 0.1 117.4 817.2 3.9 0.2 101.9

hdisk187 99.5 8.9M 2183.9 5.6M 3.3M 1368.5 4 0.1 125.8 815.4 1.9 0.2 91

hdisk188 99.5 9.0M 2208.8 5.6M 3.4M 1378 3.9 0.1 270 830.8 2.8 0.2 91.1

hdisk189 99.8 9.0M 2203.7 5.6M 3.4M 1373.5 3.9 0.1 128 830.2 3.3 0.2 120.6

hdisk190 100 15.4M 3761.9 9.8M 5.6M 2384.6 6.2 0.1 207.2 1377 1.9 0.2 318.1

hdisk191 100 15.4M 3765.6 9.8M 5.6M 2400.3 6.1 0.1 237.8 1365 2.6 0.2 344

hdisk192 99.8 9.1M 2218.2 5.9M 3.2M 1444.1 4.3 0.1 202.1 774.1 2.8 0.2 179.7

hdisk193 99.5 9.2M 2245.8 6.0M 3.2M 1468.5 4.2 0.1 213.6 777.2 1.7 0.2 176.5

hdisk194 99.9 9.5M 2317.2 6.2M 3.3M 1521.3 4.2 0.1 234.6 795.9 2.2 0.2 176.5

hdisk195 99.8 9.5M 2311 6.2M 3.3M 1513 4.2 0.1 195.3 798 2.5 0.2 177

hdisk196 100 15.6M 3802 9.7M 5.9M 2369.6 6.1 0.1 250 1432 1.8 0.2 414.9

hdisk197 100 15.5M 3773.7 9.6M 5.9M 2345.5 6.1 0.1 238 1428 2.5 0.2 330.8

hdisk198 99.8 8.6M 2095 5.3M 3.3M 1289.6 4.4 0.1 228.9 805.4 2.8 0.2 184

hdisk199 99.4 8.6M 2102.5 5.3M 3.4M 1284.6 4.3 0.1 204.8 817.9 1.6 0.2 184

hdisk200 99.5 9.4M 2292.6 6.1M 3.3M 1481.1 4.2 0.1 215.2 811.5 2.1 0.2 181.8

hdisk201 99.8 9.5M 2310.4 6.1M 3.3M 1495.3 4.2 0.1 226.4 815.1 2.4 0.2 181.8

hdisk320 27.3 12.8M 3108.9 0.0 12.8M 0.0 0.0 0.0 0.0 3109 0.1 0.1 15.5

hdisk321 31.4 12.9M 3143.7 0.0 12.9M 0.0 0.0 0.0 0.0 3144 0.1 0.1 16.7

Table 2: 800-HDD Experiment iostat -D Output

            Kbps        tps        Kb_read     Kb_wrtn
Physical    538923      127744     3406752     1991904

Disks:      xfers                                           read serv (msec)        write serv (msec)
            %tm_act  bps     tps      bread   bwrtn   rps   avg    min    max   wps   avg    min    max

hdisk202 100 79.1M 19302.1 53.0M 26.1M 12942.2 1.7 0.1 57 6360 3.9 0.5 49.9

hdisk203 100 76.7M 18736.1 51.1M 25.6M 12476.3 1.7 0.1 61.5 6259.8 3.8 0.5 56.2

hdisk204 100 76.8M 18753.8 51.7M 25.1M 12622.1 1.5 0.1 41.8 6131.7 3.6 0.4 51.3

hdisk205 100 78.9M 19259.9 52.8M 26.1M 12888.9 1.7 0.1 40 6371 3.9 0.6 50.3

hdisk206 100 77.3M 18879.8 51.6M 25.8M 12588.7 1.6 0.1 54.7 6291.1 3.7 0.5 52.1

hdisk207 100 77.2M 18853.8 51.9M 25.3M 12681.5 1.5 0.1 58.4 6172.3 3.6 0.5 59.4

hdisk320 35.4 13.7M 3292.4 0 13.7M 0 0 0 0 3292.4 0.1 0.1 18

hdisk321 36.3 13.6M 3273.4 0 13.6M 0 0 0 0 3273.4 0.1 0.1 18.2

Table 3: Mixed SSD-HDD Experiment iostat -D Output


Identifying Hot Logical Volumes

After using iostat to determine that there are hot hdisks on a system, the next step is to use filemon to find the hot logical volumes (LVs). The LVs map to the database tables. Filemon provides summary and detailed performance reports on files, LVs, and physical volumes (PVs). The filemon output below includes the LV and PV summary reports and some examples from the detailed LV reports.

How to run filemon: Filemon can be run either in online mode or in offline mode using a previously collected trace. The offline method, used for this data, is described below. Note that running the AIX trace command can cause significant performance degradation if the system CPU is very busy. This caused some of the SSD throughputs reported by filemon to be lower than those reported by iostat. The HDD results are not affected because there were plenty of spare CPU cycles in that experiment due to the substantial I/O wait time.

    trace -andfp -C all -T 30000000 -L 30000000 -o filename.trc
    trcstop                     # stop tracing after the measurement interval
    gensyms -F > gensyms.out    # -F provides the file, LV, and hdisk names needed by filemon
    filemon -i filename.trc -n gensyms.out -O detailed,all -o filemon.out
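Filemon's online mode, not used for the data in this paper, skips the separate trace step. A minimal sketch of a typical invocation, assuming a 60-second measurement window, is:

    filemon -O detailed,all -o filemon.out   # starts tracing immediately
    sleep 60                                 # measurement window (assumed)
    trcstop                                  # stops tracing; filemon writes its report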

Filemon’s "Most Active Logical Volumes" table sorts the LVs based on their utilization. LVs with low utilizations typically do not need to be moved to SSDs. LVs with high utilizations are good candidates for further investigation regarding whether they should be moved to SSDs (having a high utilization does not necessarily indicate a performance problem). The 800-HDD "Most Active Logical Volumes" filemon report, depicted in Table 4, shows there are 28 LVs that were at least 91% busy during the trace. The last LV listed is only 68% busy. The rest of the LVs on the system are even less busy and are not shown here. The database tables that were on the 28 busiest LVs for the 800-HDD experiment were all moved to SSDs for the 116-drive SSD experiment.

Most Active Logical Volumes: 800-HDD (left) and 116 Drives, Mixed SSD-HDD (right)

util  #rblk  #wblk  KB/s    volume              util  #rblk  #wblk  KB/s    volume

1 3592 2248 10521.2 /dev/hddR04V1S 0.98 6544 4888 17968.5 /dev/ssdR02V2S

0.99 5440 2024 13447 /dev/hddR04V2S 0.98 7312 3352 16761.4 /dev/ssdR04V3S

0.99 4200 1288 9887.1 /dev/hddR01V4S 0.98 7568 4136 18396 /dev/ssdR03V4S

0.99 5256 2120 13288.4 /dev/hddR03V3S 0.98 5976 4976 17214.1 /dev/ssdR02V4S

0.99 5816 1080 12423.7 /dev/hddR01V2S 0.97 6984 5816 20118.7 /dev/ssdR02V3S

0.99 5288 1160 11616.6 /dev/hddR01V1S 0.97 5928 5120 17364.9 /dev/ssdR02V1S

0.99 4136 2056 11155.4 /dev/hddR03V4S 0.97 5760 5120 17100.9 /dev/ssdR01V4S

0.99 4992 1552 11789.5 /dev/hddR02V2S 0.97 5152 3600 13756.2 /dev/ssdR04V2S

0.99 6344 1456 14052.3 /dev/hddR02V1S 0.97 5864 3520 14749.5 /dev/ssdR03V3S

0.99 4264 2168 11587.7 /dev/hddR03V2S 0.96 6864 3440 16195.5 /dev/ssdR03V1S

0.99 5096 1208 11357.1 /dev/hddR01V3S 0.96 4864 4136 14146 /dev/ssdR04V4S

0.99 4592 2168 12178.7 /dev/hddR03V1S 0.96 6456 3528 15692.6 /dev/ssdR04V1S

0.99 6680 1240 14268.5 /dev/hddR02V4S 0.95 6768 3968 16874.6 /dev/ssdR03V2S

0.99 4912 1776 12048.9 /dev/hddR04V3S 0.93 5344 5616 17226.6 /dev/ssdR01V3S

0.98 4048 1904 10723 /dev/hddR04V4S 0.91 4880 4912 15390.8 /dev/ssdR01V2S

0.98 4936 1408 11429.2 /dev/hddR02V3S 0.91 3880 4968 13907 /dev/ssdR01V1S


0.98 1792 1512 5952.4 /dev/hddR03V1C 0.86 3136 3288 10097.1 /dev/ssdR03V2C

0.98 1608 1256 5159.7 /dev/hddR04V1C 0.81 3320 1176 7066.7 /dev/ssdR02V1C

0.97 1720 1248 5347.1 /dev/hddR04V3C 0.8 2600 2968 8751.6 /dev/ssdR04V2C

0.97 1624 584 3977.9 /dev/hddR02V2C 0.77 2872 3136 9443.2 /dev/ssdR04V1C

0.97 1608 1328 5289.4 /dev/hddR04V2C 0.77 3224 1024 6676.9 /dev/ssdR01V1C

0.97 1736 664 4323.8 /dev/hddR02V3C 0.73 0 14520 22822.1 /dev/dbloglv

0.96 1592 632 4006.7 /dev/hddR01V3C 0.71 1960 3264 8210.9 /dev/ssdR03V3C

0.95 1816 608 4367 /dev/hddR01V2C 0.71 2336 3192 8688.8 /dev/ssdR03V1C

0.94 1624 1368 5390.3 /dev/hddR03V2C 0.68 2328 3016 8399.6 /dev/ssdR04V3C

0.94 1448 728 3920.2 /dev/hddR01V1C 0.68 2712 1176 6111.1 /dev/ssdR02V3C

0.93 1512 760 4093.2 /dev/hddR02V1C 0.66 2352 1096 5419.5 /dev/ssdR01V3C

0.92 1592 1448 5476.8 /dev/hddR03V3C 0.63 2192 1040 5080 /dev/ssdR02V2C

0.68 0 11424 20581.2 /dev/dbloglv 0.61 2016 1224 5092.5 /dev/ssdR01V2C

Table 4: 800-HDD and Mixed SSD-HDD filemon Report

Detailed Logical Volume Tables

Detailed reports are shown for both a hot LV that is a good candidate to move to an SSD and for the database log LV, which is not a good candidate. The reports are included in Tables 5, 6, 7, and 8.

Hot LV details. Important things to note here are:

1. The average I/O size is 4 KB (8.0 512-byte blocks), which is a good match for SSDs.

2. The I/O is completely random (the read and write sequence counts are equal to the numbers of reads and writes).

3. Read response times are relatively long.

4. The average seek distance is very long (20.9 GB).

Table 5: 800-HDD Detailed filemon LV Report
Table 6: Mixed SSD-HDD Detailed filemon LV Report

VOLUME: /dev/hddR04V1S  description: raw
reads:                  449     (0 errs)
  read sizes (blks):    avg 8.0    min 8      max 8       sdev 0.0
  read times (msec):    avg 5.801  min 0.118  max 34.264  sdev 4.517
  read sequences:       449
  read seq. lengths:    avg 8.0    min 8      max 8       sdev 0.0
writes:                 281     (0 errs)
  write sizes (blks):   avg 8.0    min 8      max 8       sdev 0.0
  write times (msec):   avg 1.194  min 0.373  max 4.414   sdev 0.641
  write sequences:      281
  write seq. lengths:   avg 8.0    min 8      max 8       sdev 0.0
seeks:                  730     (100.0%)
  seek dist (blks):     init 105356576, avg 40796294.3  min 14072  max 115550480  sdev 28644621.9
time to next req(msec): avg 0.380  min 0.000  max 3.741  sdev 0.559
throughput:             10521.2 KB/sec
utilization:            1.00

VOLUME: /dev/ssdR02V2S  description: raw
reads:                  818     (0 errs)
  read sizes (blks):    avg 8.0    min 8      max 8       sdev 0.0
  read times (msec):    avg 1.030  min 0.314  max 14.616  sdev 1.894
  read sequences:       818
  read seq. lengths:    avg 8.0    min 8      max 8       sdev 0.0
writes:                 611     (0 errs)
  write sizes (blks):   avg 8.0    min 8      max 8       sdev 0.0
  write times (msec):   avg 3.066  min 0.853  max 18.028  sdev 2.961
  write sequences:      611
  write seq. lengths:   avg 8.0    min 8      max 8       sdev 0.0
seeks:                  1429    (100.0%)
  seek dist (blks):     init 15678600, avg 39276756.9  min 18680  max 117667808  sdev 28871870.5
time to next req(msec): avg 0.222  min 0.000  max 5.564  sdev 0.410
throughput:             17968.5 KB/sec
utilization:            0.98


Database log details. The database log details are shown here as an example of data that would not benefit from SSDs (the log is on HDDs in both cases):

1. The I/O is very sequential. (Note that the log is striped across two hdisks, which causes filemon to report a substantial number of write sequences)

2. The response times for both runs are very short due to the storage array's write cache

3. Average seek distance is very short (8KB).

Table 7: 800-HDD Experiment filemon database log
Table 8: Mixed SSD-HDD Experiment filemon database log

Which LVs Should be Moved to SSD

Once the hot LVs are known, use the lslv command to find the LV sizes and calculate the IOPS/GB. LVs with the highest IOPS/GB should be moved first.
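A minimal sketch of that calculation follows. It assumes the standard AIX 6.1 lslv report layout (PP size on the "PP SIZE:" line, PP count at the end of the "LPs:" line); the LV name and its measured transfer rate are illustrative inputs taken from the reports above, not prescribed values.

    lv=hddR04V1S    # hypothetical hot LV from the filemon report
    iops=3900       # its measured transfers/sec (from iostat or filemon)
    size_mb=$(lslv $lv | awk '/PP SIZE:/ {pp=$(NF-1)} /^LPs:/ {n=$NF} END {print pp*n}')
    awk -v i="$iops" -v m="$size_mb" -v lv="$lv" \
        'BEGIN {printf "%s: %.1f IOPS/GB\n", lv, i / (m / 1024)}'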

Using DB2 Snapshot to Identify Hot Logical Volumes

The DB2 snapshot monitor tool provides another means to identify hot tablespaces that are best candidates to place on SSDs. It is used to capture information about the database and any connected applications at a specific time. DB2 tablespace snapshot provides the following information:

• Tablespace name
• Tablespace page size
• Number of used pages
• Bufferpool data/index/xda physical reads
• Bufferpool read/write time

To identify which containers are hot, it is necessary to analyze the following properties:

• Access density: a function of the number of physical I/Os relative to the number of used pages in the tablespace.

• Access latency: a measure of latency for those physical I/Os.

• Relative weight: calculated for each tablespace to help prioritize between tablespaces to place on SSD; a function of access density and access latency.

• Sequential ratio of accesses: the ratio of sequential to random access.

VOLUME: /dev/dbloglv  description: raw
writes:                 1424    (0 errs)
  write sizes (blks):   avg 8.0    min 8      max 16      sdev 0.4
  write times (msec):   avg 0.132  min 0.114  max 0.952   sdev 0.035
  write sequences:      487
  write seq. lengths:   avg 23.5   min 8      max 56      sdev 5.8
seeks:                  487     (34.2%)
  seek dist (blks):     init 25305528, avg 16.0  min 16  max 16  sdev 0.0
time to next req(msec): avg 0.195  min 0.130  max 3.317  sdev 0.137
throughput:             20581.2 KB/sec
utilization:            0.68

VOLUME: /dev/dbloglv  description: raw
writes:                 1787    (0 errs)
  write sizes (blks):   avg 8.1    min 8      max 32      sdev 1.3
  write times (msec):   avg 0.131  min 0.116  max 0.885   sdev 0.033
  write sequences:      553
  write seq. lengths:   avg 26.3   min 8      max 88      sdev 7.9
seeks:                  553     (30.9%)
  seek dist (blks):     init 57817232, avg 16.0  min 16  max 16  sdev 0.0
time to next req(msec): avg 0.178  min 0.131  max 8.299  sdev 0.218
throughput:             22822.1 KB/sec
utilization:            0.73


The weighting factor is used to determine which tablespaces are better candidates to place on SSDs. The steps below show how to compute the weighting factor:

Total physical I/Os = Bufferpool data physical reads + Bufferpool index physical reads + Bufferpool xda physical reads + Bufferpool temporary data physical reads + (Direct reads x 512) / tablespace page size

Page velocity = Total physical I/Os / snapshot interval in seconds

Access time = Total buffer pool read time + Direct reads elapsed time

Access density = Page velocity / number of used pages in tablespace

Access latency = Access time / Total physical I/Os

Weighting factor = Access density x Access latency

Sequentiality ratio = (Asynchronous pool data page reads + Asynchronous pool index page reads + Asynchronous pool xda page reads) / (Buffer pool data physical reads + Buffer pool index physical reads + Buffer pool xda physical reads)
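These steps can be scripted directly from the snapshot counters. The sketch below uses purely illustrative input values; in particular, the 600-second snapshot interval is an assumption you must supply (the time between resetting the monitor counters and taking the snapshot), not something the snapshot itself reports.

    # Weighting factor from DB2 tablespace snapshot counters (illustrative values)
    awk 'BEGIN {
        page_size = 4096;  used_pages = 15000000
        data_rd = 10000000; idx_rd = 0; xda_rd = 0; tmp_rd = 0; direct_rd = 0
        bp_read_ms = 26000000; direct_rd_ms = 0
        interval = 600                 # assumed snapshot interval (seconds)
        total_io = data_rd + idx_rd + xda_rd + tmp_rd + (direct_rd * 512) / page_size
        density  = (total_io / interval) / used_pages
        latency  = (bp_read_ms + direct_rd_ms) / total_io
        printf "weighting factor = %.4f\n", density * latency
    }'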

When the above information is summarized for all tablespaces in descending order of weighting factor, the tablespaces with higher weighting factors are the better candidates for SSDs, while tablespaces with a higher sequentiality ratio are better left on HDDs. Table 9 shows an example of data from a DB2 tablespace snapshot taken from the 800-HDD configuration:

db2 get snapshot for tablespaces on DBNAME

Tablespace name TS_S_13 TS_OL_1

Tablespace Page size (bytes) 4096 4096

Number of used pages 15001088 8561920

Buffer pool data physical reads 10162297 458610

Buffer pool temporary data physical reads 0 0

Asynchronous pool data page reads 0 0

Buffer pool index physical reads 0 0

Buffer pool temporary index physical reads 0 0

Asynchronous pool index page reads 0 0

Buffer pool xda physical reads 0 0

Buffer pool temporary xda physical reads 0 0

Asynchronous pool xda page reads 0 0

Total buffer pool read time (millisec) 26705251 1189809

Total elapsed asynchronous read time 0 0

Direct reads 0 0

Direct reads elapsed time (ms) 0 0

Table 9: Tablespace snapshot for TS_S_13 and TS_OL_1

Summary of tablespace weighting factor for TS_S_13 and TS_OL_1 is as follows:


                        TS_S_13      TS_OL_1
Total physical I/Os     21099197     458610
Page velocity           2109919.7    45861
Access time             26705251     1189809
Access density          1.406        0.053
Access latency          1.265        2.504
Weighting factor        1.779        0.133
Sequentiality ratio     0            0

Table 10: Tablespace weighting factor for TS_S_13 and TS_OL_1

Tablespace TS_S_13 has a much higher weighting factor than tablespace TS_OL_1. Therefore, it is the better candidate for moving to an SSD.

Migration Tools

As discussed earlier in this paper, after identifying the hot tablespaces using iostat, filemon or DB2 tablespace snapshot, the next step is to move the hot tablespaces from HDDs to SSDs. There are several tools available for data migration. This paper focuses on using IBM Softek Transparent Data Migration Facility (TDMF) and AIX migratepv.

Softek TDMF

Softek TDMF allows customers to move data between unlike storage vendors and switch over to new storage with no interruption to business-generating applications. Softek TDMF is host-based software that moves data at the block level between logical volumes without interrupting reads and writes to those volumes.

Figure 14 shows the Softek TDMF architecture, which consists of a command-line interface (CLI), a filter driver, and configuration files.


Figure 14: Softek TDMF Migration Tool Architecture

There are two ways to migrate data using Softek TDMF: dynamic activation and auto switchover. Both methods support migration with no disruption to the application. Below is an example of how to migrate data using auto switchover mode. This mode can be used to migrate data at any time with minimal performance impact.

• Step 1: Create a migration volume and associate it with a valid source volume:
  # tdmf create tR01V3S /dev/hddR01V3S

• Step 2: Add a target volume to the migration volume:
  # tdmf add tR01V3S /dev/ssdR01V3S

• Step 3: Start the migration, with auto switchover after the migration is done:
  # tdmf copy -x tR01V3S

• Step 4: Remove the old source volume; the new source volume takes over:
  # tdmf remove tR01V3S




With auto switchover mode, the source volume (old volume on HDD) is removed after the data is migrated to the target volume (new volume on SSD). Mirrored writes to both the old and new volumes are no longer required.

Figure 15 shows the progress of a migration of hot tablespaces from HDDs to SSDs while an OLTP application is running. Before the migration started, the database transaction response time was about 0.04 seconds. During the migration, the application ran uninterrupted; the response time increased to 0.07 seconds. After the migration completed, the database transaction response time dropped to 0.02 seconds and system throughput almost doubled.

Figure 15: Performance Impact of Migrating Data from HDD to SSD

Migratepv

Another tool that can be used when migrating data is the AIX command migratepv, which moves physical partitions from one AIX hdisk to one or more hdisks. The following is an example of using the migratepv command to move the partitions from hdisk20 to hdisk30:

migratepv hdisk20 hdisk30

The migrate function works by creating a mirror of the logical volumes involved and then resynchronizing the logical volumes. While the migration is in progress, the volume group is locked, so only one hdisk can be migrated at a time.
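If only one hot LV needs to move, rather than every partition on the source disk, migratepv also accepts the -l flag to restrict the migration to a single logical volume (the names below are illustrative):

    # Move only the partitions of logical volume hddR01V3S from hdisk20 to hdisk30
    migratepv -l hddR01V3S hdisk20 hdisk30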



Conclusion

Customer I/O demands have outpaced the performance capabilities of traditional hard disk drives. Latencies associated with spinning platters and moving arms limit the speed of HDD data access. SSDs' near-instantaneous data access removes this I/O bottleneck, creating a paradigm shift in I/O performance. Applications throttled by poor I/O performance can benefit greatly from SSDs. For these I/O-intensive applications, it is no longer necessary to "short stroke" drives in order to achieve good I/O performance.

As demonstrated in our experiments, SSDs result in a substantial improvement in I/O performance which translates to increased business output, reduced energy consumption, reduced floor space requirements, and cost savings. Our experiments showed the following SSD benefits when comparing 800 HDDs with 36 SSDs and 80 HDDs:

• 1.65X system throughput improvement
• 1.72X application response time improvement
• 3X improvement in drive read response time
• 33X improvement in throughput per drive
• 90% reduction in energy consumption
• 84% reduction in floor space requirements

Additional experiments with an equal number of HDDs and SSDs resulted in 4X better drive read response times and 42X better throughput per drive. SSDs can deliver significant business value to a broad spectrum of industries facing various business challenges. We illustrated two workloads representing real world customer applications in the financial sector and consumer products industry. Both scenarios showed advantages for migrating to SSDs, including improved transaction response times, increased numbers of users, and storage cost savings.

The superior performance of SSDs must be balanced against cost. Multi-tiered storage solutions can provide that balance. An application's "hot" data can be moved to SSDs, while less active data can remain on lower cost HDDs. IBM provides tooling and services to assist customers in characterizing their data access patterns and helping them make smart data placement choices for the highest-performing, lowest-cost storage solution.


For More Information

IBM Power Servers ibm.com/systems/p

IBM Storage ibm.com/services/storage

Notices and Disclaimer

Copyright © 2009 by International Business Machines Corporation. No part of this document may be reproduced or transmitted in any form without written permission from IBM Corporation. Product data has been reviewed for accuracy as of the date of initial publication. Product data is subject to change without notice. This information may include technical inaccuracies or typographical errors. IBM may make improvements and/or changes in the product(s) and/or programs(s) at any time without notice. References in this document to IBM products, programs, or services does not imply that IBM intends to make such products, programs or services available in all countries in which IBM operates or does business. THE INFORMATION PROVIDED IN THIS DOCUMENT IS DISTRIBUTED "AS IS" WITHOUT ANY WARRANTY, EITHER EXPRESS OR IMPLIED. IBM EXPRESSLY DISCLAIMS ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. IBM shall have no responsibility to update this information. IBM products are warranted according to the terms and conditions of the agreements (e.g., IBM Customer Agreement, Statement of Limited Warranty, International Program License Agreement, etc.) under which they are provided. IBM is not responsible for the performance or interoperability of any non-IBM products discussed herein. The performance data contained herein was obtained in a controlled, isolated environment. Actual results that may be obtained in other operating environments may vary significantly. While IBM has reviewed each item for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents or copyrights. Inquiries regarding patent or copyright licenses should be made, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785, U.S.A. IBM, IBM Power, AIX, DB2, DB2 9.5, Enterprise Storage Server, FlashCopy, and TotalStorage are trademarks of International Business Machines Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

POW03025USEN-00