
Best Practices for Running an Oracle Database on an IBM Mid-Range Storage Subsystem

Best Practices 35834-00 Rev. D, May 2009


Copyright © 2009 by International Business Machines Corporation. All rights reserved.


Contents

BEST PRACTICES FOR RUNNING AN ORACLE DATABASE ON AN IBM MID-RANGE STORAGE SUBSYSTEM

Disclaimer............................................................................................................................................................... 2

Planning the Storage Subsystem Design ................................................................................................... 2

Features of the DS4800 and DS5000 Storage Subsystems............................................................ 2

Maximizing Drive-Side Performance ..................................................................................................... 3

Basing the Segment Size on Oracle I/O Operations ......................................................................... 5

Configure Storage Cache ........................................................................................................................... 6

Aligning File System Partitions ................................................................................................................ 7

Laying Out Logical Drives and Disk Drives................................................................................................. 9

Deciding on an Oracle Design Strategy................................................................................................ 9

Laying Out Logical Drives and Arrays..................................................................................................11

Using Oracle’s Optimal Flexible Architecture ............................................................................12

Oracle Redo Logs .................................................................................................................................13

Example OFA Configuration Using File Systems ......................................................................14

Oracle Redo Logs Using File System .............................................................................................14

Example of Using Oracle’s OFA Directory Structure with File Systems ............................15

Example of Oracle’s OFA Configuration Using Automatic Storage Management .......16

Oracle Redo Logs Using Oracle ASM ............................................................................................16

Example of Oracle’s OFA Directory Structure Using Oracle ASM .......................................17

Considering the Server Platform .................................................................................................................20

Considering the Server Hardware Architecture...............................................................................20

Calculating Aggregate Bandwidth ................................................................................................20

Sharing Bandwidth with Multiple HBAs ......................................................................................21

Considering the System Software.........................................................................................................21

Buffering the I/O ..................................................................................................................................21

Clustering ...............................................................................................................................................22

Calculating Optimal Segment Size ................................................................................................22

Aligning Host I/O with RAID Striping...................................................................................................23


Aligning Partitions on a Microsoft Windows Operating System ........................................ 23

Aligning Partitions on a Linux Operating System .................................................................... 23

Aligning Partitions on Other Operating Systems .................................................................... 26

Locating Recommendations for Host Bus Adapter Settings....................................................... 26

Recommendations for QLx246x Settings ................................................................................... 27

Locating Recommendations for Fibre Channel Switch Settings............................................... 30

Using Command Tag Queuing .............................................................................................................. 31

Analyzing I/O Characteristics.................................................................................................................. 32

Using Logical Volume Manager to Balance I/O Load .................................................................... 32

Setting Up the Storage Subsystem............................................................................................................. 33

Factors Influencing Storage Performance ......................................................................................... 33

Estimating Capacity Limits ...................................................................................................................... 33

Auto Logical Drive Transfer..................................................................................................................... 34

Determining the Best RAID Level.......................................................................................................... 35

Choosing the Number of Disk Drives to Put in an Array............................................................... 35

How the Number of Disk Drives per Array Affects Performance ........................................ 36

Storage Subsystem Design Best Practices ......................................................................................... 38

Cabling and Setup ............................................................................................................................................ 39

Connecting the IBM DS4800 and DS5000 Controller Modules.................................................. 39

Connecting the Host.................................................................................................................................. 41

Locating Arrays ............................................................................................................................................ 42

Cabling Arrays in the DS4800 Controller Module and DS5000 Controller Module ..... 42

Tuning External IBM Storage Subsystems................................................................................................ 46

Elements That Influence Performance................................................................................................ 46

An Iterative Approach to Performance Tuning................................................................................ 47

Setting the Global Parameters ..................................................................................................................... 47

Setting the Global Cache Flush.............................................................................................................. 47

Setting the Force Unit Access and Synchronize Cache................................................................. 47

Setting the Global Media Scan............................................................................................................... 48

Setting LUN-Specific Parameters .......................................................................................................... 48

Setting the LUN-Specific Media Scan ........................................................................................... 48

Setting the Caching Parameters .................................................................................................... 48

Setting the LUN-Specific Write Cache and Write Cache Mirroring .................................... 49


Setting the LUN-Specific Read Cache and Read Ahead Multiplier ....................................49

I/O Performance Monitoring and Troubleshooting..............................................................................49

Tools for Database Performance Monitoring ...................................................................................49

Tools for Operating System Performance Monitoring ..................................................................51

How Tablespace Fragmentation Makes a Difference on Performance ............................53

How Table Fragmentation Makes a Difference on Performance .......................................54

Tools Used in the Storage Configuration Process ....................................................................54

Other Options for Database Tuning.....................................................................................................59

Other Ways to Improve I/O Efficiency at the Database Level .....................................................60

Example of an I/O Tuning Exercise .......................................................................................................60

Using Performance Tools ...............................................................................................................................61

Using DS Storage Manager Performance Monitor .........................................................................62

Obtaining Additional Performance Tools ..........................................................................................62

Getting Optimal Performance from Premium Features......................................................................63

Getting Optimal Performance from FlashCopy ...............................................................................63

Getting Optimal Performance from VolumeCopy ..........................................................................63

Getting Optimal Performance from Enhanced Remote Mirroring ...........................................63

Conclusion ...........................................................................................................................................................64

Contact Information.........................................................................................................................................64

APPENDIX 1: REFERENCES

APPENDIX 2: TEST CONFIGURATION

APPENDIX 3: AVT DISABLE SCRIPT

APPENDIX 4: DS5000 STORAGE SUBSYSTEM

Overview of the DS5000 Storage Subsystem........................................................................................D-1

Supporting Your Critical Functions ....................................................................................................D-1

Growing with Your Business .................................................................................................................D-2

Extending the Life of Your Storage Subsystem..............................................................................D-2

Securing Data .............................................................................................................................................D-2

Product Features of the DS5000 Storage Subsystem ........................................................................D-2

Releases of the DS5000 Storage Subsystem..........................................................................................D-3

Benefits of the DS5000 Storage Subsystem ..........................................................................................D-4


Comparing the DS5000 Storage Subsystem to the DS4000 Series Storage Subsystem.......D-6

Hardware Components.................................................................................................................................D-8

DS5000 Controller Module....................................................................................................................D-8

EXP810 Drive Expansion Enclosure....................................................................................................D-9

DS Storage Manager............................................................................................................................. D-10

Functions ........................................................................................................................................... D-10

Premium Features .......................................................................................................................... D-10

Software Specifications of DS Storage Manager ................................................................. D-11

Supported Operating Systems .................................................................................................. D-12

Technical Specifications............................................................................................................................. D-13

Physical Characteristics........................................................................................................................ D-13

Operating Temperature ...................................................................................................................... D-13

Relative Humidity With No Condensation.................................................................................... D-14

Altitude Ranges ...................................................................................................................................... D-14

Heat Dissipation ..................................................................................................................................... D-14

Acoustic Noise......................................................................................................................................... D-15

Power Input.............................................................................................................................................. D-15

Hardware Architecture and Diagrams.................................................................................................. D-15

External Connections ........................................................................................................................... D-17

Disk Drive Channels and Loop Switches ....................................................................................... D-17

Cabling....................................................................................................................................................... D-19


Best Practices for Running an Oracle Database on an IBM Mid-Range Storage Subsystem

Owning state-of-the-art storage subsystems is not enough to excel in today’s competitive business climate. Even with the best storage subsystem, the continuous demands upon the IT environment can create challenges:

• Unused capacity on expensive equipment becomes a financial waste.

• Continuous hardware and software sprawl creates an ever-changing environment that must constantly be re-tuned to adjust to new conditions.

• New equipment is hot-added when possible, often resulting in a convoluted configuration that makes tuning for high performance complex and difficult to manage.

This document describes the optimum performance settings for IBM® System Storage™ DS5000® storage subsystems and IBM System Storage DS4800 storage subsystems with the Oracle® Database application. This document identifies parameters for optimizing a high-performance storage subsystem.

For each parameter, this document explains how to monitor, evaluate, adjust, and verify that the adjustment was appropriate and beneficial. The process of keeping the parameters tuned involves the following tasks:

• Identify the relevant parameters.

• Take a baseline to determine the benchmark value for each relevant parameter.

• Monitor each parameter on an ongoing basis. Only continuous monitoring can isolate the triggers that impact performance. Continue monitoring after any adjustment so that the effectiveness of the adjustment can be evaluated.

• Adjust parameters while the system remains in production.

• Watch how adjustments in one parameter are affecting other parameters.


Disclaimer

Because of the highly customizable nature of an Oracle Database environment, you must take your specific environment and equipment into consideration to achieve optimal performance from the DS4800 storage subsystems and DS5000 storage subsystems. When weighing the recommendations in this document, start with the first principles of I/O performance tuning:

• “It depends…” There are no absolute answers. Each environment is unique and the correct settings depend on the unique goals, configuration, and demands for the specific environment.

• “Actual mileage might vary.” Results vary widely because conditions vary widely.

IMPORTANT Attempt the procedures in this document only if you are a trained storage specialist with intimate knowledge of the working environment.

Planning the Storage Subsystem Design

Use the following information to plan the design of your storage subsystem.

Features of the DS4800 and DS5000 Storage Subsystems

• 4-Gb/s Fibre Channel interfaces for server-to-SAN attachment

■ Eight for the DS4800 storage subsystem

■ Sixteen for the IBM DS5000 storage subsystem

• Fibre Channel/SATA disk drives

■ A maximum of 224 for the DS4800 storage subsystem

■ A maximum of 448 for the DS5000 storage subsystem

• Dedicated data cache with battery backup

■ A maximum of 16 GB for the DS4800 storage subsystem

■ A maximum of 32 GB for the DS5000 storage subsystem

• Online storage subsystem administration

• Storage subsystem performance

■ Burst I/O from cache: 575,000 IOPS

■ Sustained I/O from disk drive: 86,000 IOPS


■ Sustained throughput from disk drive: 1,600 MB/s

• Capacity

■ 16.4 TB using 73 GB FC disk drives

■ 32.7 TB using 146 GB FC disk drives

■ 67.2 TB using 300 GB FC disk drives

■ 112 TB using 500 GB SATA disk drives

For performance statistics, see “Comparing the DS5000 Storage Subsystem to the DS4000 Series Storage Subsystem” on page D-6.

Maximizing Drive-Side Performance

The DS4800 storage subsystem provides four redundant drive-side connections (four ports per controller). To maximize performance, add drive expansion enclosures in groups of four, which spreads the I/O across all four disk drive channels of each controller. Each controller port supports up to 400 MB/s of bandwidth. Even if the system does not require four fully populated drive expansion enclosures, for the sake of performance it is better to implement four drive expansion enclosures, each only half populated, than to implement two fully populated drive expansion enclosures.
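As a back-of-the-envelope illustration of why spreading enclosures across channels matters, the following sketch (an assumption, not a measured result) uses the 400 MB/s per-port figure quoted above and assumes each drive expansion enclosure sits behind its own channel.

    # Rough drive-side bandwidth ceiling for one controller, assuming each
    # channel in use is limited to about 400 MB/s and carries one enclosure.
    PORT_BANDWIDTH_MBPS = 400

    def drive_side_bandwidth(enclosures: int, channels_used: int) -> int:
        """Upper bound on drive-side throughput for one controller, in MB/s."""
        return min(enclosures, channels_used) * PORT_BANDWIDTH_MBPS

    # Four half-populated enclosures, one per channel:
    print(drive_side_bandwidth(enclosures=4, channels_used=4))  # 1600 MB/s
    # Two fully populated enclosures, so only two channels carry I/O:
    print(drive_side_bandwidth(enclosures=2, channels_used=2))  # 800 MB/s

By this rough measure, the same number of disk drives has about twice the drive-side bandwidth available when it is spread across all four channels.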


Figure 1 Cabling a Storage Subsystem with Four Expansion Drawers

Each controller on the DS4800 storage subsystem has four disk drive channels, and each disk drive channel has two ports. Therefore, each controller has eight drive-side connections. A controller module has eight redundant path pairs that are formed using one disk drive channel of controller A and one disk drive channel of controller B.

Figure 2 on page 5 shows the redundant pairs in a controller module. Table 1 on page 5 lists the numbers of the redundant path pairs and the drive-side connections of the disk drive channels from which the redundant path pairs are formed.

IMPORTANT To maintain data access in the event of the failure of a controller, an ESM, or a disk drive channel, you must connect a drive expansion enclosure or a string of drive expansion enclosures to both disk drive channels on a redundant path pair.


Figure 2 Redundant Path Pairs of the Controller Module

Table 1  Redundant Path Pairs of a Controller Module

  Redundant Path Pair | Controller A Drive-Side Connection | Disk Drive Channel | Controller B Drive-Side Connection
  1 | Port 8 | Channel 1 | Port 1
  2 | Port 7 | Channel 1 | Port 2
  3 | Port 6 | Channel 2 | Port 3
  4 | Port 5 | Channel 2 | Port 4
  5 | Port 4 | Channel 3 | Port 5
  6 | Port 3 | Channel 3 | Port 6
  7 | Port 2 | Channel 4 | Port 7
  8 | Port 1 | Channel 4 | Port 8

Basing the Segment Size on Oracle I/O Operations

Base the segment size on the type of data and on the expected I/O size of the data. Store sequentially read data on logical drives with small segment sizes and with dynamic prefetch set up to read ahead blocks dynamically.


For the procedure for setting up the appropriate disk drive segment size, see “Calculating Optimal Segment Size” on page 22. To see the prefetch settings used in the test setup, see “Appendix B: Test Configuration.”

Very little Oracle I/O is truly sequential in nature except for processing redo logs and archive logs. A full table scan can read blocks scattered all over the disk drive; Oracle calls this a scattered read. Oracle's sequential read refers to accessing a single index entry or a single piece of data. Use small segment sizes for an OLTP workload with little or no need for read-ahead. Use larger segment sizes for a Decision Support System (DSS) environment in which you want to perform full table scans through a data warehouse.

Perform these important actions:

• Set the database block size lower than or equal to the disk drive segment size. If the segment size is set at 2 KB and the database block size is set at 4 KB, two I/O operations are required to fill the block, resulting in performance degradation.

• Make sure that the segment size is an even multiple of the database block size. This prevents partial I/O operations from being needed to fill a block.

• Set the parameter db_file_multiblock_read_count appropriately. Normally you want to set the db_file_multiblock_read_count as shown:

segment size = db_file_multiblock_read_count * DB_BLOCK_SIZE

You also can set the db_file_multiblock_read_count so that the result of the previous calculation is smaller than or equal to the segment size and the segment size is an even multiple of that result. For example, if you have a segment size of 64 KB and a block size of 8 KB, you can set the db_file_multiblock_read_count to 4, which equals a value of 32 KB. The segment size, 64 KB, is an even multiple of 32 KB.
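The rules above can be sanity-checked with a short script. This is only a sketch: the segment size, block size, and db_file_multiblock_read_count values are hypothetical placeholders for your own settings.

    # Hypothetical values; substitute the settings of your logical drive and database.
    SEGMENT_SIZE_KB = 64        # logical drive segment size
    DB_BLOCK_SIZE_KB = 8        # Oracle DB_BLOCK_SIZE
    MULTIBLOCK_READ_COUNT = 4   # db_file_multiblock_read_count

    multiblock_io_kb = MULTIBLOCK_READ_COUNT * DB_BLOCK_SIZE_KB  # 32 KB per multiblock read

    # The block size should not exceed the segment size ...
    assert DB_BLOCK_SIZE_KB <= SEGMENT_SIZE_KB
    # ... the segment size should be an even multiple of the block size ...
    assert SEGMENT_SIZE_KB % DB_BLOCK_SIZE_KB == 0
    # ... and a multiblock read should divide evenly into one segment.
    assert SEGMENT_SIZE_KB % multiblock_io_kb == 0

    print(f"{MULTIBLOCK_READ_COUNT} blocks x {DB_BLOCK_SIZE_KB} KB = {multiblock_io_kb} KB per multiblock read")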

Configure Storage Cache

Always set up read cache. Enabling read cache lets the controllers service subsequent read requests for data already stored in the cache.

Also set up write cache to let the controllers acknowledge writes as soon as the data hits the cache instead of waiting for the data to be written to the physical media. For other storage subsystems, a trade-off exists between data integrity and speed. IBM storage subsystems are designed to store data in both controller caches before a write is acknowledged, and both controllers have battery backup for this cache. Therefore, when you use IBM storage subsystems with read cache and write cache enabled, the chance of data loss is extremely low.

Whether to use the prefetch option depends on the type of data to be stored on the logical drives and how that data is accessed. If the data is accessed randomly, such as with tablespaces or indexes, disable prefetch.


Disabling prefetch prevents the controllers from reading ahead segments of data that you most likely will not use, unless your logical drive segment size is smaller than the requested data read size.

Aligning File System Partitions

Align partitions to the stripe width. Calculate the stripe width, in 512-byte sectors, with the following formula:

stripe width = segment_size (bytes) / 512 * number of data disks

For example, a 4+1 RAID 5 array with a 512 KB segment size has a stripe width of 524,288 / 512 * 4 = 4096 sectors.
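The following sketch applies the formula under the assumptions just stated: the segment size is given in KB, the division is by the 512-byte sector size, and num_disks counts only the data drives in the array (4 for a 4+1 RAID 5).

    SECTOR_BYTES = 512

    def stripe_width_sectors(segment_size_kb: int, data_disks: int) -> int:
        """Full stripe width in 512-byte sectors."""
        return segment_size_kb * 1024 // SECTOR_BYTES * data_disks

    # 4+1 RAID 5 (or 4+4 RAID 1) with a 512 KB segment size:
    print(stripe_width_sectors(512, 4))   # 4096 sectors (2 MB full stripe)
    # The same array with a 128 KB segment size:
    print(stripe_width_sectors(128, 4))   # 1024 sectors

    # Start the first partition on a sector offset that is a multiple of this
    # value (for example, sector 4096) so host I/O lines up with full stripes.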

Figure 3 and Figure 4 on page 8 show the performance difference between aligned partitions and non-aligned partitions. The test setup used a 4+1 RAID5 array and a 4+4 RAID1 array (see note 1 below). Each array contained one logical drive at two-thirds of the total capacity of the array. The test setup used a partition of the maximum size on each logical drive. The test ran Orion on each logical drive, first with the default partition placement and then again after aligning the partition to the stripe size. Simply by aligning the file system partitions to the stripe size, you can achieve as much as a 42 percent (RAID1) or 73 percent (RAID5) gain in MB/s throughput.

To see the complete test setup, see “Appendix B: Test Configuration.”

Here is the command line syntax used for Orion for the test comparisons:

orion -run advanced -testname rd5_v1 -num_disks 10 -size_small 8 -size_large 1024 -type rand -write 10 -duration 60 -simulate concat -matrix basic -cache_size 0

1. RAID 1 on the IBM storage subsystem is implemented as a RAID10 mirrored stripe configuration. Although the DS Storage Manager calls this RAID1, it is functionally equivalent to RAID10.


Figure 3 Comparing the Performance of Aligned RAID1 to Non-Aligned RAID1

Figure 4 Comparing the Performance of Aligned RAID5 to Non-Aligned RAID5


Laying Out Logical Drives and Disk Drives

When tuning for optimal performance, consider the specific applications to be served. First, identify certain important characteristics of the applications. Then, based upon those characteristics, choose an appropriate RAID level, choose the number of disk drives to put in an array, and set the cache parameters.

Deciding on an Oracle Design Strategy

Before you can effectively design your array and logical drives, you must determine the primary goals of the configuration: performance, reliability, growth, manageability, or cost. Each goal has pros, cons, and trade-offs, as detailed in the following section. After you have determined which goals are best for your environment, follow the guidelines to implement those goals. To get the best performance from the IBM storage subsystem, you must know the I/O characteristics of the files to place on the storage subsystem. After you know the I/O characteristics of the files, you can set up a correct array and a correct logical drive to service these files.

Frequently Updated Databases: If your database is frequently updated and if performance is a major concern, your best choice is RAID10, even though RAID 10 is the most expensive because of the number of disk drives, expansion drawers, and other devices. RAID10 provides the least disk drive overhead and the highest performance from the IBM storage subsystems.

Low to Medium Updated Databases: If your database is updated infrequently or if you must maximize your storage investment, choose RAID5 for the database files. RAID5 provides capability for large storage subsystem logical drives with minimal redundancy of disk drives.

Remotely Replicated Environments: If you plan on remotely replicating your environment, carefully segment the database. Segment the data on smaller logical drives, and selectively replicate these logical drives. Segmenting limits WAN traffic to only what is absolutely needed for database replication. Conversely, if you use large logical drives in replication, initial establish times are longer and the amount of traffic through the WAN might increase, leading to slower than necessary database performance. The IBM premium features Enhanced Remote Mirroring, VolumeCopy, and FlashCopy® are extremely useful for replicating remote environments.

For further information about Enhanced Remote Mirroring, refer to the following IBM documents:

• Enhanced Remote Mirroring Service Planning and Delivery Guidebook for Version 9.19

• Enhanced Remote Mirroring Installation and Configuration Guidebook for Version 9.19


• Enhanced Remote Mirroring of an Oracle Database Stored on IBM DS4000 Storage Subsystems

• Enhanced Remote Mirroring of an Oracle Database Using Data Replicator Software

For further information about FlashCopy, refer to the following IBM document:

• IBM System Storage DS4000 and Storage Manager v. 10.30 Redbook

Multiplexing data occurs at the Oracle level and has no impact on what storage is being used.

IMPORTANT Do not multiplex to the same disk drive. Multiplexing to the same disk drive introduces twice the amount of I/O to the disk drive. Also, no redundancy exists in case of failure or corruption of the disk drive.

Table 2 shows what RAID levels are most appropriate for what file types.

Use RAID0 arrays only for high traffic data that does not need any redundancy protection for device failures. RAID0 is the least used RAID format but provides for high speed I/O without the additional redundant disk drives for protection.

RAID1 offers the best performance while providing data protection by mirroring each physical disk drive. Create RAID1 arrays with the most disk drives possible (30 maximum) to achieve the highest performance.

Table 2  Best RAID Level for File Type

  File Type | RAID Level | Comments
  Redo logs (a) | RAID10 | Multiplex with Oracle
  Control files | RAID10 | Multiplex with Oracle
  Temp datafiles | RAID10, RAID5 | Performance first / drop recreate on disk drive failure
  Archive logs | RAID10, RAID5 | Determined by performance and cost requirements
  Undo/Rollback | RAID10, RAID5 | Determined by performance and cost requirements
  Datafiles | RAID10, RAID5 | Determined by performance and cost requirements
  Oracle executables | RAID5 |
  Export files | RAID10, RAID5 | Determined by performance and cost requirements
  Backup staging | RAID10, RAID5 | Determined by performance and cost requirements

  a. Always use Oracle’s log multiplexing on the redo logs. Log multiplexing provides the best chance of recovery for the database. For more information, see “Oracle Redo Logs” on page 13.


Create RAID 5 arrays with either 4+1 disk drives or 8+1 disk drives to provide best performance while reducing RAID overhead. RAID5 offers good read performance at a reduced cost of physical disk drives compared to a RAID1 array.

RAID 10 (RAID 1+0) combines the data mirroring of RAID 1 with the data striping of RAID 0. RAID 10 provides fault tolerance along with better performance than the other RAID options. A RAID 10 array can sustain multiple disk drive failures as long as no two failed disk drives belong to the same mirrored pair.

Laying Out Logical Drives and Arrays

Choosing the best disk drive layout for the arrays is extremely important for determining performance and manageability. Two main schools of thought exist in this area: Optimal Flexible Architecture (OFA) and Stripe and Mirror Everything (SAME). The best method depends on your specific environment and management style.

To learn more about OFA, go to:

http://download.oracle.com/docs/cd/B19306_01/install.102/b15704/app_ofa.htm

To learn more about SAME, go to:

http://www.oracle.com/technology/deploy/availability/pdf/OOW2000_same_ppt.pdf

To achieve the best performance and availability possible, configure the IBM storage subsystem as follows:

• Designate a hot spare disk drive in each expansion drawer.

• Create a RAID10 array across as many disk drives as possible.

• Create a logical drive that is two-thirds the capacity of the RAID10 array.

• Set the stripe size to 512 KB for each logical drive. The 512 KB stripe size provided the best results in IBM testing using the ORION utility.

• Change cache settings on the logical drive.

• Disable dynamic cache read prefetch.

• Map the logical drive to the host system.

NOTE If you are considering or implementing an Enhanced Remote Mirroring solution, segment the data structure to the smallest size necessary. Small segments limit the amount of data to be transferred across the WAN links.


Using Oracle’s Optimal Flexible Architecture

1 Establish an orderly operating system directory structure in which any database file can be stored on any disk drive resource.

a Name all devices that might contain Oracle data so that a wild card or similar mechanism can be used to refer to the collection of devices as a unit.

b Make a directory explicitly for storage of Oracle data at the same level on each of these devices.

c Beneath the Oracle data directory on each device, make a directory for each different Oracle database on the system.

d If, and only if, X is a control file, a redo log file, or a datafile of the Oracle database whose DB_NAME is sid, place the file X in the directory /u??/ORACLE/sid/type_desig. If your operating system is Windows 2000 or Windows NT place the file X in the directory C:\oracle\sid\type_desig. The type_desig specifies the type of file to be placed in the directory at that location and is usually data, index, control, or redo.

2 Standardize the implementation of Locally Managed Tablespaces (LMT) by selecting those options that are appropriate for your environment:

■ Automatic Segment Space Management (ASSM) – Replaces the linked-list freelists with bitmaps to make managing storage extents fast and efficient.

■ Uniform size clause – Allocates extents at the tablespace level. This helps eliminate tablespace fragmentation, because all objects within the tablespace have the same extent size. Choose this option when all objects inside the tablespace are similar in size.

■ Autoallocate – Creates extents at the object level. Extent sizes escalate from 64 KB, to 1 MB, to 8 MB, and up to 64 MB as the object grows in size. Autoallocate reduces table fragmentation because the same range of extent sizes is used within the tablespace. Choose this option if objects stored within the tablespace have different storage sizes.

3 Separate groups of segments (data objects) with different behavior into different tablespaces.

■ Separate groups of objects with different access characteristics into different tablespaces. For example, separate data from online redo logs by placing them into different tablespaces. Data and indexes are accessed randomly, while online redo logs are written to sequentially.

■ Separate groups of segments that contend for disk drive resources in different tablespaces. For example, separate data from indexes by placing them into different tablespaces. Try to separate redo logs from tablespaces.


■ Separate groups of segments representing objects with differing behavioral characteristics in different tablespaces. For example, separate tables that require a monthly retention cycle from tables that require a yearly retention cycle by placing them into different tablespaces.

4 Maximize database reliability and performance by separating database components across different disk drive resources.

■ For RAID environments, spread datafiles across multiple controller arrays. Spreading the datafiles across multiple controller arrays increases performance by eliminating logical drive hot spots by distributing the I/O among multiple disk drives.

■ Keep at least three active copies of a database control file on at least three different physical storage subsystems in case of a logical drive/controller failure.

■ To provide the best possibility of a database recovery, use at least three groups of redo logs in Oracle 9i or Oracle Database 10g. Isolate them to the greatest extent possible on hardware serving few or no files that will be active while Oracle Database is in use. Multiplex redo logs whenever possible.

■ Separate tablespaces if they include data that is threatened by disk drive resource contention across different physical disk drive resources. Also consider disk drive controller usage.

■ Consider partitioning when dealing with large data and indexes. Partitioning lets the database administrator allocate a separate tablespace for each partition data and index component and each tablespace can be placed on different physical devices for better performance.

Oracle Redo Logs

Oracle's online redo logs play a key role in database recovery operations and can have an impact on database performance. Failure of an online log group can be the most painful of all Oracle database recovery scenarios, because if you lose the redo log file, you have lost all updates to the database subsequent to the last checkpoint or redo log file switch. You could possibly lose all of the data in the redo log. For these reasons, give special attention to Oracle’s online redo logs.

• Use Oracle’s online redo log multiplexing feature in favor of, or in addition to, RAID features. For optimal availability, locate members of multiplexed log groups so that they share no common points of failure at the disk drive, channel, or board level.

• Where the performance of the update and modify functions is important, locate the online redo logs on dedicated or otherwise quiet disk drive spindles.


• Size the redo log files so that they switch every half hour, if possible. Increase the size of the online redo log files to improve performance when processing large amounts of inserts, updates, and deletes. Sizing the redo log files correctly is critical to getting the best performance out of the database.
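As a rough sizing aid for the half-hour guideline above, the sketch below estimates a log file size from an assumed redo generation rate. The rate itself is an assumption; derive it from your own workload statistics (for example, redo-per-second figures from Statspack or AWR reports).

    def redo_log_size_mb(redo_rate_mb_per_hour: float, switch_minutes: int = 30) -> float:
        """Approximate online redo log file size for a target switch interval."""
        return redo_rate_mb_per_hour * switch_minutes / 60.0

    # A workload generating about 1.2 GB of redo per hour needs roughly 600 MB
    # log files to switch about every 30 minutes.
    print(redo_log_size_mb(1200))  # 600.0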

Example OFA Configuration Using File Systems

This example configuration contains eight data areas, including disk drives, striped sets, RAID sets, and placeholders for other new technologies to be developed in the future. Separate the eight data areas as completely as possible. Ideally, operate from different device controllers or channels to maximize throughput. The more disk drive heads are moving at one time, the faster the database. To minimize disk drive contention, lay out the file system disk drives as follows:

AREA 1 – Oracle executables and a control file

AREA 2 – Data: datafiles, index datafiles, system datafiles, tool datafiles, user datafiles, and a control file

AREA 3 – Data datafiles, index datafiles, temporary datafiles, undo datafiles, and a control file

AREA 4 – Archive log files, export files, backup staging area, and a control file

AREA 5 – Redo log files

AREA 6 – Redo log files

AREA 7 – Redo log files

AREA 8 – Redo log files

NOTE When you create ASM disk groups, the following levels of redundancy are available: normal, high, and external. Specify external redundancy for all ASM disk groups. This action lets the storage subsystem perform the mirroring rather than letting ASM perform the mirroring.

Oracle Redo Logs Using File System

Online redo log files exist on four separate disk drives. Online redo log files are multiplexed, and Oracle creates these log files in a circular fashion:

redo log1 => redo log2 => redo log3 => redo log4 => redo log1

As a result, the I/O is evenly distributed. Therefore, when Oracle switches log file groups, writing to the new redo log files does not impact reading the old redo log file to create a new archive log file.


Example of Using Oracle’s OFA Directory Structure with File Systems

Table 3 on page 15 shows a sample directory structure using File Systems. This sample directory structure includes the recommended array layouts to achieve optimal performance from Oracle and the IBM storage subsystem. Your specific environment might require changes based on available storage, performance, remote replication, or manageability.

Table 3  Sample Oracle Directory Structure Using File System

  Directory Name | RAID Level | Device ID | Logical Drive Name
  /u01/product/oracle/10.0.2/… | RAID5 or RAID10 | /dev/hdisk01 | VG01_Vol1
  /u01/oracle/admin/sid/ (bdump/, cdump/, udump/, …/) | RAID5 or RAID10 | /dev/hdisk01 | VG01_Vol1
  /u02/oracle/data/sid/ | RAID5 or RAID10 | /dev/hdisk02 | VG02_Vol1
  /u03/oracle/temp/sid/ | RAID5 or RAID10 | /dev/hdisk03 | VG03_Vol1
  /u03/oracle/undo/sid/ | RAID5 or RAID10 | /dev/hdisk03 | VG03_Vol1
  /u04/oracle/archive/sid/ | RAID5 or RAID10 | /dev/hdisk04 | VG04_Vol1
  /u04/oracle/export/sid/ | RAID5 or RAID10 | /dev/hdisk04 | VG04_Vol1
  /u04/oracle/backup/sid/ | RAID5 or RAID10 | /dev/hdisk04 | VG04_Vol1
  /u05/oracle/redo/sid/ (redo_1a, redo_3a) | RAID10 | /dev/hdisk05 | VG05_Vol1
  /u06/oracle/redo/sid/ (redo_2a, redo_4a) | RAID10 | /dev/hdisk06 | VG06_Vol1
  /u07/oracle/redo/sid/ (redo_1b, redo_3b) | RAID10 | /dev/hdisk07 | VG07_Vol1
  /u08/oracle/redo/sid/ (redo_2b, redo_4b) | RAID10 | /dev/hdisk08 | VG08_Vol1


Example of Oracle’s OFA Configuration Using Automatic Storage Management

This example configuration contains eight data areas, including disk drives, striped sets, RAID sets, and a placeholder for other new technologies to be developed in the future. Separate the eight data areas as completely as possible. Ideally, operate from different device controllers or channels to maximize throughput. The more disk drive heads are moving at one time, the faster the database. To minimize disk drive contention, lay out the Oracle Automatic Storage Management (Oracle ASM) file system disk drives as follows:

AREA 1 – Oracle executables and user areas

AREA 2 – Export files

AREA 3 – Redo log files

AREA 4 – Redo log files

AREA 5 – Redo log files

AREA 6 – Redo log files

AREA 7 – Data datafiles, index datafiles, system datafiles, tool datafiles, user datafiles

AREA 8 – Flash Recovery Area (FRA): archive logs, RMAN backups, flash area, autobackups

Oracle Redo Logs Using Oracle ASM

Using Oracle ASM, the database administrator has the option of either using non-ASM logical drives for the online redo logs or using ASM disk groups. The online redo log files exist on four separate Oracle ASM disks. Online redo log files are multiplexed, and Oracle creates these log files in a circular fashion:

redo log1 => redo log2 => redo log3 => redo log4 => redo log1

As a result, the I/O is evenly distributed. Therefore, when Oracle switches log file groups, writing to the new redo log file does not impact reading the old redo log file to create a new archive log file.



Using Oracle ASM, you can place the online redo logs in Oracle ASM disk groups. Place the second copy of the multiplexed online redo log in the Oracle ASM flash recovery area disk group. Then add the devices from Areas 3, 4, 5, and 6 to the Oracle ASM disk group dedicated for the online redo logs. This action lets the I/O spread more evenly across more Oracle ASM disks, thus reducing any I/O bottlenecks.

NOTE For performance reasons, RAID 5 is not recommended for the Oracle ASM disk group to be used for the online redo logs.

Example of Oracle’s OFA Directory Structure Using Oracle ASM

Even when using Oracle ASM, you can place the online redo log files either on non-ASM disks or on ASM disks.

Table 4 shows a sample directory structure using Oracle ASM disk groups with the online redo log files on non-ASM disks. The sample directory shows the logical drive layout and the Oracle ASM disk group layout required to achieve optimal performance from Oracle and the IBM storage subsystem. Your specific environment might require changes based on available storage, performance, remote replication, or manageability.

Table 4  Sample Oracle Directory Structure for Online Redo Log Files on Non-ASM Disks

  Directory Name | RAID Level | Device ID | Logical Drive Name
  /u01/oracle/product/10.0.2/… | RAID5 or RAID10 | /dev/hdisk01 | VG01_Vol1
  /u01/oracle/admin/sid/ (bdump/, cdump/, udump/, …/) | RAID5 or RAID10 | /dev/hdisk01 | VG01_Vol1
  +DG 1 for all data/index tablespaces | RAID5 or RAID10 | /dev/hdisk02, /dev/hdisk03 | VG02_Vol1, VG03_Vol1
  +DG 2 for all archive logs, autobackups, flashback logs, rman backups (flash recovery area) | RAID5 or RAID10 | /dev/hdisk04 | VG04_Vol1
  /u05/oracle/redo/sid/ (redo_1a, redo_3a) | RAID10 | /dev/hdisk05 | VG05_Vol1
  /u06/oracle/redo/sid/ (redo_2a, redo_4a) | RAID10 | /dev/hdisk06 | VG06_Vol1
  /u07/oracle/redo/sid/ (redo_1b, redo_3b) | RAID10 | /dev/hdisk07 | VG07_Vol1
  /u08/oracle/redo/sid/ (redo_2b, redo_4b) | RAID10 | /dev/hdisk08 | VG08_Vol1


Oracle ASM disk group DG1 contains the following logical devices:

• /dev/hdisk02

• /dev/hdisk03

Oracle ASM disk group DG2 contains the following logical device:

• /dev/hdisk04

Table 5 on page 19 shows a sample directory structure using Oracle ASM disk groups with the online redo log files on Oracle ASM disk groups. The sample directory structure shows the logical drive and Oracle ASM disk group layout required to achieve optimal performance from Oracle and from the IBM storage subsystem. Your specific environment might require changes based on available storage, performance, remote replication, or manageability.



Oracle ASM disk group DG1 contains the following logical devices:

• /dev/hdisk02

• /dev/hdisk03

• /dev/hdisk05

• /dev/hdisk06

Oracle ASM disk group DG2 contains the following logical devices:

• /dev/hdisk04

• /dev/hdisk07

• /dev/hdisk08

Table 5  Sample Oracle Directory Structure for Online Redo Log Files on ASM Disks

  Directory Name | RAID Level | Device ID | Array/Logical Drive Name
  /u01/oracle/product/10.0.2/… | RAID5 or RAID10 | /dev/hdisk01 | VG01_Vol1
  /u01/oracle/admin/sid/ (bdump/, cdump/, udump/, …/) | RAID5 or RAID10 | /dev/hdisk01 | VG01_Vol1
  +DG1 for all data and index tablespaces, online redo logs 1a, 2a, 3a, 4a | RAID10 | /dev/hdisk02, /dev/hdisk03, /dev/hdisk05, /dev/hdisk06 | VG02_Vol1, VG03_Vol1, VG05_Vol1, VG06_Vol1
  +DG2 for archive logs, autobackups, flashback logs, rman backups (flash recovery area), online redo logs 1b, 2b, 3b, 4b | RAID10 | /dev/hdisk04, /dev/hdisk07, /dev/hdisk08 | VG04_Vol1, VG07_Vol1, VG08_Vol1


Considering the Server Platform

The server platform contains the server hardware and the system software.

When choosing the hardware and operating system on which to run the Oracle database, consider the following factors:

High availability – Is Oracle Real Application Clusters (Oracle RAC) needed to provide HA capabilities?

Scalability – If the database is expected to grow and will require more hardware resources to deliver the future performance that the customer needs, Oracle RAC can provide a scalable approach to accommodate that growth.

Number of concurrent sessions – Determine the number of concurrent sessions and the complexity of these transactions before deciding what hardware and operating system to use for the database.

Amount of disk I/Os per second (IOPS) – If the database performs a large number of IOPS, consider hardware that supports multiple HBAs. Also consider the number of disk drive spindles needed to provide the IOPS that the application is forecast to require (a rough sizing sketch follows this list).

Size – If you have a small database or a small number of users, a small-to-medium sized hardware platform could be justified.

Cost – If cost is a factor when purchasing hardware, IBM System x™ servers running on the x86 platform can be a less expensive option. The x86 platform provides outstanding performance for the money.
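The sketch below illustrates the spindle-count arithmetic behind the IOPS consideration above. The per-drive IOPS figure and the write penalty are assumptions; substitute measured values for your disk drives and RAID level.

    import math

    def spindles_needed(target_iops: int, iops_per_drive: int, write_penalty: float = 1.0) -> int:
        """Minimum number of disk drive spindles to sustain a target IOPS load."""
        return math.ceil(target_iops * write_penalty / iops_per_drive)

    # Example: 8,000 host IOPS against drives assumed to deliver about 180 IOPS each.
    print(spindles_needed(8000, 180))        # 45 spindles, ignoring RAID overhead
    print(spindles_needed(8000, 180, 1.5))   # 67 spindles with an assumed 1.5x write penalty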

Considering the Server Hardware Architecture

Available bandwidth depends on the server hardware. The number of buses adds to the aggregate bandwidth, but the number of HBAs sharing a single bus can throttle the bandwidth.

Calculating Aggregate Bandwidth

An important limiting factor in I/O performance is the I/O capability of the server that hosts the application. The aggregate bandwidth of the server to the storage subsystem is measured in MB/s and contains the total capability of the buses to which the storage subsystem is connected. For example, a 64-bit PCI bus clocked at 133MHz has a maximum bandwidth calculated by the following formula:

PCI bus throughput (MB/s) = PCI Bus Width / 8 * Bus Speed

64 bit / 8 * 133 MHz = 1064 MB/s, or approximately 1 GB/s


Sharing Bandwidth with Multiple HBAs

Multiple HBAs on a bus share this single source of I/O bandwidth, and each HBA might have multiple FC ports, which typically operate at 1 Gb/s, 2 Gb/s, or 4 Gb/s. As a result, the ability to drive a storage subsystem can be throttled1 by either the server bus or by the HBAs. Therefore, whenever you configure a server or whenever you analyze I/O performance, you must know how much server bandwidth is available and which devices are sharing that bandwidth.
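As a rough worked illustration (the port count and bus type are assumed for this example, not taken from a specific configuration): two dual-port 4-Gb/s HBAs sharing one 64-bit, 133-MHz PCI-X bus can in theory request about 4 * 400 MB/s = 1600 MB/s of Fibre Channel bandwidth, but the bus itself delivers only about 1 GB/s (see Table 6). In that case the server bus, not the HBAs, becomes the throttle point.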

Considering the System Software

The system software contains the operating system and the file system.

Buffering the I/O

The type of I/O—buffered or unbuffered—provided by the operating system to the application is an important factor in analyzing storage performance issues. Unbuffered I/O (also known as raw I/O or direct I/O) moves data directly between the application and the disk drive devices. Buffered I/O is a service provided by the operating system or by the file system. Buffering improves application performance by caching write data in a file system buffer, which the operating system or file system periodically flushes to permanent storage.

Buffered I/O is generally preferred for shorter and more frequent transfers. File system buffering might change the I/O patterns generated by the application. That is, writes might coalesce so that the pattern seen by the storage subsystem is more sequential and more write-intensive than the application I/O itself. Direct I/O is preferred for larger, less frequent transfers and for applications that provide their own extensive buffering, for example Oracle. Regardless of I/O type, I/O performance generally improves when the storage subsystem is kept busy with a steady supply of I/O requests from the host application. Become familiar with the parameters that the operating system provides for controlling I/O, for example maximum transfer size.

Table 6 PCI-X Bus Throughput

MHz | PCI Bus Width | Throughput (MB/s)
66 | 64 | 528
100 | 64 | 800
133 | 64 | 1064
266 | 64 | 2128
533 | 64 | 4264

1.Throttle – To slow down I/O processing during low memory conditions, typically processing one sequence at a time in the order the request was received.


Clustering

So-called shared, clustered, or SAN file systems such as CXFS, StorNext, and GPFS provide file sharing for multiple hosts in a SAN. All such multi-node systems introduce additional I/O performance issues that require a complete understanding of the data flow, I/O alignment, and I/O sizes of the specific file system. For information about setting the segment size, see “Basing the Segment Size on Oracle I/O Operations” on page 5.

Oracle uses the term block size instead of the more common term page size. A block is the smallest unit of work. For information about how to set the db_file_multiblock_read_count that associates Oracle's block size to the segment size defined, see page 6.

When you are conducting a performance tuning session on the database, test the performance while backups are running concurrently with the daily jobs that rebuild or defragment database objects.

Calculating Optimal Segment Size

The IBM term segment size refers to the amount of data written to one disk drive in an array before writing to the next disk drive in the array. For example, in a RAID5 4+1 array with a segment size of 128 KB, the first 128 KB of the LUN storage capacity is written to the first disk drive, and the next 128 KB to the second disk drive. For a RAID1 2+2 array, 128 KB of an I/O would be written to each of the two data disk drives and to the mirrors. If the I/O size is larger than the number of disk drives times 128 KB, this pattern repeats until the entire I/O is completed.

For very large I/O requests, the optimal segment size for a RAID array is one that distributes a single host I/O across all data disk drives. The formula for optimal segment size is as follows:

LUN segment size = LUN stripe width ÷ number of data disk drives

For RAID5, the number of data disk drives is equal to the number of disk drives in the array minus 1. For example:

RAID5, 4+1 with a 64 KB segment size => (5-1) * 64 KB = 256 KB stripe width

For RAID1, the number of data disk drives is equal to the number of disk drives divided by 2. For example:

RAID1/0, 2+2 with a 64 KB segment size => (2) * 64 KB = 128 KB stripe width

For small I/O requests, make the segment size large enough to minimize the number of segments (disk drives in the LUN) that must be accessed to satisfy the I/O request, that is, to minimize segment boundary crossings. For IOPS environments, set the segment size to 64 KB or 128 KB or larger, so that the stripe width is at least as large as the median I/O size.
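As a worked example of the small-I/O case (a hypothetical transactional workload, not one measured in this document): with a median I/O size of 8 KB and a 128 KB segment size, most I/Os fall entirely within a single segment and are serviced by one disk drive, while a RAID5 4+1 array with that segment size still presents a 4 * 128 KB = 512 KB stripe for the occasional larger request.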


When using a Logical Volume Manager (LVM) to collect multiple storage subsystem LUNs into an LVM volume group (VG), the I/O stripe width is allocated across all of the segments of all of the data disk drives in all of the LUNs. The adjusted formula becomes as follows:

LUN segment size = LVM I/O stripe width / (# of data disk drives/LUN * # of LUNs/VG)
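As a worked example under assumed values: if the LVM I/O stripe width is 2 MB and the volume group is built from two LUNs, each carved from a RAID5 4+1 array (four data disk drives per LUN), then LUN segment size = 2048 KB / (4 * 2) = 256 KB.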

To learn the terminology so that you understand how data in each I/O is allocated to each LUN in a logical array, refer to the vendor documentation for the specific Logical Volume Manager.

Aligning Host I/O with RAID Striping

For all file systems and operating system types, avoid performance-degrading segment crossings. That is, do not let an I/O span a segment boundary. Matching the I/O size (commonly a power of two) to the array layout helps keep I/O aligned across the entire disk drive. However, this is only true if the starting sector is correctly aligned to a segment boundary. Segment crossing is often seen in the Windows operating system, where partitions created by Windows 2000 or Windows 2003 start at the 64th sector. Starting at the 64th sector causes misalignment with the underlying RAID striping and creates the possibility that a single I/O operation spans multiple segments.

Aligning Partitions on a Microsoft Windows Operating System

Microsoft provides the diskpar.exe utility as part of the Windows 2000 Resource Kit (diskpart.exe in Windows 2003 Service Pack 1). Using diskpar.exe, you can set the starting sector in the master boot record to a value that ensures sector alignment for all I/Os. Use a multiple of 64, such as 64 or 128. Sector alignment is especially important for Exchange. For Microsoft’s usage details on diskpar, go to:

http://technet.microsoft.com/en-us/library/0e24eb22-fbd5-4536-9cb4-2bd8e98806e7.aspx
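A minimal sketch of aligning a new partition with diskpart follows; it assumes Windows Server 2003 SP1 or later (where the align parameter, in KB, is available), and the disk number is hypothetical. Match the align value to your segment size or use a multiple of 64:

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> create partition primary align=64
DISKPART> exit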

Aligning Partitions on a Linux Operating System

IMPORTANT Adjust a Linux operating system for correct alignment only if you are an expert. Only an expert should use the extra-functionality (x) mode of the fdisk command.

In x mode, experts can use the b option to set the starting block of a partition as an absolute address. For example, presume an application with a 2-MB block size. If the first stripe group occupies blocks 0-4095, setting the starting block to 4096 (the last block of the first stripe group + 1) guarantees alignment.

To move the starting offset of data in a partition, follow these steps.


IMPORTANT Do not attempt to partition a live logical drive that contains data. The data will be lost.

1 Create a new partition.

2 Using fdisk, choose the x option.

3 Choose b to move the beginning of the data in a partition.

4 Choose the partition number.

5 Select a new beginning for the data.

Typically, you enter one stripe width, as shown by vdShow [LUN_Number] >> Stripe Size >> Sectors.

6 Choose w to write the partition table.

NOTE Stripe Size = Segment Size / 512 * Number of LUNs. For example, 512 KB (524,288) / 512 * 4 = 4096.

Figure 5 on page 25 and Figure 6 on page 26 clarify the formula by showing the shell output.
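For reference, an illustrative expert-mode fdisk session following the steps above might look like the following; the device name (/dev/sdb), the partition number, and the new starting block (4096) are assumptions for this example only, and the exact prompts vary by fdisk version:

# fdisk /dev/sdb
Command (m for help): n                              <- step 1: create a new partition
Command (m for help): x                              <- step 2: enter expert mode
Expert command (m for help): b                       <- step 3: move beginning of data in a partition
Partition number (1-4): 1                            <- step 4: choose the partition
New beginning of data (63-..., default 63): 4096     <- step 5: one stripe width, per the NOTE above
Expert command (m for help): w                       <- step 6: write the partition table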


Figure 5 Diagnostic Display of RD5_V1


Figure 6 fdisk Display of RD5_V1 Logical Drive

Aligning Partitions on Other Operating Systems

I/O alignment is equally important for other operating systems and their associated file systems (such as UFS, VxFS, QFS, ZFS, CXFS, and SNFS). For details about how to ensure alignment in each situation, refer to the respective vendor’s documentation.

Locating Recommendations for Host Bus Adapter Settings

Use the HBA settings recommended by IBM rather than the vendor’s default settings. Check with IBM for the recommended HBA settings. Table 7 on page 27 and Table 8 on page 29 show the recommended HBA settings for QLogic 246x HBAs. For other vendor HBAs, and for current settings, contact your IBM Customer Support representative for detailed adapter settings specific to your operating environment.


Recommendations for QLx246x Settings

Table 7 Adapter Settings (1 of 2)

Setting | Values | Default | IBM Recommended | Description

Host Adapter BIOS | Enabled/Disabled | Disabled | Disabled | Disables the ROM BIOS on the HBA, freeing space in upper memory. Must be enabled if you are booting from an FC disk drive attached to the HBA.

Frame Size | 512/1024/2048 | 2048 | 2048 | Specifies the maximum frame length supported by the HBA. NOTE: This option is not available for QLE2x0 HBAs.

Loop Reset Delay | 0-60 seconds | 5 seconds | 8 | After resetting the loop, the firmware refrains from initiating any loop activity for the number of seconds specified in this setting. NOTE: This option is not available for QLE2x0 HBAs.

Adapter Hard Loop ID | Enabled/Disabled | Disabled | Enabled | Forces the adapter to attempt to use the ID specified in the Hard Loop ID setting. NOTE: This option is not available for QLE2x0 HBAs.

Hard Loop ID | 0-125 | 0 | Must be unique for each port | Specifies the ID for the Adapter Hard Loop ID setting. NOTE: This option is not available for QLE2x0 HBAs.

Spin Up Delay | Enabled/Disabled | Disabled | Disabled | Causes the BIOS to wait up to two minutes to find the first disk drive.


Table 7 Adapter Settings (2 of 2)

Setting | Values | Default | IBM Recommended | Description

Connection Options | 0/1/2 | 2 | 2 | Defines the type of connection: 0 – Loop; 1 – Point-to-point; 2 – Loop preferred, then point-to-point. NOTE: This option is not available for QLE2x0 HBAs.

Fibre Channel Tape Support | Enabled/Disabled | Enabled | Disabled | Enables FCP-2 recovery. NOTE: This option is not available for QLE2x0 HBAs.

Data Rate | 0/1/2/3 | 2 | 2 | Determines the data rate: 0 – the QLx246x board runs at 1 Gb/s; 1 – the QLx246x board runs at 2 Gb/s; 2 – the HBA auto-negotiates and determines the data rate; 3 – the QLx246x board runs at 4 Gb/s. NOTE: This option is not available for QLE2x0 HBAs.


Table 8 Advanced Adapter Settings (1 of 2)

Setting | Values | Default | IBM Recommended | Description

Execution Throttle | 1-256 | 16 | 256 | Specifies the maximum number of commands that can run on any one target port. When the number of commands reaches a target port’s execution throttle, the system does not issue any new commands until one of the current commands completes executing.

LUNs Per Target | 0/8/16/32/64/128/256 | 128 | 0 | Specifies the number of LUNs supported per target if the target does not support the Report LUN command. Multiple LUN support is typically for RAID boxes that use LUNs to map disk drives.

Enable LIP Reset | Yes/No | No | No | Determines the type of loop initialization process (LIP) reset used when the operating system starts a bus reset routine: Yes – the driver starts a global LIP reset to reset the target devices; No – the driver starts a global LIP reset with full login.

Enable LIP Full Login | Yes/No | Yes | Yes | Instructs the ISP chip to re-login to all ports after any LIP.

Enable Target Reset | Yes/No | Yes | Yes | Enables the drivers to issue a Target Reset command to all devices on the loop when a SCSI Bus Reset command is issued.

Login Retry Count | 0-255 | 8 | 30 | Specifies the number of times that the software tries to log in to a device.

Port Down Retry Count | 0-255 seconds | 30 seconds | 70 seconds | Specifies the number of seconds that the software waits to retry a command given to a port that returns a port down status.

Link Down Timeout | 0-255 seconds | 30 seconds | 60 seconds | Specifies the number of seconds that the software waits for a down link to come up.


Table 8 Advanced Adapter Settings (2 of 2)

Setting | Values | Default | IBM Recommended | Description

Operation Mode | 0/5/6 | 0 | 0 | Specifies the reduced interrupt operation (RIO) modes, if supported by the software driver. The RIO modes permit posting multiple command completions in a single interrupt. The following modes are supported: 0 – interrupt for every I/O completion; 5 – interrupt when the Interrupt Delay Timer expires; 6 – interrupt when the Interrupt Delay Timer expires or there are no active I/Os.

Interrupt Delay Timer | 0-255 | 0 | 0 | The value (in 200-microsecond increments) used by a timer to set the wait time between generating an interrupt.

Enable Interrupt | Yes/No | No | No | Yes – enables the BIOS to use the IRQ assigned to the ISP24xx; No – causes the BIOS to poll for ISP mailbox command completion status.

Locating Recommendations for Fibre Channel Switch Settings

Use the Fibre Channel switch settings recommended by IBM. Recommended settings are available from the supplier of the storage subsystem. For example, on Brocade switches, make sure that the In-Order Delivery parameter is enabled. In a multi-switch SAN fabric, where I/O traverses inter-switch links, make sure to configure sufficient inter-switch link bandwidth.


Using Command Tag Queuing

Command Tag Queuing (CTQ) refers to the controller’s ability to line up multiple SCSI commands for a single LUN and run the commands in an optimized order that minimizes rotational and seek latencies. Although CTQ might not help in some instances, such as single-threaded I/O, CTQ never hurts performance and therefore is generally recommended. The IBM models vary in CTQ capability, generally up to 2048 per controller. Adjust the CTQ size to service multiple hosts. CTQ is set up by default on IBM storage subsystems, but you also must set up CTQ in the host operating system and on the HBA. Refer to the documentation from the HBA vendor.

The capability of a single host varies by the type of operating system, but you can generally calculate CTQ as follows:

OS CTQ Depth Setting = Maximum OS queue depth (< 255) / Total # of LUNs

NOTE If the HBA has a lower CTQ capacity than the result of the previously mentioned calculation, the HBA’s CTQ capacity limits the actual setting.
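For example (the LUN count is assumed for illustration): if the operating system allows a maximum queue depth of 255 and the host sees 8 LUNs, 255 / 8 is approximately 31, so a per-LUN queue depth of about 30 is a reasonable starting point, provided the HBA supports at least that many outstanding commands per LUN.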

Table 9 shows that the method for setting CTQ varies by the type of operating system. For detailed information, refer to the documentation for each operating system.

Table 9 Methods for Setting CTQ

Operating System | Command

Solaris | Set sd_max_throttle in /etc/system.

HP-UX | Use the scsictl command; the scsi_max_depth dynamic parameter can be added by using the kmtune command.

AIX® | Use lsattr -E -l hdisk<n> to view the LUN setting, and chdev -l hdisk<n> -a q_type=simple -a queue_depth=<NewValue> to change the queue depth for a LUN. The -T and -P flags control when the change becomes effective and how long the change lasts.

IRIX | For each LUN, use the fx command (for example, fx -x "dksc 6,2,2"), following the menus down to /label/set/param, where options are provided for Enable/Disable and for CTQ depth.

Linux | The operating system default is viewed or set in the generic sg driver at /proc/scsi/sg. The HBA parameter is set in the HBA driver configuration file, for example, lpfc.conf.

Windows | Change Registry settings as described in the documentation from the HBA vendor.
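As an illustration of the AIX method in Table 9, the following is a minimal sketch; the hdisk number and queue depth value are examples only, derived from the CTQ calculation above:

# lsattr -E -l hdisk3 | grep queue_depth                       <- view the current queue depth for the LUN
# chdev -l hdisk3 -a q_type=simple -a queue_depth=30 -P        <- stage the change; with -P it takes effect at the next reboot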


Analyzing I/O Characteristics

Analyze the application to determine the best RAID level and the appropriate number of disk drives to put in each array:

• Is the I/O primarily sequential or random?

• Is the size of a typical I/O large (> 256 KB), small (< 64 KB), or in-between?

If this number is unknown, calculate an estimate of I/O size from the statistics reported by the IBM System Storage DS Storage Manager Performance Monitor using the following formula:

Current KB/second ÷ Current I/O/second = KB/I/O

• What is the I/O mix, that is, the proportion of reads to writes? Most environments are primarily Read.

• What Read Percent statistic does DS Storage Manager Performance Monitor report?

• What type of I/O does the application use—buffered or unbuffered?

• Are concurrent I/Os or multiple I/O threads used?

In general, creating more sustained I/O produces the best overall results, up to the point of controller saturation. Write-intensive workloads are an exception to this general rule.

Using Logical Volume Manager to Balance I/O Load

Some hosts use a Logical Volume Manager (LVM), which can be useful for controlling and adjusting the application I/O size presented to the storage subsystem. For highest performance, use LUNs from both controllers to build a Logical Volume Group (LVG), thereby balancing (or striping) the I/O load across the available hardware. The choice of an LVG “segment” size is similar to that discussed in “Choosing the Number of Disk Drives to Put in an Array” on page 35. Stripe high-bandwidth applications that use large I/O sizes across multiple IBM logical drives to aggregate the available channel bandwidth. For transaction-oriented applications, keep the LVG “segment” size greater than or equal to the predominant application I/O size to avoid segment crossings.

For more information about IBM segment size, see “Calculating Optimal Segment Size” on page 22.


Setting Up the Storage Subsystem

After the operating system and the application are fully considered, you can now set up the storage subsystem for optimal performance.

Factors Influencing Storage Performance

IBM storage subsystems, with their intelligent controllers, deliver the highest possible performance from arrays. IBM storage subsystems were designed from the outset for open system I/O. One of the long-standing trademarks of IBM storage subsystem technology is the extremely high performance in open systems. IBM storage subsystems are designed to provide the best possible disk drive-based performance—a requirement for today’s transaction-intensive applications. Disk drive-based performance is accomplished with a combination of attentive controller design, custom integrated circuits to accelerate RAID XOR, and efficient cache management. IBM has been developing and perfecting these features for nearly two decades.

Disk drive I/O capacity is at the very heart of storage subsystem performance. For IBM storage subsystems, the number of disk drives in a configuration usually establishes the upper bound for storage subsystem performance. Many interacting factors determine how much of the raw performance of a group of disk drives a specific application can use. These factors include the following:

• Size of the cache

• Algorithms that manage the cache

• Number and type of host and disk drive channels

• How RAID parity calculations are performed

• Whether SCSI commands are queued for optimized execution by the controllers

• How controllers choose data paths

Estimating Capacity Limits

When setting up a storage subsystem, first estimate the capacity limits. To establish a framework for tuning, estimate the upper limit for performance for the IBM storage subsystem, based upon the specifications for the particular model.

For IOPS environments, the number of disk drives in the array largely determines performance. The maximum IOPS (from disk drive) for a storage subsystem is typically specified with a full complement of disk drives. Performance is lower with fewer disk drives and can be approximated by a simple ratio of drive counts. Because many factors determine IOPS, such as disk drive type (FC, SAS, or SATA), disk drive RPM, data layout, varying I/O sizes, array layout, controller architecture, and workload, this ratio is only an approximation.


Performance in bandwidth environments is not quite so directly dependent on disk drive count, and the full bandwidth rating of the storage subsystem often can be realized with less than a full configuration, for example, with as few as four full expansion drawers.

Auto Logical Drive Transfer

Auto Logical Drive Transfer (ADT, also called AVT) is a controller firmware-based failover method that does not require specialized host software to actively watch or manage host paths to the storage. Path failover occurs simply by sending I/O to the controller that does not own the logical drive. IBM made some compromises with the ADT failover method so that the method works with “unsophisticated,” impatient host failover implementations.

One such compromise was to limit the amount of dirty cache permitted for a LUN, so that cache flush required for a LUN failover does not take too long. The dirty cache limit is 16 MB / LUN. When the amount of dirty data in cache exceeds 16 MB for a single LUN, all of the data for that LUN is marked with an age of 0, making the data available for immediate flush. Any new data coming into that LUN is also marked with an age of zero, rather than the usual default age of ten seconds. This cache flush strategy can cause problems for a couple of reasons. First, the flush strategy might be more aggressive than one would like, possibly causing reads to get stuck behind longer than normal write to disk drive activity. Second, cache flushing might occur at inopportune times.

If you have ADT set up in any host region, the ADT cache flush rules apply to all host regions. If you want to totally disable ADT cache flush logic, you must disable AVT in all host regions, and then restart the controllers. The decision on flush strategy is made at boot time.

If you are using RDAC or other multi-path software, disable ADT on the IBM storage subsystem. To disable ADT on all host regions, copy the ADT disable script to a *.SCT file and then run the *.SCT file on the IBM storage subsystem. For the ADT disable script, see “Appendix B: Test Configuration.”


Determining the Best RAID Level

In general, RAID5 works best for sequential large I/Os (> 256 KB), while RAID5 or RAID1 works best for small I/Os (< 32 KB). For I/O sizes in between, the RAID level might be dictated by other application characteristics.

RAID5 and RAID1 have similar characteristics for read environments. For sequential writes, RAID5 typically has an advantage over RAID1 because RAID1 must duplicate the host write data to its mirror disk drives. This duplication of data typically puts a strain on the disk drive channels of the RAID hardware. RAID5 is challenged most by random writes, which can generate multiple disk drive I/Os for each host write. Different RAID levels can be tested by using the DS Storage Manager Dynamic RAID Migration feature, which lets the RAID level of an array be changed while maintaining continuous access to data.

Choosing the Number of Disk Drives to Put in an Array

For high bandwidth applications, use enough disk drives to provide a full stripe write for the typical application I/O size, while still providing for a segment size of 64 KB or larger. Host I/O sizes of a power of two are typical, such as 512 KB, 1 MB, and 2 MB. A RAID5 array of 4+1 or 8+1 is a good match for those host I/O sizes. Therefore, for a typical host I/O size of 1 MB, use a RAID5 4+1 with a 256 KB segment size, or a RAID5 8+1 with a 128 KB segment size. For more information, see “Setting Up the Storage Subsystem” on page 33.

For IOPS or transaction-oriented applications, the number of disk drives becomes more significant because disk drive random I/O rates are relatively low. Select a number of disk drives that matches the per-array I/O rate needed to support the application. Make sure to account for the I/Os required to implement the data protection of the selected RAID level. Make the segment size at least as large as the typical application I/O size to avoid segment crossings, which place additional I/O demand on the disk drives. A segment size of 128 KB is a reasonable starting point for most applications. The higher the spin speed of the disk drive, the better. The spindle count of an existing array can be increased using the Dynamic Capacity Expansion feature of IBM System Storage DS Storage Manager.
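As a rough worked example (the per-drive figure is a common planning assumption, not a measurement from this document): if the application is forecast to need about 1,800 random IOPS and a 15,000-RPM FC drive is planned at roughly 180 IOPS, at least ten data disk drives are needed before accounting for RAID overhead. A write-heavy RAID5 workload would need proportionally more spindles, because each host write can generate multiple disk drive I/Os.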

Table 10 I/O Size and Optimal RAID Level

I/O Size | RAID Level

Sequential, large (>256 KB) | RAID5
Small (<32 KB) | RAID5 or RAID1
Between 32 KB and 256 KB | RAID level does not depend on I/O size


How the Number of Disk Drives per Array Affects Performance

To get the best performance from the storage subsystem, use as many disk drives as possible within the RAID1 arrays. Figure 7 demonstrates the performance difference between the following arrays at ten-percent writes:

RAID Level | Disk Drives | Logical Drive Name
RAID1 | 4+4 = 8 total | RD1_V1_10
RAID1 | 8+8 = 16 total | RD1_V2_10
RAID5 | 4+1 = 5 total | RD5_V1_10
RAID5 | 8+1 = 9 total | RD5_V2_10

Figure 7 shows how the number of disk drives affects the performance of RAID1 and RAID5, measured in MB/s. In Figure 7, the performance of the RAID5 8+1 array is almost the same (at most 5 percent lower) as that of the RAID1 4+4 array, while providing twice the logical drive space and using nine disk drives for RAID5 compared to eight for RAID1. For data with light write access, RAID5 is a good, economical choice for large amounts of data.

Figure 7 Impact in MB/s of the Number of Disk Drives on the Performance of RAID1 and RAID5


Figure 8 shows how the number of disk drives affects the performance of RAID1 and RAID5, measured in IOPS.

Figure 8 Impact in IOPS of the Number of Disk Drives on the Performance of RAID1 and RAID5

However, Figure 9 on page 38 shows that at heavy write levels, RAID1 clearly outperforms RAID5 at both logical drive sizes. Figure 9 demonstrates the need to place write-intensive files, such as redo logs, archive logs, and backups, on RAID1 logical drives instead of RAID5 logical drives.


Figure 9 Impact of the Number of Disk Drives on RAID1 and RAID5 Performance in a Write-Intensive Environment

Storage Subsystem Design Best Practices

Here are some of the most important best practices that you must follow to obtain optimal performance from an IBM storage subsystem.

• Use all available host-side channels. Balance I/O across the dual controllers of the storage subsystem (for example, with a volume manager) and strive to keep both controllers busy.

• Attach cables to the expansion drawers according to your company’s best practices.

• Choose faster disk drives. A 15-K RPM disk drive has one-third less rotational latency than a 10-K RPM disk drive.

• Add more disk drives to the configuration for a linear increase in performance, up to the point of controller saturation. More disk drives provide more spindles to service I/O.

• Create arrays across expansion drawers to distribute I/O across back-end loops. This varies by controller module. Try to balance the number of disk drives on each back-end loop. For details, see “Locating Arrays” on page 42.

• Configure the entire capacity of an array into a single logical drive. Cost considerations might require multiple logical drives on one array, but this choice typically increases seek-time penalties.


• Separate random workloads and sequential workloads onto different physical disk drives.

• Choose an optimal segment size based on the I/O characteristics of the application.

• If ADT is not required for failover by any of the hosts that use the storage subsystem, disable ADT in all host regions. Consult with an IBM Customer Support representative on the procedure, which must be repeated if the NVSRAM file is changed or reloaded.

Cabling and Setup

Each of the IBM DS4800 storage subsystems provides eight 4-Gb/s FC-AL host or FC-SW SAN connections and eight 4-Gb/s FC-AL drive expansion enclosure connections. Host connectivity can be configured in many different ways. For specific host-side configurations, refer to the Hardware Cabling Guide. This document presumes that you are using dual-ported host-side connectivity to take full advantage of the IBM storage subsystem’s performance and redundancy.

Connecting the IBM DS4800 and DS5000 Controller Modules

Figure 10 on page 40 shows the connections on the IBM DS4800 controller modules.


Figure 10 DS4800 Controller

The callouts in Figure 10 identify the following components:

1 Host Channels
1a Link Speed Indicator
1b Link Speed Indicator
2 RS232 Serial Controller
3 Ethernet Connectors
3a Ethernet 100BaseT Indicator
3b Ethernet Link Indicator
4 Disk Drive Channels (dual-ported)
4a Bypass Indicator (SFP)
4b Disk Drive Channel 1 Speed Indicator
4c Disk Drive Channel 1 Speed Indicator
4d Bypass Indicator (SFP)
5 Tray ID/Diagnostic Display
5a Service Action Allowed
5b Needs Attention Indicator
5c Cache Active Indicator
6 AC Power Switch
7 AC Power Adapter

Figure 11 on page 41 shows the connections on the DS5000 controller module.


Figure 11 DS5000 Controller

Connecting the Host

IBM storage subsystems are designed with high availability and performance in mind. IBM storage subsystems feature redundant power supplies, redundant controllers, redundant cache, and redundant internal architecture. To maximize this redundant technology and to prevent a single point of failure, use the following cabling guidelines in any highly available environment. Figure 12 on page 42 shows the recommended method for connecting hosts with redundant switches for the DS4800 controller module. If you are using a DS5000 controller module, connect the hosts with redundant switches using available host ports. Redundant switching provides the most fault-tolerant configuration available for host connectivity.

The callouts in Figure 11 identify the following components:

1 RS232 Serial Controller
2 Ethernet Connectors
3 Host Channels
3a Link Speed Indicator
3b Link Speed Indicator
4 Disk Drive Connectors
4a Bypass Indicator (SFP)
4b Disk Drive Channel 1 Speed Indicator
4c Disk Drive Channel 1 Speed Indicator
4d Bypass Indicator (SFP)
5 Tray ID/Diagnostic Display
5a Service Action Allowed
5b Service Action Required
5c Cache Active Indicator
6 AC Power Switch
7 AC Power Adapter


Figure 12 Redundant Switched Host Connection

Locating Arrays

How you locate the arrays depends on the controller model that you are using.

• In controller module models prior to the DS4800 controller module and the DS5000 controller module, you stripe the arrays across all available expansion drawers.

• In the DS4800 controller module and the DS5000 controller module, you must correctly attach cables to the drive expansion units according to how the arrays are laid out.

Cabling Arrays in the DS4800 Controller Module and DS5000 Controller Module

Each DS4800 controller module and DS5000 controller module is distinguished by four drive-side ports on each controller, labeled 1 through 4. These four ports let the dual-controller system support four redundant back-end loops. One port each from controller A and controller B are paired to connect a logical stack of expansion drawers. For example, stack 1 in Figure 13 on page 44 is connected to controller port A4 and controller port B1. Stack 2 is connected to controller port A3 and controller port B2. Stack depth varies from one to four depending on the total number of expansion drawers configured. For optimal performance, create arrays and logical drives for controller A in stack 1 and stack 3 and for controller B in stack 2 and stack 4. You must choose between optimal performance and optimal protection of the drive expansion enclosure. Assuming sufficient expansion drawers, arrays up to 7+1 can be configured for both optimal performance and optimal drive expansion enclosure protection, as in the blue array depicted. The green array has been configured for optimal performance, with one drive expansion enclosure containing two disk drives of the array.

Figure 13 on page 44 shows the correct method for attaching cables to a DS4800 storage subsystem.


Figure 13 Cabling a DS4800 Storage Subsystem


Figure 14 shows the correct method for attaching cables to a DS5000 storage subsystem.

Figure 14 Cabling a DS5000 Storage Subsystem


Figure 13 on page 44 shows a full configuration of the 16-slot 4 Gb/s expansion drawers (EXP810), which if fully populated supports the DS4800 controller’s maximum of 224 total disk drives. For partially populated expansion drawers or the older 14-slot expansion drawers, a maximum of 16 enclosures can be configured. Regardless of enclosure count, a best practice is to balance disk drives across all stacks as evenly as possible. For optimal throughput, each controller uses a separate drive loop. Therefore, assign controller A to one stack and assign controller B to another stack.

For the DS4800 and the DS5000 controller modules only, you can operate both 2 Gb/s and 4 Gb/s expansion drawers at the same time. However, when operating both Gb/s speeds, the expansion drawers must be connected as stacks 1 and 2 and stacks 3 and 4 because of the switch design, which is contrary to the optimal performance configuration.

Tuning External IBM Storage Subsystems

The challenge of storage performance tuning is to understand and control these interacting factors while accurately measuring application performance. Because the performance of the storage subsystem accounts for only a portion of overall application performance, tuning must be completed in context. The full context includes the I/O characteristics of the application and all of the components in the data path:

• HBA

• Switches

• Logical drive manager

• File system

• Operating system

• Server

With multiple parameters to consider, the task of performance tuning even one application can seem formidable. Tuning all of the different applications that share a single storage subsystem seems even more formidable. To reduce the complexity of tuning, IBM storage subsystems feature performance monitoring and flexible tuning controls in IBM System Storage DS Storage Manager.

Elements That Influence Performance

This document provides an overall approach to tuning I/O performance and also provides specific guidelines for using the storage subsystem tuning controls. These recommendations start with an overall analysis of the three elements that determine I/O performance:

• Application software


• Server platform (hardware, operating system, volume managers, device drivers)

• Storage subsystem

An Iterative Approach to Performance Tuning

Performance tuning requires you to loop repeatedly through the following steps:

1 Run benchmarking tests.

2 Measure the performance results.

3 Adjust the settings as required, changing only one parameter at a time.

The dynamic features in all IBM storage subsystems are ideally suited for this iterative process. The first step in tuning is to establish a baseline of existing performance with a convenient and trusted metric. Compare the baseline to the estimated capability of the configuration. This document provides recommendations for this important first step.

For more detailed information about storage performance tuning, contact IBM or the supplier of the storage subsystem. See “Contact Information” on page 64. IBM and IBM resellers also provide in-depth tuning classes and customized on-site consulting services. See “Conclusion” on page 64.

Setting the Global Parameters

This section describes how to set the global parameters.

Setting the Global Cache Flush

Two global parameters, Start Flushing and Stop Flushing, are provided to control the flushing of write data from the controller cache to the disk drives. Flushing starts when the percentage of unwritten data cache exceeds the Start Flushing level and stops when the percentage hits the Stop Flushing mark. IBM recommends setting both parameters to the same value to cause a brief flushing operation to maintain a specified level of free space. Start with the default values and experiment.

If you activate the per-logical drive failover functionality of ADT, cache management and flushing behavior can be affected. If ADT is not required for failover for the host platforms using the storage subsystem, disabling ADT in all host regions can improve performance for some workloads.

Setting the Force Unit Access and Synchronize Cache

Two cache-related NVSRAM parameters, set by the host type, relate to SCSI commands from the host: Force Unit Access (FUA) and Synchronize Cache. FUA is a bit that is set as part of a read or write command. If activated, FUA instructs the storage subsystem to bypass cache and go directly to the disk drive. The Synchronize Cache command instructs the storage subsystem to flush cache to the disk drive. The FUA bit and the Synchronize Cache command are most often used in Windows server environments. Because control of these functions is usually left to the IBM storage subsystem, these NVSRAM parameters are normally set to the “Ignore” state. However, if cache behavior is not as expected, contact the supplier of the IBM storage subsystem to verify the state of these parameters.

Setting the Global Media Scan

The impact of Media Scan is minimal, but the extra reads do represent a finite workload. Therefore, consider the performance demands when setting Media Scan.

• In most cases, set up Media Scan and set the scan frequency to 15 days to provide periodic scans of the surface of all disk drives.

• When absolute maximum performance is the objective, do not set up Media Scan.

You also can enable or disable Media Scan for each logical drive. See the following section, “Setting LUN-Specific Parameters.”

Setting LUN-Specific Parameters

Use the Performance Monitor to guide the tuning process. Observe the cache hit percentage and the read/write mix for each LUN of interest while an application is running.

Setting the LUN-Specific Media Scan

One way to limit the workload caused by Media Scan is to enable or disable Media Scan for each logical drive, rather than globally.

• In most cases, set up Media Scan for each logical drive.

• If the goal is to maximize the performance of a LUN or to take fine measurements of performance, disable Media Scan for a specific logical drive.

Setting the Caching Parameters

The cache block size is a global parameter for the storage subsystem. Set the cache block size nearest to the typical I/O size. Set the cache block size to 4 KB for transactional workloads with small I/O sizes and to 16 KB for large-block and sequential I/O. You can easily change the cache block size at any time to optimize for a particular workload during a specific time period.


Setting the LUN-Specific Write Cache and Write Cache Mirroring

Setting up Write Cache on a LUN generally improves performance for applications with significant write content, unless the application features a continuous stream of writes. However, write caching does introduce some small risk of data loss, in the unlikely event of a controller failure. To eliminate any chance of data loss from a controller failure, the Write Cache Mirroring option makes sure that a LUN’s write data is cached in both controllers. This option historically trades write performance for the highest possible availability, although recent firmware improvements significantly reduce this penalty for bandwidth environments.

Because the cache batteries protect the controller cache for several days, a power failure alone does not threaten data.

To see the write cache settings for the test setup, see “Appendix B: Test Configuration.”

Setting the LUN-Specific Read Cache and Read Ahead Multiplier

Setting up Read-Ahead Caching for a LUN might be helpful if parts of the workload are sequential. For IBM System Storage DS Storage Manager versions earlier than 9.1, use small values, such as 1 through 4, for the Read Ahead Enable and observe the effect. For IBM System Storage DS Storage Manager versions 9.1 and later, the read ahead is completed by algorithm, so that the feature is either enabled (any non-zero value) or disabled (0), and in most cases should be enabled.

I/O Performance Monitoring and Troubleshooting

As stated in “Laying Out Logical Drives and Arrays” on page 11, when designing a database, the best practice is to incorporate the Stripe and Mirror Everything (SAME) method. After you deploy the database, readily available methods and tools can show whether the I/O is spread out evenly or concentrated on just a couple of LUNs. This information is available from both the database and the operating system. If your database uses ASM, and if you have created disk groups containing many LUNs, ASM automatically stripes the data over the LUNs in the assigned disk group. Using this method helps the database administrator make sure that the data is striped over all of the available LUNs.

Tools for Database Performance Monitoring

Oracle provides tools to query I/O statistics or to search for hot disks, and it provides views for the database administrator. The most common views are V$FILESTAT, V$SYSSTAT, Oracle wait events (V$SYSTEM_EVENT), and V$TEMPSTAT.

V$FILESTAT – Shows the number of physical reads and writes along with the total number of block I/Os at each datafile.

V$SYSSTAT – Shows the Oracle system statistics.


V$SYSTEM_EVENT – Shows Oracle’s system-wide wait events. You can see, in real time, I/O bottlenecks that can cause database contention.

V$SESSION_EVENT – Shows Oracle’s wait events at the session level.

Oracle trace events – Trace the execution of SQL statements. The most common trace event that you can set is the 10046 event, which traces the execution of SQL statements at the session level. Oracle permits a trace level ranging from a basic trace (level 1) to level 12, which includes both wait statistics and bind variables.

Oracle also provides reports that you can run to show useful information for troubleshooting I/O data; an example of generating these reports from SQL*Plus follows the list below.

• Statspack – Included in the database after the Oracle 9i release. Statspack performs the following functions:

■ Collects data that includes the SQL you are running.

■ Calculates ratios that aid in performance tuning such as cache hit ratios.

■ Collects data at regular intervals, or you can take a snapshot at any time. Snapshot data lets you see the details of database activity between two given dates and times.

■ Formats the output in a report format.

• Automatic Workload Repository (AWR) – Included in the database after release 10g. AWR improves on the concept of Statspack. AWR provides the following benefits:

■ Collects database performance statistics and metrics at the database level.

■ Loads automatically when creating the database.

■ Provides more robust functionality than Statspack.

■ Automatically takes snapshots every hour.

■ Lets you set a retention period, after which Oracle removes outdated information.

NOTE AWR is automatically installed and operating when you create the database. Before you can query or use the AWR reports or data, you must purchase an Oracle license. For more information, see your Oracle sales representative.

• Oracle Enterprise Manager (OEM) – Manages databases, applications, and storage, and provides different modules (packs) that improve the capability of OEM. The following is a partial list of the packs included.

■ Configuration management packs

■ Change management packs


■ Diagnostics packs

■ Tuning packs

NOTE Oracle Enterprise Manager comes with the Oracle database software but has limited functionality. You must purchase a pack license before you can install and use a pack.

Tools for Operating System Performance Monitoring

Each operating system contains utilities that provide real-time I/O statistics that you can use to view or troubleshoot I/O at the host level. In the Unix or Linux OS, the most common utilities are IOSTAT and VMSTAT. In the Windows OS, Oracle provides added functionality to Performance Monitor.

IOSTAT – Reports CPU usage and I/O statistics for all I/O devices. When you run IOSTAT in continuous mode during your troubleshooting process, you can observe the usage of the disk drive devices and determine whether the I/O is distributed unevenly over the disk drives, or whether there is a tendency to concentrate all I/O on just a couple of disk drives (the “hot disk” scenario). For more information about how to run IOSTAT, refer to the man pages in your OS. The following provides a sample command line and the output of an IOSTAT run.

On AIX, type the following command.

p6blade1:/# iostat 5

tty:  tin    tout    avg-cpu:  % user  % sys  % idle  % iowait
      0.0    244.7              15.6    47.5   33.5     3.4

Disks:     % tm_act      Kbps      tps    Kb_read   Kb_wrtn
hdisk6        0.0          0.8      0.2        4         0
hdisk4        0.0          0.8      0.2        4         0
hdisk7        0.0         32.8      2.2      132        32
hdisk8        0.2          1.6      0.4        4         4
hdisk9        0.0         26.4      1.8      100        32
hdisk5        0.0          0.8      0.2        4         0
hdisk1        0.0          0.0      0.0        0         0
hdisk2        0.0          1.6      0.4        4         4
hdisk3        0.2         58.4      3.8      292         0
hdisk0        1.6         20.0      3.2       44        56
hdisk13      30.4      79952.8    352.9   399664         0
hdisk10       0.0          0.0      0.0        0         0
hdisk15      36.2      79822.4    349.3   399012         0
hdisk14      25.2      79896.8    341.5   399384         0
hdisk11      35.0      79802.4    325.5   398912         0
hdisk12      29.8      79951.2    323.3   399656         0

VMSTAT – Reports on virtual memory statistics. Depending on the options you specify and which OS you are using, VMSTAT shows statistics on run queues, memory paging (in/out), memory scan rate, and CPU usage by user, system, and idle state. For more information about how to run VMSTAT, refer to the man pages in your OS. The following provides a sample command line and the output of a VMSTAT run.

On AIX, type the following command.

p6blade1:/# vmstat 5

System configuration: lcpu=8 mem=7744MB

kthr     memory              page                      faults         cpu
----- ------------- ------------------------------ ------------ -------------
 r  b    avm     fre   re  pi  po  fr  sr  cy   in   sy   cs   us sy id wa
 0  0  383517 1577052   0   0   0   0   0   0    2   44   89    0  0 99  0
 0  1  383517 1577052   0   0   0   0   0   0    4   11   89    0  0 99  0
 0  0  383517 1577052   0   0   0   0   0   0    1   13   88    0  0 99  0
 0  0  383515 1577052   0   0   0   0   0   0    3   97   92    0  0 99  0

Performance Monitor in Windows – You can modify Performance Monitor in Windows to include Oracle performance statistics. Some of the statistics you can monitor from within Performance Monitor include buffer cache, log buffer, DBWR, shared pool, and I/O on database files. To include these statistics in Performance Monitor, perform the following step.

During the Oracle install, select Oracle Counters for Windows Performance Monitor under Oracle Windows Interfaces.

Figure 15 on page 53 shows how Performance Monitor looks when monitoring the Oracle counters.


Figure 15 Monitoring Oracle Counters

How Tablespace Fragmentation Affects Performance

In Oracle version 7.x, datafiles were created as one large extent. For a typically sized database, this datafile setup did not present a problem, and with each newer Oracle release the fragmentation issue became less of a concern. At first, there were limits on the number of extents based on block size. Today, multiple theories exist on tablespace fragmentation. Some experts, including Oracle, propose that thousands of extents can exist. Other experts want to keep the number of extents low.

Oracle developed auto-allocate technology to simplify the administration of tables with varying row lengths within a tablespace. Auto-allocate technology also makes manipulating data within the database more efficient. Auto-allocate lets Oracle select the optimal extent size and increase the extent sizes as the segments grow. Using auto-allocate reduces the overall number of extents and relieves the database administrator of the task of creating multiple Locally Managed Tablespaces (LMTs), each with a specified uniform extent size.


NOTE To obtain the best performance with the least amount of tablespace fragmentation, create Locally Managed Tablespaces using the appropriate uniform extent size based on the table row lengths. A brief example follows.
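The statements below are a sketch of the two LMT approaches discussed above; the tablespace names, file paths, and sizes are placeholders chosen only for illustration.

   -- Locally Managed Tablespace with a uniform extent size chosen to match the row lengths.
   CREATE TABLESPACE data_uniform
     DATAFILE '/u02/oradata/orcl/data_uniform01.dbf' SIZE 4G
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
     SEGMENT SPACE MANAGEMENT AUTO;

   -- Locally Managed Tablespace that lets Oracle choose the extent sizes (auto-allocate).
   CREATE TABLESPACE data_auto
     DATAFILE '/u03/oradata/orcl/data_auto01.dbf' SIZE 4G
     EXTENT MANAGEMENT LOCAL AUTOALLOCATE
     SEGMENT SPACE MANAGEMENT AUTO;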

How Table Fragmentation Affects Performance

Table fragmentation is different from tablespace fragmentation. Fragmentation within a table can cause a dramatic decrease in performance.

For example, when performing a full table scan on a table, Oracle reads each block below the table’s high-water mark. If a program deletes all of the rows in a table (rather than using the TRUNCATE command), the high-water mark is not reset, and the storage used by the table does not shrink.

To demonstrate this issue, perform the following steps.

1 Create a table named “Table A” 2 GB in size using 8-KB blocks and fill the table completely with data.

After the load, the high-water mark is established at the end of many thousands of blocks.

2 Run a select count(*) command from Table A and write down the result.

3 Delete all of the rows from the table.

4 Run the select count command one more time.

This query takes as long as the query against the fully populated table did, even though the table now contains 0 rows.

Because Oracle reads all of the blocks up to the high-water mark, Oracle reads those blocks even if those blocks do not contain data. If you want to remove all of the data in a table, use the TRUNCATE command to reset the high-water mark to the beginning of the table.

Over time, applications might remove specific blocks of data within the table. If a large number of empty blocks exist in a table, the best practice is to reorganize the table. One of the best ways to reorganize a table is to use the MOVE option of the ALTER TABLE command. This option reads the table and writes it back, with no empty blocks, to the same or another specified tablespace, as shown in the sketch below. For more information about the MOVE option, refer to Oracle’s web site.
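The following sketch illustrates both techniques; the table, index, and tablespace names are hypothetical. Note that moving a table changes its rowids, so its indexes must be rebuilt afterward.

   -- Remove all rows and reset the high-water mark in one operation.
   TRUNCATE TABLE table_a;

   -- Reorganize a sparsely populated table by rewriting it without the empty blocks,
   -- either in place or into another tablespace.
   ALTER TABLE table_a MOVE TABLESPACE users;

   -- Rebuild any indexes on the moved table, because the move invalidates them.
   ALTER INDEX table_a_pk REBUILD;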

Tools Used in the Storage Configuration Process

Oracle provides two tools that aid in determining the best storage configuration for the database and the application. The tools included in Oracle Database 11g are ORION and I/O Calibration.


Using ORION for Storage Configuration

ORION is a standalone tool that does not require you to create a database. Use ORION to understand the performance capabilities of the storage subsystem for specific RAID and LUN configurations at the storage level. With ORION, you can easily reconfigure the RAID levels and LUNs and then run a specific set of tests to see the performance these configurations provide in I/O operations per second (IOPS), megabytes per second (MBPS), and latency for a specific database configuration.

For example, if the database you want to design is used for DSS, you can create a set of ORION tests that stress the storage to find the maximum MBPS throughput. You can test all of the different RAID configurations, along with striping and segment sizes, until you find the best possible setup. For more information on ORION, refer to Oracle’s documentation.

A sample of the output from an ORION test is shown below. The graphs were created in Excel from the output that ORION produced.

The ORION executable was run with the following options.

-run advanced -testname mytest -num_disks 160 -size_small 8 -size_large 1024 -type rand -simulate raid0 -write 100 -duration 120 -matrix basic

Test: mytest
Small I/O size: 8 KB
Large I/O size: 1024 KB
I/O Types: Small Random I/Os, Large Random I/Os
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 100%
Cache Size: Not Entered
Duration for each Data Point: 120 seconds
Small Columns:, 0
Large Columns:, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 52, 78, 104, 130, 156, 182, 208, 234, 260, 286, 312
Total Data Points: 97

Name: \\.\e:  Size: 3569717760
Name: \\.\f:  Size: 3569717760
Name: \\.\g:  Size: 3569717760
Name: \\.\h:  Size: 3569717760
Name: \\.\i:  Size: 3569717760
Name: \\.\j:  Size: 3569717760
Name: \\.\k:  Size: 3569717760
Name: \\.\l:  Size: 3569717760
Name: \\.\m:  Size: 3569717760


Name: \\.\n:  Size: 3569717760
Name: \\.\o:  Size: 3569717760
Name: \\.\p:  Size: 3569717760
Name: \\.\q:  Size: 3569717760
Name: \\.\r:  Size: 3569717760
Name: \\.\s:  Size: 3569717760
Name: \\.\t:  Size: 3569717760
16 FILEs found.

Maximum Large MBPS=291.06 @ Small=0 and Large=312
Maximum Small IOPS=16147 @ Small=800 and Large=0
Minimum Small Latency=0.24 @ Small=1 and Large=0

Figure 16 ORION Advanced All Writes for RAID 10 with 16 LUNS in a 5+5 Configuration and 512 KB Segments


Figure 17 ORION Advanced All Writes for RAID 10 with 16 LUNS in a 5+5 Configuration and 512 KB Segments


Figure 18 ORION Advanced All Writes Test for RAID 10 with 16 LUNS in a 5+5 Configuration and 512 KB Segments

Using I/O Calibration

I/O Calibration is a stored procedure inside the Oracle 11g database that shows the I/O capabilities of the storage subsystem with data going through the database. Using I/O Calibration lets you do the same type of tests as ORION, such as reconfiguring the storage RAID and LUNs to get the best possible performance; however, the database performs the actual I/O calls. Because I/O Calibration uses the database, the statistics are more accurate, and these tests can take advantage of Oracle features like RAC and ASM. For more information on I/O Calibration, refer to Oracle’s documentation.

The following is an example of the script that was run for I/O calibration on 12 disks:

SQL> SET SERVEROUTPUT ON
SQL> DECLARE
       lat   INTEGER;
       iops  INTEGER;
       mbps  INTEGER;
     BEGIN
       -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (<DISKS>, <MAX_LATENCY>, iops, mbps, lat);
       DBMS_RESOURCE_MANAGER.CALIBRATE_IO (12, 10, iops, mbps, lat);
       DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
       DBMS_OUTPUT.PUT_LINE ('latency = ' || lat);
       DBMS_OUTPUT.PUT_LINE ('max_mbps = ' || mbps);
     END;
     /

max_iops = 392
latency = 9
max_mbps = 53

Other Options for Database Tuning

Many areas affect how the database handles I/O. Monitor and tune these areas on a regular basis:

• If the application is not tuned, you might find tuning the database very difficult. Tune the application first before investigating any possible tuning efforts inside the database.

• If hardware resources are not adequate, database tuning is affected. Hardware resources include the following items.

■ The amount of memory allocated to the buffer cache dramatically affects how much physical I/O the database can perform.

■ The amount of CPU available affects how well the database can perform. Is there enough CPU capability on the server to run the database without exhausting all CPU resources? Before using any parallel processing, validate CPU usage over the last several days or weeks to determine usage patterns.

• Are the disk drive service times within reason? Is the storage subsystem saturated from outside applications? If so, isolate the database storage on a storage subsystem that has available resources.

• Are the LUNs created correctly for the database you are building? Determine which RAID level to use with the correct configuration needed to satisfy the IOPS or MBPS requirements of the application. For example, if the application is an OLTP type of system, the RAID 5 write penalty might impact performance.

• Are there enough disk drive spindles to meet the IOPS or MBPS requirements? If the storage subsystem has hit the physical limits (allocated disk drive spindles, HBA cards, controllers), I/O tuning within the database does not improve the performance.

• Do the storage parameters, along with the operating system I/O parameters, match the parameters set in the database? For example, the value of block size * db_file_multiblock_read_count must match the maximum amount of I/O that the operating system or volume manager can perform in one request. This consideration applies especially to earlier Oracle releases (see the example after this list).
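As a rough illustration of that last point, the statements below check and adjust the multiblock read count; the value 128 (which with an 8-KB block size yields 1-MB reads) is only an example and must be matched to what the operating system or volume manager can actually issue.

   -- Show the current settings.
   SQL> SHOW PARAMETER db_block_size
   SQL> SHOW PARAMETER db_file_multiblock_read_count

   -- With an 8-KB block size, 128 blocks per read corresponds to 1-MB physical reads.
   ALTER SYSTEM SET db_file_multiblock_read_count = 128 SCOPE = BOTH;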


Other Ways to Improve I/O Efficiency at the Database Level

• Use ASM. ASM implements striping of data over all of the allocated disk drives presented as disk groups, and it can rebalance I/O when you add or delete disk drives.

• Consider using multiple block sizes for different tablespaces. The application might dictate a small block size for the data tablespaces, but the index tablespaces might benefit from larger block sizes.

• Implement partitioning (license required). Depending on how the data is accessed, you can increase performance by scanning a smaller subset of the data or index instead of querying the whole table. (A sketch of the last two techniques follows.)
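The statements below are a minimal sketch of a non-standard block size and a range-partitioned table, assuming a default 8-KB database block size; the cache size, file path, tablespace, and table definitions are illustrative placeholders.

   -- Configure a buffer cache for 16-KB blocks, then create an index tablespace that uses them.
   ALTER SYSTEM SET db_16k_cache_size = 256M SCOPE = BOTH;

   CREATE TABLESPACE idx_16k
     DATAFILE '/u04/oradata/orcl/idx_16k01.dbf' SIZE 4G
     BLOCKSIZE 16K
     EXTENT MANAGEMENT LOCAL;

   -- Range-partition a table so that queries touching recent data scan only one partition.
   CREATE TABLE orders
   ( order_id   NUMBER,
     order_date DATE,
     amount     NUMBER )
   PARTITION BY RANGE (order_date)
   ( PARTITION p_2008 VALUES LESS THAN (TO_DATE('01-01-2009','DD-MM-YYYY')),
     PARTITION p_max  VALUES LESS THAN (MAXVALUE) );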

Example of an I/O Tuning Exercise

For a test using an AIX server, a tablespace was created with one datafile (non-ASM) on one physical disk drive (hdisk10), and a large table was loaded into it. While a query ran against this table, IOSTAT was collecting statistics.

p6blade1:/# iostat 5

tty:  tin    tout    avg-cpu:  % user  % sys  % idle  % iowait
      0.0    241.6               6.0    26.4   64.7     2.9

Disks:     % tm_act      Kbps      tps    Kb_read   Kb_wrtn
hdisk6        0.0          1.6      0.4        8         0
hdisk4        0.0          1.6      0.4        8         0
hdisk7        0.0         14.3      1.2       40        32
hdisk8        0.0          3.2      0.8        8         8
hdisk9        0.0          7.9      0.8        8        32
hdisk5        0.0          1.6      0.4        8         0
hdisk1        0.4          4.8      1.2        4        20
hdisk2        0.2          3.2      0.8        8         8
hdisk3        0.0          1.6      0.4        8         0
hdisk0        1.6         20.7      4.4       20        84
hdisk13       0.0          0.8      0.2        0         4
hdisk10      50.9     174439.7     67.6   877864         4
hdisk15       0.0          0.8      0.2        0         4
hdisk14       0.0          0.8      0.2        0         4
hdisk11       0.0          0.8      0.2        0         4
hdisk12       0.2          0.8      0.2        0         4

This IOSTAT report shows that all of the I/O is concentrated on one disk drive (hdisk10) while the other attached disk drives are nearly idle.

To show how to stripe across more disk drives, the tablespace was dropped and recreated using disk drives: hdisk11, hdisk12, hdisk13, hdisk14, and hdisk15. After re-creating the same table as in the test above, the same query was started while running IOSTAT.


p6blade1:/# iostat 5

tty:  tin    tout    avg-cpu:  % user  % sys  % idle  % iowait
      0.0    244.7              15.6    47.5   33.5     3.4

Disks:     % tm_act      Kbps      tps    Kb_read   Kb_wrtn
hdisk6        0.0          0.8      0.2        4         0
hdisk4        0.0          0.8      0.2        4         0
hdisk7        0.0         32.8      2.2      132        32
hdisk8        0.2          1.6      0.4        4         4
hdisk9        0.0         26.4      1.8      100        32
hdisk5        0.0          0.8      0.2        4         0
hdisk1        0.0          0.0      0.0        0         0
hdisk2        0.0          1.6      0.4        4         4
hdisk3        0.2         58.4      3.8      292         0
hdisk0        1.6         20.0      3.2       44        56
hdisk13      30.4      79952.8    352.9   399664         0
hdisk10       0.0          0.0      0.0        0         0
hdisk15      36.2      79822.4    349.3   399012         0
hdisk14      25.2      79896.8    341.5   399384         0
hdisk11      35.0      79802.4    325.5   398912         0
hdisk12      29.8      79951.2    323.3   399656         0

Distributing the I/O produced the following differences.

• More system resources are used (user and system CPU time), and there is less idle CPU time.

• The I/O is evenly distributed between the disk drives.

• Instead of one disk drive being 50 percent busy, five disk drives are now 25 percent to 30 percent busy.

• Throughput has dramatically improved.

To stripe data over multiple disk drives when creating a tablespace, specify multiple datafiles on separate disk drives; Oracle then allocates extents to the datafiles in a round-robin fashion (see the example at the end of this section).

For this test, each disk drive was presented by itself, not part of a multi-disk drive LUN. For larger databases, you can create LUNs using a RAID configuration containing many disk drives in the configuration that is best for your needs. A datafile is placed on each of these LUNs for greater striping capability.
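The statement below is a sketch of that round-robin approach; the file system paths (one for each disk drive or LUN) and the sizes are placeholders.

   CREATE TABLESPACE stripe_ts
     DATAFILE '/ora11/oradata/stripe_ts01.dbf' SIZE 8G,
              '/ora12/oradata/stripe_ts02.dbf' SIZE 8G,
              '/ora13/oradata/stripe_ts03.dbf' SIZE 8G,
              '/ora14/oradata/stripe_ts04.dbf' SIZE 8G,
              '/ora15/oradata/stripe_ts05.dbf' SIZE 8G
     EXTENT MANAGEMENT LOCAL UNIFORM SIZE 8M;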

Using Performance Tools

Performance tuning depends on measurement. Fortunately, many software measurement tools are available. DS Storage Manager Performance Monitor comes with IBM System Storage DS Storage Manager. Many third-party tools are also readily available.


Using DS Storage Manager Performance Monitor

IBM System Storage DS Storage Manager provides an integrated Performance Monitor that reports the following statistics for each logical drive in the storage subsystem.

Table 11  Performance Monitoring Statistics

Statistic              Description
Total I/Os             Subsequent to the start of this monitoring session
Read Percentage        Percent of Read I/Os
Cache Hit Percentage   Percent of Reads satisfied from cache
Current KB/sec         Subsequent to the last polling interval or requested update
Max. KB/sec            Highest value subsequent to the last start
Current I/O/sec        Subsequent to the last polling interval or requested update
Max. I/O/sec           Highest value subsequent to the last start

This convenient tool adds the storage subsystem view of performance to those provided by other host-based or fabric-based monitoring tools. For detailed usage information about Performance Monitor, see the IBM System Storage DS Storage Manager online help.

Obtaining Additional Performance Tools

Table 12 shows a number of widely available tools, benchmarks, and utilities. Some of these are produced by non-profit organizations and are free.

Table 12  Performance Tools

Name          Description                                                               Available From
SPC-1, SPC-2  Storage Performance Council benchmarks                                    http://www.storageperformance.org
IOBench       I/O throughput and fixed workload benchmark                               http://portal.acm.org/citation.cfm?id=71309
IOmeter       I/O storage subsystem measurement and characterization tool               http://www.iometer.org
IOzone        File system benchmark tool                                                http://www.iozone.org
lmdd          Disk drive dump utility for “raw” devices from the lmbench suite
sar           Unix/Linux system activity report command with numerous options
xdd           Tool for measuring and characterizing disk drive storage subsystem I/O    http://www.ioperformance.com


Getting Optimal Performance from Premium Features

Use the FlashCopy, VolumeCopy, and Enhanced Remote Mirroring premium features to improve the performance of your storage subsystem.

Getting Optimal Performance from FlashCopy

For optimal performance when using the FlashCopy premium feature, observe the following guidelines:

• Locate repository logical drives on the same array as the base logical drive to minimize the copy-on-write penalty.

• Try to schedule Read I/Os to the FlashCopy logical drive at off-peak times when I/O activity on the source LUN is lower.

Getting Optimal Performance from VolumeCopy

The VolumeCopy premium feature uses optimized large blocks to complete the copy as quickly as possible. Thus, VolumeCopy requires little tuning other than setting the copy priority to the highest level that still permits acceptable host I/O performance. VolumeCopy performance is affected by other controller activity and by the RAID level and logical drive parameters of the source and target logical drives. A best practice for using VolumeCopy is to disable all FlashCopy logical drives associated with a base logical drive before selecting the base logical drive as a VolumeCopy target logical drive.

For more information about VolumeCopy, see the IBM System Storage DS Storage Manager online help or refer to the IBM System Storage DS Storage Manager VolumeCopy – Feature Guide for 9.x.

Getting Optimal Performance from Enhanced Remote Mirroring

For optimal performance when using the Enhanced Remote Mirroring premium feature, observe the following guidelines:



• Upgrade both storage subsystems to the latest firmware levels available.

• Locate repository logical drives on RAID 1 logical drives separated from production logical drives to isolate writes and help optimize performance.

• In general, use at least as many disk drives in the target arrays as are in the source arrays.

• Larger segment sizes on both the source and target LUNs generally improve the performance of the Enhanced Remote Mirroring premium feature.

• For the target LUN, set up Write Caching, but do not set up Write Cache Mirroring.

• For the source LUN, set up Read Caching. Determine the write caching parameters for the source by the operational requirements, not by Enhanced Remote Mirroring.

• Use the highest priority level for synchronization for optimal Enhanced Remote Mirroring performance, assuming that the impact on host I/O performance is acceptable.

• On Brocade switches, set up the In Order Delivery option.

Conclusion

This document provides important general guidelines for obtaining optimum performance when using the DS4800 storage subsystem and the DS5000 storage subsystem with the Oracle Database.

To continue to improve performance, learn as much as possible about the requirements for your specific operating system and for the additional applications running in your Oracle Database environment.

If you have special problems that you are unable to solve, contact IBM or IBM resellers to obtain custom, on-site consulting. See “Contact Information” on page 64.

Contact Information

For more information, please visit the IBM web site at:

http://www.ibm.com


Appendix A

References

IBM, 2007. Tuning External Storage Subsystems

IBM, 2009. IBM System Storage DS4000 and Storage Manager v. 10.30 Redbook

http://www.redbooks.ibm.com/redbooks/pdfs/sg247010.pdf

Loaiza, Juan, undated. Optimal Storage Configuration Made Easy (presentation)

http://www.oracle.com/technology/deploy/availability/pdf/OOW2000_same_ppt.pdf

Microsoft, 2006. How to Align Exchange I/O with Storage Track Boundaries [Microsoft’s usage details on diskpar]

http://technet.microsoft.com/en-us/library/aa998219.aspx

Oracle, 2006. Optimal Flexible Architecture

http://download.oracle.com/docs/cd/B19306_01/install.102/b15704/app_ofa.htm

For further information about Enhanced Remote Mirroring, refer to the following IBM documents:

• Enhanced Remote Mirroring Service Planning and Delivery Guidebook for Version 9.19

• Enhanced Remote Mirroring Installation and Configuration Guidebook for Version 9.19

• Enhanced Remote Mirroring of an Oracle Database Stored on IBM DS4000 Storage Subsystems

• Enhanced Remote Mirroring of an Oracle Database Using Data Replicator Software


Appendix B

Test Configuration

All tests reported in this document were performed with the following logical drive configurations.

The IBM storage subsystem was a DS4800 with the following components:

Firmware: 06.19.15.00

NVSRAM: N6091-619834-401

The host system was an IBM System x 3665 with the following components:

Linux: 2.6.9-42.ELsmp #1 SMP

HBA: QLogic 2642

The IBM storage subsystem was a DS5000 with the following components:

Firmware: 07.30.16.00

NVSRAM: N7091-730800-005

Table B–1  Tested Logical Drive Configurations

Name     RAID Level   Geometry   Disk Drive Type   Logical Drive Size   Segment Size   Read Cache   Write Cache   Dynamic Prefetch
RD1_V1   RAID 1       4+4        73 GB/15K         179.167 GB           512 KB         ENABLED      ENABLED       DISABLED
RD1_V2   RAID 1       8+8        73 GB/15K         358.334 GB           512 KB         ENABLED      ENABLED       DISABLED
RD1_V3   RAID 1       4+4        73 GB/15K         179.167 GB           128 KB         ENABLED      ENABLED       ENABLED
RD5_V1   RAID 5       4+1        73 GB/15K         179.167 GB           512 KB         ENABLED      ENABLED       DISABLED
RD5_V2   RAID 5       8+1        73 GB/15K         358.334 GB           512 KB         ENABLED      ENABLED       DISABLED
RD5_V2   RAID 5       4+1        73 GB/15K         179.167 GB           128 KB         ENABLED      ENABLED       ENABLED


EMW version: 10.30.G0.00

AMW version: 10.30.G0.00

The host system was an IBM System x 3950 with the following components:

Linux: Red Hat Enterprise Linux 5.0

HBA: QLogic 2462


Appendix C

AVT Disable Script

/* Disable AVT in all the host regions */

show "Disabling AVT on Controller A...";set controller[a] HostNVSRAMByte[0x00, 0x24]=0x00; /* 0x01 is enable AVT */set controller[a] HostNVSRAMByte[0x01, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x02, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x03, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x04, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x05, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x06, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x07, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x08, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x09, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x0a, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x0b, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x0c, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x0d, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x0e, 0x24]=0x00; set controller[a] HostNVSRAMByte[0x0f, 0x24]=0x00; show "Complete";

show "Disabling AVT on Controller B...";set controller[b] HostNVSRAMByte[0x00, 0x24]=0x00; /* 0x01 is enable AVT */set controller[b] HostNVSRAMByte[0x01, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x02, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x03, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x04, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x05, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x06, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x07, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x08, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x09, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x0a, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x0b, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x0c, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x0d, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x0e, 0x24]=0x00; set controller[b] HostNVSRAMByte[0x0f, 0x24]=0x00;


show "Complete";

show "You must now reboot both controllers for these changes to take effect!";


Appendix D

DS5000 Storage Subsystem

This appendix describes the features, benefits, and components of the DS5000 storage subsystem and explains how the DS5000 storage subsystem improves on previous models.

Overview of the DS5000 Storage Subsystem

In application and functionality, the DS5000 storage subsystem is equivalent to the DS4000® enterprise-level series of storage subsystems1, but with higher performance and greater scalability.

Figure D–1 The DS5000 Storage Subsystem

Supporting Your Critical Functions

The DS5000 storage subsystem supports the following functions:

• Transactional applications, such as databases and online transaction processing (OLTP)

• Throughput-intensive applications, such as high-performance computing (HPC) and rich media

• Concurrent workloads for consolidation and virtualization

1. The following DS4000 storage subsystems are considered enterprise level: DS4700 Model 70, DS4700 Model 72, DS4800 Model 80, and DS4800 Models 82, 84, and 88.


Growing with Your Business

The flexibility of the DS5000 storage subsystem keeps pace with your growing company by adding or replacing host interfaces, increasing performance, and growing capacity. The DS5000 storage subsystem also lets you add cache and reconfigure the system on the fly.

Extending the Life of Your Storage Subsystem

The life of the DS5000 storage subsystem extends beyond the normal three to four years. The extended life delays or even eliminates the expense of migrating data to a new system. Thus, the DS5000 storage subsystem’s acquisition costs can be amortized over extended periods of time.

Securing Data

Your data is always available, and any data in cache is captured and safe in the event of a power outage. The DS5000 storage subsystem accomplishes data security with the following functions:

• Redundant components

• Automated path failover

• Extensive online configuration, reconfiguration, and maintenance capabilities

• Multiple replication options

• A persistent cache backup

Product Features of the DS5000 Storage Subsystem

The DS5000 storage subsystem is IBM’s highest-performing and most flexible system of this class to date, integrating the following state-of-the-art technology:

• Flexible host interfaces

• Next-generation XOR engines

• Massive controller bandwidth

• Multiple disk drive technologies

• Robust storage management


The DS5000 storage subsystem contains the following new features:

• Next-generation enterprise controller technology

• Field-replaceable host interface cards (HICs) – Two for each controller

• Sixteen 4-Gb/s Fibre Channel (FC) disk drive interfaces, which support up to 448 FC or Serial Advanced Technology Attachment (SATA) disk drives

• Up to 32 GB of dedicated data cache (16 GB for each controller)

■ Dedicated cache mirroring channels

■ Persistent cache backup in the event of a power outage

■ Field-upgradeable

• Support for the following RAID levels—RAID 0, RAID 1, RAID 3, RAID 5, RAID 6, and RAID 10

• Two performance levels (base and high) with the ability to upgrade in the field

Releases of the DS5000 Storage Subsystem

Table D–1 shows the three releases of the DS5000 storage subsystem and the features available with each release.

Table D–1  Phased Feature Release

Initial release
  • 4-Gb/s FC HICs
  • 8 GB or 16 GB of data cache (4 GB or 8 GB for each controller)
  • 256 FC or SATA disk drives in the EXP810 drive expansion enclosure
  • Premium features: Storage Partitioning, FlashCopy®, VolumeCopy, and Enhanced Remote Mirroring
  • Full compatibility list
  • Two performance levels (base and high) with ability to upgrade

Second release
  • 8-Gb/s FC HIC
  • 448 FC or SATA disk drives in the EXP810 drive expansion enclosure

Third release
  • 10-Gb/s iSCSI HIC
  • Intermixing of FC and iSCSI HICs
  • 32 GB of data cache (16 GB for each controller)

Benefits of the DS5000 Storage Subsystem

Table D–2 describes the features and the benefits available in the DS5000 storage subsystem.

Table D–2  Features and Benefits of the DS5000 Storage Subsystem

Flexible, replaceable host interfaces (4-Gb/s FC initially)
  • Field-replaceable when the customer’s infrastructure changes
  • Ability to plan for the future while leveraging current investments
  • Provide unique investment protection and life-cycle longevity

Performance
  • Supports demanding service-level agreements (SLAs), maintaining SLAs through growth
  • Well-suited for environments with concurrent workloads, such as consolidation and virtualization

Linearly scalable I/O per second (IOPS) performance
  • Maintains SLAs through growth
  • Increases overall IOPS performance with each new disk drive

Balanced performance
  • Adept at both IOPS and MB/s
  • Supports applications with wide-ranging performance requirements and demanding SLAs
  • Well-suited for data warehousing, consolidation, and virtualization environments that have diverse workloads and application requirements
  • Concurrently supports transactional applications, such as databases and OLTP, and throughput-intensive applications, such as HPC and rich media

Custom XOR engine for RAID parity calculations
  • Efficiently handles compute-intensive parity calculations, which guarantees exceptional disk-drive-based performance that is ideally suited for RAID 5 and RAID 6 configurations

Support for multiple RAID levels, including RAID 6
  • Supports high availability and security for mission-critical data
  • Configures the system to address varying service levels

Redundant, hot-swappable components
  • Maintains data availability by permitting components to be replaced without stopping I/O


Up to 448 FC or SATA disk drives (256 initially)
  • Supports demanding capacity requirements
  • Intermixes FC and SATA disk drives
  • Sets up tiered storage in a single system
  • You can allocate FC disk drives to applications that demand high performance and have high I/O rates
  • You can allocate less-expensive SATA disk drives to applications that require less performance

DS Storage Manager
  • Sets up maximum use and uninterrupted data availability
  • Supports custom LUN tuning to make sure of maximum performance or utilization
  • Centrally manages all of the local and networked DS Storage Manager software-based systems
  • Quickly configures and monitors storage subsystems from a centralized interface
  • Configures logical drives, performs routine maintenance, and adds new enclosures and capacity without interrupting access to data

Dynamic expansion capabilities
  • Brings unused storage online for a new host group or an existing logical drive to provide additional capacity on demand
  • Eliminates application interruptions because of growth, reconfigurations, or tuning

Up to 512 partitions
  • Partitions effectively support large-scale consolidation or virtualization environments, which helps to reduce hardware costs and storage-management costs

Fully integrated replication features
  • Uses multiple options to let administrators best fit their replication needs
  • Uses local copies or remote copies for file restoration, backups, application testing, data mining, or disaster recovery

Support for heterogeneous, open operating systems
  • Supports the Microsoft Windows, UNIX, and Linux operating systems so that the DS5000 storage subsystem can operate in any open-system environment



Comparing the DS5000 Storage Subsystem to the DS4000 Series Storage Subsystem

Table D–3 shows the improved features of the DS5000 controller module compared to the features of the DS4000 series controller module.

Table D–4 on page D-7 shows the improved performance of the DS5000 controller module compared to the performance of the DS4000 series controller module.

Table D–3  Comparing Controller Module Features
(Dual-controller system unless noted; the last column shows the improvement of the DS5000 controller module over the DS4000 series controller module.)

Feature                                             DS5000 Controller Module   DS4000 Series Controller Module   Improvement
Host channels (initial release)                     16 FC                      8 FC                              2X
Redundant disk drive channels                       Sixteen 4 Gb/s             Eight 4 Gb/s                      2X
Maximum number of disk drives (initial release)     256 FC or SATA             224 FC or SATA                    1.14X
Processor                                           Intel Xeon 2.8 GHz         Intel Xeon 2.4 GHz                —
Processor memory (single controller)                2 GB                       Up to 1 GB                        2X
XOR technology                                      Dedicated ASIC             Dedicated ASIC                    —
Bus technology                                      PCI-Express                PCI-X                             2X
Internal controller bandwidth (single controller)   4 GB/s                     1 GB/s                            4X
Data cache (minimum/maximum) (initial release)      8 GB/16 GB                 2 GB/16 GB                        —
Cache hold-up                                       Permanent                  Battery backed                    —
Cache mirroring                                     2 dedicated buses          Back-end loops                    —
Cache bandwidth (single controller)                 17 GB/s                    3.2 GB/s                          5X


NOTE The performance numbers for the controller module are preliminary estimates. Actual results based on testing are to be determined. All performance trials were run using RAID5 with EXP810 drive expansion enclosures.

Table D–4  Comparing Controller Module Performance
(The last column shows the improvement of the DS5000 controller module, with 256 disk drives, over the DS4000 series controller module.)

Performance Measure                             DS5000 Controller Module   DS4000 Series Controller Module   Improvement
Burst I/O rate, cache reads                     ~700,000 IOPS              575,000 IOPS                      1.6X
Sustained I/O rate, disk drive reads            ~98,000 IOPS               86,000 IOPS                       —
Sustained I/O rate, disk drive writes           ~25,000 IOPS               22,000 IOPS                       —
Burst throughput, cache read                    ~6,400 MB/s                1,700 MB/s                        3.8X
Sustained throughput, disk drive read           ~6,400 MB/s                1,600 MB/s                        4X
Sustained throughput, disk drive write (CMD)    ~5,200 MB/s                1,300 MB/s                        4X
Sustained throughput, disk drive write (CME)    ~5,000 MB/s FSW            ~1,200 MB/s FSW                   4X
                                                ~3,500 MB/s                ~450 MB/s                         7.8X


Hardware Components

The DS5000 storage subsystem continues the IBM storage heritage of a modular, building-block design that provides lower acquisition and expansion costs while maintaining maximum flexibility. With two primary components, controller modules and drive expansion enclosures, configurations can be built to meet specific requirements.

DS5000 Controller Module

Figure D–2  DS5000 Controller Module

The DS5000 controller module contains dual-active, intelligent RAID controllers. The DS5000 controller module provides host and network connectivity and supports multiple drive expansion enclosures. The initial release of the DS5000 controller module provides the following features:

• Contains dual-active, hot-swappable controllers

■ Each controller has two field-replaceable HICs. The initial DS5000 controller module release supports 4-Gb/s FC HICs (16 total ports) capable of 4-Gb/s, 2-Gb/s, or 1-Gb/s link speeds for SAN or direct host connections.

■ Each controller has sixteen 4-Gb/s FC disk drive ports with support for up to 448 FC or SATA disk drives.

• Supports up to 256 disk drives

• Supports the EXP810 drive expansion enclosure

• Intermixes FC and SATA disk drives

■ Each controller has dedicated data cache (DDR2 SDRAM) with dedicated channels for cache mirroring and persistent cache backup in the event of a power outage.


■ Each controller has four Ethernet ports.

- Two ports for remote management over the network, one for each controller

- Two ports for authorized service personnel troubleshooting and diagnostics, one for each controller

• Supports an interconnect enclosure that provides internal communication between controllers and contains two battery customer-replaceable units (CRUs)

• Provides a controller support enclosure that contains power supplies, redundant cooling fans, and battery charger

• Includes hot-swappable CRUs for all primary components—two controllers, two customer support enclosures, and one interconnect—and can be easily accessed and removed or replaced

EXP810 Drive Expansion Enclosure

Figure D–3  The EXP810 Drive Expansion Enclosure

The EXP810 drive expansion enclosure is more than “just-a-bunch-of-disks.” The EXP810 drive expansion enclosure optimizes performance, availability, and serviceability with the following design:

• It contains 4-Gb/s FC interfaces for connectivity.

• It includes up to 16 dual-ported FC or SATA disk drives.

• It provides an environmental services monitor (ESM)-imbedded loop switch.

• It contains redundant 4-Gb/s FC disk drive loops to make sure that complete accessibility to all disk drives is available in the event of a loop or cable failure.


• It includes redundant power supplies, cooling fans, and ESMs.

• It provides hot-swappable CRUs for all primary components that can be easily accessed and removed or replaced.

DS Storage Manager

DS Storage Manager manages the DS5000 storage subsystem and provides administrators with a powerful, easy-to-use management interface.

Functions

With DS Storage Manager, you can perform the following administrative tasks with no system downtime and no interruption to system I/O.

• Configuration

• Re-configuration

• Expansion

• Maintenance

• Performance tuning

The DS Storage Manager software’s configuration flexibility includes the ability to mix the following capabilities all within a single storage subsystem:

• Disk drive technologies

• RAID levels

• Segment sizes

• Array sizes

• Logical drive characteristics

• Cache policies

Premium Features

The premium features available with DS Storage Manager extend its functionality to allow for even more powerful storage.

• Storage Partitioning – Lets a single storage subsystem function as multiple, logical storage subsystems. This function provides storage consolidation in heterogeneous environments.

• FlashCopy – Instantaneously creates capacity-efficient, point-in-time logical drive images, which provides a logical drive for such uses as file restoration and backup.


• VolumeCopy – Creates a complete physical copy of a logical drive within a storage subsystem. This unique entity can be assigned to any host and used for application testing or development, information analysis, or data mining.

• Enhanced Remote Mirroring – Sets up continuous data replication from one system to another to guarantee data protection.

Software Specifications of DS Storage Manager

Table D–5  Software Specifications of DS Storage Manager

Component                                                     Specification
Firmware release                                              7.30
Client software release                                       DS Storage Manager 10.30
Maximum logical drives supported                              2,048
Maximum logical drive size supported                          14 exabytes (EB)
Maximum host ports and logins supported                       2,048
Maximum array size                                            RAID 6, RAID 5, and RAID 3 – 30 disk drives
                                                              RAID 10 and RAID 1 – 256 disk drives (full configuration)
Unlimited global hot spares                                   Yes
Maximum storage partitions (levels: 2, 4, 8, 16, 32, 64, 96, 128, 256, 512)   512
Maximum logical drives for each partition                     256
Maximum flashcopies for each base logical drive               16
Total flashcopies                                             1,024
Maximum concurrent Enhanced Remote Mirroring copy processes   8
Maximum mirrored pairs                                        128


Supported Operating Systems

Table D–6  Supported Operating Systems Used by DS Storage Manager

Operating System                 Version Supported
Windows Server 2008              Standard Server, Enterprise Server, Web, and Core editions
Windows Vista                    GUI client only
Windows Server 2003 with SP2     Standard Server and Enterprise Server editions
Windows XP Professional          GUI client only
Red Hat Enterprise Linux         V4 update 6, V5 update 1
SUSE Linux Enterprise Server     V9 SP4, V10 SP1
VMware ESX                       2.5.3, 3.0.1
AIX®                             5.1, 5.2, 6.1
Novell NetWare                   6.5 SP7
HP-UX                            11.23, 11.31
Solaris                          8, 9, 10


Technical Specifications

Table D–7 through Table D–13 show the technical specifications of the DS5000 controller module and the EXP810 drive expansion enclosure.

Physical Characteristics

Table D–7  Physical Characteristics

Specification Characteristic   DS5000 Controller Module   EXP810 Drive Expansion Enclosure
Height                         17.45 cm (6.87 in.)        12.95 cm (5.1 in.)
Width                          44.45 cm (17.5 in.)        44.7 cm (17.6 in.)
Depth                          60.96 cm (24 in.)          57.15 cm (22.5 in.)
Weight (max)                   36.79 kg (81.5 lb)         42.18 kg (93 lb)

Operating Temperature

Table D–8  Operating Temperature (see note 1)

Specification Characteristic   DS5000 Controller Module             EXP810 Drive Expansion Enclosure
Operating range                10° C to 40° C (32° F to 104° F)     10° C to 40° C (50° F to 104° F)
Maximum rate of change         10° C (18° F) for each hour          10° C (18° F) for each hour
Storage range                  –10° C to 65° C (14° F to 149° F)    –10° C to 50° C (14° F to 122° F)
Maximum rate of change         15° C (27° F) for each hour          15° C (27° F) for each hour
Transit range                  –40° C to 65° C (–40° F to 149° F)   –40° C to 60° C (–40° F to 140° F)
Maximum rate of change         20° C (36° F) for each hour          20° C (36° F) for each hour

1. If you plan to operate a system at an altitude between 1000 m and 3048 m (3280 ft and 10,000 ft) above sea level, lower the environmental temperature 1.7° C (3.3° F) for every 1000 m (3280 ft) above sea level.

Power Input

Nominal voltage range 90 VAC to 264 VAC 90 VAC to 264 VAC

Frequency range 50 to 60 Hz 50 to 60 Hz

Max operating current   5.40 A at 100 VAC   3.90 A at 100 VAC

2.25 A at 240 VAC 2.06 A at 240 VAC

Typical current

Not available at the time of this document release

115 VAC, 60 Hz at 0.73 power supply efficiency and 0.96 power factor

Not available at the time of this document release

230 VAC, 60 Hz at 0.73 power supply efficiency and 0.96 power factor



Table D–9 Relative Humidity with No Condensation

Specification        DS5000 Controller Module    EXP810 Drive Expansion Enclosure
Operating range      20% to 80%                  20% to 80%
Storage range        10% to 93%                  10% to 90%
Transit range        5% to 95%                   5% to 95%
Maximum dew point    26°C (79°F)                 26°C (79°F)
Maximum gradient     10% for each hour           10% for each hour

Table D–10 Altitude Ranges

Specification    DS5000 Controller Module                                                   EXP810 Drive Expansion Enclosure
Operating        30.5 m (100 ft) below sea level to 3048 m (10,000 ft) above sea level      30.5 m (100 ft) below sea level to 3000 m (9842 ft) above sea level
Storage          30.5 m (100 ft) below sea level to 3048 m (10,000 ft) above sea level      30.5 m (100 ft) below sea level to 3000 m (9842 ft) above sea level
Transit          30.5 m (100 ft) below sea level to 12,000 m (40,000 ft) above sea level    30.5 m (100 ft) below sea level to 12,000 m (40,000 ft) above sea level

Table D–11 Heat Dissipation

Specification      DS5000 Controller Module    EXP810 Drive Expansion Enclosure
Btu/hr             1842                        1517
kVA                0.562                       0.462
Watts (AC)         540                         444
Amps (240 VAC)     2.25                        1.85

Note: The tabulated power and heat dissipation values are the maximum measured operating power.
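The columns of Table D–11 are tied together by ordinary unit conversions. As a rough cross-check (the kVA relationship below assumes the 0.96 power factor quoted in the power-input tables; small differences from the printed values are rounding):

    # Rough cross-check of Table D-11 using standard conversions.
    # Assumptions: 1 W = 3.412 Btu/hr, and apparent power (kVA) is derived
    # from real watts using the 0.96 power factor quoted for power input.

    POWER_FACTOR = 0.96

    def btu_per_hour(watts: float) -> float:
        return watts * 3.412

    def kva(watts: float, power_factor: float = POWER_FACTOR) -> float:
        return watts / (power_factor * 1000.0)

    for name, watts in (("DS5000 controller module", 540),
                        ("EXP810 drive expansion enclosure", 444)):
        print(f"{name}: {watts} W ~ {btu_per_hour(watts):.0f} Btu/hr, {kva(watts):.2f} kVA")

The results land close to the 1842/1517 Btu/hr and 0.562/0.462 kVA figures in the table.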



Hardware Architecture and Diagrams

The DS5000 controller has specialized processing elements to optimize processing power. Each processing element has its own memory; these independent memories reduce contention and let each element focus on its specific job. The high-speed XOR engine generates RAID parity with no performance penalty, so this compute-intensive task is handled efficiently. A separate processor focuses on data movement control, processing and dispatching setup and control instructions independently of the data path.

With its multiple-processor design and multiple high-speed buses, the DS5000 controller is equally adept at IOPS and throughput (MB/s). Its multiple 2-GB/s PCI Express x8 data buses between the host interface cards (HICs), the XOR engine, and the 4-Gb/s FC disk drive I/O chips provide the bandwidth to handle large-block I/O and the speed to process large volumes of random, small-block I/O from demanding applications.

Table D–12 Acoustic Noise

Specification     DS5000 Controller Module    EXP810 Drive Expansion Enclosure
Sound power       6.0 bels                    6.5 bels
Sound pressure    60 dBA                      65 dBA

Table D–13 Power Input

Specification            DS5000 Controller Module                         EXP810 Drive Expansion Enclosure
Nominal voltage range    90 VAC to 264 VAC                                90 VAC to 264 VAC
Frequency range          50 to 60 Hz                                      50 to 60 Hz
Max operating current    5.40 A at 100 VAC; 2.25 A at 240 VAC             3.90 A at 100 VAC; 2.06 A at 240 VAC
Typical current          4 A at 115 VAC, 60 Hz; 2 A at 230 VAC, 60 Hz     115 VAC / 230 VAC, 60 Hz at 0.73 power supply efficiency and 0.96 power factor
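To get a feel for what the typical-current figures imply, the short sketch below converts them to approximate real input power, assuming real power = volts × amps × power factor (the table lists only current, voltage, efficiency, and power factor, so this combination is an assumption):

    # Back-of-the-envelope real input power from the typical-current figures
    # in Table D-13. Assumption: real power = VAC x amps x power factor.

    POWER_FACTOR = 0.96

    def real_power_watts(volts: float, amps: float, pf: float = POWER_FACTOR) -> float:
        return volts * amps * pf

    print(f"115 VAC, 4 A typical: ~{real_power_watts(115, 4):.0f} W")
    print(f"230 VAC, 2 A typical: ~{real_power_watts(230, 2):.0f} W")

Both cases work out to roughly 440 W, comfortably below the 540 W maximum listed in Table D–11, as you would expect for a typical rather than maximum figure.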


Each DS5000 controller has two HICs, which are field-replaceable so the subsystem can adapt to evolving infrastructure requirements. The initial release supports 4-Gb/s FC HICs, each with four independent ports, for a total of 16 host ports per dual-controller storage subsystem; these ports support direct host attachment or SAN attachment.

On the disk drive side, each DS5000 controller has two quad-ported 4-Gb/s FC chips. Disk drive-side loop switches let each controller access all of the disk drive ports, so the controllers can reach the full 6,400 MB/s of back-end bandwidth while maintaining full redundancy.
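The 6,400-MB/s figure is simply the per-loop Fibre Channel data rate multiplied across the drive loops. A quick check, assuming the usual ~400 MB/s of payload per 4-Gb/s FC link after 8b/10b encoding:

    # Sanity check of the quoted 6,400-MB/s back-end bandwidth.
    # Assumption: a 4-Gb/s FC link carries about 400 MB/s of payload
    # (8b/10b encoding), and all 16 drive loops can be driven concurrently.

    MB_PER_S_PER_LOOP = 400
    DRIVE_LOOPS = 16

    print(f"Back-end bandwidth: {MB_PER_S_PER_LOOP * DRIVE_LOOPS} MB/s")  # 6400 MB/s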

The DS5000 controller offers two significant new features related to its dedicated data cache:

• Two high-speed cache mirroring channels between the controllers ensure maximum performance even when cache mirroring is enabled.

• The integrated flash memory provides persistent cache backup in the event of a power outage.

Figure D–1 Architecture of the DS5000 Controller Module


External Connections

Figure D–2 DS5000 Controller Module—Rear View

Figure D–2 shows that the DS5000 controller modules insert from the rear of the enclosure and support all external connections, including the host ports, disk drive ports, Ethernet ports, serial port, and power supply.

Disk Drive Channels and Loop Switches

Each DS5000 controller has two quad-ported 4-Gb/s FC interface chips on its disk drive side. Figure D–3 shows that one chip on each controller connects to a local integrated loop switch, while the other chip on each controller connects to a loop switch in the alternate controller through the interconnect module. This implementation lets each controller access all 16 disk drive loops for maximum performance and availability.


Figure D–3 Controller Access to All 16 Disk Drive Loops

Figure D–4 shows that when you attach drive expansion enclosures to the DS5000 controller module, you must use a disk drive loop from each controller. The loop switch implementation provides each controller with access to all 16 disk drive loops. In the event that a controller is lost, the surviving controller can reach each drive expansion enclosure.

Figure D–4 Adding a Disk Drive Loop from Each Controller


Cabling

Figure D–5, Figure D–6, and Figure D–7 show cabling for configurations of the DS5000 controller module using four, eight, and 16 drive expansion enclosures.

Figure D–5 Four Drive Expansion Enclosures Using Eight of the DS5000 Back-End Disk Drive Loops


Figure D–6 Eight Drive Expansion Enclosures with the Minimum Configuration to Use 16 of the DS5000 Back-End Disk Drive Loops


Figure D–7 Sixteen Drive Expansion Enclosures Using 16 of the DS5000 Back-End Disk Drive Loops
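As a rough way to picture how Figure D–5 through Figure D–7 spread enclosures across the back end, the sketch below assigns a given number of drive expansion enclosures round-robin to eight redundant loop pairs, each pair built from one drive port on controller A and the matching port on controller B. The port names and the round-robin scheme are illustrative assumptions only; follow the cabling figures and the installation guide for the supported layout.

    # Illustrative only: distribute N drive expansion enclosures across the
    # 8 redundant loop pairs of a DS5000 back end. Port labels here are
    # hypothetical, not the official port names; actual cabling is shown in
    # Figures D-5 through D-7.

    LOOP_PAIRS = [(f"A-drive-port-{i}", f"B-drive-port-{i}") for i in range(8)]

    def assign_enclosures(count: int) -> dict:
        """Round-robin the enclosures across loop pairs, daisy-chaining extras."""
        assignment = {pair: [] for pair in LOOP_PAIRS}
        for n in range(count):
            assignment[LOOP_PAIRS[n % len(LOOP_PAIRS)]].append(f"EXP810-{n + 1}")
        return assignment

    for enclosures in (4, 8, 16):
        used = sum(1 for chained in assign_enclosures(enclosures).values() if chained)
        print(f"{enclosures} enclosures -> {used} loop pairs ({used * 2} drive loops) in use")

This reproduces the pattern in the figures: four enclosures occupy eight drive loops, eight enclosures occupy all 16, and sixteen enclosures reuse all 16 loops with two enclosures daisy-chained per pair.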


Trademarks and special notices

© Copyright IBM Corporation 2009. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.
