
Best practices for Oracle 10g with Automatic Storage Management (ASM) and HP StorageWorks 8000 Enterprise Virtual Array (EVA8000) white paper

Environment: Oracle 10g with ASM on both a single instance database and Real Application Clusters (RAC) using HP ProLiant DL580 servers running Windows 2003 Enterprise Edition with HP StorageWorks EVA8000

Executive summary
Key findings
Overview
Components
Configuring the hardware
  Configuring the EVA8000 storage array
  Configuring the HP ProLiant DL580 server
Configuring the software
  Configuring the MPIO driver
  Configuring Windows for ASM
  Using 32-bit Windows
  Modifying ASM hidden parameters
  Working with Benchmark Factory
ASM performance results
  Batch test results
  OLTP test results
  DSS test results
Conclusions
  Summary of results
  Project learnings
Best practices
  Best practices for Storage Administrators
  Best practices for Server Administrators
  Best practices for Database Administrators
Appendix A. Bill of Materials
Appendix B. Examples
For more information
  HP technical references
  HP solutions and training
  HP product sites
  Oracle
  Quest Software


Executive summary

Improve profitability, manage risk, and lower TCO with HP StorageWorks Application-Based Solutions. In this project, the HP StorageWorks Customer Focused Test team developed best practices and determined the optimal configuration for an Oracle® 10g database with Automatic Storage Management (ASM) on the HP StorageWorks 8000 Enterprise Virtual Array (EVA8000). A special look into the ASM disk group and EVA disk group configurations is the critical emphasis of this project. Results of this testing include the optimal EVA8000 configuration with respect to disk groups, database file layout, and mapping to corresponding ASM disk groups.

The following questions were specifically addressed in this project:

•  What is the best method to map Oracle/ASM data extents to EVA disk groups?

•  Is there an advantage to “double-striping,” in which the hardware and software stripe the data across the same physical disks?

•  What is the recommended configuration for an Oracle/ASM database on the EVA?

•  Does this recommendation change based on workload type, that is, batch, Online Transaction Processing (OLTP), and Decision Support Systems (DSS)?

•  Does this recommendation change based on single instance database or Real Application Clusters (RAC)?

Key findings

Testing successfully provided the following high-level results:

•  ASM striping can improve database performance in conjunction with EVA virtualization.

•  The EVA configuration was not significant for batch loading of 500 GB; all configurations performed equally well.

•  The EVA configuration for OLTP testing shows an advantage for a configuration that uses both ASM and EVA striping (double-striping). The advantages were more pronounced as the database size increased (2.5-TB RAC database with 1,000 concurrent users).

•  The greatest benefit from double-striping was most noticeable with DSS workloads. Obvious differences between the configurations were observed, especially with the 2.5-TB RAC database with nine concurrent users running a portion of the DSS power test.

•  Even greater performance on DSS workloads can be achieved by modifying one of the ASM hidden parameters, allocation unit size, to maximize data placement on the EVA.

Important findings uncovered during the tests are documented in the Best practices section.

Overview

The main purpose of the project was to test various EVA disk group configurations and determine the best configuration when running an Oracle 10g database with ASM. The introduction of Oracle ASM capabilities in Oracle 10g provided alternative storage management options to those available in the storage hardware alone. ASM provides some functionality similar to that of the HP StorageWorks EVA. Customers want to understand where the functionality overlaps, whether the two can co-exist, and if so, what the best way is to configure HP StorageWorks EVA storage to optimize application performance under various database workloads.


Several test scenarios were evaluated in regards to their impact on database performance:

•  EVA configurations—Three EVA configurations were tested to see how Oracle performed during the benchmark testing.

•  Benchmarks—Three industry-standard benchmarks (batch, OLTP, and DSS) were run against the EVA configurations. The intent was to see if the best practices changed with differing workloads, that is, did a particular EVA configuration perform better with a DSS workload than with an OLTP workload?

•  Database type—Both single instance and RAC databases were tested to see if clustering had any effect on the overall best practice.

Components

To run these tests, HP configured the systems illustrated in Figure 1 and Figure 2. These environments were based on input from customers and are representative of typical Oracle database environments. The key components include the following:

•  Oracle 10g—Quest Software’s Benchmark Factory for Databases was used to generate standard Oracle workloads against the Oracle database in a variety of workload types (batch, OLTP, and DSS).

•  HP ProLiant DL580 server—This server was used to host the Oracle database. Two configurations were used—a single instance database (Figure 1) and a two-node RAC instance (Figure 2).

•  HP ProLiant BL20p server—This server was used to host the necessary storage management software (HP StorageWorks Command View EVA) for monitoring and configuring the storage array. The storage performance metrics from EVAPerf were also managed from this server.

•  EVA8000—SAN-based storage array that stored the Oracle database, logs, and so on.

•  Microsoft® Windows® 2003 Enterprise Edition operating system—The database servers, benchmark servers, and storage management server all ran on Windows 2003 EE.


 Figure 1. Environment for the Single Instance database


 Figure 2. Environment for the RAC database

Configuring the hardware

For the complete list of all hardware and software, see Appendix A. Bill of Materials. 

HP constructed three different disk layouts for the EVA configurations (Figure 3). These configurations meet both HP and Oracle best practices of having two separate disk groups: one for online/system data and the other for backup/recovery data. This best practice is not for performance purposes but to protect recovery efforts by separating these files. By no means are these meant to be the only possible configurations; they are merely a guideline for best practices. Database Administrators (DBAs) can modify these configurations to suit their particular environments.

•  Configuration #1—Both ASM and EVA striping (“double-striping”) across the same physical disks

This configuration highlights the concept of double-striping across the same physical disks using ASM software-based striping and EVA hardware-based striping algorithms. This is the expected optimal-performing configuration.

•  Configuration #2—Single-striping (EVA striping only)

In this scenario, the virtual disk is subject to “single-striping,” performed only by the EVA. Because only one LUN is presented to the ASM disk group, ASM cannot stripe. There are obvious scalability issues with this configuration for large databases since the EVA has a current maximum LUN size of 2 TB.

•  Configuration #2a—Variation

This variation of Configuration #2 is necessary to scale databases larger than 2 TB. This configuration slowly evolves into Configuration #1 as more LUNs are added.


•  Configuration #3—Double-striping across different physical disks

In this scenario, the virtual disk is still subject to double-striping, performed by both the EVA and ASM. This is different from Configuration #1 because the disks that ASM is striping across do not belong to the same EVA disk group. Because of the smaller spindle count in the EVA disk groups, there is a reduced advantage to EVA striping. Therefore, this configuration is not expected to perform well, nor is it recommended in general.

Figure 3. EVA configurations (logical view of the EVA8000 with two EVA disk groups per configuration, showing how the LUNs from each EVA disk group map to the ASM disk groups that hold the database files—user files, system files, control files, online redo logs—and the recovery files—archive logs, Flashback Area, RMAN backups; the diagram panels preserved from the original show Configuration 1 and Configuration 2a)


Configuring the EVA8000 storage array

The EVA8000 configuration included the following:

•  EVA XCS Firmware: v5.100

•  1 EVA8000 Controller Pair

•  12 EVA disk shelves

•  168 146-GB 15k FC disks

The fibre connections were made to Brocade SilkWorm 3800 switches and configured in dual 2-Gb fabrics.

All virtual disks in the EVA configurations used VRAID1. The virtual disks were manually balanced across both EVA controllers for optimal performance. The physical drives were divided across the two disk groups with more spindles in the database disk group, for example, 96 drives in the database disk group and 64 drives in the backup disk group. The disk groups were configured for single disk failure protection. The virtual disks were presented only to the host ports that were connected to the master controller for the virtual disk. This was done, in part, to eliminate any possible impact to performance from traffic crossing on the mirror ports of the EVA controllers. It is recognized that such a setup prevents access to the virtual disk in the event of a controller failover and would typically not be utilized in a customer environment.

Configuring the HP ProLiant DL580 server

An HP ProLiant DL580 server was used for each database server, with four Intel® Xeon™ processors (hyper-threaded to eight) and 32-GB RAM. The ProLiant Support Pack 7.5.1 was used to configure the server and install the Windows 2003 EE operating system. Fibre connectivity was provided by way of four Emulex LP1050 dual-channel host bus adapters (HBAs) with the native Windows MPIO driver used for host connectivity, for a total of eight physical paths to the fabric. Because of strict zoning to control access to the master controller for each virtual disk, a total of 32 paths (instead of the typical 64 paths) was available for each host I/O.

Configuring the software

Configuring the MPIO driver

The Windows MPIO/HP DSM driver provides multipathing on Windows servers and supports Active-Active storage arrays. The EVA8000 storage array was configured to load balance across HBAs and controllers with Active-Active enabled. The latest Windows MPIO/HP DSM driver can be obtained from the HP website under “support and downloads.” The default method of SQST was used in this testing (Table 1) for load balancing across the HBA ports. The MPIO/DSM version was 2.00.01.

Table 1. MPIO multipathing options

Type    Policy                       Description

NLB     No_Load_Balance              All I/O requests are routed through a chosen active path.

RR      Round_Robin                  All I/O requests are distributed across all active paths to the device in a round-robin manner.

SQR     Shortest_Queue_Requests      Each I/O request is routed to the active path with the least number of outstanding requests.

SQB     Shortest_Queue_Bytes         Each I/O request is routed to the active path with the least number of outstanding data bytes.

SQST    Shortest_Queue_ServiceTime   Each I/O request is routed to the active path where the total outstanding time for pending I/O requests is the least.


Configuring Windows for ASM

The diskpart command was run to enable auto-mounting of volumes, which allows asmtoolg to see the volume. In Windows 2003 server editions, this value is off by default. Next, the virtual disk was presented to the server, the disk was initialized in Computer Management, a basic disk was created, and a partition was left unformatted as a raw device, that is, no file system was installed (Figure 4). A diskpart sketch of the partitioning step follows the output below.

DISKPART> automount enable

 Automatic mounting of new volumes enabled.
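For reference, once the disk has been initialized in Computer Management, the partitioning step can also be scripted with diskpart instead of being performed in the GUI. The disk number below is only an example; note that no drive letter is assigned and no file system is created, so the partition stays raw for ASM.

DISKPART> rem The new EVA virtual disk is assumed to appear as disk 2 (confirm with "list disk")
DISKPART> list disk
DISKPART> select disk 2
DISKPART> rem Create a primary partition; do not assign a drive letter and do not format it
DISKPART> create partition primary
DISKPART> exit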

Figure 4. Unformatted volume

Using the asmtoolg tool, the disk was stamped with the ASM signature for use by the Oracle database. For more information, see Appendix B. Examples.

The stamped ASM disk was verified with the asmtool -list command.

C:\>asmtool -list
                \Device\Harddisk0\Partition1      69445M
ORCLDISKDATA0   \Device\Harddisk1\Partition1       1019M
                \Device\Harddisk2\Partition1    2096121M
                \Device\Harddisk3\Partition1    1048570M
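After stamping, the device can be added to an ASM disk group. The following SQL sketch, run from SQL*Plus against the ASM instance as SYSDBA, is one illustrative way to do this; the DATA disk group name is an example, and external redundancy is used because VRAID protection is already provided by the EVA.

-- Verify that the ASM instance can see the stamped device
SELECT path, header_status FROM v$asm_disk;

-- Create a disk group on the stamped device (disk group name is an example)
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '\\.\ORCLDISKDATA0';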

In the RAC environment, the virtual disk was presented to both servers. One of the servers was used to create the ASM volume. It is necessary to reboot the other node for it to recognize the volume and later install the database.

Using 32-bit Windows

If you are running 32-bit Windows with more than 4 GB of RAM, several extra steps are necessary to install and configure the Oracle database (a configuration sketch follows this list).

•  Add /3GB and /PAE in the Windows boot.ini file.

–  On a 4-GB RAM server, /3GB will allow applications to use 3 GB of RAM (instead of the default 2 GB).

–  On servers with more than 4-GB RAM, /PAE will allow applications access to the higher memory.

Note: The database buffer cache is the only component that can utilize memory greater than 3 GB, sized by way of DB_BLOCK_SIZE x DB_BLOCK_BUFFERS. Other SGA components, the PGA, other memory parameters, and Oracle processes must all share 3 GB of memory. For more information, refer to MetaLink Note 225349.1.

–  If used in conjunction with /PAE for higher memory amounts, /3GB will limit your /PAE to 16-GB RAM.


•  Adjust the registry setting awe_window_memory.

–  This parameter creates a window space to more than 4 GB of memory when /PAE is in effect.

•  Assign the Oracle user the Windows “Lock Pages in Memory” system privilege in Administrative Tools, Local Security Policy, Local Policies, User Rights Assignment.

•  Set the initialization parameter USE_INDIRECT_DATA_BUFFERS to TRUE.
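As an illustration only (the boot entry, block size, and buffer count below are examples rather than values from this project), the boot.ini switches and the related initialization parameters might look like the following; the awe_window_memory registry value and the “Lock Pages in Memory” privilege are still set as described above.

boot.ini (excerpt):
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /3GB /PAE

init.ora (excerpt):
use_indirect_data_buffers=true
db_block_buffers=1048576     # with an 8-KB block size, roughly an 8-GB buffer cache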

Modifying ASM hidden parameters

There are a few hidden ASM parameters that can be modified to change the ASM stripe size and allocation unit. These ASM parameters are originally intended for use in very large databases (multiple TB to PB range) and are used to reduce the amount of metadata overhead on the ASM instance. Working with recommendations from Oracle engineering, HP evaluated variations of these parameters on smaller databases to see if they would affect or improve overall database performance.

Note that the following ASM parameters must start with an “_” (underscore):

•  _asm_ausize (allocation unit)

•  _asm_stripesize

• fine grain/coarse grain striping templates

For more information on setting and configuring these parameters, refer to Oracle MetaLink Note 368055.1. A brief sketch of setting them follows.
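The following SQL sketch shows one way the allocation unit value might be set from SQL*Plus; it assumes the ASM instance uses an spfile and is restarted afterward. The 8-MB value matches the change evaluated in this project, and hidden parameters should be modified only after reviewing the MetaLink note above.

-- Run in the ASM instance (+ASM) as SYSDBA; the value is in bytes (8388608 = 8 MB)
ALTER SYSTEM SET "_asm_ausize"=8388608 SCOPE=SPFILE;
-- _asm_stripesize can be changed in the same way if a different fine-grain stripe size is needed
-- Restart the ASM instance; the new allocation unit size applies to disk groups created afterward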

ASM allows for the modification of two key parameters that affect the way data is striped across an ASM disk group: allocation unit size and stripe size. ASM Allocation Unit Size (_asm_ausize) determines the extent size for all database files. For example, a 1-GB data file using the default ausize of 1 MB will consist of 1,024 extents of 1 MB each (Figure 5).

Figure 5. ASM striping with 1 MB


The same data file using an ausize of 8 MB will consist of 128 extents of 8 MB each (Figure 6). Data is striped across extents, which are distributed equally across the disks in the ASM disk group. ASM Stripe Size (_asm_stripesize) determines the I/O size for fine grain templates (coarse grain templates are always 1 MB—the Oracle maximum I/O size). The default _asm_stripesize is 128 KB.

Figure 6. ASM striping with 8 MB

In our testing, the _asm_ausize showed the most promise for improving performance. To review the entire project, refer to the knowledge brief at www.hp.com/go/hpcft. The benchmarks used in this testing were random read/write (OLTP) and large sequential reads (DSS). The OLTP benchmark did not show significant performance differences when applying the ASM parameter changes. However, the large sequential reads benchmark did improve when changing the _asm_ausize value. These DSS results are shown in Figure 7.


Figure 7. DSS results with _asm_ausize = 8 MB (sequential reads, performance as a percentage of the baseline: ASM defaults = 100%, 8-MB AU size = 116%)

In this test, the DSS scripts completed much faster when the ASM allocation unit size was set to 8 MB (it outperformed the baseline by completing in less time). The sequential reads appear to function better when the ASM allocation unit is set to 8 MB. In this test, setting the ASM AU size improved overall DSS query performance by 16%. HP recommends that customers look at this parameter in a pre-production environment to see if database performance can be improved when running against their own data and workloads.
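Because the allocation unit size is fixed when a disk group is created, it is worth confirming what each disk group is actually using. A simple query such as the following, run against the ASM instance, reports the value:

-- Allocation unit size in MB for each ASM disk group
SELECT name, allocation_unit_size/1024/1024 AS au_mb, type, state
FROM   v$asm_diskgroup;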

 Working with Benchmark Factory

Quest Software’s Benchmark Factory for Databases 5.0 (Beta 4) was used to conduct the workload generation. Benchmark Factory is a generic workload generation utility that can perform DSS and OLTP workloads. Benchmark Factory can also perform a custom workload based on any trace generated by a database or any custom SQL script.

The workloads used for benchmark testing in accordance with the industry standard are defined in Table 2. Each test scenario was executed a minimum of three times, and averages were taken from all test results.

Table 2. Benchmark scale factors

Benchmark Size User load

Batch—single instance 500 GB 1

Batch—RAC N/A N/A

OLTP—single instance 500 GB 500

OLTP—RAC 2.5 TB 1,000

DSS—single instance 500 GB 3

DSS—RAC 2.5 TB 9


Benchmark Factory scale factors are approximate and should not be used as absolute guides. The following example shows a scale factor of 1000 for the DSS benchmark. While this appears to create a 934-GB database, that value does not include indexes or stored procedures, which will increase the overall database size even more. The scale factor shown is only an estimate, as stated in Figure 8.

Figure 8. Benchmark scale factors


Using advanced features in Benchmark Factory (Figure 9), a user can create the tables in different tablespaces, use the parallel parameter, turn logging on or off, and so on.

Figure 9. Benchmark Factory advanced features


Benchmark Factory can also load balance the workload across all nodes in the RAC environment. Figure 10 is a screenshot of 20 virtual users, each connecting to the database concurrently. Notice that Benchmark Factory has assigned the first 10 users to RAC node 1 and the second 10 users to RAC node 2. The same was true for the user loads of 500 and 1,000 used in the OLTP testing.

Figure 10. Benchmark Factory load balancing


While the benchmark test is running, the user can monitor progress from the benchmark agent shown in Figure 11. This screenshot shows the 20 concurrent virtual users running the OLTP benchmark and the exact query each is currently processing. It also shows transactions per second (TPS), minimum and maximum transaction time, as well as errors. This screenshot was taken at the beginning of a test, so some of these values are not yet populated. In addition, the complete display is not shown due to space constraints (the minimum and maximum transaction times are not visible in the screenshot).

Figure 11. Benchmark Factory agent

 ASM performance results

The testing effort provided the following results:

•  Batch testing—No discernable performance differences between configurations

•  OLTP testing single instance—Configuration #1 offered the best performance

• OLTP testing RAC—Configuration #1 offered the best performance

•  DSS testing single instance—Configuration #1 offered the best performance

•  DSS testing RAC—Configuration #1 offered the best performance


Batch test results

Batch testing consisted of a single user issuing sequential writes by way of Oracle Data Pump in parallel mode into two related tables (Table 3). These tables provided the basis for the OLTP benchmark. Total size was approximately 500 GB, representing 4 billion rows of data. The benchmark is measured in completion time (minutes:seconds). The test iteration was approximately 3 hours.

Table 3. OLTP benchmark

Table Rows

Product 777,000,000

Part 3,080,000,000

The database was set up with a single bigfile tablespace. The user schema contained the tables that were to be loaded during the benchmark. This is diagrammed in Figure 12; a creation and load sketch follows the figure.

Figure 12. Batch database configuration
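For illustration, the sketch below shows how a single bigfile tablespace might be created on the ASM disk group and how a parallel Data Pump import of the kind used for this batch load could be invoked. The disk group, tablespace, directory, dump-file, schema, and password names are hypothetical.

-- Bigfile tablespace created directly in the ASM disk group (names and size are examples)
CREATE BIGFILE TABLESPACE oltp_data
  DATAFILE '+DATA' SIZE 500G
  EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

Parallel Data Pump import from the command line (all names are examples):

C:\>impdp system/password DIRECTORY=dpump_dir DUMPFILE=oltp_%U.dmp SCHEMAS=bmf PARALLEL=8 LOGFILE=oltp_imp.log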


The test results did not show any major performance difference related to configuration, as shown in Figure 13. This was expected since, as long as there is no bottleneck in the system, sequential writes should not be influenced by the disk configuration.

Figure 13. Batch results on single instance (performance as a percentage of the baseline: Config #1 = 100%, Config #2 = 100%, Config #3 = 100%)

The batch testing on RAC was not completed due to a known issue in Oracle’s Data Pump load tool, specifically the ability to utilize the parallel parameter in a RAC environment. Work-arounds included shutting down all but one RAC node or running without the parallel parameter. In either case, this would result in a “single instance” configuration, which was already covered in the previous test. This is Oracle bug #5472417 and is currently being tracked by Oracle.

OLTP test results

The OLTP test consisted of a random read/write load on two related tables consisting of approximately 4 billion rows (Table 4). This test is measured in TPS. The test iteration was run for exactly 3 hours.

Table 4. OLTP benchmark 500 GB

Table Rows

Product 777,000,000

Part 3,080,000,000


The database was set up with a single bigfile tablespace. The user schema contained the tables and data necessary for the benchmark execution. This is diagrammed in Figure 14.

Figure 14. OLTP 500-GB database configuration

Five user schemas (each 500 GB) were used to scale up to the 2.5-TB database size used in the RAC testing, consisting of approximately 20 billion rows (Figure 15).

Figure 15. OLTP 2.5-TB database configuration


Transaction ratio “weights” are shown in Figure 16. The transaction types can be weighted to control how often each occurs during the random process of the OLTP benchmark. These weights were left at the industry-standard defaults.

Figure 16. OLTP transaction mix

The results for single instance testing on the 500-GB database with 500 concurrent users do not show a significant trend (Figure 17). Keep in mind that an OLTP workload is typically a server-intensive benchmark. Migrating to a two-node RAC environment, with more users and a larger database size, increases the impact of the various EVA configurations.

Figure 17. OLTP results on single instance (performance as a percentage of the baseline: Config #1 = 100.00%, Config #2 = 99.99%, Config #3 = 99.99%)


On the 2.5-TB RAC database with 1,000 concurrent virtual users, the workload on the storage array has increased. Now you start to see the divergences between the three EVA configurations (Figure 18). Configuration #1 is performing the best (highest throughput measured in TPS). The other configurations are performing at 97% and 96% of the baseline (Configuration #1). In other words, the other configurations had slightly lower TPS during the three-hour benchmark.

Figure 18. OLTP results on RAC (performance as a percentage of the baseline: Config #1 = 100%, Config #2 = 97%, Config #3 = 96%)

DSS test results

The DSS test consists of large sequential reads from eight related tables with approximately 2.6 billion rows. It is measured in time (minutes:seconds) to completion. A test iteration is approximately 7 hours. The 300 scale for the DSS benchmark is shown in Table 5.

Table 5. DSS benchmark

Table Rows

Part 60,000,000

Supplier 3,000,000

Partsupp 240,000,000

Customer 45,000,000

Order 450,000,000

Lineitem 1,800,364,500

Nation 25

Region 5


To create the 2.5-TB database, a combination of DSS scales (Table 6) was used from three user schemas (1000, 1000, 300). This is approximately 8.7 billion rows. Again, this benchmark is measured in time (minutes:seconds) to completion. A test iteration is approximately 6 hours.

Table 6. DSS benchmark 1000 scale

Table Rows

Part 200,000,000

Supplier 10,000,000

Partsupp 800,000,000

Customer 150,000,000

Order 1,500,000,000

Lineitem 6,001,215,000

Nation 25

Region 5

The database was set up with a single bigfile tablespace. The user schema contained the tables and data necessary to run the benchmark. This is diagrammed in Figure 19.

Figure 19. DSS 500-GB database configuration


To create the 2.5-TB database, a combination of DSS scales was used from three user schemas (1000, 1000, 300). Each schema had three concurrent users issuing the Power Test benchmark (Figure 20).

Figure 20. DSS 2.5-TB database configuration

There are 22 queries in the DSS Power Test. A selection of 15 queries was used for our benchmark to keep the testing time within a manageable timeframe (with no effect on the test outcome). These were selected based on their impact to the storage array and average time to completion. The queries marked with an asterisk were dropped from the RAC testing, again only in the interest of time.

Power Test—Measured in time to completion. Queries:

•  Minimum Cost Supplier Query

•  Customer Distribution Query *

•  Order Priority Checking Query

•  Promotion Effect Query

•  Local Supplier Volume Query

•  Top Supplier Query

•  Forecasting Revenue Change Query

•  Parts/Supplier Relationship Query

•  Volume Shipping Query*

•  Small-Quantity-Order Revenue Query

•  National Market Share Query*

•  Discounted Revenue Query

•  Important Stock Identification Query*

•  Global Sales Opportunity Query

•  Shipping Modes and Order Priority Query*


The results of the 500-GB database with three concurrent users in Figure 21 start to show the difference in database performance based on the configuration of the EVA virtual disks. Configuration #1 performed the best, completing the 15 queries of the Power Test in the fastest time. Configurations #2 and #3 performed within 91% and 93% of the baseline (Configuration #1). In other words, Configurations #2 and #3 took longer to complete the Power Test.

Figure 21. DSS results on single instance (performance as a percentage of the baseline: Config #1 = 100%, Config #2 = 91%, Config #3 = 93%)


When the workload is increased to nine concurrent users on the 2.5-TB RAC database, the greatest divergence is seen between the configurations (Figure 22). This particular benchmark stressed the storage array more than any other workload. Configuration #1 has a substantial advantage over the other configurations, which performed at 88% and 60% of the baseline. Note that Configuration #3 actually had fewer overall spindles than Configurations #1 and #2 (84 versus 80). To create a fair test, Configuration #3 was re-run with an increased spindle count. This helped to raise its performance from 60% to 74% of the baseline, but it was still the lowest performing configuration in this benchmark.

Figure 22. DSS results on RAC (performance as a percentage of the baseline: Config #1 = 100%, Config #2 = 88%, Config #3 = 60%, Config #3a [increased spindle count] = 74%)


Conclusions

The following results were observed during this testing.

Summary of results

•  Single Instance Testing

–  Batch—No discernable difference

–  OLTP—Configuration #1

–  DSS—Configuration #1

•  RAC Testing

–  Batch—Not applicable

–  OLTP—Configuration #1

–  DSS—Configuration #1

Project learnings

•  Does it matter how Oracle/Automatic Storage Management (ASM) maps data extents to the Enterprise Virtual Array (EVA) disk groups?

–  Yes, especially in DSS environments (consider modifying the ASM hidden parameter _asm_ausize for optimal performance).

•  Is there a penalty for double-striping?

–  No. Configuration #1 (double-striping) was the best performing configuration in testing. In addition, Configuration #1 also complies with both ASM and EVA configuration best practices.

•  What is the general recommended configuration for an Oracle/ASM database on the EVA?

–  Configuration #1

•  Does this recommendation change based on workload type?

–  No. Configuration #1 was the best performing in OLTP and DSS workloads.

•  Does this recommendation change based on single instance or Real Application Clusters (RAC)?

–  No. Configuration #1 was the best performing in both single instance and RAC testing. It was also noted that as configuration size increases (addition of RAC nodes, increase in database size, increase in users), the performance advantages of Configuration #1 increase.


Best practices

During testing, several best practices were developed to improve database performance for each scenario.

Best practices for Storage Administrators

•  Create at least two EVA disk groups (one for data files, one for archive/backup files).

•  Create multiple LUNs from each disk group, as needed, for the database.

•  Use VRAID1 for the database disk group (online files).

•  Use VRAID1 or VRAID5 for the archive log file disk group (backup files).

•  Place more high-performance disks into the first disk group, when possible, to improve database performance.

•  If necessary, use lower performance disks in the second disk group. Also, consider using fewer disks.

•  Balance the LUNs across EVA controllers.

Best practices for Server Administrators

•  Use multiple HBAs for path availability and optimal performance.

•  Manage host access to the virtual disk by way of paths to the master controller where possible.

•  Create OCR and voting disk on logical drives (Microsoft Windows).

•  Create and stamp ASM disks on raw partitions.

Best practices for Database Administrators

•  Create at least two ASM disk groups.

•  Use external redundancy, leaving the RAID functionality to the storage array.

•  Place the online files into the first disk group.

•  Place backup files and the Flashback Area into the second disk group.

–  If using the Flashback Area, be aware that a mirror copy of the online redo logs is placed in this area by default.

•  Present multiple LUNs, as needed, to each of the ASM disk groups.

•  Consider changing the _asm_ausize to 8 MB for DSS-like workloads (a parameter sketch follows this list).
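The fragment below pulls the database-side recommendations together as a minimal sketch; it assumes two ASM disk groups already exist (for example, +DATA and +RECOV, both external redundancy), and the names and size shown are illustrative only.

-- Database instance (not the ASM instance): online files go to the first disk group,
-- backups and the Flashback Area go to the second. Set the recovery area size first.
ALTER SYSTEM SET db_recovery_file_dest_size=500G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest='+RECOV' SCOPE=BOTH;
ALTER SYSTEM SET db_create_file_dest='+DATA' SCOPE=BOTH;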


 Appendix A. Bill of Materials

Item Quantity Version

Storage

EVA8000 (2C12D) 2 V5.100

HP StorageWorks 146-GB 15K FC HDD 168 HP01

Servers

HP ProLiant DL580 server 2 G3

HP 36-GB 15K U320 Pluggable Hard Drive 2

Intel X3.00GHz G3 Processor 4

HP DL580R03 Memory Expansion Board 2

HP 4-GB REG PC2-3200 2x2-GB DDR Memory 8

Emulex LP1050DC HBA 4 7.1.11x1 / 1.91a1

HP ProLiant DL580 server 8 G2

HP 36-GB 15K U320 Pluggable Hard Drive 2

Intel Xeon MP X2.85 GHz-2-MB Processor 4

HP Hot Plug Expansion Board Memory 2

HP 4096-MB PC1600 Reg SDRAM Memory 8

Emulex LP1050DC HBA 4 5.1.11x1 / 1.91a1

HP ProLiant BL20p server 2 G3

Intel Xeon 3.06-GHz Processor 2

HP 72-GB 15K U320 Pluggable Hard Drive 1

Qlogic QLA2312 HBA 2 9.0.2.16 / 1.45

SAN Infrastructure

Brocade SilkWorm 3800 SAN switches 8 V3.2.1

Software

Oracle 10g Enterprise Edition 10.2.0.2

Microsoft Windows 2003 Enterprise Edition RC2, SP1, SP2 beta

Quest Software Benchmark Factory for Databases 5.0 beta 4

HP MPIO Full Featured DSM for EVA Disk Arrays 2.00.01

HP MPIO DSM Manager 1.02.00

HP ProLiant Support Pack 7.51

HP StorageWorks Command View EVA 5.1


 Appendix B. Examples

This section shows screenshots from the Oracle asmtoolg utility to stamp a volume for ASM. The tool is launched from the command line.

C:\>asmtoolg

Figure 23. asmtoolg


For more information

HP technical references

•  Configuring Oracle ASM hidden parameters for EVA8000 

•  HP StorageWorks Command View EVA installation guide 

•  HP StorageWorks Command View EVA user guide 

•  HP StorageWorks Enterprise Virtual Array configuration best practices white paper

•  HP StorageWorks SAN design reference guide

HP solutions and training

•  Customer Focused Testing 

•  HP & Oracle alliance 

•  Network Storage Services 

•  HP Storage and SAN Education Courses 

HP product sites

•  Enterprise Class Storage Portfolio 

•  HP StorageWorks Command View EVA 

•  HP ProLiant DL Servers 

•  HP BladeSystem 

•  B-Series SAN Switches 

•  Multi-Path Options for HP Arrays 

•  Fibre Channel Host Bus Adapters 

Oracle

•  Oracle® Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (x64)

•  Oracle® Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (32-Bit)

•  Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Microsoft Windows

•  Oracle® Database 10g Release 2 Automatic Storage Management Overview and Technical Best Practices

Quest Software

•  Benchmark Factory® for Databases (Database Performance and Scalability Testing) 

© 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.