White Paper

EMC Solutions

Abstract

This solution demonstrates the benefits of deploying EMC® XtremCache™ and EMC VMAX® to increase IOPS and decrease latency for OLTP databases, and of deploying VMAX for Data Warehouse using Oracle 11gR2 RAC. It provides scalability, high performance, and ease of use for mission-critical business demands.

December 2013

EMC PROVEN HIGH PERFORMANCE SOLUTION FOR ORACLE RAC ON VMAX EMC VMAX 40K, EMC XtremSF, EMC XtremCache, Red Hat Enterprise Linux, Oracle Database Enterprise Edition

Optimum IOPS for OLTP workloads
Optimum throughput for an Oracle data warehouse workload


Copyright © 2013 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

Part Number H11734.1


Table of contents

Executive summary
    Business challenge
    Technology solution: open, best-in-class components
        Open standards benefits
        Operational advantages
    Solution overview
    Key results and recommendations
Introduction
    Purpose
    Scope
    Audience
    Terminology
Technology overview
    EMC Proven High Performance Solution for Oracle RAC on VMAX
        Architecture diagram
    Server layer
        Server hardware
        Server software
    Network layer
    Storage layer
        Storage hardware
        EMC Symmetrix VMAX 40K eight-engine configuration
        Storage software
        EMC FAST VP
    Oracle Database layer
        Storage virtual provisioning design
        Drive type
        ASM disk group configuration for OLTP database
        ASM disk group configuration for DW database
EMC Proven High Performance Solution for Oracle RAC on VMAX: Performance tests
    Introduction
    Test objectives
        OLTP database and workload profile
        DW database and workload profile
SLOB OLTP workload tests
    Overview
    Test objectives
    Query-only test scenarios and methodology
    Query-only test results
    Update-only test scenarios and methodology
    Update-only test results
Data warehouse query workload test
    Overview
    Test objective
    Test scenarios and methodology
    Test results
Data Warehouse data loading test
    Overview
    Test objective
    Test scenarios and methodology
    Test results
Conclusion
    Summary
    Findings
        OLTP test results
        DW test results
References
    EMC documentation
    Oracle documentation
Appendix: Configuring XtremCache devices


Executive summary

Business challenge

Customers require an open, scalable, tiered, highly available, and high-performance infrastructure to run their critical Oracle systems. Their IT organizations must strive for better performance and increased efficiency in their Oracle infrastructure and in their Oracle database and storage administration operations, including the following improvements:

Reduce capital expenditures and operational expenditures by deploying an open, non-lock-in technology

Consolidate many Oracle databases (Oracle database versions 10gR1 to 11gR2 and Oracle 12c) and database workloads, including OLTP and Data Warehouse, to maximize the efficiency of the data center infrastructure

Deliver maximum performance while effectively utilizing the existing arrays and Oracle infrastructure

Maintain the highest performance levels and provide predictable performance to deliver the quality of service required in these Oracle mixed workload environments

Technology solution: open, best-in-class components

EMC’s Proven High Performance Solution for Oracle Real Application Clusters (RAC) on EMC® Symmetrix® VMAX® is an open architecture that incorporates open, best-in-class Intel servers with EMC server-side flash storage (EMC XtremSF™) and EMC’s VMAX storage arrays.

The solution uses optimal servers to balance performance, scalability and Oracle license costs. The use of EMC Xtrem™ technologies XtremSF and XtremCache™ software in the servers provides distinct performance and operational advantages over equivalent systems that do not contain server-side flash technologies.

To accelerate an Oracle RAC environment, XtremCache:

Features an ultra-performance tier—XtremCache accelerates any application that benefits from low-latency, high bandwidth physical read I/O.

Hottest data resides on database server flash.

Data is as close to the Oracle Database server CPU as any storage model will allow.

Cooperates with Oracle Clusterware—Oracle Clusterware is the final authority on all node membership information in an Oracle RAC deployment.

Has no awareness of database instances

XtremCache ignores the content of blocks of cached LUNs.

Only XtremCache nodes can access LUNs cached by XtremCache.

XtremCache does not impose a performance penalty on active transactions for cache insertions or cache coherency.



Offers optimized performance—Because XtremCache serves most read I/O from server flash, the VMAX array handles far fewer read IOPS, leaving more bandwidth for handling writes. VMAX FAST™ enables automatic data placement as data goes from hot to cool usage.

Improves performance with FAST VP—Enabling the FAST VP feature on VMAX improves performance when the workload is very heavy and the active data set exceeds what XtremCache can hold, so that reads that miss XtremCache are served from the appropriate array tier.

Delivers the highest performance levels in the industry—This solution delivers the highest performance for mixed-workload Oracle environments. EMC Proven Solutions for Oracle have demonstrated sustained metrics of over 3.7 million IOPS with latency of less than half a millisecond for the OLTP workload, and data warehouse workloads with sustained throughput of 32 GB/s and a data load rate of 21 TB/hour. The Key results and recommendations section provides details.

Open standards benefits

This solution is based on open standards. Advantages resulting from the open standards commitment include the following:

A “flash everywhere” architecture, which applies flash where it is most effective, from the Oracle database server to the EMC storage platform

Mitigation of I/O bottlenecks to deliver maximum Oracle read performance, enabling the VMAX to serve more I/Os for other applications

Flexible adaptation to existing and future customer needs and open industry standards

Lower capital investment and operational expense without vendor lock-in

Operational advantages

The EMC open architecture not only supports different releases of Oracle Database software (10g, 11g, and 12c), but also allows those databases to run concurrently; that is, it supports database consolidation.

Open architecture and flexible adaptation mean that applications do not need to be modified to deploy databases on this solution, lessening potential unforeseen impact to business operations and systemic data flow throughout the enterprise.

Solution overview

The purpose of the solution is to build an EMC High Performance Solution for Oracle RAC on VMAX infrastructure based on an open architecture and demonstrate the following capabilities of the infrastructure:

High performance and flexibility

Low operational costs

Reduced risk

This white paper validates the performance of the solution and provides guidelines to build similar solutions.



Key results and recommendations

The EMC Proven High Performance Solution for Oracle RAC on VMAX has several core advantages:

Delivers the highest performance for mixed Oracle workload environments: The EMC Proven High Performance Solution for Oracle RAC on VMAX has demonstrated sustained metrics of over 3.7 million IOPS with latency of less than 0.5 milliseconds for the OLTP workload, and Data Warehouse workloads with a sustained throughput of 32 GB/s and a data load rate of 21 TB/hr. This performance is achieved by utilizing optimal open components at the compute, network, and storage layers. Details are listed in Table 1, Table 2, and Table 3:

Table 1. IOPS test results with workload when XtremCache is enabled and FAST VP disabled

Workload type               | Performance statistics  | One node | Two nodes | Four nodes | Eight nodes
Read only workload          | IOPS                    | 457,136  | 962,155   | 1,914,963  | 3,765,176
                            | Response time (ms)      | 0.74     | 0.69      | 0.68       | 0.75
UPDATE transaction workload | Aggregate IOPS          | 53,492   | 99,649    | 190,809    | 303,330
                            | Redo throughput (MB/s)  | 20       | 37        | 71         | 115

Note: Because XtremCache is a write-through cache, the data blocks that have been read into buffer cache are accelerated by EMC XtremCache for the UPDATE workload. Meanwhile, the dirty blocks that have been flushed pass through XtremCache and are directly written to the back-end VMAX array.

Table 2. IOPS test results with workload when XtremCache and FAST VP are enabled

Workload type               | Performance statistics  | Baseline  | 10% VP policy | 50% VP policy | 100% VP policy
Read only workload          | IOPS                    | 2,216,377 | 2,331,233     | 2,498,420     | 2,687,456
                            | Response time (ms)      | 0.95      | 0.87          | 0.97          | 1.11
UPDATE transaction workload | Aggregate IOPS          | 145,342   | 156,359       | 223,508       | 237,204
                            | Redo throughput (MB/s)  | 56        | 60            | 87            | 92



Table 3. DW test results for node scalability without XtremCache and FAST VP

Workload type  | Performance statistics | One node | Two nodes | Four nodes | Eight nodes
Query workload | GB/s                   | 6.98     | 16.89     | 24.77      | 28.25
Data Loading   | TB/hour                | 2.71     | 6.09      | 10.87      | 21.31

Uses EMC technology enablers in the reference architecture

EMC VMAX 40K with FAST VP enabled

XtremSF PCIe flash card

XtremCache caching software

This solution provides a foundation that can be scaled in a flexible, predictable, and nearly linear way using additional server resources, including CPUs and memory, HBA ports, and front-end ports, to provide higher IOPS and throughput based on the configuration in this solution.


Introduction

Purpose

The purpose of this white paper is to describe an EMC High Performance Solution for Oracle RAC on VMAX infrastructure based on an open architecture and demonstrate the following capabilities of the infrastructure:

High performance and flexibility

Low operational costs

Reduced risk

This paper validates the performance of the solution and provides guidelines for building similar solutions.

Scope

This white paper serves the following purposes:

Introduces the key solution technologies

Describes the solution architecture and design

Describes the solution test scenarios and presents the results of performance testing

Identifies the key business benefits of the solution

Audience

This white paper is intended for chief information officers (CIOs), data center directors, Oracle DBAs, storage administrators, system administrators, technical managers, and any others involved in evaluating, acquiring, managing, operating, or designing Oracle database environments.

Terminology

Table 4 lists terminology used in this white paper.

Table 4. Terminology

Term | Definition
AWR  | Automatic Workload Repository
ASM  | Automatic Storage Management
DML  | Data Manipulation Language
PCIe | Peripheral Component Interconnect Express
PGA  | Program Global Area
RAC  | Real Application Clusters
SATA | Serial Advanced Technology Attachment
SGA  | System Global Area
SLOB | Silly Little Oracle Benchmark



Technology overview

EMC Proven High Performance Solution for Oracle RAC on VMAX

The EMC Proven High Performance Solution for Oracle RAC on VMAX includes the following layers of components:

Server—Cisco UCS C240 M3

Network—Cisco MDS 9506 Multilayer Director

Storage—EMC storage and software:

EMC Symmetrix VMAX 40K storage system

EMC XtremSF–Server PCIe flash card and its corresponding driver and firmware

EMC XtremCache–Cache software for server-side flash cache

Oracle 11gR2 Database—Eight-node Oracle RAC deployment

Architecture diagram

Figure 1 depicts the EMC Proven High Performance Solution for Oracle RAC on VMAX. We first deployed an eight-node SLOB RAC database for the OLTP workload test. After the SLOB test finished, we deleted the SLOB database and deployed an eight-node DW RAC database for the DW workload test on the same eight-node cluster.

Figure 1. Solution architecture

Server layer

Comprising the server layer of the solution, eight Cisco UCS C240 M3 servers utilize a total of 128 cores with 2.90 GHz E5-2690 processors, 2.56 TB RAM, and 11 TB of XtremSF flash PCIe cards. The Cisco UCS C240 M3 is an enterprise-class rack server designed for performance and expandability. As part of the EMC Proven High Performance Solution for Oracle RAC on VMAX, the UCS C240 M3 enables a high-performing, consolidated approach to an Oracle infrastructure, resulting in deployment flexibility without the need for application modification.



Features and benefits include the following:

Hardened protection for virtual and cloud environments, as part of the Intel Xeon processor E5-2600 product family

Fully integrated quad-port gigabit Ethernet

Figure 2 shows one of the eight Cisco UCS C240 M3 rack servers utilized in the EMC Proven High Performance Solution for Oracle RAC on VMAX solution.

Figure 2. Cisco UCS C240 M3 rack server (1 of 8)

Server hardware

Table 5 describes the various hardware components of the EMC Proven High Performance Solution for Oracle RAC on VMAX’s server layer.

Table 5. Server hardware

Server hardware | Quantity | Configuration | Description
Cisco UCS C240 M3 | 8 | 2 x 8-core Sandy Bridge E5-2690 processors, 512 GB RAM, 4 x 200 GB SSD | Servers

Each server includes the following components:

PCIEHHS-7XXM | 2 | 700 GB SLC PCIe card | EMC XtremSF
UCSC-C240-M3S | 1 | UCS C240 M3 SFF w/o CPU mem HD PCIe with rail kit expdr | Server housing
UCS-CPU-E5-2690 | 16 | 2.90 GHz E5-2690/130W 4C/10MB Cache/DDR3 1600 MHz | CPU cores
UCS-ML-1X324RY-A | 16 | 32 GB DDR3-1600 MHz LR DIMM/PC3-12800/quad rank/x4/1.35v | DRAM
UCS-SD200G0KA2-E | 4 | 200 GB standard height 15 mm SATA SSD hot plug/drive sled mounted | Internal hard drives
UCSC-SD-16G-C240 | 1 | 16 GB SD card module for C240 servers | SD card
UCSC-RAIL-2U | 1 | 2U rail kit for UCS C-Series servers | Rail kit
N20-BBLKD | 20 | UCS 2.5-inch HDD blanking panel | HDD panels
UCSC-HS-C240M3 | 2 | Heat sink for UCS C240 M3 rack server | Heat sinks
UCSC-PCIF-01F | 4 | Full-height PCIe filler for C-Series | PCI slot fillers
UCSC-PCIF-01H | 1 | Half-height PCIe filler for UCS | PCI slot filler
UCSC-RAID-11-C240 | 1 | LSI 2008 SAS RAID mezzanine card for UCS C240 server | RAID card
CAB-C13-C14-AC | 2 | Power cord C13 to C14 (recessed receptacle) 10A | Power cables
UCSC-PSU-650W | 2 | 650 W power supply for C-Series rack servers | Power supplies
LPE12004-M8 | 2 | Emulex Quad Channel 8 Gb FC PCIe HBA | Fibre Channel cards
E10G42BTDA | 1 | Intel X520-DA2 network adapter, PCI Express 2.0 x8, low profile, 10 Gigabit Ethernet, 2 ports | 10 GbE network card
E10GSFPSR | 2 | Intel Ethernet SFP+ SR Optics, SFP+ transceiver module, 1000Base-SX/10GBase-SR, 850 nm | Optical ports for FC


Server software

Table 6 describes the various software components (versions tested are shown below) of the EMC Proven High Performance Solution for Oracle RAC on VMAX server layer.

Table 6. Server software

Server software | Configuration | Description
Red Hat Enterprise Linux | 6.3 | Operating system for database servers
Oracle Grid Infrastructure 11g Release 2 | Enterprise Edition 11.2.0.3 | Provides Clusterware and ASM storage volume management
Oracle Database 11g Release 2 | Enterprise Edition 11.2.0.3 | Database software
EMC XtremCache software | 2.0.1 | Software for server-side flash cache

Red Hat Enterprise Linux

Red Hat Enterprise Linux includes enhancements and new capabilities that provide rich functionality, especially the developer tools, virtualization features, security, scalability, file systems, and storage. Red Hat Enterprise Linux is a versatile platform that can be deployed on physical systems, as a guest on the major hypervisors, or in the cloud. It supports all leading hardware architectures with compatibility across releases.

Oracle Grid Infrastructure and Database 11g Release 2

Oracle Database 11gR2 is available in a variety of editions tailored to meet the business and IT needs of an organization. This solution utilizes Oracle Database 11gR2 Enterprise Edition (EE). Oracle Database 11gR2 EE delivers industry-leading performance, scalability, security, and reliability on a choice of clustered or single servers running Windows, Linux, or UNIX. It supports advanced features that are either included or available as extra-cost options, such as Virtual Private Database, and data warehousing options such as Partitioning and Advanced Analytics.

EMC XtremSF flash storage technology

EMC XtremSF is an advanced flash storage technology deployed in the server, designed to deliver unprecedented performance acceleration by reducing latency and increasing I/O throughput. It allows applications to access data in the most efficient manner possible. Residing on the server PCIe interconnect bus, XtremSF reduces application response time from milliseconds to microseconds by performing I/O operations at the server side.

EMC XtremCache technology

EMC XtremCache and XtremSF work together to reduce latency and accelerate throughput to dramatically improve application performance without compromising data consistency in the storage array.


In this solution, two 700 GB EMC XtremSF flash cards are used in each RAC node. One XtremCache cache device is created from one XtremSF card, which means that there are two 700 GB cache devices configured on each Oracle RAC node.

XtremCache accelerates reads and protects data by using a write-through cache policy to the networked storage to deliver persistent high availability, integrity, and disaster recovery.

XtremCache coupled with array-based EMC FAST software provides the most efficient and intelligent I/O path from the application to the underlying storage array. The result is a networked infrastructure that is dynamically optimized for performance, intelligence, and protection for both physical and virtual environments.

Benefits of XtremCache include the following:

Provides performance acceleration for read-intensive workloads.

Enables accelerated performance with the protection of the back-end, networked storage array.

Provides an intelligent path for the I/O and ensures that the right data is in the XtremCache of the servers at the right time.

Uses minimal CPU and memory resources from the server by offloading flash and wear-level management onto the XtremSF PCIe flash card.

Works in both physical and virtual environments.

Provides better data protection. Because XtremCache is a write-through cache, it does not compromise data consistency in the storage array, even if the cards fail in the middle of I/O processing.

XtremCache does not need to be warmed up after a database instance reboot. (A server reboot, however, requires a cache warm-up.)

Works with any kind of I/O, for any application and any database platform.

Is supported on various operating systems and server platforms.

Allows customers flexibility in choice of cache capacity on the cards.

Supports Oracle RAC database, even RAC databases “stretched” with EMC VPLEX.

As XtremCache is installed in a greater number of servers in the environment, more I/O processing is offloaded from the storage array to the XtremCache configured on the servers. This provides a highly scalable performance model in the storage environment. For more information, refer to:

Introduction to EMC XtremCache for Oracle Real Application Clusters listed in References

Introduction to EMC XtremCache for Oracle Real Application Clusters video listed in References


EMC XtremCache configuration

XtremCache supports Oracle RAC using a distributed cache coherency algorithm. XtremCache automatically recognizes the presence of Oracle RAC and switches operation to clustering mode.

All working Oracle RAC nodes must have XtremCache installed in order for the distributed cache feature to come online. EMC recommends using XtremCache with Oracle RAC to cache LUNs holding data files and TEMP files. EMC does not recommend caching redo logs, archives, or Clusterware files.

Appendix: Configuring XtremCache devices provides steps for configuring XtremCache devices.
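When deciding which LUNs to cache, it helps to map each ASM disk group to its underlying device paths so that only the LUNs behind the data and TEMP files are added to XtremCache. The following is a minimal sketch of that check; the instance name and Grid home are assumptions for a typical installation, while the query uses only standard ASM views:

#!/bin/bash
# List every ASM disk with its disk group so the LUNs backing +DATA (and TEMP)
# can be cached by XtremCache while +REDO and +CRS LUNs are left uncached.
export ORACLE_SID=+ASM1                      # assumed ASM instance name on this node
export ORACLE_HOME=/u01/app/11.2.0/grid      # assumed Grid Infrastructure home
$ORACLE_HOME/bin/sqlplus -s / as sysasm <<'SQL'
set pagesize 200 linesize 120
column dg_name format a12
column path    format a45
select g.name dg_name, d.path, round(d.total_mb/1024) size_gb
  from v$asm_disk d
  join v$asm_diskgroup g on g.group_number = d.group_number
 order by g.name, d.path;
SQL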

Network layer

The switch component level is made up of two Cisco MDS 9506 director-class SAN switches (shown in Figure 3), configured to produce 108 GB/s active bandwidth. The Cisco MDS 9506 is designed for deployment in storage networks supporting virtualized data centers and enterprise clouds. It combines high performance and low total cost of ownership, a core architectural requirement at all levels of the VMAX performance block.

Figure 3. Cisco MDS 9506 Multilayer Director

The Cisco MDS 9506 also offers these benefits:

Highly available scalability through a combination of nondisruptive software upgrades, stateful process failover, and full redundancy of all core components

Optimal platform for accelerated, intelligent storage applications such as EMC replication and backup, data migration, and storage media encryption

Virtual machine transparency and end-to-end visibility all the way from the virtual machine down to the EMC storage, enabling scalable, mobile virtual machines

Cisco NX-OS 5.2(8) software is used in the EMC High Performance Solution for Oracle RAC on VMAX.



Table 7 lists the hardware components of the network layer of this solution.

Table 7. Hardware components of the network layer

Network hardware | Quantity | Configuration | Description
MDS-PBF-ADV32 | 8 | Cisco 32-port 8-Gbps FC Port Module | Line cards
MDS-9506-V2 | 2 | Chassis SUP2 no ports Director | Director chassis
FC10M-50MLC | 128 | FCHNL 10M 50/125 LC-LC | LC adapters
MDS-PW19-TWST | 2 | Cisco 9506 Twist Lock Power Cord US | Power cord
FC1M-50MLC | 96 | FCHNL 1M 50/125 LC-LC |
MDS-8G-SW | 192 | MDS 2/4/8-Gbps FC shortwave SFP LC |
MDS-ENT-9500 | 2 | Enterprise license key 9500 |

Storage layer

The storage components comprising the EMC Proven High Performance Solution for Oracle RAC on VMAX include the following:

VMAX 40K with eight engines (the specification of the engine is shown in Figure 4)

EMC PowerPath®

Figure 4. EMC Symmetrix VMAX 40K



Storage hardware

Table 8. Storage hardware

Storage hardware | Quantity | Configuration | Description
SD-DBV-DIR-1P | 4 | VMAX 40K DBV DIR 1 Phase |
SD-DBV-SPS | 16 | VMAX 40K SPSV |
SD-1P | 1 | VMAX 40K 1P INFRAST |
SVDBSOLDOR1P | 4 | VMAX 40K Drive Bay Solid Door 1P |
SYMV2-MIGRBAS | 1 | Symmetrix 40K Migration Bundle |
PP-SE-SYM | 1 | PPATH SE SYM |
SD-VCONFIG32 | 1 | VMAX 40K VCONFIG 32 |
SD-FE80000E | 16 | VMAX 40K 8 MM 8 G Fibre | Fibre ports
SD-INTBKVKIT | 8 | VMAX 40K Internal Cable Bracket Titan |
SD-PW40U-US | 10 | 30A 1Phase Namer Japan L6-30P | Power
VL4FM2001B | 256 | VMAX 40K 4G flash 400 GB drive | Storage flash
VL4103001B | 376 | VMAX 40K 4G 10K 300 GB SAS drive | Disk drives
SD-ADD192C | 7 | VMAX 40K Add Engine-192GB-C | VMAX engine
SD-192-BASEC | 1 | VMAX 40K Base-192GB-C | VMAX cache
SD-DE25-DIR | 64 | VMAX 40K 25SLT DR ENCL |

VMAX 40K eight-engine configuration

VMAX 40K is designed for high efficiency, scalability, and secure data persistence. Built on the strategy of powerful, trusted, and smart storage, and founded in the EMC Virtual Matrix Architecture that allows for seamless, cost-effective growth, the VMAX 40K offers the following:

Zero downtime migration technology and lower cost and greater efficiency through automated tiering

More scalability for less management complexity and operational expense


Table 9 lists the VMAX 40K components used in the EMC Proven High Performance Solution for Oracle RAC on VMAX.

Table 9. VMAX 40K configuration

Component | Quantity | Configuration
Engines | 8 | 192 GB cache each, total of 1,536 GB raw cache
Directors | 16 | Eight ports on each director with 8 Gb FC
Bays | 5 | 1 system, 4 disk
10K SAS drives | 376 | 100 TB raw, 45 TB usable (RAID 1 configured)
Flash drives | 256 | RAID 1 configured

Storage software

Table 10 lists the software used in the EMC Proven High Performance Solution for Oracle RAC on VMAX storage layer.

Table 10. Storage software

Storage software | Configuration | Description
VMAX Enginuity™ code | 5876 | VMAX microcode
EMC Solutions Enabler | 7.6 | Host CLI storage management software
EMC PowerPath | 5.7 SP1 | Multipathing and load-balancing software

EMC FAST VP

FAST VP provides support for sub-LUN data movement in thin provisioned environments. It combines the advantages of virtual provisioning with automatic storage tiering at the sub-LUN level to optimize performance and cost while radically simplifying storage management and increasing storage efficiency.

FAST VP uses intelligent algorithms to continuously analyze devices at the sub-LUN level. This enables it to identify and relocate the specific parts of a LUN that are most active and would benefit from being moved to higher-performing storage such as SSD. It also identifies the least active parts of a LUN and relocates that data to higher-capacity, more cost-effective storage such as SATA, without altering performance.

Data movement between tiers is based on performance measurement and user-defined policies, and is executed automatically and nondisruptively by FAST VP.

FAST VP configuration involves three types of components—storage groups, FAST policies, and storage tiers:

A storage group is a logical grouping of storage devices used for common management. A storage group is associated with a FAST policy, which determines how the storage group’s devices are allocated across tiers.

A FAST policy is a set of tier usage rules that is applied to associated storage groups. A FAST policy can specify up to three tiers and assigns an upper usage limit for each tier. These limits determine how much data from a storage group can reside on each tier included in the policy.


Administrators can set high-performance policies that use more flash drive capacity for critical applications, and cost-optimized policies that use more SATA drive capacity for less-critical applications.

A storage tier is made up of one or more virtual pools. To be a member of a tier, a virtual pool must contain only data devices that match the technology type and RAID protection type of the tier.

FAST VP is an enabling technology for workloads with small, random I/O and relatively small working sets that fit into the higher-performing tiers of a FAST policy. Oracle OLTP databases tend to be highly random in nature, with small working sets compared to the total database size. Additionally, OLTP databases have inherent locality of reference with varied I/O patterns, for the following reasons:

OLTP databases tend to be temporal in nature, as the most recent data is more important than older data.

The relative importance of data changes from object to object. Some tables tend to be accessed more than others.

The number of IOPS per gigabyte of an object, also known as object intensity, changes quite significantly. A good example is a database index compared with a database table. The relative IOPS received by a database block occupied by an index object can be very high compared to the IOPS received by a database block consumed by a table object.

Note: Oracle redo logs have a very predictable sequential write workload, and this type of activity does not benefit significantly from up-tiering to SSD. It is recommended that these logs be excluded from any FAST policy, or else pinned to a 10k rpm or 15k rpm drive tier so that FAST VP will not include them in its analysis.

When using FAST VP, there is no need to match the Logical Volume Manager (LVM) stripe depth with the Virtual Provisioning thin device extent.

Because Oracle typically accesses data either by random single-block read/write operations (usually 8 KB in size) or by sequentially reading large portions of data, FAST VP movements have no impact on the ASM AU size or on data access.

Oracle Database layer

In Oracle 11gR2, Oracle ASM and Oracle Clusterware have been integrated into the Oracle Grid Infrastructure. Oracle Automatic Storage Management Cluster File System (ACFS) extends ASM functionality to act as a general-purpose cluster file system. In the solution, we use ASM to store the database files and Oracle ACFS to store the comma-separated values (CSV) files for the Data Warehouse data loading test.

Storage virtual provisioning design

EMC Virtual Provisioning™ automatically stripes data across all data devices in a virtual pool and balances the workload across storage devices. To ensure even striping of data, all data devices in a virtual pool should be the same size.



Table 11 shows the RAID selections and number of spindles for each virtual pool. In this solution, Oracle data files and redo log files are located on thin devices using RAID 1 protection and all physical spindles for the best performance and capacity. The flash tier is used when FAST VP is enabled on VMAX during the OLTP workload test.

Table 11. Virtual pool design on VMAX 40K

Virtual pool | RAID protection | Drive type | Physical spindle size | Number of active spindles | Item
FC_R1_1 | RAID 1 (2-way, mirror) | SAS 10K | 300 GB | 376 + 5 (spare disks) | CRS, DATA, REDO, CSV
SSD_R5  | RAID 5 (3+1)           | SSD     | 400 GB | 256 + 6 (spare disks) | FAST VP

ASM disk group configuration for OLTP database

Table 12 details the RAC database’s ASM disk group design. For the OLTP database, we used two ASM disk groups to store the relevant database files, including data files, control files, online redo log files, and temporary files. Default settings are used for ASM disk groups.

Table 12. ASM disk group design for OLTP databases

Item | LUN size (GB) | Number of LUNs | ASM disk group name
CRS  | 10   | 2  | +CRS
DATA | 1024 | 18 | +DATA
REDO | 64   | 4  | +REDO
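Once created, the layout in Table 12 can be confirmed from any node with the ASM command-line utility. A minimal sketch follows; the instance name and Grid home are assumptions:

#!/bin/bash
# Confirm the +CRS, +DATA, and +REDO disk groups and list the disks behind +DATA.
export ORACLE_SID=+ASM1                      # assumed ASM instance name on this node
export ORACLE_HOME=/u01/app/11.2.0/grid      # assumed Grid Infrastructure home
export PATH=$ORACLE_HOME/bin:$PATH
asmcmd lsdg            # one line per disk group: state, redundancy, total and free space
asmcmd lsdsk -G DATA   # member disks of the +DATA disk group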

ASM disk group configuration for DW database

Table 13 details the ASM disk group design for a data warehouse database. Three ASM disk groups store the relevant database files, including data files, online redo log files, and CSV files (used for ETL).

Table 13. ASM disk group design for DW database

Item | LUN size (GB) | Number of LUNs | AU size (MB) | Striping | ASM disk group name
DATA | 1024 | 20 | 16 | Fine-grain | +DATA
REDO | 64   | 4  | 1  | Fine-grain | +REDO
CSV  | 512  | 2  | 1  | Fine-grain | +CSV
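The 16 MB allocation unit in Table 13 is set when the disk group is created, and fine-grained striping is a file-template attribute. The following is a hedged sketch of that pattern; the device paths are placeholders, not the devices used in the tested configuration:

#!/bin/bash
# Hypothetical DW +DATA disk group with a 16 MB allocation unit, followed by
# switching the datafile template to fine-grained striping, as in Table 13.
sqlplus -s / as sysasm <<'SQL'
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY    -- external redundancy assumed; the VMAX provides RAID protection
  DISK '/dev/mapper/dw_data*'                -- placeholder device path pattern
  ATTRIBUTE 'au_size'          = '16M',
            'compatible.asm'   = '11.2',
            'compatible.rdbms' = '11.2';
ALTER DISKGROUP DATA ALTER TEMPLATE datafile ATTRIBUTES (FINE);
SQL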


EMC Proven High Performance Solution for Oracle RAC on VMAX: Performance tests

Introduction

The EMC Proven High Performance Solution for Oracle RAC on VMAX test environment consisted of two main workloads in order to characterize both OLTP and Data Warehouse (DW) systems. We created an eight-node Oracle 11gR2 RAC database for the OLTP workload testing. When we finished the OLTP test, we destroyed the OLTP database and created another DW database for DW workload testing on the same cluster environment.

We used SLOB (the Silly Little Oracle Benchmark) to generate physical random read/write I/O, the typical I/O pattern in Oracle OLTP database environments. We also tested the performance improvement gained by enabling different FAST VP policies while running the SLOB workload.

On the DW database, we used a DSS-like toolkit to generate the workload. During the generation of the DW workload, XtremCache and FAST VP were not enabled.

The system (including the flash side and the array side) I/O performance metrics (IOPS and latency) were gathered primarily from the Automatic Workload Repository (AWR) report. In addition, we gathered metrics for I/O throughput at the server/database and storage levels.

Test objectives

The objectives of our tests were to demonstrate the following:

Sustained flash and storage array IOPS for Oracle OLTP database workload

Sustained query throughput in GB/s as well as data loading throughput in TB/hour for an Oracle data warehouse workload

During the test, the database was in no-archive-log mode to achieve maximum performance. The test scenarios are listed in Table 14.

Table 14. Test scenarios

Test scenario | XtremCache | FAST VP | Notes
OLTP with query only | Yes | No | Node scalability test
OLTP with query only | Yes | Yes | Workload running on 8 nodes; set up a baseline and then enable different FAST VP policies to validate the performance improvement
OLTP with update only | Yes | No | Node scalability test
OLTP with update only | Yes | Yes | Workload running on 8 nodes; set up a baseline and then enable different FAST VP policies to validate the performance improvement
DW query | No | No | Node scalability test
DW data loading | No | No | Node scalability test
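As noted above, the database ran in noarchivelog mode during the tests. A hedged sketch of switching an existing RAC database follows; the database name SLOB is illustrative:

#!/bin/bash
# Put the test database into noarchivelog mode: stop all instances, mount a
# single instance, change the mode, then restart the full cluster database.
# (Older releases may also require cluster_database=false while the mode is changed.)
srvctl stop database -d SLOB
sqlplus / as sysdba <<'SQL'
startup mount;
alter database noarchivelog;
shutdown immediate;
SQL
srvctl start database -d SLOB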



The OLTP and DW workload profiles used in these tests are shown below.

OLTP database and workload profile

Table 15 describes each OLTP database workload profile for the solution. We used the SLOB toolkit to generate an OLTP database and deliver the OLTP-like workloads, including the query-only and update-only workloads required for the solution.

Table 15. Database workload profile for each OLTP database

Profile characteristic | Details
Database type | OLTP
Database size | 16 TB
Oracle Database 11gR2 | 8-node RAC database on ASM
Instance configuration for read workload | SGA size for each instance: 16 GB
Workload profile | OLTP-like workload simulated by SLOB
Network connectivity | 8 Gb FC for SAN; 10 GbE for IP

Note: Considering that a larger SGA will buffer more data, we configured a 16 GB SGA, which is small enough to generate a stable and high I/O workload.
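The 16 GB SGA in Table 15 can be applied uniformly across the RAC instances; a minimal sketch, assuming an spfile-managed database named SLOB and that a restart is acceptable in the test environment:

#!/bin/bash
# Set a 16 GB SGA for every instance of the test database, then restart it
# so the new memory settings take effect.
sqlplus / as sysdba <<'SQL'
alter system set sga_max_size = 16g scope=spfile sid='*';
alter system set sga_target   = 16g scope=spfile sid='*';
SQL
srvctl stop  database -d SLOB
srvctl start database -d SLOB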

DW database and workload profile

Table 16 details the database and workload profile for the solution. We used a DSS-like toolkit to generate a data warehouse database and deliver the DSS workloads, including the query and data loading workloads required for the solution.

Table 16. Database and workload profile for DW database

Profile characteristic | Details
Database type | Data warehouse
Database size | 20 TB
Oracle Database 11gR2 | 8-node RAC on ASM
Workload profile | DSS-like workload
Data load source | External flat files on Oracle ACFS used for external tables
Network connectivity | 8 Gb FC for SAN; 10 GbE for IP
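The data load reads the flat files on ACFS through Oracle external tables. The sketch below shows the general pattern; the directory path, file names, and column definitions are illustrative and not the schema used in the tests:

#!/bin/bash
# Expose CSV files on ACFS as an external table and load them with a
# direct-path, parallel INSERT, the general pattern behind the DW loading test.
sqlplus / as sysdba <<'SQL'
CREATE OR REPLACE DIRECTORY csv_dir AS '/acfs/csv';    -- assumed ACFS mount point
CREATE TABLE sales_ext (
  order_id  NUMBER,
  item_id   NUMBER,
  quantity  NUMBER,
  amount    NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (FIELDS TERMINATED BY ',')
  LOCATION ('sales_1.csv', 'sales_2.csv')
)
PARALLEL 8;
-- SALES is the pre-created target table (assumed); APPEND gives a direct-path load.
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ APPEND PARALLEL(sales, 8) */ INTO sales SELECT * FROM sales_ext;
COMMIT;
SQL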


SLOB OLTP workload tests

Overview

The solution characterizes the Oracle OLTP system performance on the VMAX array with EMC XtremSF cards installed on database servers. An eight-node Oracle 11gR2 RAC database was deployed for the OLTP workload test. We used SLOB to generate the workload because it is the preferred SQL workload generator for driving maximum physical random I/O from a database platform.

SLOB is a SQL-driven Oracle database I/O generator rather than a synthetic I/O generator. SLOB uniquely drives massive physical I/O using minimal host CPU resources, and it specifically targets the Oracle I/O subsystem. SLOB performs all of its physical I/O buffered in the Oracle SGA; no physical I/O is buffered in the Oracle PGA. SLOB possesses the following characteristics:

Supports testing Oracle logical read (SGA buffer gets) scaling

Supports testing physical, random single-block reads (db file sequential read/db file parallel read)

Supports testing random single block writes (db file parallel write)

Supports testing extreme REDO logging I/O

Consists of simple PL/SQL

Is entirely free of all application contention

We used SLOB to generate an OLTP-like workload on an eight-node Oracle RAC database to demonstrate sustained flash and storage array IOPS. The database performance metrics, including IOPS and latency, were gathered primarily from the AWR report, together with the ratio of I/O served from XtremCache versus the back-end array. In addition, we gathered metrics for I/O throughput at the server/database and storage levels.
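In practice, the AWR data is bracketed by snapshots taken immediately before and after each measured run; a minimal sketch of that collection step, using standard Oracle packages and the RAC-wide report script:

#!/bin/bash
# Take an AWR snapshot before and after a measured run, then generate the
# RAC-wide (global) AWR report; awrgrpt.sql prompts for the snapshot range.
sqlplus / as sysdba <<'SQL'
exec dbms_workload_repository.create_snapshot;
SQL

# ... run the measured SLOB workload here ...

sqlplus / as sysdba <<'SQL'
exec dbms_workload_repository.create_snapshot;
@?/rdbms/admin/awrgrpt.sql
SQL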

Notes: Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, the solution test workloads should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated. All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly. EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute.



Test objectives

The objectives of the tests were to measure the following:

Physical I/O scalability along with the scaling of the number of concurrent SLOB zero-think-time sessions (simulated concurrent users) and the number of RAC nodes. Multiple concurrent sessions (reader sessions) executing similar query SQL statements were run to validate a read-only workload, and multiple concurrent sessions (writer sessions) executing similar UPDATE SQL statements were run to validate the physical read/write workload.

After the node scalability test, we validated the performance improvement with different FAST VP policies enabled, including 10 percent, 50 percent, and 100 percent. During the FAST VP test, we built up a new baseline with a heavier workload and a broader active data set. This simulated the case where the hot data misses XtremCache and must be accessed from the back-end storage array.

Note: The percentage specified defines the maximum amount of hot data which can be promoted to the flash tier. For example, 10 percent means 10 percent of the data can be promoted to the flash tier.

Query-only test scenarios and methodology

In the node scalability test, XtremCache was enabled and FAST VP was disabled. Then we gradually increased the number of Oracle RAC database instances and the number of concurrent users, with each user running similar OLTP queries simultaneously.

When we added a RAC node, we also added additional resources, including CPU power and XtremSF cards. With the addition of each new server, we tested the system again by running a similar SLOB workload. For this test, workloads ran simultaneously on all the RAC nodes added.

We then increased the number of concurrent users and measured the performance scalability.

The test process included the following steps:

1. Run the query only workload with 64 concurrent simulated users (zero-think-time sessions) on the first node of an eight-node RAC database using SLOB.

2. Add the second node into the system, then run the workload with 64 concurrent users on each node; that is, with a total of 128 concurrent users running simultaneously on the two-node RAC database.

3. Add two additional nodes into the system, then run the workload with 64 concurrent users on each node separately; that is, with a total of 256 concurrent users running simultaneously on the four-node RAC database.

4. Add four additional nodes into the system, then run the workload with 64 concurrent users on each node separately; that is, with a total of 512 concurrent users running simultaneously on the eight-node RAC database.
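Each per-node run in the steps above was driven with SLOB. The following is a hedged sketch of a single-node, query-only invocation in the style of SLOB 2.x; parameter names and the runit.sh argument vary between SLOB releases:

#!/bin/bash
# Query-only SLOB run with 64 zero-think-time sessions against the local instance.
# Relevant slob.conf settings (other parameters keep their defaults):
#   UPDATE_PCT=0     -> query-only (100 would give the update-only workload)
#   RUN_TIME=3600    -> length of the measured run, in seconds
cd ~/SLOB             # assumed SLOB installation directory
./runit.sh 64         # 64 concurrent SLOB sessions on this node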

After the scalability test, we kept XtremCache enabled, enabled FAST VP with different policies, and ran the SLOB query-only workload to validate the performance improvement. During the FAST VP test, we increased the active data set so that the hot data could no longer be fully served from XtremCache, forcing the missed data to be read from the back-end VMAX array.

We recorded the performance statistics until the workload was stable. The test steps were:

1. Set the flash-tier percentage of the FAST VP policy to 10 percent, and run 512 concurrent users on the eight-node RAC database.

2. Set the flash-tier percentage of the FAST VP policy to 50 percent, and run 512 concurrent users on the eight-node RAC database.

3. Set the flash-tier percentage of the FAST VP policy to 100 percent, and run 512 concurrent users on the eight-node RAC database.

Query-only test results

Performance statistics were captured using Oracle Automatic Workload Repository (AWR) RAC reports. We observed the “physical reads” value in the AWR report to assess read IOPS statistics. The average query response time was calculated from the “db file parallel read” and “db file sequential read” records in the “Top Timed Events” section of the AWR report, as shown in Figure 5.

Figure 5. AWR RAC report snippet for read I/O response time calculation

We used the following logic to calculate the I/O latency:

For the “db file sequential read” event:

The total wait time is T1 which is 19,485.48 seconds, as shown in Figure 5.

The total number of waits is N1 which is 30,614,070, as shown in Figure 5.

For the “db file parallel read” event:

The total wait time is T2 which is 9,803.47 seconds, as shown in Figure 5.

The total number of waits is N2 which is 8,610,615, as shown in Figure 5.

The average read response time is (T1 + T2) / (N1 + N2), converted to milliseconds: (19,485.48 + 9,803.47) * 1,000 / (30,614,070 + 8,610,615) = 0.75 ms, as shown in Figure 5.
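The same blended read latency can be cross-checked outside of AWR by querying the cumulative wait statistics directly. The following query is a minimal sketch; it assumes access to GV$SYSTEM_EVENT, whose counters are cumulative since instance startup, so AWR interval deltas remain the more precise measure.

-- Blended average read latency (ms) across both read wait events on all RAC instances
-- (cumulative since instance startup)
SELECT ROUND(SUM(time_waited_micro) / 1000 / SUM(total_waits), 2) AS avg_read_ms
FROM   gv$system_event
WHERE  event IN ('db file sequential read', 'db file parallel read');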

Query-only test results


Scalability test – XtremCache enabled and FAST VP disabled

Table 17 depicts the physical read IOPS increase with the scaling of RAC database nodes.

Table 17. Scaling of nodes and resulting increases in IOPS

  Metrics                       1 node     2 nodes    4 nodes      8 nodes
  IOPS                          457,136    962,155    1,914,963    3,765,176
  Average response time (ms)    0.74       0.69       0.68         0.75

Figure 6 shows that IOPS increase as the number of RAC nodes scales, while the average response time remains under one millisecond.

Figure 6. Query only IOPS scaling along with node scaling

As Figure 6 shows, we achieved a total of 3,765,176 read IOPS and an average latency of 0.75 milliseconds with the eight-node RAC database when running 64 concurrent sessions executing similar query SQL statements on each node.

The IOPS increased nearly linearly with each additional RAC node added to the test environment. For example, the total IOPS of four database nodes reached 1,914,963. After we added another four database nodes, for a total of eight, the IOPS almost doubled to 3,765,176.

The read-hit ratio for XtremCache was about 98 percent for each cache device during the test; the remaining 2 percent of I/Os were served from the storage array. The statistics can be monitored with the following command:

vfcmt display -cache_dev <device>



FAST VP test – XtremCache enabled and FAST VP enabled

Table 18 shows that physical read IOPS increase as the percentage of the FAST VP policy increases for the flash tier.

Table 18. Query-only IOPS and latency with different FAST VP policies

  8-node workload        Read IOPS    Read latency (ms)
  Baseline               2,216,377    0.95
  FAST VP policy 10%     2,331,233    0.87
  FAST VP policy 50%     2,498,420    0.97
  FAST VP policy 100%    2,687,456    1.11

Figure 7 shows that IOPS increase as the percentage of the FAST VP policy increases for the flash tier.

Figure 7. Query-only IOPS increased with different FAST VP policies

As Table 18 and Figure 7 show, we achieved a total of 2,331,233 read IOPS and an average latency of 0.87 milliseconds when running 512 concurrent users on an eight-node RAC database with a 10 percent FAST VP policy. The IOPS increased steadily as the FAST VP flash percentage increased, and the latency remained stable at about one millisecond across all the tests.

Update-only test scenarios and methodology

During the scalability test, XtremCache was enabled and FAST VP was disabled. We gradually increased the number of RAC database nodes and ran multiple concurrent sessions, with each session running similar UPDATE SQL statements on the RAC database.


We decreased the buffer cache for each database instance to push a consistent write I/O workload to the back-end storage. The write workload was driven by the UPDATE SQL statement. Generally, it incurs the following operations sequentially:

1. Read the data blocks that need to be updated into the buffer cache.

2. Update the rows in the data blocks.

3. Commit the updated rows, which triggers LGWR to flush the redo entries to the online redo log files.

While the SQL UPDATE workload is running, the background DBWR process flushes the dirty blocks out of the buffer cache into the data files. Because we used a very small buffer cache, the data blocks were read into the buffer cache and written out of it soon after the rows were updated. Thus, each UPDATE operation caused physical reads, which were served by EMC XtremCache on a cache hit or by the back-end VMAX array on a cache miss.

Because XtremCache is a write-through cache, when the DBWR process wrote the updated data blocks out of the buffer cache, the blocks were written to the back-end VMAX array; the application I/O request completed only after the array acknowledged the write. During a write, the data is also written to XtremCache in parallel with being sent to the VMAX array, so subsequent reads can be served from the cache. The VMAX storage array continues to deliver persistent high availability, data integrity, and disaster recovery.
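The following sketch illustrates the shape of the update transaction described above. It is not the actual SLOB script; the table and column names are illustrative assumptions.

-- Illustrative update transaction: the target blocks are read into the buffer cache,
-- the rows are modified, and the commit triggers LGWR to flush the redo entries.
UPDATE cf1                                       -- assumed table name
SET    payload = DBMS_RANDOM.STRING('A', 128)    -- new row content generates data block and redo changes
WHERE  custid BETWEEN :low AND :high;
COMMIT;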

The test process included the following steps:

1. Run the update-only workload with 56 concurrent users on one RAC node using the scripts in SLOB.

2. Add one additional node into the system, then run the workload with 56 concurrent users; that is, with 28 concurrent users running simultaneously on each node.

3. Add two additional nodes into the system, then run the workload with 56 concurrent users; that is, with 14 concurrent users running simultaneously on each node.

4. Add four additional nodes into the system, then run the workload with 56 concurrent users; that is, with seven concurrent users running simultaneously on each node.

After the scalability test, we kept XtremCache enabled, enabled FAST VP with different policies, and ran the SLOB update-only workload to test the performance improvement. During the FAST VP test, we increased the active data set that needed to be updated to simulate XtremCache misses, which forced the I/O activity to the back-end VMAX array. We recorded the performance statistics until the workload was stable. The test steps were:

1. Run 384 concurrent users on an eight-node RAC database with FAST VP disabled to establish the baseline.

2. Enable a 10 percent FAST VP policy for the flash tier and run 384 concurrent users on the eight-node RAC database.


3. Enable a 50 percent FAST VP policy for the flash tier and run 384 concurrent users on the eight-node RAC database.

4. Enable a 100 percent FAST VP policy for the flash tier and run 384 concurrent users on the eight-node RAC database.

Performance statistics were captured using AWR reports. We read the “physical writes” row in the AWR report for the peak write IOPS statistics. Because the write workload is generated by UPDATE statements, as described previously, we also collected “physical reads” from the AWR report for the peak read IOPS caused by the write transactions.

As shown in Figure 8, we calculated the average write response time by dividing the “Total Wait Time (s)” by the “Waits” of the “db file parallel write” record in the “Top Timed Events” section of the AWR report. Similarly, we calculated the LGWR latency by dividing the “Total Wait Time (s)” by the “Waits” of the “log file parallel write” record in the same section.

Taking the following AWR snippet as an example, the total wait time of the “db file parallel write” wait event is 287.00 seconds, which is 287,000 ms, and the number of waits is 319,941; thus, the average write response time can be calculated as 287,000 / 319,941 = 0.90 ms. The LGWR latency is 1,087.81 * 1,000 / 1,323,505 = 0.82 ms.

Figure 8. UPDATE only write average response time measurement from the RAC AWR report
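As with the read latency, these write latencies can be cross-checked outside of AWR from the cumulative wait statistics. The following is a sketch only, with the same caveat that GV$SYSTEM_EVENT counters are cumulative rather than interval-based.

-- Average DBWR and LGWR write latencies (ms) across all RAC instances
SELECT event,
       ROUND(SUM(time_waited_micro) / 1000 / SUM(total_waits), 2) AS avg_wait_ms
FROM   gv$system_event
WHERE  event IN ('db file parallel write', 'log file parallel write')
GROUP  BY event;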

Update-only test results

Scalability test—XtremCache enabled and FAST VP disabled

Table 19 and Figure 9 show that the peak disk array write IOPS increase as the number of RAC nodes scales.

Table 19. Scaling of RAC nodes and resulting increases in peak disk array IOPS

  Item                       1 node    2 nodes   4 nodes    8 nodes
  Write IOPS                 26,950    50,110    95,932     153,276
  Read IOPS                  26,542    49,539    94,877     150,054
  Aggregate IOPS             53,492    99,649    190,809    303,330
  DBWR latency (ms)          0.30      0.40      0.30       0.90
  Read response time (ms)    0.90      0.80      0.20       0.30
  Redo size (MB/s)           20        37        71         115
  LGWR latency (ms)          0.55      0.54      0.66       0.82

Figure 9 shows the update-only read/write IOPS as the RAC nodes scale, while the average response time remains under one millisecond.

Figure 9. Update-only IOPS scaling along with RAC node scaling

During an UPDATE transaction, the back-end VMAX only needs to handle the write I/O, because the read I/O is cached and accelerated by XtremCache. Because of this, the solution can scale to accommodate a very heavy transaction workload, as confirmed in testing.

As shown in Table 19 and Figure 9, when running 56 concurrent sessions on an eight-node RAC database executing similar update SQL statements, we achieved 303,330 aggregate IOPS, including 153,276 write IOPS and 150,054 read IOPS that were part of the write transactions. The average write latency was 0.9 milliseconds. Because we used a very small SGA, almost no data was cached on the server, which generated a high physical write I/O workload.

The IOPS increased nearly linearly when additional RAC nodes were added into the workload. For example, the aggregate IOPS were 53,492 when running write workload on one RAC node, and this increased to 99,649 when running workload on two RAC nodes.

Redo size is also a key metric used to measure transaction capability. As demonstrated through testing, the workload on one node generated 20 MB/second of redo, which almost doubled to 37 MB/second with the workload running on two nodes. When we ran the workload on four nodes, the redo throughput almost doubled again to 71 MB/second. Transaction capability therefore scales along with node scaling.

FAST VP test—XtremCache enabled and FAST VP enabled

Table 20 shows that IOPS increase as the percentage of the FAST VP policy increases for the flash tier.

Table 20. Update-only workload IOPS with different FAST VP policies for flash tier

  8-node workload        Write IOPS   DBWR latency (ms)   Read IOPS   Read latency (ms)   Aggregate IOPS   Redo size (MB/s)
  Baseline               74,165       0.59                71,177      5.27                145,342          56
  FAST VP policy 10%     76,614       0.60                79,745      4.89                156,359          60
  FAST VP policy 50%     109,961      0.57                113,547     2.99                223,508          87
  FAST VP policy 100%    120,231      1.30                116,973     1.95                237,204          92

Figure 10 shows that the aggregated read/write IOPS increase as the percentage of the FAST VP policy for the flash tier increases.

Figure 10. Update-only aggregated read/write IOPS with different FAST VP policies

As Table 20 shows, we achieved a total of 79,745 read IOPS and 76,614 write IOPS with 384 concurrent users on an eight-node RAC database with the FAST VP policy set to 10 percent, an 8 percent increase in aggregate IOPS compared with the baseline.

When FAST VP policy was set to 50 percent, the aggregated IOPS increased to 223,508, a 53 percent improvement compared to the baseline.



From the test results, read latency decreased as the FAST VP flash percentage increased. For example, the read latency was 5.27 ms for the baseline and decreased to 2.99 ms when a 50 percent FAST VP policy was deployed, because more of the active data was promoted to SSD capacity. The write latency remained stable at about 0.6 ms for the baseline and for the 10 percent and 50 percent FAST VP policies. As a result, the aggregate IOPS increased as the FAST VP flash percentage increased.


Data warehouse query workload test

Overview

The DSS-like toolkit provides an Oracle data warehouse workload used to validate the performance of typical Oracle data warehouse workloads on the EMC VMAX 40K storage platform.

The schema in the kit has 12 tables including two fact tables—sales and returns. The remaining ten tables act as dimension tables. The two main fact tables are range-partitioned by date and sub-partitioned by hash on their join key. The database is an eight-node RAC database with size of 20 TB. Multiple concurrent users run a series of typical queries against the database. The throughput is measured during the test.
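The following DDL is a minimal sketch of the fact table layout just described. The column names, partition boundaries, and subpartition count are illustrative assumptions, not the toolkit's actual schema definition.

-- Illustrative composite-partitioned fact table: range-partitioned by date and
-- hash-subpartitioned on the join key shared with the other fact table
CREATE TABLE sales (
  sale_date  DATE         NOT NULL,
  item_id    NUMBER       NOT NULL,   -- join key used for hash subpartitioning
  store_id   NUMBER,
  amount     NUMBER(12,2)
)
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (item_id) SUBPARTITIONS 16
(
  PARTITION p2012 VALUES LESS THAN (DATE '2013-01-01'),
  PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01')
);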

Test objective

The objective of the test was to measure performance scalability as both the number of concurrent users and the number of RAC nodes scaled. The concurrent users were generated by the DSS-like workload toolkit, with each user running similar queries.

Test scenarios and methodology

We gradually increased the number of RAC nodes and ran concurrent users, with each user running similar DW queries and multiple users running simultaneously (an illustrative example of such a query follows the test steps below). We then added more concurrent users to the test and measured the performance scalability.

The test process included the following steps:

1. Run the DW query workload with one user on one RAC node using the scripts in the DSS-like toolkit.

2. Add one additional RAC node into the system, then run the workload with one user on each node; that is, with two concurrent users running simultaneously.

3. Add two additional RAC nodes into the system, then run the workload with one user on each of the four nodes separately; that is, with a total of four concurrent users running simultaneously.

4. Add four additional RAC nodes into the system to get eight servers, then run the workload with one user on each node separately; that is, with a total of eight concurrent users running simultaneously.

5. Repeat the preceding steps, running the DW query workload with 4, 16, and 64 concurrent users on each node.
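An illustrative example of the style of query each user runs is shown below. The DSS-like toolkit generates its own query set; the table and column names here are assumptions consistent with the schema sketch shown earlier.

-- Illustrative DW query: a date-range scan of the partitioned sales fact table joined
-- to a dimension table and aggregated by month; parallel execution drives the large reads
SELECT /*+ PARALLEL(s) */
       st.region,
       TRUNC(s.sale_date, 'MM') AS sales_month,
       SUM(s.amount)            AS revenue
FROM   sales  s
JOIN   stores st ON st.store_id = s.store_id
WHERE  s.sale_date BETWEEN DATE '2013-01-01' AND DATE '2013-06-30'
GROUP  BY st.region, TRUNC(s.sale_date, 'MM');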



Test results

Performance statistics were captured using AWR reports. We read the “physical read total bytes” statistic in the AWR report to derive the query throughput (GB/s).
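Outside of AWR, the same statistic can also be sampled directly from the dynamic performance views. The following is a minimal sketch; it reports the cumulative value since instance startup, and two samples taken an interval apart must be differenced and divided by the elapsed seconds to obtain GB/s.

-- Cumulative bytes read since instance startup, summed across all RAC instances
SELECT SUM(value) / POWER(1024, 3) AS total_gb_read
FROM   gv$sysstat
WHERE  name = 'physical read total bytes';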

Table 21 and Figure 11 depict how throughput increased as the RAC nodes scaled.

Table 21. User and server increase and corresponding throughput increase

  Users (per node)   1 server (GB/s)   2 servers (GB/s)   4 servers (GB/s)   8 servers (GB/s)
  1                  3.33              5.32               7.35               8.88
  4                  4.33              11.44              16.81              20.95
  16                 5.58              13.39              17.84              24.13
  64                 6.98              16.89              24.77              28.25

Figure 11. Query throughput scaling along with node scaling



Figure 12 shows how throughput increased when the number of RAC nodes scaled up to eight.

Figure 12. Scaling query throughput with 64 users on each server

The results confirm that the average throughput scales nearly linearly along with the increase of nodes and concurrent users. For example, when 64 users were running on two servers the average throughput was 16.89 GB/s, which increased to 24.77 GB/s when two additional servers were added into the environment. The throughput increased to 28.25 GB/s with eight servers running the workload.


Data Warehouse data loading test

Overview

Modern enterprise data warehouses (EDWs) require large, frequent data loads throughout the day. The 24x7 nature of the EDW no longer allows DBAs a long data-loading window. Therefore, it is important to simulate the impact of data extract, transform, and load (ETL) processes on the performance of the database.

This test scenario demonstrates the ETL processes on the production database and records the performance data, especially the throughput (physical write total megabytes per second), during the ETL load.

We used Oracle external tables for the data loading; the ORACLE_LOADER access driver loads data from the external tables into internal tables. The data comes from CSV flat files.

This test scenario shows the throughput scalability when data is loaded from external tables on the Oracle ACFS file system into the database.

Test objective

The objective of the test was to show the throughput scalability of the VMAX 40K, with the disk storage configuration used in this solution, as the data loading sessions were scaled out across the RAC nodes. Each session ran a similar ETL workload by loading CSV flat files into the database.

Test scenarios and methodology

This solution scenario demonstrated performance scalability on the VMAX 40K by loading data from external tables into the database. The test process included the following steps:

1. Run one user on one RAC node to load data from one external table. The session loads one CSV file with a size of 120 GB. The CSV file is located on the Oracle ACFS file system. The external table is created as follows:

create table sales_ext (
  id integer,
  …)
organization external (
  type oracle_loader
  default directory EXT_DIR
  access parameters (fields terminated by "|")
  location ('sales.csv'))
parallel reject limit unlimited;

The data is loaded from the external table as follows:

alter session enable parallel dml;
alter table sales parallel;
alter table sales_ext parallel;
insert /*+ append */ into sales select * from sales_ext;

Note: The table “sales” has the same structure as the table “sales_ext.” The data is loaded with a direct-path insert using the “append” hint, and multiple parallel execution servers are used for the data loading.



2. Add one additional RAC node into the system, then run the same data loading workload as the first step; that is, with two concurrent users running simultaneously.

3. Add two additional RAC nodes into the system, then run the same data loading workload with one user on each of the four servers; that is, with a total of four concurrent users running simultaneously.

4. Add four additional RAC nodes into the system, then run the workload with one user on each of the eight servers; that is, with a total of eight concurrent users running simultaneously.

Test results

We read the throughput (TB/hour) from the “physical write total bytes” statistic in the system statistics section of the AWR report. Table 22 and Figure 13 show how throughput increased as the RAC nodes were scaled.

Table 22. Throughput (TB/hour) increasing with additional RAC nodes

  Nodes                   1 node   2 nodes   4 nodes   8 nodes
  Throughput (TB/hour)    2.71     6.09      10.87     21.31

Figure 13. Data loading throughput scaling

The average throughput increased nearly linearly when the second RAC node was added and the number of data loading sessions doubled. For example, one session loading data from an external table on one RAC node achieved a throughput of 2.71 TB/hour; this increased to 6.09 TB/hour with two sessions running on two RAC nodes, one session per node. The throughput increased similarly as additional RAC nodes were added to the environment.




Higher throughput can be achieved if additional nodes and associated resources, including CPUs, HBA ports, and front-end ports, are added to the environment.


Conclusion

Summary

Implementing the EMC Proven High Performance Solution for Oracle RAC on VMAX with innovative, proven products such as VMAX enabled by FAST VP gives customers choices within an open infrastructure, enabling them to integrate easily into existing data center operations (people, process, and technology) while taking advantage of new technologies such as XtremCache, which now fully supports Oracle RAC. Customers can also achieve efficient resource utilization through virtualization and through database and application consolidation, and can independently scale capacity and processing capability without the limitations imposed by a single-purpose appliance.

As the customer's environment changes at any level, such as applications, databases, and non-database software, this open stack can align with the shifting technical demands imposed by business needs. The solution keeps the balance between OLTP and DW workloads while maintaining the protection and resiliency of the data. That adaptability, and the ability to apply the technology where it is needed, protects the capital investment and remains flexible as requirements change, without sacrificing other data center operations.

Core advantages

The EMC Proven High Performance Solution for Oracle RAC on VMAX includes the following core advantages:

Delivers the highest performance for mixed Oracle workload environments. The EMC Proven High Performance Solution for Oracle RAC on VMAX has demonstrated sustained metrics of over 3.7 million IOPS with sub-millisecond latency while executing mixed OLTP and data warehouse workloads, with a sustained throughput of 32 GB/s and a data load rate of 21 TB/hour. This performance is achieved by using open, best-in-class components at the compute, network, and storage layers.

Uses the following EMC technology enablers in the reference architecture:

EMC VMAX 40K with FAST VP enabled

XtremSF

XtremCache

Provides full support for EMC Performance Boost, HA, continuous availability, and replication technologies

Findings

OLTP test results

OLTP test results demonstrated that the solution:

Increases IOPS for OLTP workloads without FAST VP. During RAC node scaling:

The read IOPS increased from 457,136 to 1,914,963 when RAC nodes were scaled out from one to four, and it increased to 3,765,176 when eight nodes ran the workload together.

The aggregate read/write IOPS for the UPDATE transaction workload was 53,492 on one RAC node with 20 MB/s redo throughput, and it increased to 190,809 with 71 MB/s redo throughput when four RAC nodes were added. The IOPS increased to 303,330 and generated 115 MB/s redo throughput when running on eight RAC nodes.

With FAST VP enabled, the solution increases IOPS for OLTP workloads, yielding the following results:

The read IOPS increased from 2,216,377 to 2,331,233 when setting flash percentage of FAST VP policy to 10 percent, and increased to 2,687,456 when setting it to 100 percent.

The aggregate read/write IOPS for write workload increased from 145,342 to 156,359 when setting flash percentage of FAST VP policy to 10 percent, and increased to 237,204 when setting it to 100 percent.

DW test results

The DW test results show that the average query throughput for the DW workload increased as the RAC nodes scaled:

The average query throughput increased when additional RAC nodes were added. For the query workload, it increased from 6.98 GB/s to 24.77 GB/s when the RAC nodes were scaled from one to four, and increased to 28.25 GB/s when eight nodes were used.

The average throughput of the data loading increased linearly along with the addition of the RAC nodes. The throughput was 2.71 TB/hour for one RAC node and it increased to 10.87 TB/hour when running four RAC nodes. The throughput increased to 21.31 TB/hour with eight RAC nodes.

This solution is offered as a foundation that can be scaled in a flexible, predictable, and near-linear way, by adding additional node resources including CPUs and memory, HBA ports, and front-end ports, to provide higher IOPS and throughput based on the configuration described in this white paper.


References

The following documents provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative.

EMC documentation

EMC Infrastructure for High Performance Microsoft and Oracle Database Systems

Introduction to EMC XtremCache

EMC XtremCache Data Sheet

In addition, XtremCache documentation is available at EMC Online Support: https://support.emc.com/products/25208_XtremCache-Cache/Documentation/

A video entitled Introduction to EMC XtremCache for Oracle Real Application Clusters is available at: https://community.emc.com/videos/6740

Oracle documentation

For additional information, see the following documents:

Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux

Oracle Database Installation Guide 11g Release 2 (11.2) for Linux



Appendix: Configuring XtremCache devices

Use the following steps to configure XtremCache devices:

1. Create two cache devices on two 700 GB EMC XtremSF flash cards using these commands:

vfcmt add -cache_device /dev/rssda

vfcmt add -cache_device /dev/rssdb

2. Because source devices are not automatically assigned to cache devices, after the cache devices are created, add all of the database source LUNs (data LUNs only) to one of the cache devices using this command:

vfcmt add -source_device /dev/emcpowerXX

3. Because there are two cache devices, after all source LUNs are added to one cache device, use this command to move half of the LUNs to the other cache device so that the workload on the two cache devices is balanced:

vfcmt migrate -source_dev /dev/emcpowerXX -existing_cache_dev /dev/rssda -new_cache_dev /dev/rssdb

4. After all source devices are added, use the following command to validate the status of XtremCache:

vfcmt display -all