An Oracle White Paper
September 2010
Oracle Integrated Stack Testing Hardware. Software. Tested Complete. Reference Configurations Introduced
Oracle Integrated Stack Testing Reference Configurations Introduced
Disclaimer
The following is intended to outline our general product direction. It is intended for information purposes
only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or
functionality, and should not be relied upon in making purchasing decisions. The development, release, and
timing of any features or functionality described for Oracle’s products remains at the sole discretion of
Oracle.
Introduction
The Reference Configurations
Overview of Test Approach
    Scope
    Storage and network infrastructure
    General
    Configuration building and interoperability tests
    Error and Fault injection tests
    Out-Of-Box (OOB) Performance tests
    Stability tests
Reference Configuration 1: PeopleSoft Campus on Sun SPARC Enterprise M9000
    Stack Components
    Setup of PeopleSoft Campus on Sun SPARC Enterprise M9000 Server
    Upgrade Testing
Reference Configuration 2: Siebel Customer Relationship Management (CRM) on a SPARC T3-1 with Oracle RAC
    Stack Components
    Setup for SPARC T3-1 Oracle RAC and Siebel CRM
Reference Configuration 3: Oracle OLTP, Oracle WebLogic and Industry Standard Java EE Benchmark SPECjEnterprise2010 consolidated on a SPARC T3-1
    Stack Components
Reference Configuration 4: Siebel CRM on Sun SPARC T-Series Server with Oracle RAC
    Stack Components
    Setup the T-Series Oracle RAC
Reference Configuration 5: Oracle VM on 2 Sun Fire X4800 OVM Servers utilizing VM Templates
    Stack Components
    Setup OVM with 2 Node RAC 11g R1 and PeopleSoft HCM 9.1 Guest Templates
Findings
    Upgrade Suggestions
    Virtualization Suggestions
    Interoperability
Conclusion
References
    Servers
    Storage
    Software
Appendix A
Introduction
Oracle provides the world’s most complete, open, and integrated business software and
hardware systems, with more than 370,000 customers—including 100 of the Fortune 100—
representing a variety of sizes and industries in more than 145 countries around the globe.
Oracle's product strategy provides flexibility and choice to our customers across their IT
infrastructure. Now, with Sun server, storage, operating-system, and virtualization technology,
Oracle is the only vendor able to offer a complete technology
stack in which every layer is integrated to work together as a
single system.
Only Oracle can offer this stack advantage to its customers
through deep and seamless integration between the tiers that
our competitors cannot match. This whitepaper describes the
testing and validation of five reference configurations.
Oracle Integrated Stack Testing (OIST) ensures that all the
hardware and software components within the reference
configurations interoperate and perform well together. For IT
managers planning to purchase a new application to meet
business needs, the reference configurations documented in this paper provide a starting point
for solution architecture discussions. For IT managers bringing a new technology stack into
production, these reference configurations, which have been fully qualified by Oracle, reduce
in-house testing, bringing production systems into deployment sooner and lowering costs
for the customer.
The reference configurations featured in this document have been strategically chosen. They
cross both SPARC (M-Series and T-Series) and x86 processor-based servers.
The Reference Configurations
The reference configurations include:
1. PeopleSoft Campus on Sun SPARC Enterprise M-Series
Patch, software upgrade, and hardware upgrade testing is a key focus for this
reference configuration. Minimizing database and application downtime, even during
system upgrades, is critical. As such, the upgrade tests occur while live instances of
Oracle 11gR1 and PeopleSoft Campus Enterprise are servicing simulated client
activity.
2. Siebel Customer Relationship Management (CRM) on a SPARC T3-1 with Oracle
RAC
This reference configuration uses a two-phase approach. It was used to
demonstrate not only the interoperability of the components, but also the ease of
scaling such a configuration. The first phase consolidates Oracle Database 11gR2 and
Siebel CRM on a single SPARC T3-1. The second phase demonstrates
one possible option for scaling out to a 2-node Oracle RAC configuration while
minimizing system downtime during the scaling.
3. OLTP and the Industry Standard Java EE Benchmark SPECjEnterprise2010
consolidated onto a SPARC T3-1
The key focus of this reference configuration is to highlight some of Oracle's latest
products and demonstrate that the complete Oracle Stack has already been validated
with these components. This configuration uses Oracle 11gR2 Database, and Oracle
11gR1 Java EE WebLogic Servers. The configuration incorporates the SPARC T3-1
system, new features of Oracle VM for SPARC 2.0 and the simplicity of the Sun
Storage 7410C.
4. Siebel Customer Relationship Management (CRM) on Sun SPARC Enterprise T
Series Server with Oracle RAC
The integration of Oracle RAC with the T-Series SPARC servers running Oracle VM
for SPARC version 1.3 is the key focus for this reference configuration as
demonstrated by Siebel 8.0 Platform Sizing and Performance Program (PSPP)
Benchmark workload.
5. Oracle VM on x86 with 2 Sun Fire X4800 OVM Servers utilizing Oracle Templates
This reference configuration was used to demonstrate the ease of deploying many
Guest hosts with Oracle VM Templates on the Sun Fire X4800 Server.
Two Oracle Stacks were configured:
A 2-node Oracle 11gR1 RAC on OEL 5.4 using the Oracle VM RAC Template in
an HA OVM environment with a Swingbench workload
Human Capital Management (HCM) 9.1 using the Oracle VM PeopleSoft
Templates.
VM templates provide the capability to rapidly deploy Guests on the X4800, and
provide a means to create custom templates for scaling and performance.
Overview of Test Approach
Scope
OIST is a long term, ongoing effort to validate Oracle's current technology and installed base in a
dynamic, continually updated environment. OIST is being adopted throughout Oracle's Quality
Engineering organizations. The OIST tests focus on interoperability of the Oracle Stack components
throughout their lifetime. OIST tests do not, however, provide full functional coverage for each
component of the stack, nor does OIST replace any existing best-practice guides. References are
provided at the end of this document; for more information, please see www.oracle.com.
Storage and network infrastructure
SAN
Brocade 5300 8 Gb/s Fibre Channel (FC) switches form two independent fabrics, with each initiator-target pair in a separate zone. Two storage solutions were used as part of the reference configurations:
Sun Storage 7410C
Dual head, for high availability
8 Gb Fibre channel connections
Brocade 5300 switches with 8Gb SFPs.
LUN mapping was done using WWNs within initiator groups.
Four 24 disk JBODs attached.
The pool definitions are:
Database pool
Data = Mirrored
Logs = striped
Block sizes determined on a per LUN/file system basis for each database application,
typically 8KB
LUNs mapped on initiator group basis
Boot Pool
Data = Triple parity and wide stripes
Log = striped
Block sizes determined on a per LUN/file system basis for each application, typically 128KB
LUNs mapped on initiator group basis
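The pool definitions above correspond to standard ZFS layouts. As an illustration only (the Sun Storage 7410C is actually configured through its management interface, and the device names below are hypothetical), equivalent pools could be built like this:

```shell
# Database pool: mirrored data vdevs, striped log devices, 8KB records
zpool create dbpool \
    mirror c0t0d0 c0t1d0 \
    mirror c0t2d0 c0t3d0 \
    log c0t4d0 c0t5d0
zfs set recordsize=8k dbpool

# Boot pool: triple-parity wide stripe, striped log, 128KB records
zpool create bootpool \
    raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
    log c1t8d0
zfs set recordsize=128k bootpool
```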
Sun Storage 6780
Dual controllers
8Gb FC
Dual parallel SAN connections
Eight drive trays
LDOMS Boot disks:
These are in a single pool per LDOM.
Number of disks: 6
RAID level: 5
Segment Size: 512K
Tray Loss Protection: YES
Read Ahead: Enabled
NOTE: These are setup with a 2 to 1 ratio of pool size to volumes. RAID 5 is used to maximize
the storage while still giving great performance.
This size varies based on the application being used.
DB Volumes:
Number of disks: A 10 to 1 ratio of total pool size to volumes
RAID level: 5
Segment Size: Will vary based on the specific needs of the application.
Tray Loss Protection: YES
Read Ahead: Enabled
Network
The core of this subnet is a Cisco 4000 series blade switch populated with multiple 48-port GbE
blades. LACP was used for higher bandwidth on some configurations. The reference configurations
used either their internal 1Gb connections or the quad port NIC.
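Where LACP was used, the aggregation can be sketched with the Solaris 10 dladm command; the interface names and addresses below are illustrative, not the lab's actual values:

```shell
# Create an 802.3ad (LACP, active mode) aggregation with key 1
dladm create-aggr -l active -d e1000g0 -d e1000g1 1
# Plumb and address the aggregated link
ifconfig aggr1 plumb 192.168.10.21 netmask 255.255.255.0 up
# Verify member links and LACP state
dladm show-aggr
```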
General
The OIST reference configurations undergo a standard and rigorous set of tests, including:
Configuration building and interoperability tests
Each reference configuration requires hardware and software setup followed by focused
interoperability testing of two or more of the Oracle Stack components. Much like building a
house, one starts with the foundation, then adds the frame, the roof, and the walls. In these reference
configurations, we start with the main components (the server and operating system), add option cards
and the network to validate the interoperability of these components, then add the storage and
SAN solution, and finally add the database and applications that exercise the entire Oracle Stack.
The configuration and interoperability testing for each reference configuration are described later.
Error and Fault injection tests
For our customers, minimizing database and application downtime, even during hardware failures, is
critical. As such, the fault injection tests are conducted while one or more live instances of Oracle 11g are
servicing an application or benchmark workload. These tests target the server’s processor (CPU),
memory (DIMMs), and IO bus.
OIST tests include correctable errors to validate error diagnosis, reporting, and handling, as well as
uncorrectable errors to validate diagnosis, reporting, recovery, and consistency. For systems with
redundant components, such as multipathing and Oracle RAC configurations, the testing verifies that
the surviving path and the surviving nodes continue to function as expected with minimal interruption.
Internally available error and fault injection tools are used to replicate real customer failure scenarios.
Example error/failure scenarios include:
ERROR/FAULT EVENT -> EXPECTED BEHAVIOR

CORRECTABLE ERRORS (CE):
Inject correctable errors to offline CPU strand -> The system continues servicing workload without interruption.
Inject correctable errors to offline CPU core -> The system continues servicing workload without interruption.
Inject correctable errors to offline CPU chip -> The system stalls. After repairing the faulty resource, workload resumes.
Inject correctable errors to offline memory page -> The system continues servicing workload without interruption.
Inject correctable errors to offline memory bank -> The system continues servicing workload without interruption.
Inject correctable errors to offline DIMM -> The system stalls. After repairing the faulty resource, workload resumes.
Inject correctable errors into the PCI Express root complex -> The system continues servicing workload without interruption.
Inject fabric correctable errors -> The system continues servicing workload without interruption.

UNCORRECTABLE ERRORS (UE):
Inject UE to offline CPU core and panic a RAC clustered node -> Only a single domain is impacted. The RAC domain panics and recovers, while workload is re-distributed to the other clustered nodes.
Fault Isolation: Inject UE to offline CPU core and panic a non-clustered IO domain -> Only a single domain is impacted. The non-RAC domain panics and recovers. The RAC and other domains remain untouched.
CMT: Inject UE to offline CPU core and panic a control domain -> Only the control domain panics and recovers. The IO domain running the RAC cluster remains untouched.
Inject UE to offline memory bank and panic a RAC clustered IO domain -> Only a single domain is impacted. The RAC domain panics and recovers, while workload is re-distributed to the other clustered nodes.
Fault Isolation: Inject UE to offline memory bank and panic a non-clustered IO domain -> Only a single domain is impacted. The non-RAC domain panics and recovers. The RAC and other domains remain untouched.
CMT: Inject a UE to offline a memory bank and panic a control domain -> Only the control domain panics and recovers. The IO domain running the RAC cluster remains untouched.
Inject root complex UE to panic a non-clustered IO domain -> Only a single domain is impacted. The non-RAC domain panics and recovers. The RAC and other domains remain untouched.
Inject fabric UE to panic a RAC clustered node -> Only a single domain is impacted. The RAC domain panics and recovers, while workload is re-distributed to the other clustered nodes.
Out-Of-Box (OOB) Performance tests
When the reference configurations are built, selected software tools, such as customer load generators
and benchmarks, are used to stress the systems, to ensure the expected performance levels are
sustained or improved, and to establish the baseline for subsequent tests and measurements.
Each component subsystem's performance is validated against specification. For
network cards this means the card is driven to line speed. Host bus adapters are tested to ensure that
the data transfer rates are as expected. File system IO is tested to ensure that it doesn't add any
bottlenecks, and memory bandwidth is tested to ensure it performs according to specification.
Stability tests
Another important piece of the OIST test coverage is the maintenance of the configuration and the
stack workload. Once the configuration is built and initial testing described above is completed, the
configurations continue to be exercised. Depending on the components of the configuration, other
functional test cases continue to be introduced to replicate real customer scenarios, and the stack
continues to be tested with an active workload for extended periods.
Reference Configuration 1: PeopleSoft Campus on Sun SPARC Enterprise M9000
Stack Components
Hardware Components
Sun SPARC Enterprise M9000 Server, initially populated with SPARC64 VI processors, which are upgraded to SPARC64 VII as part of the upgrade testing
8x Sun SG-XPCIE2FC-QF8-Z 8 Gbps Fibre Channel Host Bus Adapters (HBAs)
IOU Device Mounting Card A (IOUA)
Sun Storage 7410C
2x Brocade 5300 SAN Switches
Cisco 4000-Series Blade Switch

Software Components
Oracle Solaris 10 Operating System at various update and patch levels throughout upgrade testing
Oracle Solaris Live Upgrade (LU) 2.0
Oracle Database 11gR1
PeopleSoft Campus Enterprise
Figure 1 Architectural Overview of SPARC M9000 Enterprise Server with the Sun Storage 7410C solution
Setup of PeopleSoft Campus on Sun SPARC Enterprise M9000 Server
Oracle has many customers running their own
Oracle Stack on the M-Series systems. In many
cases, these customers are concerned about, or
unsure about, the upgrade process to move their
existing Oracle Stack to newer, more feature rich
and better performing components. Therefore,
this configuration starts with components that
have been released for some time and moves the
configuration to more recent hardware and
software components.
The M-Series reference configuration provides an
example of a large scale Oracle database
deployment with the PeopleSoft Campus application. The test case incorporates both hardware and
software upgrades. The configuration utilizes the M9000 with the Sun Storage 7410C solution, which is
connected to multiple Brocade 5300 SAN switches configured as two parallel SANs. The initial system
configuration consisted of a single hardware domain containing two SPARC64 VI processor modules,
with the intention of upgrading the existing processor modules to SPARC64 VII and then increasing
the number of processor modules in the domain. The root disk is mirrored across the IOUs.
The network connection is through the internal NICs across all IOUs.
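Mirroring the root disk across IOUs can be done with Solaris Volume Manager; the sketch below is one common way to do it, with hypothetical metadevice and disk names (the document does not state which volume manager was used):

```shell
# State database replicas on disks behind both IOUs
metadb -a -f -c 3 c0t0d0s7 c2t0d0s7
metainit -f d11 1 1 c0t0d0s0     # submirror on the first IOU's disk
metainit d12 1 1 c2t0d0s0       # submirror on the second IOU's disk
metainit d10 -m d11             # one-way mirror for root
metaroot d10                    # updates /etc/vfstab and /etc/system
# After the required reboot:
metattach d10 d12               # attach second submirror; resync begins
```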
Upgrade Testing
Upgrade testing was the key focus of the PeopleSoft Enterprise Campus reference configuration. For
our customers, minimizing database and application downtime, even during system upgrades, is critical.
As such, the upgrade tests conduct hardware upgrades while live instances of Oracle 11gR1 and
PeopleSoft Campus Enterprise are servicing simulated client activity. The Oracle Solaris installation is
migrated between Oracle Solaris update levels using Solaris Live Upgrade technology to minimize
application downtime. In this reference configuration, all hardware and software changes were made
to a single domain within the M9000 server.
The upgrade testing timeline shown below in Figure 2 outlines fully qualified upgrade paths.
Milestones written in dark blue refer to software upgrades and milestones written in green refer to
hardware upgrades. Each upgrade milestone is numbered and described in further detail below the
figure.
Figure 2. Upgrade testing timeline with milestones
It should be noted that any Oracle-supported server can be substituted for the M9000 in the following
scenarios. Further, any supported hardware or software upgrade path can be substituted for the paths
described below.
Upgrade Testing Milestone Details
Note: All hardware and software upgrades are performed while the Oracle 11gR1 database is actively
servicing simulated client activity.
1. Initial configuration details for the system are:
Table 1. Configuration (1) Details
M9000 hardware quantity
CMUs 2
IOUs 2
SPARC64 VI 2.28 GHz Processors 8
SPARC64 VII 2.88 GHz Processors 0
System Memory 256 GB
8 Gb/s HBAs 2
1 GbE NIC 2 (aggregated)
Software version
Operating System Oracle Solaris 10 Update 3 with EIS patches
Database Oracle 11g R1
2. The Oracle Solaris 10 Update 3 operating system is upgraded to Oracle Solaris 10 Update 6
using Live Upgrade. Solaris Live Upgrade enables the operating system to continue to run
while an administrator upgrades the Operating System, applies patches or does routine
maintenance on the inactive or duplicate boot environment. When satisfied with the process,
the administrator can simply reboot the system to activate the updated operating environment.
For detailed Solaris Live Upgrade procedures please see the reference section.
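The Live Upgrade flow just described can be sketched as follows; the boot environment names, target device, and media path are hypothetical:

```shell
# Build an alternate boot environment on a spare slice
lucreate -n s10u6 -m /:/dev/dsk/c0t1d0s0:ufs
# Upgrade the inactive boot environment to Solaris 10 Update 6
luupgrade -u -n s10u6 -s /mnt/s10u6-media
# Mark the upgraded boot environment as active
luactivate s10u6
# Reboot into the new environment when ready
init 6
```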
3. Both CMUs' SPARC64 VI 2.28 GHz (dual-core) processors are replaced with SPARC64 VII
2.88 GHz (quad-core) processors. Dynamic Reconfiguration allowed both new CMUs to be
added to the domain with the application online and under load. Once the new CMUs were
operational, the root file system mirror was migrated through multiple steps of breaking the
mirror and re-silvering on disks connected to the new CMUs. The NICs on both new CMUs
were added to the link aggregation and the original NICs were removed. Finally, the original
CMUs were removed from the domain using Dynamic Reconfiguration.
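On the M-Series, Dynamic Reconfiguration of boards is driven from the XSCF service processor; a hedged sketch of the commands involved, with illustrative XSB and domain IDs, looks like this:

```shell
# Configure the new CMU/IOU pair as an XSB (illustrative board number)
XSCF> setupfru -x 1 sb 2
# Add the XSB to the running domain 0 without stopping it
XSCF> addboard -y -c configure -d 0 02-0
# Later, remove an original board from the domain
XSCF> deleteboard -y -c unassign 00-0
```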
Changed components denoted by: C
Configuration details:
Table 2. Configuration (3) Details
M9000 hardware quantity
CMUs 2
IOUs 2
SPARC64 VI 2.28 GHz Processors C 0
SPARC64 VII 2.88 GHz Processors C 8
System Memory 256 GB
8 Gb/s HBAs 2
1 GbE NIC 2 (aggregated)
Software version
Operating System C Oracle Solaris 10 Update 6 with EIS patches
Database Oracle 11g R1
4. The Oracle Solaris 10 Update 6 operating system is upgraded to Oracle Solaris 10 Update 8
once again using Solaris Live Upgrade to minimize downtime.
5. Two additional CPU/memory board units (CMUs), each containing four SPARC VII
processors and 256 GB RAM, are installed in the M9000 and the new hardware resources are
made available to Oracle Solaris.
Configuration details:
Table 3. Configuration (5) Details
M9000 hardware quantity
CMUs C 4
IOUs C 4
SPARC64 VI 2.28 GHz Processors 0
SPARC64 VII 2.88 GHz Processors C 16
System Memory C 512 GB
8 Gb/s HBAs C 4
1 GbE NIC C 4 (aggregated)
Software version
Operating System C Oracle Solaris 10 Update 8 with EIS patches
Database Oracle 11g R1
6. Two more CMUs are installed in the M9000 and the new hardware resources are made
available to Oracle Solaris.
Configuration details:
Table 4. Configuration (6) Details
M9000 hardware quantity
CMUs C 6
IOUs C 6
SPARC64 VI 2.28 GHz Processors 0
SPARC64 VII 2.88 GHz Processors C 24
System Memory C 768 GB
8 Gb/s HBAs C 6
1 GbE NIC C 6 (aggregated)
Software version
Operating System C Oracle Solaris 10 Update 8 with EIS patches
Database Oracle 11g R1
7. Two final CMUs are installed in the M9000 and the new hardware resources are made
available to Oracle Solaris.
Final configuration details:
Table 5. Configuration (7) Details
M9000 hardware quantity
CMUs C 8
IOUs C 8
SPARC64 VI 2.28 GHz Processors 0
SPARC64 VII 2.88 GHz Processors C 32
System Memory C 1024 GB
8 Gb/s HBAs C 8
1 GbE NIC C 8 (aggregated)
Software version
Operating System C Oracle Solaris 10 Update 8 with EIS patches
Database Oracle 11g R1
Reference Configuration 2: Siebel Customer Relationship Management (CRM) on a SPARC T3-1 with Oracle RAC
Stack Components
Hardware Components
SPARC T3-1 server (1x in phase 1, 2x in phase 2), each with 1x 1.65 GHz UltraSPARC T3 processor (16 cores, 8 threads per core), 32 GB memory (16x 2 GB DIMMs), 2x 300 GB 10K RPM SAS HDDs
Sun SG-XPCIE2FC-QF8-Z 8 Gbps Fibre Channel Host Bus Adapters (HBAs) (3x in phase 1, 5x in phase 2)
5x X4447A-z quad 1 Gb/sec Ethernet UTP cards
Sun Storage 7410C
2x Brocade 5300 SAN Switches
Cisco 4000-Series Blade Switch
5x Windows client systems

Software Components
Oracle Solaris 10 Update 9
Oracle VM for SPARC 2.0
Oracle Database 11gR2 (11.2.0.1) – RAC and Database
Siebel CRM 8.1.1.0
Siebel 8.0 PSPP
HP LoadRunner Software 8.1
Oracle Client 11g R1
Figure 3 Phase 1: SPARC T3-1 single server prior to scaling out.
Figure 4 Phase 2: SPARC T3-1, Oracle RAC scaled out configuration
Setup for SPARC T3-1 Oracle RAC and Siebel CRM
The reference configuration featured here is
the Sun Storage 7410C and the SPARC T3-1
Server. The SPARC T3-1 Server comes pre-configured with Oracle Solaris 10 Update 9 and Oracle VM for SPARC 2.0. This stack uses
Oracle Grid and Database 11gR2 and an
application stack with Siebel CRM, Siebel
Enterprise Server, Siebel Gateway Server and
Siebel Web Server.
With the SPARC T3-1 being a new platform,
the goal of this configuration was to
demonstrate an effective way to use and
scale-out the system for consolidation of
multiple applications.
The SPARC T3-1 comes pre-configured with Oracle Solaris 10 Update 9 and Oracle VM for SPARC 2.0.
The system IO configuration was populated with the following components, providing the ability to
directly attach the Sun Storage 7410C and create 3 DirectIO domains, in addition to the Control Domain.
Table 6. System connectivity
Physical path pseudonym Comments
pci@400/pci@1/pci@0/pci@8 /SYS/MB/RISER0/PCIE0 ldg1: SG-XPCIE2FC-QF8-Z Sun Storage 7410C FC-SAN boot and DB disks
pci@400/pci@2/pci@0/pci@8 /SYS/MB/RISER1/PCIE1 ldg1: X4447A-Z1 4 port 1Gb network card
pci@400/pci@1/pci@0/pci@6 /SYS/MB/RISER2/PCIE2 ldg2: SG-XPCIE2FC-QF8-Z Sun Storage 7410C FC-SAN boot and DB disks
pci@400/pci@2/pci@0/pci@c /SYS/MB/RISER0/PCIE3 ldg2: X4447A-Z1 4 port 1Gb network card
pci@400/pci@1/pci@0/pci@0 /SYS/MB/RISER1/PCIE4 ldg3: SG-XPCIE2FC-QF8-Z Sun Storage 7410C FC-SAN boot and DB disks
pci@400/pci@2/pci@0/pci@a /SYS/MB/RISER2/PCIE5 ldg3: X4447A-Z1 4 port 1Gb network card
pci@400/pci@1/pci@0/pci@4 /SYS/MB/SASHBA0 Control Domain: 2x boot disk using HW RAID1
pci@400/pci@2/pci@0/pci@4 /SYS/MB/SASHBA1 empty
pci@400/pci@2/pci@0/pci@6 /SYS/MB/NET0 Control Domain: Network
pci@400/pci@2/pci@0/pci@7 /SYS/MB/NET2 Control Domain: Network
There are two phases in this OIST effort. Phase 1 includes the entire stack on a single server, using
Siebel PSPP to drive the workload. For phase 2, the system is upgraded with an additional SPARC T3-1
server used for Oracle RAC. This second phase demonstrates how the system can be scaled.
For phase 1, the SPARC T3-1 IO configuration with 6 PCI Express slots provides direct attachment to the
Sun Storage 7410C storage for 3 DirectIO domains (similar to the IO configuration used in Reference
Configuration 3), in addition to the Control Domain.
For phase 2, each SPARC T3-1 server utilized only 2 DirectIO domains. By adding a second node to the
DB server, this configuration allows the allocation of more memory and vCPUs to the Siebel Application
servers, which in turn enabled the Siebel PSPP test to handle more users. For both phases, HP
LoadRunner software was installed on the Windows client machines to generate virtual users for the test runs.
The DirectIO domain is a new feature of Oracle VM for SPARC 2.0. It provides the ability to assign an
individual PCIe endpoint device (PCIe card) to a guest domain and hence remove IO virtualization
overhead. Each of the FC HBA cards utilized both ports, each port going to a different Brocade 5300
FC switch. Similarly, each of the 1 Gb network cards was configured with port 0 and port 1. This IO
configuration provides redundancy at the hardware level.
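A DirectIO assignment with the ldm command might look like the sketch below, using a slot pseudonym from the connectivity table; the exact device names on a given system come from 'ldm list-io', and a control-domain reboot may be required after releasing a slot:

```shell
# List PCIe endpoint devices and their current owners
ldm list-io
# Release the slot from the control domain
ldm remove-io /SYS/MB/RISER0/PCIE0 primary
# Assign the FC HBA slot directly to guest domain ldg1
ldm add-io /SYS/MB/RISER0/PCIE0 ldg1
```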
Phase 1 Single node
Oracle Siebel CRM software allows businesses to set up scalable applications. Datacenter resources can
be scaled out (by adding more server nodes) or scaled up (by increasing resources such as vCPUs and
memory). This solution consolidates the Web, Gateway, Application, and Database tiers on a
single SPARC T3-1 server by using Oracle VM for SPARC to isolate the domains, without utilizing
additional lab space or power. Each domain runs its own independent copy of Oracle Solaris. System
resources can be shifted between tiers manually, or automatically with the Dynamic
Resource Management feature.
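Shifting resources manually between tiers is done with the ldm command; a minimal sketch, with domain names matching the phase 1 layout and illustrative values:

```shell
# Grow the Siebel App & Gateway domain's CPU and memory allocation
ldm set-vcpu 48 ldg2
ldm set-memory 16G ldg2
# Confirm the new allocation across all domains
ldm list
```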
Phase 1 LDom Resource Allocation

              Primary   ldg1 (Oracle DB Server)   ldg2 (Siebel App & Gateway)   ldg3 (Siebel Web Server)
vCPU          4         10                        48                            10
Memory (GB)   2         10                        16                            3
PCIe Slot(s)  -         PCIE0, PCIE1              PCIE2, PCIE3                  PCIE4, PCIE5
Siebel Database Tier – DirectIO Domain1 (ldg1)
This domain is the Database server. The Siebel Database Server stores CRM database tables, indexes
and seed data which is used by Siebel Clients and the Siebel Enterprise Server. Oracle database 11gR2
software is installed using the Oracle Universal Installer (OUI). The installation of the Oracle Database
software requires a Unix user to be created; this user must have 'dba' as the default group and
'oinstall' as the supplementary group.
Siebel Enterprise Server and Siebel Gateway Server Tier – DirectIO Domain2 (ldg2)
This domain is the Siebel Enterprise Server and Siebel Gateway Server. This tier provides services on
behalf of the Siebel Web Clients. Oracle Client 11gR1 software was installed as well as the Siebel
Enterprise and Siebel Gateway Servers. The following three components were configured during the
installation:
Configuration of Gateway name server, Siebel Server on Nameserver and Webprofile
Database configuration
Siebel Management Agent
Siebel Web Tier – DirectIO Domain3 (ldg3)
The third DirectIO domain had the Siebel Web Server installed. This tier processes requests from
Web Clients and interfaces with the Siebel Application Server and Gateway Server Tier. Siebel Web
Server Extensions is installed and the Oracle iPlanet Web Server is configured at this tier.
Siebel Web Client – Windows Client Machines
Web Clients provide user interface functionality such as Siebel Web Client, Siebel Wireless Client,
Siebel Mobile Client, and Siebel Handheld Client. During both the phase 1 and phase 2
configurations, HP LoadRunner (version 8.1) was used to simulate the load generated by different
sized end-user populations. HP LoadRunner software was installed on the five Windows client systems.
Phase 2 Scaling Out and Up (2-Node RAC)
In this second phase of testing, two main changes were made: (1) the DB server had Oracle Real
Application Clusters (RAC) installed in a 2-node configuration to provide a highly scalable and available
database solution across the two SPARC T3-1 systems; and (2) the Siebel Web Server function
was moved from the initial SPARC T3-1 to a new DirectIO domain on the second SPARC T3-1,
thus distributing resources across the Siebel tiers.
Phase 2 LDom Resource Allocation
              N1-Primary   N1-ldg1 (Oracle DB RAC)   N1-ldg2 (Siebel App & Gateway)   N2-Primary   N2-ldg1 (Oracle DB RAC)   N2-ldg2 (Siebel Web Server)
vCPU          8            32                        64                               8            32                        64
Memory (GB)   2            8                         18                               2            8                         18
PCIe Slot(s)  -            PCIE0, PCIE1              PCIE2, PCIE3                     -            PCIE0, PCIE1              PCIE2, PCIE3
The system hardware was set up so that the Siebel Database tier shared storage from the Sun Storage
7410C. Both RAC nodes were DirectIO domains with direct access via the HBA card to the database
files. Oracle Grid 11gR2 was installed on the 2-node RAC. The SCAN addresses were pre-configured,
ready for use during the installation. One public network interface and one private network interface
were used. After the Oracle RAC installation, the Oracle Database init.ora and tnsnames.ora files were
modified to include the new LISTENER details. The database could then be started on both nodes,
and the srvctl command was used for the RAC configuration changes and database addition.
During the installation of RAC, a few problems (with workarounds) were encountered. The details for
these are captured below in the 'Findings' section.
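The srvctl registration and startup steps described above can be sketched as follows. This is a hedged illustration only: the database name SIEB, instance names, node names, and Oracle home path are hypothetical placeholders, not values from the tested configuration.

```shell
# Register the database and its instances with Oracle Clusterware
# (database name, instance names, node names, and paths are hypothetical)
srvctl add database -d SIEB -o /u01/app/oracle/product/11.2.0/dbhome_1
srvctl add instance -d SIEB -i SIEB1 -n node1
srvctl add instance -d SIEB -i SIEB2 -n node2

# Start the database on both nodes and confirm its status
srvctl start database -d SIEB
srvctl status database -d SIEB
```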
The second DirectIO domain created on the second SPARC T3-1 Server had the Oracle Solaris operating system installed, followed by the Siebel Web Server software. Once the Siebel Web Server was decommissioned on the original system, it was quickly restarted on the new one.
Moving the Siebel Web Server to the second node allowed more vCPU and memory resources to be
allocated to the Siebel Application and Gateway Server. This demonstrates both scaling out
(across the SPARC T3-1 systems) and scaling up (within a SPARC T3-1 system).
Reference Configuration 3: Oracle OLTP, Oracle WebLogic and Industry Standard Java EE Benchmark SPECjEnterprise2010 consolidated on a SPARC T3-1
Stack Components
Hardware Components
●SPARC T3-1 server, 1x 1.65GHz UltraSPARC T3 processor, 16 cores, 8 threads per core, 32GB memory (16x 2GB DIMMs), 2x 300GB 10K RPM SAS HDDs
●3x Sun SG-XPCIE2FC-QF8-Z 8 Gbps Fibre Channel Host Bus Adapters (HBAs), one per DirectIO Domain
●3x X4447A-z Quad 1Gb/sec Ethernet UTP cards, one per DirectIO Domain
●Sun Storage 7410C
●2x Brocade 5300 SAN Switches
●Cisco 4000-Series Blade Switch

Software Components
●Oracle Solaris 10 Update 9
●Oracle VM for SPARC 2.0
●Oracle Database 11gR2 (11.2.0.1)
●Oracle WebLogic Server 11gR1 (10.3.3)
●Java EE 5
●SPECjEnterprise2010
Figure 5 Architectural Overview of the SPARC T3-1 with the Sun Storage 7410C
Setup of SPECjEnterprise2010 on SPARC T3-1
This reference configuration is based on a single SPARC T3-1 Server running the SPECjEnterprise2010 benchmark code on Oracle WebLogic 11gR1 and using Oracle Database 11gR2. The Sun Storage 7410C is used to reliably hold the database data and the WebLogic Server transaction logs. This configuration highlights the consolidation of multiple tiers of a physical enterprise Java application onto a single server using Oracle VM Server for SPARC. The SPARC T3-1 is ideal for this type of consolidation, as it can host multiple guest virtual machines with no IO virtualization overhead.
The IO boards for this configuration are the same as in a previous reference configuration and are in
Table 5.
The DirectIO Domain is a new feature of Oracle VM for SPARC 2.0. It provides the ability to assign an
individual PCIe end-point device (PCIe card) to a guest domain and thereby remove IO virtualization
overhead. Each FC HBA card utilized both ports, with each port going to a different Brocade 5300
FC switch. Similarly, each 1Gb network card was configured with both port 0 and port 1. This IO
configuration provides redundancy at the hardware level.
The SPARC T3-1 Control Domain used the HW RAID1 feature for boot disk redundancy and the
onboard network. The ldm(1m) CLI was used to create the Logical Domain configuration.
# ldm ls
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- UART 20 2G 1.7% 13d 21h 59m
ldg1 active -n---- 5000 36 17G 0.0% 4d 19m
ldg2 active -n---- 5001 36 5G 0.0% 13d 21h 22m
ldg3 active -n---- 5002 36 4G 0.1% 2d 4h 34m
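A domain configuration like the one listed above can be built with a command sequence along these lines. This is a sketch under assumptions: the PCIe device path is illustrative, and on a real system the assignable end points must be taken from the output of ldm list-io.

```shell
# Create a DirectIO domain and assign CPU, memory, and a PCIe end point
# (values match the ldg1 listing above; the device path is an example --
# obtain real device names from 'ldm list-io')
ldm add-domain ldg1
ldm set-vcpu 36 ldg1
ldm set-memory 17G ldg1
ldm add-io /SYS/MB/PCIE0 ldg1
ldm bind ldg1
ldm start ldg1
```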
The “add-policy” option and Dynamic Resource Management (DRM) were used. These will be
discussed in more detail in the “Findings” Section.
The Sun Storage 7410C provided a boot disk/LUN for each of the DirectIO domains. Each
DirectIO domain's HBA belonged to an initiator group along with that domain's volumes,
guaranteeing that only that domain could access them. The reference configuration also took
advantage of the inherent features of the Sun Storage 7410C, specifically ZFS.
DirectIO Domain1 (ldg1) – Oracle Database 11gR2
This domain was used as the Oracle Database Server. Before installing the Oracle software stack, a
Unix user must be created for the installation of the Oracle DB software. This user
should have the default group of dba and also belong to the oinstall group. Depending on the Oracle
DB configuration requirements and system memory size, a Solaris project may need to be created to
increase limits for parameters such as:
process.max-file-descriptor=(privileged,65536,deny)
process.max-msg-messages=(privileged,8192,deny)
process.max-msg-qbytes=(privileged,65535,deny)
process.max-sem-nsems=(privileged,8192,deny)
project.max-msg-ids=(privileged,1024,deny)
project.max-sem-ids=(privileged,8192,deny)
project.max-shm-memory=(privileged,77309411328,deny)
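Such a project can be created with projadd(1M). The sketch below uses a hypothetical 'oracle' user and 'user.oracle' project name; the resource-control values are the ones listed above, and only a subset is shown.

```shell
# Create a Solaris project for the (hypothetical) 'oracle' user with
# raised resource controls for the Oracle Database (subset shown)
projadd -U oracle \
  -K "process.max-file-descriptor=(privileged,65536,deny)" \
  -K "process.max-sem-nsems=(privileged,8192,deny)" \
  -K "project.max-shm-memory=(privileged,77309411328,deny)" \
  user.oracle

# Verify the active controls for the project
prctl -i project user.oracle
```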
The Oracle Database also utilized the Sun Storage 7410C for database storage. For a thorough
analysis of configuration criteria and important implementation guidelines on how to
accurately match a Unified Storage System configuration to specific Oracle data access requirements,
see the Sun BluePrints™ article entitled "Configuring Sun Storage 7000 Unified Storage Systems for
Oracle Databases." A link can be found in the references section.
The standard Oracle Universal Installer (OUI) was used to install the Oracle Database 11gR2, software
only. No specific configuration changes were made during the installation, keeping it very simple. The
Oracle Database Configuration Assistant (dbca) was used to create an OLTP database. Most of the
default settings were selected. The main change was to decrease the PGA/SGA default setting during
installation (see Findings for details). The other change was to modify the "Database Storage" settings
to separate the file system for the redo logs from the database files; this was done to improve database
performance.
DirectIO Domains 2 & 3 (ldg2, ldg3) – WebLogic and Application Servers
A single instance of the Oracle WebLogic Server was installed onto both remaining DirectIO domains.
Staying with the theme of keeping it simple, the WebLogic 10 binaries were copied onto the domains
and the installation script was run in console mode, using the "Custom Installation" type.
With all the Oracle stack pieces now in place, the benchmark was installed and a client system was
used to drive the OLTP and Java EE workloads. While the workload exercised the entire
SPARC T3-1 and the Oracle stack, additional testing such as memory and CPU DR was
executed. See the Findings section for additional information.
Reference Configuration 4: Siebel CRM on Sun SPARC T-Series Server with Oracle RAC
Stack Components
Hardware Components
●2x Sun SPARC T5440 Servers, each with 4x 1550MHz CPUs, 8 cores, 256GB memory (32x 8GB DIMMs), 4x 146GB 10K RPM internal disks
●6x Sun SG-XPCIE2FC-QF8-Z 8 Gbps Fibre Channel Host Bus Adapters (HBAs)
●8x X4447A-Z Quad 1Gb/sec Ethernet UTP cards
●Sun Storage 6780, dual RAID controllers, 16GB cache, 8 host FC ports, 8Gb/sec
●2x Brocade 5300 SAN Switches
●Cisco 4000-Series Blade Switch
●5x Windows client systems

Software Components
●Oracle Solaris 10 Update 9 (sysfw_version = Sun System Firmware 7.2.8)
●Oracle VM for SPARC 1.3 (aka LDoms 1.3)
●Oracle Database 11gR2 (11.2.0.1) – RAC and Database
●Siebel CRM 8.1.1.0
●Siebel 8.0 Platform Sizing and Performance Program (PSPP) Benchmark
●HP LoadRunner Software 8.1
●Oracle Client 11gR1
Figure 6 Architectural overview: 2-node SPARC T5440 server with the Sun Storage 7410C
Setup the T-Series Oracle RAC
The T-Series reference configuration focuses on virtualization and consolidation. This configuration does not utilize all the resources of the T5440 server. Instead, it highlights how to use the T5440 for a well-performing Oracle RAC database configuration while freeing other resources to be utilized elsewhere by the customer.
The T5440 supports the ability to assign an entire PCIe root complex to a domain. This is known as a Split-PCI configuration. For best performance with Oracle Database, having direct access to the
system IO, rather than running the database with virtualized IO, is beneficial. Similarly, to limit IO
virtualization overhead on the Siebel Gateway server's network, an IO domain was also created for
this purpose.
For simplicity and ease of maintenance, both T5440 systems were configured the same. The Logical
Domain configurations for the Control Domain and the IO Domains were:
# ldm ls -l
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 16 4G 0.5% 29d 1h 45m
ldg1 active -n---- 5000 80 40G 2.2% 13d 6h 45m
ldg2 active -n---- 5001 160 80G 2.2% 13d 6h 45m
The Siebel Platform Sizing and Performance Program (PSPP) workload was used for this testing. For more information about this workload, see the details documented in reference configuration 2. The Oracle VM for SPARC configuration was similar to that of the SPARC T3-1 reference configuration:
Siebel Database Tier – IO Domains N1-ldg1 and N2-ldg1
Siebel Application Server and Siebel Gateway Server Tier – IO Domain N1-ldg2
Siebel Web Server Tier – IO Domain N2-ldg2
The Oracle RAC software was installed first across the two nodes (N1-ldg1 and N2-ldg1), using the Oracle Universal Installer. The "Advanced Installation" type was selected and the Single Client Access Name (SCAN) feature was used. Oracle's Automatic Storage Management (ASM) was used in external mode.
Reference Configuration 5: Oracle VM on 2 Sun Fire X4800 OVM Servers utilizing VM Templates
Stack Components
Hardware Components
●2x Sun Fire X4800 Servers, each with eight Intel Xeon 7500-series processors and 512GB of memory
●1x Sun Fire X4450 Server with 4 Quad-Core 7300-series processors and 24GB of memory
●4x SG-XPCIEFCGBE-Q8-Z 8 Gbps PCI-E FC/Dual Gigabit Ethernet Host Bus Adapter Express Modules
●Sun Storage 6780, dual RAID controllers, 16GB cache, 8 host FC ports, 8Gb/sec
●CSM200, 8 trays, 16x 450GB 15K RPM (4Gb FC-AL drives)
●Sun Storage 7410C
●2x Brocade 5300 SAN Switches
●Cisco 4000-Series Blade Switch

Software Components
●OVM Server 2.2.1
●OVM Management 2.2.0
●RDAC 09.03.0C02.0253 (download and recompilation required; see Findings)
●Oracle Enterprise Linux 5.4 via Template
●Oracle Real Application Clusters (RAC) 11g Release 1 2-Node Template
●Oracle PeopleSoft HCM 9.1
●PeopleTools 8.50.02 Application Server
●Swingbench 2.3
Figure 7 Sun Fire X4800 Oracle VM reference configuration
Setup OVM with 2 Node RAC 11g R1 and PeopleSoft HCM 9.1 Guest Templates
This configuration includes two X4800s as OVM Servers, each with two HBA/NIC combo cards
providing SAN and network connectivity. Shared storage is on the SAN and is provided by the 6780.
Multipathing to the Sun Storage 6780 is handled by RDAC. A Sun Storage 7410C NFS share is used as
the Shared Repository for the Server Pool. The high CPU and memory capabilities of the X4800 make
it an excellent platform on which to build an OVM environment. An X4450 is used as the management
server, running OVM Manager. Both the Server Pool and the Templates are configured in HA mode,
which allows live migration if a failure should occur in an OVM Server. Both the Server Pool and
Template must be in this mode for migration to work.
Each OVM Server can serve multiple roles:
●Server Pool Master: This acts as the contact point to the outside world for Oracle VM Server
and dispatches to other Oracle VM Agents. It also provides virtual machine host load-balancing, and
local persistence of Oracle VM Server information. Only one Master is required per Server Pool.
●Utility Server: This mainly focuses on creating, removing, migrating, and other IO-intensive
operations. There can be multiple Utility Servers in a Server Pool.
●Virtual Machine Server: This runs the daemon process to start and stop virtual guests. It also collects
performance data for the host and guest operating systems. The domU's are running on this server.
There can be as many VM Servers as desired in a Server Pool.
The OVM Manager's role is to create, destroy, and administer OVM Guest Domains and Server Pools.
Configuration for the Oracle Real Application Clusters (RAC) 11g Release 1 Template
The Oracle 11gR1 RAC Template consists of two Guest hosts, each created from the same Template. After powering up the Guests, the software asks which Guest will be the first node and which the second node in the cluster. A series of checks is done to verify that the storage is set up correctly. Once that is established, hostnames and IP addresses for each of the nodes need to be entered. The networking is then set up on both nodes by a script. This completes the first part of the installation.
The second part is just as easy and is handled by one script: on the node established as
the first node in the cluster, run /u01/clone/buildcluster.sh. This script sets up the RAC cluster and
needs no further input. Once it completes, the 2-node RAC 11gR1 cluster setup is done.
Included with the Template are detailed instructions that walk through all the required steps. By using
Oracle OVM Templates, the stack is up and running quickly with a configuration that follows Oracle's
best practices.
Oracle RAC 11gR1 VM Template minimum hardware requirements per each Template:
RAC Nodes
53 GB disk space
2 GB RAM
2 virtual processors
Configuration for the Oracle PeopleSoft HCM 9.1 and PeopleTools 8.50.02 (64-bit only) Template
The PeopleSoft HCM 9.1 deployment consists of three individual Templates, each of which is loaded as an OVM Guest on the X4800 server. The creation of a Virtual Machine from each Template is simple and straightforward. The default CPU and memory allocations per Template were maintained. Templates can be installed on either of the X4800 servers, as both reside in the same OVM Server Pool. The README for the HCM 9.1 Template trio contains the specific installation procedure, which must be followed in the given order. Storage was shared to the OVM Servers and then mounted to the individual Guests via the vm.cfg file.
PeopleSoft HCM 9.1 minimum hardware requirements per each Template:
PeopleSoft HCM Database Template
60 GB disk space
2 GB RAM
2 virtual processors
PeopleSoft Application Server Template
15 GB disk space
1 GB RAM
2 virtual processors
PeopleSoft PIA Template
8 GB disk space
1 GB RAM
1 virtual processor
Findings
Upgrade Suggestions
The upgrade testing included in the M-Series reference configuration performed three separate types
of upgrades:
1. Multiple Oracle Solaris version upgrades
2. One in-place hardware version upgrade
3. Multiple hardware scaling upgrades
Throughout this effort, the focus remained on application service time – maximizing application
availability throughout the upgrade process. Application upgrades, performance tuning, and application
scaling are separate efforts and are not discussed here.
Live Upgrade
Operating environment version upgrades come with myriad dependencies, each bearing its own
requirements for system state during the upgrade process. Live Upgrade provided an avenue to avoid
much application down-time by allowing the upgrade to be applied to an alternate boot environment,
requiring only a single reboot once the new boot environment was complete.
Although the upgrade process took nearly two hours to complete, the application remained on-line
except for the time it took to reboot – about fifteen minutes. During a conventional upgrade, the
application and the Oracle Solaris operating system would have been down for the entire process.
The same process was employed for both operating environment upgrades, with predictable results.
Estimated down-time avoided: 4 hours.
Boot Archive
Solaris releases prior to s10u6 used a multi-stage boot loader that loaded selected files from the root
file system. s10u6 introduced NewBoot, which creates a compressed archive of the files necessary to
load Solaris. These files are selected from the installed OS and must match the OS being loaded.
Using Live Upgrade to move from S10u3 to s10u6 (or later) on the same disk introduces the risk of
not being able to boot the s10u3 boot environment without errors once the s10u6 boot environment
has been activated and booted, because the boot-archive for s10u6 does not match s10u3. If the s10u3
boot environment has not been patched sufficiently to also use boot-archives, then it may be
impossible to boot the s10u3 boot environment.
Even if the s10u3 boot environment has been patched sufficiently to use a boot-archive, simply
changing the boot device in the OBP to load the s10u3 boot environment once s10u6 has been
activated will generate errors on boot because the files in the boot-archive do not match s10u3.
(Changing the boot device in the OBP is a common practice.)
Two recommendations come from this:
1) Follow the instructions in the Live Upgrade documentation to properly activate the boot
environment. This avoids mismatched boot-archive/OS combinations, because Live Upgrade updates
the boot-archive as appropriate.
2) To reduce the risks of encountering mismatched boot-archives, use different disks for each boot
environment rather than slices of the same disk.
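The Live Upgrade flow described in this section can be sketched as follows. This is an illustration under assumptions: the boot environment names, the target disk slice, and the media path are hypothetical, and the exact lucreate options depend on the file system layout in use.

```shell
# Create an alternate boot environment on a second disk
# (BE names, device, and media path are hypothetical)
lucreate -c s10u3BE -n s10u6BE -m /:/dev/dsk/c0t1d0s0:ufs

# Upgrade the inactive boot environment from the install media
luupgrade -u -n s10u6BE -s /cdrom/sol_10_u6

# Activate the new environment, then reboot into it
luactivate s10u6BE
init 6
```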
In-Place Hardware Upgrade
Two features of the M9000 allow hardware upgrades to occur while the system, and its application,
remains on-line: Dynamic Reconfiguration (DR), and hot-pluggable field replaceable units (FRUs).
Dynamic Reconfiguration allows resources to be added or removed from a running domain.
Hot-pluggable field replaceable units allow physical components of the system to be changed, removed or
added while the rest of the system remains powered-on and on-line. Taking advantage of these features
allows for nearly in-place hardware upgrade.
Replacing the original, slower CMUs with new, faster CMUs using DR allowed the application to
remain on-line through the entire process. Since CMUs are hot-pluggable, the two new CMUs were
installed in the machine, and then dynamically added to the running domain while the application
remained in-service and under load. The entire process completed in just under an hour with no
application down-time.
If a conventional, cold-replace method had been employed the application would have been off-line
for at least an hour. Estimated down-time avoided: 1 hour.
Vertical Hardware Scaling
Scaling a system vertically, that is, adding resources to an SMP environment, typically requires the
system to be powered off, and hence the application off-line, during the upgrade process. As with the in-place hardware
upgrade, the M9000 features of Dynamic Reconfiguration (DR) and hot-pluggable field replaceable
units (FRUs) allow a single SMP environment to scale while the application remains on-line.
After hot-plugging the CMUs into the machine, CMUs were dynamically added to the active domain
two at a time while the application was in-service and under load. Adding them in sets of two more
closely represents a typical upgrade path in the field. Each iteration of two CMUs took just under half
an hour (approximately fifteen minutes per CMU). This was done three times, until all eight CMUs
were part of the active domain, for a cumulative time of approximately one and a half hours.
Each CMU must execute its power-on self-test before being allowed to integrate into a domain. If
these upgrades were done as cold FRU additions, the time to power on the domain after each addition
would increase because of the increased number of CMUs. The initial domain of only two CMUs
powered-on in about half an hour, but the final configuration might take much longer (possibly hours).
The minimum cumulative down-time over three upgrade iterations would probably be three or
four hours, at least an hour per upgrade. Estimated down-time avoided: 3 or more hours.
Virtualization Suggestions
Oracle VM for SPARC virtualization provides many supported configurations, and choosing the best
configuration for your consolidated workloads can be a daunting task. This paper does not attempt to
solve every customer's configuration needs; that would be impossible. What the virtualization reference
configurations attempt to do is provide some validated options and suggestions:
●Keep the Control Domain as a separate entity without any application workload. It should
act only as a management and service domain.
●The suggested minimum memory requirement for the Control Domain is 2GB. If booting a
large number of virtual domains on the same system, or if ZFS is used in the Control Domain,
the suggested minimum increases (e.g. 4GB if using ZFS).
●All IO virtualization has some performance penalty. On the T5440 reference configuration,
when a virtual guest domain was used to replace an IO Domain, IO performance decreased.
The amount of the decrease was very application-specific.
●When building virtual Guest Domains, always try to create them on CPU core boundaries. In
Oracle VM for SPARC 2.0 this is less important, because the new CPU Affinity feature does
this for you.
●After creating the Logical Domain configurations, always remember to save your configuration
with ldm add-config newSPconfigName and power cycle the system. This ensures that the
configuration is saved to the SP and will persist through future power failures or outages.
Since the T-Series Servers are likely to be used in many different ways, the Dynamic Resource
Management (DRM) feature was also incorporated into the testing. This feature provides the ability to
give each Logical Domain a priority and minimum and maximum resource limits. During the test
cycle, it was found that during idle times the configuration automatically shed all excess CPU
resources down to the minimum allowed by the policy, and during peak workloads these resources
were automatically re-added to the domains to satisfy the workload demands. The Logical Domain
configuration shown in reference configuration 3 illustrates the response and behavior of DRM on the
idle system, with a policy set as follows:
# ldm add-policy vcpu-min=4 vcpu-max=32 attack=1 decay=1 priority=1 name=primary-use primary
# ldm add-policy vcpu-min=24 vcpu-max=48 attack=1 decay=1 priority=2 name=ldg1-use ldg1
# ldm add-policy vcpu-min=24 vcpu-max=48 attack=1 decay=1 priority=3 name=ldg2-use ldg2
# ldm add-policy vcpu-min=16 vcpu-max=48 attack=1 decay=1 priority=4 name=ldg3-use ldg3
The Static Direct IO (SDIO) feature is introduced in Oracle VM for SPARC 2.0. As of this writing,
a few limitations are worth noting:
●Any change in the assignment or removal of a PCIe end point to a domain requires a reboot
of that DirectIO domain. For Control Domains, this is done through delayed reconfiguration,
and the changes take effect only after the Control Domain is rebooted. For DirectIO or Guest
Domains, the changes are allowed only in the bound or inactive state.
●No extended error-management capabilities are available to DirectIO Domains; error
diagnosis is limited.
●A reboot or outage of the Control Domain causes a reset of the entire PCIe fabric. That is,
devices in use by DirectIO domains will encounter device access issues, and the state of the
DirectIO domain is unpredictable.
To deal with this last limitation, the following settings can be used to automatically shut down each
DirectIO Domain without any unexpected behavior:
# ldm set-domain failure-policy=reset primary
# ldm set-domain master=primary ldg1
# ldm set-domain master=primary ldg2
# ldm set-domain master=primary ldg3
OVM Server installation:
When installing the OVS software, the default root partition is only 3GB; it is recommended to
increase this to at least 20GB. The installation is very similar to a standard OEL install; it is important
to use only static IPs and a fully qualified domain name. The currently supported RDAC driver needs
to be compiled for OVM Server.
OVM Manager Installation:
The installation is very similar to a standard OEL install; it is important to use only static IPs and a
fully qualified domain name. If the hardware you are using has 6 or more cores, the workaround in
CR 6927196 must be applied before installing. When using an NFS share on the Sun Storage 7410C
as a Shared Repository, two settings must be changed from the defaults:
Under Access, change the User and Group to root in the "Root Directory Access" screen.
Under Protocols, change "Anonymous user mapping" to root.
OVM Templates
During the creation of a Virtual Machine in OVM, care must be taken to ensure that the xenbr(X)
bridge for the virtual public network is configured and online. For each eth(X) network port, a
xenbr(X) bridge is created, whether the eth(X) port is active (online) or down (offline). When
increasing the memory or CPUs available to a Template, tuning will be needed to get the best
performance.
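The bridge state can be checked on the OVM Server before creating the guest. This is a minimal sketch under assumptions: the bridge name xenbr0 is an example, and the commands are run as root on the Xen-based OVM Server.

```shell
# List Xen bridges and the interfaces attached to them
brctl show

# Confirm the public bridge (example name: xenbr0) is up;
# bring it online if necessary before starting the guest
ip link show xenbr0
ip link set xenbr0 up
```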
RDAC driver
The RDAC driver native to OVS does not currently support the 6780; future updates to OVS 2.2.1
will add support. For now, see Appendix A for detailed instructions on how to compile and install
the driver.
OEL /OVM Oracle Real Application Clusters (RAC) 11g Release 1 Template
When importing the OVM RAC Template use "ovsroot" as the root password. If this is not set the
installation will fail without error. During the creation of the Virtual Machine in OVM, care must be
taken to ensure that the xenbr(X) bridge that is selected for the virtual public network is configured
and online.
For each eth(X) network port on the system, a xenbr(X) bridge is created, whether the eth(X) port is
active-online or down-offline. Increasing memory or CPU available to the configuration will require
tuning for best results.
OVM Oracle PeopleSoft HCM 9.1 and PeopleTools 8.50.02 (64-bit only) Template
The HCM 9.1 Database and Application Batch Server Template contains a password
expiration date for the SYSADM and PEOPLE users, which went into effect on August 23, 2010.
Due to the expiration, neither the Database VM nor the App-Batch VM will start properly.
After creating each Virtual Machine, the user must VNC/ssh into each system and
change the passwords to expire at a later date, or invoke an unlimited timeout.
A link to the procedures for making these changes can be found in the references section.
Correctable Errors: Specific types of correctable kernel (hypervisor) errors were handled correctly by
the Oracle Solaris operating system, but in an Oracle RAC configuration the cssdagent requested
that the RAC node be shut down in a controlled manner. It is assumed that the cssdagent detected a
missed heartbeat. This has little impact because of the nature of RAC, but it is being investigated
further.
Interoperability
Most interoperability issues occurred during the installation of the software stack components. Almost
all can be ignored or have a workaround. They are noted here for completeness, in the hope that they
may save the reader some time:
During the installation of the Oracle RAC 11gR2 software, the following were seen:
Bug 9553860: Oracle Grid Infrastructure - Setting up Grid Infrastructure - Step 15 of
18 - NTP prerequisite check fails even though requirement has been met
All Oracle RAC domains use the built-in Oracle Solaris NTP service, which is
enabled with svcadm enable ntp:default. For information on how to configure NTP,
see the xntpd(1M) manpage. RAC requires the following NTP configuration settings:
slewalways yes
disable pll
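A minimal /etc/inet/ntp.conf reflecting these requirements might look like the fragment below; the server address is a placeholder. After editing, the service would be restarted with svcadm restart ntp.

```shell
# /etc/inet/ntp.conf (fragment) -- RAC-required xntpd settings
# (server address is a placeholder; use your site's time source)
server ntp.example.com
slewalways yes
disable pll
```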
Bug 10104057: Oracle RAC database won't startup if the number of vCPU is
different among nodes
During the installation of Oracle RAC 11.2.0.1, startup failed due to a vCPU count
mismatch. The simple workaround was to ensure that all Oracle RAC nodes
have the same number of vCPUs allocated.
Bug 9925131: 11gR2 RAC installation has umask error but system umask is already
set to 022
During installation of the Oracle Database 11gR2 software on Oracle Solaris 10
Update 9, the prerequisite checks complained of an incorrect umask setting even
though the umask was properly configured. This incorrect warning can safely be
ignored.
Bug 9925285: RAC 11gR2 installation has error message on Oracle cluster
verification utility
On completion of the installation of the Oracle Database 11gR2 software on Oracle
Solaris 10 Update 9, the "Oracle Cluster Verification Utility" may indicate a failure.
This was safely ignored; the installation had in fact completed successfully.
Bug 9606166: Grid 11gR2 installation failed because ASM failed to start while
executing root.sh
If using ASM as part of the Oracle RAC configuration on a large CMT system
with a high vCPU count, some of the processors may need to be taken offline using
psradm(1M).
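A hedged sketch of the psradm(1M) workaround follows; the processor IDs are purely illustrative, and the actual IDs to offline depend on the system.

```shell
# Show the processor inventory to pick IDs to offline
psrinfo

# Take some processors offline (example IDs) before running root.sh
psradm -f 64 65 66 67

# Return them to service once the Grid installation completes
psradm -n 64 65 66 67
```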
Bug 9508201: DBCA fail to bring db instance up and complain
When using the Oracle Database Configuration Assistant (dbca) to create the OLTP
database, it may be necessary to decrease the PGA+SGA setting to approximately
20% of physical memory or less. Alternatively, use the /etc/system file to set
parameters that allow a max-shm-memory (ISM) segment to be more than 25% of the
available physical memory.
One latent issue was encountered with the JVM bundled in WebLogic:
CR 4615723, "CMS: deal with CMS marking stack overflow."
This required a restart of the application. Workarounds were employed to bring the
application back on-line. The long-term solution will be to upgrade the application
itself to include a newer JVM release.
Conclusion
Too often, businesses using a combination of independently developed and supported products in
their enterprise-wide application and database deployments grapple with high IT costs and complicated
systems that fail to provide the information needed for critical business decisions. There is another
way: The Oracle Stack Advantage, Oracle's integrated product stack approach.
Oracle Integrated Stack Testing makes every attempt to ensure that the hardware and software
components in a reference configuration interoperate and perform well together. Having all of the
required development and support organizations within a single company, Oracle, makes debugging
and resolving problems smoother and quicker.
Since Oracle's acquisition of Sun Microsystems, the major pieces of the stack are no longer separate
entities. The server and storage hardware, the Oracle Solaris operating system, and the Oracle
Database and Middleware stacks are now being developed by integrated organizations with the goal
of delivering stable solutions that ease deployment for customers.
Throughout testing, the reference configurations and the complete Oracle stack behaved with
predictability and maintained expected performance levels.
We will continue testing integrated Oracle technology stacks under real-world conditions with the
latest hardware revisions and software patch sets.
References

Servers
Sun SPARC Enterprise M9000 -- http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/m-series/031587.htm
Fujitsu SPARC64 Processor -- http://www.fujitsu.com/global/services/computing/server/sparcenterprise/technology/performance/processor.html
Sun SPARC Enterprise T5440 -- http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/031585.htm

Storage
Sun Storage 7410 -- http://www.oracle.com/us/products/servers-storage/storage/unified-storage/031680.htm
Configuring Sun Storage 7000 Unified Storage Systems for Oracle Databases -- http://wikis.sun.com/display/BluePrints/Configuring+Sun+Storage+7000+Unified+Storage+Systems+for+Oracle+Databases
Sun Storage 6780 -- http://www.oracle.com/us/products/servers-storage/storage/disk-storage/031724.htm

Software
PeopleSoft Enterprise -- http://www.oracle.com/us/products/applications/peoplesoft-enterprise/index.html
Oracle Solaris -- http://www.oracle.com/us/products/servers-storage/solaris/index.html
Oracle RAC -- http://www.oracle.com/us/products/database/options/real-application-clusters/index.html
Oracle VM Server for SPARC -- http://www.oracle.com/us/technologies/virtualization/oraclevm/oracle-vm-server-for-sparc-068923.html
o LDoms -- http://www.sun.com/blueprints/0207/820-0832.pdf
Oracle VM for x86 -- http://www.oracle.com/us/technologies/virtualization/oraclevm/index.html
Siebel CRM -- http://www.oracle.com/us/products/applications/siebel/index.html
Oracle WebLogic -- http://www.oracle.com/us/products/middleware/application-server/index.htm
Solaris Live Upgrade 2.0 Guide -- http://docs.sun.com/app/docs/doc/806-7933
Solaris Live Upgrade Software: Patch Requirements -- http://sunsolve.sun.com/search/document.do?assetkey=1-71-1004881
Oracle VM for SPARC best practices, guidelines and recommendations -- http://wikis.sun.com/display/SolarisLogicalDomains/Home
Oracle PeopleSoft forum -- http://forums.oracle.com/forums/forum.jspa?forumID=830
Appendix A
1. Update root PATH environment variable.
* Add "/opt/ovs-agent-latest/utils:/opt/mpp" to the PATH variable in /root/.bash_profile
2. Remove pre-installed rdac-mpp-tools package.
* rpm -ev rdac-mpp-tools-1.0.1-4
3. Setup yum to retrieve RDAC required pre-requisite packages from
Oracle yum repository.
* wget -P /etc/yum.repos.d http://public-yum.oracle.com/public-yum-el5.repo
* vi /etc/yum.repos.d/public-yum-el5.repo
* Search for "enabled=0" under the "el5_u3_base" section and change it to "enabled=1".
* wget -P /etc/yum.repos.d http://public-yum.oracle.com/public-yum-ovm2.repo
* vi /etc/yum.repos.d/public-yum-ovm2.repo
* Search for "enabled=0" under the "ovm22_2.2.1_base" section and change it to "enabled=1".
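The two vi edits in step 3 can also be scripted; as a sketch (the helper function name is ours), sed can flip the enabled flag within a single repository section:

```shell
# enable_repo flips enabled=0 to enabled=1 only inside the named
# [section] of a yum .repo file, leaving other sections untouched.
enable_repo() {
    # $1 = section name, $2 = repo file
    sed "/^\[$1\]/,/^\[/ s/^enabled=0/enabled=1/" "$2" > "$2.tmp" &&
        mv "$2.tmp" "$2"
}

# Usage (paths as in the steps above):
# enable_repo el5_u3_base /etc/yum.repos.d/public-yum-el5.repo
# enable_repo ovm22_2.2.1_base /etc/yum.repos.d/public-yum-ovm2.repo
```

The sed address range runs from the section header to the next header, so an enabled=0 line in a later section is not affected.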
4. Install RDAC required pre-requisite gcc and kernel-ovs-devel
packages.
* yum install gcc
* yum install kernel-ovs-devel
5. Update /etc/modprobe.conf w/ QLogic entries required by RDAC.
options qla2xxx qlport_down_retry=35
6. Download and unpack RDAC.
* cp /net/ai-load.central.sun.com/OTHER/Allegheny_M1/Failover/rdac-LINUX-09.03.0C02.0253-source.tar.gz /tmp
* cd /tmp
* gunzip rdac-LINUX-09.03.0C02.0253-source.tar.gz
* tar xvf rdac-LINUX-09.03.0C02.0253-source.tar
7. Implement workaround (update /etc/issue and driver header file).
* vi /etc/issue
* Search and replace "release 2.2.1" w/ "release 5" and save file.
* vi /tmp/linuxrdac-09.03.0C02.0253/mpp_linux_headers/mppCmn_SysInterface.h
* Search and replace "VOID" w/ "void" and save file.
8. Build and install the RDAC driver (make clean && make uninstall && make install).
* cd /tmp/linuxrdac-09.03.0C02.0253
* make clean
* make uninstall
* make install
9. Check and modify current mppVhba.ko and mppUpper.ko
* modinfo /lib/modules/`uname -r`/kernel/drivers/scsi/mpp/mppVhba.ko | egrep "author|version" | grep -v srcver
* modinfo /lib/modules/`uname -r`/kernel/drivers/scsi/mpp/mppUpper.ko | egrep "author|version" | grep -v srcver
* If the author field says Dell, the modules must be replaced.
* mv /lib/modules/`uname -r`/kernel/drivers/scsi/mpp/mppUpper.ko /lib/modules/`uname -r`/kernel/drivers/scsi/mpp/mppUpper.ko.orig
* mv /lib/modules/`uname -r`/kernel/drivers/scsi/mpp/mppVhba.ko /lib/modules/`uname -r`/kernel/drivers/scsi/mpp/mppVhba.ko.orig
* cp /tmp/linuxrdac-09.03.0C02.0253/mppUpper.ko /lib/modules/`uname -r`/kernel/drivers/scsi/mpp/mppUpper.ko
* cp /tmp/linuxrdac-09.03.0C02.0253/mppVhba.ko /lib/modules/`uname -r`/kernel/drivers/scsi/mpp/mppVhba.ko
* modinfo /lib/modules/`uname -r`/kernel/drivers/scsi/mpp/mppVhba.ko | egrep "author|version" | grep -v srcver
* If the author field now says Sun, continue.
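The author check in step 9 can be scripted; as a sketch (the helper functions are ours, not part of RDAC), a small filter extracts the author field from modinfo output:

```shell
# author_of reads modinfo output on stdin and prints the author field,
# so a stock (Dell-branded) module is easy to tell from a freshly
# built (Sun-branded) one.
author_of() {
    awk -F': *' '$1 == "author" { print $2; exit }'
}

mpp_author() {
    # convenience wrapper; module paths match the steps above
    modinfo "/lib/modules/`uname -r`/kernel/drivers/scsi/mpp/$1" | author_of
}

# Usage on the server:
# mpp_author mppUpper.ko    -> replace the module if this names Dell
# mpp_author mppVhba.ko     -> continue with step 10 once it names Sun
```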
10. Update grub.conf w/ new MPP boot entry and reboot.
* cd /boot; ls | grep mpp
* vi /etc/grub.conf
* Add new boot entry for MPP. Set default boot entry to boot the
MPP initrd.
* reboot
11. Check status of QLogic and RDAC driver.
* lsmod | grep qla
* Verify that "qla2xxx" driver is loaded.
* lsmod | grep mpp
* Verify that "mppUpper" and "mppVhba" drivers are loaded.
* mppUtil -a
* Verify that the output shows zero arrays have been discovered.
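The driver checks in step 11 can be combined into one pass; as a sketch (the helper function is ours), with lsmod output fed on stdin:

```shell
# check_modules reads lsmod output on stdin and verifies that every
# named module is loaded, reporting the first one that is missing.
check_modules() {
    loaded=`awk 'NR > 1 { print $1 }'`   # module names, header skipped
    for m in "$@"; do
        echo "$loaded" | grep -qx "$m" || { echo "missing: $m"; return 1; }
    done
    echo "all required modules loaded"
}

# Usage on the server:
# lsmod | check_modules qla2xxx mppUpper mppVhba
```

Run mppUtil -a separately afterwards, as in the step above, to check array discovery.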
Oracle Integrated Stack Testing
Reference Configurations Introduced
September 2010
Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.
Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com
Copyright © 2010, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the
contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other
warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or
fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are
formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without our prior written permission.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective
owners.
AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel
and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are
trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open
Company, Ltd. 0410