
An Oracle White Paper June 2010

Consolidating Oracle Siebel CRM Environments with High Availability on Sun SPARC Enterprise Servers


Executive Overview
Introduction
Key Solution Technologies
An Overview of Oracle’s Siebel CRM Application Architecture
Workload Description
  Business Transaction Types
Test Environments
  Phase 1 Test Environment
  Phase 2 Test Environment
Phase 1 Testing — Consolidating Tiers Using Containers and Domains
  Performance and Scalability Results with Oracle Solaris Containers
  Performance and Scalability Results with Oracle VM Server for SPARC
Phase 2 Testing — Implementing HA
  Configuring for HA Using Oracle Solaris Cluster Software
  Phase 2 Testing Scenarios
  Performance and Scalability Results with Oracle Solaris Cluster
  Failover Testing with Oracle Solaris Cluster
Best Practices and Recommendations
  Server/Operating System Optimizations
  I/O Best Practices
  Web Tier Best Practices
  Siebel Application Tier Best Practices
  Oracle Database Tier Best Practices
  Best Practices for High Availability Configurations
Sizing Guidelines
  Baseline Configurations
  Small HA Configuration – up to 3,500 users
  Medium HA Configuration – up to 7,000 users
  Large HA Configuration – up to 14,000 users
Conclusion
Appendix A: Phase 1 – Configuration of Containers
  Web Server
  Application Server
  Database Server
Appendix B: Phase 1 – Configuration of Oracle VM Server for SPARC
  Primary Domain
  Siebel Application Server Domain
  Siebel Web Server Domain
Appendix C: Phase 2 – Configuration of Zone Clusters
  Web Server
  Gateway Server
  Application Server
  Database Server
About the Authors
Acknowledgements
References


Executive Overview

Founded on a service-oriented architecture, Oracle Siebel Customer Relationship Management (CRM) software allows businesses to build scalable standards-based applications that can help to attract new business, increase customer loyalty, and improve profitability. As companies deliver more comprehensive and rich customer experiences through CRM tools, demand can scale rapidly, forcing datacenters to expand system resources quickly to meet increasing workloads. Datacenter resources can be scaled horizontally (with more servers added at each tier), vertically (by adding more powerful servers), or both. As servers are added at Siebel Web, Gateway, Application, and Database tiers, a frequent result is server sprawl. Over time, this can result in negative consequences — greater complexity, poor utilization, increased maintenance fees, and skyrocketing power and cooling costs.

Consolidating tiers is one approach that can help to contain server sprawl and reduce costs. Recognizing the need to grow efficiently while scaling Oracle Siebel CRM capabilities, Oracle created a proof-of-concept solution that consolidates Web, Gateway, Application, and Database tiers on a single Sun SPARC Enterprise server from Oracle, limiting the number of physical machines needed to effectively deploy applications and improving the bottom line. As shown in testing exercises using a well-known Siebel CRM workload and virtualization technologies built into Sun SPARC Enterprise servers, the solution scales easily to accommodate user load, even with workloads of up to 14,000 users.

Because Oracle Siebel CRM applications support business profit centers, they often operate under stringent availability requirements and necessitate demanding service levels. For this reason, Oracle engineers conducted a second phase of proof-of-concept testing. In the second phase, software tiers were again consolidated using built-in virtualization technologies — but this time in a clustered server configuration that provided high availability (HA). The HA tests demonstrated near-linear scalability while at the same time providing mission-critical levels of application availability.

Introduction

To safely and securely consolidate Siebel CRM application tiers, Sun SPARC Enterprise servers offer a choice of built-in, no-cost virtualization technologies:

• Oracle Solaris Containers. Containers are an integrated virtualization mechanism that can isolate application services within a single Oracle Solaris instance. Faults in one container have no impact on applications or service instances running in other containers.

• Oracle VM Server for SPARC (formerly known as Sun Logical Domains). Native to Sun CMT processors (like UltraSPARC T2 Plus processors), this technology allows multiple tiers to be consolidated within isolated domains, without imposing additional cost. Each domain runs an independent copy of Oracle Solaris, and there are no licensing fees for additional OS copies.

Using one or both of these virtualization technologies, Siebel CRM services in each tier can run in isolation, without impacting service execution in other tiers. System resources can be allocated and reassigned to each tier as needed. Compared to other competitive and proprietary virtualization technologies, using Oracle Solaris Containers and/or Oracle VM Server for SPARC can provide significant cost savings when consolidating a Siebel CRM infrastructure. In addition, Oracle guarantees binary compatibility for applications running under Oracle Solaris, whether the OS runs natively as the host OS or as a guest OS in a virtualized environment.


In two phases of scalability testing, Oracle engineers configured different Siebel CRM tiers in virtualized environments on Sun SPARC Enterprise servers. In the first phase, engineers consolidated tiers on a single server, configuring each Siebel CRM tier in a separate container or domain. The initial phase of testing compared the scalability of the two different virtualization technologies. In a second phase of testing, engineers implemented Oracle Solaris Cluster (which supports both containers and domains) on two Sun SPARC Enterprise servers to simulate mission-critical Siebel CRM application workloads in a consolidated yet resilient virtualized environment.

For both phases of testing, the test workload was extracted from the well-established Siebel Platform Sizing and Performance Program (PSPP) benchmark, which simulates real-world environments using some of the most popular Siebel CRM modules. Engineers looked at system resource utilization, response time, and throughput metrics as they scaled the number of users under typical application workloads. This paper shows the test results and clearly documents best practices, which can help system architects more effectively size and optimize the Siebel CRM application on Sun SPARC Enterprise servers.

The test results demonstrate how no-cost virtualization technologies in Sun SPARC Enterprise servers — combined with Oracle Solaris Cluster software — can optimize scalability while reducing datacenter complexity, lowering operating costs and delivering high availability for business-critical CRM services.


Key Solution Technologies

The tested solution was based on Oracle’s massively scalable Sun SPARC Enterprise servers, the Oracle Solaris 10 operating system, and Oracle’s open storage technologies, as shown in Figure 1. Built-in, no-cost virtualization technologies — Oracle Solaris Containers or Oracle VM Server for SPARC — reside at the heart of the solution architecture and enable a flexible infrastructure for consolidation. Oracle Solaris Cluster (and often third-party management tools) are typically added to enhance business continuity and simplify resource allocation tasks for virtualized environments.

Figure 1. Using no-cost virtualization technologies, the proof-of-concept combined Siebel CRM tiers on a Sun SPARC Enterprise T5440 server.

In the first phase of testing, Oracle engineers constructed a proof-of-concept solution based on a single Sun SPARC Enterprise T5440 server (see Figure 1), which features up to four UltraSPARC T2 Plus processors with up to 32 cores and up to 256 concurrently executing threads. With such advanced thread density, a single Sun SPARC Enterprise T5440 server is a powerhouse for consolidating a Siebel infrastructure. To demonstrate this point, Oracle engineers ran a series of scalability tests using both container and domain virtualization technologies. As the test results show, the consolidated solution on a single Sun SPARC Enterprise T5440 server exhibited good scalability, providing reasonable response times and high throughput rates for simulated user populations of up to 14,000 users.

In Sun SPARC Enterprise servers, Chip Multi-Threading (CMT) technology in UltraSPARC T2 Plus processors enables effective scalability. CMT technology applies the available transistor budget to achieve up to eight cores within a single processor. Each core can switch between threads on a clock cycle, helping to keep the processor pipeline active while lowering power consumption and heat dissipation. Because of the advanced thread density, the Sun SPARC Enterprise T5440 server scales well to provide headroom to support growth while minimizing power use.

In the second phase of testing (see Figure 2), Oracle engineers used a clustered configuration of two Sun SPARC Enterprise T5240 servers. Each Sun SPARC Enterprise T5240 server houses two UltraSPARC T2 Plus processors for a maximum of 128 threads per server. In an economical clustered configuration like that used in the Phase 2 testing, two servers support a total of 256 threads. The clustered configuration also demonstrated good scalability, reasonable response times, and high levels of throughput, at the same time enabling highly available Siebel CRM application services.

Figure 2. The second phase of testing implemented Oracle Solaris Cluster on two Sun SPARC Enterprise T5240 servers in a consolidated, clustered HA configuration.

An Overview of Oracle’s Siebel CRM Application Architecture

The Oracle Siebel CRM application suite includes the following tiers (see Figure 3):

• Web Clients. Web Clients provide user interface functionality and can encompass a variety of types (Siebel Web Client, Siebel Wireless Client, Siebel Mobile Web Client, Siebel Handheld Client, etc.). In both phases of testing, Mercury LoadRunner version 8.1 simulated the load generated by the different sized end-user populations.

• Web Server. This tier processes requests from Web Clients and interfaces to the Gateway/Application layer. In the scalability testing performed, Sun engineers installed the Siebel Web Server Extension and configured the Oracle iPlanet Web Server (formerly Sun Java System Web Server) at this tier.

• Gateway/Application Server. This tier provides services on behalf of Siebel Web Clients. It consists of two sub-layers: the Siebel Enterprise Server and the Siebel Gateway Server.

• Database Server. While the Siebel File System stores data and physical files used by Siebel Clients and Siebel Enterprise Server, the Siebel Database Server stores Siebel CRM database tables, indexes, and seed data.


In a multiple server deployment, the Siebel Enterprise Server includes a logical grouping of Siebel Servers. (However, in a small configuration, the Siebel Enterprise Server might contain a single Siebel Server.) The Siebel Gateway coordinates the Siebel Enterprise Server and its set of Siebel Servers. It also provides a persistent backing store of Siebel Enterprise Server configuration information.

Each Siebel Server is a flexible and scalable application server that supports a variety of services such as data integration, workflow, data replication, and synchronization services for mobile clients. The Siebel Server also includes logic and infrastructure for running different CRM modules, as well as providing connectivity to the Database Server. The Siebel Server consists of several multithreaded processes that are commonly known as Siebel Object Managers.

Figure 3. This high-level overview of the Oracle Siebel CRM application architecture shows the tiered software architecture.
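To make the mapping between Object Managers and operating system processes concrete, the sketch below shows how an administrator might list the multithreaded Siebel processes and query component status with the Siebel Server Manager (srvrmgr) command-line utility. The gateway, enterprise, server, and credential values are placeholders for illustration, not values from the tested configuration.

    # Each Siebel Object Manager runs inside a multithreaded siebmtshmw process
    ps -ef | grep siebmtshmw

    # Check component (Object Manager) status through Server Manager (placeholder values)
    srvrmgr /g gateway_host /e siebel_enterprise /s siebel_server /u sadmin /p password
    srvrmgr> list components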

To provide high availability to all three tiers of Oracle Siebel CRM 8, Oracle Solaris Cluster software is deployed to support mission-critical application availability (see “Configuring for HA Using Oracle Solaris Cluster Software”, page 23). In the second phase of testing, engineers analyzed performance and scalability with Siebel CRM workloads in an HA configuration, using clustered zones to support each software tier.

Workload Description

CRM systems often require customization — typically more frequently than other business applications. Common changes include adding or removing certain application modules, modifying the function of existing modules, or integrating the CRM application with other business applications and processes. While application performance varies according to the particulars of any deployment, testing a configuration’s scalability with a well-defined workload helps to provide a useful starting point for defining appropriate configurations and sizing.

For the purposes of scalability testing, engineers used a workload extracted from the well-known Siebel Platform Sizing and Performance Program (PSPP) benchmark. This workload is based on scenarios derived from large Siebel customers and replicates real-world, concurrent, thin-client requirements of typical end-users. The PSPP 8.0 workload is based on user populations who repeatedly perform the following types of tasks and functions:

• Siebel Financial Services Call Center. The Siebel Financial Services Call Center software provides a comprehensive solution for sales and service, helping customer service and telesales representatives to provide world-class customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling opportunities.

• Siebel Partner Relationship Management. Representing eChannel users in partner organizations, the Siebel Partner Relationship Management application enables organizations to effectively and strategically manage relationships with partners, distributors, resellers, agents, brokers, and dealers.

• Siebel Workflow. This business process management engine automates user interaction, business processes, and integration. A graphical drag-and-drop user interface allows simple administration and customization. Administrators can add custom or pre-defined business services, specify logical branching, updates, inserts, and subprocesses to create a workflow process tailored to specific business requirements.

• Siebel eScript. eScript is a programming language that application developers use to write simple scripts to extend Siebel applications. The JavaScript programming language (a popular scripting language used extensively to deploy Web sites) is the core language underlying the Siebel eScript language.

• Siebel Enterprise Application Integration (EAI). EAI software allows organizations to integrate legacy applications with Siebel CRM applications and to integrate Web Service support. This capability enables organizations to extend the functionality of existing applications to provide up-to-the-minute information through standard Web portals and other Web Service-enabled environments.

In Phase 1 of the testing, the PSPP workload simulates the following task mix for the functions listed above:

• Financial Services Call Center - 30% of active concurrent users

• Partner Relationship Management, eScript, and Workflow - 10% of active concurrent users

• Enterprise Application Integration with Web Services - 60% of active concurrent users

In Phase 2, the test configuration used version 8.1.1 of the Oracle Siebel CRM software instead of version 8.0. Because of changes to the PSPP workload for 8.1.1 testing (for example, Partner Relationship Management is not included in the 8.1.1 version of the PSPP), the task mix for Phase 2 changed as follows:

• Financial Services Call Center - 40% of active concurrent users

• Enterprise Application Integration with Web Services - 60% of active concurrent users


Business Transaction Types

Based on the Siebel PSPP benchmark workload described above, Mercury LoadRunner 8.1 generated loads to simulate different user populations simultaneously executing complex business transactions. Between each user operation, “think time” (a synthetic delay simulating the typical pause between a user’s actions) averaged approximately 15 seconds. The following paragraphs characterize core business transaction types used in the testing.

Siebel Financial Services Call Center – Incoming Call Creates Opportunity, Quote, and Order

This transaction simulates the pattern of activity in a typical call center transaction:

• Create a new contact

• Create a new Opportunity for that contact

• Add two products to Opportunity

• Navigate to Opportunities – Quote View

• Click “AutoQuote” button to generate quote

• Enter Quote Name and Price List

• Drill down on the quote name to go to Quote – Line Items View and specify discount

• Click “Reprice All” button

• Update Opportunity

• Navigate to Quotes – Order View

• Click on “AutoOrder” button to automatically generate the order

• Navigate back to Opportunity

Siebel Partner Relationship Management, eScript, and Workflow — Sales and Service

This transaction simulates the steps that occur when entering a partner service request:

• Partner creates a new service request with appropriate detail

• A service request is automatically assigned

• Saving the service request invokes scripting that brings the user to the appropriate opportunity screen

• A new opportunity with detail is created and saved

• Saving the opportunity invokes scripting that brings the user back to the service request screen

Web Services – Find, Submit a New Service Request, and Update the Service Request

This transaction simulates a Web Service that interfaces to a hypothetical legacy application to find or create a service request. The Web Service acts as a delivery mechanism for integrating heterogeneous applications through Internet protocols. A Web Service can be specified using Web Services Description Language (WSDL) and is then transported via Simple Object Access Protocol (SOAP), an XML-based messaging protocol. Since the PSPP benchmark suite has no UI presentation layer, the load generator simulates a Java™ Platform, Enterprise Edition (Java EE) Web application to send a Web Service request to a Siebel Server (EAIObjMgr_enu) to invoke Siebel business services.

The Siebel Web Services framework generates WSDL files to describe the Web Services hosted by the Siebel application. Also, this framework can call external Web Services by importing a WSDL document as an external Web Service (using the WSDL import wizard in Siebel Tools). Each Web Service exposes multiple methods, such as Query Service Request, Create Service Request, and Update Service Request.

Web Service authentication is done through a session token. The “ServerDetermine” session type is used and a session token is maintained to avoid a “Login” process for each request. To use the “ServerDetermine” session type, a login Web Service call (SessionAccessPing) retrieves the session token before calling other Web Services. At the end of the transaction, a logout call (SessionAccessPing) makes the session token unavailable.
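Conceptually, the token flow can be sketched with a few command-line calls. Everything below is a placeholder-level illustration: the endpoint URL, envelope files, and the SessionToken element stand in for the contract defined by the generated WSDL, and whether the token travels in a SOAP header or an HTTP header depends on the configuration (it is shown as an HTTP header purely for illustration); the exact names and locations should be taken from the Siebel Web Services documentation for the release in use.

    # Hypothetical EAI endpoint (placeholder)
    EAI_URL="http://siebel-host/eai_enu/start.swe"

    # 1. Login: send the SessionAccessPing request and capture the session token
    curl -s -H "Content-Type: text/xml" --data @session_login.xml "$EAI_URL" > login_resp.xml
    TOKEN=$(sed -n 's:.*<SessionToken>\(.*\)</SessionToken>.*:\1:p' login_resp.xml)

    # 2. Business calls: reuse the token so no per-request login is needed
    curl -s -H "Content-Type: text/xml" -H "SessionToken: $TOKEN" \
         --data @query_service_request.xml "$EAI_URL"
    curl -s -H "Content-Type: text/xml" -H "SessionToken: $TOKEN" \
         --data @update_service_request.xml "$EAI_URL"

    # 3. Logout: invalidate the token at the end of the transaction
    curl -s -H "Content-Type: text/xml" -H "SessionToken: $TOKEN" \
         --data @session_logout.xml "$EAI_URL"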

Test Environments

As noted previously, there were two phases of testing: one to determine scalability on a single Sun SPARC Enterprise T5440 server and another using a clustered configuration of two Sun SPARC Enterprise T5240 servers. These test environments are not representative of typical production deployments but are simplified proof-of-concept configurations designed for test and development.

Phase 1 Test Environment

Figure 4 depicts the Phase 1 test environment.

Figure 4. The Phase 1 test environment consolidated Siebel CRM tiers on a single Sun SPARC Enterprise T5440 server.


The Phase 1 test environment consisted of the following hardware and software components:

• Hardware

• One Sun SPARC Enterprise T5440 server with four 1.4GHz UltraSPARC T2 Plus processors and 128 GB of RAM

• Two Oracle Sun Storage J4200 arrays (with SAS drives) or two Oracle StorageTek 2540 arrays

• Nine Sun Fire X4200 servers from Oracle for load generation

• Software

• Oracle Solaris 10 5/08 s10s_u5wos_10 SPARC and Oracle Solaris 10 10/08 s10s_u5wos_10 SPARC

• Oracle 10g R2 Database Server v10.2.0.3.0

• Siebel CRM Release 8.0 Industry Applications

• Oracle iPlanet Web Server (formerly Sun Java System Web Server) 6.1 SP10

Note that the testing was performed once with two StorageTek 2540 arrays and once with two Sun Storage J4200 arrays. Generally, the workload imposed such a low amount of I/O that the difference in results was negligible.

Phase 2 Test Environment

Figure 5 shows the Phase 2 test environment.

Figure 5. The HA test environment implemented Siebel CRM tiers on two clustered Sun SPARC Enterprise T5240 servers.


The second phase of testing used the following hardware and software components:

• Hardware

• Two Sun SPARC Enterprise T5240 servers, each with two UltraSPARC T2 Plus processors and 128 GB of RAM

• Two Oracle Sun Storage 6140 arrays (with SAS drives)

• Four Sun Fire X2270 servers from Oracle for load generation

• Software

• Oracle Solaris 10 u8 SPARC

• Oracle 11g R2 Database Server

• Siebel CRM Release 8.1.1 Industry Applications

• Oracle iPlanet Web Server (formerly Sun Java System Web Server) 7.0

• Oracle Solaris Cluster 3.2u3

Phase 1 Testing — Consolidating Tiers Using Containers and Domains

In the first phase of testing, engineers executed three test scenarios, one each with 3,500, 7,000, and 14,000 active users. Table 1 shows the Siebel CRM server configuration for the three user population scenarios.

TABLE 1. CONFIGURATION OF SERVICES FOR EACH TEST SCENARIO

CONCURRENT USERS   WEB SERVERS   SIEBEL SERVERS   SIEBEL OBJECT MANAGERS   ORACLE DATABASE INSTANCES
3,500              1             1                12                       1
7,000              1             1                24                       1
14,000             2             2                48                       1

During the execution of each scenario, data was collected from the following sources:

• Unix performance metrics

• Load Runner (the workload generator software)

• Oracle Automatic Workload Repository (AWR)

• A power measurement tool
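For the Unix performance metrics listed above, standard Oracle Solaris observability tools are sufficient. The commands below are a minimal sketch of the kind of sampling involved; the intervals and any output capture are illustrative, not the exact scripts used in the tests.

    # Per-zone CPU and memory summary, sampled every 30 seconds
    prstat -Z 30

    # System-wide processor, memory, and disk activity
    mpstat 30
    vmstat 30
    iostat -xnz 30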


Engineers repeated the same testing scenarios on a single server using different built-in, no-cost virtualization technologies: Oracle Solaris Containers and Oracle VM Server for SPARC (previously known as Sun Logical Domains). The following pages include Phase 1 testing results for configurations using containers (see page 12) and domains (see page 18).

Performance and Scalability Results with Oracle Solaris Containers

In Phase 1 testing on the Sun SPARC Enterprise T5440 server, Oracle Solaris 10 was first configured with three containers (zones) in addition to the global zone. Each zone was used to isolate a different Siebel CRM tier — Web, Gateway/Application, or Database. System resources were dedicated to each tier as indicated in Table 2 (for more information on resource allocation for zones, see “Sizing Recommendations”, page 39 and Appendix A, “Configuration of Containers”, page 43).

TABLE 2. RESOURCES ALLOCATED TO EACH TIER AND CONTAINER

TIER AND CONTAINER         VCPUS (1)    MEMORY
Web tier                   22 vcpus     8 GB
Gateway/Application tier   196 vcpus    88 GB
Database tier              38 vcpus     32 GB

(1) A “vcpu” (virtual CPU) correlates to a processing thread. Since the Sun SPARC Enterprise T5440 server has four UltraSPARC T2 Plus processors with 8 cores and 8 threads per core, there is a maximum possible 256 vcpus per system.
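As a rough illustration of how such an allocation can be expressed, the following zonecfg session sketches a container for the Gateway/Application tier with a dedicated CPU set and a memory cap matching Table 2. The zone name, zone path, network interface, and IP address are placeholders; the actual settings used in the tests are listed in Appendix A of the original paper.

    # Create a container for the Gateway/Application tier (names and paths are placeholders)
    zonecfg -z siebelapp
    zonecfg:siebelapp> create
    zonecfg:siebelapp> set zonepath=/zones/siebelapp
    zonecfg:siebelapp> add net
    zonecfg:siebelapp:net> set physical=nxge0
    zonecfg:siebelapp:net> set address=192.168.1.20/24
    zonecfg:siebelapp:net> end
    zonecfg:siebelapp> add dedicated-cpu
    zonecfg:siebelapp:dedicated-cpu> set ncpus=196
    zonecfg:siebelapp:dedicated-cpu> end
    zonecfg:siebelapp> add capped-memory
    zonecfg:siebelapp:capped-memory> set physical=88g
    zonecfg:siebelapp:capped-memory> end
    zonecfg:siebelapp> commit
    zonecfg:siebelapp> exit

    # Install and boot the zone
    zoneadm -z siebelapp install
    zoneadm -z siebelapp boot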

The following pages summarize test results for user populations of 3,500, 7,000 and 14,000 active concurrent users with Oracle Solaris Containers, including these metrics:

• CPU utilization (as a percentage)

• Memory utilization (in GB)

• Business transaction throughput (in number of transactions per hour)

• Average transaction response time (in seconds)

• Power consumption


CPU Utilization (Containers)

Figure 6 shows CPU utilization for Siebel CRM Web, Gateway/Application, and Database tiers in separate containers. Table 3 gives the CPU utilization percentage for each tier under each user population load. As shown, additional CPU processing capacity is available, especially for the 3,500 and 7,000 user scenarios. For these user populations in an actual deployment, a single Sun SPARC Enterprise T5440 server can potentially support additional applications using this excess processing capacity, or a smaller server (such as Oracle’s Sun SPARC Enterprise T5240 server) could be used.

Figure 6. CPU utilization percentage is shown for each tested user population.

TABLE 3. CPU UTILIZATION (%)

SIEBEL CRM TIER              3,500 USERS   7,000 USERS   14,000 USERS
Web Server                   13.67         30.65         78.21
Gateway/Application Server   10.75         26.80         76.29
Database Server              14.22         29.60         71.73


Memory Utilization (Containers)

Figure 7 shows memory utilization for the Siebel CRM tiers running in different containers. Table 4 lists corresponding utilization (in gigabytes). As the data and graph illustrate, in all three population scenarios, memory utilization remains low, which indicates that more than adequate memory resources are configured.

Figure 7. Percentage of memory utilization is shown for each tested user population.

TABLE 4. MEMORY UTILIZATION (GB)

SIEBEL CRM TIER              3,500 USERS   7,000 USERS   14,000 USERS
Web Server                   1.13          2.05          4.53
Gateway/Application Server   19.00         36.00         73.00
Database Server              12.00         15.00         20.00


Business Transaction Throughput (Containers)

Figure 8 shows the number of business transactions per hour for the three transaction types under each user population load. Table 5 lists the throughput rates. As the data indicates, as the user population doubles from 3,500 to 7,000 to 14,000 users, throughput increases almost linearly.

Figure 8. Business transaction throughput is shown for each user population.

TABLE 5. TRANSACTION THROUGHPUT (TRANSACTIONS/HOUR)

BUSINESS TRANSACTION TYPE         3,500 USERS   7,000 USERS   14,000 USERS
Financial Services Call Center    10,106        19,866        39,715
Partner Relationship Management   8,306         16,645        33,105
EAI – Web Services                31,478        62,845        125,475

Average Transaction Response Time (Containers)

Figure 9 depicts the average transaction response time for the three transaction types under each user population using containers. Table 6 lists the average response time in seconds for each transaction type. For purposes of the testing exercise, response times are measured at the Web server instead of at the end user. (This is because response times at the end user depend on a number of other variables such as network latency, the bandwidth between Web server and browser, and the time for content rendering by the browser.)


Figure 9. Average transaction response time is shown for different workload types and each user population.

TABLE 6. AVERAGE TRANSACTION RESPONSE TIME (SECONDS)

BUSINESS TRANSACTION TYPE         3,500 USERS   7,000 USERS   14,000 USERS
Financial Services Call Center    0.19          0.23          0.34
Partner Relationship Management   0.30          0.36          0.53
EAI – Web Services                0.10          0.12          0.17

Transaction Throughput and Response Time (Containers)

Performance and scalability are inextricably linked. For this reason, it is important to examine throughput and response time metrics together when analyzing application performance and configuration scalability. As application load increases, response time must remain within acceptable bounds. As a rule of thumb, if throughput increases nearly linearly as the number of concurrent users grows, the accompanying increase in response times should also stay within acceptable limits.

Figure 10 combines transaction throughput and response time for the three transaction types and user population loads. Table 7 lists the corresponding data values. As the data indicates, increases in throughput remain almost linear as user load increases, and response times continue to remain within reasonable, sub-second bounds.


Figure 10. Transaction Throughput and Response Times are shown for different workload types and each user population.

TABLE 7. TRANSACTION THROUGHPUT (TPH, TRANSACTIONS PER HOUR) AND RESPONSE TIME (RT, IN SECONDS)

BUSINESS TRANSACTION TYPE               3,500 USERS   7,000 USERS   14,000 USERS
Financial Services Call Center – TPH    10,106        19,866        39,715
Partner Relationship Management – TPH   8,306         16,645        33,105
EAI – Web Services – TPH                31,478        62,845        125,475
Financial Services Call Center – RT     0.19          0.23          0.34
Partner Relationship Management – RT    0.30          0.36          0.53
EAI – Web Services – RT                 0.10          0.12          0.17

Power Consumption

During the 14,000 concurrent user test at a steady state, the Sun SPARC Enterprise T5440 server consumed an average of 1,276 watts, which works out to roughly 11 users (10.97) supported per watt of power. Given that the Sun SPARC Enterprise T5440 server occupies a total of 4 rack units, the server supports about 3,500 users per rack unit.


Performance and Scalability Results with Oracle VM Server for SPARC

Like Oracle Solaris Containers, Oracle VM Server for SPARC (formerly known as Logical Domains) allows multiple physical servers to be consolidated into isolated domains on a single server. (For more information about this technology, see the paper “Oracle VM Server for SPARC: Enabling A Flexible, Efficient IT Infrastructure.”).

In the Phase 1 testing, engineers repeated the testing scenarios on a single server using domains instead of using containers. Each domain was configured with the same system resources that had been configured for each container (resource allocations are summarized below in Table 8 and presented in Appendix B, “Configuration of Oracle VM Server for SPARC Domains”, page 45).

TABLE 8. DOMAIN CONFIGURATIONS

TIER AND DOMAIN            VCPUS        MEMORY
Web tier                   22 vcpus     8 GB
Gateway/Application tier   196 vcpus    87.5 GB
Database tier              38 vcpus     32 GB

While the Web tier and Gateway/Application tiers resided in separate Guest Domains, the Database tier resided in the Primary Domain. If the Database tier had instead been deployed in a Guest Domain, a minimum of one virtual CPU (vcpu) and 512 MB of memory would still need to be allocated to the Primary Domain.
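A rough sketch of the Oracle VM Server for SPARC (ldm) commands involved in carving out one such domain follows. It uses the Web tier sizing from Table 8; the domain, service, device, and interface names are placeholders rather than the exact values used in the tests, which appear in Appendix B of the original paper.

    # Virtual disk, console, and switch services in the control (primary) domain (placeholder names)
    ldm add-vds primary-vds0 primary
    ldm add-vcc port-range=5000-5100 primary-vcc0 primary
    ldm add-vsw net-dev=nxge0 primary-vsw0 primary

    # Guest domain for the Web tier: 22 vcpus and 8 GB of memory, per Table 8
    ldm add-domain websrv
    ldm add-vcpu 22 websrv
    ldm add-memory 8G websrv
    ldm add-vnet vnet0 primary-vsw0 websrv
    ldm add-vdsdev /dev/dsk/c1t1d0s2 websrv-vol@primary-vds0
    ldm add-vdisk vdisk0 websrv-vol@primary-vds0 websrv

    # Bind resources and start the domain
    ldm bind-domain websrv
    ldm start-domain websrv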

Results for testing Siebel CRM on a single server with domains and user populations of 3,500, 7,000, and 14,000 users are shown in Figure 11 through Figure 14. Table 9 through Table 12 list corresponding data values. The results indicate that, depending on the workload, there can be some additional resource utilization needed to run Siebel CRM applications using domains rather than using Oracle Solaris Containers.

CPU Utilization (Domains vs. Containers)

Figure 11 shows CPU utilization for Siebel CRM Web, Gateway/Application, and Database tiers in separate domains. Table 9 gives the CPU utilization percentage for each tier under populations of 3,500, 7,000, and 14,000 users. As the data indicates, additional CPU processing capacity is available, and utilization tracks closely with that observed with Oracle Solaris Containers, with the exception of the Web tier under 14,000 users. Excess processing capacity on a single Sun SPARC Enterprise T5440 server can potentially support other applications, or a smaller server such as the Sun SPARC Enterprise T5240 server could be used.


Figure 11. CPU utilization (%) results compare domains and containers for each tier and user population.

TABLE 9. CPU UTILIZATION (%) COMPARING DOMAINS AND CONTAINERS

SIEBEL CRM TIER              3,500 (CONTAINER)   3,500 (DOMAIN)   7,000 (CONTAINER)   7,000 (DOMAIN)   14,000 (CONTAINER)   14,000 (DOMAIN)
Web Server                   13.67               15.27            30.65               32.98            78.21                84.64
Gateway/Application Server   10.75               10.94            26.80               24.30            76.29                67.53
Database Server              14.22               9.45             29.60               27.10            71.73                63.66

Memory Utilization (Domains vs. Containers)

Figure 12 shows memory utilization with each tier running in a different domain, and Table 10 lists the amount of memory (in gigabytes). As shown, memory utilization with domains is comparable to that with containers. Overall, the testing shows the system is configured with adequate memory resources.


Figure 12. Memory utilization (GB) results compare domains and containers for different tiers and user populations.

TABLE 10. MEMORY UTILIZATION (GB) COMPARING DOMAINS AND CONTAINERS

SIEBEL CRM TIER              3,500 (CONTAINER)   3,500 (DOMAIN)   7,000 (CONTAINER)   7,000 (DOMAIN)   14,000 (CONTAINER)   14,000 (DOMAIN)
Web Server                   1.13                1.52             2.05                2.66             4.53                 5.72
Gateway/Application Server   19.00               16.73            36.00               35.15            73.00                69.93
Database Server              12.00               12.02            15.00               15.12            20.00                20.60

Throughput (Domains vs. Containers)

Figure 13 illustrates throughput using domains in comparison to containers. Table 11 lists the number of business transactions per hour for the three transaction types under population loads of 3,500, 7,000, and 14,000 users using either containers or domains. With either built-in, no-cost virtualization technology, throughput increases almost linearly as the user population increases.


Figure 13. Throughput results compare domains and containers for different workloads and user populations.

TABLE 11. THROUGHPUT USING DOMAINS AND SOLARIS CONTAINERS (TRANSACTIONS/HOUR)

BUSINESS TRANSACTION TYPE         3,500 (CONTAINER)   3,500 (DOMAIN)   7,000 (CONTAINER)   7,000 (DOMAIN)   14,000 (CONTAINER)   14,000 (DOMAIN)
Financial Services Call Center    10,106              10,091           19,866              20,131           39,715               39,004
Partner Relationship Management   8,306               8,325            16,645              16,633           33,105               33,076
EAI – Web Services                31,478              31,538           62,845              62,882           125,475              125,219

Response Times (Domains vs. Containers)

Figure 14 depicts the average transaction response time for the three transaction types and compares response times with domains and containers. Table 12 lists the average response time in seconds. With either built-in, no-cost virtualization technology, response times remained in the subsecond range.


Figure 14. Response time results compare domains and containers for various workloads and populations.

TABLE 12. RESPONSE TIMES USING DOMAINS AND CONTAINERS (SECONDS)

BUSINESS TRANSACTION TYPE         3,500 (CONTAINER)   3,500 (DOMAIN)   7,000 (CONTAINER)   7,000 (DOMAIN)   14,000 (CONTAINER)   14,000 (DOMAIN)
Financial Services Call Center    0.19                0.19             0.23                0.23             0.34                 0.36
Partner Relationship Management   0.30                0.28             0.36                0.32             0.53                 0.52
EAI – Web Services                0.10                0.09             0.12                0.10             0.17                 0.16

Phase 2 Testing — Implementing HA

Highly available (HA) clusters provide nearly continuous access to data and applications by keeping systems running through failures that would normally bring down a single server. In mission-critical clustered systems, no single failure — whether it is a hardware, software, or network failure — can cause a cluster to fail. Recognizing the need to keep business-critical Siebel CRM applications up and running (and to support disaster planning scenarios), Oracle conducted a second phase of testing using a clustered HA configuration for Siebel CRM 8.1.1 workloads. Oracle’s clustering products — in particular Oracle Solaris Cluster software — enable highly available solutions that can meet stringent business continuity requirements for Siebel CRM deployments.


Configuring for HA Using Oracle Solaris Cluster Software

A cluster is two or more servers (or nodes) that work together as a single, continuously available system to provide applications, system resources, and data to users. Each cluster node is a fully functional standalone system. However, in a clustered environment, an interconnect bridges the nodes, which work together as a single entity to provide increased availability and performance. The interconnect carries important cluster information (data as well as a heartbeat) that allows cluster nodes to monitor the health of other cluster nodes. High availability using clustered systems is achieved through a combination of both hardware and software.

Oracle Solaris Cluster software enables business continuity and global disaster recovery solutions to meet evolving datacenter needs. In a nutshell, the clustering software:

• Makes use of proven availability and virtualization features in Oracle Solaris 10 and in UltraSPARC processor-based systems, including those in Sun SPARC Enterprise servers

• Supports an industry-leading portfolio of commercial applications, including Oracle RDBMS, Oracle Siebel CRM, and Web server technologies

• Is certified with a broad range of storage arrays and SPARC and x64/x86 platforms

The most recent release of Oracle Solaris Cluster software implements high availability for consolidated environments that use container or domain virtualization technologies, such as the Siebel CRM proof-of-concept solution described in this paper. Oracle Solaris Cluster software supports Oracle Solaris Containers for fault isolation, security isolation, and resource management. Oracle Solaris Cluster can also help to protect virtualized environments that use Oracle VM Server for SPARC domains, lowering risk for servers that provide multiple application services.

When consolidating Siebel CRM tiers in this way, Oracle Solaris Cluster provides high availability agents to monitor components running in different virtualized environments (see Table 13). Available Oracle Solaris Cluster agents include software to support services such as Oracle RDBMS, Siebel services, NFS, DNS, the Oracle iPlanet Web Server, the Apache Web Server, and so forth. Oracle Solaris Cluster software provides configuration files and management methods to start, stop, and monitor these application services.

TABLE 13. ORACLE SOLARIS CLUSTER AGENTS

SOLUTION COMPONENT   PROTECTED BY
Web Server           Oracle Solaris Cluster HA for Oracle iPlanet Web Server
Siebel Gateway       Oracle Solaris Cluster HA for Siebel (resource type: SUNW.sblgtwy)
Siebel Server        Oracle Solaris Cluster HA for Siebel (resource type: SUNW.sblsrvr)
Oracle Database      Oracle Solaris Cluster HA for Oracle Database
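To give a feel for how these agents are wired up, the sketch below registers the Siebel Gateway resource type from Table 13 and places a gateway resource in a failover resource group. The group, resource, and hostname values are placeholders, and the agent-specific extension properties (installation path, enterprise name, and so on) are deliberately elided; the exact properties are defined in the Oracle Solaris Cluster HA for Siebel documentation and in Appendix C of the original paper.

    # Register the HA for Siebel gateway resource type (see Table 13)
    clresourcetype register SUNW.sblgtwy

    # Create a failover resource group and a logical hostname for the gateway (placeholder names)
    clresourcegroup create siebel-gtwy-rg
    clreslogicalhostname create -g siebel-gtwy-rg siebel-gtwy-lh

    # Create the gateway resource; agent-specific extension properties such as the
    # Siebel installation root are supplied as -p name=value pairs (omitted here)
    clresource create -g siebel-gtwy-rg -t SUNW.sblgtwy siebel-gtwy-rs

    # Bring the resource group online and check its status
    clresourcegroup online siebel-gtwy-rg
    clresourcegroup status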


Figure 15 depicts the HA proof-of-concept configuration used as the basis of the Phase 2 testing. The HA configuration uses Oracle Solaris Cluster's Zone Cluster feature to consolidate the entire solution stack on two physical machines by deploying the Web server, Gateway, Application and Database tiers in four separate “virtual clusters.”

Figure 15. Oracle Solaris Cluster can help to deliver highly available Siebel CRM services.

The environment is designed for failover: the Web server and Database are deployed on one machine, and the Gateway and Siebel Servers are deployed on the other. This distributes the workload across the two machines. If one machine fails, all services are hosted on the surviving machine. When the failed machine is restored, Oracle Solaris Cluster can automatically restore application distribution across the two machines, or an operator can do it manually.

This HA configuration is intended to retain operational capability through any single failure, including hardware faults, with as little user impact as possible. As a result, server optimization is biased toward maximum concurrent-user performance, with sufficient computing power kept in reserve to allow a graceful transition to a single server with full operational capability.

Using the GUI management tool shown in Figure 16, each virtual cluster is assigned appropriate system resources, and each environment operates independently of the others. Appendix C (see page 46) includes configuration information for the zone clusters. Note that the proof-of-concept configuration, while useful for purposes of this testing, is not necessarily typical of a production Siebel CRM environment.

Figure 16. Oracle’s Sun Cluster Manager is used to configure and monitor clustered resources for each zone cluster.

In conjunction with highly reliable solution components (such as Sun SPARC Enterprise servers, Sun Storage and StorageTek products, and Oracle Solaris), Oracle Solaris Cluster helps to construct HA solutions that can deliver reliable and resilient Siebel CRM application services. Figure 17 illustrates a large-scale deployment environment — Gateway and Database services are clustered and redundant Web and Siebel Servers are deployed to achieve high levels of availability.


Figure 17. A typical large-scale deployment of clustered servers creates a reliable environment for Oracle Siebel CRM services.

Phase 2 Testing Scenarios

In the second phase of testing, engineers executed three test scenarios, one each with 2,500, 5,000, and 7,000 active users, using an HA configuration and clustered zones defined on the two Sun SPARC Enterprise T5240 servers. Table 14 shows the Siebel CRM server configurations for the three user population scenarios.

TABLE 14. CONFIGURATION OF SERVICES FOR HA TESTING

CONCURRENT USERS   WEB SERVERS   SIEBEL SERVERS   SIEBEL OBJECT MANAGERS   ORACLE DATABASE INSTANCES
2,500              1             1                10                       1
5,000              1             1                20                       1
7,000              1             1                28                       1

Performance and Scalability Results with Oracle Solaris Cluster

In Phase 2 testing, Oracle Solaris 10 on each server was configured with four clustered containers (zones) in addition to the global zone. Each clustered zone isolated a different Siebel CRM tier — Web, Gateway, Application, or Database. Table 15 shows how system resources were dedicated to each tier. This design represents a reasonable and likely deployment scenario.


TABLE 15. RESOURCES ALLOCATED TO EACH TIER AND CONTAINER IN PHASE 2 TESTING

TIER AND CONTAINER   VCPUS (2)   MEMORY
Web tier             16 vcpus    3 GB
Application tier     70 vcpus    34 GB
Gateway tier         2 vcpus     1 GB
Database tier        32 vcpus    24 GB

(2) Since the Sun SPARC Enterprise T5240 server has two UltraSPARC T2 Plus processors with 8 cores and 8 threads per core, there is a maximum possible 128 vcpus per system, for a total of 256 vcpus in this configuration. (Thus the tested clustered configuration has the same number of vcpus as the single Sun SPARC Enterprise T5440 server used in Phase 1 testing.)
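The zone clusters themselves are built with the clzonecluster utility, whose configuration shell mirrors zonecfg. The fragment below sketches a two-node zone cluster for the Database tier with the CPU and memory limits from Table 15; cluster, host, and path names are placeholders, and the full configurations used in the tests are reproduced in Appendix C of the original paper.

    # Configure a zone cluster for the Database tier (placeholder names)
    clzonecluster configure zc-db
    clzc:zc-db> create
    clzc:zc-db> set zonepath=/zones/zc-db
    clzc:zc-db> add node
    clzc:zc-db:node> set physical-host=node1
    clzc:zc-db:node> set hostname=zc-db-node1
    clzc:zc-db:node> end
    clzc:zc-db> add node
    clzc:zc-db:node> set physical-host=node2
    clzc:zc-db:node> set hostname=zc-db-node2
    clzc:zc-db:node> end
    clzc:zc-db> add dedicated-cpu
    clzc:zc-db:dedicated-cpu> set ncpus=32
    clzc:zc-db:dedicated-cpu> end
    clzc:zc-db> add capped-memory
    clzc:zc-db:capped-memory> set physical=24g
    clzc:zc-db:capped-memory> end
    clzc:zc-db> commit
    clzc:zc-db> exit

    # Install and boot the zone cluster on both nodes
    clzonecluster install zc-db
    clzonecluster boot zc-db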


In this round of testing, data was also collected from Unix system performance tools, Load Runner (the workload generator software), and Oracle Automatic Workload Repository (AWR). The following pages contain metrics for Phase 2 testing of the HA configuration, including:

• CPU utilization (as a percentage)

• Memory utilization (in GB)

• Business transaction throughput (in number of transactions per hour)

• Average transaction response time (in seconds)

CPU Utilization (Clustered Configuration)

Figure 18 shows CPU utilization for Web, Gateway/Application, and Database tiers in clustered zones. Table 16 gives the CPU utilization percentage for each tier under each user population. As shown, CPU utilization scales as the number of users increases, and there is additional compute capacity available to handle peaks in utilization, especially in the small and medium configurations.

Figure 18. CPU utilization percentage with an HA configuration is shown for each tested user population.

TABLE 16. CPU UTILIZATION (%)

SIEBEL CRM TIER              2,500 USERS   5,000 USERS   7,000 USERS
Web Server                   13            26            39
Gateway/Application Server   18            43            71
Database Server              20            48            71


Memory Utilization (Clustered Configuration)

Figure 19 shows memory utilization for Siebel CRM tiers deployed in clustered zones. Table 17 lists corresponding utilization (in gigabytes). As the data and graph illustrate, in all three population scenarios, memory utilization remains low, indicating that more than adequate memory resources are configured. (Note that each Sun SPARC Enterprise T5240 server can support up to a maximum of 256GB.)

Figure 19. Memory utilization in the HA configuration is given in gigabytes for each tested user population.

TABLE 17. MEMORY UTILIZATION (GB)

SIEBEL CRM TIER              2,500 USERS   5,000 USERS   7,000 USERS
Web Server                   0.58          0.88          1.11
Gateway/Application Server   11.4          21.9          29.77
Database Server              12.5          17            23

Business Transaction Throughput (Clustered Configuration)

Figure 20 shows the number of business transactions per hour for the two transaction types under each user population load. Table 18 lists the throughput rates. As the user population increases from 2,500 to 5,000 to 7,000 users, throughput increases almost linearly.


Figure 20. Business transaction throughput with an HA configuration is shown for each user population.

TABLE 18. TRANSACTION THROUGHPUT (TRANSACTIONS/HOUR)

BUSINESS TRANSACTION TYPE        2,500 USERS   5,000 USERS   7,000 USERS
Financial Services Call Center   9,477         18,993        26,242
EAI – Web Services               22,469        45,395        62,687

Average Transaction Response Time (Clustered Configuration)

Figure 21 depicts the average transaction response time for the two transaction types under each user population in the clustered configuration. Table 19 lists the average response time in seconds for each transaction type. For purposes of the testing exercise, response times are measured at the Web server instead of at the end user. (This is because response times at the end user depend on a number of other variables such as network latency, the bandwidth between Web server and browser, and the time for content rendering by the browser.)


Figure 21. Average transaction response time (given an HA configuration) is shown for different workload types and each user population.

TABLE 19. AVERAGE TRANSACTION RESPONSE TIME (SECONDS)

BUSINESS TRANSACTION TYPE        2,500 USERS   5,000 USERS   7,000 USERS
Financial Services Call Center   0.22          0.26          0.32
EAI – Web Services               0.12          0.14          0.16

Transaction Throughput and Response Time (Clustered Configuration)

Performance and scalability are inextricably linked. For this reason, it is important to examine throughput and response time metrics together when analyzing application performance and configuration scalability. As application load increases, response time must remain within acceptable bounds. As a rule of thumb, if throughput increases nearly linearly as the number of concurrent users grows, the accompanying increase in response times should also stay within acceptable limits.

Figure 22 combines transaction throughput and response time for the two transaction types and user population loads. Table 20 lists the corresponding data values. As the data indicates, increases in throughput remain almost linear as user load increases, and response times continue to remain within reasonable, sub-second bounds.


Figure 22. Throughput and response times (given an HA configuration) are shown for different workload types and user populations.

TABLE 20. TRANSACTION THROUGHPUT (TPH, TRANSACTIONS PER HOUR) AND RESPONSE TIME (RT, IN SECONDS)

BUSINESS TRANSACTION TYPE              2,500 USERS   5,000 USERS   7,000 USERS
Financial Services Call Center – TPH   9,477         18,993        26,242
EAI – Web Services – TPH               22,469        45,395        62,687
Financial Services Call Center – RT    0.22          0.26          0.32
EAI – Web Services – RT                0.12          0.14          0.16

Power Consumption (Clustered Configuration)

During the Phase 2 testing of the HA configuration, power consumption was not explicitly measured. Estimated power consumption for a Sun SPARC Enterprise T5240 server supporting 7,000 concurrent Siebel users is around 778 watts, or approximately 9 users per watt.

Failover Testing with Oracle Solaris Cluster

In addition to performance and scalability testing, Oracle engineers conducted failover testing. Using the same Phase 2 test configuration shown in Figure 15 (page 24), in which one server node hosts the primary instances of the Web and Database services while the second node hosts the primary instances of the Gateway and Siebel servers, Oracle engineers conducted four separate failover tests.

The failover tests were executed under a workload simulating 1000 concurrent users (40% Financial and 60% EAI) and consisted of the following four scenarios:

• Failover of the primary Gateway server on node 2. After the simulated workload reached 1000 active users, engineers killed all processes associated with the Gateway server on node 2. In response, Oracle Solaris Cluster restarted the Gateway resource group on node 2. Once the Gateway server came online, workload generation resumed, and throughput and response time were measured to verify that these metrics were consistent before and after the failover.

• Reboot of the primary Web server on node 1. With 1000 simulated concurrent users, engineers rebooted the zone cluster on node 1 supporting the Web server. Oracle Solaris Cluster then failed over the Web server resource group to the second node. Once the Web server came online, the workload simulator resumed load generation and engineers measured throughput and response time to determine consistency before and after the fault.

• Reboot of the Database server instance on node 1. After the simulated workload reached 1000 active users, engineers rebooted the zone cluster on node 1 with the Database server. Oracle Solaris Cluster failed over the Database server resource group to the second node. Once the Database server came online, workload generation resumed. Throughput and response time were measured to determine consistency before and after the failover.

• Complete power loss of node 2. In this scenario, after the simulated workload reached 1000 users, engineers powered off node 2 via the server’s built-in service processor. In response, Oracle Solaris Cluster restarted the Gateway and Siebel Server resource groups on node 1. Again, throughput and response time were measured for consistency before and after the node failure.

For these tests, a simplified 1000-user workload was run in a single pass rather than a heavier load. This produces a concise, representative sample of results: the number of concurrent users in this configuration has very little impact on failure detection or recovery times, so results under a larger workload would differ very little, if at all.
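Planned switchovers can also be rehearsed without injecting a fault. The following is a minimal sketch, not part of the documented test procedure; the resource group and zone cluster names come from the status output in Appendix C, the target node name is a placeholder, and exact syntax can vary by Oracle Solaris Cluster release.

# Move the Gateway resource group to the other node of the siebelgw-zc zone cluster
clrg switch -Z siebelgw-zc -n <zone-cluster-node> siebelgw-rg
# Confirm which node now hosts the resource group
clrg status -Z siebelgw-zc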

In all four scenarios, throughput and response times were consistent before and after failover. Table 21 shows metrics for the 1000-user workload, including baseline values measured prior to testing.

TABLE 21. TRANSACTION THROUGHPUT AND RESPONSE TIME IN FAILOVER SCENARIOS
(Throughput and response time values are listed as Financial / EAI; TPH = transactions per hour, RT = response time in seconds)

FAILOVER TEST SCENARIO                          # USERS     TPH           RT (SEC)      DETECTION (D) AND RECOVERY (R) TIMES
Baseline (all tiers, nodes 1 and 2)             400 / 600   3791 / 8999   0.21 / 0.11   N/A
Failover of primary Gateway server on node 2    400 / 600   3793 / 8980   0.21 / 0.11   Gateway: D = 1 s, R = 1 min 17 s; Siebel: R = 26 s; Total stack: D+R = 1 min 44 s
Failover of primary Web server on node 1        400 / 600   3777 / 9052   0.21 / 0.11   Web: D = 14 s, R = 1 min 57 s; Total: D+R = 2 min 11 s
Failover of primary Database server on node 1   400 / 600   3793 / 8971   0.21 / 0.12   Database: D = 17 s, R = 1 min 1 s; Total: D+R = 1 min 18 s
Failover of node 2 (power-off)                  400 / 600   3784 / 8930   0.21 / 0.12   D = 16 s; Gateway: R = 23 s; Siebel: R = 1 min 24 s; Total stack: D+R = 2 min 3 s

Best Practices and Recommendations

Prior to testing the solution, engineers made several optimizations to the Siebel CRM configurations. Summarized below, these settings and modifications can help customers optimize performance and scalability when consolidating Siebel CRM Web, Gateway/Application, and Database tiers on a server. Sizing recommendations are included at the end of this section, and can be tailored to site-specific requirements. Oracle consultants are experienced in designing optimal solutions for Siebel CRM applications, and are knowledgeable about best practices. By engaging these consultants in application and system architectural design, customers can achieve optimal configurations to help meet business and site requirements.

Server/Operating System Optimizations

Best practices for optimizing the server and operating system include the following:

• Make sure the server firmware is up to date. Check the Sun System Firmware Release site (www.sun.com/bigadmin/patches/firmware/) for the latest firmware release.

• Install the latest release of Oracle Solaris 10. Customers running Siebel CRM applications on the Oracle Solaris 10 5/08 OS should apply kernel patch 137137-09 from sunsolve.sun.com; the patch works around a critical Siebel-specific bug involving memory allocators that return non-8-byte-aligned mutexes (the issue also affects other 32-bit applications with such allocators). The Solaris 10 10/08 OS and later releases incorporate an equivalent workaround, so no additional patching is required for them, and Oracle will eventually fix the underlying bug in the Siebel code base. For more information, see Sun RFE 6729759 (“Need to accommodate non-8-byte-aligned mutexes”) or Oracle Siebel support document #735451.1.

• Optimize Oracle Solaris 10 settings in /etc/system. Enable 256M memory pages on all nodes. By default, the latest update of the Solaris 10 OS limits memory pages to a maximum size of 4M, even when 256M pages are a better fit for the application. To enable a 256M heap page size, add the following setting to /etc/system:

set max_uheap_lpsize=0x10000000
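To confirm that a Siebel or database process is actually using 256M pages after this change (a verification step assumed here rather than taken from the original test procedure), inspect the page sizes backing the process with the pmap command:

# Replace <pid> with the process ID of a Siebel or Oracle process
pmap -s <pid> | grep 256M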

• To avoid running into the standard input/output (stdio) limitation of 256 file descriptors, add the following lines to start_server in the Siebel CRM Gateway/Application tier:

ulimit -n 2048
LD_PRELOAD_32=/usr/lib/extendedFILE.so.1
export LD_PRELOAD_32


The default file descriptor limit in a shell is 256 and the maximum limit is 65,536; a limit of 2,048 is reasonable from the application's perspective.

• Improve scalability with an MT-hot memory allocation library. To improve the scalability of multi-threaded workloads, preload an MT-hot, object-caching memory allocation library such as libumem(3LIB) or mtmalloc(3MALLOC). To preload the libumem library, set the LD_PRELOAD_32 environment variable in the shell (bash/ksh) as shown below.

export LD_PRELOAD_32=/usr/lib/libumem.so.1:$LD_PRELOAD_32

The Web and Application servers in the Siebel Enterprise stack are 32-bit, whereas the Oracle 10g or 11g RDBMS on the Solaris 10 OS for UltraSPARC processor-based servers is 64-bit. Hence, the path to the libumem library in the preload statement differs slightly in the Database tier, as shown below.

export LD_PRELOAD_64=/usr/lib/sparcv9/libumem.so.1:$LD_PRELOAD_64

Be aware that the trade-off is a larger memory footprint: preloading an MT-hot memory allocation library can increase the memory footprint by 5 to 20%. In earlier Siebel 8 testing with a 400-user load, the preload yielded roughly a 5% improvement in CPU utilization along with a 9% increase in the memory footprint.
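To verify that the preload took effect (an illustrative check, not part of the original test plan), list the libraries mapped into a running process with the pldd command:

# Replace <pid> with the process ID of a Siebel object manager or Oracle shadow process
pldd <pid> | grep libumem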

• Tune the TCP/IP network stack by modifying these settings:

ndd -set /dev/tcp tcp_time_wait_interval 60000
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
ndd -set /dev/tcp tcp_ip_abort_interval 60000
ndd -set /dev/tcp tcp_keepalive_interval 900000
ndd -set /dev/tcp tcp_rexmit_interval_initial 3000
ndd -set /dev/tcp tcp_rexmit_interval_max 10000
ndd -set /dev/tcp tcp_rexmit_interval_min 3000
ndd -set /dev/tcp tcp_smallest_anon_port 1024
ndd -set /dev/tcp tcp_slow_start_initial 2
ndd -set /dev/tcp tcp_xmit_hiwat 799744
ndd -set /dev/tcp tcp_recv_hiwat 799744
ndd -set /dev/tcp tcp_max_buf 8388608
ndd -set /dev/tcp tcp_cwnd_max 4194304
ndd -set /dev/tcp tcp_fin_wait_2_flush_interval 67500
ndd -set /dev/udp udp_xmit_hiwat 799744
ndd -set /dev/udp udp_recv_hiwat 799744
ndd -set /dev/udp udp_max_buf 8388608
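Note that ndd settings do not persist across a reboot. One common approach, sketched here as an assumption rather than as part of the tested configuration, is to place the same commands in a boot-time init script (the script name is arbitrary):

#!/sbin/sh
# /etc/init.d/nettune (linked as /etc/rc2.d/S99nettune): reapply TCP/IP tuning at boot
ndd -set /dev/tcp tcp_conn_req_max_q 1024
ndd -set /dev/tcp tcp_conn_req_max_q0 4096
# ...repeat for the remaining tcp and udp parameters listed above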

I/O Best Practices

The Siebel 8 PSPP workload is moderately sensitive to disk I/O. For example, when all 14,000 concurrent users are online, the database writes about 7.5MB of data per second (of which approximately 3MB goes to the redo logs) and reads about 18.5kB per second. The Oracle database server writes data randomly into the data files (because the tables are scattered), whereas writes to the redo logs are largely sequential. For the purposes of testing, the database resided on a UFS file system.


Best practices relating to I/O include the following:

• Store the data files separately from the redo log files. If the data files and redo log files are stored on the same disk drive and the disk drive fails, then the redo files cannot be used in the database recovery procedure. For this reason, the proof-of-concept configuration uses two Oracle StorageTek 2540 arrays connected to the Sun SPARC Enterprise T5440 server. One StorageTek 2540 array houses the data files, whereas the other stores Oracle redo log files. File systems for data files and redo logs were hosted under UFS and mounted with the “forcedirectio” option.

• Size the online redo logs to control the frequency of log switches. In the tested configuration, two online redo logs were configured each with 10GB disk space. With 14,000 concurrent users in the Phase 1 testing, there was only one log switch during a 60-minute simulated usage period.

• Eliminate double buffering by forcing the file system to use direct I/O. The Oracle database caches data in its own cache within the Oracle shared global area (SGA), known as the database block buffer cache. Database reads and writes are cached in the block buffer cache so that subsequent accesses to the same blocks do not need to re-read the data from the operating system. At the same time, UFS file systems in Oracle Solaris default to reading data through the global file system cache to improve I/O. As a result, each read is potentially cached twice by default: one copy in the operating system's file system cache and another copy in Oracle's block buffer cache. Beyond the double caching, there is also extra CPU overhead for the code that manages the operating system file system cache. The solution is to eliminate the double caching by forcing the file system to bypass the OS file system cache when reading from and writing to disk. To implement direct I/O, mount the UFS file systems that hold the data files and the redo logs with the "forcedirectio" option:

mount -o forcedirectio /dev/dsk/<partition> <mountpoint>
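To make the setting persistent across reboots, the option can also be recorded in /etc/vfstab. The entries below are a sketch only; the device names are placeholders rather than the devices used in the tested configuration, while the mount points match those shown in Appendix C.

#device to mount    device to fsck       mount point    FS type  fsck pass  mount at boot  mount options
/dev/dsk/c3t40d1s6  /dev/rdsk/c3t40d1s6  /oradata/data  ufs      2          yes            forcedirectio
/dev/dsk/c2t40d1s6  /dev/rdsk/c2t40d1s6  /oradata/redo  ufs      2          yes            forcedirectio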

• Enable the StorageTek 2540 array’s read-ahead feature. When “read-ahead enabled” is set to true, the write is committed to the cache as opposed to the disk, and the OS signals the application that the write has been committed. The read-ahead feature is enabled through the GUI of the StorageTek Common Array Manager (CAM) software.

Web Tier Best Practices

Best practices for the Web tier include the following:

• Upgrade to the latest service pack of the Oracle iPlanet Web Server (formerly the Sun Java System Web Server).

• Run the Web server in multi-process mode by setting the MaxProcs directive in magnus.conf to a value greater than 1. In multi-process mode, the Web server can handle requests using multiple processes, with multiple threads in each process. With MaxProcs greater than 1, the Web server relies on the operating system to distribute connections among the Web server processes. However, many modern operating systems (including Oracle Solaris) do not distribute connections evenly, particularly when there are only a small number of concurrent connections. For this reason, also tune the maximum number of simultaneous requests by setting the RqThrottle parameter in magnus.conf to an appropriate value; in Phase 1 testing, a value of 1024 was used for the 14,000-user test.
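For illustration, the relevant magnus.conf directives might look like the following. The MaxProcs value of 4 is an assumption chosen only to show a multi-process setup; RqThrottle 1024 reflects the value used in the 14,000-user test.

# magnus.conf excerpt (illustrative values)
MaxProcs 4
RqThrottle 1024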


Siebel Application Tier Best Practices

Best practices for the Siebel Application tier include the following:

• Comment out the following lines in $SIEBEL_HOME/siebsrvr/bin/siebmtshw.

# This will set 4M page size for Heap and 64 KB for stack
# MPSSHEAP=4M
# MPSSSTACK=64K
# MPSSERRFILE=/tmp/mpsserr
# LD_PRELOAD=/usr/lib/mpss.so.1
# export MPSSHEAP MPSSSTACK MPSSERRFILE LD_PRELOAD

All Sun SPARC Enterprise T-series systems (Sun SPARC Enterprise T1000/T2000, T5120/T5220, T5140/T5240, and T5440 servers) support a 256M page size. However, Siebel's siebmtshw script restricts the heap page size to 4M and the stack page size to 64kB unless the lines shown above are commented out.

• Experiment with a smaller number of Siebel Object Managers. Configure the Object Managers so that each one handles at least 200 active users. Siebel's standard recommendation of 100 or fewer users per Object Manager is suitable for conventional systems but not ideal for CMT systems like the Sun SPARC Enterprise T5440 server, which are designed to run multi-threaded processes with numerous lightweight processes (LWPs) per process. With fewer Siebel Object Managers, there is also usually a significant improvement in the overall memory footprint.
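As a purely hypothetical illustration of consolidating users onto fewer Object Managers, component parameters can be adjusted with the Siebel Server Manager (srvrmgr) utility. The login details, component name, and parameter values below are placeholders, not settings from the tested configuration:

srvrmgr /g <gateway_host> /e <enterprise> /s <siebel_server> /u <admin_user> /p <password>
srvrmgr> change param MaxTasks=500, MaxMTServers=2, MinMTServers=2 for comp <object_manager_component>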

Oracle Database Tier Best Practices

Best practices for the Oracle Database tier include setting the following initialization parameters:

• Set the Oracle initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT to an appropriate value, such as 8. The DB_FILE_MULTIBLOCK_READ_COUNT parameter specifies the maximum number of blocks read in one I/O operation during a sequential scan. In the testing, DB_BLOCK_SIZE was set to 8kB; since the database reads only about 18.5kB per second on average, setting DB_FILE_MULTIBLOCK_READ_COUNT to a higher value does not necessarily improve I/O performance.

• Set the database initialization parameter CPU_COUNT to 64 on Sun SPARC Enterprise T5240 and T5440 servers. Otherwise, by default, the Oracle RDBMS assumes CPU_COUNT of 128 and 256 for Sun SPARC Enterprise T5240 and T5440 servers respectively. If this parameter is not adjusted, the Oracle optimizer can use a completely different execution plan when it notices such a large CPU_COUNT number, which might not be optimal. In the 14,000-user benchmark in Phase 1 testing, setting CPU_COUNT to 64 produced optimal execution plans.

• Explicitly set the database initialization parameter _enable_NUMA_optimization to FALSE for Sun SPARC Enterprise T5240 and T5440 servers. On these multi-socket servers, _enable_NUMA_optimization is set to TRUE by default. During the 14,000-user test, intermittent shadow process crashes occurred with the default setting, and the default NUMA optimizations provided no additional gains.
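Collecting the database settings discussed in this section, a minimal parameter file (pfile) excerpt might look like the following. Only values explicitly mentioned above are included; _enable_NUMA_optimization is a hidden parameter and, as with any underscore parameter, should normally be changed only under guidance from Oracle Support.

# init.ora excerpt (values taken from this section)
db_block_size=8192
db_file_multiblock_read_count=8
cpu_count=64
_enable_NUMA_optimization=FALSE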


Best Practices for High Availability Configurations

Oracle Solaris Cluster HA for Siebel provides fault monitoring and automatic failover for the Siebel Gateway and Siebel Server. However, in a Siebel cluster deployment, any physical node running the Oracle Solaris Cluster agent for Siebel cannot also run the Resonate agent. (Resonate and Oracle Solaris Cluster can coexist in the same Siebel enterprise, but not on the same physical server. For more information, see Oracle’s “Sun Cluster Data Service for Siebel Guide for Solaris OS” on docs.sun.com.)

Load balancing is a technique for spreading workload between two or more instances of the same application to increase throughput and availability. The Web tier can be load balanced for high availability in an N+1 architecture, for example by placing multiple containers or domains that house the Web server with the Siebel Web Server Extension (SWSE) behind a hardware load balancer.

Additionally, Oracle Solaris Cluster can load balance the Web server. An Oracle Solaris Cluster feature called “Shared Address Resource for Scalable Services” allows multiple instances of the same application (such as the Web server), one on each node, to listen for and process requests sent to the same IP address and port number. However, when the Cluster HA agent for the Web server is used together with the Cluster HA agent for the Siebel Server, Oracle Solaris Cluster can provide only failover service to the Web server.
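For deployments where the scalable approach applies, the outline below is a rough, hypothetical sketch of the commands involved; the resource and group names are placeholders, and required properties for the Web server resource (such as its configuration directory, port list, and its dependency on the shared address resource) are omitted for brevity.

# Failover resource group holding the shared address (hypothetical names)
clrg create sa-rg
clressharedaddress create -g sa-rg -h <virtual-hostname> sa-rs
# Scalable resource group running a Web server instance on each node
clrg create -p Maximum_primaries=2 -p Desired_primaries=2 web-scal-rg
clrs create -g web-scal-rg -t SUNW.iws -p Scalable=true web-scal-rs
clrg online -M sa-rg web-scal-rg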

To provide disaster recovery over unlimited distance, Oracle Solaris Cluster Geographic Edition provides a multi-site, multi-cluster disaster recovery solution to manage application availability across geographically remote clusters. In the event that a primary cluster fails, Oracle Solaris Cluster Geographic Edition enables administrators to initialize business services with replicated data on a secondary cluster, as depicted in Figure 23.

Figure 23. Oracle Solaris Cluster Geographic Edition enables disaster recovery solutions over long distances for Siebel CRM services.


Sizing Guidelines

Under the Siebel 8 PSPP testing workload in Phase 1, engineers set virtual CPU (vcpu) and memory allocations for Oracle Solaris Containers as shown in Table 22:

TABLE 22. ACTUAL RESOURCE ALLOCATIONS FOR 14,000 USERS ON SUN SPARC ENTERPRISE T5440 SERVER

TIER               VCPUS       MEMORY   ACTUAL USAGE IN TESTED CONFIGURATION
Web tier           22 vcpus    8GB      CPU: 78.21%; Memory: 4.5 GB
Application tier   196 vcpus   87.5GB   CPU: 76.29%; Memory: 73 GB
Database tier      38 vcpus    32GB     CPU: 71.33%; Memory: 20 GB

While the above resource allocations proved to be ideal for the large 14,000-user configuration, these allocations were not optimal for the small (3,500-user) and medium (7,000-user) configurations in Phase 1 — overall resource utilization was much lower for these populations. For small and medium configurations, Table 23 and Table 24 (respectively) project how CPU and memory resources should instead be allocated based on actual CPU and memory utilization.

TABLE 23. RECOMMENDED RESOURCE ALLOCATIONS FOR 3,500 USERS

TIER               VCPUS      MEMORY   ACTUAL USAGE IN TESTED CONFIGURATION
Web tier           6 vcpus    2GB      With 22 vcpus, 8GB RAM: CPU 13.67%; Memory 1.1 GB
Application tier   49 vcpus   22GB     With 98 vcpus, 44GB RAM: CPU 10.75%; Memory 19 GB
Database tier      10 vcpus   8GB      With 19 vcpus, 16GB RAM: CPU 14.22%; Memory 12 GB


TABLE 24. RECOMMENDED RESOURCE ALLOCATIONS FOR 7,000 USERS

TIER               VCPUS      MEMORY   ACTUAL USAGE IN TESTED CONFIGURATION
Web tier           11 vcpus   4GB      With 22 vcpus, 8GB RAM: CPU 30.65%; Memory 2 GB
Application tier   98 vcpus   44GB     With 98 vcpus, 44GB RAM: CPU 26.80%; Memory 36 GB
Database tier      19 vcpus   16GB     With 19 vcpus, 16GB RAM: CPU 29.60%; Memory 15 GB

Given resource allocations in Table 23 and Table 24, a Sun SPARC Enterprise T5440 Server could potentially be configured as summarized in Table 25.

TABLE 25. POSSIBLE CONFIGURATIONS FOR SUN SPARC ENTERPRISE T5440 SERVER

NUMBER OF USERS   DESCRIPTION   TOTAL VCPUS   PHYSICAL CPUS   TOTAL MEMORY
3,500             Small         64            1               32GB
7,000             Medium        128           2               64GB
14,000            Large         256           4               128GB

Of course, actual resource configurations depend on site-specific requirements. In small to medium-sized deployments, one strategy is to deploy a server with more physical resources than Siebel CRM minimally requires and to use the excess capacity, through additional virtualized environments, to support other (non-Siebel) application workloads; this provides considerable flexibility as growth occurs. Another alternative is to deploy the Siebel CRM solution on a smaller server, such as the Sun SPARC Enterprise T5240 server, which lowers the cost of deploying an HA configuration with a second server, as in the Phase 2 HA test configuration.


Baseline Configurations

Expected performance characteristics are based on proof-of-concept test implementations and are provided as is without warranty of any kind. The entire risk of using information provided herein remains with the reader and in no event shall Oracle be liable for any direct, consequential, incidental, special, punitive or other damages including without limitation, damages for loss of business profits, business interruption or loss of business information.

Based on the testing described in this paper, the remainder of this section outlines recommended hardware configurations as a starting point for small, medium and large deployments. For optimal sizing information, contact your local Oracle representative.

Small HA Configuration – up to 3,500 users

For a highly available configuration supporting up to 3,500 concurrent users, the following hardware components should be considered:

• Storage – Two Sun Storage 6180 arrays, each fully populated with 16 drives and an expansion tray fully populated with 16 drives, for a minimum total capacity of 128 TB. Additional expansion trays can be added to support further capacity requirements.

• Servers – Two Sun SPARC Enterprise T5140 servers, each with 2 CPUs and 128GB of RAM.

Medium HA Configuration – up to 7,000 users

For a medium-sized HA configuration supporting up to 7,000 users, these hardware components are recommended for deployment:

• Storage – Two StorageTek 6140 arrays, configured to achieve a capacity of at least 128 TB. Expansion trays can be added to support additional capacity.

• Servers – Two Sun SPARC Enterprise T5240 servers, each with 2 CPUs and 128GB of RAM.

Large HA Configuration – up to 14,000 users

For a highly available configuration supporting up to 14,000 concurrent users, consider the following hardware components:

• Storage – Two StorageTek 2540 Arrays, configured to achieve a capacity of at least 128 TB. Expansion trays can be added to support additional capacity.

• Servers – Two Sun SPARC Enterprise T5440 servers, each with 2 CPUs and 128GB of RAM. Since the Sun SPARC Enterprise T5440 servers are quad-socketed, this configuration enables CPU expansion in support of additional applications or to enhance available processing resources.


Conclusion

Virtualization allows Siebel CRM applications to be consolidated securely and effectively on a single server, offering many benefits over the use of multiple physical machines — better resource utilization, smaller datacenter footprint, and lower power consumption. Oracle engineers set out to examine the impact of combining Web, Gateway/Application, and Database tiers for Siebel CRM applications on a single Sun SPARC Enterprise T5440 server from Oracle. Using Oracle Solaris Containers or Oracle VM Server for SPARC technologies to create virtualized environments with dedicated system resources, they determined that a single server could support up to 14,000 users under a complex business transaction workload derived from the PSPP benchmark. The advanced thread density of a single Sun SPARC Enterprise T5440 server allowed throughput to scale almost linearly for small, medium, and large user populations, at the same time achieving reasonable response times.

The second phase of testing confirmed scalability of Siebel CRM workloads when HA technology is deployed in conjunction with virtualization technologies built into Sun SPARC Enterprise servers. By implementing Oracle Solaris Cluster HA products on two servers (which together offered the same number of threads as a single Sun SPARC Enterprise T5440 server), Oracle engineers observed good scalability using virtualized Siebel CRM tiers for up to 7,000 users. Thus, a clustered configuration of economical Sun SPARC Enterprise T5240 servers offers a scalable and resilient platform for deploying mission-critical Siebel CRM services.

By taking advantage of the advanced thread density and scalability of Oracle's Sun SPARC Enterprise servers, customers can build fail-safe virtualized environments that enable remote failover, allowing IT managers to meet SLAs and satisfy stringent disaster recovery requirements for Siebel CRM applications. In configuring a server for a Siebel CRM deployment, Oracle consultants can help define an effective architectural model, determine optimal sizing, decide which virtualization technologies to use, and recommend initial resource allocations. For more information on engaging experienced Oracle experts to design an agile Siebel CRM environment for your business, see www.oracle.com/us/support/systems/advanced-customer-services/index.html.


Appendix A Phase 1 – Configuration of Containers

In the first phase of testing, each Oracle Siebel CRM server ran on a non-global zone as follows:

• siebelweb for the Web server

• siebelapp for the Gateway/Application servers

• siebeldb for the Database server

Virtual CPUs (vcpus) and memory were allocated to the siebelweb and siebelapp zones. Only memory was allocated to the siebeldb zone, leaving the siebeldb zone to use necessary vcpus from the global zone. Since all database processes ran in the siebeldb non-global zone, there was a negligible consumption of CPU resources in the global zone during the test. The configuration of each zone is shown using the zonecfg command.

Web Server

# zonecfg -z siebelweb
zonecfg:siebelweb> info
zonename: siebelweb
zonepath: /zones2/webserver
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
net:
    address: 18.1.1.236
    physical: nxge2
    defrouter not specified
dedicated-cpu:
    ncpus: 22
capped-memory:
    physical: 8G


Application Server

# zonecfg -z siebelapp
zonecfg:siebelapp> info
zonename: siebelapp
zonepath: /zones3/appserv
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
net:
    address: 18.1.1.29
    physical: nxge1
    defrouter not specified
dedicated-cpu:
    ncpus: 196
capped-memory:
    physical: 88G

Database Server

# zonecfg -z siebeldb
zonecfg:siebeldb> info
zonename: siebeldb
zonepath: /zones/dbserver
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
inherit-pkg-dir:
    dir: /lib
inherit-pkg-dir:
    dir: /platform
inherit-pkg-dir:
    dir: /sbin
inherit-pkg-dir:
    dir: /usr
net:
    address: 18.1.1.237
    physical: nxge3
    defrouter not specified
device
    match: /dev/dsk/c6t0d0s6
device
    match: /dev/dsk/c8t0d0s6
capped-memory:
    physical: 32G


Appendix B Phase 1 – Configuration of Oracle VM Server for SPARC

The “ldm list” command shows the three domains used for testing in Phase 1.

# ldm list
NAME        STATE    FLAGS   CONS    VCPU   MEMORY
primary     active   -n-cv   SP      38     32G
siebelapp   active   -n---   15001   196    89600M
siebelweb   active   -n---   15000   22     8G

Details on the three domain configurations are given below.

Primary Domain

Domain Name: primary

VARIABLES
    boot-device=/pci@400/pci@0/pci@1/scsi@0/disk@0,0:a disk net

IO
    DEVICE     PSEUDONYM   OPTIONS
    pci@400    pci
    pci@500    pci
    pci@600    pci
    pci@700    pci

VCC
    NAME           PORT-RANGE
    primary-vcc0   15000-15010

VSW
    NAME           MAC                 NET-DEV   DEVICE     MODE
    primary-vsw0   00:14:4f:fb:64:21   nxge3     switch@0
    primary-vsw1   00:14:4f:fb:49:d2   nxge2     switch@1

VDS
    NAME           VOLUME   OPTIONS   DEVICE
    primary-vds0   vol1               /dev/dsk/c3t40d1s2
    primary-vds1   vol2               /dev/dsk/c2t40d1s2

VCONS
    NAME   SERVICE   PORT
    SP

Based on measurements from the test, if the Database server had run in a Guest Domain instead of the Primary Domain, some of the Primary Domain's resources could have been reassigned to that Guest Domain, leaving at least 1 vcpu and 0.5 GB of RAM assigned to the Primary Domain.
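A rough sketch of such a reassignment follows. The domain name and resource amounts are assumptions chosen for illustration only (the Guest Domain would also need virtual disk and network devices), and changes to the control (primary) domain typically enter a delayed reconfiguration that takes effect only after the Primary Domain reboots.

# Shrink the Primary Domain, keeping a modest allocation for its own use
ldm set-vcpu 8 primary
ldm set-memory 8G primary
# Create a Guest Domain for the database and assign it the freed resources
ldm add-domain siebeldb
ldm add-vcpu 30 siebeldb
ldm add-memory 24G siebeldb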

Siebel Application Server Domain


Domain Name: siebelapp

VARIABLES
    auto-boot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0

NETWORK
    NAME    SERVICE                DEVICE      MAC
    vnet2   primary-vsw1@primary   network@0   00:14:4f:f8:8f:13

DISK
    NAME     VOLUME              TOUT   DEVICE   SERVER
    vdisk2   vol2@primary-vds1          disk@0   primary

VCONS
    NAME        SERVICE                PORT
    siebelapp   primary-vcc0@primary   15001

Siebel Web Server Domain

Domain Name: siebelweb

VARIABLES
    auto-boot?=false
    boot-device=/virtual-devices@100/channel-devices@200/disk@0
    nvramrc=devalias vnet0 /virtual-devices@100/channel-devices@200/network@0
    use-nvramrc?=true

NETWORK
    NAME    SERVICE                DEVICE      MAC
    vnet1   primary-vsw0@primary   network@0   00:14:4f:fb:01:50

DISK
    NAME     VOLUME              TOUT   DEVICE   SERVER
    vdisk1   vol1@primary-vds0          disk@0   primary

VCONS
    NAME        SERVICE                PORT
    siebelweb   primary-vcc0@primary   15000

Appendix C Phase 2 – Configuration of Zone Clusters

In Phase 2 testing, engineers configured zone clusters for each Oracle Siebel CRM server instance as shown in Figure 15 (see page 24). The zone clusters were:

• websrv-zc for the Web server

• siebelgw-zc for the Gateway server


• siebelsrv-zc for the Siebel application server

• dbsrv-zc for the Database server

Below, the clzc command shows status information for the zone clusters and the clrg command shows status information for cluster resource groups. In subsequent pages, the clzc command displays configuration details for each zone cluster.

# clzc status

=== Zone Clusters ===

--- Zone Cluster Status ---

Name           Node Name   Zone HostName   Status   Zone Status
----           ---------   -------------   ------   -----------
siebelsrv-zc   db          tm161-207       Online   Running
               boxi        tm161-208       Online   Running
siebelgw-zc    db          tm161-209       Online   Running
               boxi        tm161-210       Online   Running
websrv-zc      db          tm161-211       Online   Running
               boxi        tm161-212       Online   Running
dbsrv-zc       db          tm161-205       Online   Running
               boxi        tm161-206       Online   Running

# clrg status -Z all

=== Cluster Resource Groups ===

Group Name                  Node Name   Suspended   Status
----------                  ---------   ---------   ------
siebelsrv-zc:siebelsrv-rg   tm161-207   No          Offline
                            tm161-208   No          Online
siebelgw-zc:siebelgw-rg     tm161-209   No          Offline
                            tm161-210   No          Online
websrv-zc:websrv-rg         tm161-211   No          Online
                            tm161-212   No          Offline
dbsrv-zc:dbsrv-rg           tm161-205   No          Online
                            tm161-206   No          Offline

Web Server

# clzc show -v websrv-zc

=== Zone Clusters ===

Zone Cluster Name: websrv-zc
  zonename: websrv-zc
  zonepath: /zone/websrv-zc
  autoboot: TRUE
  brand: cluster
  bootargs: <NULL>
  pool: <NULL>
  limitpriv: <NULL>
  scheduling-class: <NULL>
  ip-type: shared
  enable_priv_net: TRUE

  --- Solaris Resources for websrv-zc ---

  Resource Name: net
    address: tm161-216
    physical: auto
  Resource Name: fs
    dir: /siebel/web
    special: /dev/global/dsk/d8s6
    raw: /dev/global/rdsk/d8s6
    type: ufs
    options: []
  Resource Name: sysid
    name_service: DNS{domain_name=sfbay.sun.com name_server=129.145.155.220}
    nfs4_domain: dynamic
    security_policy: NONE
    system_locale: C
    terminal: xterms
    timezone: US/Pacific
  Resource Name: capped-memory
    physical: 3G
    swap: 4G
  Resource Name: capped-memory
    swap: 4G
  Resource Name: inherit-pkg-dir
    dir (0): /lib
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (3): /usr
  Resource Name: dedicated-cpu
    ncpus: 16
    importance: 20
  Resource Name: dedicated-cpu
    importance: 20
  Resource Name: rctl
    name: zone.max-swap
    priv: privileged
    limit: 4294967296
    action: deny

  --- Zone Cluster Nodes for websrv-zc ---

  Node Name: db
    physical-host: db
    hostname: tm161-211
    --- Solaris Resources for db ---
    Resource Name: net
      address: 10.6.161.211
      physical: nxge0
      defrouter: <NULL>
  Node Name: boxi
    physical-host: boxi
    hostname: tm161-212
    --- Solaris Resources for boxi ---
    Resource Name: net
      address: 10.6.161.212
      physical: nxge0
      defrouter: <NULL>

Gateway Server

# clzc show -v siebelgw-zc

=== Zone Clusters ===

Zone Cluster Name: siebelgw-zc
  zonename: siebelgw-zc
  zonepath: /zone/siebelgw-zc
  autoboot: TRUE
  brand: cluster
  bootargs: <NULL>
  pool: <NULL>
  limitpriv: <NULL>
  scheduling-class: <NULL>
  ip-type: shared
  enable_priv_net: TRUE

  --- Solaris Resources for siebelgw-zc ---

  Resource Name: net
    address: tm161-215
    physical: auto
  Resource Name: fs
    dir: /siebel/gateway
    special: /dev/global/dsk/d10s6
    raw: /dev/global/rdsk/d10s6
    type: ufs
    options: []
  Resource Name: sysid
    name_service: DNS{domain_name=sfbay.sun.com name_server=129.145.155.220}
    nfs4_domain: dynamic
    security_policy: NONE
    system_locale: C
    terminal: xterms
    timezone: US/Pacific
  Resource Name: capped-memory
    physical: 1G
    swap: 1G
  Resource Name: capped-memory
    swap: 1G
  Resource Name: inherit-pkg-dir
    dir (0): /lib
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (3): /usr
  Resource Name: dedicated-cpu
    ncpus: 2
    importance: 20
  Resource Name: dedicated-cpu
    importance: 20
  Resource Name: rctl
    name: zone.max-swap
    priv: privileged
    limit: 1073741824
    action: deny

  --- Zone Cluster Nodes for siebelgw-zc ---

  Node Name: db
    physical-host: db
    hostname: tm161-209
    --- Solaris Resources for db ---
    Resource Name: net
      address: 10.6.161.209
      physical: nxge0
      defrouter: <NULL>
  Node Name: boxi
    physical-host: boxi
    hostname: tm161-210
    --- Solaris Resources for boxi ---
    Resource Name: net
      address: 10.6.161.210
      physical: nxge0
      defrouter: <NULL>


Application Server

# clzc show -v siebelsrv-zc

=== Zone Clusters ===

Zone Cluster Name: siebelsrv-zc
  zonename: siebelsrv-zc
  zonepath: /zone/siebelsrv-zc
  autoboot: TRUE
  brand: cluster
  bootargs: <NULL>
  pool: <NULL>
  limitpriv: <NULL>
  scheduling-class: <NULL>
  ip-type: shared
  enable_priv_net: TRUE

  --- Solaris Resources for siebelsrv-zc ---

  Resource Name: net
    address: tm161-214
    physical: auto
  Resource Name: fs
    dir: /siebel/server
    special: /dev/global/dsk/d12s6
    raw: /dev/global/rdsk/d12s6
    type: ufs
    options: []
  Resource Name: sysid
    name_service: DNS{domain_name=sfbay.sun.com name_server=129.145.155.220}
    nfs4_domain: dynamic
    security_policy: NONE
    system_locale: C
    terminal: xterms
    timezone: US/Pacific
  Resource Name: capped-memory
    physical: 34G
    swap: 43G
  Resource Name: capped-memory
    swap: 43G
  Resource Name: inherit-pkg-dir
    dir (0): /lib
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (3): /usr
  Resource Name: dedicated-cpu
    ncpus: 70
    importance: 20
  Resource Name: dedicated-cpu
    importance: 20
  Resource Name: rctl
    name: zone.max-swap
    priv: privileged
    limit: 46170898432
    action: deny

  --- Zone Cluster Nodes for siebelsrv-zc ---

  Node Name: db
    physical-host: db
    hostname: tm161-207
    --- Solaris Resources for db ---
    Resource Name: net
      address: 10.6.161.207
      physical: nxge0
      defrouter: <NULL>
  Node Name: boxi
    physical-host: boxi
    hostname: tm161-208
    --- Solaris Resources for boxi ---
    Resource Name: net
      address: 10.6.161.208
      physical: nxge0
      defrouter: <NULL>

Database Server

# clzc show -v dbsrv-zc

=== Zone Clusters ===

Zone Cluster Name: dbsrv-zc
  zonename: dbsrv-zc
  zonepath: /zone/dbsrv-zc
  autoboot: TRUE
  brand: cluster
  bootargs: <NULL>
  pool: <NULL>
  limitpriv: <NULL>
  scheduling-class: <NULL>
  ip-type: shared
  enable_priv_net: TRUE

  --- Solaris Resources for dbsrv-zc ---

  Resource Name: net
    address: tm161-213
    physical: auto
  Resource Name: fs
    dir: /oradata/redo
    special: /dev/global/dsk/d9s6
    raw: /dev/global/rdsk/d9s6
    type: ufs
    options: []
  Resource Name: fs
    dir: /oradata/control
    special: /dev/global/dsk/d13s6
    raw: /dev/global/rdsk/d13s6
    type: ufs
    options: []
  Resource Name: fs
    dir: /oradata/data
    special: /dev/global/dsk/d7s6
    raw: /dev/global/rdsk/d7s6
    type: ufs
    options: []
  Resource Name: sysid
    name_service: DNS{domain_name=sfbay.sun.com name_server=129.145.155.220}
    nfs4_domain: dynamic
    security_policy: NONE
    system_locale: C
    terminal: xterms
    timezone: US/Pacific
  Resource Name: capped-memory
    physical: 24G
    swap: 40G
    locked: 24G
  Resource Name: capped-memory
    swap: 40G
    locked: 24G
  Resource Name: capped-memory
    locked: 24G
  Resource Name: inherit-pkg-dir
    dir (0): /lib
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (1): /platform
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (2): /sbin
    dir (3): /usr
  Resource Name: inherit-pkg-dir
    dir (3): /usr
  Resource Name: dedicated-cpu
    ncpus: 32
    importance: 20
  Resource Name: dedicated-cpu
    importance: 20
  Resource Name: rctl
    name: zone.max-locked-memory
    priv: privileged
    limit: 25769803776
    action: deny
  Resource Name: rctl
    name: zone.max-swap
    priv: privileged
    limit: 42949672960
    action: deny

  --- Zone Cluster Nodes for dbsrv-zc ---

  Node Name: db
    physical-host: db
    hostname: tm161-205
    --- Solaris Resources for db ---
    Resource Name: net
      address: 10.6.161.205
      physical: nxge0
      defrouter: <NULL>
  Node Name: boxi
    physical-host: boxi
    hostname: tm161-206
    --- Solaris Resources for boxi ---
    Resource Name: net
      address: 10.6.161.206
      physical: nxge0
      defrouter: <NULL>

Below, the clrs command reports resource status for the dbsrv-zc zone cluster.

# clrs status -Z dbsrv-zc

=== Cluster Resources ===

Resource Name   Node Name   State     Status Message
-------------   ---------   -----     --------------
hasp-rs         tm161-205   Online    Online
                tm161-206   Offline   Offline
lh-rs           tm161-205   Online    Online - LogicalHostname online.
                tm161-206   Offline   Offline
db-rs           tm161-205   Online    Online
                tm161-206   Offline   Offline
lsr-rs          tm161-205   Online    Online
                tm161-206   Offline   Offline


About the Authors

Chad Prucha has over 20 years of professional computing experience ranging from coding to datacenter design. Much of his experience derives from work in Oracle’s Sun Professional Services organization where he designed and led projects in telepresence, open source software, virtualization, and security. Chad makes an effort to train and certify in competing technologies and products in order to more fairly evaluate their qualities. He is most familiar working with academic, state government, manufacturing, and public utility clients where Information Technology seeks every possible efficiency. Chad also enjoys working with microcontrollers, hydroponics, and Stirling engines.

Pedro Lay is an Enterprise Solutions Architect in Oracle’s Sun Systems Technical Marketing Group. He has over 20 years of industry experience that spans application development, database and system administration, and performance and tuning efforts. Since joining Sun in 1990, Pedro has worked in various organizations including Information Technology, the Customer Benchmark Center, the Business Intelligence and Data Warehouse Competency Center, and the Performance Applications Engineering group.

Acknowledgements

The authors would like to recognize the following individuals for their contributions to this article:

• Gia-Khanh Nguyen, Oracle Solaris Cluster Engineering

• Michael D. Hernandez, Oracle Data Center Client Solutions

• Giri Mandalika, ISV engineering

• Uday Shetty, ISV engineering

• Jenny Chen, ISV engineering


References

WEB SITES

Oracle Sun SPARC Enterprise Servers
www.oracle.com/us/products/servers-storage/

Oracle Siebel CRM software
oracle.com/applications/crm/siebel/index.html

PAPERS

“Using Sun Systems to Build a Virtual and Dynamic Infrastructure”
www.sun.com/blueprints

“Using Solaris Cluster and Sun Cluster Geographic Edition with Virtualization Technologies”
wikis.sun.com/display/BluePrints/Using+Solaris+Cluster+and+Sun+Cluster+Geographic+Edition

“Oracle VM Server for SPARC: Enabling A Flexible, Efficient IT Infrastructure”
www.oracle.com/us/oraclevm-sparc-wp-073442.pdf

“Best Practices For Network Availability With Oracle VM Server for SPARC”
www.sun.com/blueprints

“Sun Cluster Data Service for Siebel Guide for Solaris OS”
docs.sun.com


Consolidating Oracle Siebel CRM Environments with High Availability on Sun SPARC Enterprise Servers
June 2010

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright © 2008, 2010, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark licensed through X/Open Company, Ltd. 0310