
White Paper
Crystal Enterprise Scalability and Sizing Benchmark

Crystal Enterprise 10 and IBM AIX 5.2
(Tests Conducted at IBM Solution Partner Center—San Mateo)


Author: Keith Moon

Contributors: Cortney Claiborne, Kevin Crook, Davythe Dicochea, Erin O’Malley, Derek Stobbart, James Thomas

Distribution under NDA

Contents

Executive Summary: Crystal Enterprise 10 Scalability on IBM AIX
  Benchmark Test Overview
  Test Configuration Overview
  Results Summary

Introduction

Test Concept
  User Actions
  Report Design
  System Environment
  Resources and Resource Usage
  Reliability

Benchmark High-Level Design

Test Procedure

Test Results
  CPU Scalability Measurements 12–24–48 (Peak Load)
  Comparative Scaling Test Sequence
  Active Users Tested and Response Times
  Requests per Second
  Throughput
  Volume of Requests by Type
  CPU Utilization

Conclusion

Appendix 1: Scalability Primer
  Scalability
  Performance
  Speed Spot
  Sweet Spot
  Peak
  Reliability
  Functionality and Usability

Appendix 2: IBM Total Storage Proven Certification

Appendix 3: Test Environment
  Load Harness
  Weighted Script Mix
  Script Parameters
  Think Times
  User Definitions
  Load Schedule Definition
  Content Check (Error Checking)
  Working Set

Appendix 4: Reports
  Design

Appendix 5: Report Data Source

Appendix 6: Software Environment
  Crystal Enterprise 10 Configuration and Tests
  WebSphere
  DB2

Appendix 7: Topology

Appendix 8: Hardware Environment

Executive Summary: Crystal Enterprise 10 Scalability on IBM AIX

Business Objects conducts extensive scalability and performance testing on our products to get a thorough understanding of how the software performs in actual customer deployments. In addition to this testing, comprehensive performance benchmark testing is done with third parties to verify internal results and to benchmark our software based on a more extensive set of real-world implementation scenarios.

In February 2004, at the IBM Partner Solution Center in San Mateo, California, benchmark tests were performed on Crystal Enterprise™ 10 (CE 10) on IBM AIX hardware to provide a thorough understanding of system performance, reliability, and stability under various user loads, for different report types, and in different configurations. Business Objects uses the results to help customers understand the overall scalability of the system and to help plan for actual deployment of the software. While benchmark tests are often used to compare the scalability of competitive systems, Business Objects believes that providing test results for real-world customer deployment should be the ultimate goal of performance testing. Therefore, we designed test scripts and configurations that map to actual deployment scenarios with real report samples and data sources.

Benchmark Test Overview

The tests proved that Crystal Enterprise continues to provide outstanding report-processing performance in terms of throughput of data and viewing response times.

The tests measured scalability, performance, and reliability under simulated real-world conditions. All tests were performed on a 32-way p690 (partitioned into four 8-way LPARs) to test the ability of the software to use additional processing power efficiently. The key measure of efficiency is linear scalability or, simply, the ability of the software to perform consistently as hardware and processing power are added. Due to latency in internal system communication protocols, perfect (1:1) scalability is impossible to achieve. However, many software systems exhibit non-linear scalability, in which performance degrades as resources are added. A system cannot be considered scalable if this degradation is significant.

The test scripts used were designed around real-world scenarios. As such, they must include think time, or the amount of time a user actually looks at a report once it is processed. This is an important factor in customer testing because it means that the report must maintain state within the system, and because a report held in state can be refreshed without resubmitting the query to the database, which would introduce unnecessary load on the database. Tests with no think time built in are not real-world and are deceptive in terms of performance testing.

Test Configuration Overview

• Crystal Enterprise 10 installed on an IBM AIX 5.2 machine with 4-, 8-, 16-, and 32-CPU configurations enabled.

• Five different report scenarios, from a single-page report to a multi-page drill report.

• 500 to 4,000 active users. At the top end, this translates to roughly 40,000 concurrent users and 400,000 named users (assuming a 10 to 1 ratio at each tier, where active users are those viewing or interacting with a report at any one time).
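As a rough aid to reading these figures, the ratios convert between the user populations defined in Appendix 3. A minimal Python sketch, assuming only the 10 to 1 ratios stated above (the function name is illustrative):

# Sketch: converting between the user populations used for sizing,
# assuming the benchmark's 10:1 ratios (named -> concurrent -> active).
def sizing_from_named(named_users, ratio=10):
    concurrent = named_users // ratio  # logged on at any one time
    active = concurrent // ratio       # actually making requests
    return {"named": named_users, "concurrent": concurrent, "active": active}

print(sizing_from_named(400_000))
# {'named': 400000, 'concurrent': 40000, 'active': 4000}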



Results Summary

• Achieved a consistent, positive, linear scalability of over 0.92 (for every additional processor added, we achieved a greater than 92% load increase). Only the best software products in the world can boast this claim.

• Impressive 14.3 MB per second throughput with 32 CPUs and 4,000 concurrent users.

• Highly predictable, quick, and efficient responsiveness across all tests: sub-2-, 4-, and 6-second response times (at the 90th percentile), including live requests to the database.

• Proven system fault tolerance under load.

Introduction

The benchmark tests conducted at the IBM Partner Solution Center in San Mateo, California allowed Business Objects to test Crystal Enterprise 10 for the purpose of reporting its levels of scalability, performance, and reliability under simulated real-world conditions.

The benchmark test (particularly the workflows, reports, and database) was not designed or skewed to show an incredible number of concurrent virtual user connections. The variation in functionality, the complexity of the reports, and the weighting of requests per second were chosen to provide a realistic concentration of load. Increasing or decreasing the complexity of the design would decrease or increase the capacity of concurrent virtual users.

Based on the results obtained from this particular test design, this paper will serve as a valuable sizing tool for system analysts to estimate and project potential environment requirements over a wide resource spectrum.


Test Concept

This benchmark was conceived based on customer feedback about how customers implement and use Crystal Enterprise. Test design was also directly influenced by the results of customer advisory meetings, surveys, proofs of concept, and similar engagements. The external benchmarks for Business Objects are designed as a hybrid test environment that reflects a common ground of integrated use. The sequence of tests was chosen to report behavior over a spectrum of user load and resources while maintaining the same integrated test scenario.

User Actions (Functionality and Mix)

Test scenarios (workflows) were designed to emulate common user interaction and requests to the system using realistic variable think times. The functionality includes logging on to the system, navigating folders, selecting reports, viewing reports containing saved data, viewing reports requesting live data, navigating from page to page within a report, drilling down through groups, scheduling reports, exporting to PDF or Excel, and logging off.

Think times are used within the scripts to help create a more accurate model of user load and system resource use.

Report Design

The tests use a mixture of reports that range from smaller to larger data sets and from simple to highly complex design; for example, from a one-page summary report with 1,000 records to a 600-page report with 100,000 records. Reports comprise formulas, grouping, charts, images, parameters, and drill-down navigation. See Appendix 4 for sample reports.

System Environment

The Crystal Enterprise environment (working set) consists of a realistic total number of objects (users, reports, folders, groups, servers). See Appendix 3 for the system configuration.

Resources and Resource Usage

Acceptable virtual user load levels take available resources into account and assure that resource usage (CPU and memory) and performance (response times, throughput) fall well within normal acceptable limits. The test sequence was designed to test loads ranging from highest performance (speed spot) to peak-level usage.

Reliability

System configuration was designed for an equal balance of performance, scalability, and reliability. It was not configured merely to prove performance, as many other benchmarks are.

Performance: Enough services (e.g., Page Server, Cache Server, etc.) were made available to support the total user load in order to maintain fast response times.

Scalability: Services were distributed to assure the highest degree of load balancing and to support system growth.

Reliability: Services were distributed to assure both software and hardware fault tolerance. If a service were to fail, another service would be available to support requests. If a machine were to fail, another machine would be available with all services to support requests.


Benchmark High-Level Design

This benchmark consisted of a test series that crosses the spectrum from low to high virtual user loads and from lower to higher numbers of CPUs and memory. The specific series was selected to concurrently demonstrate scalability capabilities, performance under various load-versus-hardware combinations, and reliability.

All virtual users are always active users (users who are using up RAM and CPU cycles by making requests for navigating, viewing, scheduling, etc.).

Virtual user loads tested on each configuration (CE 10 CPUs / Web Application Server CPUs):

4 CPUs (4 x 1 CPU) / 2-CPU WAS: 100, 200, 400, and 500/600 virtual users
8 CPUs (4 x 2 CPU) / 4-CPU WAS: 200, 400, 600, 800, and 1,000 virtual users
16 CPUs (4 x 4 CPU) / 8-CPU WAS: 400, 600, 800, 1,000, 1,600, and 2,000 virtual users
32 CPUs (4 x 8 CPU) / 16-CPU WAS: 800, 1,000, 1,600, 2,000, 3,200, and 4,000 virtual users

Test Procedure

1) Working set at baseline (delete any report instances scheduled during the previous test)
2) Recycle test machines
3) Initiate performance monitoring
4) Start prescheduled recurring instances [Large Report (100K records–600 pages) @ 5-minute intervals; Schedule to Export (10,000K–50 pages) @ alternate 5-minute intervals]
5) Start load harness suite (ramp up 100 users every 13 seconds)
6) Run at peak load for 30 minutes (full virtual user load)
7) Ramp down (gradual or immediate shutdown acceptable)
8) Results analysis
9) Repeat pass (steps 1 to 7)
10) Results analysis
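As an aside, the ramp-up rate in step 5 fixes how long each test takes to reach full load; a small sketch of that arithmetic (assumed to apply uniformly, which the procedure implies):

# Sketch: time to reach full virtual-user load, ramping 100 users
# every 13 seconds as described in step 5 of the test procedure.
def ramp_minutes(total_users, step_users=100, step_seconds=13):
    steps = total_users / step_users
    return steps * step_seconds / 60.0

for load in (500, 1000, 2000, 4000):
    print(f"{load} users: {ramp_minutes(load):.1f} minutes to full load")
# 500 users: 1.1 minutes ... 4000 users: 8.7 minutes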


Test Results

• The results from the benchmark tests demonstrate excellent linear scalability and predictability (ranging from 100 users to 4,000 users) across hardware configurations and resources ranging from a combined total of six CPUs to a combined total of 48 CPUs.

• The results indicate that when using a load model of integrated functionality and an environment subject to increasing levels of concurrent active users and increasing resources, performance levels can be predicted and maintained linearly.

• The results show that all request types (logon requests, view requests, navigation requests, drill downs, and schedules) maintain performance as the system scales.

• The results show that under a reasonable load, acceptable response times (under six seconds) are maintained for 90% of total concurrent active virtual users as the system scales upwards and outwards.

• The results show an impressive degree of throughput (14.3 MB per second) managed by the Application Server and supported at every tier.

CPU Scalability Measurements 12–24–48 (Peak Load)

The following results represent peak loads across all hardware configurations. The four tests in this sequence demonstrate that as user load is increased proportionately to hardware resources, performance, reliability, and functionality are maintained.

The scalability results were achieved with the following configuration aspects (configuration details):

• Configurable number of processes as required to support load

• Configurable multithreaded tuning as required to support load

• Configurable redundant distributed architecture to support performance and reliability

• Configurable multi-user caching (saved-data reports only for this test—on-demand reports always query the database)

Comparative Scaling Test Sequence

Test 1—500 active users loaded on a total of 6 CPUs
Test 2—1,000 active users loaded on a total of 12 CPUs
Test 3—2,000 active users loaded on a total of 24 CPUs
Test 4—4,000 active users loaded on a total of 48 CPUs
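Note that the sequence holds the load-to-hardware ratio constant, which is what makes the four tests directly comparable; a quick check of that proportionality:

# Sketch: the comparative scaling sequence holds users-per-CPU constant.
TESTS = {1: (500, 6), 2: (1000, 12), 3: (2000, 24), 4: (4000, 48)}
for test, (users, cpus) in TESTS.items():
    print(f"Test {test}: {users / cpus:.1f} active users per CPU")
# All four tests run at roughly 83.3 active users per CPU.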



All values were measured at peak load.

                                    Test 1          Test 2          Test 3           Test 4
CE 10 CPUs                          4 (4 x 1 CPU)   8 (4 x 2 CPU)   16 (4 x 4 CPU)   32 (4 x 8 CPU)
Web Application Server CPUs         2               4               8                16
Virtual users                       500             1,000           2,000            4,000

Avg. Throughput (bytes/second)      1,932,874       3,724,585       7,477,570        14,306,245
Scalability Factor                  baseline        1.927           3.869            7.402
Scalability %                       baseline        96.3%           96.7%            92.5%
Total Hits                          87,029          171,072         347,572          669,395
Average Hits per Second             48.349          95.04           193.096          371.886
Total Throughput (bytes)            3,479,173,379   6,704,253,274   13,459,625,274   25,751,241,885

Median Response Times (seconds)
  Live Data                         5.0             4.7             4.0              4.1
  Saved Data                        1.3             1.8             1.0              0.6
  Logon                             1.7             1.8             0.2              0.7
  CMS Query                         1.1             1.0             0.4              0.2
  Schedule                          0.9             1.0             0.6              1.2
  Drill Down                        1.5             2.3             1.4              1.3

CPU % of Web Application Server     87%             91%             88%              90%
CPU % of CE Report Servers          82%             84%             81%              84%
Response Success Rate*              100%            99.8%           100%             99.9%
System Stability (server uptime)    100%            100%            100%             100%

*Response success rate is based on the percentage of requests returned as a successful response. Any failed response causes the virtual user to exit the script and immediately choose a new script based on the current weighting.
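For reference, the scalability rows in the table above follow directly from the average-throughput row: each test's throughput is divided by the Test 1 baseline, and the resulting factor is divided by the hardware multiple. A minimal sketch that reproduces the published figures:

# Sketch: deriving the scalability factor and scalability % from the
# average-throughput row of the results table (Test 1 is the baseline).
BASELINE = 1_932_874  # Test 1 avg. throughput, bytes/second
TESTS = {
    "Test 2": (3_724_585, 2),   # (avg. throughput, CPU multiple vs. Test 1)
    "Test 3": (7_477_570, 4),
    "Test 4": (14_306_245, 8),
}

for name, (throughput, multiple) in TESTS.items():
    factor = throughput / BASELINE
    print(f"{name}: factor {factor:.3f}, scalability {factor / multiple:.1%}")
# Test 2: factor 1.927, scalability 96.3%
# Test 3: factor 3.869, scalability 96.7%
# Test 4: factor 7.402, scalability 92.5%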


Active Users Tested and Response Times

Requests per Second

Figure 1 shows the number of users increasing with additional CPUs.

Figure 2 shows performance levels being maintained at the same high standard across several configurations.

Figure 3 shows that as the user load and number of CPUs increase, the number of hits per second increases at the same rate.


Throughput

Volume of Requests by Type

Figure 4 shows that as the data load increases in line with the user load and CPU count, throughput increases at the same rate.

Figure 5 shows that the number of requests for each operation increased in line with the increase in CPU resources and user load.


CPU Utilization

Figure 6 shows that, with measurements taken at each level, system use remained at an acceptable working level and was consistent across all the tests.

Conclusion

Crystal Enterprise 10 continues to provide a high-performance, scalable, and reliable business intelligence (BI) and reporting platform for a wide variety of customer scenarios. These tests prove that Crystal Enterprise delivers consistent linear scalability across broad deployment scenarios for virtually any size and complexity of deployment.

Business Objects encourages customers to review the benchmark tests in conjunction with the professional services provided by our company or partners when designing, configuring, and deploying their BI systems.


Appendix 1: Scalability Primer

Real-world scalability depends upon a balance of reliability, scalability, and performance that supports available functionality and usability:

Scalability

The goal of this benchmark was to demonstrate the ability of Crystal Enterprise 10 to scale as additional transactions or users—as well as an equal amount of resources—are added, while maintaining performance (e.g., response time), usability, and functionality.

The ability of the whole system to scale will directly determine, and help to predict, future resource costs as system use increases.

Performance

The benchmark test series was designed within a spectrum that shows what kind of performance can be expected when users (with realistic usage) and resources (CPUs, RAM) are added.

The following generic chart illustrates the concepts used in determining test sequence design:

[Figure 7: Performance Region. This chart shows how response time varies as concurrent active user load is increased on fixed hardware (4 CPUs), within a scalable platform for BI that balances scalability, performance, and reliability with functionality and usability.]


Speed Spot

The speed spot in the graph is the point where the system begins to perform under any degree of stress. Prior to the speed spot, the system is underutilized (i.e., an excessive ratio of hardware to load, or 10–15% CPU utilization).

Sweet Spot

The sweet spot is the area of optimal performance. The system is under stress, properly utilizing available resources and maintaining an acceptable level of performance, while still leaving room for normal degradation if extra load is applied (i.e., 50–75% CPU utilization).

Peak

The peak is the area where system resources are stressed to the point where there is no further room for degradation (100% resource utilization). The peak is an area that should be avoided in everyday system usage; before this point is reached, additional hardware resources should be considered. The benchmark results demonstrate upward and outward linear scalability: adding new resources will allow for predictable performance gains.

The performance zone runs between the speed spot and the sweet spot. Depending on throughput or response time requirements, usage patterns, and resource constraints, customers should architect their systems to fall within this zone.
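These regions can be stated as simple CPU-utilization bands; the following is a schematic sketch only, using the approximate percentages quoted above:

# Sketch: classifying average CPU utilization into the regions described
# in this primer (band edges are the approximate figures quoted above).
def performance_region(cpu_utilization):
    if cpu_utilization < 0.15:
        return "below the speed spot (under-utilized hardware)"
    if cpu_utilization < 0.50:
        return "performance zone (between speed spot and sweet spot)"
    if cpu_utilization <= 0.75:
        return "sweet spot (optimal performance under stress)"
    if cpu_utilization < 1.0:
        return "nearing peak (consider additional hardware)"
    return "peak (100% utilization; avoid in everyday usage)"

print(performance_region(0.65))  # sweet spot (optimal performance under stress)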

Reliability

The benchmark test series was designed within a spectrum that demonstrates a flexible, scalable configuration of services providing both software and hardware redundancy. In all configurations, redundant services provided software fault tolerance.

During this benchmark, Business Objects requested that the Crystal Enterprise system be certified for IBM’s “Total Storage Proven” certification. The certification was achieved using the same configurations used for scalability and performance testing.

Functionality and Usability

Workflows emulating real-life situations were designed for the test to show, as closely as possible, how the system would behave in a true implementation. Usage routines that included viewing prescheduled reports, viewing reports accessing data in real time, processing large reports in the background, and exporting to Excel and PDF were all part of the test, exercising each component of the enterprise reporting framework.

Appendix 2: IBM Total Storage Proven Certification

Testing level achieved:

• Standard: consisting of install, configuration, load, and exercise I/O.

• Tests include failover and recovery support.

Details of the planned tests:

• The overall test displayed the flexibility in set-up of Crystal Enterprise 10 to suit differing deployment scenarios and to match changing volume requirements.

• The test environment simulated a real-world deployment scenario with high availability requirements.

• Failures in the fiber switch and of individual components in the storage subsystem were simulated to test the ability of Crystal Enterprise 10 to continue providing uninterrupted full service to all connected users.

• During the test, three of the four logical partitions on the p690 Crystal Enterprise 10 system were forcefully removed from the network to simulate a server crash, reducing the system from an eight-processor installation to a two-processor installation.

A description of the actual tests and the results obtained:

• Installation was carried out using standard techniques documented in the installation manuals. The Crystal Enterprise 10 environment was duplicated for redundancy purposes across all four logical partitions of the p690 server in order to provide failover support for all services.

• 250 virtual users accessing the system were simulated using LoadRunner software. The users carried out a variety of activities, such as report scheduling, scheduled report viewing, and on-demand report viewing, in order to simulate normal system activity.

• On the introduction of the failures mentioned above, certain virtual users experienced longer than normal response times while the system recovered their connections, but no lasting errors were logged, and the system recovered optimum response times shortly after each of the failures.

Business Objects Crystal Enterprise 10 Storage Proven listing: http://www.storage.ibm.com/proven/attract.htm


Appendix 3: Test Environment

Load Harness

LoadRunner Version 7.8—URL-based scripts, web (HTTP/HTML) protocol

Weighted Script Mix


Script     Category             Weight
Script 1   Viewing On-Demand    30%
Script 2   Viewing Saved Data   60%
Script 3   Schedule to Format   10%

Script 1 (Viewing On-Demand):
1. Logon
2. Navigate to Folder
3. Navigate to SubFolder
4. Choose Random Report
5. View Report (On-Demand)
6. Select Random Drilldown Node
7. Select Random Drilldown Node
8. Select Random Drilldown Node
9. Select Random Page
10. Select Random Page
11. Select Random Page
12. Close Report
13. Logoff

Script 2 (Viewing Saved Data):
1. Logon
2. Navigate to Folder
3. Navigate to SubFolder
4. Navigate to SubFolder
5. Choose Random Report
6. View Report (Saved Data)
7. Select Random Page
8. Select Random Page
9. Select Random Page
10. Select Random Page
11. Navigate to Folder
12. Navigate to SubFolder
13. Choose Random Report
14. View Report (Saved Data)
15. Select Random Page
16. Select Random Page
17. Select Random Page
18. Select Random Page
19. Close Report
20. Logoff

Script 3 (Schedule to Format):
1. Logon
2. Navigate to Folder
3. Navigate to SubFolder
4. Choose Random Report
5. Schedule Report
6. View History
7. Close Report
8. Navigate to Folder
9. Choose Random Report
10. Schedule Report
11. View History
12. Close Report
13. Logoff
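Each virtual user draws one of these scripts according to the weights, and after any failed response it exits the script and draws again (see the footnote under the results table). A minimal sketch of that selection logic, assuming nothing beyond the weights above:

import random

# Sketch: weighted script selection per the 30/60/10 mix above.
SCRIPT_WEIGHTS = {
    "Script 1 (Viewing On-Demand)": 30,
    "Script 2 (Viewing Saved Data)": 60,
    "Script 3 (Schedule to Format)": 10,
}

def choose_script():
    names = list(SCRIPT_WEIGHTS)
    weights = list(SCRIPT_WEIGHTS.values())
    return random.choices(names, weights=weights, k=1)[0]

print(choose_script())  # e.g., "Script 2 (Viewing Saved Data)"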


Script Parameters

username—variable user name
password—user password
cms_name—Crystal Enterprise CMS name
url_path—root path
servername—web application server name
reportID—random variable report ID
randompage—random variable report page
Logon_ThinkTime—random variable think time for logon
ChooseReport_ThinkTime—random variable think time for report select
NavFolder_ThinkTime—random variable think time for folder select
NavPage_ThinkTime—random variable think time for page select

Think Times

Parameter                Random Number Min (seconds)   Random Number Max (seconds)
Logon_ThinkTime          5                             10
ChooseReport_ThinkTime   20                            32
NavFolder_ThinkTime      2                             6
NavPage_ThinkTime        8                             12
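Each think-time parameter is drawn uniformly at random from its range; the following is a sketch of how a script step might apply one (the helper name is illustrative, not LoadRunner syntax):

import random
import time

# Sketch: sampling think times from the ranges in the table above.
THINK_TIME_RANGES = {  # parameter: (min seconds, max seconds)
    "Logon_ThinkTime": (5, 10),
    "ChooseReport_ThinkTime": (20, 32),
    "NavFolder_ThinkTime": (2, 6),
    "NavPage_ThinkTime": (8, 12),
}

def think(parameter):
    lo, hi = THINK_TIME_RANGES[parameter]
    time.sleep(random.uniform(lo, hi))  # virtual user "reads" the page

# Example: the pause a user takes between report pages, without sleeping:
lo, hi = THINK_TIME_RANGES["NavPage_ThinkTime"]
print(f"NavPage think time this step: {random.uniform(lo, hi):.1f} s")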

User Definitions

Total User Population   All users (physical people) who have access to Crystal Enterprise
Concurrent Users        Users who are logged on to Crystal Enterprise; they may be using RAM but might not be using CPU
Active Users            Users who are using up RAM and CPU by making requests (navigating, viewing, scheduling, etc.)
Virtual Users           Load harness users (includes active users and concurrent users)


Load Schedule Definition

Virtual users were loaded in a step process until full load. At the point of full virtual user load, the test would continue for 30 minutes and then begin to ramp down. For all tests, data is derived from the 30-minute full-load period only.

Content Check (Error Checking)

Internet Protocol: ContentCheck run-time settings

<Application Name="CatchError" Enabled="true" StampTime="" StampHost="">
  <Rule Name="Rule_1" FailIfNotFound="false" MatchCase="false" UseText="true" Level="error">
    <Text>com.crystaldecisions.report.web.viewer</Text>
  </Rule>
  <Rule Name="Rule_2" FailIfNotFound="false" MatchCase="false" UseText="true" Level="error">
    <Text>Account Information Not Recognized</Text>
  </Rule>
  <Rule Name="Rule_3" FailIfNotFound="false" MatchCase="false" UseText="true" Level="error">
    <Text>An error occurred at the server</Text>
  </Rule>
  <Rule Name="Rule_4" FailIfNotFound="false" MatchCase="false" UseText="true" Level="error">
    <Text>Problem found in</Text>
  </Rule>
  <Rule Name="Rule_5" FailIfNotFound="false" MatchCase="false" UseText="true" Level="error">
    <Text>Unsupported</Text>
  </Rule>
  <Rule Name="Rule_6" FailIfNotFound="false" MatchCase="false" UseText="true" Level="error">
    <Text>Action canceled</Text>
  </Rule>
</Application>
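The checking logic reads as: flag an error whenever any rule's text appears in a response body (with FailIfNotFound="false" a match is the failure condition, and MatchCase="false" makes the match case-insensitive). A Python sketch of the equivalent check:

# Sketch: equivalent of the ContentCheck rules above. A response is
# treated as an error if any rule text appears in it (case-insensitive).
ERROR_TEXTS = [
    "com.crystaldecisions.report.web.viewer",
    "Account Information Not Recognized",
    "An error occurred at the server",
    "Problem found in",
    "Unsupported",
    "Action canceled",
]

def response_is_error(body):
    lowered = body.lower()
    return any(text.lower() in lowered for text in ERROR_TEXTS)

print(response_is_error("An error occurred at the server: ..."))  # True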

Global Timeout

Any request not satisfied with either a success or a failure, as defined by the response containing an HTTP response code (e.g., 200, 302, 304, etc.) and by content checking, will return an error after 120 seconds.

Load Preview

[Chart: virtual users (0 to 4,000) plotted against elapsed time (00:05 to 00:45), showing the step ramp-up, the sustained full-load plateau, and the ramp-down.]

Working Set

• 20,000 named users

• 20,128 report objects

• 20,010 folders

• 20,128 report instances (.rpt with saved data)

• Simple to complex reports containing formulas, groups, cross-tabs, charts

• Small to larger reports ranging from 1,000 records up to 100,000 records and 10 pages up to 500 pages


Appendix 4: Reports

Design

basicbenchmark.rpt—basic line item report
4 tables, 17 fields, 1 group field
50 to 600 pages—1,000 to 100,000 records

summariesbenchmark.rpt—medium-complexity report containing summaries
4 tables, 17 fields, 4 group fields, 40 summary fields
1-page summary—10,000 records

featuremixbenchmark.rpt—complex report containing a mixture of reporting features (e.g., charts, summaries, groups, parameters, etc.)
4 tables, 17 fields, 19 formula fields, 3 running total fields, 4 group fields, 8 special fields, 18 summary fields, 3 charts, 1 cross-tab, 1 bitmap image
50 to 2,169 pages—1,000 to 100,000 records

crosstabbenchmark.rpt—report containing cross-tabs
4 tables, 17 fields, 2 group fields, 2 cross-tabs, 6 formula fields, 1 special field
21 pages—10,000 records

textobjectbenchmark.rpt—report containing text objects
4 tables, 17 fields, 1 group field, 10 formatted text objects (text rotation, tool tips, etc.)
245 pages

[Sample report screenshots: featuremixbenchmark.rpt, summariesbenchmark.rpt, basicbenchmark.rpt, crosstabbenchmark.rpt, and textobjectbenchmark.rpt.]

Appendix 5: Report Data Source

Native IBM DB2 8.1 server and client, using the following TPC-R databases (TPC: Transaction Processing Performance Council—www.tpc.org):


Database/Schema Name   Rows      Size (KB)
tpcsf067               100,000   85,000
tpcsf0067              10,000    7,000
tpcsf00067             1,000

Appendix 6: Software Environment

Crystal Enterprise 10 Configuration and Tests

WebSphere

Please refer to the document “Performance Tuning for the Crystal Enterprise Java SDK over the Web,” available through the product team, for WebSphere performance settings:

http://salescentral.businessobjects.com/products_services/products/reporting/crystal_enterprise.asp

DB2

Please refer to the document “CE10 System and Database Configuration and Tuning Guide,” available through product management, for database performance settings:

http://salescentral.businessobjects.com/products_services/products/reporting/crystal_enterprise.asp


Server configuration for each test:

CE 10 Servers   Web App Servers   Users   server1 (p69001p1)   server2 (p69001p2)   server3 (p69001p3)   server4 (p69001p4)   WebSphere
4 x 1 CPU       1 x 2 CPU         500     2cms, frs, ras       4ps, 2cs             4ps, 2cs             2js(15)              1*2
4 x 2 CPU       1 x 4 CPU         1,000   4cms, frs, ras       4ps, 2cs, 1js        4ps, 2cs, 1js        4ps, 2cs, 1js        1*2
4 x 4 CPU       2 x 4 CPU         2,000   4cms, frs, ras       8ps, 4cs             8ps, 4cs             3js(15)              2*3
4 x 8 CPU       4 x 4 CPU         4,000   6cms, frs, ras       16ps, 8cs            16ps, 8cs            6js(15)              4*2

Virtual user loads tested on each configuration:

4 (4 x 1 CPU) CE CPUs, 1 x 2-CPU WAS: 100, 200, 400, and 600 virtual users
8 (4 x 2 CPU) CE CPUs, 1 x 4-CPU WAS: 200, 400, 600, 800, and 1,000 virtual users
16 (4 x 4 CPU) CE CPUs, 2 x 4-CPU WAS: 400, 600, 800, 1,000, 1,600, and 2,000 virtual users
32 (4 x 8 CPU) CE CPUs, 4 x 4-CPU WAS: 800, 1,000, 1,600, 2,000, 3,200, and 4,000 virtual users

cms = crystal management server, ps = page server, js = report server, cs = cache server, ras = report application server

Appendix 7: Topology

[Topology diagram.]

Appendix 8: Hardware Environment

Crystal Enterprise 10 Servers

• 32 CPU p690

• Gigabit Ethernet

• System Model: IBM, 7040-681

• Processor Type: PowerPC_POWER4

• Number Of Processors: 32

• LPAR Info: 4 LPARs (p69001p1, p69001p2, p69001p3, p69001p4)

• Processor Clock Speed: 1300 MHz

• Memory Size: 32768 MB

• Good Memory Size: 32768 MB

Web Application Servers (CE10 Java SDK and WCA)

3 x 4 CPU p630

Each configured as:

• 10/100 Mbps Ethernet PCI Adapter II

• System Model: IBM, 7028-6E4

• Processor Type: PowerPC_POWER4

• Number Of Processors: 4

• Processor Clock Speed: 1453 MHz

• Memory Size: 8192 MB

• Good Memory Size: 8192 MB

1 x 4 CPU p630

Configured as:

• 10/100 Mbps Ethernet PCI Adapter II

• System Model: IBM, 7028-6C4

• Processor Type: PowerPC_POWER4

• Number Of Processors: 4

• Processor Clock Speed: 1002 MHz

• Memory Size: 4096 MB

• Good Memory Size: 4096 MB



Printed in France and the United States – August 2004.

Business Objects owns the following U.S. patents, which may cover products that are offered and sold by Business Objects: 5,555,403; 6,247,008 B1; 6,578,027 B2; 490,593; and 6,289,352.

Business Objects, the Business Objects logo, Crystal Reports, and Crystal Enterprise are trademarks or registered trademarks of Business Objects SA or its affiliated companies in the United States and other countries. All other names mentioned herein may be trademarks of their respective owners. Product specifications and program conditions are subject to change without notice.

Copyright © 2004 Business Objects. All rights reserved. PT# WP2086-A

www.businessobjects.com

Americas
Business Objects Americas
3030 Orchard Parkway
San Jose, California 95134
USA
Tel: +1 408 953 6000
     +1 800 877 2340

Asia-Pacific
Business Objects Asia Pacific Pte Ltd
350 Orchard Road
#20-04/06 Shaw House
Singapore 238868
Tel: +65 6887 4228

Europe, Middle East, Africa
Business Objects SA
157-159 rue Anatole France
92309 Levallois-Perret Cedex
France
Tel: +33 1 41 25 21 21

Japan
Business Objects Japan K.K.
Head Office
Yebisu Garden Place Tower 28F
4-20-3 Ebisu, Shibuya-ku
Tokyo 150-6028
Tel: +81 3 5447 3900

For a complete listing of our sales offices, please visit our web site.