
Software performance by Heider Jeffer


Topics in Informatics (Fall 2009)

instructor: Mehdi Jazayeri
assistant: Saša Nešić

class date:
speaker:

topic: Software Performance

Hayder ALMusawi

Summary

In software engineering, performance testing is testing that is performed, from one perspective, to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the design and architecture of a system, prior to the onset of actual coding effort.

Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure what parts of the system or workload cause the system to perform badly. In the diagnostic case, software engineers use tools such as profilers to measure what parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintained acceptable response time. It is critical to the cost performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true in the case of functional testing, but even more so with performance testing, due to the end-to-end nature of its scope.


Purposes

* demonstrate that the system meets performance criteria.
* compare two systems to find which performs better.
* measure what parts of the system or workload cause the system to perform badly.

In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, not entirely possible in actual practice. The reason is that production workloads have a random nature, and while test workloads do their best to mimic what may happen in the production environment, it is impossible to exactly replicate this variability except in the simplest systems.
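As an illustration, the sketch below injects randomized think times between requests so that a test workload at least approximates the random arrival pattern of production traffic. It is a minimal sketch: the target URL, request count and mean arrival interval are made-up assumptions, not values from this text.

```python
# Sketch (illustrative): randomized think times to approximate production-like
# workload variability. TARGET_URL and the timing constants are assumptions.
import random
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"    # hypothetical system under test
MEAN_ARRIVAL_INTERVAL = 2.0              # average seconds between requests
REQUESTS = 20

def one_request(url: str) -> float:
    """Issue a single request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    for _ in range(REQUESTS):
        print(f"response time: {one_request(TARGET_URL):.3f}s")
        # Exponentially distributed pauses give Poisson-like arrivals rather
        # than the fixed pacing of a naive script.
        time.sleep(random.expovariate(1.0 / MEAN_ARRIVAL_INTERVAL))
```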

Loosely coupled architectural implementations (e.g. SOA) have created additional complexities with performance testing. Enterprise services or assets that share common infrastructure or platforms require coordinated performance testing, with all consumers creating production-like transaction volumes and load on the shared infrastructure or platform, to truly replicate production-like states. Due to the complexity and the financial and time requirements of this activity, some organizations now employ tools that can monitor and create production-like conditions (also referred to as "noise") in their performance testing environments (PTE) to understand capacity and resource requirements and to verify and validate quality attributes.

Performance Testing Sub-Genres

* load
* stress
* endurance
* spike
* scalability

Load Testing

This is the simplest form of performance testing. A load test is usually conducted to understand the behavior of the application under a specific expected load. This load can be the expected number of concurrent users on the application performing a specific number of transactions within a set duration. The test yields the response times of all the important, business-critical transactions. If the database, application server, etc. are also monitored, then this simple test can itself point towards bottlenecks in the application software.
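A minimal load-test sketch in Python is shown below: a fixed number of virtual users issue requests concurrently and the response times are summarized. The URL, user count and request count are illustrative assumptions; a real test would script the actual business-critical transactions.

```python
# Sketch (illustrative): a load test with a fixed number of concurrent virtual
# users. TARGET_URL, VIRTUAL_USERS and REQUESTS_PER_USER are assumptions.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/checkout"   # hypothetical key transaction
VIRTUAL_USERS = 50
REQUESTS_PER_USER = 20

def virtual_user(_: int) -> list[float]:
    """One virtual user issuing a fixed number of requests, timing each."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
                resp.read()
        except OSError:
            continue  # only successful transactions are timed here
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))
    times = [t for user in results for t in user]
    print(f"transactions:        {len(times)}")
    print(f"mean response time:  {statistics.mean(times):.3f}s")
    print(f"95th percentile:     {statistics.quantiles(times, n=20)[18]:.3f}s")
```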


Stress Testing

This testing is normally used to break the application. The number of users is doubled, and the test is run again, repeating until the application breaks down. This kind of test is done to determine the application's robustness in times of extreme load, and it helps application administrators determine whether the application will perform sufficiently if the current load goes well above the expected load.
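The sketch below shows the doubling idea: the virtual-user count is doubled each step until the error rate crosses a threshold, which approximates the break point. The URL, starting load, ceiling and 5% threshold are assumptions for illustration.

```python
# Sketch (illustrative): double the virtual users each step until the error
# rate crosses a threshold. URL, limits and the 5% threshold are assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"
MAX_USERS = 1024
ERROR_THRESHOLD = 0.05   # stop once more than 5% of requests fail

def single_request(_: int) -> bool:
    """Return True if one request succeeds within the timeout."""
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        return True
    except OSError:
        return False

def error_rate(users: int) -> float:
    with ThreadPoolExecutor(max_workers=users) as pool:
        outcomes = list(pool.map(single_request, range(users)))
    return 1.0 - sum(outcomes) / len(outcomes)

if __name__ == "__main__":
    users = 16
    while users <= MAX_USERS:
        rate = error_rate(users)
        print(f"{users:5d} users -> {rate:.1%} errors")
        if rate > ERROR_THRESHOLD:
            print(f"break point reached at roughly {users} concurrent users")
            break
        users *= 2           # double the load each step, as described above
        time.sleep(5)        # let the system settle between steps
```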

Endurance Testing (Soak Testing)

This test is usually done to determine whether the application can sustain the continuous expected load. During endurance tests, memory utilization is monitored to detect potential leaks. Also important, but often overlooked, is performance degradation: the test should ensure that throughput and/or response times after some long period of sustained activity are as good as, or better than, at the beginning of the test.
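A soak-test sketch under those assumptions might record throughput and mean response time per interval over a long run, so the figures at the end can be compared with those at the beginning. The URL, duration and reporting interval below are illustrative.

```python
# Sketch (illustrative): a soak run reporting throughput and mean response
# time per interval, so late intervals can be compared with early ones.
# TARGET_URL, TEST_DURATION and INTERVAL are assumptions.
import statistics
import time
import urllib.request

TARGET_URL = "http://localhost:8080/"
TEST_DURATION = 60 * 60      # one hour here; real soak tests often run longer
INTERVAL = 60                # report once per minute

def timed_request() -> float | None:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
            resp.read()
    except OSError:
        return None          # failed requests are simply skipped in this sketch
    return time.perf_counter() - start

if __name__ == "__main__":
    test_end = time.time() + TEST_DURATION
    while time.time() < test_end:
        interval_end = time.time() + INTERVAL
        timings = []
        while time.time() < interval_end:
            t = timed_request()
            if t is not None:
                timings.append(t)
        if timings:
            # Rising response times or falling throughput across intervals is
            # the degradation (or leak) this test is meant to expose.
            print(f"throughput: {len(timings) / INTERVAL:.2f} req/s, "
                  f"mean response: {statistics.mean(timings):.3f}s")
```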

Spike Testing

Spike testing, as the name suggests, is done by suddenly increasing (spiking) the number of users and observing the behavior of the application: whether it will go down, or whether it will be able to handle dramatic changes in load.
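A spike-test sketch could drive the load from a step schedule with an abrupt jump, as below. The schedule values and the target URL are hypothetical.

```python
# Sketch (illustrative): a step schedule with an abrupt jump in virtual users.
# The schedule and TARGET_URL are hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"
# (concurrent users, seconds to hold that level): quiet, sudden spike, quiet
SCHEDULE = [(10, 60), (500, 120), (10, 60)]

def single_request(_: int) -> bool:
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        return True
    except OSError:
        return False

if __name__ == "__main__":
    for users, hold in SCHEDULE:
        deadline = time.time() + hold
        failures = total = 0
        while time.time() < deadline:
            with ThreadPoolExecutor(max_workers=users) as pool:
                outcomes = list(pool.map(single_request, range(users)))
            total += len(outcomes)
            failures += outcomes.count(False)
        print(f"{users:4d} users held {hold}s -> {failures}/{total} failed")
```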

Scalability Testing

Scalability testing is an extension of performance testing and part of the battery of non-functional tests. It tests a software application's capability to scale up or scale out in terms of any of its non-functional capabilities, be it the user load supported, the number of transactions, or the data volume. The purpose of scalability testing is to identify major workloads and mitigate bottlenecks that can impede the scalability of the application.
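The sketch below steps the user load and records throughput at each level; the point where throughput stops growing roughly in proportion to the load marks the bottleneck that limits scalability. The URL and step sizes are assumptions.

```python
# Sketch (illustrative): step the load and record throughput at each level to
# see where scaling flattens out. TARGET_URL and USER_STEPS are assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"
USER_STEPS = [10, 20, 40, 80, 160]

def single_request(_: int) -> bool:
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
            resp.read()
        return True
    except OSError:
        return False

def throughput(users: int) -> float:
    """Successful requests completed per second at a given concurrency."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        ok = sum(pool.map(single_request, range(users)))
    return ok / (time.perf_counter() - start)

if __name__ == "__main__":
    for users in USER_STEPS:
        print(f"{users:4d} users -> {throughput(users):7.1f} req/s")
    # If doubling the users stops roughly doubling the throughput, the
    # workload has hit a resource that does not scale (CPU, connection pool,
    # a single database, and so on).
```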

Prerequisites for Performance Testing

A stable build of the application, deployed in an environment that resembles the production environment as closely as possible.

The performance testing environment should not be shared with the UAT or development environment. This is dangerous: if UAT, integration testing or other testing is going on in the same environment, then the results obtained from the performance testing may not be reliable. As a best practice, it is always advisable to have a separate performance testing environment that resembles the production environment as much as possible.


Diversity of Performance Testing Tools

Performance testing is evolving as a separate science, with a growing number of performance testing tools such as:
* IBM's Rational Performance Tester
* JMeter
* HP's LoadRunner
* OpenSTA
* WebLOAD
* Silk Performer
* Neotys NeoLoad
* Quotium's QTest

Myths of Performance Testing

Some of the most common myths are given below:

1. Performance testing is done to break the system.
Stress testing is done to understand the break point of the system. Normal load testing, on the other hand, is generally done to understand the behavior of the application under the expected user load. Depending on other requirements, such as an expected spike in load or a continued load over an extended period of time, spike, endurance (soak) or stress testing would be called for.

2. Performance testing should only be done after system integration testing.
Although this is mostly the norm in the industry, performance testing can also be done while the initial development of the application is taking place. This approach is known as early performance testing. It ensures a holistic development of the application with the performance parameters in mind, and thus greatly reduces the chance of finding a performance bug just before the release of the application, and the cost involved in rectifying it.

3. Performance testing only involves the creation of scripts, and any application change requires only simple refactoring of the scripts.
Performance testing is itself an evolving science in the software industry. Scripting, although important, is only one component of performance testing. The major challenge for any performance tester is to determine the types of tests that need to be executed and to analyze the various performance counters to determine the performance bottleneck.

The other part of this myth, that a change in the application would require only a little refactoring of the scripts, is also untrue: any change to the UI, especially in a Web protocol, can entail complete re-development of the scripts from scratch. This problem becomes bigger if the protocols involved include Web Services, Siebel, Web Click n Script, Citrix, or SAP.


Technology

Performance testing technology employs one or more PCs or Unix servers to act as injectors, each emulating the presence of a number of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load, starting with a small number of virtual users and increasing the number over a period to some maximum. The test result shows how the performance varies with the load, given as number of users versus response time. Various tools are available to perform such tests. Tools in this category usually execute a suite of tests which emulate real users against the system. Sometimes the results can reveal oddities, e.g., that while the average response time might be acceptable, there are outliers of a few key transactions that take considerably longer to complete, something that might be caused by inefficient database queries, large images, and so on.
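The sketch below is a single-machine stand-in for this arrangement: threads play the injectors, the main thread plays the conductor, and response times are collated per ramp step, with percentiles reported alongside the mean so outliers remain visible. The URL and ramp values are illustrative assumptions.

```python
# Sketch (illustrative): threads stand in for injectors, the main thread for
# the conductor; response times are collated per ramp step. TARGET_URL, RAMP
# and REQUESTS_PER_USER are assumptions.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/"
RAMP = [5, 10, 20, 40]       # virtual users at each ramp step
REQUESTS_PER_USER = 10

def injector(_: int) -> list[float]:
    """One injector: a scripted sequence of timed interactions with the host."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
                resp.read()
        except OSError:
            continue
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    for users in RAMP:
        with ThreadPoolExecutor(max_workers=users) as pool:
            times = [t for user in pool.map(injector, range(users)) for t in user]
        # Percentiles and the maximum are reported alongside the mean so the
        # outliers mentioned above stay visible even when the average is fine.
        print(f"{users:3d} users: mean {statistics.mean(times):.3f}s, "
              f"95th pct {statistics.quantiles(times, n=20)[18]:.3f}s, "
              f"max {max(times):.3f}s")
```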

Performance Specifications

It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort. See performance engineering for more details.

However, performance testing is frequently not performed against a specification; i.e., no one will have expressed what the maximum acceptable response time for a given population of users should be. Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the weakest link: there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools include (or can have add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server metrics, which can be analyzed together with the raw performance statistics. Without such instrumentation, one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating (assuming a Windows system under test).
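One way to make such a specification explicit is to express it as data and check measured results against it, as in the sketch below; the limits and the sample measurements are made up for illustration.

```python
# Sketch (illustrative): a performance specification expressed as data and
# checked against measured results. All numbers here are made up.
import statistics

# Hypothetical specification: at 100 concurrent users, the mean response time
# must stay under 1 second and no transaction may exceed 4 seconds.
SPEC = {"users": 100, "mean_max_seconds": 1.0, "absolute_max_seconds": 4.0}

def meets_spec(response_times: list[float], spec: dict) -> bool:
    mean = statistics.mean(response_times)
    worst = max(response_times)
    print(f"mean:  {mean:.2f}s (limit {spec['mean_max_seconds']}s)")
    print(f"worst: {worst:.2f}s (limit {spec['absolute_max_seconds']}s)")
    return (mean <= spec["mean_max_seconds"]
            and worst <= spec["absolute_max_seconds"])

if __name__ == "__main__":
    # In practice these would come from a load-test run; here they are made up.
    measured = [0.4, 0.5, 0.6, 0.7, 0.9, 1.1, 1.3, 1.6, 1.9, 2.4]
    print("PASS" if meets_spec(measured, SPEC) else "FAIL")
```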


Methodology for Performance Testing Web Applications

According to the Microsoft Developer Network, the performance testing methodology consists of the following activities (a sketch of how Activities 2 and 3 might be captured as data follows the list):

* Activity 1. Identify the Test Environment.
Identify the physical test environment and the production environment as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project's life cycle.

* Activity 2. Identify Performance Acceptance Criteria.
Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics.

* Activity 3. Plan and Design Tests.
Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish the metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.

* Activity 4. Configure the Test Environment.
Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

* Activity 5. Implement the Test Design.
Develop the performance tests in accordance with the test design.

* Activity 6. Execute the Test.
Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

* Activity 7. Analyze Results, Tune, and Retest.
Analyze, consolidate, and share results data. Make a tuning change and retest, and determine whether performance improved or degraded. Each improvement made will return a smaller improvement than the previous one. When do you stop? When you reach a CPU bottleneck, the choices are then either to improve the code or to add more CPU.
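As an illustration of Activities 2 and 3, the sketch below captures a hypothetical usage model (key scenarios, their share of the user population, think times) together with per-scenario acceptance criteria as plain data; all scenario names and numbers are assumptions, not values from the methodology itself.

```python
# Sketch (illustrative): acceptance criteria (Activity 2) and a usage model of
# key scenarios (Activity 3) captured as plain data. Every name and number is
# a hypothetical assumption, not part of the MSDN methodology itself.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    share_of_users: float        # fraction of the virtual-user population
    think_time_seconds: float    # pause between steps for this user type
    max_response_seconds: float  # acceptance criterion for this scenario

USAGE_MODEL = [
    Scenario("browse catalogue", 0.70, 8.0, 2.0),
    Scenario("search",           0.20, 5.0, 3.0),
    Scenario("checkout",         0.10, 12.0, 5.0),
]

TOTAL_VIRTUAL_USERS = 200

if __name__ == "__main__":
    assert abs(sum(s.share_of_users for s in USAGE_MODEL) - 1.0) < 1e-9
    for s in USAGE_MODEL:
        users = round(TOTAL_VIRTUAL_USERS * s.share_of_users)
        print(f"{s.name:18s} {users:4d} users, think {s.think_time_seconds}s, "
              f"limit {s.max_response_seconds}s")
```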
