5/14/2018 Eight Steps to Better Test Automation - slidepdf.com
http://slidepdf.com/reader/full/eight-steps-to-better-test-automation 1/6
Eight Steps to Better Test Automation
Cognizant White Paper
Automation doesn't automatically make software testing faster, more reliable and less expensive. Because the up-front costs of automation tools and set-up can be high, automated testing only pays off if the long-term savings offset those initial expenses. In addition, not all automation tools and methodologies have the same features, functions and capabilities, and each project may have different requirements that affect its costs and benefits.
To help you get the maximum benefit from automated testing, we offer eight tips based on our experience with more than 50 enterprise-level automated testing projects worldwide. Following these tips can help increase the ROI of test automation and improve software quality.
Top Eight Ways to Achieve Efficient Test Automation
One: Choose What to Automate
Not all software projects lend themselves equally to automated testing. Understanding which factors increase the difficulty of automating a specific test project is essential to balancing the costs against the benefits. More appropriate candidates for automated testing include code that:
• Plays an important role in an application
• Processes large amounts of data
• Executes common processes
• Can be used across applications
Nine-Point Test Decision Tree
Organizations should also look for non-traditional and even unplanned areas over which they can spread their automation investment. These include automating the testing of installation routines for patches and defect fixes, automating test management, and automating the creation of test reports.
Less appropriate candidates for automated testing include:
• Code that makes extensive use of non-standard software controls (the components that define the user interface). Automated test tools typically can assess the quality of standard controls within operating systems such as Windows and Linux, but often cannot do the same for controls created by development tools vendors such as Infragistics, and application vendors such as Oracle and Siebel.
• Code or test cases for which the production or infrastructure teams have not supplied the right type of data, or the proper amount of data, to support a full-scale automated test program.
• Code that changes often, since such changes may also require time-consuming manual revisions to either the test script or the test data.
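The screening guidance above can be sketched as a simple gate. The field names and thresholds below are illustrative assumptions for the sketch, not criteria taken from the paper:

```python
# Illustrative candidate filter based on the selection guidance above.
# Field names and thresholds are assumptions, not Cognizant's criteria.

from dataclasses import dataclass

@dataclass
class TestCandidate:
    uses_standard_controls: bool   # non-standard UI controls are hard to automate
    has_sufficient_test_data: bool # teams must supply the right type/amount of data
    changes_per_quarter: int       # frequent change forces manual script revisions
    runs_per_release: int          # frequent execution amortizes automation cost

def is_good_automation_candidate(c: TestCandidate) -> bool:
    if not c.uses_standard_controls or not c.has_sufficient_test_data:
        return False
    if c.changes_per_quarter > 4:        # too volatile to keep scripts current
        return False
    return c.runs_per_release >= 3       # run often enough to pay back set-up
```

A real screening effort would weigh these factors against each other rather than treat each as a hard cut-off, but even a crude gate like this makes the trade-offs explicit.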
We have developed a nine-point decision tree (see Figure 1) that helps clients select automation candidates based on criteria that include:
• Technical feasibility
• The frequency of test execution
• The extent to which test components can be reused
• Total resource requirements
• The complexity of the test cases
• The ability to use the same test cases in multiple browsers or environments
• The time required to execute the tests
Two: Choose Your Test Tools
Because companies must amortize their investment in automation, they should select automated test tools that will meet their needs for years. Proper evaluation criteria include:
• Support for various types of automated testing, including functional, test management, mobile, SOA and IVR (interactive voice response) software.
• Support for multiple testing frameworks (the methodologies that enable the reusability and efficiency of automation).
• The ability to recognize objects created in a wide variety of programming languages.
• Stable setup and operation on any environment/platform.
• Ease of debugging the automation scripts.
• Efficient test execution with a minimum of manual effort.
• Automatic recovery from application failures to prevent test interruptions.
• A powerful scripting language that makes it easy to develop scripts (the instructions carried out in a specific test) that can be reused across platforms and test types.
• The ability to report results to, and be controlled by, a wide variety of test management platforms.
• Ease of use to minimize training costs and delays.

Figure 1
There are several commercial tools that provide these capabilities, but they are expensive; the decision to procure them should depend on the proposed investment and, as mentioned above, on future requirements. Open-source tools offer a lower-cost alternative, and customizing such open-source tools to meet specific testing needs can be an effective way of reducing costs.
Another way to speed ROI from automated testing is to use inexpensive software tools that, while not designed for that purpose, can be useful in automating certain scenarios. Examples:
• Using the macro-recording capabilities that ship with most operating systems to automate processes such as loading test data, scheduling, and executing tests.
• Using free or low-cost file comparison utilities to determine whether actual test output matches the expected results, at far lower cost than using commercial off-the-shelf test automation tools.
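As a minimal sketch of the second example, a scripting language's standard library can stand in for a comparison utility; the file names here are placeholders:

```python
# Compare actual test output against an expected baseline file using
# Python's standard library instead of a commercial comparison tool.

import difflib
from pathlib import Path

def outputs_match(actual_path: str, expected_path: str) -> bool:
    """Return True if the two files match; print a unified diff if not."""
    actual = Path(actual_path).read_text().splitlines()
    expected = Path(expected_path).read_text().splitlines()
    diff = list(difflib.unified_diff(expected, actual,
                                     fromfile=expected_path,
                                     tofile=actual_path,
                                     lineterm=""))
    for line in diff:
        print(line)          # log any mismatches for the tester
    return not diff
```

Dedicated file-comparison utilities add conveniences such as binary and directory comparison, but for plain-text expected/actual checks this is often enough.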
We have developed a methodology to help customers choose the test automation software most appropriate to their needs, and to make the most effective use of both new and existing automated test tools (see Figure 2). It begins with defining the objectives for the tools and specifying the tests to be automated, such as functional testing or back-end validation. The next steps are to define the requirements, build an evaluation scorecard, perform a proof of concept, and finally prepare the tools for deployment. This methodology also helps customers optimize their use of automated test tools by identifying all testing needs across the organization, creating an inventory of tools already available, and reviewing existing licensing agreements to ensure only the required licenses are purchased.

Among the open-source tools we have customized are the Selenium suite for automated testing of Web applications and the Bugzilla defect tracking tool. We have also created a framework based on the WATIR (Web Application Testing in Ruby) toolkit, used to automate browser-based tests during Web application development.
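The evaluation-scorecard step might look like the following sketch; the criterion names, weights and ratings are hypothetical, not part of Cognizant's methodology:

```python
# Illustrative evaluation scorecard: weight each selection criterion,
# rate each candidate tool 0-5, and rank by weighted total.
# Criterion names and weights are hypothetical examples.

WEIGHTS = {
    "framework_support": 0.25,
    "object_recognition": 0.20,
    "platform_stability": 0.20,
    "debugging_ease": 0.15,
    "reporting_integration": 0.10,
    "ease_of_use": 0.10,
}

def rank_tools(scores: dict) -> list:
    """scores: {tool_name: {criterion: 0-5 rating}}.
    Returns (tool, weighted_score) pairs, best first."""
    ranked = [
        (tool, round(sum(WEIGHTS[c] * r for c, r in ratings.items()), 2))
        for tool, ratings in scores.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```

Keeping weights explicit forces the team to agree, up front, on which criteria actually matter before any vendor demos begin.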
Tool Selection
Three: Refine Your Test Process

In many organizations, the lack of centralized, standardized automation processes has led to an elongated testing lifecycle that is prohibitively expensive and fails to detect the maximum number of defects. Improving such processes requires the following (see Figure 3):
• Describing the risks of current methods, and showing how testing can be done at a lower cost, more quickly and/or more effectively.
• Management commitment to provide the funding and support for the workflow changes required to improve testing processes.
• Securing support from the test staff for meeting improvement goals.
• Training to provide test managers, specialists and testing engineers with specialized skills in testing methodologies.
• Prioritizing process improvements based on business goals.
• Ongoing measurement of test processes to achieve a higher return on investment.
Figure 2
A Standardized Test Automation Framework
Four: Choose a Framework
Just as with any tool, automation test solutions must be used correctly to be effective. Choosing an appropriate framework can help boost long-term reusability and efficiency.

A framework is not a replacement for an automation tool, but a roadmap for its effective use. It should also allow for parameterization (separate storage) of test scripts and test data, to allow the maximum reuse and easy maintainability of both. In this way, when changes are made to a test case, only the test case file needs to be updated, while the driver and start-up scripts remain the same. Similarly, the test data can be changed as needed without requiring changes to the test scripts.
Among the popular frameworks are those that are "data-driven," where the test data is stored separately from the automation tool. This provides significant ease of use and customization of reports, eases data maintainability, and allows multiple test cases to be conducted with multiple sets of input data. However, the initial cost and maintenance overhead can be considerable. Another approach is "keyword-driven," in which data tables and keywords are maintained independently of the automation tool and the test scripts that drive the tests. But this is somewhat harder to use than the data-driven approach, which increases costs and delays.
We have developed a hybrid framework (see Figure 4) that combines the best elements of both the keyword-driven and the data-driven approaches. It stores the test data apart from the automation tool (usually in an Excel spreadsheet), making it very easy to maintain and reuse the scripts.
A Hybrid Approach to Test Automation
It allows multiple test cases to be driven with multiple sets of data, and parameterizes the data to ease maintenance and modification. It drives efficiency by allowing business users to design and change tests without learning complex scripting languages. The use of keywords makes it easy to measure how much of the code required to perform common business functions, such as "validate applicant's age," is reused over time.
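A keyword-driven dispatch of the kind described above can be sketched as follows. The keywords, checks and data rows are hypothetical, and a real framework would read the rows from the Excel sheet rather than a Python list:

```python
# Minimal keyword-driven runner sketch: business-readable keywords map to
# actions, and test data lives in rows apart from the scripts.
# Keywords and validation rules are invented for the example.

def validate_applicant_age(age: int) -> bool:
    return 18 <= age <= 100

def validate_account_balance(balance: float) -> bool:
    return balance >= 0

# Keyword table: maps keywords (as they would appear in a spreadsheet
# column) to the functions that implement them.
KEYWORDS = {
    "validate_applicant_age": validate_applicant_age,
    "validate_account_balance": validate_account_balance,
}

def run_test_rows(rows):
    """rows: iterable of (keyword, test_data) pairs, one per spreadsheet row.
    Returns (keyword, test_data, passed) for each row."""
    results = []
    for keyword, test_data in rows:
        passed = KEYWORDS[keyword](test_data)
        results.append((keyword, test_data, passed))
    return results
```

Because the rows carry both the keyword and its data, business users can add or change cases by editing the sheet, without touching the runner or the keyword implementations.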
Five: Don't Underestimate the Manual Effort

The word "automation" implies that machines do the work, not humans. But the amount of manual effort required in "automated" testing is one of the least understood aspects of software testing. Humans must set up the test machines, create the scripts, select the tests, analyze the test results, log the defects and clean up the test machines. An accurate estimate of these costs is important not only for budgeting and planning, but also for building an accurate return-on-investment calculation.
In our client work, we have identified the factors IT organizations should take into account in estimating the manual effort required for test automation. These include the complexity of the language used to create test scripts, and the amount of work required to plan, generate, execute and maintain the scripts. Another aid to estimating effort is classifying test cases as simple, medium or complex, based on the number of transactions and the number of steps within the scripts required by each case.
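The classification-based estimate might be sketched like this; the thresholds and hour figures are placeholder assumptions, not benchmarks from the paper:

```python
# Illustrative manual-effort estimate based on the simple/medium/complex
# classification above. Thresholds and hours are placeholder assumptions.

HOURS_PER_CASE = {"simple": 2, "medium": 5, "complex": 12}

def classify(transactions: int, steps: int) -> str:
    """Bucket a test case by its transaction and step counts."""
    if transactions <= 2 and steps <= 10:
        return "simple"
    if transactions <= 5 and steps <= 30:
        return "medium"
    return "complex"

def estimate_manual_hours(cases) -> int:
    """cases: iterable of (transactions, steps) pairs for the test inventory."""
    return sum(HOURS_PER_CASE[classify(t, s)] for t, s in cases)
```

The point of the exercise is less the exact numbers than making the per-class cost visible, so the ROI calculation in Step Eight starts from a defensible effort figure.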
Figure 4
Figure 3
Six: Watch Those Scripts
Test scripts are software in their own right, and they benefit from the same coding standards applied to production code. Areas in which standards are particularly important include the following (see Figure 5):
• Exception handling, which determines whether, and how well, the test script can recover from the failure of the application being tested, or from unexpected behavior such as the appearance of a pop-up. The use of standards helps ensure all scripts continue to run in the event of unexpected conditions, reducing the need for manual intervention and speeding test execution.
• Error logging, in which standards make it easier for multiple developers and testers to analyze and act on the test results.
• Documentation standards, in areas such as comments and indentation, which help developers create uniform scripts and understand script code created by others.
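The first two standards can be illustrated with a short sketch; the step names and recovery action are invented for the example:

```python
# Sketch of standardized exception handling and error logging for a test
# script: every step runs through one wrapper, so failures are logged in a
# uniform format and a standard recovery keeps the suite running.

import logging

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("suite")

def run_step(name, action, recover=None):
    """Run one test step; on failure, log it and attempt a standard
    recovery (e.g. dismissing an unexpected pop-up), then continue."""
    try:
        action()
        log.info("PASS %s", name)
        return True
    except Exception as exc:
        log.error("FAIL %s: %s", name, exc)
        if recover is not None:
            recover()          # standardized recovery keeps later steps alive
        return False
```

Because every script logs through the same wrapper, multiple testers can read any suite's results the same way, which is exactly what the error-logging standard is after.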
Scripting for Success
Along with the development and use of scripting standards, we have developed consolidated toolsets for script creation that have saved customers 10% to 15% in scripting costs by allowing greater script reuse.
Seven: Identify Who Will Execute the Tests
Automation raises the expectation among managers that testing can be achieved with little or no manual effort. As a result, they often do not allocate the staff required to perform the manual steps involved in automated testing, such as analyzing test results and creating and cleaning up test machines. Many of the likely candidates for such work are either unavailable or lack the proper skills.
For example:
• Functional testers already have their time allocated to manual testing.
• Automation engineers, while qualified to run the tests, typically don't understand enough of the functional requirements to analyze the failed scenarios, and they are too busy developing and maintaining tests to execute the tests and track defects.

Without planning for the proper number of skilled staff, an automated testing program will fall victim to unexpected outages, delays and cost overruns, and the organization will not be able to run enough tests frequently enough to justify its investment in test automation.
That's why it is important to determine, at the beginning of the process, who will own and execute all the processes around the actual automated testing. As an experienced test automation consulting firm, we can help customers make sure they have the right amount of skilled staff to cost-effectively ensure software quality. This includes accurately assessing the manual requirements at all stages of the automated testing process, and making better use of existing staff. Functional testers can be retrained, for example, to both execute and maintain scripts (see Figure 6).
Figure 5
About Cognizant:

Cognizant (NASDAQ: CTSH) is a leading provider of information technology, consulting, and business process outsourcing services. Cognizant's single-minded passion is to dedicate our global technology and innovation know-how, our industry expertise and worldwide resources to working together with clients to make their businesses stronger. With over 50 global delivery centers and more than 68,000 employees as of September 30, 2009, we combine a unique onsite/offshore delivery model infused by a distinct culture of customer satisfaction. A member of the NASDAQ-100 Index and S&P 500 Index, Cognizant is a Forbes Global 2000 company and a member of the Fortune 1000, and is ranked among the top information technology companies in BusinessWeek's Hot Growth and Top 50 Performers listings.

Visit us online at www.cognizant.com.
Real-World Example:

A global financial services company turned to Cognizant to speed its testing of business-critical code, and to improve the return on its automation investment through the reuse of test components. Cognizant helped the client establish a central function to perform automated functional and regression testing of key modules such as trading, payments, accounts, safekeeping and wealth management, and to implement an automation architecture.

Cognizant reduced the overall time required for testing by 72%, and the effort required to design and execute multilingual tests by 70%, by creating a test quality center and developing a framework to maximize component reuse and the ROI of automated testing. The client automated 43% of its regression testing, reuses 60% of its automation scripts, and has extended the benefits of this work by implementing the automation framework on multiple projects. Cognizant also developed a dashboard that gives business and technical managers measurable performance metrics, showing weekly progress on key performance indicators and service level agreements.
Conclusion:

Automating software testing is not as simple or as quick as the term implies. Software test tools can be expensive, and setting up tests, executing them and analyzing the results requires extensive manual work. But by using the proper tools, such as open-source software, together with the proper processes and automation frameworks, organizations can realize the cost savings and quality benefits of automated software testing.
Resourcing the Test Automation Function
Eight: Measure Your Success -- Accurately
Tools such as the Automated Execution Efficiency Ratio (AEER) can determine how effectively you are executing automated tests by analyzing human effort as a percentage of the total effort needed to execute the automated tests.
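The paper does not give AEER's exact formula, so the following is an assumed reading of the ratio it describes:

```python
# Assumed reading of the AEER idea above: manual effort as a share of the
# total effort needed to execute the automated tests. Lower is better,
# meaning less human intervention per automated run.

def aeer(manual_hours: float, total_hours: float) -> float:
    """Return manual effort as a percentage of total execution effort."""
    if total_hours <= 0:
        raise ValueError("total_hours must be positive")
    return 100.0 * manual_hours / total_hours
```

Tracked release over release, a falling AEER indicates that the automation is genuinely reducing human involvement rather than just relocating it.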
But don't just compare the number of person-hours required for manual vs. automated tests for a given size of code. Be sure to include other benefits such as the following (see Figure 7):
• A higher percentage of defects found.
• Reductions in the time needed for testing.
• Reduced time to market.
• Improvements in customer satisfaction.
• Increased productivity due to improved application quality.

All these are real benefits that must be balanced against staffing and tool licensing costs to determine an accurate return on investment.
Measuring What Matters
Figure 7
Figure 6