
Managing Successful Test Automation


Description

Many organizations never achieve the significant benefits that are promised from automated test execution. What are the secrets to test automation success? There are no secrets, but the paths to success are not commonly understood. Dorothy Graham describes the most important automation issues that you must address, both management and technical, and helps you understand and choose the best approaches for your organization—no matter which automation tools you use. If you don’t begin with good objectives for your automation, you will set yourself up for failure later. If you don’t show return on investment (ROI) from automation, your automation efforts may be doomed, no matter how technically good they are. Join Dot to learn how to identify achievable and realistic objectives for automation, show ROI from automation, understand technical issues including testware architecture, pick up useful tips, learn what works in practice, and devise an effective automation strategy.


Managing Successful Test Automation Contents

Session 0: Introduction to the tutorial Objectives, What we cover (and don’t cover) today

Session 1: Planning and Managing Test Automation Responsibilities Pilot project Test automation objectives (and exercise) Measures for automation Return on Investment (ROI) (and exercise)

Session 2: Testware Architecture Importance of a testware architecture What needs to be organised

Session 3: Pre- and Post-Processing Automating more than tests Test status

Session 4: Scripting Techniques Objectives of scripting techniques Different types of scripts Domain specific test language

Session 5: Automated Comparison Automated test verification Test sensitivity Comparison example

Session 6: Final Advice, Q&A and Direction Strategy exercise Final advice Questions and Answers

Appendix (useful stuff) That’s no reason to automate (Better Software article) Effective Automated Testing with a DSTL, Martin Gijsen Man and Machine, Jonathan Kohl (Better Software) Technical vs non-technical skills in test automation


0-1

Managing Successful Test Automation

Prepared and presented by

Dorothy Graham

© Dorothy Graham 2013

www.DorothyGraham.co.uk email: [email protected]

Twitter: @DorothyGraham

0-2

Objectives of this tutorial

•  help you achieve better success in automation –  independent of any particular tool

•  mainly management and some technical issues – objectives for automation – showing Return on Investment (ROI) –  importance of testware architecture – practical tips for a few technical issues – what works in practice (case studies)

•  help you plan an effective automation strategy


0-3

Tutorial contents

1) planning & managing test automation 2) testware architecture 3) pre and post processing 4) scripting techniques 5) automated comparison 6) final advice, Q&A and direction

0-4

Shameless commercial plug

www.DorothyGraham.co.uk [email protected]

Part 1: How to do automation - still relevant today, though we plan to update it at some point

New book!


0-5

What is today about? (and not about)

•  test execution automation (not other tools) •  I will NOT cover:

– demos of tools (time, which one, expo) – comparative tool info (expo, web) – selecting a tool*

•  at the end of the day – understand technical and non-technical issues – have your own automation objectives – plan your own automation strategy

* I will email you Ch 10 of the STA book on request – [email protected]

0-6

Test automation survey 2004 - 2010
– survey by Erik van Veenendaal in Professional Tester magazine (Nov/Dec 2010)

                                   2004    2010
– have test execution tools         29%     48%
– have shelfware tools              26%     28%
– achieving many benefits           27%     50%

•  most common shelfware – test execution tools (33%)


0-7

Current tool adoption (Australia)

•  K.J. Ross & Associates survey 2010
   – 19% good benefits
   –  7% using tools but not anticipated ROI
   – 35% don't have skills to adopt or maintain
   – 14% automation just an added burden, no time
   – 12% tools abandoned – too much maintenance
   – 10% tools bought, no resources to adopt
   –  3% no budget to buy or adopt tools

Source: www.kjross.com.au (good webinar on automation)

26% success (partial); 74% failure!

0-8

About you

•  your Summary and Strategy document – where are you now with your automation?

– what are your most pressing automation problems?

– why are you here today?

There is small group work throughout the day (maximum of 5 per group); introduce yourselves within your group.


1-1

Planning & Managing Test Automation


1-2

Contents Responsibilities

Pilot project Test automation objectives Measures for automation

Return on Investment (ROI)



1-3

What is an automated test?

•  a test! – designed by a tester for a purpose

•  test is executed –  implemented / constructed to run automatically

using a tool – could be run manually also

•  who decides what tests to run? •  who decides how a test is run?

1-4

Existing perceptions of automation skills

•  many books & articles don't mention automation skills
   – or assume that they must be acquired by testers
•  test automation is technical in some ways
   •  using the test execution tool directly (script writing)
   •  designing the testware architecture (framework / regime)
   •  debugging automation problems
   –  this work requires technical skill
   – most people now realise this (but many still don't)

See article: “Technical vs non-technical skills in test automation”


1-5

Responsibilities

Testers:
•  test the software
   –  design tests
   –  select tests for automation (requires planning / negotiation)
•  execute automated tests
   –  should not need detailed technical expertise
•  analyse failed automated tests
   –  report bugs found by tests
   –  problems with the tests may need help from the automation team

Automators:
•  automate tests (requested by testers)
•  support automated testing
   –  allow testers to execute tests
   –  help testers debug failed tests
   –  provide additional tools (home-grown)
•  predict
   –  maintenance effort for software changes
   –  cost of automating new tests
•  improve the automation
   –  more benefits, less cost

1-6

Test manager’s dilemma

•  who should undertake automation work – not all testers can automate (well) – not all testers want to automate – not all automators want to test!

•  conflict of responsibilities – automate tests vs. run tests manually

•  get additional resources as automators? – contractors? borrow a developer? tool vendor?


1-7

Recent poll

EuroStar Webinar, Intelligent Mistakes in Test Automation, 20 Sept 2012

[Bar chart: "Testers as automators?" – number of responses for each option]
– Testers should never be automators
– Testers shouldn't have to become automators
– Testers can be automators if they want to
– Every tester should be able to write code

1-8

Roles within the automation team

•  Testware architect – designs the overall structure for the automation

•  Champion –  “sells” automation to managers and testers

•  Tool specialist / toolsmith –  technical aspects, licensing, updates to the tool

•  Automated test (& script) developers – write new keyword scripts as needed – debug automation problems


1-9

Agile automation: Lisa Crispin

– starting point: buggy code, new functionality needed, whole team regression tests manually
–  testable architecture (open source tools)
   •  want unit tests automated (TDD), start with new code
   •  start with GUI smoke tests - regression
   •  business logic in middle level with FitNesse
– 100% regression tests automated in one year
   •  selected set of smoke tests for coverage of stories
– every 6 months, engineering sprint on the automation
– key success factors
   •  management support & communication
   •  whole team approach, celebration & refactoring

Chapter 1, pp 17-32, Experiences of Test Automation

1-10

Automation and agile

•  agile automation: apply agile principles to automation
   – multidisciplinary team
   – automation sprints
   –  refactor when needed
•  fitting automation into agile development
   –  ideal: automation is part of "done" for each sprint
      •  Test-Driven Design = write and automate tests first
   – alternative: automation in the following sprint
      •  may be better for system level tests

See www.satisfice.com/articles/agileauto-paper.pdf (James Bach)


1-11

Automation in agile/iterative development

[Diagram: in each iteration, testers manually test the new release (A, B, C, …); automators automate the best of those tests; the accumulated automated tests (e.g. F E D C B A) are then run by testers as regression tests on later releases.]

1-12

Requirements for agile test framework

•  Support manual and automated testing – using the same test construction process

•  Support fully manual execution at any time –  requires good naming convention for components

•  Support manual + automated execution – so test can be used before it is 100% automated

•  Implement reusable objects •  Allow “stubbing” objects before GUI available

Source: Dave Martin, LDSChurch.org, email



1-14

A tale of two projects: Ane Clausen

– Project 1: 5 people part-time, within test group •  no objectives, no standards, no experience, unstable •  after 6 months was closed down

– Project 2: 3 people full time, 3-month pilot •  worked on two (easy) insurance products, end to end •  1st month: learn and plan, 2nd & 3rd months: implement •  started with simple, stable, positive tests, easy to do •  close cooperation with business, developers, delivery •  weekly delivery of automated Business Process Tests

– after 6 months, automated all insurance products

Chapter 6, pp 105-128, Experiences of Test Automation


1-15

Pilot project

•  reasons
   –  you're unique
   –  many variables / unknowns at start
•  benefits
   –  find the best way for you (best practice)
   –  solve problems once
   –  establish confidence (based on experience)
   –  set realistic targets
•  objectives
   –  demonstrate tool value
   –  gain experience / skills in the use of the tool
   –  identify changes to existing test process
   –  set internal standards and conventions
   –  refine assessment of costs and achievable benefits

1-16

What to explore in the pilot

•  build / implement automated tests (architecture)
   – different ways to build stable tests (e.g. 10 – 20)
•  maintenance
   – different versions of the application
   –  reduce maintenance for most likely changes
•  failure analysis
   – support for identifying bugs
   – coping with common bugs affecting many automated tests

Also: naming conventions, reporting results, measurement


1-17

After the pilot…

•  having processes & standards is only the start
   – 30% on new process, 70% on deployment
      •  marketing, training, coaching
      •  feedback, focus groups, sharing what's been done
•  the (psychological) Change Equation
   – change only happens if (x + y + z) > w
      x = dissatisfaction with the current state
      y = shared vision of the future
      z = knowledge of the steps to take to get from here to there
      w = psychological / emotional cost to change for this person

Source: Erik van Veenendaal, Successful Test Process Improvement



1-19

An automation effort

•  is a project – with goals, responsibilities, and monitoring – but not just a project – ongoing effort is needed

•  not just one effort – different projects – when acquiring a tool – pilot project – when anticipated benefits have not materialized – different projects at different times

•  with different objectives

•  objectives are important for automation efforts – where are we going? are we getting there?

1-20

A manual vs an automated test

[Diagram: test quality attributes – Effective, Exemplary, Economic, Evolvable – compared for an interactive (manual) test, the first run of an automated test, and an automated test.]


1-21

Good objectives for automation? –  run regression tests evenings and weekends

– give testers a new skill / enhance their image

–  run tests tedious and error-prone if run manually

– gain confidence in the system

–  reduce the number of defects found by users

1-22

Objectives Exercise


Test Automation Objectives Exercise The following are some possible test automation objectives. Evaluate each objective – is it a suitable objective for automation? If not, why not?

Which are already in place in your own organisation?

Possible test automation objectives – Good automation objective? (If not, why not) – Already in place?

Achieve faster performance for the system

NO – this is not an objective for test execution automation, nor is it an objective for performance testing! Performance test tools may help by giving the measurements to see whether the system is faster.

Achieve good results and quick payback with no additional resources, effort or time

Automate all tests

Build a long-lasting automation regime that is easy to maintain

Easy to add new automated tests

Ensure repeatability of regression tests

Ensure that we meet our release deadlines

Find more bugs

Find defects in less time

Free testers from repeated (boring) test execution to spend more time in test design


Improve our testing

Reduce elapsed time for testing by x%

Reduce the cost and time for test design

Reduce the number of test staff

Run more tests

Run regression tests more often

Run tests every night on all PCs

Achieve a positive Return on Investment in no more than <x> test iterations (where x = ?)

Other objectives:


1-23

Same tests automated

[Diagram: activities for the same tests run manually vs. with more mature automation – edit tests (maintenance), set-up, execute, analyse failures, clear-up – illustrating "reduce test execution time".]

1-24

Automate x% of the tests

•  are your existing tests worth automating?
   –  if testing is in chaos, automating gives you faster chaos
•  which tests to automate (first)?
•  what % of manual tests should be automated?
   –  "100%" sounds impressive but may not be wise
•  what else can be automated?
   – automation can do things not possible or practical in manual testing!


1-25

Manual vs automated

[Diagram: the set of manual tests vs. automated tests – some manual tests are automated (a % of the manual tests), some are not worth automating, some are exploratory; automation also covers tests (and verification) not possible to do manually, plus tests not automated yet.]

1-26

Exploratory Test Automation - 1

•  sounds like an oxymoron? •  when you are exploring, you might say

–  “that’s weird – might there be any more like that?” •  is there a way to check a result automatically?

– a heuristic oracle •  can you generate lots of different (random)

inputs? •  have you got lots of computer power available?

Source: Harry Robinson tutorial at CAST 2010, August 2010


1-27

Exploratory Test Automation - 2

•  set off lots of tests that produce something that can be checked with an automated oracle

•  alerts a human when something unusual occurs •  it’s exploratory – we don’t know what we will

find, we don’t have a planned route through the system

•  it’s automated – a “shotgun” approach – lots of different (random) inputs

•  can find a class of bug too hard to find manually – because so many tests can be run

1-28

Success = find lots of bugs?

•  tests find bugs, not automation
•  automation is a mechanism for running tests
•  the bug-finding ability of a test is not affected by the manner in which it is executed
•  this can be a dangerous objective – especially for regression automation!


1-29

What is automated?

[Diagram: regression tests vs. exploratory testing, plotted against the likelihood of finding bugs; regression tests are the ones most often automated.]

1-30

When is “find more bugs” a good objective for automation?

•  objective is "fewer regression bugs missed"
•  when the first run of a given test is automated
   – MBT, Exploratory test automation, automated test design
   – keyword-driven (e.g. users populate spreadsheet)
•  find bugs in parts we wouldn't have tested?


1-31

Good objectives for test automation

•  measurable
   – EMTE (Equivalent Manual Test Effort)
   –  tests run, coverage (e.g. features tested)
•  realistic and achievable
•  short and long term
•  regularly re-visited and revised
•  should be different objectives for testing and for automation
•  automation should support testing activities

1-32

Trying to get started: Tessa Benzie

– consultancy to start automation effort •  project, needs a champion – hired someone •  training first, something else next, etc.

– contract test manager – more consultancy •  bought a tool – now used by a couple contractors •  TM moved on, new QA manager has other priorities

–  just wanting to do it isn’t enough •  needs dedicated effort •  now have “football teams” of manual testers

Chapter 29, pp 535, Experiences of Test Automation



1-34

Useful measures

•  a useful measure: "supports effective analysis and decision making, and that can be obtained relatively easily." – Bill Hetzel, "Making Software Measurement Work", QED, 1993
•  easy measures may be more useful even though less accurate (e.g. car fuel economy)
•  'useful' depends on objectives, i.e. what you want to know


1-35

Automation measures?

•  aspects of automation
   – number of times tests are run
   –  time to run the automated tests (& manual equivalent)
   – effort in automation and running automated tests
      •  effort to build new automated tests
      •  effort to run automated tests (by people)
      •  effort to analyse failed automated tests
      •  effort to maintain automated tests
   – number of test failures caused by one s/w fault
   – number of automation scripts

1-36

EMTE – what is it?

•  Equivalent Manual Test Effort
   – given a set of automated tests,
   – how much effort would it take IF those tests were run manually
•  note
   – you would not actually run these tests manually
   – EMTE = what you could have tested manually, and what you did test automatically
   – used to show test automation benefit


1-37

EMTE – how does it work?

[Diagram: a manual test and the same test now automated – in the time available for manual testing there is only time to run the tests 1.5 times; automating them just to do that doesn't make sense – they can be run more often.]

1-38

EMTE – how does it work? (2)

[Diagram: with automated testing the tests are run many more times; EMTE is the manual effort those runs would have represented.]


1-39

EMTE example

•  example
   – automated tests take 2 hours
   –  if those same tests were run manually, 4 days
•  frequency
   – automated tests run every day for 2 weeks (including once at the weekend), 11 times
•  calculation
   – EMTE =
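The slide leaves the calculation for the reader; a minimal sketch of the arithmetic under the figures above (the helper name and units are illustrative, not from the slides):

```python
# Illustrative EMTE arithmetic for the example above (hypothetical helper).
def emte(manual_equivalent_per_run, number_of_automated_runs):
    """Equivalent Manual Test Effort: the manual effort the automated runs
    would have cost if each run had been executed by hand."""
    return manual_equivalent_per_run * number_of_automated_runs

# 4 days of manual effort per run, run 11 times in the two weeks:
print(emte(4, 11))   # 44 "manual" test days claimed as benefit
# Machine time actually spent: 2 hours per run * 11 runs = 22 hours.
```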

1-40

Examples of good (measurable) objectives for test automation

•  achieve positive ROI in less than 6 test iterations –  measured by comparing to manual testing

•  run most important tests using spare resource –  top 10% of usefulness rating, run out of hours

•  reduce elapsed time of tool-supported activities –  measured for maintenance & failure analysis time

•  improve automation support for testers –  testers rate usefulness of automation support –  how often utilities/automation features are used



1-42

Why measure automation ROI?

•  to justify and confirm starting automation
   – business case for purchase/investment decision, to confirm ROI has been achieved e.g. after pilot
   – both compare manual vs automated testing
•  to monitor on-going automation
   –  for increased efficiency, continuous improvement
   – build time, maintenance time, failure analysis time, refactoring time
   – on-going costs – what are the benefits?


1-43

What can you show as a benefit?

•  number of additional times tests are run
   – how many times run manually & automated
•  how long tests take to run
   – execution time for manual and automated, EMTE
•  how much effort to run tests
   – human effort for manual testing and for kicking off / dealing with automated tests
•  data variety and/or coverage
   – different sets of data run manually & automated
   – different parts of the system now tested

1-44

An example comparative benefits chart

[Bar chart comparing manual vs automated testing: execution speed, times run, data variety, tester work – roughly 14 x faster, 5 x more often, 4 x more data, 12 x less effort.]

ROI spreadsheet – email me for a copy


1-45

EMTE over releases

2708 total "manual" hours of testing done automatically; 339 days of testing performed

[Bar chart: EMTE per release, for releases 1 to 5.]

1-46

Return on Investment (ROI)

•  ROI = (benefit – cost) / cost
   –  Investment: costs can be expressed as effort, which can be converted to money
   – Return: benefits
      •  reduced tester time / effort
      •  but what about things like "faster execution", "run more often", "greater coverage", "quicker time to market"? how to convert these to money?
   – possible calculation based on tester time – the other things are bonuses?
•  comparing to manual testing


1-47

ROI example

Manual time testing: 66
Human time to run automated tests: 7
Savings per automated run: 59
No. automated runs per release: 9
Total savings: 531

Average build time for new automated tests: 15
Maintenance time: 50
Failure analysis time: 35
Other automation time spent: 22
Total automation investment: 122

ROI = (Gain - Cost) / Cost = 3.4

Free ROI spreadsheet available
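A minimal sketch of the arithmetic behind the example above, using the same figures (variable names are illustrative; effort units are whatever the figures are measured in):

```python
# Illustrative ROI arithmetic for the example table above.
manual_testing_time  = 66   # per test cycle, run manually
automated_run_effort = 7    # human time to run the automated tests
runs_per_release     = 9
total_savings = (manual_testing_time - automated_run_effort) * runs_per_release  # 531

build_time, maintenance, failure_analysis, other = 15, 50, 35, 22
total_investment = build_time + maintenance + failure_analysis + other           # 122

roi = (total_savings - total_investment) / total_investment
print(round(roi, 1))   # 3.4, as in the table
```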

1-48

MBT @ ESA: Stefan Mohacsi, Armin Beer

– home-grown tool interfaced to commercial tools •  Model-Based Testing and Test Case Generation •  layers of abstraction for maintainability

– define model before software is ready •  capture and assign GUI objects later •  developers build in testability

– ROI calculations •  invest 460 hours in automation infrastructure •  break-even after 4 test cycles

Chapter 9, pp 155-175, Experiences of Test Automation


1-49

Example ROI graph using MBT

[Line chart: people hours for manual vs automated testing over test cycles 1–6. Source: Stefan Mohacsi & Armin Beer.]

1-50

Database testing: Henri van de Scheur

–  tool developed in-house (now open source) •  agreed requirements with relevant people up front •  9 months, 4 developers in Java (right people) •  good architecture, start with quick wins

–  flexible configuration, good reporting, metrics used to improve

–  results: 2400 times more efficient •  from: 20 people run 40 tests on 6 platforms in 4 days •  to: 1 person runs 200 tests on 10 platforms in 1 day •  quick dev tests, nightly regression, release tests •  life cycle of automated tests •  little maintenance, machines used 24x7, better quality

Chapter 2, pp 33-48, Experiences of Test Automation


1-51

Large S Africa bank: Michael Snyman •  was project-based, too late, lessons not learned

–  “our shelves were littered with tools..” •  2006: automation project, resourced, goals

–  formal automation process •  ROI after 3 years

– US$4m on testing project, automation $850K – savings $8m, ROI 900%

•  20 testers for 4 weeks to 2 in 1 week – automation ROI justified the testing project

•  only initiative that was measured accurately Chapter 29, pp 562-567, Experiences of Test Automation

1-52

Example ROI graph

Source: Lars Wahlberg, Chapter 18 in “Experiences of Test Automation”

[Line chart: savings % vs. number of tests, for tests run monthly, weekly and daily.]


1-53

Why is measuring ROI dangerous?

•  focus on what's easy to measure (tester time)
•  other factors may be much more important
   –  time to market, greater coverage, faster testing
•  defining ROI only by tester time may give the impression that the tools replace the testers
   –  this is dangerous
   –  tools replace some aspects of some of what testers do, with increased cost elsewhere
   – we want a net benefit, even if hard to quantify

1-54

How important is ROI?

•  “automation is an enabler for success, not a cost reduction tool” (Yoram Mizrachi*)

•  many achieve lasting success without measuring ROI (depends on your context) – need to be aware of benefits (and publicize them)

*"Planning a mobile test automation strategy that works", ATI magazine, July 2012


1-55

Sample ‘starter kit’ for metrics for test automation (and testing)

•  some measure of benefit – e.g. EMTE or coverage

•  average time to automate a test (or set of related tests)

•  total effort spent on maintaining automated tests (expressed as an average per test)

•  also measure testing, e.g. Defect Detection Percentage (DDP) – test effectiveness –  more info on DDP on my web site & blog
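For reference, DDP is usually calculated as the defects found by the testing being measured, as a percentage of all defects eventually found; a minimal sketch with hypothetical counts:

```python
# Illustrative DDP (Defect Detection Percentage) calculation: defects found by
# the testing being measured, as a percentage of all defects found (including
# those that escaped and were found later, e.g. by users).
def ddp(found_by_testing, found_later):
    return 100.0 * found_by_testing / (found_by_testing + found_later)

print(ddp(90, 10))   # 90.0 -> this test stage caught 90% of the known defects
```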

1-56

Recommendations

•  don't measure everything!
•  choose three or four measures
   – applicable to your most important objectives
•  monitor for a few months
   – see what you learn
•  change measures if they don't give useful information
•  be careful with ROI if based on tester time
   – may give impression tools can replace people!


1-57

Summary: key points

•  Assign responsibility for automation (and testing)
•  Use a pilot project to explore the best ways of doing things
•  Know your automation objectives
•  Measure what's important to you
•  Show ROI from automation


1-58

Good objectives for automation?

–  run regression tests evenings and weekends – not a good objective, unless they are worthwhile tests!
– give testers a new skill / enhance their image – not a good objective, could be a useful by-product
–  run tests tedious and error-prone if run manually – good objective
– gain confidence in the system – an objective for testing, but automated regression tests help achieve it
–  reduce the number of defects found by users – not a good objective for automation, good objective for testing!


Test Automation Objectives Solution We have given some ideas as to which objectives are good and why the others are not.

Possible test automation objectives – Good automation objective? (If not, why not)

Achieve faster performance for the system

NO – this is not an objective for test execution automation, nor is it an objective for performance testing! Performance test tools may help by giving the measurements to see whether the system is faster.

Achieve good results and quick payback with no additional resources, effort or time

NO – this is totally unrealistic – expecting a miracle with no investment!

Automate all tests NO – automating ALL tests is not realistic nor sensible. Automate only those tests that are worth automating.

Build a long-lasting automation regime that is easy to maintain

YES – this is an excellent objective for test automation, and it is measurable.

Easy to add new automated tests YES. With a good automation regime, it can be easier to add a new automated test than to run that test manually.

Ensure repeatability of regression tests YES. The tools will run the same test in the same way every time.

Ensure that we meet our release deadlines

NO. Automation may help to run some tests that are required before release, but there are many more factors that go into a release decision.

Find more bugs NO. Automation just runs tests. It is the tests that find the bugs, whether they are run manually or are automated.

Find defects in less time Not really. Some types of defects (regression bugs) will be found more quickly by automated tests, but it may actually take longer to analyse the failures found.

Free testers from repeated (boring) test execution to spend more time in test design

YES. This is a good objective for test execution automation.


Improve our testing NO. Better testing practices and better use of techniques will improve testing.

Reduce elapsed time for testing by x% NO. Elapsed time depends on many factors, and not much on whether tests are automated (see further explanation in the slides).

Reduce the cost and time for test design

NO. Test design is independent from automation – the time spent in design is not affected by how those tests are executed.

Reduce the number of test staff NO. You will need more staff to implement the automation, not less. It can make existing staff more productive by spending more time on test design.

Run more tests YES but only long term. Short term, you may actually run fewer tests because of the effort taken to automate them.

Run regression tests more often YES – this is what the test execution tools do best.

Run tests every night on all PCs NO. It may look impressive, but what tests are being run? Are they useful? If not, this is a waste of electricity.

Achieve a positive Return on Investment in no more than <6> test iterations

YES. This is a good objective, if the number of iterations is a reasonable number (e.g. 6).

Other objectives:


Test Automation Objectives: Selection and Measurement On this page, record the test objectives that would be most appropriate for your organisation (and why), and how you will measure them (what to measure and how to measure it). I suggest that you include at least one about showing Return on Investment. If you currently have automation objectives in place in your organisation that are not good ones, make sure that they are removed and replaced by the better ones below!

Proposed test automation objective (with justification)

What to measure and how to measure it

Add any comments or thoughts here or on the back of this page.


2-1

Testware Architecture


Ref. Chapter 5: Testware Architecture “Software Test Automation”


2-2

Contents

Importance of a testware architecture What needs to be organised



2-3

Testware architecture

[Diagram: testware architecture – Testers write tests (in a DSTL) as High Level Keywords; an abstraction here makes automated tests easier to write -> widely used. Test Automator(s) build the structured testware (Structured Scripts) that the Test Execution Tool runs; an abstraction here makes the testware easier to maintain and makes it possible to change tools -> long life.]

2-4

Architecture – abstraction levels

•  most critical factor for success – worst: close ties between scripts, tool & tester

•  separate testers’ view from technical aspects – so testers don’t need tool knowledge

•  for widespread use of automation •  scripting techniques address this

•  separate tests from the tool – modular design –  likely changes confined to one / few module(s) –  re-use of automation functions –  for minimal maintenance and long-lived automation


2-5

Easy way out: use the tool’s architecture

•  tool will have its own way of organising tests
   – where to put things (for the convenience of the tool!)
   – will "lock you in" to that tool – good for vendors!
•  a better way (gives independence from tools)
   – organise your tests to suit you
   – as part of pre-processing, copy files to where the tool needs (expects) to find them
   – as part of post-processing, copy back to where you want things to live

2-6

Tool-specific script ratio

[Diagram: two stacks between Testers and the Test Execution Tool – one where most of the scripts are tool-specific (high maintenance and/or tool-dependence), one where most of the scripts are not tool-specific.]


2-7

Learning is incremental: Molly Mahai

– book learning – knew about investment, not replace people, don't automate everything, etc.
– set up good architecture? books not enough
– picked something to get started
   •  after a while, realised limitations
   •  too many projects, library cumbersome
–  re-designed architecture, moved things around
– didn't know what we needed till we experienced the problems for ourselves
   •  like trying to educate a teenager

Chapter 29, pp 527-528, Experiences of Test Automation



2-9

A test for you

•  show me one of your automated tests running – how long will it take before it runs?

•  typical problems –  fails: forgot a file, couldn’t find a called script – can’t do it (yet):

•  Joe knows how but he’s out, •  environment not right, •  haven’t run in a while, •  don’t know what files need to be set up for this script

•  why not: run up control script, select test, GO

2-10

Key issues

•  scale
   –  the number of scripts, data files, results files, benchmark files, etc. will be large and growing
•  shared scripts and data
   – efficient automation demands reuse of scripts and data through sharing, not multiple copies
•  multiple versions
   – as the software changes so too will some tests, but the old tests may still be required
•  multiple environments / platforms


2-11

Terms – Testware artefacts

[Diagram: Testware is divided into Products and By-Products.
 – Products (Test Materials): scripts, data, inputs, expected results, documentation (specifications), environment utilities.
 – By-Products (Test Results): logs, status, summary, differences, actual results.]

2-12

Testware for example test case (Scribble)

[Diagram: the test script countries.scp (test input) uses the shared scripts open.scp and saveas.scp; the test specification is testspec.txt; the initial document is countries.dcm; running the test produces the edited document countries2.dcm and a log (log.txt); the edited document is compared against the expected output countries2.dcm, producing differences (diffs.txt).]


2-13

Testware by type

[Diagram: the same files sorted into the testware structure – Products / Test Materials: countries.scp, open.scp, saveas.scp, countries.dcm, countries2.dcm (expected output), testdef.txt; By-Products / Test Results: countries2.dcm (actual output), log.txt, diff.txt, status.txt.]

2-14

Benefits of standard approach

•  tools can assume knowledge (architecture)
   –  they need less information; are easier to use; fewer errors will be made
•  can automate many tasks
   – checking (completeness, interdependencies); documentation (summaries, reports); browsing
•  portability of tests
   – between people, projects, organisations, etc.
•  shorter learning curve


2-15

Testware Sets

•  Testware Set: – a collection of testware artifacts –  four types:

•  Test Set - one or more test cases •  Script Set - scripts used by two or more Test Sets •  Data Set - data files used by two or more Test Sets •  Utility Set - utilities used by two or more Test Sets

– good software practice: look for what is common, and keep it in only one place!

– Keep your testware DRY!

2-16

Testware library

[Example library listing (master versions of all Testware Sets, with version numbers): d_ScribbleTypical v1, v2; d_ScribbleVolume v1; s_Logging v1; s_ScribbleDocument v1, v2, v3; s_ScribbleNavigate v1; t_ScribbleBreadth v1; t_ScribbleCheck v1; t_ScribbleFormat v1; t_ScribbleList v1, v2, v3; t_ScribblePrint v1; t_ScribbleSave v1; t_ScribbleTextEdit v1, v2; u_ScribbleFilters v1; u_GeneralCompare v1]

–  repository of master versions of all Testware Sets
– uncategorised scripts worse than no scripts (Onaral & Turkmen)
– CM is critical
   –  "If it takes too long to update your test library, automation introduces delay instead of adding efficiency" – Linda Hayes, AST magazine, Sept 2010


2-17

Separate test results

Test suite

Software under test

Test results

2-18

Incremental approach: Ursula Friede

–  large insurance application –  first attempt failed

•  no structure (architecture), data in scripts –  four phases (unplanned)

•  parameterized (dates, claim numbers, etc) •  parameters stored in single database for all scripts •  improved error handling (non-fatal unexpected events) •  automatic system restart

– benefits: saved 200 man-days per test cycle •  €120,000!

Chapter 23, pp 437-445, Experiences of Test Automation


2-19

Summary: key points

•  Structure your automation testware to suit you
•  Testware comprises many files, etc. which need to be given a home
•  Use good software development standards



3-1

Pre- and Post-Processing


Ref. Chapter 6: Automating Pre- and Post-Processing “Software Test Automation”


3-2


Contents

Automating more than tests Test status



3-3

What is pre- and post-processing?

•  Pre-processing
   – automation of setup tasks necessary to fulfil test case prerequisites
•  Post-processing
   – automation of post-execution tasks necessary to complete verification and housework
•  These terms are useful because:
   –  there are lots of tasks, they come in packs, many are the same, and they can be easily automated

3-4

Automated tests / automated testing

Automated tests:
   Select / identify test cases to run
   Set-up test environment: create test environment, load test data
   Repeat for each test case: set-up test pre-requisites, execute, compare results, log results, analyse test failures, report defect(s), clear-up after test case
   Clear-up test environment: delete unwanted data, save important data
   Summarise results

Automated testing:
   Select / identify test cases to run
   Set-up test environment: create test environment, load test data
   Repeat for each test case: set-up test pre-requisites, execute, compare results, log results, clear-up after test case
   Clear-up test environment: delete unwanted data, save important data
   Summarise results
   Analyse test failures
   Report defects

[In the original diagram each step is marked as an automated process or a manual process.]


3-5

Examples

•  pre-processing
   –  copy scripts from a common script set (e.g. open, saveas)
   –  delete files that shouldn't exist when test starts
   –  set up data in files
   –  copy files to where the tool expects to find them
   –  save normal default file and rename the test file to the default (for this test)
•  post-processing
   –  copy results to where the comparison process expects to find them
   –  delete actual results if they match expected (or archive if required)
   –  rename a file back to its normal default
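A minimal sketch of how a few of these tasks could be scripted, reusing the Scribble file names from the architecture example; the paths and helper names are assumptions, not a prescribed layout:

```python
# A sketch of pre- and post-processing around one automated test.
import shutil
from pathlib import Path

TOOL_DIR = Path("tool_workspace")            # where the tool expects its files
TESTWARE = Path("testware/t_ScribbleList")   # where this test's files live

def pre_process():
    TOOL_DIR.mkdir(exist_ok=True)
    # delete files that should not exist when the test starts
    (TOOL_DIR / "countries2.dcm").unlink(missing_ok=True)
    # copy inputs and shared scripts to where the tool expects to find them
    for name in ("countries.dcm", "countries.scp", "open.scp", "saveas.scp"):
        shutil.copy(TESTWARE / name, TOOL_DIR / name)

def post_process(matched_expected):
    results = TESTWARE / "results"
    results.mkdir(exist_ok=True)
    # copy by-products back to where comparison and reporting expect them
    shutil.copy(TOOL_DIR / "log.txt", results / "log.txt")
    actual = TOOL_DIR / "countries2.dcm"
    if matched_expected:
        actual.unlink(missing_ok=True)       # matched expected: no need to keep
    else:
        shutil.copy(actual, results / "countries2.dcm")
```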

3-6

Outside the box: Jonathan Kohl

–  task automation (throw-away scripts) •  entering data sets to 2 browsers (verify by watching) •  install builds, copy test data

– support manual exploratory testing –  testing under the GUI to the database (“side door”) – don’t believe everything you see

•  1000s of automated tests pass too quickly •  monitoring tools to see what was happening •  “if there’s no error message, it must be ok”

–  defects didn’t make it to the test harness –  overloaded system ignored data that was wrong

Chapter 19, pp 355-373, Experiences of Test Automation


3-7

Automation +

[Diagram: automation is more than traditional test automation (execution and comparison) – it also covers a DSTL / structured testware architecture, loosened oracles, exploratory test automation (ETA) and monkeys, pre- and post-processing, metrics (e.g. EMTE), utilities (e.g. data load), disposable scripts, and support for manual testing.]



3-9

Test status – pass or fail?

•  tool cannot judge pass or fail – only “match” or “no match” – assumption: expected results are correct

•  when a test fails (i.e. the software fails) – need to analyse the failure

•  true failure? write up bug report •  test fault? fix the test (e.g. expected result) •  known bug or failure affecting many automated tests?

–  this can eat a lot of time in automated testing –  solution: additional test statuses

3-10

Test statuses for automation

Compare to…                  No differences found    Differences found
(true) expected outcome      Pass                    Fail
expected fail outcome        Expected Fail           Unknown
don't know / missing         Unknown                 Unknown

•  other possible additional test statuses
   – environment problem (e.g. network down, timeouts)
   – set-up problems (files missing)
   –  things that affect the tests that aren't related to the assessment of matching expected results
   –  test needs to be changed but not done yet
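A minimal sketch of how a post-processing step might assign these extra statuses instead of a bare pass/fail; the names and the order of the checks are assumptions, not a prescribed scheme:

```python
# A sketch of assigning the additional statuses from the table above.
from enum import Enum

class Status(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    EXPECTED_FAIL = "Expected Fail"   # matched the outcome of a known bug
    UNKNOWN = "Unknown"               # nothing reliable to compare against

def test_status(actual, expected=None, expected_fail=None):
    if expected is not None:                  # compare to (true) expected outcome
        return Status.PASS if actual == expected else Status.FAIL
    if expected_fail is not None:             # compare to a known-bug outcome
        return Status.EXPECTED_FAIL if actual == expected_fail else Status.UNKNOWN
    return Status.UNKNOWN                     # expected outcome missing
```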


3-11

Summary: key points

•  Pre- and post processing to automate setup and clear-up tasks

•  Test status is more than just pass / fail



4-1

Scripting Techniques


Ref. Chapter 3: Scripting Techniques “Software Test Automation”


4-2

Contents

Objectives of scripting techniques Different types of scripts

Domain specific test language



4-3

Objectives of scripting techniques

•  implement your testware architecture
•  to reduce costs
   –  make it easier to build automated tests
      •  avoid duplication
   –  avoid excessive maintenance costs
      •  greater reuse of functional, modular scripting
•  greater return on investment
   –  better testing support
   –  greater portability
      •  environments & hardware platforms
•  enhance capabilities
   –  achieve more testing for same (or less) effort
   –  testing beyond traditional manual approaches best achieved by keywords or DSTL



4-5

Capture/replay = linear script

[Diagram: testers follow (manual) test procedures; automators record / edit test scripts in the Test Tool; the scripts send instructions and test data to the software under test.]

4-6

About capture/replay

•  fair amount of effort to produce scripts
   – similar for each test procedure
   – subsequent editing may also be necessary
•  dominated by maintenance costs
   – scripts are exact mould for software
   – one software change can mean many scripts change
   – script is linear – sequence of every step


4-7

Example captured script

FocusOn 'Open File'
Type 'countries'
LeftMouseClick 'Open'
FocusOn 'Scribble'
SelectOption 'List/Add Item'
FocusOn 'Add Item'
Type 'Sweden'
LeftMouseClick 'OK'
FocusOn 'Scribble'
SelectOption 'List/Add Item'
FocusOn 'Add Item'
Type 'USA'
LeftMouseClick 'OK'
FocusOn 'Scribble'
SelectOption 'List/Move Item'

[Callout: the script captures individual interactions with the software; duplicated actions (e.g. the two Add Item sequences) result in duplicated instructions.]

4-8

Structured scripting

[Diagram: testers follow (manual) test procedures and create test scripts containing high-level instructions and test data; automators maintain a script library of low-level "how to" instructions; the Test Tool combines the two to drive the software under test.]


4-9

About structured scripting

•  script library for re-used scripts
   – part of testware architecture implementation
   •  shared scripts interface to software under test
   •  all other scripts interface to shared scripts
•  reduced costs
   – maintenance
      •  fewer scripts affected by software changes
   – build
      •  individual control scripts are smaller and easier to read (a 'higher' level language is used)

4-10

Structured scripting example

[Diagram: shared scripts – OpenFile, AddItem, MoveItem, DeleteItem, CloseSaveAs – each encapsulating the detailed steps once.]

Test script:
LeftMouseDouble 'Scribble'
Call OpenFile('countries')
Call AddItem('Sweden')
Call AddItem('USA')
Call MoveItem(4,1)
Call AddItem('Norway')
Call DeleteItem(2)
Call DeleteItem(7)
FocusOn 'Delete Error'
LeftMouseClick 'OK'
FocusOn 'Scribble'
Call CloseSaveAs('countries2')
SelectOption 'File/Exit'
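A minimal sketch of the same structure in a general-purpose language: the shared scripts wrap the tool-level detail once, and each test script stays short. The `tool` driver object and its methods are assumptions, not a real tool API:

```python
# Shared scripts (defined once) plus a short test/control script that calls them.
def open_file(tool, name):
    tool.focus_on("Open File")
    tool.type(name)
    tool.click("Open")
    tool.focus_on("Scribble")

def add_item(tool, item):
    tool.select_option("List/Add Item")
    tool.focus_on("Add Item")
    tool.type(item)
    tool.click("OK")
    tool.focus_on("Scribble")

def countries_test(tool):          # the test script stays short and readable
    open_file(tool, "countries")
    add_item(tool, "Sweden")
    add_item(tool, "USA")
```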


4-11

Usable (re-usable) scripts

•  To re-use a script, need to know: – what does this script do? – what does it need? – what does it deliver? – what state when it starts? – what state when it finishes?

•  Information in a standard place for every script – can search for the answers to these questions
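One way to keep that information in a standard place is a header on every shared script; a hypothetical example for the AddItem shared script above (the format and field names are illustrative, not a prescribed standard):

```python
# A hypothetical documentation header for a re-usable shared script.
def add_item(tool, item):
    """AddItem – adds one item to the current list.

    Needs:       an open document with the List menu available; the item text.
    Delivers:    the item appended to the list.
    Start state: focus on the main 'Scribble' window.
    End state:   focus back on the main 'Scribble' window.
    """
    tool.select_option("List/Add Item")
    tool.focus_on("Add Item")
    tool.type(item)
    tool.click("OK")
    tool.focus_on("Scribble")
```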

4-12

Data driven

[Diagram: testers follow (manual) test procedures and create data files holding the test data; automators create control scripts (high-level instructions) and a script library of low-level "how to" instructions; the Test Tool uses these to drive the software under test.]


4-13

About data driven

•  test data extracted from scripts
   –  placed into separate data files
•  control script reads data from data file
   –  one script implements several tests by reading different data files (reduces script maintenance per test)
•  reduced build cost
   –  faster and easier to automate similar test procedures
   –  many test variations using different data
•  multiple control scripts required
   –  one for each type of test (with varying data)

4-14

Data-driven example

Control script:
For each TESTCASE
  OpenDataFile(TESTCASEn)
  ReadDataFile(RECORD)
  For each record
    ReadDataFile(RECORD)
    Case (Column(RECORD))
      FILE: OpenFile(INPUTFILE)
      ADD: AddItem(ITEM)
      MOVE: MoveItem(FROM, TO)
      DELETE: DeleteItem(ITEM)
      …
  Next record
Next TESTCASE

Data file: TestCase2 FILE ADD MOVE DELETE Europe France Germany 1,3 2,2 1 5,3

Data file: TestCase1 FILE ADD MOVE DELETE

countries Sweden USA

4,1 Norway 2 7
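A minimal sketch of a data-driven control script in a general-purpose language: one script reads a data file per test case and dispatches on the column headings. The CSV layout and the action stub are illustrative, not the tool's real interface:

```python
# A sketch of a data-driven control script reading one data file per test case.
import csv

def do_action(action, value):
    print(f"{action}: {value}")      # stub: a shared script would drive the SUT

def run_test_case(data_file):
    with open(data_file, newline="") as f:
        for row in csv.DictReader(f):        # headings: FILE, ADD, MOVE, DELETE
            for action in ("FILE", "ADD", "MOVE", "DELETE"):
                if row.get(action):
                    do_action(action, row[action])

# run_test_case("TestCase1.csv")   # e.g. a row: countries | Sweden | "4,1" | 2
```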


4-15

Data-driven: example data file

Data file: TestCase1 (headings FILE ADD MOVE DELETE): countries Sweden USA 4,1 Norway 2 7

Headings indicate actions – but may not be read by a control script (they could be just "comments"). Position represents data for the action (data in the wrong column will cause problems).

Tests are not identical – the first one has 3 countries and 2 moves, the next test has 2 countries and 3 moves. The test data drives the test for things it can deal with – the script responds to data in a fixed position in the data file.

4-16

Keywords (basic)

[Diagram: testers follow (manual) test procedures and create test definitions (high-level instructions and test data); a single control script – an "interpreter" / ITE – built by the automators, together with a script library of low-level "how to" instructions and keywords, drives the software under test via the Test Tool.]


4-17

About keywords

•  single control script (Interactive Test Environment)
   –  improvements to this benefit all tests (ROI)
   – extracts high-level instructions from scripts
•  'test definition'
   –  independent of tool scripting language
   – a language tailored to testers' requirements
      •  software design
      •  application domain
      •  business processes
•  more tests, fewer scripts

[Sidebar examples of keywords at different levels: Unit test – calculate one interest payment; System test – summarise interest for one customer; Acceptance test – end of day run, all interest payments.]

4-18

Comparison of data files

Keyword approach:
ScribbleOpen Europe
AddToList France Italy
MoveItem 1 to 3
MoveItem 2 to 2
DeleteItem 1
MoveItem 5 to 2
SaveAs Test2

Data-driven approach (columns FILE ADD MOVE DELETE SAVE):
Europe France Italy 1,3 2,2 1 5,2 Test2

Which is easier to read/understand? What happens when the test becomes large and complex? The keyword version looks more like a test.



4-20

Advanced keyword-driven / action words

[Diagram: merged test procedure / test definition – the (manual) test procedures and the test definitions become one document, written in a language for testers (high-level instructions and test data); a single control script ("interpreter" / ITE) and a script library of low-level "how to" instructions and keyword scripts drive the software under test via the Test Tool.]


4-21

Domain Specific Test Language (DSTL)

•  test procedures and test definitions are similar
   – both describe sequences of test cases, giving test inputs and expected results
•  combine into one document
   – can include all test information
   – avoids extra 'translation' step
   –  testers specify tests regardless of whether they will be run manually or automated
   – automators implement required keywords

4-22

Keywords in the test definition language

•  multiple levels of keywords possible
   –  high level for business functionality
   –  low level for component testing
•  composite keywords
   –  define keywords as sequence of other keywords
   –  gives greater flexibility (testers can define composite keywords) but risk of chaos
•  format
   –  freeform, structured, or standard notation (e.g. XML)


4-23

Example use of keywords: create a new account, order 2 items and check out

Create Account | Firstname: Edward | Surname: Brown | Email address: [email protected] | Password: apssowdr
Order Item     | Item Num: 1579 | Items: 3 | Check Price for Items: 15.30
Order Item     | Item Num: 2598 | Check Price for Items: 12.99
Checkout
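A minimal sketch of how a single control script ("interpreter") could execute such a keyword table; the row format and the implementation stubs are assumptions:

```python
# A sketch of a keyword interpreter: look up each keyword, pass the other cells
# as arguments. The implementation functions here are stubs.
def create_account(firstname, surname, email, password):
    print(f"create account for {firstname} {surname}")   # stub: drive the SUT here

def order_item(item_num, num_items="1", check_price=""):
    print(f"order {num_items or 1} x item {item_num} (expected price {check_price})")

def checkout():
    print("checkout")

KEYWORDS = {"Create Account": create_account,
            "Order Item": order_item,
            "Checkout": checkout}

def run_test_definition(rows):
    """Each row is [keyword, arg1, arg2, ...], e.g. read from a spreadsheet."""
    for keyword, *args in rows:
        KEYWORDS[keyword](*args)

run_test_definition([
    ["Create Account", "Edward", "Brown", "[email protected]", "apssowdr"],
    ["Order Item", "1579", "3", "15.30"],
    ["Order Item", "2598", "", "12.99"],
    ["Checkout"],
])
```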

4-24

Documenting keywords

Name: Name for this keyword
Purpose: What this keyword does
Parameters: Any inputs needed, outputs produced
Pre-conditions: What needs to be true before using it, where valid
Post-conditions: What will be true after it finishes
Error conditions: What errors it copes with, what is returned
Example: An example of the use of the keyword

Source: Martin Gijsen. See also Hans Buwalda book & articles


4-25

Example keyword documentation - 1

*mandatory

Name: Create account
Purpose: creates a new account
Parameters: *First name (2 to 32 characters); *Last name (2 to 32 characters); *Email address (also serves as account id); *Password (4 to 32 characters)
Pre-conditions: account doesn't exist for this person
Post-conditions: account created (including email confirmation); Order screen displayed
Error conditions:
Example: (see example)

4-26

Example keyword documentation - 2

*mandatory

Name: Order item
Purpose: orders items from the online supplier
Parameters: *Item number (1000 to 9999, in catalogue); Number of items wanted (1 to Max-for-item; if blank, assumes 1)
Pre-conditions: valid account logged in; items in stock (at least one); prices available for all items in stock
Post-conditions: total for all items ordered is available (for checkout); number of available items decreased by the number ordered
Error conditions: item(s) not in stock
Example: (see example)


4-27

Implementing keywords
•  ways to implement keywords
   –  scripting language (of a tool)
   –  programming language (e.g. Java)
   –  use what your developers are familiar with!
•  ways of creating an architecture to support a DSTL
   –  commercial, open source or home-grown framework
   –  spreadsheet or database for test descriptions
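As an illustration of the choice (a Python sketch with invented stubs, not a recommendation of any particular tool), the keyword layer can stay the same while its implementation is swapped, e.g. a GUI-driving implementation for one environment and a service-call implementation for another:

# Sketch: one keyword, two interchangeable implementations (both stubs).
def open_account_via_gui(account, first, last):
    print("(stub GUI) filling in the new-account screen for", first, last)

def open_account_via_api(account, first, last):
    print("(stub API) calling the account-creation service for", first, last)

IMPLEMENTATIONS = {"gui": open_account_via_gui, "api": open_account_via_api}

def open_account(account, first, last, interface="gui"):
    # the test definitions only ever refer to "open account"
    IMPLEMENTATIONS[interface](account, first, last)

open_account("1234567890", "John", "Doe")
open_account("1234567890", "John", "Doe", interface="api")

This is also the structure that makes an execution-tool-independent framework possible: only the implementation layer knows about the tool.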

4-28

Tools / frameworks

•  commercial tools
   –  ATRT, Axe, Certify, eCATT, FASTBoX, Liberation, Ranorex, TestComplete, TestDrive, Tosca Testsuite
•  open source
   –  FitNesse, JET, Open2Test, Power Tools, Rasta, Robot Framework, SAFS, STAF, TAF Core
•  I can email you my Tool List
   –  test execution and framework tools
   –  [email protected]


4-29

Execution-tool-independent framework

Diagram: the test procedures/definitions are written in a tool-independent scripting language and fed to the framework. On the tool-dependent side, the framework uses one test tool and its script libraries to drive the software under test, can use another test tool against its SUT in the same way, and some tests can still be run manually.

4-30

Summary
•  Objectives of good scripting
   –  to reduce costs and enhance capabilities
•  Many types of scripting
   –  e.g. capture/playback, structured, data-driven, keyword
•  Keyword/DSTL is the most sophisticated
   –  yields significant benefits for the investment
•  Increased tester productivity
   –  tailored front end, test language, tool independence

Successful Test Automation

Scripting


Keyword-Driven Exercise

© Grove Consultants ATT090117 Page 1 of 3

Exercise (Part 1)

Using only the keywords described below, specify two or three test cases each comprising a short sequence of keywords for an online grocery store.

Keywords

The following keywords are available (keyword: arguments).

Basket should contain: "n items", where n is the number of items in the basket
Checkout using credit card: Card type, Card number, Security code, Expiry date
Create a valid user: Username, Password, Address, Postcode
Last order status should be: one of "Delivery scheduled", "Delivery due", "Delivered"
Login: Username, Password
Logout: (no arguments)
Put items in basket: List of items

Example

The following is an example of a short test case.

Test Case: Existing user can place an order
Login | Joe Shopper | Passwurd
Put items in basket | Carrots | Cabbage
… | Potatoes
Basket should contain | 3 items
Checkout using credit card | Mastercard | 1234 5678 0123
… | 123 | 01-11
Logout

Note that “…” in the Action column indicates a continuation line, i.e. the row contains more arguments for the previous keyword. Multiple continuation lines are allowed.

Exercise (Part 2)

Design two additional keywords suitable for testing this online grocery store and specify one or two additional test cases that use your new keywords.


Keyword-Driven Solution


Test Case: New user can create account and place order
Create a valid user | Joe Shopper | Passwurd
… | 6 Lower Road | Worcester
… | WR1 2AB
Put items in basket | Cornflakes | Milk
… | Tea bags | Coffee
… | Sugar | Biscuits
Basket should contain | 6 items
Checkout using credit card | Mastercard | 1234 5678 0123
… | 123 | 01-11
Logout

Test Case: Existing user can query order
Login | Joe Shopper | Passwurd
Last order status should be | Delivery scheduled
Logout


5-1

Automated Comparison
Ref. Chapter 4: Automated Comparison, “Software Test Automation”

1 Managing 2 Architecture 3 Pre- and Post

4 Scripting 6 Advice 5 Comparison

Successful Test Automation

5-2

Contents

Automated test verification
Test sensitivity
Comparison

Successful Test Automation


5-3

Perverse persistence: Michael Williamson

–  testing Webmaster Tools at Google (new to testing)
–  QA used Eggplant (an image-processing tool)
–  a new UI broke the existing automation
–  automated 4 or 5 functions by comparing bitmap images – inaccurate and slow
–  testers had to do the automation maintenance
   •  not worth developers learning the tool’s language
–  after 6 months, went for more appropriate tools
–  QA didn’t use the automation, tested manually!
   •  the tool was just running in the background

Chapter 17, pp 321-338, Experiences of Test Automation

5-4

Checking versus testing
–  checking confirms that things are as we think
   •  e.g. check that the code still works as before
–  testing is a process of exploration, discovery, investigation and learning
   •  e.g. what are the threats to value to stakeholders; give information
–  checks are machine-decidable
   •  if it’s automated, it’s probably a check
–  tests require sapience
   •  including “are the checks good enough?”

Source: Michael Bolton, www.developsense.com/blog/2009/08/testing-vs-checking/


5-5

General comparison guidelines

•  keep it as simple as possible
•  well documented
•  standardise as much as possible
•  avoid bit-map comparison
•  poor comparisons destroy good tests
•  divide and conquer:
   –  use a multi-pass strategy
   –  compare different aspects in each pass

5-6

Two types of comparison

•  dynamic comparison
   –  done during test execution
   –  performed by the test tool
   –  can be used to direct the progress of the test
      •  e.g. if this fails, do that instead
   –  fail information written to the test log (usually)

•  post-execution comparison
   –  done after the test execution has completed
   –  good for comparing files or databases
   –  can be separated from test execution
   –  can have different levels of comparison
      •  e.g. compare in detail only if all high-level comparisons pass
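A small sketch of the “different levels” idea for post-execution comparison (Python; the data and the checks are invented): a cheap high-level pass first, and a detailed line-by-line comparison only when the high-level pass succeeds.

import difflib

def high_level_check(expected, actual):
    # coarse pass: did we get the same number of output records?
    return len(expected) == len(actual)

def detailed_check(expected, actual):
    # detailed pass: full line-by-line comparison
    return list(difflib.unified_diff(expected, actual, lineterm=""))

expected = ["Log-on accepted", "Total due is £35.89"]
actual = ["Log-on accepted", "Total due is £79.99"]

if not high_level_check(expected, actual):
    print("FAIL at high level: different number of records")
else:
    diff = detailed_check(expected, actual)
    print("PASS" if not diff else "FAIL:\n" + "\n".join(diff))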


5-7

Comparison types compared (Scribble example)

Diagram: the test script (scribble1.scp) contains the test input and the comparison instructions. Starting from the initial document (countries.dcm), the test edits and saves the document as countries2.dcm. Dynamic comparison happens during the run (“Error message as expected?”) and writes to the log (log.txt). Post-execution comparison compares the edited document (countries2.dcm) against the saved expected output (countries2.dcm) and writes the differences to diffs.txt.

5-8

Comparison process

•  few tools exist for post-execution comparison
•  simple comparators come with operating systems but do not have pattern matching
   –  e.g. Unix ‘diff’, Windows ‘UltraCompare’
•  text manipulation tools are widely available
   –  sed, awk, grep, egrep, Perl, Tcl, Python
•  use pattern matching tools with a simple comparator to make a ‘comparison process’
•  use masks and filters for efficiency
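For example (a Python sketch; the patterns are illustrative, chosen for the kind of volatile fields – dates, times, order numbers – used in the comparison illustration later in this material), a ‘comparison process’ can be as simple as masking the fields we do not care about with regular expressions and then handing the result to a plain diff:

import difflib
import re

# masks for fields that legitimately change from run to run
MASKS = [(re.compile(r"X\d{5}"), "<order-number>"),
         (re.compile(r"\d{1,2} [A-Z][a-z]{2} \d{4}"), "<date>"),
         (re.compile(r"\d{2}:\d{2}"), "<time>")]

def mask(lines):
    for pattern, replacement in MASKS:
        lines = [pattern.sub(replacement, line) for line in lines]
    return lines

expected = ["Order number X43578", "Date: 24 Jan 2008 Time 11:02"]
actual = ["Order number X43604", "Date: 18 Feb 2008 Time 13:08"]

diff = list(difflib.unified_diff(mask(expected), mask(actual), lineterm=""))
print("PASS" if not diff else "\n".join(diff))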


5-9

Contents

Automated test verification
Test sensitivity
Comparison

Successful Test Automation

5-10

Test sensitivity

•  the more data there is available: –  the easier it is to analyse faults and debug

•  the more data that is compared: –  the more sensitive the test

•  the more sensitive a test: –  the more likely it is to fail –  (this can be both a good and bad thing)


5-11

Sensitive versus specific (robust) test

Diagram: an unexpected change occurs somewhere in the test outcome, while the test is supposed to change only one field. A sensitive test verifies the entire outcome, so it detects the unexpected change; a specific test verifies that one field only.

5-12

Too much sensitivity = redundancy

Diagram: three tests, each changing a different field of the test outcome, and an unexpected change that occurs for every test. If all tests are sensitive, they all show the same unexpected change; if all tests are specific, the unexpected change is missed.


5-13

Using test sensitivity

•  sensitive tests:
   –  few, at a high level
   –  breadth / sanity-checking tests
   –  good for regression / maintenance
•  specific/robust tests:
   –  many, at a detailed level
   –  focus on specific aspects
   –  good for development

A good test automation strategy will plan a combination of sensitive and specific tests.
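The difference is easy to see in a check itself (a tiny Python sketch with invented field names): the sensitive version verifies the entire outcome, the specific version only the field the test is about.

expected = {"name": "J Smith", "balance": 100, "status": "active"}
actual = {"name": "J Smith", "balance": 100, "status": "frozen"}  # unexpected change

sensitive_pass = actual == expected                       # verifies the whole outcome
specific_pass = actual["balance"] == expected["balance"]  # verifies this field only

print("sensitive test:", "PASS" if sensitive_pass else "FAIL")  # FAIL - catches the change
print("specific test:", "PASS" if specific_pass else "FAIL")    # PASS - misses the change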

5-14

Summary: key points
•  Balance dynamic and post-execution comparison
•  Balance sensitive and specific tests
•  Use masking and filters to adjust sensitivity

Comparison

Successful Test Automation

Comparison Illustration

© Dorothy Graham STA110126 Page 1 of 4

The following examples illustrate some of the problems in automated comparison. Below are the expected outcome and the actual outcome for a test. Do they match? I.e. does the test pass? The expected results were taken from a previous test that was confirmed as correct.

Test objective: ensure that the core processing is correct, i.e. the correct error messages are produced, the progress of the order is correctly logged, and the right number of things have been ordered. It doesn’t matter when the test is run, what specific things are ordered, or the exact number assigned to the order. The total should be correct for the things that have been ordered, however.

Test input (high-level conditions): log on as an existing customer (the software gets the first one in the list of existing customers). Order one thing that is not in stock (from a pre-defined table). Order three things that are in stock (3 things at random locations in the stock database). Total the cost of items ordered.

Expected outcome:
Order number X43578
Date: 24 Jan 2008 Time 11:02
Message: “Log-on accepted, J. Smith”
Message: “Add to basket, 1 toaster”
Message: “Out of stock, mountain bike”
Message: “Add to basket, 1 computer mouse”
Message: “Add to basket, 1 pedometer”
Add to total: £15.95
Add to total: £14.99
Add to total: £4.95
Message: “Check out J Smith”
Message: “Total due is £35.89”

Actual outcome:
Message: “Log-on accepted, M. Jones”
Order number X43604
Message: “Out of stock, anti-gravity boots”
Message: “Add to basket, 1 nose stud”
Add to total: £4.99
Message: “Add to basket, 1 car booster seat”
Add to total: £69.95
Message: “Add to basket, 1 pedometer”
Add to total: £4.95
Message: “Total due is £79.99”
Message: “Check out J Smith”
Date: 18 Feb 2008 Time 13:08

Has this test passed or failed? How long did it take you to decide? Would automated comparison be better / quicker?


Computer matching attempt No. 1:

Expected outcome | Actual outcome | Comparison result
Test run number X43578 | Message: “Log-on accepted, M. Jones” | FAIL
Date: 24 Jan 2008 | Test run number X43604 | FAIL
Time 11:02 | Message: “Out of stock, anti-gravity boots” | FAIL
Message: “Log-on accepted, J. Smith” | Message: “Add to basket, 1 nose stud” | FAIL
Message: “Add to basket, 1 toaster” | Add to total: £4.99 | FAIL
Message: “Out of stock, mountain bike” | Message: “Add to basket, 1 car booster seat” | FAIL
Message: “Add to basket, 1 computer mouse” | Add to total: “£69.95” | FAIL
Message: “Add to basket, 1 pedometer” | Message: “Add to basket, 1 pedometer” | PASS
Add to total: £15.95 | Add to total: £4.95 | FAIL
Add to total: £14.99 | Message: “Total due is £79.99” | FAIL
Add to total: £4.95 | Message: “Check out J Smith” | FAIL
Message: “Check out J Smith” | Date: 18 Feb 2008 | FAIL
Message: “Total due is £35.89 | Time 13:08 | FAIL

Well, this isn’t terribly helpful, is it! The only thing that passed was just a coincidence (ordering the same item in the two tests).

What filters could we use to get the computer’s simple comparison to give a more meaningful result?

We can see that some things would have a better chance of matching if the results were in the same order. For example, Date and Time come last in the actual outcome but nearly first in our expected outcome.

Let’s apply a filter to both sets of results to put the individual lines in alphabetical order.

Note that this now destroys the sequence of steps as a test (this may be OK for the purposes of comparing the individual steps).


After applying the filter:

Sort into alphabetical order:

Expected outcome | Actual outcome | Comparison result
Add to total: £14.95 | Add to total: £4.95 | FAIL
Add to total: £15.95 | Add to total: £4.99 | FAIL
Add to total: £4.99 | Add to total: £69.95 | FAIL
Date: 24 Jan 2008 | Date: 18 Feb 2008 | FAIL
Message: “Add to basket, 1 computer mouse” | Message: “Add to basket, 1 car booster seat” | FAIL
Message: “Add to basket, 1 pedometer” | Message: “Add to basket, 1 nose stud” | FAIL
Message: “Add to basket, 1 toaster” | Message: “Add to basket, 1 pedometer” | FAIL
Message: “Check out J Smith” | Message: “Check out J Smith” | PASS
Message: “Log-on accepted, J. Smith” | Message: “Log-on accepted, M. Jones” | FAIL
Message: “Out of stock, mountain bike” | Message: “Out of stock, anti-gravity boots” | FAIL
Message: “Total due is £35.89 | Message: “Total due is £79.99” | FAIL
Test run number X43578 | Test run number X43604 | FAIL
Time 11:02 | Time 13:08 | FAIL

We don’t seem to be much further forward now, although a different item has now passed! There are two items that would have matched if they had been in a row higher or lower. There is also something very strange about this result – the thing that has passed is ‘Message: “Check out J Smith”’. But we logged in as two different people! So the one thing shown as a Pass is actually a bug!! What other filters could we apply to get a better result from automated comparison?


Here are some other filters we could apply:

Filter:
After {Time} change if in the correct format to <nn:nn>
After {Test run number} change if in the correct format to <Xnnnnn>
After {Date} change if in the correct format to <nn Aaa nn>
After {Message: “Add to basket, } change to <item> all text before {”}
After {Message: “Out of stock, } change to <item> all text before {”}

Here is our result after applying these filters:

Expected outcome | Actual outcome | Comparison result
Add to total: £14.95 | Add to total: £4.95 | FAIL
Add to total: £15.95 | Add to total: £4.99 | FAIL
Add to total: £4.99 | Add to total: £69.95 | FAIL
Date: <nn Aaa nn> | Date: <nn Aaa nn> | PASS
Message: “Add to basket, <item>” | Message: “Add to basket, <item>” | PASS
Message: “Add to basket, <item>” | Message: “Add to basket, <item>” | PASS
Message: “Add to basket, <item>” | Message: “Add to basket, <item>” | PASS
Message: “Check out J Smith” | Message: “Check out J Smith” | PASS
Message: “Log-on accepted, J. Smith” | Message: “Log-on accepted, M. Jones” | FAIL
Message: “Out of stock, <item>” | Message: “Out of stock, <item>” | PASS
Message: “Total due is £35.89 | Message: “Total due is £79.99” | FAIL
Test run number <Xnnnnn> | Test run number <Xnnnnn> | PASS
Time <nn:nn> | Time <nn:nn> | PASS

We were not interested in checking what items were ordered; if we aren’t interested in the amounts either, we could apply further filters to mask the amounts as well.

Note that there are two aspects where simple comparisons such as those shown here are not really adequate.

We want to know that the total for the amount ordered is correct (by the way, the other bug is that the total for the Actual outcome should be £79.89). In order to check this automatically, we would need to define a variable, add the individual item amounts to it, and check that its total was the same as that calculated by the application. This can certainly be done, but it takes additional effort to program it into the comparison process!

Similarly, we could store the name used at log-on and compare it to the name used at check out, and this would find that bug (but also requires additional work).
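To give a feel for that extra effort, here is a rough Python sketch (written against the message formats above, and purely illustrative): it accumulates the “Add to total” amounts from the actual outcome, compares the sum with the “Total due” line, and cross-checks the log-on name against the check-out name. Both checks fail, exposing the two bugs discussed above.

import re
from decimal import Decimal

actual = ['Message: "Log-on accepted, M. Jones"',
          "Add to total: £4.99",
          "Add to total: £69.95",
          "Add to total: £4.95",
          'Message: "Total due is £79.99"',
          'Message: "Check out J Smith"']

running_total = Decimal("0")
stated_total = logon_name = checkout_name = None

for line in actual:
    if m := re.search(r"Add to total: £([\d.]+)", line):
        running_total += Decimal(m.group(1))
    elif m := re.search(r"Total due is £([\d.]+)", line):
        stated_total = Decimal(m.group(1))
    elif m := re.search(r'Log-on accepted, (.+)"', line):
        logon_name = m.group(1)
    elif m := re.search(r'Check out (.+)"', line):
        checkout_name = m.group(1)

print("total check:", "PASS" if running_total == stated_total
      else f"FAIL (items add up to £{running_total}, total due says £{stated_total})")
print("name check:", "PASS" if logon_name == checkout_name
      else f"FAIL (logged on as {logon_name}, checked out as {checkout_name})")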

Conclusion: automated comparison is not trivial!


6-1

Final Advice and Direction

Successful Test Automation

1 Managing 2 Architecture 3 Pre- and Post

4 Scripting 6 Advice 5 Comparison

6-2

What next?

•  we have looked at a number of ideas about test automation today
•  what is your situation?
   –  what are the most important things for you now?
   –  where do you want to go?
   –  how will you get there?
•  make a start on your test automation strategy now
   –  adapt it to your own situation tomorrow


6-3

Strategy exercise
•  your automation strategy / action plan
   –  review your objectives for today (p1)
   –  review your “take-aways” so far (p2)
   –  identify the top 3 changes you want to make to your automation (top of p3)
   –  note your plans now on p3
•  discuss with your neighbour or small group
   –  exchange emails, keep in touch
   –  form a support group for each other

6-4


6-5

Dealing with high level management

•  management support
   –  building good automation takes time and effort
   –  set realistic expectations
•  benefits and ROI
   –  make benefits visible (charts on the walls)
   –  metrics for automation
      •  to justify it, compare to manual test costs over iterations
•  on-going continuous improvement
   –  build cost, maintenance cost, failure analysis cost
   –  coverage of the system tested

6-6

Dealing with developers

•  a critical aspect for successful automation
   –  automation is development
•  may need help from developers
•  automation needs development standards to work
   –  testability is critical for automatability
   –  why should they work to new standards if there is “nothing in it for them”?
   –  seek ways to cooperate and help each other
      •  run tests for them
         –  in different environments
         –  rapid feedback from smoke tests
      •  help them design better tests?


6-7

Standards and technical factors

•  standards for the testware architecture
   –  where to put things
   –  what to name things
   –  how to do things
      •  but allow exceptions if needed
•  new technology can be great
   –  but only if the context is appropriate for it (e.g. Model-Based Testing)
•  use automation “outside the box”

6-8

On-going automation

•  you are never finished
   –  don’t “stand still”: schedule regular review and re-factoring of the automation
   –  change tools and hardware when needed
   –  re-structure if your current approach is causing problems
•  regular “pruning” of tests
   –  don’t have “tenured” test suites
      •  check for overlap, removed features
      •  each test should earn its place


6-9

Information and web sites
–  www.AutomatedTestingInstitute.com
   •  TestKIT Conference, Autumn, near Washington DC
   •  Test Automation certificate (TABOK)
–  tool information
   •  commercial and open source: http://testertools.com
   •  open source tools
      –  www.opensourcetesting.org
      –  http://sourceforge.net
      –  http://riceconsulting.com (search on “cheap and free tools”)
–  www.ISTQB.org
   •  Expert level in Test Automation (in progress)

6-10

Summary: successful test automation

•  assigned responsibility for automation tasks
•  realistic, measured objectives (testing ≠ automation)
•  technical factors: testware architecture, levels of abstraction, DSTL, scripting and comparison, pre- and post-processing
•  management support, ROI, continuous improvement

Successful Test Automation


6-11

Any more questions? Please email me: [email protected]

Thank you for coming today. I hope this will be useful for you. All the best in your automation!

Better Software magazine, July/August 2009 (www.StickyMinds.com)

“Why automate?” This seems such an easy question to answer; yet many people don’t achieve the success they hoped for. If you are aiming in the wrong direction, you will not hit your target!

This article explains why some testing objectives don’t work for automation, even though they may be very sensible goals for testing in general. We take a look at what makes a good test automation objective; then we examine six commonly held—but misguided—objectives for test execution automation, explaining the good ideas behind them, where they fail, and how these objectives can be modified for successful test automation.

Good Objectives for Test Automation

A good objective for test automation should have a number of characteristics. First of all, it should be measurable so that you can tell whether or not you have achieved it.

Objectives for test automation should support testing activities but should not be the same as the objectives for testing. Testing and automation are different and distinct activities.

Objectives should be realistic and achievable; otherwise, you will set yourself up for failure. It is better to have smaller-scale goals that can be met than far-reaching goals that seem impossible. Of course, many small steps can take you a long way!

Automation objectives should be both short and long term. The short-term goals should focus on what can be achieved in the next month or quarter. The long-term goals focus on where you want to be in a year or two.

Objectives should be regularly revised in the light of experience.

Misguided Objectives for Test Automation

OBJECTIVE 1: FIND MORE BUGS

Good ideas behind this objective:
•  automated testing should find bugs quicker
•  we can run more tests, and so find even more bugs
•  we should also find bugs in the parts we weren’t able to test manually

Basing the success of automation on finding bugs—especially the automa-tion of regression tests—is not a good thing to do for several reasons. First, it is the quality of the tests that determines whether or not bugs are found, and this has very little, if anything, to do with automation. Second, if tests are first run manually, any bugs will be found then, and they may be fixed by the time the automated tests are run. Finally, it sets an expectation that the main purpose of test automation is to find bugs, but this is not the case: A repeated test is much less likely to find a new bug than a new test. If the software is really good, auto-mation may be seen as a waste of time and resources.

Regression testing looks for unex-pected, detrimental side effects in un-changed software. This typically in-volves running a lot of tests, many of which will not find any defects. This is ideal ground for test automation as it can significantly reduce the burden of this repetitive work, freeing the testers to focus on running manual tests where more defects are likely to be. It is the testing that finds bugs—not the automa-tion. It is the testers who may be able to find more bugs, if the automation frees them from mundane repetitive work.

The number of bugs found is a misleading measure for automation in any case. A better measure would be the percentage of regression bugs found (compared to a currently known total). This is known as the defect detection percentage (DDP). See the StickyNotes for more information.
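As a purely illustrative calculation (the numbers are invented), DDP for regression bugs is simply the bugs the suite found as a fraction of the total currently known:

found_by_regression_suite = 8   # illustrative numbers only
missed = 2                      # regression bugs found later (or in live use)
ddp = found_by_regression_suite / (found_by_regression_suite + missed)
print(f"regression DDP = {ddp:.0%}")   # 80%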

Sometimes this objective is phrased in a slightly different way: “Improve the quality of the software.” But identifying bugs does nothing to improve software—it is the fixing of bugs that improves the software, and this is a development task.

If finding more bugs is something that you want to do, make it an objective for measuring the value of testing, not for measuring the value of automation.

Better automation objective: Help testers find more regression bugs (so fewer regression failures occur in operation). This could be measured by increased DDP for regression bugs, together with a rating from the testers about how well the automation has supported their objectives.

OBJECTIVE 2: RUN REGRESSION TESTS OVERNIGHT AND ON WEEKENDS

Good ideas behind this objective:
•  make use of under-used computer resources (evenings and weekends)
•  tests can run “while we sleep”

At first glance, this seems an excellent objective for test execution automation, and it does have some good points.

Once you have a good set of automated regression tests, it is a good idea to run the tests unattended overnight and on weekends, but resource use is not the most important thing.

What about the value of the tests that are being run? If the regression tests that would be run “off peak” are really valuable tests, giving confidence that the main areas of the system are still working correctly, then this is useful. But the focus needs to be on supporting good testing.

It is too easy to meet this stated objec-tive by just running any test, whether it is worth running or not. For example, if you ran the same one test over and over again every night and every weekend, you would have achieved the goal as stated, but it is a total waste of time and electricity. In fact, we have heard of someone who did just this! (We think he left the company soon after.)

Of course, automated tests can be run much more often, and you may want some evidence of the increased test execution. One way to measure this is using equivalent manual test effort (EMTE). For all automated tests, estimate how long it would have taken to run those tests manually (even though you have no intention of doing so). Then each time the test is run automatically, add that EMTE to your running total.
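A tiny illustration with invented numbers: if a suite is estimated at 6 hours of equivalent manual effort and was run automatically 20 times in a period, it has contributed 120 hours of EMTE.

manual_equivalent_hours = 6    # estimated time to run this suite manually (illustrative)
automated_runs = 20            # times the suite actually ran this period (illustrative)
print("EMTE this period:", manual_equivalent_hours * automated_runs, "hours")   # 120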

Better automation objective: Run the most important or most useful tests, employing under-used computer resources when possible. This could be partially measured by the increased use of resources and by EMTE, but should also include a measure of the value of the tests run, for example, the top 25 percent of the current priority list of most important tests (priority determined by the testers for each test cycle).

OBJECTIVE 3: REDUCE TESTING STAFF

Good ideas behind this objective:
•  we have spent money on a tool, so we should be able to save elsewhere
•  testing is labor-intensive and staff costs are high

This is an objective that seems to be quite popular with managers. Some managers may go even further and think that the tool will do the testing for them, so they don’t need the testers—this is just wrong. Perhaps managers also think that a tool won’t be as argumentative as a tester!

It is rare that staffing levels are reduced when test automation is introduced; on the contrary, more staff are usually needed, since we now need people with test script development skills in addition to people with testing skills. You wouldn’t want to let four testers go and then find that you need eight test automators to maintain their tests!

Automation supports testing activi-ties; it does not usurp them. Tools cannot make intelligent decisions about which tests to run, when, and how often. This is a task for humans able to assess the current situation and make the best use of the available time and resources.

Furthermore, automated testing is not automatic testing. There is much work for people to do in building the au-tomated tests, analyzing the results, and maintaining the testware.

Having tests automated does—or at least should—make life better for testers. The most tedious and boring tasks are the ones that are most amenable for au-tomation, since the computer will hap-pily do repetitive tasks more consistently and without complaining. Automation can make test execution more efficient, but it is the testers who make the tests themselves effective. We have yet to see a tool that can think up tests as well as a human being can!

The objective as stated is a management objective, not an appropriate objective for automation. A better management objective is “Ensure that everyone is performing tasks they are good at.” This is not an automation objective either, nor is “Reducing the cost of testing.” These could be valid objectives, but they are related to management, not automation.

Better automation objective: The total cost of the automation effort should be significantly less than the total testing effort saved by the automation. This could be partially measured by an increase in tests run or coverage achieved per hour of human effort.

OBJECTIVE 4: REDUCE ELAPSED TIME FOR TESTING

Good ideas behind this objective:
•  any way we can save time is good
•  saving time on testing will help overall

This one seems very sensible at first and sometimes it is even quantified—“Reduce elapsed time by X%”—which sounds even more impressive. However, this objective can be dangerous because of confusion between “testing” and “test execution.”

The first problem with this objective is that there are much easier ways to achieve it: run fewer tests, omit long tests, or cut regression testing. These are not good ideas, but they would achieve the objective as stated.

The second problem with this ob-jective is its generality. Reducing the elapsed time for “testing” gives the im-pression we are talking about reducing the elapsed time for testing as a whole. However, test execution automation tools are focused on the execution of the tests (the clue is in the name!) not the whole of testing. The total elapsed time for testing may be reduced only if the test execution time is reduced suffi-ciently to make an impact on the whole. What typically happens, though, is that the tests are run more frequently or more tests are run. This can result in more bugs being found (a good thing), that take time to fix (a fact of life), and

increase the need to run the tests again (an unavoidable consequence).

The third problem is that there are many factors other than execution that contribute to the overall elapsed time for testing: How long does it take to set up the automated run and clear up after it? How long does it take to recognize a test failure and find out what is actually wrong (test fault, software fault, envi-ronment problem)? When you are testing manually, you know the context—you know what you have done just before the bug occurs and what you were doing in the previous ten minutes. When a tool identifies a bug, it just tells you about the actual discrepancy at that time. Whoever analyzes the bug has to put together the context for the bug before he or she can really identify the bug.

In figures 1 and 2, the blocks repre-sent the relative effort for the different activities involved in testing. In manual testing, there is time taken for editing tests, maintenance, set up of tests, ex-ecuting the tests (the largest component of manual testing), analyzing failures, and clearing up after tests have com-pleted. In figure 1, when those same tests are automated, we see the illusion that automating test execution will save us a lot of time, since the relative time for execution is dramatically reduced. How-ever, figure 2 shows us the true picture—total elapsed time for testing may actu-ally increase, even though the time for test execution has been reduced. When test automation is more mature, then the total elapsed time for all of the testing activities may decrease below what it was initially for manual testing. Note that this is not to scale; the effects may be greater than we have illustrated.

We now can see that the total elapsed time for testing depends on too many things that are outside the control or in-fluence of the test automator.

The main thing that causes increased testing time is the quality of the soft-ware—the number of bugs that are al-ready there. The more bugs there are, the more often a test fails, the more bug reports need to be written up, and the more retesting and regression testing are needed. This has nothing to do with whether or not the tests are automated or manual, and the quality of the software


is the responsibility of the developers, not the testers or the test automators.

Finally, how much time is spent main-taining the automated tests? Depending on the test infrastructure, architecture, or framework, this could add consid-erably to the elapsed time for testing. Maintenance of the automated tests for later versions of the software can con-sume a lot of effort that also will detract from the savings made in test execution. This is particularly problematic when the automation is poorly implemented, without thought for maintenance issues when designing the testware architec-ture. We may achieve our goal with the first release of software, but later ver-sions may fail to repeat the success and

may even become worse.

Here is how the automator and tester

should work together: The tester may request automated support for things that are difficult or time consuming, for example, a comparison or ensuring that files are in the right place before a test runs. The automator would then pro-vide utilities or ways to do them. But the automator, by observing what the tester is doing, may suggest other things that could be supported and “sell” additional tool support to the tester. The rationale is to make life easier for the tester and to make the testing faster, thus reducing elapsed time.

Better automation objective: Reduce the elapsed time for all tool-supported testing activities. This is an ongoing objective for automation, seeking to improve both manual and existing automated testing. It could be measured by elapsed time for specified testing activities, such as maintenance time or failure analysis time.

OBJECTIVE 5: RUN MORE TESTS

Good ideas behind this objective:
•  more tests should give better coverage
•  if some tests are good, more tests must be better

More is not better! Good testing is not found in the number of tests run, but in the value of the tests that are run. In fact, the fewer tests for the same value, the better. It is definitely the quality of the tests that counts, not the quantity. Automating a lot of poor tests gives you maintenance overhead with little return. Automating the best tests (however many that is) gives you value for the time and money spent in automating them.

If we do want to run more tests, we need to be careful when choosing which additional tests to run. It may be easier to automate tests for one area of the software than for another. However, if it is more valuable to have automated tests for this second area than the first, then automating a few of the more difficult tests is better than automating many of the easier (and less useful) tests.

A raw count of the number of au-tomated tests is a fairly useless way of gauging the contribution of automation to testing. For example, suppose testers decide there is a particular set of tests that they would like to automate. The real value of automation is not that the tests are automated but the number of times they are run. It is possible that the testers make the wrong choice and end up with a set of automated tests that they hardly ever use. This is not the fault of the automation, but of the testers’ choice of which tests to automate.

It is important that automation is responsive, flexible, and able to auto-mate different tests quickly as needed. Although we try to plan which tests to automate and when, we should always start automating the most important tests first. Once we are running the tests,

Figure 1

Figure 2


the testers may discover new information that shows that different tests should be automated rather than the ones that had been planned. The automation regime needs to be able to cope with a change of direction without having to start again from the beginning.

During the journey to effective test automation, it may take far longer to automate a test than to run that test manually. Hence, trying to automate may lead, in the short term at least, to running fewer tests, and this may be OK.

Better automation objective: Automate the optimum number of the most useful and valuable tests, as identified by the testers. This could be measured as the number or percentage automated out of the valuable tests identified.

OBJECTIVE 6: AUTOMATE X% OF TESTING

Good ideas behind this objective:
•  gives a way to measure the progress of our automation effort
•  gives a clear target for our automation

This objective is often seen as “Automate 100 percent of testing.” In this form, it looks very decisive and macho! The aim of this objective is to ensure that a significant proportion of existing manual tests is automated, but this may not be the best idea.

A more important and fundamental point is to ask about the quality of the tests that you already have, rather than how many of them should be auto-mated. The answer might be none—let’s have better tests first! If they are poor tests that don’t do anything for you, automating them still doesn’t do any-thing for you (but faster!). As Dorothy Graham has often been quoted, “Auto-mated chaos is just faster chaos.”

If the objective is to automate 50 per-cent of the tests, will the right 50 percent be automated? The answer to this will depend on who is making the decisions and what criteria they apply. Ideally, the decision should be made through nego-tiation between the testers and the au-tomators. This negotiation should weigh the cost of automating individual tests or sets of tests, and the potential costs of maintaining the tests, against the value

of automating those tests. We’ve heard of one automated test taking two weeks to build when running the test manually took only thirty minutes—and it was only run once a month. It is difficult to see how the cost of automating this test will ever be repaid!

What percentage of tests could be au-tomated? First, eliminate those tests that are actually impossible or totally imprac-tical to automate. For example, a test that consists of assessing whether the screen colors work well together is not a good candidate for automation. Auto-mating 2 percent of your most important and often-repeated tests may give more benefit than automating 50 percent of tests that don’t provide much value.

Measuring the percentage of manual tests that have been automated also leaves out a potentially greater benefit of automation—there are tests that can be done automatically that are impossible or totally impractical to do manually. In figure 3 we see that the best automation includes tests that don’t make sense as manual tests and does not include tests that make sense only as manual tests.

Automation provides tool support for testing; it should not simply auto-mate tests. For example, a utility could be developed by the automators to make comparing results easier for the testers. This does not automate any tests but may be a great help to the testers, save them a lot of time, and make things much easier for them. This is good auto-mation support.

Sticky Notes
For more on the following topics, go to www.StickyMinds.com/bettersoftware:
•  Dorothy Graham’s blog on DDP and test automation
•  Software Test Automation

Better automation objective: Automation should provide valuable support to testing. This could be measured by how often the testers used what was provided by the automators, including automated tests run and utilities and other support. It could also be measured by how useful the testers rated the various types of support provided by the automation team. Another objective could be: The number of additional verifications made that couldn’t be checked manually. This could be related to the number of tests, in the form of a ratio that should be increasing.

What are your objectives for test execution automation? Are they good ones? If not, this may seriously impact the success of your automation efforts. Don’t confuse objectives for testing with objectives for automation. Choose more appropriate objectives and measure the extent to which you are achieving them, and you will be able to show how your automation efforts benefit your organization. {end}

Figure 3


Effective Automated Testing with a DSTL

Martin Gijsen Test automation architect

[email protected] © 2009

Abstract: This article will demonstrate how a Domain Specific Test Language (DSTL) can be defined and implemented. The maintenance sensitivity of the testware will be minimized and its maintainability maximized using only a few rules. This results in an easy to use, low maintenance automated test solution. Examples will use the freeware ETA (Essential Test Automation) Framework.

* * *

Introduction

Test automation for functional testing requires an investment, in time as well as in money. To maximize the return on investment, the benefits should be reaped for as long as possible. They should preferably keep flowing for as long as the System Under Test (SUT) is being maintained and tested. While developing into a test automation architect over the course of about ten years, I have found that following certain rules goes a long way in obtaining these effective automated tests. This article will list these rules and explain them using examples. I hope that they will help the reader as they have helped me. A key feature of the approach that the rules define is the DSTL, or Domain Specific Test Language. A DSTL is an easy to use test language, designed to test a specific system as it evolves over time. It does not (normally) look like a programming language. The DSTL will evolve with the SUT. With a properly designed test automation solution, this is just fine. Note that, although many tools do not allow creating a DSTL in the exact way suggested here, the same rules can be applied in that context and their benefits harvested. The example being used concerns a hypothetical banking system for private clients. A client can open one or more accounts, deposit or withdraw an amount, and transfer an amount between accounts.

Rule number one: Support tester friendly tests

In order for testers to be both willing and able to actually use a solution for automated testing, it is important to make the solution tester friendly. Although some testers are quite comfortable using the scripting languages that test automation tools often use, they are a minority, and fewer still are good at it. Much like few developers are good testers. The most effective way to deal with this fact is to accept it and find a way in which testers can do what they are good at, while others take care of the other relevant tasks. This effectively means that the test analysis part of automated testing should require no programming. It also implies that someone with programming skills will have to take care of the technical part as an automator. The activities will therefore be divided over two roles, the test analyst and the automator. The test analyst will focus on what to test rather than on the details of how to test it. This


requires the analytical skills that testers normally need but no programming skills. This makes automated testing with this approach a pure testing activity. The way tests are written is more formal than usual but not really 'technical.' The automator focuses on the technical side only, making sure that tests run. This does not require in depth knowledge of SUT functionality and is a pure development role, to be assigned to a software engineer. It would seem, if a tester is also a fair programmer, that taking the technical part out of the testing is less important. This is not the case. To begin with, clearly separating test analysis and test automation is an example of the 'separation of concerns' principle. This is important to avoid making things more complicated than necessary and always a good idea. Also, consider what will happen to the test solution if its author is unable to maintain it any longer, for example after moving to another project or company. This is a serious threat to the continuity of the test solution that has already resulted in a premature end for many automated testing efforts. It is wise not to rely on a particular technical tester, or even the presence of technical testers in general, to write, review and maintain the automated tests. It is also unnecessary. Consider the following test case.

account first name last name address

open account 1234567890 John Doe 11 test street, Testburg

open account 1234567891 Jane Doe 27 test lane, Testville

account amount

deposit 1234567890 10000

amount from account to account

transfer 1234 1234567890 1234567891

account amount

check balance 1234567890 8766

check balance 1234567891 1234

Before going into the contents of the test case, please consider its form first. It consists of a number of instructions with varying numbers of arguments. The arguments are described in an optional comment line above each instruction. Tests in this column based format are easy to write, review and maintain. Any spreadsheet program can be used to create them. These programs are easy to use and some are even free. And only their most basic features are needed; the program serves as little more than a table editor. Note that the basic test case is clear just from reading the instructions in the first column. This is one reason why instructions should start with a verb. To make sure they are easy to read, they should not be in CAPITALS or contain characters like underscores. This is the format that the test engine of the Essential Test Automation Framework supports. This framework is freeware, available to anyone at no cost through www.DeAnalist.nl. Test instructions can be implemented in Java. This format will be used throughout this article. The use of colours is optional and just for improving readability.


Rule number two: Test analyst and automator cooperate

When defining a DSTL, the test analyst, who knows the testing needs for the SUT and will be using the DSTL to write tests, should be in the lead. The automator should check that the proposed instructions can be realized. An experienced automator can also support less experienced test analysts, both with what the instructions look like and with effective use of the test engine functionality. So defining a DSTL is a task on which test analyst and test automator should work together. The instructions that form the DSTL must be well documented to avoid confusion. The functionality of the instructions as described is what is available to the test analyst to write tests. The document also serves as a requirements document for the automator. Using a template like the one in appendix A helps ensure that all important aspects of the behaviour of an instruction are captured. Note that, as the SUT evolves, new instructions may be needed and existing instructions may change. Again, both the test analyst and the automator should be involved in defining the new instructions. The document with the instruction definitions must be updated, so that it always reflects the current DSTL.

Rule number three: Use the natural abstraction level

Computers have very little difficulty processing, storing and searching through huge amounts of data very quickly. A human brain does not work that way. Its short term memory can hold only a few things at any one time. So a test case should be brief in order to be understandable to its writer, reviewer and maintainer. The main way to achieve this is by using a high abstraction level for the instructions of the DSTL. Consider the first few lines from the test case above, repeated here.

account first name last name address

open account 1234567890 John Doe 11 test street, Testburg

open account 1234567891 Jane Doe 27 test lane, Testville

Note that the 'open account' instruction is functional in nature: It describes what must be done rather than how it is done. The instruction may very well be a complex one, accessing multiple screens of the system. But unless such details are essential to understanding the test case, they can and probably should be left implicit. A natural abstraction level is easily obtained by taking a candidate instruction and asking yourself, possibly repeatedly, 'Why do I do this?' Take entering the account number above into a GUI field, for example. The answer to the question 'why?' might be that it is required to complete the screen. The next question then focusses on why the screen is being filled out, the answer being: to open an account. As the reason that we are opening an account is because the test case asks for it, we are done and decide that 'open account' will be an instruction rather than 'enter account number.'

Rule number four: Avoid irrelevant interface details

If the focus of a test case is on a system interface, it is unavoidable that some details of this interface appear in the test case. In most test cases, however, interface details are not essential to understand what is being tested. Nor is the test case the only place where such details can be specified. Also, interface details in a test case often increase its


maintenance sensitivity significantly: It is often these details that change, break the test and make maintenance to the testware necessary. Irrelevant interface details are therefore best hidden in the implementation of the instructions. Consider once more the 'open account' instruction. Note that it is not called 'use account screen,' 'call account service' or something else that even suggests what kind of interface the instruction addresses. Instructions that have the natural abstraction level, as suggested in rule number three, do not normally refer to the interface in any way, let alone to its details. If a test case does contain interface details that are not essential to understand it, this suggests that the level of abstraction of some instructions can be raised. Doing so will further reduce maintenance sensitivity of the test, making automated testing more effective and a more pleasant activity. One additional advantage of hiding irrelevant interface details is that the same instruction can be implemented multiple times, in different ways. This is particularly useful if a system has multiple interfaces that support (more or less) the same functionality. An example would be a system that can be accessed both as a web application and through web services. With two implementations of the relevant instructions, the same test can be run against the same system using both interfaces. Another example is when different systems have similar functionality. A final example is when an interface is not yet available but another one is temporarily used instead, for instance when accessing a database directly makes automated testing possible while the application GUI is still being developed. If interface details do need to be specified in the test, the proper place for them is normally in a configuration section of the test, not in the test cases. Changes to these details are then likely to be restricted to the configuration section and not affect the test cases at all. An example is the URL or IP address to connect to. It can change when moving to a new (test) environment and may be needed in many places. Defining it outside the test cases will avoid having to check all test cases for all uses of it. How to do this is explained under rule number six below.

Rule number five: Avoid irrelevant tooling details

As with interface details, any information about the tooling that is used to perform the test will make test cases harder to understand and more sensitive to maintenance. And as with the interface details, there are two places for tooling details that are much more suitable than the test cases: The first is the implementation of the instructions. Putting them here ensures that they will not bother the test analyst at all, but this may not be convenient for configuration items. These items could be read from other files by the instruction implementation, but separating configuration data from the test cases completely does not make maintenance any easier. They are best placed in a configuration section of the test. There is one exception to this rule: The test engine that runs the test is also a tool and it does affect the test cases because it determines the format of the whole test. If a different test engine is ever selected, the whole test may have to be rewritten, perhaps including (part of) the instructions. Until a standard for such tools is defined, this is hard to avoid. Fortunately, it is unlikely to become an issue if a good test engine is used (like the one in the ETA Framework).

Rule number six: Give names to values

Many of the things that are most susceptible to change in a test case take the form of a single value, like a URL, an IP address, etc. Having such values appear in many places


throughout a test is a maintenance nightmare just waiting to happen. Take for example the case where a URL is used like this in many places in a test.

url

open URL http://server17:1001

When the URL that is used changes, every location where it is used must be identified and the URL value updated. This is boring and can be cumbersome. Even worse could be the possibility that some occurrences are missed, which becomes more likely as the task is considered more annoying. It can be quite difficult, for instance, to find the reason for a failing test case when the URL looks correct at first glance but in fact points to an old, outdated but still existing system. The chances of something like this happening increase if the URL itself is not too meaningful to the reader, even more so if several URLs are in use at the same time and they look similar. The URL for the system test environment and for the integration test environment may look alike, for example. Such issues are easily avoided by assigning a name to a value and referring to it by this name only. Now consider this somewhat longer sample.

name value

define constant oldSystemTestUrl http://server07:1001

define constant systemTestUrl http://server17:1001

define constant integrationTestUrl http://server14:1001

define constant url ?systemTestUrl

url

open URL ?url

The 'define constant' instruction is a built-in instruction of the ETA Framework, meant for exactly this kind of situation. It introduces a name in the test that can be referred to anywhere after that point in the test. The way to refer to a constant is by prefixing the name with a question mark, as has been done twice in the above fragment. The first three constant definitions define the available URLs. The last one selects the one that applies at this time. The 'url' constant can and should be used throughout the test. Switching to the integration test URL is as simple as replacing '?systemTestUrl' with '?integrationTestUrl' in one place only. The constant definitions would normally be placed in the configuration section of the test, so all such settings can be maintained together. Maintenance sensitivity of the test is significantly reduced and its maintainability increased by naming values. Even if a value is used only once in the test, giving it a meaningful name first makes a lot of sense. The readability of the test increases, which makes it easier to understand and thus easier to review and maintain. Applying this rule to our banking test case gives the following version.

name value

define constant johnsAccount 1234567890

define constant janesAccount 1234567891


account first name last name address

open account ?johnsAccount John Doe 11 test street, Testburg

open account ?janesAccount Jane Doe 27 test lane, Testville

account amount

deposit ?johnsAccount 10000

amount from account to account

transfer 1234 ?johnsAccount ?janesAccount

account amount

check balance ?johnsAccount 8766

check balance ?janesAccount 1234

All references to the account numbers have now been changed into references to the constants that represent the account numbers. The result is a test case that is easier to interpret than before, and changing an account number only needs to be done in one place.

Conclusion

To get a sustainable test automation solution with no dependence on highly technical testers, focus on:

maximizing ease of writing, reviewing and maintaining tests, and minimizing the amount of maintenance of all testware.

This article has discussed six rules that go a long way in achieving these goals:

1. Support tester-friendly tests.
2. Test analyst and automator cooperate.
3. Use the natural abstraction level.
4. Avoid irrelevant interface details.
5. Avoid irrelevant tooling details.
6. Give names to values.

A suitable test engine like the one in the ETA Framework takes care of rule number one and offers additional features that support applying rule number six. The rest requires human intelligence and skills. So effective test automation is entirely feasible. Comments on this article and questions are welcome at [email protected].

Implementing the DSTL using the ETA Framework

The Essential Test Automation Framework offers many features that make it easy for a Java developer to implement DSTL instructions. The four instructions from the examples have been implemented for your convenience. The source code is available in the ETA Framework package and is also provided in appendix B. The result of running the test with this implementation of the instructions can be found in appendix C.


Appendix A: A template for describing instructions

This template can be used to specify the behaviour of an instruction in detail.

Name and aliases The names under which the instruction is available.

Description Describe what the instruction is used for.

Parameters The parameters and their meaning. Indicate what values are accepted (e.g. 'yes' or 'no', or integers between 17 and 23) and whether each parameter is mandatory or optional.

Pre-condition What is required for the instruction to succeed.

Post-condition What will be true after the instruction succeeds.

Error situations What can go wrong and what will happen then.

Example Either an example of the usage of the instruction or a reference to another instruction (with an example that contains this instruction).

The example below describes a built-in instruction of the ETA Framework itself.

Name and aliases 'begin test case' and 'begin testcase'.

Description Indicates the beginning of a (new) test case.

Parameters

1 The identification of the test case, preferably unique within the test. Optional.

2 The test case description. Optional.

Pre-condition None.

Post-condition If the previous input line was part of a test case, that test case is closed. A new test case is opened and assigned the next sequence number for the report(s), starting at one.

Error situations Inside a procedure definition the instruction is not executed but generates an error.

Example ...


Appendix B: The demo source code for the ETA Framework

This appendix contains the source code for the four instructions in the example test case, plus the source code for the library that registers these new instructions with the ETA Framework and defines the main() method. These files, together with the .jar files that the framework engine requires to run, form a complete test solution: it runs the sample test. Since the instructions are for a hypothetical system, the logic that would actually execute the instructions and address the interface(s) of the SUT could not be created. Each instruction simply writes a comment to the test report saying "executed".

In the source code, the marked sections show:
- that an instruction class derives from org.etaFramework.Instruction,
- how the checkMandatoryArgument() method from the base class is used to validate each instruction argument and to report on invalid ones using a description,
- how org.etaFramework.validation.Validators.cDefaultValidator can be supplied to checkMandatoryArgument() if no validation is required, and
- how a simple check on the values returned by checkMandatoryArgument() ensures that all arguments are indeed valid before the real instruction logic is invoked.

The instruction library defines the main() method and registers the instructions with the test run object. It is also a convenient place to define the validators that the instructions of this library use to validate their arguments. Note that checking the arguments is not required; it just makes sure that invalid arguments result in a meaningful error message. Not performing such checks means less code to write, but it can also mean spending (much) more time figuring out what is wrong and where the error was introduced. The ETA Framework distribution package contains documentation on all of its features, including the ones used in this article.

OpenAccountInstruction.java:

package org.etaFramework.demo1;

import org.etaFramework.ITestLine;
import org.etaFramework.Instruction;
import org.etaFramework.validation.Validators;

class OpenAccountInstruction extends Instruction {

    OpenAccountInstruction () {
        super ();
    }

    public void execute (final ITestLine testLine) {
        // validate the arguments
        final String accountNr = checkMandatoryArgument (
            testLine, 0, DemoLibrary.cAccountNrValidator, "account number");
        final String firstName = checkMandatoryArgument (
            testLine, 1, Validators.cDefaultValidator, "first name");
        final String lastName = checkMandatoryArgument (
            testLine, 2, Validators.cDefaultValidator, "last name");
        final String address = checkMandatoryArgument (
            testLine, 3, Validators.cDefaultValidator, "address");

        // only execute the instruction if all arguments are valid
        if (accountNr != null && firstName != null && lastName != null &&
            address != null) {
            reportComment ("executed");
        }
    }
}

DepositInstruction.java:

package org.etaFramework.demo1;

import org.etaFramework.ITestLine;
import org.etaFramework.Instruction;

class DepositInstruction extends Instruction {

    DepositInstruction () {
        super ();
    }

    public void execute (final ITestLine testLine) {
        // validate the arguments
        final String accountNr = checkMandatoryArgument (
            testLine, 0, DemoLibrary.cAccountNrValidator, "account number");
        final String amount = checkMandatoryArgument (
            testLine, 1, DemoLibrary.cAmountValidator, "amount");

        // only execute the instruction if all arguments are valid
        if (accountNr != null && amount != null) {
            reportComment ("executed");
        }
    }
}

TransferInstruction.java:

package org.etaFramework.demo1;

import org.etaFramework.ITestLine;
import org.etaFramework.Instruction;

class TransferInstruction extends Instruction {

    TransferInstruction () {
        super ();
    } // TransferInstruction ()

    public void execute (final ITestLine testLine) {
        // validate the arguments
        final String amount = checkMandatoryArgument (
            testLine, 0, DemoLibrary.cAmountValidator, "amount");
        final String fromAccountNr = checkMandatoryArgument (
            testLine, 1, DemoLibrary.cAccountNrValidator, "from account number");
        final String toAccountNr = checkMandatoryArgument (
            testLine, 2, DemoLibrary.cAccountNrValidator, "to account number");

        // only execute the instruction if all arguments are valid
        if (amount != null && fromAccountNr != null && toAccountNr != null) {
            reportComment ("executed");
        }
    }
}

CheckBalanceInstruction.java:

package org.etaFramework.demo1;

import org.etaFramework.ITestLine;
import org.etaFramework.Instruction;

class CheckBalanceInstruction extends Instruction {

    CheckBalanceInstruction () {
        super ();
    } // CheckBalanceInstruction ()

    public void execute (final ITestLine testLine) {
        // validate the arguments
        final String accountNr = checkMandatoryArgument (
            testLine, 0, DemoLibrary.cAccountNrValidator, "account number");
        final String amount = checkMandatoryArgument (
            testLine, 1, DemoLibrary.cAmountValidator, "amount");

        // only execute the instruction if all arguments are valid
        if (accountNr != null && amount != null) {
            reportComment ("executed");
        }
    }
}

DemoLibrary.java:

package org.etaFramework.demo1;

import org.etaFramework.Framework;
import org.etaFramework.IInstructionLibrary;
import org.etaFramework.ITestRun;
import org.etaFramework.Options;
import org.etaFramework.validation.IValidator;
import org.etaFramework.validation.PatternBasedValidator;

class DemoLibrary implements IInstructionLibrary {

    public static void main (final String[] args) {
        // create an ITestRun
        final ITestRun testRun = Framework.createTestRun ();
        // register the demo instruction library
        testRun.registerInstructionLibrary (new DemoLibrary ());
        // create an options object
        final Options options = new Options ();
        // parse the arguments
        options.parse (args);
        // run the file specified in the options
        testRun.run (options);
    }

    public boolean registerInstructions (final ITestRun testRun) {
        testRun.registerInstruction ("open account", new OpenAccountInstruction ());
        testRun.registerInstruction ("deposit", new DepositInstruction ());
        testRun.registerInstruction ("transfer", new TransferInstruction ());
        testRun.registerInstruction ("check balance", new CheckBalanceInstruction ());
        return true;
    }

    public void cleanup () {
        ;
    }

    static IValidator cAccountNrValidator = new PatternBasedValidator ("\\d+");
    static IValidator cAmountValidator = new PatternBasedValidator ("\\d+(,\\d\\d)?");
}


Appendix C: The test report

The implementation of the instructions that a test uses is all that is needed to run it. Since the source code in appendix B implements all the instructions that the example test case uses, the test case can be run. Letting the engine of the Essential Test Automation Framework execute the test case results in the following test report, in the same column-based format as the test itself. The lines from the test are preceded by a line number in the report, so that they are easy to look up in the test. The text in blue indicates comments that were generated either by the framework or by the instructions.


Technical versus non-technical skills in test automation

Dorothy Graham Software Testing Consultant

[email protected]

SUMMARY

In this paper, I discuss the role of testers and test automators in test automation. Technical skills are needed by test automators, but testers who do not have technical skills should not be prevented from writing and running automated tests.

Keywords Tester, test automator, test automation, skills.

1. INTRODUCTION

Test automation is a popular topic in software testing, and an area where a number of organizations have had good success. Tests that may take days to run manually can be executed in hours, running overnight and at weekends, with greater accuracy and repeatability. Tests can be run more often, giving immediate feedback for new builds. Yet despite the obvious potential, many organizations are still struggling to achieve good benefits from automation. I believe that one reason for this is the role of the "test automator". There is a common misperception that testers should take on this role. This paper explains why this may not be the best solution.

It is popular for testers to be encouraged to develop programming skills. For example, at EuroSTAR 2012 a keynote speaker advised all testers to learn to code. I don't agree with this, and this paper, originally written for the CAST conference in 2010, explains why.

2. TERMS

I will start by defining the terms I use in this paper.

Test automation: The computer-assisted running of software tests, i.e. the automation of test execution.

Test automator: A person who builds and maintains the testware associated with automated tests. [4]

Tester: A person who identifies test conditions, designs test cases and verifies test results. A tester may also build and execute tests and compare test results. [4]

Testware: The artifacts required to plan, design and execute tests, such as documentation, scripts, inputs, expected outcomes, set-up and clear-up procedures, files, databases, environments, and any additional software or utilities used in testing. [4]


3. TEST AUTOMATION SKILLS

3.1 Existing perceptions

The automation of test execution is a popular application of computer technology to itself. There are a number of books about test automation [1,2,3,4,7,8,10,11,12], but many of them do not appear to mention the skills needed (or it was not obvious whether they did). There is a general perception that testers must be, or become, technical, i.e. programmers, if they are to become involved in automation, although there are a few exceptions that make a distinction between testers and automators.

Linda Hayes, in her useful booklet on automation [7], says: "… developing test scripts is essentially a form of programming; for this role, a more technical background is needed." She distinguishes between "Test Developers", i.e. testers, and "Script Developers", which is part of the role of a test automator.

Dustin et al. in [3] say: "When people think of AST [Automated Software Testing], they often think the skill set required is one of a 'tester', and that any manual tester can pick up and learn how to use an automated testing tool. Although the skills [of a tester] … are still needed to implement AST, a complement of skills similar to the broad range of skill sets needed to develop the software product itself is needed." (p 225)

A paper by Mosaic [12] mentions three roles: "Manual Test Engineer", "Automation Test Engineer" and "Lead Automator". In this model, the design of tests (i.e. the tester's role) is done by both test engineers; the automation work (i.e. the test automator's role) is done by the lead automator and the automation test engineer. The key distinction is who designs the tests, which in my view is best done by the tester, collaborating with the test automator for tests that are to be automated.

3.2 Is test automation a technical task?

The answer to this question depends on what you include as part of "test automation". If you view it as the direct use of a test execution tool, i.e. writing, editing and running scripts written in the tool's scripting language, then it is a technical task, and programming (i.e. scripting) skills are needed.

Another technical aspect of test automation is the design of the testware architecture – the structure and relationship of all of the items of testware that comprise the artefacts required for automated tests to successfully run. The design of the testware architecture is a critical aspect for successful test automation, and the skills needed for this include technical expertise, as well as knowledge of how the tests are to be used. The person who designs the testware architecture may be called a test automator, test architect, or lead automator.

3.3 Constructing automated tests is not entirely a technical process

The construction of the automation architecture, and of the scripts and other testware that will be used to run automated tests, is a technical task, but automated testing is not just the structure of the architecture and scripts. The whole purpose of test automation is to make it possible to run tests with minimal human involvement in test execution (and comparison). Testers need to be able to use automated tests, both to write tests to be run automatically, and to run those tests and view the results. The tests that are to be automated could be technical tests, such as those written by developers as part of Test-Driven Development or unit or integration testing, but system and acceptance tests can also be automated, and the testers who write those tests are not always technical (i.e. software developers).

The content of the test needs to be determined, but this is a task that is done by a tester; the implementation of the test is what is done by the automator.

4. TESTERS TO AUTOMATORS?

4.1 Testers become automators?

I have seen it work well to have a team of manual testers embarking on an automation project, where all (or nearly all) of the testers effectively become programmers, i.e. test programmers, or scripters. At a former colleague's company, five out of the team of six testers went on the tool vendor's training course and became familiar with the tool's scripting language. One tester decided he didn't want to become technical, so he concentrated on manual testing, but the others all became good test automators.

There were two interesting side-effects of the testers' newly acquired skillset. First, they had a lot more sympathy for the developers, as they now understood first-hand the frustrations of trying to get the computer to do what you wanted it to do. Second, they found that the developers treated them with a bit more respect, as they now also had some development skills. This led to a better relationship between the developers and testers.

Another example where it worked very well to have all of the testers become automators is described in a chapter by Lisa Crispin [2] in our forthcoming book. An agile team of 9 to 12 people were all involved in doing manual regression testing, so were highly motivated to automate 20% of their work, and everyone became involved in the automation.

4.2 A separate team of test automators?

I have seen other organizations where a separate team is set up to automate tests, leaving the testers free to concentrate on designing tests and running manual tests. As the automation team gets going, they automate tests nominated by the testers, freeing the testers from having to do those tests manually. The automation team provides a service to the testers, designing the testware architecture and structure of the tests, and assisting where needed when problems are encountered with the automated tests.

For example, if an automated test fails, it could be because of a software fault (in which case the tester would have found a bug), but it could also fail for a technical reason such as a problem with the environment, a missing testware item (i.e. a bug in the automated testware), or a problem with the tool itself. The tester, not being technical, will need technical assistance to identify the source of the problem.

So we have the situation where test automation does require technical skills, but we have testers who do not have those skills – can this really work? Yes it can, but it needs two key separations or layers of abstraction.

5. AUTOMATION SUCCESS NEEDS LAYERS OF ABSTRACTION

5.1 Technical Layer

Technical aspects are very important for test automation. A good testware architecture will have two layers of abstraction [6]. The technical layer implements good software development practices for the testware, separating the tool itself and the direct scripting of the tool from the software or scriptware that calls and uses the lower-level scripts. Modularity and reuse are key factors in minimizing maintenance of automated testware. If something changes in the software, the testware will need to reflect that change. With lower levels of scripting (a recorded test or linear script being the lowest), a small change to a screen can make "magnetic trash" [9] of the automated tests.

If possible, the testware should be designed so that it can cope with changes in the software under test without needing any changes to the testware. If this is not possible, the effects of any change to the software being tested should be confined to only one testware artefact (or a minimum number if this is not practical).

This layer gives good maintainability to the automated test regime.
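As an illustration of what such a technical layer can look like in practice, the sketch below (in Java, and not specific to any particular tool) confines all knowledge of a login screen to a single testware artefact. The UiDriver interface stands in for whatever low-level API the test execution tool provides, and the field and button identifiers are invented for the example; if the screen changes, only LoginActions needs maintenance, while every higher-level script keeps calling login () unchanged.

// Stand-in for the low-level API of the test execution tool.
interface UiDriver {
    void enterText (String fieldId, String text);
    void click (String buttonId);
}

// The only artefact that knows the identifiers on the login screen.
final class LoginActions {

    private final UiDriver driver;

    LoginActions (final UiDriver driver) {
        this.driver = driver;
    }

    void login (final String userName, final String password) {
        driver.enterText ("userNameField", userName);
        driver.enterText ("passwordField", password);
        driver.click ("loginButton");
    }
}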

5.2 Tester Layer

If all of the testers are technical, such as developers who are doing Test-Driven Development or unit testing, then this layer is not as critical. The Tester layer of abstraction is needed when system testers or user acceptance testers want to use test automation, but do not want to become technical, i.e. programmers.

In order to achieve this, the non-technical testers must be able both to write tests (that can then be run automatically) and also to run tests, i.e. to “kick off” a set of automated tests.

If the testware architecture uses a keyword-driven approach [1,4,5,6], the testers can write tests using keywords that are related to the business or domain knowledge they are familiar with. Yes, they do have to follow the correct syntax for the keywords, but tools can make this relatively easy, for example by providing a drop-down list of valid keywords and checking the syntax of the parameters entered for the keywords. The keywords are implemented (i.e. programmed) by test automators, using the scripting language of the tool or any other programming language that they know and that is appropriate. The testers are not involved in the implementation of the keywords, but they are able to use them to write tests.

The testers also need to be able to select a set of tests to be run automatically. This can be implemented by the test automators to make it easy for the testers to kick off a set of tests, for example by providing options in a user-friendly interface to the automation.
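To illustrate the division of labour that a keyword-driven approach creates, here is a minimal sketch in Java; it is not the API of any particular tool, and the Keyword and KeywordRegistry names are invented for the example. The test automator implements and registers the keywords; the tester only writes keyword lines such as 'open account' followed by its parameters.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A keyword implementation receives the parameters that the tester supplied.
interface Keyword {
    void execute (List<String> parameters);
}

// Illustrative registry: the automator registers keyword implementations here;
// a thin runner then maps each line a tester writes onto the matching keyword.
final class KeywordRegistry {

    private final Map<String, Keyword> keywords = new HashMap<> ();

    void register (final String name, final Keyword implementation) {
        keywords.put (name, implementation);
    }

    void run (final String name, final List<String> parameters) {
        final Keyword keyword = keywords.get (name);
        if (keyword == null) {
            throw new IllegalArgumentException ("unknown keyword: " + name);
        }
        keyword.execute (parameters);
    }
}

// Example registration by a test automator (the body would drive the SUT):
//   registry.register ("open account", parameters -> { /* call the SUT here */ });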

The testers also need to receive and understand the results of the automated tests, and the way in which this information is communicated to them is also designed by the test automator.

This separation of the tester from the automation is needed for the automation to grow within an organization and to give long-lasting benefits and widespread acceptance.

6. SUMMARY AND CONCLUSION

Test automation does need technical skill – for those who are closest to the tool itself. The skills of the tester and the skills of the test automator may be found in the same person, but it may work better to have different people performing the two roles. The test automator's role is critical in establishing a modular and well-structured testware architecture, separating the tool from the testware, and providing a tester-friendly interface to the testware for non-technical testers.

Not every tester can or should become a test automator. Many non-technical people are very good testers; they should be able to use test automation without needing technical skills. Getting to this point, however, does require good technical support, but that support does not have to be provided by the tester.

7. REFERENCES

[1] Buwalda, H., Janssen, D. and Pinkster, I. 2002. Integrated Test Design and Automation. Addison Wesley/Pearson Education, London.

[2] Crispin, L. 2010. Zero to 100% Regression Test Automation in One Year: an Agile Approach to Automation. In Graham, D. and Fewster, M. Experiences of Test Automation. [Publisher not yet determined]

[3] Dustin, E., Garrett, T. and Gauf, B. 2009. Implementing Automated Software Testing. Addison Wesley/Pearson Education, Boston, MA.

[4] Fewster, M. and Graham, D. 1999. Software Test Automation. Addison Wesley/Pearson Education, ACM Press, NY.

[5] Gijsen, M. 2009. Effective Automated Testing with a DSTL [Domain Specific Test Language]. Paper from the author and http://www.linkedin.com/ppl/webprofile?action=ctu&id=5550465&pvs=pp&authToken=7sp6&authType=name&trk=ppro_getintr&lnk=cnt_dir

[6] Graham, D. and Fewster, M. 2012. Experiences of Test Automation. Addison Wesley/Pearson Education, Boston, MA.

[7] Hoffman, D. and Strooper, P. 1995. Software Design, Automated Testing, and Maintenance. International Thompson Computer Press, Boston, MA.

[8] Kaner, C., Falk, J. and Nguyen, H. Q. 1993. Testing Computer Software. Van Nostrand Reinhold, NY.

[9] Mosley, D. J. and Posey, B. A. 2002. Just Enough Software Test Automation. Yourdon Press/Pearson Education, Upper Saddle River, NJ.

[10] Siteur, M. M. 2005. Automate Your Testing! Sdu Uitgevers bv, Den Haag.

[11] Stottlemyer, D. 2001. Automated Web Testing Toolkit. Wiley, NY.

[12] [author unknown] 2002. Staffing Your Test Automation Team. Mosaic Inc, Chicago, IL. www.mosaicinc.com/mosaicinc/successful_test.htm