
Software Testing Strategies

Cliff Andrews & Iain McCowatt

Introduction

• Welcome
• Disclaimer:
– The views expressed during this seminar are not intended to be authoritative
– Testing is context dependent
– What works for us may not work for you
• Slides, citations and jumping off points to other literature
– See notes pages for citations & links
• Approach for the evening
• Group introductions


Introduction: What are the OBJECTIVES of Testing?

• The purpose of testing is to obtain and communicate information

• Such information might include1:
– Information about important bugs
– Information about readiness for release
– Information about compliance to requirements or specifications
– Information about compliance to regulations or standards


Introduction: What are the CHALLENGES of testing?

• Testing is fundamentally challenging:
– We cannot test everything
– We cannot find all the bugs
– Time and resources are limited


Introduction: What is a TEST STRATEGY and how can it HELP?

• A test strategy is a set of organising principles for achieving the objectives of testing

• A test strategy can help you decide:
– what to test
– how aggressively to test
– what order to test in
– what types of bugs to test for
– when to stop


Introduction: Common Strategies

There are many types of Test Strategy1:

• Analytical:
– Requirements based
– Risk based

• Model Based:
– Scenario based

• Methodical:
– Function Based
– Standards Based

• Dynamic:
– Defect Taxonomy
– Error Guessing
– Attacks
– Exploratory

• Process Based:
– Historical
– Lifecycle

• Regression:
– Continuous Integration

There is no “one size fits all” strategy; each has strengths and weaknesses.


Introduction: Tonight’s Focus

• Tonight we will look at some of these strategies in more detail:
– Requirements based
– Risk based
– Dynamic strategies
– Blended strategies


Requirements Based Testing


Requirements Based Testing Overview (What are requirements?)

• A requirement is a statement that identifies a capability or function that is needed by a system in order to satisfy its customer’s needs1

– A functional requirement defines what the system must do: its inputs and expected outputs
– A non-functional requirement describes how well the system performs: its usability, performance, reliability, etc.

• External: Driven by customer needs; describes what the system must do, but not how it should be implemented

• Internal: Driven by the company; could be determined by processes, resources, engineering standards, etc.

Requirements Based Testing Overview (Requirements Testing)

• What is Requirements based testing?
– An analytical strategy where test cases are derived from defined requirements
– Complemented by design docs, use cases, database schemas, etc.

• Validation vs. verification of requirements
– Verification: Does the software meet the requirements?
– Validation: Are the requirements correct? Is the need being fulfilled?

• Testers play an important role in requirement review & validation


Requirements Based Testing Mechanics: Test Design

• Test design should start early in the cycle
– Provides additional requirements validation
– Provides enough time to learn the system and develop tests
– The earlier issues are discovered, the cheaper and easier they are to fix1, 2, 3

• Test Design
– Tests can directly reflect the requirement
– Tests may not explicitly reflect the requirement
– Test ‘around’ the requirements


Requirements Based Testing Mechanics: Coverage

• Every requirement must be tested*
– Does not require a one-to-one relationship
– Multiple requirements can be covered in a single test
– Some requirements may require multiple tests
– Track coverage with a requirements-test case matrix (see the sketch below)
– Proves a requirement is met or exceeded, but not to what extent

• *Untestable requirements
– Can’t test requirements stating ‘shall never’, ‘shall always’
– Requirements that do not evaluate to a boolean, i.e. requirements that cannot be stated using conditional logic (if/then/else)
– Those we cannot test directly
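To make the requirements-to-test-case matrix concrete, here is a minimal Python sketch; the requirement IDs, test case names, and mapping are invented for illustration.

```python
# Minimal, illustrative requirements-to-test-case traceability matrix.
# The requirement IDs and test case names below are invented for this example.

requirements = ["REQ-001", "REQ-002", "REQ-003", "REQ-004"]

# One test may cover several requirements; one requirement may need several tests.
coverage = {
    "TC-login-valid":     ["REQ-001"],
    "TC-login-invalid":   ["REQ-001", "REQ-002"],
    "TC-password-expiry": ["REQ-003"],
}

covered = {req for reqs in coverage.values() for req in reqs}
uncovered = [req for req in requirements if req not in covered]

print(f"Coverage: {len(covered)}/{len(requirements)} requirements")
print("Not yet covered:", uncovered)  # -> ['REQ-004']
```

Such a matrix shows which requirements lack tests, but, as noted above, not how thoroughly each requirement is exercised.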


Requirements Based Testing Mechanics: Techniques

• How to identify test cases
– Read the requirement and write a test case
– Pull attributes from requirements and develop tests
– Apply other test design techniques; for example, requirements expressed with conditional logic lend themselves to decision tables / cause-effect graphs1 (a sketch follows below)
– Consider the flow of the system, robustness, performance, stability
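As an illustration of turning a decision table into tests, here is a minimal sketch using pytest’s parametrize (assuming pytest is available); the discount rule, thresholds, and values are invented for the example.

```python
# Illustrative decision table driving test cases; the rule and values are invented.
# Conditions: is the buyer a member? is the order total >= 100?  Action: discount %.
import pytest

def discount(is_member: bool, order_total: float) -> int:
    """Hypothetical rule under test: members get 10%, plus 5% more on orders >= 100."""
    pct = 0
    if is_member:
        pct += 10
        if order_total >= 100:
            pct += 5
    return pct

# Each row of the decision table becomes one parametrized test case.
@pytest.mark.parametrize(
    "is_member, order_total, expected_pct",
    [
        (False,  50,  0),  # non-member, small order
        (False, 150,  0),  # non-member, large order
        (True,   50, 10),  # member, small order
        (True,  150, 15),  # member, large order
    ],
)
def test_discount_decision_table(is_member, order_total, expected_pct):
    assert discount(is_member, order_total) == expected_pct
```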


Requirements Based Testing Mechanics: No Requirements Document?

What can you do if there is no formal requirements document?
• Use a different strategy
• But what if you don’t want to use those?
• Look for something similar to requirements:
– Models, for example a call flow for an automated phone system
– Design documentation and user guides
– The behaviour of existing software, e.g. a new GUI that must behave similarly to the current interface
– For a GUI, is there a style guide?
– Standards
– Other resources: a BA or the actual customer


Requirements Based Testing Mechanics: Heuristics

• Heuristics:
– Higher priority requirements need more focus
– Make use of design documentation and user manuals
– Customer habits can identify the areas used the most, and hence which need heavier testing
– Consider historical risks and bugs


Requirements Based Testing: Strengths and Weaknesses

• Strengths
– Provides a definitive understanding of what the system must do
– Prioritized requirements guide test case prioritization
– Ensures the system meets its requirements, and thus provides value1
– Metrics: pass/failure rates are more reflective of system quality
– Definitive end to testing

• Weaknesses
– If the requirement is wrong, the software is wrong even if the test passes
– Does not easily accommodate vague or ambiguous requirements1
– May not cover non-functional characteristics, which are harder to specify
– Areas of high risk are not explicitly identified


Risk Based Testing


Risk Based Testing Overview

• Testing is generally concerned with finding problems1
• A product risk is an issue which may exist in the product under test
• Risk based testing uses product risks to focus test efforts


Risk Based Testing: What is Risk?

• Risk is a function of likelihood and impact

• Many factors affect likelihood, including1:
– Complexity
– Change
– Constraints

• Factors affecting impact include1:
– Consequence of failure: safety, financial, image, liability
– Civil and criminal liability issues
– Frequency of use
– Availability of workarounds
– Integration & dependencies
– Organisational factors


Risk Based Testing Mechanics: Process

(Diagram: the risk process cycle of Identification, Analysis and Control)


Risk Based Testing Mechanics: Risk Identification

• Process:
– Brainstorm potential risks

• Key Decisions1:
– Level of detail
– Selection of stakeholders

• Heuristics (one way to apply them is sketched below):
– Quality characteristics2
– Bug taxonomies3
– Requirements
– Lists of functions
– Historical bugs
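One simple way to use such heuristics is to cross a list of functions with a list of quality characteristics and ask what could go wrong for each pair. The sketch below is illustrative only; the functions and characteristics are invented.

```python
# Illustrative risk-identification prompts: cross functions with quality
# characteristics and ask "what could go wrong?" for each pair.
# The function list and characteristics below are invented examples.
from itertools import product

functions = ["login", "checkout", "report export"]
characteristics = ["reliability", "performance", "usability", "security"]

for func, quality in product(functions, characteristics):
    print(f"What could go wrong with {func} with respect to {quality}?")
```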


Risk Based Testing Mechanics: Risk Analysis

• Process:
– Assess each risk for likelihood and impact
– Determine the appropriate control: Mitigate / Transfer / Ignore

• Key Decisions1:
– Quantitative vs. qualitative
– Informal vs. formal
– Selection of stakeholders
– Formula for risk priority (a sketch follows below)

• Heuristics:
– Complexity measures
– Historical defect densities
– Support costs
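One common choice of formula, though by no means the only one, is simply likelihood multiplied by impact on small ordinal scales. The sketch below uses invented risks and scores to show how such a formula ranks risks.

```python
# Illustrative risk analysis: priority = likelihood x impact on 1-5 scales.
# The risks and scores below are invented; real scores come from stakeholders.

risks = [
    # (risk description,              likelihood, impact)
    ("Payment gateway timeout",        4,          5),
    ("Password reset email not sent",  2,          4),
    ("Report layout misaligned",       3,          2),
]

# Rank risks by priority; higher scores get tested first and more aggressively.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for description, likelihood, impact in ranked:
    print(f"{likelihood * impact:>2}  {description}")
```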


Risk Based Testing Mechanics: Risk Control

• What to test:
– Determine test scope based on the risks to “mitigate”

• How aggressively to test:
– Determine intensity based on risk priority

• What order to test in:
– Sequence test design and execution based on risk priority

• What types of bugs to test for:
– Design tests based on the types of risks identified

• When to stop:
– Execute tests, reporting mitigated vs. residual risk (see the sketch below)
– Stop when stakeholders are comfortable with the balance of mitigated vs. residual risk
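To illustrate the kind of reporting that can inform the stopping decision, here is a minimal sketch; the risks, scores, and mitigation status are invented and continue the earlier example.

```python
# Illustrative "when to stop" report: mitigated vs. residual risk.
# Scores are invented likelihood x impact values; 'mitigated' means the
# tests covering that risk have been run and passed.
risks = [
    # (risk description,              score, mitigated?)
    ("Payment gateway timeout",        20,    True),
    ("Password reset email not sent",   8,    False),
    ("Report layout misaligned",        6,    True),
]

total = sum(score for _, score, _ in risks)
mitigated = sum(score for _, score, done in risks if done)

print(f"Mitigated risk: {mitigated}/{total} ({100 * mitigated // total}%)")
print("Residual risks:", [name for name, _, done in risks if not done])
```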


Risk Based Testing: Strengths & Weaknesses

• Strengths
– Actively engages stakeholders in the testing process
– Informs the entire test process, from planning, through design, to the release decision
– Can function in the absence of a test basis (requirements, design, models, etc.)

• Weaknesses
– Requires active stakeholder involvement
– Risk identification and analysis is limited by the knowledge and imagination of the participants
– Tends to rely on “up front” analysis and may be slow to react to change


Dynamic Strategies


Dynamic Test Strategies: Overview

• There are a variety of dynamic approaches:
– Error Guessing1
– Defect taxonomies2
– Attacks3
– Exploratory4

• Here we will focus on Exploratory testing, which can be used to blend all of the above


Dynamic Test Strategies: Exploratory Testing

• Exploratory testing is simultaneous learning, test design, and test execution1

• All testing is exploratory to some degree2: approaches sit on a continuum from more scripted to more exploratory

• Exploratory testing is NOT ad hoc testing; it can be managed and directed through:
– Session Based Test Management3 (see the sketch below)
– Tours4
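To give a flavour of how a managed exploratory session might be recorded, here is a rough, invented sketch. It is not the official session-sheet format from Session Based Test Management; the fields and values are illustrative only.

```python
# Illustrative only: a minimal record of a time-boxed exploratory session.
# This is NOT the official SBTM session-sheet format; fields and values are invented.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExploratorySession:
    charter: str                   # the mission for the session
    tester: str
    duration_minutes: int = 90     # a typical time-box
    test_notes: List[str] = field(default_factory=list)
    bugs: List[str] = field(default_factory=list)
    issues: List[str] = field(default_factory=list)  # questions, blockers

session = ExploratorySession(
    charter="Explore the password reset flow for error handling",
    tester="Alex",
)
session.test_notes.append("Tried expired reset links; behaviour unclear")
session.bugs.append("Reset email sent twice when submit is double-clicked")

print(f"{session.charter}: {len(session.bugs)} bug(s), {len(session.issues)} issue(s)")
```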


Dynamic Test Strategies: Mechanics

• What to test:
– Targeting heuristics1

• How aggressively to test:
– Intensity heuristics2

• What order to test in:
– Early coverage heuristics3

• What types of bugs to test for:
– Attacks4
– Bug Taxonomies5
– Test heuristics6

• When to stop:
– Stopping heuristics7


Dynamic Testing: Strengths & Weaknesses

• Strengths
– High yield in terms of finding important bugs fast; excellent under time pressure
– Maximizes time spent hands-on testing, at the expense of planning and documentation
– Can be executed with little or no preparation
– Can react rapidly to changes or emerging risks
– Can function in the absence of a test basis (requirements, design, models, etc.)

• Weaknesses
– Requires experienced testers
– Light in terms of documentation and coverage
– Low repeatability


Blended Strategies


Blended Strategies: Overview

• Every test strategy has its strengths and weaknesses
• Context is the key determinant of strategy selection
• Blending strategies enables you to mitigate the weakness of one with the strength of another
• For example…


Blended Strategies: Example

(Diagram showing the three blended strategies: Risk, Automation and Exploratory)


Blended Strategies: Example

Risk is the motivating force behind our testing. It helps determine:
– What to test
– When to test
– How aggressively to test
– When to stop testing



Blended Strategies: Example

Automated regression testing:
– Helps provide rapid feedback on changes
– Is a safety net which lets us change more, and later
– Frees our testers from repetitive regression testing and allows them to perform…



Blended Strategies: Example

Exploratory testing:
– A creative yet structured investigation into potential issues
– Sufficiently dynamic to respond to changing requirements and newly emerging risks
