1
Copyright © 1996-2003, Satisfice, Inc.
V1.6.1
James Bach, Satisfice, Inc.
www.satisfice.com
(540)631-0600
2
Copyright Notice
These slides are distributed under the Creative Commons License.
In brief summary, you may make and distribute copies of these slides so long as you give the original author credit and, if you alter, transform or build upon this work, you distribute the resulting work only under a license identical to this one.
For the rest of the details of the license, see http://creativecommons.org/licenses/by-sa/2.0/legalcode.
3
Acknowledgements
Some of this material was developed in collaboration with Dr. Cem Kaner, of the Florida Institute of Technology.
Many of the ideas in this presentation were inspired by or contributed by other colleagues including Bret Pettichord, Brian Marick, Doug Hoffman, Dave Gelperin, Elisabeth Hendrickson, and Noel Nyman.
This class is under continuous development. Many ideas were improved or contributed by students in earlier versions of the class since 1996.
4
Assumptions
You test software. You have at least some control over the design of your tests and some time to create new tests. One of your goals is to find important bugs fast. You test things under conditions of uncertainty and time pressure. You have control over how you think and what you think about. You want to get very good at software testing.
5
Primary Goal of this Class
To teach you how to test a product when you have to test it right now, under conditions of uncertainty, in a way that stands up to scrutiny.
6
Background
7
Your Moves: Rapid Testing Cycle

A cycle running from START to STOP:
- Focus on what needs doing
- Do a burst of testing
- Make sense of your status
- Compare status against mission
- Report
8
What About “Slow” Testing?
Within all testing, rigorous or thorough testing calls for extensive automation, extensive preparation, super testability, and super skill. Management likes to talk about this… but they don’t fund it. Rapid testing you can do, no matter what.
9
What is quality? What is a bug?
Quality is value to some person.
A bug is anything that threatens the value of the product.
These definitions are designed to be inclusive. Inclusive definitions minimize the chance that you will inadvertently overlook an important problem.
11
A Test (or Test Suite) is Like a Question you Ask the Product
A tester’s questions seek potentially valuable information. To some degree, good tests have these attributes:
Power. When a problem exists, the test will reveal it.
Validity. When the test reveals a problem, it is a genuine problem.
Value. It reveals things your clients want to know about the product or project.
Pop. (short for Karl Popper) It reveals things about our basic or critical assumptions.
Coverage. It exercises the product in some way.
Performability. It can be performed as designed; repeated as needed.
Accountability. You can explain, justify, and prove you ran it.
Cost. This includes time and effort, as well as direct costs.
Opportunity Cost. Performing it may prevent you from doing other tests.
12
Contrasting Approaches
In scripted testing, tests are first designed and recorded. Then they may be executed at some later time or by a different tester.
In exploratory testing, tests are designed and executed at the same time, and they often are not recorded.
13
Contrasting Approaches
Scripted testing emphasizes accountability and decidability.
Exploratory testing emphasizes adaptability and learning.
14
Exploratory Testing Defined
Exploratory testing is simultaneous learning, test design, and test execution.

Testing falls on a continuum from pure scripted to freestyle exploratory, passing through vague scripts, fragmentary test cases, charters, and roles along the way.

When I say “exploratory testing” and don’t qualify it, I mean anything on the exploratory side of this continuum.
15
ET Done Well is a Structured Process
Exploratory testing, as I teach it, is a structured process conducted by a skilled tester, or by lesser skilled testers or users working under reasonable supervision.
The structure of ET comes from:
- Test design heuristics
- Chartering
- Time boxing
- Perceived product risks
- The nature of specific tests
- The structure of the product being tested
- The process of learning the product
- Development activities
- Constraints and resources afforded by the project
- The skills, talents, and interests of the tester
- The overall mission of testing

In other words, it’s not “random”, but reasoned.
16
ET is an Adaptive Process
Exploratory testing decentralizes the testing problem. Instead of trying to solve it:
- only before test execution begins,
- by investing in expensive test documentation that tends to reduce the total number of tests that can be created,
- only via a designer who is not necessarily the tester,
- while trying to eliminate the variations among testers,
- completely, and all at once,

it is solved:
- over the course of the project,
- by minimizing the need for expensive test documentation so that more tests and more complex tests can be created with the same effort,
- via testers who may also be test designers,
- by taking maximum advantage of variations among testers,
- incrementally and cyclically.
17
Exploratory Forks
New test ideas occur continually during an ET session, each one a potential fork in your testing path.
18
Lateral Thinking
Let yourself be distracted… ’cause you never know what you’ll find… but periodically take stock of your status against your mission.
19
Exploratory Testing Tasks
Exploratory testing interleaves three tasks, Learning, Designing Tests, and Executing Tests, across three concerns: the Product (coverage), Techniques, and Quality (oracles).

Learning:
- Product: Discover the elements of the product.
- Techniques: Discover test design techniques that can be used.
- Quality: Discover how the product should work.

Designing Tests:
- Product: Decide which elements to test.
- Techniques: Select & apply test design techniques.
- Quality: Speculate about possible quality problems.

Executing Tests:
- Product: Configure & operate the product.
- Quality: Observe product behavior. Evaluate behavior against expectations.

Outputs: Testing notes, Tests, Problems Found.
20
Taking Notes
- Test Coverage Outline/Matrix
- Oracle Notes
- Risk/Strategy List
- Test Execution Log
- Issues, Questions & Anomalies

“It would be easier to test if you changed/added…” “How does … work?” “Is this important to test? How should I test it?” “I saw something strange…”
21
KEY IDEA
22
Models
A Model is…
- A map of a territory
- A simplified perspective
- A relationship of ideas
- An incomplete representation of reality
- A diagram, list, outline, matrix…

No good test design has ever been done without models.
The trick is to become aware of how you model the product, and learn different ways of modeling.
23
The Universal Test Procedure
“Try it and see if it works.”

“Try it” means: Learn about it, Model it, Speculate about it, Configure it, Operate it (this is where Models and Coverage come in).

“See if it works” means: Know what to look for, See what’s there, Understand the requirements, Identify problems, Distinguish bad problems from not-so-bad problems (this is where Oracles come in).
24
All Product Testing is Something Like This
Project Environment
Product Elements
Quality Criteria
Test Techniques
Perceived Quality
25
Seven Big Problems of Testing
Logistics Problem
Coverage Problem
Oracle Problem
Reporting Problem
Stopping Problem
Pesticide Problem
Agency Problem
26
Coverage
Product coverage is the proportion of the product that has been tested.

There are as many kinds of coverage as there are ways to model the product:
- Structural
- Functional
- Data
- Platform
- Operations
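Because coverage is always relative to a model, the same testing effort yields a different coverage number for each model you consult. A minimal sketch of this idea (my illustration, not part of the original material; the element names are invented):

```python
def coverage_proportion(model_elements, tested_elements):
    """Coverage relative to one model: the fraction of the model's
    elements exercised by at least one test."""
    model = set(model_elements)
    return len(model & set(tested_elements)) / len(model)

# Two different models of the same (invented) product give two
# different coverage numbers for the same set of tests run:
functions = {"open", "save", "print", "export"}   # functional model
platforms = {"win2000", "winxp"}                  # platform model

functional_coverage = coverage_proportion(functions, {"open", "save"})
platform_coverage = coverage_proportion(platforms, {"win2000"})
```

Reporting “coverage” without naming the model behind the number is therefore meaningless; the sketch makes the model an explicit argument.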
27
Sometimes your coverage is disputed…
“No user would do that.”
“No user I can think of, who I like, would do that on purpose.”
Who aren’t you thinking of? Who don’t you like who might really use this product?
What might good users do by accident?
28
Useful Oracle Heuristics
Consistent with History: Present function behavior is consistent with past behavior.
Consistent with our Image: Function behavior is consistent with an image that the organization wants to project.
Consistent with Comparable Products: Function behavior is consistent with that of similar functions in comparable products.
Consistent with Claims: Function behavior is consistent with what people say it’s supposed to be.
Consistent with User’s Expectations: Function behavior is consistent with what we think users want.
Consistent within Product: Function behavior is consistent with behavior of comparable functions or functional patterns within the product.
Consistent with Purpose: Function behavior is consistent with apparent purpose.
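The first heuristic, Consistent with History, is the easiest to partially automate: record a baseline of past behavior, then flag any divergence for human judgment. A hedged sketch (the file layout and function name are my own assumptions, not part of this material):

```python
import json
from pathlib import Path

BASELINE_DIR = Path("baselines")  # hypothetical location for recorded behavior

def check_consistent_with_history(function_name, current_output):
    """Oracle sketch for the 'Consistent with History' heuristic.

    A mismatch is not automatically a bug; it flags that past and present
    behavior differ, so a human should judge whether the change was
    intentional.
    """
    baseline_file = BASELINE_DIR / f"{function_name}.json"
    if not baseline_file.exists():
        BASELINE_DIR.mkdir(exist_ok=True)  # first run: record a baseline
        baseline_file.write_text(json.dumps(current_output))
        return "baseline recorded"
    past_output = json.loads(baseline_file.read_text())
    if past_output == current_output:
        return "consistent"
    return f"INCONSISTENT: was {past_output!r}, now {current_output!r}"
```

Note the design choice: the oracle never decides pass/fail on its own, because the heuristics above are heuristics, fallible and context-dependent, not rules.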
29
Rapid, Frequent Feedback to Clients
A test cycle begins when you receive a build:
- Sanity Check: is it testable?
- Fix Verifications: were they fixed?
- New Stuff: is it functional?
- Common and Critical Tests
- Complex Tests
- General Regression Tests
30
Risk Focus: Common and Critical Cases
- Core functions: the critical and the popular.
- Capabilities: can the functions work at all?
- Common situations: popular data and pathways.
- Common threats: likely stress and error situations.
- User impact: failures that would do a lot of damage.
- Most wanted: problems of special interest to someone else on the team.
31
Rapid Bug Investigation
Identification:
- Notice a problem.
- Recall what you were doing just prior to the problem.
- Examine symptoms of the problem w/o disturbing system state.
- Consider possibility of tester error.

Investigation:
- How can the problem be reproduced?
- What are the symptoms of the problem?
- How severe could the problem be?
- What might be causing the problem?
- What might be a workaround?

Reality Check:
- Do we know enough about the problem to report it?
- Is it important to investigate this problem right now?
- Is this problem, or any variant of it, already known?
- How do we know this is really a problem?
- Is there someone else who can help us?
32
KEY IDEA
33
Test Strategy
Strategy: “The set of ideas that guide your test design.”
Logistics: “The set of ideas that guide your application of resources to fulfilling the test strategy.”
Plan: “The set of ideas that guide your test project.”
A good test strategy is:
- Product-Specific
- Risk-focused
- Diversified
- Practical
34
Test Strategy
Test Approach and Test Architecture are other terms commonly used to describe what I’m calling test strategy.
Example of a poorly stated (and probably poorly conceived) test strategy: “We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification.”
35
Test Strategy
Not to be confused with test logistics, which involve the details of bringing resources to bear on the test strategy at the right time and place.
You don’t have to know the entire strategy in advance. The strategy should change as you learn more about the product and its problems.
36
One way to make a strategy…
1. Learn the product.
2. Think of important potential problems.
3. Think of ways to test that will cover the product and look for those important problems.
4. Make sure you are taking advantage of resources.
5. Make sure that your strategy is reasonably practical.
37
Test Strategy Heuristic: Diverse Half-Measures
There is no single technique that finds all bugs. We can’t do any technique perfectly. We can’t do all conceivable techniques.

Use “diverse half-measures” -- lots of different points of view, approaches, techniques, even if no one strategy is performed completely.
38
Strategy Heuristic: Function/Data Square

A square with Functions on one axis and Data on the other, dividing the space into four quadrants of testing: smoke testing, function testing, reliability testing, and risk testing.
39
Test Techniques
A test technique is a recipe for performing these tasks in a way that will reveal something worth reporting:
- Analyze the situation.
- Model the test space.
- Select what to cover.
- Define your oracles.
- Configure the test system.
- Operate the test system.
- Observe the test system.
- Evaluate the test results.
40
Dynamic Quality Paradigm
Imagine a quality scale running from Awful to Perfect, with a floating “good enough quality bar” across it.

- Item A sits above the bar, in the region of unnecessary quality: further improvement would not be a good use of resources.
- Item B sits below the bar, in the region of unacceptable quality: further improvement is necessary.

It’s more important to work on Item B.
41
A Heuristic for Good Enough
1. X has sufficient benefits.
2. X has no critical problems.
3. Benefits of X sufficiently outweigh problems.
4. In the present situation, and all things considered, improving X would be more harmful than helpful.
All conditions must apply.
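Because all four conditions must apply, the heuristic is a strict conjunction: failing any one condition means “not good enough”. A toy encoding (field names are mine, chosen for illustration):

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    sufficient_benefits: bool         # condition 1: X has sufficient benefits
    no_critical_problems: bool        # condition 2: X has no critical problems
    benefits_outweigh_problems: bool  # condition 3
    improving_would_harm_more: bool   # condition 4: all things considered

def good_enough(a: Assessment) -> bool:
    # All four conditions must apply; any single failure means not good enough.
    return (a.sufficient_benefits
            and a.no_critical_problems
            and a.benefits_outweigh_problems
            and a.improving_would_harm_more)
```

The encoding makes the asymmetry visible: there are four ways to fail and only one way to pass, which is why the judgment must be revisited as the situation changes.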
42
Good Enough...
…with what level of confidence?
…to meet ethical obligations?
…in what time frame?
…compared to what?
…for what purpose?
…or else what?
…for whom?

Perspective is Everything.
43
MISSION:The most important part
- Find important problems
- Assess quality
- Certify to standard
- Fulfill process mandates
- Satisfy stakeholders
- Assure accountability
- Advise about QA
- Advise about testing
- Advise about quality
- Maximize efficiency
- Minimize time
- Minimize cost

The quality of testing depends on which of these possible missions matter and how they relate.
Many debates about the goodness of testing are really debates over missions and givens.
44
Testability
- Controllability
- Observability
- Availability
- Simplicity
- Stability
- Information

Log files! Scriptable interface!
45
It Boils Down To…
YOU: Skills, equipment, experience, attitude
THE BALL: The product, testing tasks, bugs
YOUR TEAM: Coordination, roles, support
THE GAME: Risks, rewards, project environment, corporate environment, your mission as a tester
YOUR MOVES: How you spend your attention and energy to help your team win the game.
46
Rapid Testing
- Develop your scientific mind.
- Use exploratory testing.
- Know your coverage and oracles.
- Run crisp test cycles that focus first on areas of risk.
- Use a diversified test strategy that serves the mission.
- Assure that your testing fits the logistics of the project.
47
Exploratory Process
48
Introducing the Test Session
1) Charter
2) Time Box
3) Reviewable Result
4) Debriefing
49
Charter: A clear mission for the session
A charter may suggest what should be tested, how it should be tested, and what problems to look for.
A charter is not meant to be a detailed plan. General charters may be necessary at first:
“Analyze the Insert Picture function”
Specific charters provide better focus, but take more effort to design: “Test clip art insertion. Focus on stress and flow techniques, and make sure to insert into a variety of documents. We’re concerned about resource leaks or anything else that might degrade performance over time.”
50
Time Box: Focused test effort of fixed duration

- Brief enough for accurate reporting.
- Brief enough to allow flexible scheduling.
- Brief enough to allow course correction.
- Long enough to get solid testing done.
- Long enough for efficient debriefings.
- Beware of overly precise timing.

Short: 60 minutes (±15)
Normal: 90 minutes (±15)
Long: 120 minutes (±15)
51
Debriefing: Measurement begins with observation

The manager reviews the session sheet to assure that he understands it and that it follows the protocol.
- The tester answers any questions.
- Session metrics are checked.
- Charter may be adjusted.
- Session may be extended.
- New sessions may be chartered.
- Coaching happens.
52
Reviewable Result: A scannable session sheet

A session sheet has these sections:
- Charter (with #AREAS)
- Start Time
- Tester Name(s)
- Task Breakdown (#DURATION, #TEST DESIGN AND EXECUTION, #BUG INVESTIGATION AND REPORTING, #SESSION SETUP, #CHARTER/OPPORTUNITY)
- Data Files
- Test Notes
- Bugs (#BUG)
- Issues (#ISSUE)

Example:

CHARTER
Analyze MapMaker’s View menu functionality and report on areas of potential risk.
#AREAS
OS | Windows 2000
Menu | View
Strategy | Function Testing
Strategy | Functional Analysis

START
5/30/00 03:20 pm

TESTER
Jonathan Bach

TASK BREAKDOWN
#DURATION
short
#TEST DESIGN AND EXECUTION
65
#BUG INVESTIGATION AND REPORTING
25
#SESSION SETUP
20
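A session sheet in this tagged layout is deliberately scannable by simple tools, which is how session metrics can be collected without burdening the tester. A sketch of extracting the task-breakdown metrics (the parsing logic is my own illustration, not the actual SBTM toolkit from satisfice.com):

```python
# Hypothetical sample: a fragment of a session sheet's task breakdown,
# in the tag-then-value layout used by the example above.
SAMPLE_BREAKDOWN = """\
#DURATION
short
#TEST DESIGN AND EXECUTION
65
#BUG INVESTIGATION AND REPORTING
25
#SESSION SETUP
20
"""

def parse_breakdown(text):
    """Map each #TAG to the value on the following line.

    Numeric values (percentages of session time) become ints;
    everything else stays a string.
    """
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    metrics = {}
    for tag, value in zip(lines[::2], lines[1::2]):
        metrics[tag.lstrip("#")] = int(value) if value.isdigit() else value
    return metrics
```

Once parsed, the metrics can feed the debriefing: a manager can see at a glance how much of the session went to test design and execution versus bug investigation and setup.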
53
Coverage: Specifying coverage areas
These are text labels listed in the Charter section of the session sheet. (e.g. “insert picture”)
Coverage areas can include anything:
- areas of the product
- test configuration
- test strategies
- system configuration parameters
Use the debriefings to check the validity of the specified coverage areas.
54
Closing Concepts
55
Common Concerns About ET
Concerns:
- We have a long-life product and many versions, and we want a good corporate memory of key tests and techniques. Corporate memory is at risk because of the lack of documentation.
- The regulators would excommunicate us. The lawyers would massacre us. The auditors would reject us.
- We have specific tests that should be rerun regularly.

Replies:
- So, use a balanced approach, not purely exploratory.
- Even if you script all tests, you needn’t outlaw exploratory behavior.
- Let no regulation or formalism be an excuse for bad testing.
56
Common Concerns About ET

Concerns:
- Some tests are too complex to be kept in the tester’s head. The tester has to write stuff down or he will not do a thorough or deep job.

Replies:
- There is no inherent conflict between ET and documentation.
- There is often a conflict between writing high quality documentation and doing ET when both must be done at the same moment. But why do that?
- Automatic logging tools can solve part of the problem.
- Exploratory testing can be aided by any manner of test tool, document, or checklist. It can even be done from detailed test scripts.
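The reply about automatic logging tools can be made concrete: if the test harness records every action the tester takes, exploration leaves a trail without extra documentation effort. A minimal sketch (the decorator approach and all names are my own illustration, not a specific tool):

```python
import functools
import time

SESSION_LOG = []  # every harness action lands here automatically

def logged(action):
    """Wrap a harness action so each call is recorded with its result,
    leaving the tester free to explore without taking notes by hand."""
    @functools.wraps(action)
    def wrapper(*args, **kwargs):
        entry = {"action": action.__name__, "args": args, "time": time.time()}
        result = action(*args, **kwargs)
        entry["result"] = result
        SESSION_LOG.append(entry)
        return result
    return wrapper

@logged
def click(button):  # hypothetical harness action
    return f"clicked {button}"

click("OK")
```

The log then serves both the debriefing (what was actually covered?) and bug investigation (what did I do just before the failure?).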
57
Common Concerns About ET

Concerns:
- ET works well for expert testers, but we don’t have any.

Replies:
- Detailed test procedures do not solve that problem, they merely obscure it, like perfume on a rotten egg.
- Our goal as test managers should be to develop skilled testers so that this problem disappears, over time.
- Since ET requires test design skill in some measure, ET management must constrain the testing problem to fit the level and type of test design skill possessed by the tester.
- I constrain the testing problem by personally supervising the testers, and making use of concise documentation, NOT by using detailed test scripts. Humans make poor robots.
58
Common Concerns About ET

Concerns:
- How do I tell the difference between bluffing and exploratory testing?
- If I send a scout and he comes back without finding anything, how do I know he didn’t just go to sleep behind some tree?

Replies:
- You never know for sure -- just as you don’t know if a tester truly followed a test procedure.
- It’s about reputation and relationships. Managing testers is like managing executives, not factory workers.
- Give novice testers short leashes; better testers long leashes. An expert tester may not need a leash at all.
- Work closely with your testers, and these problems go away.
59
Challenges of High Accountability Exploratory Testing

- Architecting the system of charters (test planning)
- Making time for debriefings
- Getting the metrics right
- Creating good test notes
- Keeping the technique from dominating the testing
- Maintaining commitment to the approach

For example session sheets and metrics, see http://www.satisfice.com/sbtm