
Common System and Software Testing Pitfalls Checklist - 2014


A checklist for identifying commonly-occurring system and software testing pitfalls as described in the book Common System and Software Testing Pitfalls (2014).


Common System and Software Testing Pitfalls Checklist

The following checklist is based on the book Common System and Software Testing Pitfalls by Donald Firesmith. It has been updated with additional testing pitfalls that have been identified since the book’s manuscript was frozen for publication.

A testing pitfall is defined as:

• A human mistake that unnecessarily and unexpectedly causes testing to be:
  – Less effective at uncovering defects
  – Less efficient in terms of time and effort expended
  – More frustrating to perform
• A bad decision, an incorrect mindset, a wrong action, or a failure to act
• A failure to adequately:
  – Meet a testing challenge
  – Address a testing problem
• A way to screw up testing

This checklist is intended for use when:

• Producing test strategies/plans and related documentation
• Evaluating contractor proposals
• Evaluating test strategies/plans and related documentation (quality control)
• Evaluating the as-performed test process (quality assurance)
• Identifying test-related risks and their mitigation approaches


1 General Testing Pitfalls

The following pitfall categories are general and potentially apply to all types of testing:

1. Test Planning and Scheduling
2. Stakeholder Involvement and Commitment
3. Management
4. Staffing
5. Process
6. Pitfall-Related Pitfalls
7. Test Tools and Environments
8. Automated Testing
9. Communication
10. Testing-as-a-Service (TaaS)
11. Requirements

1.1 Test Planning and Scheduling

Testing Pitfalls | Observations and Notes

No Separate Test Planning Documentation (GEN-TPS-1) There is no separate testing-specific planning documentation, only incomplete, high-level overviews of testing in the general planning documents.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Incomplete Test Planning (GEN-TPS-2) Test planning and its associated documentation are not sufficiently complete for the current point in the system development cycle.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


Test Plans Ignored (GEN-TPS-3) The test planning documentation is ignored (that is, it has become “shelfware”) once developed and delivered.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Test-Case Documents as Test Plans (GEN-TPS-4) Test-case documents that document specific test cases are mislabeled as test plans.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Test Schedule (GEN-TPS-5) The testing schedule is inadequate to complete proper testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Testing at the End (GEN-TPS-6) All testing is performed late in the development cycle; there is little or no testing of executable models or unit or integration testing planned or performed during the early and middle stages of the development cycle.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Independent Test Schedule (GEN-TPS-7) The test schedule is developed independently of the project master schedule and the schedules of the other development activities.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

1.2 Stakeholder Involvement and Commitment

Testing Pitfalls | Observations and Notes

Wrong Testing Mindset (GEN-SIC-1) Some testers and testing stakeholders have one or more incorrect beliefs concerning testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Unrealistic Testing Expectations (GEN-SIC-2) Testing stakeholders (especially customer representatives and managers) have various unrealistic expectations with regard to testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Lack of Stakeholder Commitment to Testing (GEN-SIC-3) Stakeholder commitment to the testing effort is inadequate; sufficient resources (for example, people, time in the schedule, tools, or funding) are not allocated to the testing effort.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

1.3 Management

Testing Pitfalls | Observations and Notes

Inadequate Test Resources (GEN-MGMT-1) Management allocates inadequate resources (for example, budget, schedule, staffing, and facilities) to the testing effort.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inappropriate External Pressures (GEN-MGMT-2) Managers and others in positions of authority subject testers to inappropriate external pressures.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Test-Related Risk Management (GEN-MGMT-3) There are too few test-related risks identified in the project’s official risk repository, and those that are identified have inappropriately low probabilities, low harm severities, and low priorities.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


Inadequate Test Metrics (GEN-MGMT-4) Too few test metrics are produced, analyzed, reported, or acted upon, and some of the test metrics that are produced are inappropriate or not very useful.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inconvenient Test Results Ignored (GEN-MGMT-5) Management ignores or treats lightly inconvenient negative test results (especially those with negative ramifications for the schedule, budget, or system quality).

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Test Lessons Learned Ignored (GEN-MGMT-6) Lessons learned from testing on previous projects are ignored and not placed into practice on the current project.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

1.4 Staffing

Testing Pitfalls | Observations and Notes

Lack of Independence (GEN-STF-1) The test organization or project test team lacks adequate administrative, financial, and technical independence to enable it to withstand inappropriate pressure from development management to cut corners.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Unclear Testing Responsibilities (GEN-STF-2) The testing responsibilities are unclear and do not adequately address which organizations, teams, and people are going to be responsible for and perform the different types of testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


Developers Responsible for All Testing (GEN-STF-3) Developers are responsible for all of the developmental testing that occurs during system or software development.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Testers Responsible for All Testing (GEN-STF-4) Testers are responsible for all of the developmental testing that occurs during system or software development.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Testers Responsible for Quality (GEN-STF-5) Testers are (solely) responsible for the quality of the system or software under test.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Users Responsible for Testing (GEN-STF-6) The users are responsible for most of the testing, which occurs after the system(s) are operational.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Testing Expertise (GEN-STF-7) Some testers, developers, or other testing stakeholders have inadequate testing-related understanding, expertise, experience, or training.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Domain Expertise (GEN-STF-8) Testers do not have adequate training, experience, and expertise in the system’s application domain.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Adversarial Relationship (GEN-STF-9) A counterproductive adversarial relationship exists between the testers and either management, the developers, or both.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


1.5 Process

Testing Pitfalls | Observations and Notes

No Planned Testing Process (GEN-PRO-1) There is no as-planned testing process and all testing (if any) is both ad hoc and completely up to the whims of the individual developers and testers.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Essentially No Testing (GEN-PRO-2) Very little if any developmental testing is being performed.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Incomplete Testing (GEN-PRO-3) The testers or developers fail to adequately test certain testable behaviors, characteristics, or components of the system or software under test.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Test Process Ignored (GEN-PRO-4) The testers, developers, or managers ignore the official documented as-planned test process.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

One-Size-Fits-All Testing (GEN-PRO-5) All testing is performed the same way, to the same level of rigor, regardless of its criticality.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Sunny Day Testing Only (GEN-PRO-6) Testing is largely or totally restricted to verifying that the system does what it should, but does not verify that the system does not do what it should not do.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations
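For illustration, a minimal pytest-style sketch (the withdraw() function and bank module are hypothetical) of pairing the sunny-day case with rainy-day cases that verify the system does not do what it should not do:

    import pytest
    from bank import withdraw  # hypothetical function under test

    def test_withdraw_reduces_balance():
        # Sunny-day case: the system does what it should do.
        assert withdraw(balance=100, amount=40) == 60

    def test_withdraw_rejects_overdraft():
        # Rainy-day case: the system must not do what it should not do.
        with pytest.raises(ValueError):
            withdraw(balance=100, amount=500)

    def test_withdraw_rejects_negative_amount():
        # Invalid input must be rejected rather than silently accepted.
        with pytest.raises(ValueError):
            withdraw(balance=100, amount=-10)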

Testing and Engineering Process Not Integrated (GEN-PRO-7) The testing process is not adequately integrated into the overall system engineering process, but is rather treated as a separate specialty engineering activity with only limited interfaces with the primary engineering activities.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Test Prioritization (GEN-PRO-8) Testing is not adequately prioritized (for example, all types of testing have the same priority).

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Test-Type Confusion (GEN-PRO-9) Test cases from one type of testing are redundantly repeated as part of another type of testing, even though the testing types have quite different purposes and scopes.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Functionality Testing Overemphasized (GEN-PRO-10) There is an overemphasis on testing functionality as opposed to testing quality, data, and interface requirements and testing architectural, design, and implementation constraints.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Black-Box System Testing Overemphasized (GEN-PRO-11) There is an overemphasis on black-box system testing for requirements conformance, and there is very little white-box unit and integration testing for the architecture, design, and implementation verification.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Black-Box System Testing Underemphasized (GEN-PRO-12) There is an overemphasis on white-box unit and integration testing, and very little time is spent on black-box system testing to verify conformance to the requirements.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Test Preconditions Ignored (GEN-PRO-13) Test cases do not address preconditions such as the system’s internal mode and states as well as the state(s) of the system’s external environment.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations
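For illustration, a minimal pytest sketch (the SystemUnderTest class, its modes, and the fixture are hypothetical) in which the test's preconditions are established explicitly: the system's internal mode and the assumed state of its external environment are set up before the test step runs:

    import pytest
    from sut import SystemUnderTest  # hypothetical system under test

    @pytest.fixture
    def degraded_offline_system():
        # Precondition 1: put the system into the internal mode the test assumes.
        system = SystemUnderTest()
        system.set_mode("degraded")
        # Precondition 2: establish the assumed state of the external environment.
        system.environment.set_network("offline")
        yield system
        system.reset()  # leave a known state behind for subsequent tests

    def test_health_check_raises_alarm_when_degraded_and_offline(degraded_offline_system):
        degraded_offline_system.trigger_health_check()
        assert "NETWORK_OFFLINE" in degraded_offline_system.alarms()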

Too Immature for Testing (GEN-PRO-14) Products are delivered for testing when they are immature and not ready to be tested.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Test Data (GEN-PRO-15) The test data (including individual test data and sets of test data) lacks adequate fidelity to operational data, is incomplete, or is invalid.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Evaluations of Test Assets (GEN-PRO-16) The quality of the test assets is not adequately evaluated prior to using them.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Maintenance of Test Assets (GEN-PRO-17) Test assets are not properly maintained (that is, adequately updated and iterated) as defects are found and the object under test (OUT) is changed.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Testing as a Phase (GEN-PRO-18) Testing is treated as a phase that takes place late in a sequential (also known as waterfall) development cycle instead of as an ongoing activity that takes place continuously in an iterative, incremental, and concurrent (that is, evolutionary or agile) development cycle.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Testers Not Involved Early (GEN-PRO-19) Testers are not involved at the beginning of the project, but rather only once an implementation exists to test.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Developmental Testing During Production (GEN-PRO-20) Significant system testing is postponed until the system is already in production when fixing defects is much more difficult and expensive.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

No Operational Testing (GEN-PRO-21) Representative users are not performing any operational testing of the “completed” system under actual operational conditions.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Test Oracles Ignore Nondeterministic Behavior (GEN-PRO-22) Testers do not have any criteria for determining when a test has passed when non-deterministic behavior results in intermittent failures and faults.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations
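For illustration, a minimal sketch of one possible pass/fail criterion for nondeterministic behavior: rerun the scenario many times and require a stated success rate rather than a single pass (the measure_latency_ms() probe, the 250 ms budget, and the 95 percent threshold are all assumptions):

    from sut import measure_latency_ms  # hypothetical probe of the system under test

    def test_latency_meets_budget_in_at_least_95_percent_of_runs():
        runs, budget_ms = 100, 250
        # A single run is not a meaningful oracle when timing is nondeterministic;
        # the pass criterion is defined as a minimum success rate over many runs.
        passes = sum(1 for _ in range(runs) if measure_latency_ms() <= budget_ms)
        assert passes / runs >= 0.95, f"only {passes}/{runs} runs met the {budget_ms} ms budget"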

Testing in Quality (GEN-PRO-23) Testing stakeholders rely on testing quality into the system/software under test rather than building quality in from the beginning via all engineering and management activities.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Developers Ignore Testability (GEN-PRO-24) The system or software under test is unnecessarily difficult to test because the developers did not consider testing when designing and implementing the system or software.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Testing the BackBlob (GEN-PRO-25) Testers do not adequately deal with their increasing workload due to an ever-increasing backlog of testing work, including manual regression testing and the maintenance of automated tests.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Test Assets Not Delivered (GEN-PRO-26) The system or software under test is delivered without its associated testing assets that would enable the receiving organization(s) to test new capabilities and perform regression testing after changes.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

1.6 Pitfall-Related Pitfalls

Testing Pitfalls | Observations and Notes

Overly Ambitious Process Improvement (GEN-PRP-1) The test team is overly ambitious with regard to improving the testing process by attempting to address too many relevant testing pitfalls at once.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Pitfall Prioritization (GEN-PRP-2) Test managers do not adequately prioritize the testing pitfalls (for example by relevance, frequency, severity of negative consequences, and/or risk) when attempting to improve the testing process by better addressing testing pitfalls.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


1.7 Test Tools and Environments

Testing Pitfalls | Observations and Notes

Over-Reliance on Manual Testing (GEN-TTE-1) Testers place too much reliance on manual testing such that the majority of testing is performed manually, without adequate support of test tools or test scripts.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Over-Reliance on Testing Tools (GEN-TTE-2) Testers and other testing stakeholders place too much reliance on commercial off-the-shelf (COTS) and homegrown testing tools.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Target Platform Difficult to Access (GEN-TTE-3) The testers are not prepared to perform adequate testing when the target platform is not designed to enable access for testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Test Environments (GEN-TTE-4) There are insufficient test tools, test environments or test beds, and test laboratories or facilities, so adequate testing cannot be performed within the schedule and personnel limitations.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Poor Fidelity of Test Environments (GEN-TTE-5) The testers build and use test environments or test beds that have poor fidelity to the operational environment of the system or software under test (SUT), and this causes inconclusive or incorrect test results (false-positive and false-negative test results).

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Test Environment Quality (GEN-TTE-6) The quality of one or more test environments is inadequate due to an excessive number of defects.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Test Environments Inadequately Tested (GEN-TTE-7) Testers do not test their test environments/beds to eliminate defects that could either prevent the testing of the system or software under test or cause incorrect test results.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Test Configuration Management (GEN-TTE-8) Testing work products (for example, test cases, test scripts, test data, test tools, and test environments) are not under configuration management (CM).

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Developers Ignore Testability (GEN-TTE-9) It is unnecessarily difficult to develop automated tests because the developers do not consider testing when designing and implementing their system or software.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

1.8 Automated Testing

Testing Pitfalls | Observations and Notes

Over-Reliance on Manual Testing (GEN-AUTO-1) Testers place too much reliance on manual testing such that the majority of testing is performed manually, without adequate support of test tools or test scripts.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Automated Testing Replaces Manual Testing (GEN-AUTO-2) Managers, developers, or testers attempt to replace all manual testing with automated testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Excessive Number of Automated Tests (GEN-AUTO-3) The ratio of the amount of automated tests to the amount of deliverable software is too high.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inappropriate Distribution of Automated Tests (GEN-AUTO-4) The distribution of the amount of automated testing among the different levels of testing (such as unit testing, integration testing, system testing, and user interface testing) is inappropriate.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Automated Test Quality (GEN-AUTO-5) The automated tests have excessive numbers of defects.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Automated Tests Excessively Complex (GEN-AUTO-6) The automated tests are significantly more complex than they need to be.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Automated Tests Not Maintained (GEN-AUTO-7) The automated tests are not maintained, so they are no longer trusted or reusable.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Insufficient Resources Invested (GEN-AUTO-8) Insufficient resources are allocated to plan for, develop, and maintain automated tests.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Automation Tools Not Appropriate (GEN-AUTO-9) The developers and testers select inappropriate tools for supporting automated testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Stakeholders Ignored (GEN-AUTO-10) The developers and testers ignore the stakeholders when planning and performing automated testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

1.9 Communication

Testing Pitfalls | Observations and Notes

Inadequate Architecture and Design Documentation (GEN-COM-1) Architects and designers produce insufficient architecture or design documentation (for example, models and documents) to support white-box (structural) unit and integration testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Defect Reports (GEN-COM-2) Testers and others create defect reports (also known as bug and trouble reports) that are incomplete, contain incorrect information, or are difficult to read.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Test Documentation (GEN-COM-3) Testers create test documentation that is incomplete or contains incorrect information.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Source Documents Not Maintained (GEN-COM-4) Developers do not properly maintain the requirements specifications, architecture documents, design documents, and associated models that are needed as inputs to the development of tests.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Communication Concerning Testing (GEN-COM-5) Verbal and written communication concerning the testing among testers and other testing stakeholders is inadequate.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inconsistent Testing Terminology (GEN-COM-6) Different testers, developers, managers, and other testing stakeholders often use inconsistent and ambiguous technical jargon so that the same word has different meanings and different words have the same meaning.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

1.10 Testing-as-a-Service (TaaS)

Testing Pitfalls | Observations and Notes

Cost-Driven Provider Selection (GEN-TaaS-1) Executive or administrative management selects the TaaS provider based solely on minimizing cost.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Oversight (GEN-TaaS-2) Project management does not provide adequate oversight of the TaaS provider’s testing effort.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Lack of Outsourcing Expertise (GEN-TaaS-3) Project administrative and technical management has insufficient training, expertise, and experience in outsourcing, especially the outsourcing of testing as a service.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate TaaS Contract (GEN-TaaS-4) The contract between the development/maintenance organization and the TaaS contractor/vendor does not adequately address the project’s Key Performance Indicators (KPIs), associated Service Level Agreements (SLAs), and the specific metrics by which achievement of the SLAs will be measured.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

TaaS Improperly Chosen (GEN-TaaS-5) TaaS is selected for the outsourcing of a type of testing for which it is an inappropriate choice.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

1.11 Requirements

Testing Pitfalls | Observations and Notes

Tests as Requirements (GEN-REQ-1) Developers use black-box system- and subsystem-level tests as a replacement for the associated system and subsystem requirements.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Ambiguous Requirements (GEN-REQ-2) Testers misinterpret a great many ambiguous requirements and therefore base their testing on incorrect interpretations of these requirements.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Obsolete Requirements (GEN-REQ-3) Testers waste effort and time testing whether the object under test (OUT) correctly implements a great many obsolete requirements.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Missing Requirements (GEN-REQ-4) Testers overlook many undocumented requirements and therefore do not plan for, develop, or run the associated overlooked test cases.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Incomplete Requirements (GEN-REQ-5) Testers fail to detect that many requirements are incomplete; therefore, they develop and run correspondingly incomplete or incorrect test cases.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Incorrect Requirements (GEN-REQ-6) Testers fail to detect that many requirements are incorrect, and therefore develop and run correspondingly incorrect test cases that produce false-positive and false-negative test results.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Requirements Churn (GEN-REQ-7) Testers waste an excessive amount of time and effort developing and running test cases based on many requirements that are not sufficiently stable and that therefore change one or more times prior to delivery.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Improperly Derived Requirements (GEN-REQ-8) Testers base their testing on improperly derived requirements, resulting in missing test cases, test cases at the wrong level of abstraction, or incorrect test cases based on cross-cutting requirements that are allocated without modification to multiple architectural components.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


Verification Methods Not Adequately Specified (GEN-REQ-9) Testers (or other developers) fail to properly specify the verification method(s) for each requirement, thereby causing requirements to be verified using unnecessarily inefficient or ineffective verification method(s).

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Lack of Requirements Trace (GEN-REQ-10) The testers do not trace the requirements to individual tests or test cases, thereby making it unnecessarily difficult to determine whether the tests are inadequate or excessive.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations
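For illustration, one lightweight way to maintain such a trace is to tag every test with the requirement(s) it verifies; the sketch below assumes a hypothetical custom pytest marker named "requirement" and an invented SRS-nnn numbering scheme (the marker would also need to be registered in the pytest configuration):

    import pytest

    # Hypothetical convention: the custom "requirement" marker links each test to the
    # requirement ID(s) it verifies, so trace reports can be generated from the markers.

    @pytest.mark.requirement("SRS-101")
    def test_account_locks_after_three_failed_logins():
        ...

    @pytest.mark.requirement("SRS-101", "SRS-204")
    def test_lockout_event_written_to_audit_log():
        ...

    # A small reporting script (or plugin) can then list requirements with no linked
    # tests and tests with no linked requirement, exposing both gaps and excess.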

Titanic Effect of Deferred Requirements (GEN-REQ-11) Managers or chief engineers defer numerous requirements from previous blocks or builds to the current block or build after the resources for the current build have been allocated and largely expended.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


2 Test-Type-Specific Testing Pitfalls

The following pitfall categories are specific to a single type of testing:

1. Executable Model Testing
2. Unit Testing
3. Integration Testing
4. Specialty Testing
5. System Testing
6. User Testing
7. A/B Testing
8. Acceptance Testing
9. SoS Testing
10. Regression Testing

2.1 Executable Model Testing

Testing Pitfalls | Observations and Notes

Inadequate Executable Models (TTS-MOD-1) Either there are no executable requirements, architectural, or design models or else the models that exist are inadequate to enable associated test cases to be manually or automatically developed.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Executable Models Not Tested (TTS-MOD-2) No one (such as testers, requirements engineers, architects, or designers) is testing executable requirements, architectural, or design models to verify whether they conform to the requirements and whether they incorporate any defects.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


2.2 Unit Testing

Testing Pitfalls | Observations and Notes

Testing Does Not Drive Design and Implementation (TTS-UNT-1) Software developers and testers do not develop their tests first and then use these tests to drive development of the associated architecture, design, and implementation.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Conflict of Interest (TTS-UNT-2) Nothing is done to address the following conflict of interest that exists when developers test their own work products: Essentially, they are being asked to demonstrate that their software is defective.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Brittle Test Cases (TTS-UNT-3) Unit test cases are too brittle and unnecessarily need to be changed when the unit under test changes.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations
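For illustration, a minimal sketch contrasting a brittle assertion with a behavior-focused one (the User class stands in for a hypothetical unit under test): the brittle test couples itself to the exact repr text, so a harmless refactoring breaks it, while the robust test asserts only the required behavior:

    from dataclasses import dataclass

    @dataclass
    class User:  # stand-in for the hypothetical unit under test
        first: str
        last: str
        ident: int

        def full_name(self) -> str:
            return f"{self.first} {self.last}"

    def test_full_name_brittle():
        # Brittle: tied to the exact repr text, so renaming or reordering fields
        # breaks the test even though the required behavior is unchanged.
        assert repr(User("Ada", "Lovelace", 42)) == "User(first='Ada', last='Lovelace', ident=42)"

    def test_full_name_robust():
        # Robust: asserts only the externally required behavior.
        assert User("Ada", "Lovelace", 42).full_name() == "Ada Lovelace"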

2.3 Integration Testing

Testing Pitfalls | Observations and Notes

Integration Decreases Testability Ignored (TTS-INT-1) Testers fail to take into account that integration encapsulates the individual parts of the whole and the interactions between them, thereby making the internal parts of the integrated whole less observable and less controllable and, therefore, less testable.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Self-Testing (TTS-INT-2) Testers fail to take into account that integration encapsulates the individual parts of the whole and the interactions between them, thereby making the internal parts of the integrated whole less observable and less controllable and, therefore, less testable.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Unavailable Components (TTS-INT-3) Integration testing must be postponed due to the unavailability of (1) system hardware or software components or (2) test environment components.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

System Testing as Integration Testing (TTS-INT-4) Testers are actually performing system-level tests of system functionality when they are supposed to be performing integration testing of component interfaces and interactions.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

2.4 Specialty Testing

Testing Pitfalls | Observations and Notes

Inadequate Capacity Testing (TTS-SPC-1) Testers perform little or no capacity testing (or the capacity testing they do perform is superficial) to determine the degree to which the system or software degrades gracefully as capacity limits are approached, reached, and exceeded.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Concurrency Testing (TTS-SPC-2) Testers perform little or no concurrency testing (or the concurrency testing they do perform is superficial) to explicitly uncover the defects that cause the common types of concurrency faults and failures: deadlock, livelock, starvation, priority inversion, race conditions, inconsistent views of shared memory, and unintentional infinite loops.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations
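For illustration, a minimal sketch of a test that tries to provoke a race condition by incrementing a shared counter from several threads (the Counter class and counter module are hypothetical); lost updates caused by a data race make the final total fall short:

    import threading
    from counter import Counter  # hypothetical shared component under test

    def test_concurrent_increments_do_not_lose_updates():
        counter = Counter()
        threads = [
            threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
            for _ in range(8)
        ]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # A lost update caused by a race condition makes the total fall short.
        assert counter.value == 8 * 10_000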

Inadequate Configuration Testing (TTS-SPC-3) Testers perform little or no configuration testing to determine the degree to which the software under test functions properly when executing on different platform configurations.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Internationalization Testing (TTS-SPC-4) Testers perform little or no internationalization testing—or the internationalization testing they do perform is superficial—to determine the degree to which the system is configurable to perform appropriately in multiple countries.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Interface Standards Compliance Testing (TTS-SPC-5) Testers perform little or no testing of key interfaces to ensure that they properly comply with associated open interface standards.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Interoperability Testing (TTS-SPC-6) Testers perform little or no interoperability testing (or the interoperability testing they do perform is superficial) to determine the degree to which the system successfully interfaces and collaborates with other systems.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Performance Testing (TTS-SPC-7) Testers perform little or no performance testing (or the testing they do perform is only superficial) to determine the degree to which the system has adequate levels of the performance quality attributes: event schedulability, jitter, latency, response time, and throughput.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Reliability Testing (TTS-SPC-8) Testers perform little or no long-duration reliability testing (also known as stability testing)—or the reliability testing they do perform is superficial (for example, it is not done under operational profiles and is not based on the results of any reliability models)—to determine the degree to which the system continues to function over time without failure.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Robustness Testing (TTS-SPC-9) Testers perform little or no robustness testing, or the robustness testing they do perform is superficial (for example, it is not based on the results of any robustness models), to determine the degree to which the system exhibits adequate error, fault, failure, and environmental tolerance.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Safety Testing (TTS-SPC-10) Testers perform little or no safety testing, or the safety testing they do perform is superficial (for example, it is not based on the results of a safety or hazard analysis), to determine the degree to which the system is safe from causing or suffering accidental harm.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


Inadequate Security Testing (TTS-SPC-11) Testers perform little or no security testing—or the security testing they do perform is superficial (for example, it is not based on the results of a security or threat analysis)—to determine the degree to which the system is secure from causing or suffering malicious harm.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Usability Testing (TTS-SPC-12) Testers or usability engineers perform little or no usability testing—or the usability testing they do perform is superficial—to determine the degree to which the system’s human-machine interfaces meet the system’s requirements for usability, manpower, personnel, training, human factors engineering (HFE), and habitability.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

2.5 System Testing

Testing Pitfalls | Observations and Notes

Test Hooks Remain (TTS-SYS-1) Testers fail to remove temporary test hooks after completing testing, so they remain in the delivered or fielded system.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Lack of Test Hooks (TTS-SYS-2) Testers fail to take into account how a lack of test hooks makes it more difficult to test parts of the system hidden via information hiding.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


Inadequate End-to-End Testing (TTS-SYS-3) Testers perform inadequate system-level functional testing of a system’s end-to-end support for its missions.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

2.6 User Testing

Testing Pitfalls | Observations and Notes

Inadequate User Involvement (TTS-UT-1) Too few users representing too few of the different types of users are involved in the performance of user testing and the evaluation of its results.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Unprepared User Representatives (TTS-UT-2) The representative users performing user testing are not adequately prepared to effectively and efficiently perform user testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

User Testing Merely Repeats System Testing (TTS-UT-3) User testing is only a repetition of a subset of the existing system tests by representative users.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

User Testing is Mistaken for Acceptance Testing (TTS-UT-4) User testing, often referred to as User Acceptance Testing (UAT), is frequently confused with system acceptance testing in spite of their very different goals and descriptions.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Assume Knowledgeable and Careful User (TTS-UT-5) Testers (and developers) mistakenly assume that representative users performing user testing will be careful and as knowledgeable as they are about how the system should work.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

2.7 A/B Testing

Testing Pitfalls | Observations and Notes

Poor Key Performance Indicators (TTS-ABT-1) The key performance indicators (KPIs) of the testing do not support business or mission goals.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Misuse of Probability and Statistics (TTS-ABT-2) Probability and statistics are misused when interpreting the results of A/B testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations
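For illustration, a worked sketch of the kind of calculation that should sit behind an A/B decision: a two-sided, two-proportion z-test on invented conversion counts. Common misuses include skipping such a test, repeatedly peeking at interim results, or ignoring sample size:

    from math import sqrt
    from statistics import NormalDist

    # Invented example data: conversions / visitors for each variant.
    conv_a, n_a = 480, 10_000
    conv_b, n_b = 540, 10_000

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

    print(f"z = {z:.2f}, p-value = {p_value:.3f}")
    # A small p-value indicates statistical significance only; whether the lift
    # justifies choosing variant B is the separate business-significance question
    # raised by TTS-ABT-3 below.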

Confusing Statistical Significance for Business Significance (TTS-ABT-3) All statistically significant test results are mistakenly assumed to be sufficiently significant to justify choosing one variant over another, even if the benefits do not justify the associated costs.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Error Source(s) not Controlled (TTS-ABT-4) Various sources of error are not controlled during the testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

System Variant(s) Changed During Test (TTS-ABT-5) One or both of the variants of the system or software under test are changed during an A/B test.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations


2.8 Acceptance Testing

Testing Pitfalls | Observations and Notes

No Clear System Acceptance Criteria (TTS-AT-1) There are no clear system acceptance criteria for use during acceptance testing to determine whether the system is acceptable to the acquiring organization.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

2.9 Systems of Systems (SoS) Testing

Testing Pitfalls | Observations and Notes

Inadequate SoS Test Planning (TTS-SoS-1) Testers and SoS architects perform an inadequate amount of SoS test planning and fail to appropriately document their plans in SoS-level test planning documentation.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Unclear SoS Testing Responsibilities (TTS-SoS-2) Managers or testers fail to clearly define and document the responsibilities for performing end-to-end SoS testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Resources for SoS Testing (TTS-SoS-3) Management fails to provide adequate resources for system of systems (SoS) testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

SoS Testing Not Properly Scheduled (TTS-SoS-4) System of systems testing is not properly scheduled and coordinated with the individual systems’ testing and delivery schedules.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate SoS Requirements (TTS-SoS-5) Many SoS-level requirements are missing, are of poor quality, or are never officially approved or funded.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Support from Individual System Projects (TTS-SoS-6) Test support from individual system development or maintenance projects is inadequate to perform system of systems testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Defect Tracking Across Projects (TTS-SoS-7) Defect tracking across individual system development or maintenance projects is inadequate to support system of systems testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Finger-Pointing (TTS-SoS-8) Different system development or maintenance projects assign the responsibility for finding and fixing SoS-level defects to other projects.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

2.10 Regression Testing

Testing Pitfalls | Observations and Notes

Inadequate Regression Test Automation (TTS-REG-1) Testers and developers have automated an insufficient number of tests to enable adequate regression testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations
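For illustration, a minimal sketch of an automatable regression test: previously correct behavior, including behavior restored by an earlier defect fix, is pinned in a parameterized test so any reintroduction fails the suite (the parse_config() function, module path, and defect ID are hypothetical):

    import pytest
    from app.config import parse_config  # hypothetical module under test

    @pytest.mark.parametrize("raw,expected", [
        ("timeout=30", {"timeout": 30}),
        # Regression guard for (hypothetical) defect DR-1042: surrounding
        # whitespace once caused a silent parse failure.
        ("timeout = 30 ", {"timeout": 30}),
    ])
    def test_parse_config_regression(raw, expected):
        assert parse_config(raw) == expected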


Regression Testing Not Performed (TTS-REG-2) Testers and maintainers perform insufficient regression testing to determine if new defects have been accidentally introduced when changes are made to the system.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Scope of Regression Testing (TTS-REG-3) The scope of regression testing is insufficiently broad.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Only Low-Level Regression Tests (TTS-REG-4) Only low-level (for example, unit-level and possibly integration) regression tests are rerun, so there is no system, acceptance, or operational regression testing and no SoS regression testing.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Test Resources Not Delivered for Maintenance (TTS-REG-5) The test resources produced by the development organization are not made available to the maintenance organization to support testing new capabilities and regression testing changes.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Only Functional Regression Testing (TTS-REG-6) Testers and maintainers only perform regression testing to determine if changes introduce functionality-related defects.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations

Inadequate Retesting of Reused Software (TTS-REG-7) Developers reuse software without adequately retesting it to ensure that it continues to operate correctly as part of the current system or software.

☐ Symptoms ☐ Consequences ☐ Causes ☐ Recommendations