
WRITING AND TRACKING TEST CASES

The Goals of Test Case Planning

If software testers expect the project managers and the programmers to be more disciplined, adopt sound methods, and follow rules that make the development process run more smoothly, they should expect the same of themselves. Carefully and methodically planning test cases is a step in that direction. Doing so is very important for the following four reasons:

Organization. Even on small software projects it's possible to have many thousands of test cases. The cases may have been created by several testers over the course of several months or even years. Proper planning will organize them so that all the testers and other project team members can review and use them effectively.

Repeatability. It may be necessary over the course of a project to run the same tests several times to look for new bugs and to make sure that old ones have been fixed. Without proper planning, it would be impossible to know which test cases were last run and exactly how they were run, making it impossible to repeat the exact tests.

Tracking. You need to answer important questions over the course of a project, such as: How many test cases did you plan to run? How many did you run on the last software release? How many passed and how many failed? Were any test cases skipped? And so on. If no planning went into the test cases, it would be impossible to answer these questions.

Proof of testing. In a few high-risk industries, the software test team must prove that it did indeed run the tests that it planned to run. It could actually be illegal, and dangerous, to release software in which a few test cases were skipped. Proper test case planning and tracking provides a means for proving what was tested.

Don't confuse test case planning with the identification of test cases. Test case planning is the next step up and is similar to a programmer learning how to perform high-level design and properly document their work.

AD HOC TESTING

Ad hoc testing describes performing tests without a real plan: no test case planning and sometimes not even a high-level test plan. With ad hoc testing, a tester sits down with the software and starts banging on the keys. Some people are naturally good at this and can find bugs right away.

The problem is that ad hoc testing is not organized, it's not repeatable, it can't be tracked, and when it's complete, there's no proof that it was ever done. As a tester, you don't want code that was written in an ad hoc manner, nor do your customers want software that was tested exclusively in an ad hoc manner.

Test Case Planning Overview

Moving further away from the top-level test plan puts less emphasis on the process of creation and more on the resulting written document. The reason is that these plans are used on a daily, sometimes hourly, basis by the testers performing the testing. At the lowest level they become step-by-step instructions for executing a test, making it key that they're clear, concise, and organized; how they got that way isn't nearly as important.

Test Design

The overall project test plan is written at a very high level. It breaks the software out into specific features and testable items and assigns them to individual testers, but it doesn't specify exactly how those features will be tested. There may be a general mention of using automation or black-box or white-box testing, but the test plan doesn't get into the details of exactly where and how they will be used.

This next level of detail that defines the testing approach for individual software features is the test design specification.

IEEE 829 states that the test design specification "refines the test approach [defined in the test plan] and identifies the features to be covered by the design and its associated tests. It also identifies the test cases and test procedures, if any, required to accomplish the testing and specifies the feature pass/fail criteria."

The purpose of the test design spec is to organize and describe the testing that needs to be performed on a specific feature. It doesn't, however, give the detailed cases or the steps to execute to perform the testing. The following topics, adapted from the IEEE 829 standard, address this purpose and should be part of the test design specs that you create:

Identifier. A unique identifier that can be used to reference and locate the test design spec. The spec should also reference the overall test plan and contain pointers to any other plans or specs that it references.

Features to be tested. A description of the software feature covered by the test design spec. This section should also identify features that may be indirectly tested as a side effect of testing the primary feature. It should also list features that won't be tested, ones that might be misconstrued as being covered by this plan.

Approach. A description of the general approach that will be used to test the features. It should expand on the approach, if any, listed in the test plan, describe the technique to be used, and explain how the results will be verified.

Test case identification. A high-level description and references to the specific test cases that will be used to check the feature. It should list the selected equivalence partitions and provide references to the test cases and test procedures used to run them.

Pass/fail criteria. Describes exactly what constitutes a pass and a fail of the tested feature. What is acceptable and what is not? This may be very simple and clear: a pass is when all the test cases are run without finding a bug. It can also be fuzzy: a failure is when 10 percent or more of the test cases fail. There should be no doubt, though, about what constitutes a pass or a fail of the feature.
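To make the fuzzy criterion above concrete, here is a minimal sketch of how it could be expressed as a calculation. The function name, the 10 percent threshold default, and the example counts are illustrative assumptions, not anything prescribed by IEEE 829.

```python
# Minimal sketch: applying a "10 percent or more failures" pass/fail criterion.

def feature_passes(total_cases: int, failed_cases: int,
                   failure_threshold: float = 0.10) -> bool:
    """Return True if the feature passes: it fails only when the failure
    rate reaches the threshold (10 percent in the example above)."""
    if total_cases == 0:
        raise ValueError("no test cases were run")
    return (failed_cases / total_cases) < failure_threshold

print(feature_passes(250, 12))   # True  (4.8% failure rate, below 10%)
print(feature_passes(250, 30))   # False (12% failure rate, at or above 10%)
```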

Test Cases

IEEE 829 states that the test case specification "documents the actual values used for input along with the anticipated outputs. A test case also identifies any constraints on the test procedure resulting from use of that specific test case."

Essentially, the details of a test case should explain exactly what values or conditions will be sent to the software and what result is expected. A test case can be referenced by one or more test design specs and may reference more than one test procedure. The IEEE 829 standard also lists some other important information that should be included, such as the following:

Identifier. A unique identifier that is referenced by the test design specs and the test procedure specs.

Test item. This describes the detailed feature, code module, and so on that's being tested. It should be more specific than the features listed in the test design spec. It should also provide references to the product specifications or other design docs on which the test case is based.

Input specification. This specification lists all the inputs or conditions given to the software to execute the test case. If you're testing Calculator, this may be as simple as 1+1. If you're testing a file-based product, it would be the name of the file and a description of its contents.

Output specification. This describes the result you expect from executing the test case. Did 1+1 equal 2? Did all the contents of the file load as expected?

Environmental needs. Environmental needs are the hardware, software, test tools, facilities, staff, and so on that are necessary to run the test case.

Special procedural requirements. This section describes anything unusual that must be done to perform the test.

Intercase dependencies. If a test case depends on another test case or might be affected by another, that information should go here.
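To tie these fields together, the following sketch shows one way such a test case record could be captured in code. The class name, field names, and the Calculator example values are assumptions made up for illustration; IEEE 829 does not prescribe any particular representation.

```python
# Hypothetical sketch of an IEEE 829-style test case record.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCaseSpec:
    identifier: str                        # unique ID referenced by design/procedure specs
    test_item: str                         # detailed feature or code module under test
    input_spec: str                        # inputs or conditions given to the software
    output_spec: str                       # result expected from executing the test case
    environmental_needs: List[str] = field(default_factory=list)
    special_procedures: str = ""           # anything unusual needed to perform the test
    intercase_dependencies: List[str] = field(default_factory=list)

# Example record for the Calculator case mentioned above.
tc = TestCaseSpec(
    identifier="CALC-ADD-0001",
    test_item="Calculator: addition of two single-digit integers",
    input_spec="Enter 1 + 1 and press =",
    output_spec="Display shows 2",
)
print(tc.identifier, "->", tc.output_spec)
```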

Test Procedures

IEEE 829 states that the test procedure specification "identifies all the steps required to operate the system and exercise the specified test cases in order to implement the associated test design." The test procedure or test script spec defines the step-by-step details of exactly how to perform the test cases. Following is the information that needs to be defined:

Identifier. A unique identifier that ties the test procedure to the associated test cases and test design.

Purpose. The purpose of the procedure and reference to the test cases that it will execute.

Special requirements. Other procedures, special testing skills, or special equipment needed to run the procedure.

Procedure steps. Detailed description of how the tests are to be run:

Log. Tells how and by what method the results and observations will be recorded.

Setup. Explains how to prepare for the test.

Start. Explains the steps used to start the test.

Procedure. Describes the steps used to run the tests.

Measure. Describes how the results are to be determined, for example, with a stopwatch or visual determination.

Shut down. Explains the steps for suspending the test for unexpected reasons.

Restart. Tells the tester how to pick up the test at a certain point if there's a failure or after shutting down.

Stop. Describes the steps for an orderly halt to the test.

Wrap up. Explains how to restore the environment to its pre-test condition.

Contingencies. Explains what to do if things don't go as planned.
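As an illustration of how these procedure steps might map onto an automated test script, here is a minimal sketch. The application under test, the log file name, and every function body are assumptions invented for the example; a real procedure would drive the actual software and record its observations.

```python
# Hypothetical sketch: a test procedure whose phases mirror the steps above
# (log, setup, start, procedure, measure, shut down, wrap up, contingencies).
import logging

logging.basicConfig(filename="calc_add_0001.log", level=logging.INFO)  # Log

def setup():
    logging.info("Setup: verify the test machine and the build under test are ready")

def start():
    logging.info("Start: launch the application under test")

def procedure():
    logging.info("Procedure: enter 1 + 1 and press =")
    return 1 + 1                      # stand-in for driving the real application

def measure(result):
    expected = 2
    passed = (result == expected)     # Measure: compare observed result to expected
    logging.info("Measure: got %s, expected %s, pass=%s", result, expected, passed)
    return passed

def shut_down():
    logging.info("Shut down: close the application")

def wrap_up():
    logging.info("Wrap up: restore the environment to its pre-test condition")

if __name__ == "__main__":
    setup()
    start()
    try:
        print("PASS" if measure(procedure()) else "FAIL")
    finally:                          # Contingencies: always shut down and clean up
        shut_down()
        wrap_up()
```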

Test Case Organization and Tracking

Some sort of process needs to be in place that allows you to manage your test cases and track the results of running them. There are essentially four possible systems:

In your head. Don't even consider this one, even for the simplest projects, unless you're testing software for your own personal use and have no reason to track your testing. You just can't do it.

Paper/documents. It's possible to manage the test cases for very small projects on paper. Tables and charts of checklists have been used effectively. They're obviously a weak method for organizing and searching the data, but they do offer one very important positive: a written checklist that includes a tester's initials or signature denoting that tests were run is excellent proof in a court of law that testing was performed.

Spreadsheet. A popular and very workable method of tracking test cases is to use a spreadsheet. By keeping all the details of the test cases in one place, a spreadsheet can provide an at-a-glance view of your testing status. Spreadsheets are easy to use, relatively easy to set up, and provide good tracking and proof of testing.

Custom database. The ideal method for tracking test cases is to use a test case management tool, a database programmed specifically to handle test cases. Many commercially available applications are set up to perform just this specific task.
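As a minimal sketch of what a custom tracking database might look like, the example below uses SQLite from the Python standard library. The file name, table schema, and sample row are assumptions made up for illustration; commercial test case management tools are far more capable.

```python
# Hypothetical sketch: a tiny database for tracking test cases and their
# most recent results, enough to answer "how many passed, failed, or were skipped?"
import sqlite3

conn = sqlite3.connect("test_tracking.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS test_cases (
        identifier TEXT PRIMARY KEY,
        feature    TEXT,
        last_run   TEXT,   -- date the case was last executed
        result     TEXT    -- 'pass', 'fail', or 'skipped'
    )
""")

# Record (or update) the outcome of a test run.
conn.execute(
    "INSERT OR REPLACE INTO test_cases VALUES (?, ?, ?, ?)",
    ("CALC-ADD-0001", "Calculator addition", "2024-05-01", "pass"),
)
conn.commit()

# At-a-glance tracking summary.
for result, count in conn.execute(
        "SELECT result, COUNT(*) FROM test_cases GROUP BY result"):
    print(result, count)
conn.close()
```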

The important thing to remember is that the number of test cases can easily be in the thousands, and without a means to manage them, you and the other testers could quickly be lost in a sea of documentation. You need to know, at a glance, the answers to fundamental questions such as, "What will I be testing tomorrow, and how many test cases will I need to run?"