
14.1 Introduction

Using test tools only makes sense if it is profitable in terms of time, money, or quality. For a successful introduction of test automation there must be a lifecycle and knowledge of all the test activities, and the test process must be repeatable. Up-to-date information about available tools can be found through:

conferences and exhibitions; vendor exhibitions offering free trial versions of tools; research reports such as the CAST report (Graham et al., 1996); the internet
– www.ovum.com (Ovum reports)
– www.testing.com (Brian Marick's Testing Foundations)
– www.soft.com/Institute/HotList (Software Research Incorporated)
– most tool vendors have a website with detailed information about their (commercial) products.

14.2 Categorization of test tools

Tools are available for every phase in the test lifecycle. Figure 14.1 shows how tools can be categorized according to where they are applied in the testing lifecycle (see Chapter 6). These tools are explained in more detail in the following sections – the list presented is not meant to be exhaustive.

14.2.1 Planning and control

Most tools for the planning and control phase are based on general project management; apart from test and defect management tools, they are not developed especially for testing. The following tools support planning and control: defect management tool; test management tool; configuration management tool; scheduling and progress monitoring tool.

14.2.1.1 Defect management tool

Defects detected during the test process must be collated in an orderly way. For a small project, a simple file system with a few control procedures is sufficient. More complex projects need at least a database with the possibility of generating progress reports.
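As an illustration of what such a database needs to offer, the sketch below uses SQLite to store defects and produce a simple progress report. The schema, field names, and example defects are hypothetical and not taken from any particular tool.

```python
# Minimal sketch of a defect database with a progress report, using SQLite.
# Schema and example defects are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE defects (
        id        INTEGER PRIMARY KEY,
        summary   TEXT,
        severity  TEXT,     -- e.g. 'low', 'medium', 'high'
        status    TEXT      -- e.g. 'open', 'in progress', 'closed'
    )
""")
conn.executemany(
    "INSERT INTO defects (summary, severity, status) VALUES (?, ?, ?)",
    [
        ("Display freezes on start-up", "high", "open"),
        ("Wrong unit in temperature log", "medium", "closed"),
        ("Typo in help screen", "low", "open"),
    ],
)

# Progress report: number of defects per status and severity.
for status, severity, count in conn.execute(
    "SELECT status, severity, COUNT(*) FROM defects GROUP BY status, severity"
):
    print(f"{status:12} {severity:8} {count}")
```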

14.2.1.2 Test management tool

The test management tools provided by tool vendors offer only the functionality to store test cases, scripts, and scenarios and sometimes integrate defect management.

This type of test management tool is not very useful for planning and control activities. Tools with the ability to link system requirements to test cases are now available. These tools make it possible to keep track of the coverage of test cases in relation to the system requirements. In addition, they become very useful if system requirements are changed or might change – the test manager is able to show the impact of these changes, and this may be a reason to block the changes.
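The sketch below illustrates the underlying idea of such requirement-to-test-case links: coverage per requirement and the impact of a changed requirement. The requirement IDs and test case names are made up; a real test management tool keeps this administration in its own repository.

```python
# Illustrative sketch (not a vendor tool) of requirement-to-test-case tracing.
# Requirement IDs and test case names are invented for the example.
requirements = {"REQ-01": "Measure temperature", "REQ-02": "Raise alarm above limit"}

# Each test case lists the requirements it covers.
test_cases = {
    "TC-001": ["REQ-01"],
    "TC-002": ["REQ-01", "REQ-02"],
}

def coverage(req_id: str) -> list[str]:
    """Return the test cases that cover a given requirement."""
    return [tc for tc, reqs in test_cases.items() if req_id in reqs]

# Coverage report and impact analysis for a changed requirement.
for req_id in requirements:
    print(req_id, "covered by", coverage(req_id) or "NOTHING")

changed = "REQ-02"
print("Changing", changed, "impacts", coverage(changed))
```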

14.2.1.3 Configuration management tool

All testware has to be stored in a proper way. After quality checks, the testware should be frozen. The next step is to turn the testware into configuration items and store them in a configuration management tool.

A configuration management tool provides the functionality to keep track of the configuration items and their changes.

14.2.1.4 Scheduling and progress monitoring tool

For scheduling and progress monitoring, many different tools are available which are specially made for project management.

Scheduling and progress monitoring are combined in most tools. These kinds of tools are very useful for a test manager – combined with information from the defect and test management systems, it is possible to keep track of progress in a very detailed manner if necessary.

14.2.2 Preparation phase

14.2.2.1 CASE tool analyzer

CASE tools, such as tools for UML, are able to do consistency checks. These can be used to check whether or not the design has omissions and inconsistencies.

In this way, a CASE tool can be used to support the “testability review of the test basis” activity – provided that the tool was used to design the system of course.

14.2.2.2 Complexity analyzer

The degree of complexity of a piece of code is an indicator of the chance of errors occurring in it, and also of the number of test cases needed to test it thoroughly.
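As a rough illustration of the kind of metric such a tool reports, the sketch below approximates McCabe's cyclomatic complexity by counting decision points in the abstract syntax tree; a real complexity analyzer is considerably more thorough and language specific.

```python
# Rough sketch of the kind of metric a complexity analyzer reports:
# cyclomatic complexity approximated as 1 + number of decision points.
import ast

def cyclomatic_complexity(source: str) -> int:
    decisions = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                 ast.ExceptHandler, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

example = """
def classify(t):
    if t < 0:
        return "below zero"
    elif t < 100:
        return "liquid"
    else:
        return "steam"
"""
print(cyclomatic_complexity(example))  # 3: two decision points + 1
```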

14.2.3 Specification phase

14.2.3.1 Test case generator

Test design is a labor-intensive job and using tools would make it simpler – unfortunately, not many are available.

To use a test design tool, a formal language (mainly mathematically-based) has to be used for the system requirement specification.
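As a much simplified illustration of the principle, the sketch below derives boundary-value test cases from a machine-readable description of an input domain. Real test case generators work from far richer, formal notations; the parameter names and ranges here are invented.

```python
# Simplified sketch of test case generation from a formalized specification.
# Here the "specification" is just a dict of valid parameter ranges.
spec = {"temperature": (-40, 125), "pressure": (0, 10)}

def boundary_cases(low: int, high: int) -> list[int]:
    """Classic boundary-value analysis: just inside and just outside the domain."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

for parameter, (low, high) in spec.items():
    for value in boundary_cases(low, high):
        expected = "accept" if low <= value <= high else "reject"
        print(f"test case: {parameter}={value:5}  expected: {expected}")
```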

14.2.4 Execution phase

Many types of tools are available for the execution phase.

Many tool vendors have generic and highly specialized tools in their portfolio. The following types of tools are described in this section: test data generator; record and playback tool; load and stress test tool; simulator; stubs and drivers; debugger; static source code analyzer; error detection tool; performance analyzer; code coverage analyzer; thread and event analyzer; threat detection tool.

14.2.4.1 Test data generator

This generates a large amount of different input within a pre-specified input domain. The input is used to test if the system is able to handle the data within the input domain.
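A minimal sketch of this idea: generate many random records within a pre-specified input domain, using a fixed seed so the data set is reproducible. The field names and ranges are purely illustrative.

```python
# Sketch of a simple test data generator: produce a large number of random
# inputs within a pre-specified input domain. Field names are illustrative.
import random
import string

def generate_record(rng: random.Random) -> dict:
    return {
        "id": rng.randint(1, 10_000),
        "name": "".join(rng.choices(string.ascii_uppercase, k=8)),
        "temperature": rng.uniform(-40.0, 125.0),   # the pre-specified domain
    }

rng = random.Random(42)                 # fixed seed -> reproducible test data
test_data = [generate_record(rng) for _ in range(1000)]
print(test_data[0])
```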

14.2.4.2 Record and playback tool

This tool is very helpful in building up a regression test. Much of the test execution can then be done with the tool and the tester can mainly focus on new or changed functionality. Building up these tests is very labor intensive and there is always a maintenance problem. To make this type of tool profitable in terms of time, money, or quality, delicate organizational and technical decisions have to be made.

14.2.4.3 Load and stress test tool

Systems that have to function under different load conditions have to be tested for their performance. Load and stress tools have the ability to simulate different load conditions. Sometimes, one may be interested only in the performance under load or, sometimes more interesting, the functional behavior of the system under load or stress. The latter has to be tested in combination with a functional test execution tool.

Another reason to use this type of tool is to speed up the reliability test of a system. A straightforward reliability test takes at least as long as the mean time between failures, which is often in the order of months. By using load and stress tools that can simulate the same amount of use in a much shorter period, the test duration can be reduced to a matter of days.
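The sketch below shows the core of load generation: many concurrent "virtual users" exercising the system under test while response times are recorded. The target function is a stand-in; a real load tool drives the actual system interface.

```python
# Minimal sketch of load generation: fire many concurrent requests at a
# system under test and record response times.
import time
from concurrent.futures import ThreadPoolExecutor

def call_system_under_test(request_id: int) -> float:
    """Placeholder for one interaction with the real system interface."""
    start = time.perf_counter()
    time.sleep(0.01)                      # stands in for the real call
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=50) as pool:     # 50 simulated users
    durations = list(pool.map(call_system_under_test, range(1000)))

print(f"requests: {len(durations)}, "
      f"average: {sum(durations)/len(durations)*1000:.1f} ms, "
      f"worst: {max(durations)*1000:.1f} ms")
```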

14.2.4.4 Simulator

A simulator is used to test a system under controlled conditions.

The simulator is used to simulate the environment of the system, or to simulate other systems which may be connected to the system under test. A simulator is also used to test systems under conditions that are too hazardous to test in real life – for example, the control system of a nuclear power plant.
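As an illustration of environment simulation, the sketch below replaces a real thermal process with a crude model so that a (trivial) temperature controller can be tested under controlled conditions. Both the plant model and the controller are invented for the example.

```python
# Sketch of environment ("plant") simulation: a crude thermal model stands in
# for the real environment so a temperature controller can be tested safely.
def plant_step(temperature: float, heater_on: bool, dt: float = 1.0) -> float:
    """Crude thermal model: heating when the heater is on, cooling towards 20 C."""
    heating = 2.5 if heater_on else 0.0
    cooling = 0.05 * (temperature - 20.0)
    return temperature + (heating - cooling) * dt

def controller_under_test(temperature: float, setpoint: float = 60.0) -> bool:
    """The (trivial) control logic being tested: heater on below the setpoint."""
    return temperature < setpoint

temperature = 20.0
for _ in range(300):                               # simulate five minutes
    heater_on = controller_under_test(temperature)
    temperature = plant_step(temperature, heater_on)

assert 55.0 < temperature < 65.0, "controller failed to hold the setpoint"
print(f"temperature after simulation: {temperature:.1f} C")
```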

14.2.4.5 Stubs and drivers

Interfaces between two system parts can only be tested if both system parts are available. This can have serious consequences for the testing time. To avoid this and to start testing a system part as early as possible, stubs and drivers are used.

Standardization and the use of a testbed architecture (see Figure 14.2) can greatly improve the effective use of stubs and drivers. The testbed provides a standard (programmable) interface for the tester to construct and execute test cases. Each separate unit to be tested must have a stub and driver built for it with a specific interface to that unit but with a standardized interface to the testbed.

Techniques for test automation, such as data-driven testing (see section 15.2.1), can be applied effectively. Such a testbed architecture facilitates the testing of any unit, the reuse of such tests during integration testing, and large-scale automation of low-level testing.
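The sketch below gives a minimal impression of this set-up: a stub replaces the real sensor, a driver offers a standardized interface for constructing and executing test cases, and the test cases themselves are plain data, in the spirit of data-driven testing. The unit under test and its interface are invented for the example.

```python
# Sketch of a stub and a driver around a unit under test, with a standardized
# testbed-style interface. The unit, stub, and test cases are made-up examples.

def alarm_logic(read_temperature) -> str:
    """Unit under test: raises an alarm above 100 degrees."""
    return "ALARM" if read_temperature() > 100.0 else "OK"

class SensorStub:
    """Stub: replaces the real temperature sensor with a scripted value."""
    def __init__(self, value: float):
        self.value = value
    def read(self) -> float:
        return self.value

def run_test_case(stimulus: float, expected: str) -> bool:
    """Driver with a standardized interface: stimulus in, verdict out."""
    stub = SensorStub(stimulus)
    return alarm_logic(stub.read) == expected

# Data-driven testing (see section 15.2.1): test cases as plain data.
test_cases = [(99.0, "OK"), (100.0, "OK"), (101.0, "ALARM")]
for stimulus, expected in test_cases:
    verdict = "pass" if run_test_case(stimulus, expected) else "FAIL"
    print(f"stimulus={stimulus:6}  expected={expected:5}  {verdict}")
```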

14.2.4.6 Debugger

A debugger can detect syntactic failures. Debuggers have the ability to run the application stepwise and to set, monitor, and manipulate variables. Debuggers are standard elements of developer tools.

14.2.4.7 Static source code analyzer

Coding standards are often used during projects. These standards define, for instance, the layout of the code, that pre- and post-conditions should be described, and that comments should be used to explain complicated structures.

Additional standards which forbid certain error-prone, confusing, or exotic structures, or which forbid high complexity, can be used to enforce a certain stability on the code beforehand. These standards can be enforced and checked by static analyzers. Such analyzers already contain a set of coding standards which can be customized. They analyze the software and show the code which violates the standards. Be aware that coding standards are language dependent.
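As a toy illustration, the sketch below checks source lines against a few invented coding rules without executing the code; real static analyzers work on the parsed code and ship with extensive, customizable rule sets.

```python
# Toy sketch of a static source code analyzer: check a couple of made-up
# coding standards on source lines without executing them.
import re

RULES = [
    (re.compile(r"\t"),       "tabs are not allowed, use spaces"),
    (re.compile(r"\bgoto\b"), "forbidden error-prone construct"),
    (re.compile(r".{81,}"),   "line exceeds 80 characters"),
]

def check_source(lines: list[str]) -> list[str]:
    violations = []
    for number, line in enumerate(lines, start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                violations.append(f"line {number}: {message}")
    return violations

source = ["int x = 1;", "\tgoto error;"]      # example source lines
print("\n".join(check_source(source)))
```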

14.2.4.9 Performance analyzer

This type of tool can show resource usage at algorithm level, while load and stress tools are used to do performance checks at system level.

The tools can show the execution time of algorithms, memory use, and CPU capacity. All these figures give the opportunity to optimize algorithms in terms of these parameters.
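The sketch below shows the principle at algorithm level using Python's built-in profiler, which reports where execution time is spent; the two functions being compared are arbitrary examples.

```python
# Sketch of performance analysis at algorithm level with Python's built-in
# profiler: it reports where execution time is spent.
import cProfile
import pstats

def slow_sum(n: int) -> int:
    total = 0
    for i in range(n):
        total += i * i
    return total

def fast_sum(n: int) -> int:
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_sum(200_000)
fast_sum(200_000)
profiler.disable()

# Show the most time-consuming functions first.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```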

14.2.4.10 Code coverage analyzer

This type of tool shows the coverage of test cases. It monitors the lines of code triggered by the execution of test cases.

The proportion of lines executed is a metric for the coverage of the test cases. Be aware that code coverage is not a good metric for the quality of the test set.
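A minimal sketch of line coverage measurement, using Python's standard trace module to record which lines a test case executes. The unit under test and the single test case are illustrative; the line that is never reached shows exactly the kind of gap a coverage analyzer reveals.

```python
# Sketch of line coverage measurement with Python's standard "trace" module:
# it records which lines are executed by the test cases.
import trace

def absolute(x: int) -> int:
    if x < 0:
        return -x       # only reached by a test with a negative input
    return x

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(absolute, 5)          # test case 1: positive input only

results = tracer.results()
executed = {line for (filename, line), hits in results.counts.items()}
print("executed lines:", sorted(executed))
# The line returning -x is missing: the test set never exercises x < 0.
```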

14.2.4.11 Thread and event analyzer

This type of tool supports the detection of run-time multi-threading problems that can degrade Java performance and reliability.

It helps to detect the causes of thread starvation or thrashing, deadlocks, unsynchronized access to data, and thread leaks.
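The source text discusses Java-oriented tools; purely as an illustration of the kind of defect they flag, the sketch below shows two Python threads that acquire the same locks in opposite order, a classic deadlock risk. The threads are deliberately not started, so the sketch itself cannot hang.

```python
# Illustration of the kind of problem a thread analyzer flags: two threads
# taking the same two locks in opposite order, which can deadlock.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker_one():
    with lock_a:
        with lock_b:        # holds lock_a, then waits for lock_b
            pass

def worker_two():
    with lock_b:
        with lock_a:        # holds lock_b, then waits for lock_a -> deadlock risk
            pass

# A thread analyzer would report the inconsistent lock ordering above;
# the fix is to acquire the locks in the same order in every thread.
# (The threads are deliberately not started here, so the sketch cannot hang.)
```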

13.3.2 Hardware/software integration tests

In the hardware/software integration (HW/SW/I) test the test object is a hardware part on which the integrated software is loaded. The software is incorporated in the hardware in memory, usually (E)EPROM.

The piece of hardware can be an experimental configuration, for instance a hard-wired circuit board containing several components including the memory.

The goal of the HW/SW/I test is to verify the correct execution of the embedded software on the target processor in co-operation with surrounding hardware. Because the behavior of the hardware is an essential part of this test, it is often referred to as “hardware-in-the-loop”.

The test environment for the HW/SW/I test will have to interface with the hardware. Depending on the test object and its degree of completeness, the following possibilities exist: offering input stimuli with signal generators; output monitoring with oscilloscopes or a logic analyzer, combined with data storage devices; in-circuit test equipment to monitor system behavior on points other than the outputs; simulation of the environment of the test object (the "plant") in a real-time simulator.

14.2.4.12 Threat detection tool

This type of tool can detect possible threats that can lead to malfunctioning.

The types of threats include memory leaks, null pointer dereference, bad deallocation, out-of-bounds array access, uninitialized variables, and possible division by zero.

14.2.5 Completion phase

Basically, the same tools as for the planning and control phase are used for this phase – although the focus here is on archiving and extracting conclusions rather than planning and control.

Configuration management tools are used for archiving testware and infrastructure, including tool descriptions, calibration reports, etc.

Scheduling and monitoring tools are used to derive project metrics for evaluation reports.

Defect management tools are used to derive product metrics and sometimes provide information for project metrics.