
Dynamic Testing Techniques

Delegate Notes

The need for testing techniques

In Session 1 we explained that testing everything is known as exhaustive testing (defined as exercising every combination of inputs and preconditions) and demonstrated that it is an impractical goal. Therefore, as we cannot test everything, we have to select a subset of all possible tests. In practice the subset we select is tiny, and yet it has to have a high probability of finding most of the faults in a system.

Experience and experiments have shown us that selecting a subset at random is neither very effective nor very efficient (even if it is tool supported). We have to select tests using some intelligent thought process. Test techniques are such thought processes.

What is a testing technique?

A testing technique is a thought process that helps us select a good set of tests from the total number of all possible tests for a given system. Different techniques offer different ways of looking at the software under test, possibly challenging assumptions made about it. Each technique provides a set of rules or guidelines for the tester to follow in identifying test conditions and test cases. They are based on either a behavioural or a structural model of the system. In other words, they are based on an understanding of the system's behaviour (functions and non-functional attributes such as performance or ease of use - what the system does) or its structure - how it does it.

There are many different testing techniques, and those that have been published have been found to be successful at identifying tests that find faults. The use of testing techniques is 'best practice', though they should not be used to the exclusion of any other approach.

Put simply, a testing technique is a means of identifying good tests. Recall from Session 1 that a good test case is four things:

effective - has the potential to find faults;

exemplary - represents other test cases;

evolvable - easy to maintain;

economic - doesn't cost much to use.

Advantages of Techniques

Different people using the same technique on the same system will almost certainly arrive at different test cases but they will have a similar probability of finding faults. This is because the technique will guide them into having a similar or the same view of the system and to make similar or the same assumptions.

Using a technique also gives the tester some independence of thought. Developers who apply test techniques may find faults that they would not have found if they had only tested intuitively.

Using techniques makes testing more effective

This means that more faults will be found. Because a technique focuses on a particular type of fault, it becomes more likely that the tests will find more of that type of fault. By selecting appropriate testing techniques it is possible to control more accurately what is being tested, and so reduce the chance of overlap between different test cases. You also have more confidence that you are testing the things that are most in need of being tested.

Using techniques makes testing more efficient

This means that faults will be found with less effort. Given two sets of tests that find the same faults in the same system, if one set costs 25% less to produce then it is the better set, even though both find the same faults. Good testing not only maximises effectiveness but also maximises efficiency, i.e. minimises cost.

Systematic techniques are measurable: it is possible to quantify the extent of their use, giving an objective assessment of the thoroughness of testing with respect to each technique. This is useful for comparing one test effort with another and for providing confidence in the adequacy of testing.

Types of Testing Technique

There are many different types of software testing technique, each with its own strengths and weaknesses. Each individual technique is good at finding particular types of fault and relatively poor at finding other types. For example, a technique that explores the upper and lower limits of a single input range is more likely to find boundary value faults than faults associated with combinations of inputs. Similarly, testing performed at different stages in the software development life cycle is going to find different types of faults; component testing is more likely to find coding faults than system design faults.

Each testing technique falls into one of a number of different categories. Broadly speaking there are two main categories, static and dynamic. However, dynamic techniques are subdivided into two further categories: behavioural (black box) and structural (white box). Behavioural techniques can be further subdivided into functional and non-functional techniques. Each of these is summarised below.

Static Testing Techniques

As we saw in Session 3, static testing techniques do not execute the code being examined, and are generally used before any tests are executed on the software. They could be called non-execution techniques. Most static testing techniques can be used to test any form of document including source code, design, functional and requirement specifications. However, static analysis is a tool-supported variant that concentrates on testing formal languages, and so is most often used to statically test source code.

Functional Testing Techniques (Black Box)

Functional testing techniques are also known as black-box and input / output-driven testing techniques because they view the software as a black box with inputs and outputs, but have no knowledge of how it is structured inside the box. In essence, the tester is concentrating on the function of the software, that is, what it does, not how it does it.

Structural Testing Techniques (White Box)

Structural testing techniques use the internal structure of the software to derive test cases. They are commonly called white-box or glass-box techniques (implying you can see into the system) since they require knowledge of how the software is implemented, that is, how it works. For example, a structural technique may be concerned with exercising loops in the software. Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software.

Non-Functional Testing Techniques

Non-functional aspects of a system (also known as quality aspects) include performance, usability, portability, maintainability, etc. This category of technique is concerned with examining how well the system does something, not what it does or how it does it. Techniques to test these non-functional aspects are less procedural and less formalised than those of other categories, as the actual tests are more dependent on the type of system, what it does and the resources available for the tests. How to specify non-functional tests is outside the scope of the syllabus for this course, but an approach to doing so is outlined in the supplementary section at the back of these notes. The approach uses quality attribute templates, a technique from Tom Gilb's book Principles of Software Engineering Management, Addison-Wesley, 1988.

Non-functional testing at system level is part of the syllabus and was covered in Session 2 (but techniques for deriving non-functional tests are not covered).

Black Box versus White Box

Black box techniques are appropriate at all stages of testing (Component Testing through to User Acceptance Testing). While individual components form part of the structure of a system, when performing Component Testing it is possible to view the component itself as a black box, that is, to design test cases based on its functionality without regard for its structure. Similarly, white box techniques can be used at all stages of testing, but are typically used predominantly at Component Testing and Integration Testing in the Small.

Black Box Test Techniques

Techniques Defined in BS 7925-2

The Software Component Testing Standard BS 7925-2 defines the following black-box testing techniques:

Equivalence Partitioning;

Boundary Value Analysis;

State Transition Testing;

Cause-Effect Graphing;

Syntax Testing;

Random Testing.

The standard also defines how other techniques can be specified. This is important since it means that anyone wishing to conform to the Software Component Testing Standard is not restricted to using the techniques that the standard defines.

Equivalence Partitioning & Boundary Value Analysis

Equivalence partitioning

Equivalence Partitioning is a good all-round functional black-box technique. It can be applied at any level of testing and is often a good technique to use first. It is a common sense approach to testing, so much so that most testers practise it informally even though they may not realise it. However, while it is better to use the technique informally than not at all, it is much better to use the technique in a formal way to attain the full benefits that it can deliver.

The idea behind the technique is to divide or partition a set of test conditions into groups or sets that can be considered the same or equivalent, hence 'equivalence partitioning'. Equivalence partitions are also known as equivalence classes; the two terms mean exactly the same thing.

The benefit of doing this is that we need test only one condition from each partition. This is because we are assuming that all the conditions in one partition will be treated in the same way by the software. If one condition in a partition works, we assume all of the conditions in that partition will work, and so there is no point in testing any of the others. Conversely, if one of the conditions in a partition does not work, then we assume that none of the conditions in that partition will work, so again there is no point in testing any more in that partition. Of course these are simplifying assumptions that may not always be right, but writing them down at least gives others the chance to challenge the assumptions and hopefully help identify more accurate equivalence partitions.

For example, a savings account in a bank earns a different rate of interest depending on the balance in the account. In order to test the software that calculates the interest due, we can identify the ranges of balance values that earn different rates of interest. For example, if a balance in the range 0 to 100 has a 3% interest rate, a balance between 100 and 1,000 has a 5% interest rate, and balances of 1,000 and over have a 7% interest rate, we would initially identify three equivalence partitions: 0 - 100, 100.01 - 999.99, and 1,000 and above. When designing the test cases for this software we would ensure that these three equivalence partitions were each covered once. So we might choose to calculate the interest on balances of 50, 260 and 1,348. Had we not identified these partitions, it is possible that at least one of them could have been missed at the expense of testing another one several times over (such as with the balances of 30, 140, 250, and 400).
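As a minimal sketch in Python, the partitions and one representative test value from each might look like this (interest_rate is a hypothetical stand-in for the software under test, implementing the rules described above):

def interest_rate(balance):
    if balance <= 100.00:
        return 0.03              # 3% for the partition 0 - 100
    elif balance <= 999.99:
        return 0.05              # 5% for the partition 100.01 - 999.99
    else:
        return 0.07              # 7% for the partition 1,000 and above

# One representative value from each equivalence partition.
for balance in (50.00, 260.00, 1348.00):
    print(balance, '->', interest_rate(balance))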

Boundary value analysis

Boundary value analysis is based on testing on and around the boundaries between partitions. If you have done "range checking", you were probably using the boundary value analysis technique, even if you weren't aware of it. Note that we have both valid boundaries (in the valid partitions) and invalid boundaries (in the invalid partitions).
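For the savings-account partitions above, a sketch of the boundary test values might look like this (all values illustrative, assuming balances are held to two decimal places and that negative balances are invalid):

# Values on and either side of each partition boundary.
valid_boundaries = [0.00, 0.01, 100.00, 100.01, 999.99, 1000.00]

# Just outside the valid range altogether (an invalid boundary).
invalid_boundaries = [-0.01]

for balance in invalid_boundaries + valid_boundaries:
    print(balance)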

Examples of Boundary Value Analysis and Equivalence Partitioning

Design Test Cases

Having identified the conditions that you wish to test, the next step is to design the test cases. The more test conditions that can be covered in a single test case, the fewer the test cases that are needed.

Generally, each test case for invalid conditions should cover only one condition. This is because programs typically stop processing input as soon as they encounter the first fault. However, if it is known that the software under test is required to process all input regardless of its validity, it is sensible to proceed as for valid conditions and design test cases that cover as many invalid conditions in one go as possible. In either case, there should be separate test cases covering valid and invalid conditions.

The test cases to cover the boundary conditions are done in a similar way.
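A sketch of this rule (field names and values are illustrative only): valid conditions can be combined into a single test case, while each invalid condition gets a test case of its own.

# One test case can cover several valid conditions at once.
valid_test = {'balance': 260.00, 'term_years': 3}

# Each invalid test case makes exactly one condition invalid.
invalid_tests = [
    {'balance': -50.00, 'term_years': 3},    # only the balance is invalid
    {'balance': 260.00, 'term_years': 0},    # only the term is invalid
]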

Why do both EP and BVA?

Technically, because every boundary is in some partition, if you did only boundary value analysis (BVA) you would also have tested every equivalence partition (EP).

However, this approach will cause problems when a test fails: was it only the boundary value that failed, or did the whole partition fail? Also, by testing only boundaries we would probably not give the users much confidence, as we are using extreme values rather than normal values. The boundaries may also be more difficult (and therefore more costly) to set up.

We recommend that you test the partitions separately from boundaries - this means choosing partition values that are NOT boundary values.

What partitions and boundaries you exercise, and which first, depends on your objectives. If your goal is the most thorough approach, then follow the traditional approach and test valid partitions, then invalid partitions, then valid boundaries and finally invalid boundaries. However, if you are under time pressure and cannot test everything (and who isn't?), then your objective will help you decide what to test. If you are after user confidence with minimum tests, you may do valid partitions only. If you want to find as many faults as possible as quickly as possible, you may start with invalid boundaries.
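These orderings can be sketched as lists of test values, reusing the savings-account example (all values illustrative; partition values are deliberately chosen mid-partition, away from the boundaries):

valid_partitions   = [50.00, 260.00, 1348.00]
invalid_partitions = [-50.00]
valid_boundaries   = [0.00, 0.01, 100.00, 100.01, 999.99, 1000.00]
invalid_boundaries = [-0.01]

# Thorough: valid partitions, invalid partitions, valid then invalid boundaries.
thorough = valid_partitions + invalid_partitions + valid_boundaries + invalid_boundaries

# Minimum tests for user confidence: valid partitions only.
confidence_first = valid_partitions

# Fast fault finding: start with the invalid boundaries.
fault_finding_first = invalid_boundaries + valid_boundaries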

State Transition Testing

This technique is not specifically required by the ISEB syllabus, although the syllabus does require one additional black box technique to be covered. Therefore the details of this technique should not appear in the exam; only an awareness of another functional technique is expected.

State transition testing is used where some aspect of the system can be described in what is called a finite state machine. This simply means that the system can be in a (finite) number of different states, and the transitions from one state to another are determined by the rules of the machine. This is the model on which the system and the tests are based.

Any system where you get a different output for the same input, depending on what has happened before, is a finite state system. For example, if you request to withdraw 100 from a bank ATM, you may be given cash. Later you may make exactly the same request but be refused the money (because your balance is insufficient). This later refusal is because the state of your bank account had changed from having sufficient funds to cover the withdrawal to having insufficient funds. The transaction that caused your account to change its state was probably the earlier withdrawal.

Another example is a word processor. If a document is open, you are able to Close it. If no document is open, then Close is not available. After you choose Close once, you cannot choose it again for the same document unless you open that document again. A document thus has two states: open and closed.

A state transition model has four basic parts:

the states that the software may occupy (open/closed or funded/insufficient funds);

the transitions from one state to another (not all transitions are allowed);

the events that cause a transition (withdrawing money, closing a file);

the actions that result from a transition (an error message, or being given your cash).

Note that a transition does not need to change to a different state; it could stay in the same state. In fact, entering an invalid input would be likely to produce an error message as the action, but the transition would be back to the same state the system was in before.
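A minimal sketch of the word-processor example as a finite state machine in Python (all names are illustrative): the four parts of the model are the states, the events, the transitions and the actions, with an invalid event producing an error message and a transition back to the same state.

# (current state, event) -> (next state, action)
transitions = {
    ('closed', 'open'):  ('open',   'display the document'),
    ('open',   'close'): ('closed', 'remove the document window'),
}

def step(state, event):
    # An unlisted (invalid) event stays in the same state, with an
    # error message as the action.
    return transitions.get((state, event), (state, 'error message'))

state = 'closed'
for event in ('open', 'close', 'close'):    # the second 'close' is invalid
    state, action = step(state, event)
    print(event, '->', state, '|', action)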

Deriving test cases from the state transition model is a black box approach. Measuring how much you have tested (covered) is getting close to a white box perspective. However, state transition testing is generally regarded as a black box technique.

You can design tests to test every transition shown in the model. If every (valid) transition is tested, this is known as 0-switch coverage. You could also test a series of transitions through more than one state. If you covered all of the pairs of two valid transitions, you would have 1-switch coverage; covering all the sets of three consecutive transitions would give 2-switch coverage, and so on.

However, deriving tests only from the model may omit the negative tests, where we could try to generate invalid transitions. In order to see the total number of combinations of states and transitions, both valid and invalid, a state table can be used.
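As a sketch, 0-switch and 1-switch test sets can be derived mechanically from the list of valid transitions (the model here is a simplified ATM; all labels are illustrative):

transitions = [
    ('wait for card', 'card inserted', 'wait for PIN'),
    ('wait for PIN',  'valid PIN',     'select transaction'),
    ('wait for PIN',  'invalid PIN',   'wait for PIN'),
    ('wait for PIN',  'cancel',        'wait for card'),
]

# 0-switch coverage: every valid transition tested once.
zero_switch = transitions

# 1-switch coverage: every chain of two consecutive valid transitions.
one_switch = [(t1, t2) for t1 in transitions for t2 in transitions
              if t1[2] == t2[0]]    # the second starts where the first ends

print(len(zero_switch), 'transitions;', len(one_switch), 'transition pairs')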

White Box Test Techniques

White box techniques are normally used after an initial set of tests has been derived using black box techniques. They are most often used to measure "coverage" - how much of the structure has been exercised or covered by a set of tests.

Coverage measurement is best done using tools, and there are a number of such tools on the market. These tools can help to increase productivity and quality. They increase quality by ensuring that more structural aspects are tested, so faults on those structural paths can be found. They increase productivity and efficiency by highlighting tests that may be redundant, i.e. testing the same structure as other tests (although this is not necessarily a bad thing!)

What are Coverage Techniques?

Coverage techniques serve two purposes: test measurement and test case design. They are often used in the first instance to assess the amount of testing performed by tests derived from functional techniques. They are then used to design additional tests with the aim of increasing the test coverage.

Coverage techniques are a good way of generating additional test cases that are different from existing tests. In any case, they help ensure breadth of testing, in the sense that test cases achieving 100% coverage in any measure will be exercising all parts of the software.

There is also a danger in these techniques: 100% coverage does not mean 100% tested. Coverage techniques measure only one dimension of a multi-dimensional concept. Two different test cases may achieve exactly the same coverage but the input data of one may find an error that the input data of the other doesn't. Furthermore, coverage techniques measure coverage of the software code that has been written; they cannot say anything about the software that has not been written. If a function has not been implemented, only functional testing techniques will reveal the omission.

In common with all structural testing techniques, coverage techniques are best used on areas of software code where more thorough testing is required. Safety critical code, code that is vital to the correct operation of a system, and complex pieces of code are all examples of where structural techniques are particularly worth applying. They should always be used in addition to functional testing techniques rather than as an alternative to them.

Types of Coverage

Test coverage can be measured based on a number of different structural elements in software. The simplest of these is statement coverage which measures the number of executable statements executed by a set of tests and is usually expressed in terms of the percentage of all executable statements in the software under test. In fact, all coverage techniques yield a result which is the number of elements covered expressed as a percentage of the total number of elements.

Statement coverage is the simplest and perhaps the weakest of all coverage techniques. The adjectives weak and strong applied to coverage techniques refer to their likelihood of finding errors. The stronger the technique, the more errors you can expect to find with test cases designed using it, for the same measure of coverage.

There are a lot of structural elements that can be used for coverage. Each technique uses a different element; the most popular are described in later sections.

Besides statement coverage, there are a number of different types of control flow coverage techniques, most of which are tool supported. These include branch or decision coverage, LCSAJ (linear code sequence and jump) coverage, condition coverage and condition combination coverage. Any representation of a system is in effect a model against which coverage may be assessed. Call tree coverage is another example for which tool support is commercially available.

Another popular, but often misunderstood, coverage measure is path coverage. Path coverage is usually taken to mean branch or decision coverage, because both these techniques seek to cover 100% of the paths through the code. However, strictly speaking, for any code that contains a loop path coverage is impossible, since a path that travels round the loop, say, 3 times is different from the path that travels round the same loop 4 times. This is true even if the rest of the paths are identical. So if it is possible to travel round the loop an unlimited number of times, then there is an unlimited number of paths through that piece of code. For this reason it is more correct to talk about independent path segment coverage, though the shorter term path coverage is frequently used.
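A small sketch of why loops defeat strict path coverage: each extra trip round the loop is a distinct path, so a loop with no fixed bound has no fixed number of paths.

def paths_through_loop(max_iterations):
    # One path per possible number of trips round the loop (including zero).
    return ['enter' + ' -> loop body' * n + ' -> exit'
            for n in range(max_iterations + 1)]

for path in paths_through_loop(4):
    print(path)      # 5 distinct paths for a loop bounded at 4 iterations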

There is currently very little tool support available for data flow coverage techniques, though tool support is growing. Data flow coverage techniques include definitions, uses, and definition-use pairs.

Other, more specific, coverage measures include things like database structural elements (records, fields, and sub-fields) and files. State transition coverage is also possible. It is worth checking for any new tools, as the test tool market can develop quite rapidly.

How to Measure Coverage

For most practical purposes, coverage measurement is something that requires tool support. However, knowledge of the steps needed to measure coverage is useful in understanding the relative merits of each technique.

1. Decide on the structural element to be used.

2. Count the structural elements.

3. Instrument the code.

4. Run the tests for which the coverage measure is required.

5. Using the output from the instrumentation, determine the percentage of elements exercised.

Instrumenting the code (step 3) means inserting code alongside each structural element in order to record that the associated structural element has been exercised. Determining the actual coverage measure (step 5) is then a matter of analysing the recorded information.
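A hand-instrumented sketch of steps 3 to 5 in Python, using a small three-statement fragment (read a; if a > 6 then b = a; print b). In practice a coverage tool inserts and analyses these probes automatically.

executed = set()

def program(a):
    executed.add('S1')        # probe for: read(a)
    b = 0                     # initialised here only so the sketch runs
    if a > 6:
        executed.add('S2')    # probe for: b = a
        b = a
    executed.add('S3')        # probe for: print b
    print(b)

program(10)                   # run the tests being measured
coverage = 100 * len(executed) / 3
print('statement coverage:', coverage, '%')    # 100% from this single test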

When a specific coverage measure is required or desired but not attained, additional test cases have to be designed with the aim of exercising some or all of the structural elements not yet reached. These are then run through the instrumented code and a new coverage measure determined. This is repeated until the required coverage measure is achieved.

Finally, the instrumentation should be removed. In practice the instrumentation should be applied to a copy of the source code, so that the copy can simply be deleted once you have finished measuring coverage. This avoids any errors that could be made when removing instrumentation. In any case, all the tests ought to be re-run on the un-instrumented code.

Statement Coverage

Statement coverage is the number of executable statements exercised by a test or test suite. This is calculated by:

Statement coverage = (number of statements exercised / total number of statements) x 100%
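For example, if a program has 100 executable statements and the tests exercise 87 of them, statement coverage is 87%.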

Typical ad hoc testing achieves 60% to 75% statement coverage.

Branch & Decision Testing / Coverage

Branch coverage is the number of branches (decisions) exercised by a test or test suite. This is calculated by:

Branch coverage = (number of branches exercised / total number of branches) x 100%
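For example, if a program has 120 decision outcomes and the tests exercise 60 of them, branch (decision) coverage is 50%.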

Typical ad hoc testing achieves 40% to 60% branch coverage. Branch coverage is stronger than statement coverage since it may require more test cases to achieve 100% coverage. For example, consider the code segment shown below.

if a > b
    c = 0
endif

To achieve 100% statement coverage of this code segment just one test case is required, one which ensures that variable a contains a value greater than the value of variable b. However, branch coverage requires each decision to have had both a true and a false outcome. Therefore, to achieve 100% branch coverage, a second test case is necessary. This will ensure that the decision statement if a > b has a false outcome.

Note that 100% branch coverage guarantees 100% statement coverage.

Branch and decision coverage are actually slightly different for less than 100% coverage, but at 100% coverage they give exactly the same results.
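This can be sketched with two instrumented test runs in Python (the else branch below exists only to record the false outcome of the decision; it is not part of the original fragment):

outcomes = set()

def code_segment(a, b):
    c = None
    if a > b:
        outcomes.add('true')
        c = 0
    else:
        outcomes.add('false')
    return c

code_segment(2, 1)    # test 1: a > b, achieves 100% statement coverage
print(outcomes)       # {'true'} - one of two outcomes: 50% branch coverage

code_segment(1, 2)    # test 2: a <= b, the false outcome
print(outcomes)       # both outcomes now exercised: 100% branch coverage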

Error Guessing

Although it is true that testing should be rigorous, thorough and systematic, this is not all there is to testing. There is a definite role for non-systematic techniques.

Many people confuse error guessing with ad hoc testing. Ad hoc testing is unplanned and usually done before (or instead of) rigorous testing. Error guessing is done last as a supplement to rigorous techniques.

Error guessing is a technique that should always be used after other more formal techniques. The success of error guessing is very much dependent on the skill of the tester, as good testers know where the errors are most likely to lurk.

Some people seem to be naturally good at testing and others are good testers because they have a lot of experience either as a tester or working with a particular system and so are able to pin point its weaknesses. That is why error guessing is best done after more formal techniques. In using other techniques the tester is likely to gain a better understanding of the system, what it does and how it works. With a better understanding anyone is likely to be more able to think of ways in which the system may not work properly.

There are no rules for error guessing. The tester is encouraged to think of situations in which the software may not be able to cope. Typical conditions to try include divide by zero, blank (or no) input, empty files and the wrong kind of data (e.g. alphabetic characters where numeric are required). If anyone ever says of a system or the environment in which it is to operate, 'That could never happen', it might be a good idea to test that condition, as such assumptions about what will and will not happen in the live environment are often the cause of failures.

Other names for error guessing include experience-driven testing, heuristic testing and lateral testing.

Summary Quiz

Test techniques are:

Black box based on:

White box based on:

Error guessing:

Error guessing best used:


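Session Slides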

Error guessing: deriving test cases

Consider: past failures, intuition, experience, brainstorming - what is the craziest thing we can do?


Paths Through Code With Loops

A loop can be traversed for as many times as it is possible to go round the loop (this can be unlimited, i.e. infinite).

Example 5

Read A
Read B
IF A < 0 THEN
    Print "A negative"
ENDIF
IF B < 0 THEN
    Print "B negative"
ENDIF

Cyclomatic complexity: 3. Minimum tests to achieve statement coverage: 1; branch coverage: 2.

Example 6

Read A
IF A < 0 THEN
    Print "A negative"
ENDIF
IF A > 0 THEN
    Print "A positive"
ENDIF

Cyclomatic complexity: 3. Minimum tests to achieve statement coverage: 2; branch coverage: 2.

Example 2

Read A
IF A > 0 THEN
    IF A = 21 THEN
        Print "Key"
    ENDIF
ENDIF

Cyclomatic complexity: 3. Minimum tests to achieve statement coverage: 1; branch coverage: 3.

Example 3

Read A
Read B
IF A > 0 THEN
    IF B = 0 THEN
        Print "No values"
    ELSE
        Print B
        IF A > 21 THEN
            Print A
        ENDIF
    ENDIF
ENDIF

Cyclomatic complexity: 4. Minimum tests to achieve statement coverage: 2; branch coverage: 4.

Example 4

Read A
Read B
IF A < 0 THEN
    Print "A negative"
ELSE
    Print "A positive"
ENDIF
IF B < 0 THEN
    Print "B negative"
ELSE
    Print "B positive"
ENDIF

Cyclomatic complexity: 3. Minimum tests to achieve statement coverage: 2; branch coverage: 2. Note: there are 4 paths.

Example 1

Wait for card to be inserted
IF card is a valid card THEN
    display "enter PIN number"
    IF PIN is valid THEN
        select transaction
    ELSE (otherwise)
        display "PIN invalid"
    ENDIF
ELSE (otherwise)
    reject card
ENDIF
End

Example of statement coverage

read(a)
IF a > 6 THEN
    b = a
ENDIF
print b




Example: loan application

Customer name: 2-64 characters
Account number: 4 digits, first non-zero
Loan amount requested: 200 to 5000
Term of loan: 1 to 10 years
Monthly repayment: minimum 10

Outputs: term, repayment, interest rate, total paid back.



Some test techniques

Test techniques divide into static and dynamic.


Statement coverage

Percentage of executable statements exercised by a test suite = number of statements exercised / total number of statements. Example: a program has 100 statements; tests exercise 87 statements; statement coverage = 87%. Typical ad hoc testing achieves 60 - 75%. Statement coverage is normally measured by a software tool.

Non-systematic test techniques

Trial and error / ad hoc; error guessing / experience-driven; user testing; unscripted testing. A testing approach that is only rigorous, thorough and systematic is incomplete.

Error-guessing

Always worth including, after systematic techniques have been used. Can find some faults that systematic techniques can miss. A 'mopping up' approach that supplements systematic techniques; not a good approach to start testing with.

Decision Coverage (Branch Coverage)

Percentage of decision outcomes exercised by a test suite = number of decision outcomes exercised / total number of decision outcomes. Example: a program has 120 decision outcomes; tests exercise 60 decision outcomes; decision coverage = 50%. Typical ad hoc testing achieves 40 - 60%. Decision coverage is normally measured by a software tool.

Why do both EP and BVA?

If you do boundaries only, you have covered all the partitions as well. This is technically correct, and may be OK if everything works correctly! But if a test fails, is the whole partition wrong, or is a boundary in the wrong place? You have to test mid-partition anyway. Testing only extremes may not give confidence for typical use scenarios (especially for users), and boundaries may be harder (more costly) to set up.

State transition testing (design)

Test cases are designed to achieve required coverage: state transitions (0-switch), transition pairs (1-switch), transition triples (2-switch), etc. A limitation of switch coverage is that it covers only valid transitions. A more complete test set will test for possible invalid transitions; use a state table to identify invalid transitions.

Using structural coverage

(Diagram: coverage measurement repeatedly prompts the design of more tests until the target coverage is reached.)

The test coverage trap

100% coverage does not mean 100% tested! Coverage is not thoroughness. (Diagram: functional testedness plotted against structural testedness - % statement, % decision, % condition combination; tests may exercise structure with insufficient function, or function with insufficient structure.)

Design and Measurement Techniques

Techniques defined in BS 7925-2: statement testing; branch / decision testing; data flow testing; branch condition testing; branch condition combination testing; modified condition decision testing; LCSAJ testing. The standard also defines how to specify other techniques.

Test objectives?

For a thorough approach: valid partitions, invalid partitions, valid boundaries, invalid boundaries. Under time pressure, it depends on your test objective - minimal user confidence: valid partitions only? Maximum fault finding: valid boundaries first (plus invalid boundaries?).

State transition testing (analysis)

Uses a model that shows: the states the software may occupy; the transitions between the states; the events which cause the transitions; the actions that result from the transitions. (Diagram: 'wait for card' and 'wait for PIN' states, with 'card inserted', 'valid PIN', 'invalid PIN' and 'cancel' events.)

Three types of systematic technique

Static (non-execution): examination of documentation, source code listings, etc. Functional (black box): based on behaviour / functionality of software. Structural (white box): based on structure of software.

Boundary value analysis (BVA)

Faults tend to lurk near boundaries, which makes boundaries a good place to look for faults. Test values on both sides of each boundary.

Design test cases

Test case 1 - Description: Name: John Smith; Acc no: 1234; Loan: 2500; Term: 3 years. Expected outcome: Term: 3 years; Repayment: 79.86; Interest rate: 10%; Total paid: 2874.96. New tags covered: V1, V2, V3, V4, V5, ...

Test case 2 - Description: Name: AB; Acc no: 1000; Loan: 200; Term: 1 year. Expected outcome: Term: 1 year; Repayment: 17.92; Interest rate: 7.5%; Total paid: 215.00. New tags covered: B1, B3, B5, ...

Black Box Design & Measurement Techniques

Techniques defined in BS 7925-2: equivalence partitioning; boundary value analysis; state transition testing; cause-effect graphing; syntax testing; random testing. The standard also defines how to specify other techniques.

Equivalence partitioning (EP)

Divide (partition) the inputs, outputs, etc. into areas which are the same (equivalent). Assumption: if one value works, all will work. One value from each partition is better than all from one.

Black box versus white box?

Black box is appropriate at all levels but dominates the higher levels of testing. White box is used predominantly at the lower levels, to complement black box.

Advantages of techniques

Different people have a similar probability of finding faults, and gain some independence of thought. Effective testing - find more faults: focus attention on specific types of fault; know you're testing the right thing. Efficient testing - find faults with less effort: avoid duplication; systematic techniques are measurable. Techniques make testing more effective and more efficient.

Measurement

An objective assessment of the thoroughness of testing (with respect to the use of each technique), useful for comparing one test effort with another. E.g. Project A: 60% equivalence partitions, 50% boundaries, 75% branches. Project B: 40% equivalence partitions, 45% boundaries, 60% branches.

What is a testing technique?

A procedure for selecting or designing tests, based on a structural or functional model of the software. Successful at finding faults; 'best' practice; a way of deriving good test cases; a way of objectively measuring a test effort. Testing should be rigorous, thorough and systematic.

Why dynamic test techniques?

Exhaustive testing (use of all possible inputs and conditions) is impractical. We must use a subset of all possible test cases, with a high probability of detecting faults. We need thought processes that help us select test cases more intelligently; test case design techniques are such thought processes.