
CS 501: Software Engineering, Spring 2002

Lecture 23: Reliability III


Administration


Static and Dynamic Verification

Static verification: Techniques that examine the software without executing it.

• May be manual or use computer tools.

Dynamic verification:

• Testing the software with trial data.

• Debugging to remove errors.


Test Design

Testing can never prove that a system is correct. It can only show (a) that the system is correct in specific cases, or (b) that it has a fault.

• The objective of testing is to find faults.

• Testing is never comprehensive.

• Testing is expensive.


Testing and Debugging

Testing is most effective if divided into stages:

• Unit testing at various levels of granularity

tests by the developer; emphasis is on the accuracy of the actual code (a unit-test sketch follows this list)

• System and sub-system testing

uses trial data; emphasis is on integration and interfaces

• Acceptance testing

uses real data in realistic situations; emphasis is on meeting requirements
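
As a minimal sketch of the kind of check a developer's unit test makes, assuming a small hypothetical function (toRoman) and a hand-rolled check helper standing in for a test framework such as JUnit:

    // Minimal unit-test sketch (hypothetical toRoman function); a framework
    // such as JUnit automates this run-and-compare pattern.
    public class RomanNumeralTest {

        // Unit under test: converts 1..10 to a Roman numeral.
        static String toRoman(int n) {
            String[] numerals = {"I", "II", "III", "IV", "V",
                                 "VI", "VII", "VIII", "IX", "X"};
            if (n < 1 || n > 10) throw new IllegalArgumentException("out of range: " + n);
            return numerals[n - 1];
        }

        public static void main(String[] args) {
            check(toRoman(1).equals("I"), "lower boundary");
            check(toRoman(4).equals("IV"), "subtractive form");
            check(toRoman(10).equals("X"), "upper boundary");
            boolean threw = false;
            try { toRoman(11); } catch (IllegalArgumentException e) { threw = true; }
            check(threw, "invalid input rejected");
            System.out.println("All unit tests passed.");
        }

        static void check(boolean ok, String name) {
            if (!ok) throw new AssertionError("FAILED: " + name);
        }
    }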


Acceptance Testing

Alpha Testing: Clients operate the system in a realistic but non-production environment.

Beta Testing: Clients operate the system in a carefully monitored production environment.

Parallel Testing: Clients operate the new system alongside the old production system, feed both the same data, and compare the results (see the sketch below).
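
As an illustration, parallel testing can be as simple as replaying one batch of transactions through both systems and reporting any disagreement. The sketch below assumes hypothetical OldPayroll and NewPayroll implementations of a shared interface:

    // Hypothetical parallel-testing sketch: feed the old and new systems
    // the same data and compare their results transaction by transaction.
    public class ParallelTest {
        interface Payroll { double netPay(double gross); }

        static class OldPayroll implements Payroll {      // trusted production system
            public double netPay(double g) { return g * 0.72; }
        }
        static class NewPayroll implements Payroll {      // replacement under test
            public double netPay(double g) { return g - g * 0.28; }
        }

        public static void main(String[] args) {
            double[] sameData = {0.0, 100.0, 2500.50, 99999.99};  // one batch of inputs
            Payroll oldSys = new OldPayroll();
            Payroll newSys = new NewPayroll();
            for (double gross : sameData) {
                double a = oldSys.netPay(gross), b = newSys.netPay(gross);
                if (Math.abs(a - b) > 1e-6)               // tolerate rounding differences
                    System.out.printf("MISMATCH at %.2f: old=%.2f new=%.2f%n", gross, a, b);
            }
            System.out.println("Parallel comparison complete.");
        }
    }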


The Testing Process

System and Acceptance Testing is a major part of a software project.

• It requires time on the schedule

• It may require substantial investment in datasets, equipment, and test software.

• Good testing requires good people!

• Management and client reports are important parts of testing.

What is the definition of "done"?


Testing Strategies

• Bottom-up testing. Each unit is tested with its own test environment.

• Top-down testing. Large components are tested with dummy stubs (a stub sketch follows this list).

user interfaces; work-flow; client and management demonstrations

• Stress testing. Tests the system at and beyond its limits.

real-time systems; transaction processing
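
As a hedged illustration of top-down testing, the sketch below exercises a hypothetical high-level reporting component before its data layer exists, substituting a dummy stub that returns canned values:

    // Hypothetical top-down testing sketch: the high-level logic is tested
    // against a dummy stub standing in for an unwritten lower-level component.
    public class TopDownTestSketch {
        interface DataStore { int recordCount(); }        // lower level, not yet implemented

        static class DataStoreStub implements DataStore { // dummy stub with canned data
            public int recordCount() { return 42; }
        }

        // High-level component under test.
        static String summaryReport(DataStore store) {
            return "Records on file: " + store.recordCount();
        }

        public static void main(String[] args) {
            String report = summaryReport(new DataStoreStub());
            if (!report.equals("Records on file: 42"))
                throw new AssertionError("top-down test failed: " + report);
            System.out.println("Top-down test passed against the stub.");
        }
    }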


Test Cases

Test cases are specific tests that are chosen because they are likely to find faults.

Test cases are chosen to balance expense against chance of finding serious faults.

• Cases chosen by the development team are effective in testing known vulnerable areas.

• Cases chosen by experienced outsiders and clients will be effective in finding gaps left by the developers.

• Cases chosen by inexperienced users will find other faults.


Test Case Selection: Coverage of Inputs

The objective is to test all classes of input (a sketch follows this list).

• Classes of data -- major categories of transaction and data inputs.

Cornell example: (undergraduate, graduate, transfer, ...) by (college, school, program, ...) by (standing) by (...)

• Ranges of data -- typical values, extremes

• Invalid data, reversals, and special cases.
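
A hedged sketch of this selection strategy, assuming a made-up per-credit fee function: one case per class of data, the extremes of a range, and an invalid value.

    // Hypothetical input-coverage sketch: one case per class of data,
    // the extremes of a range, and an invalid input.
    public class InputCoverageSketch {
        // Unit under test: per-credit fee with a full-time cap at 12 hours.
        static int fee(String status, int creditHours) {
            if (creditHours < 1 || creditHours > 24)
                throw new IllegalArgumentException("bad hours: " + creditHours);
            int rate = status.equals("graduate") ? 800 : 600;
            return Math.min(creditHours, 12) * rate;
        }

        public static void main(String[] args) {
            check(fee("undergraduate", 6) == 3600, "typical undergraduate");  // class 1
            check(fee("graduate", 6) == 4800, "typical graduate");            // class 2
            check(fee("undergraduate", 1) == 600, "minimum of range");        // extreme
            check(fee("undergraduate", 24) == 7200, "maximum of range (cap)");// extreme
            boolean threw = false;
            try { fee("undergraduate", 0); } catch (IllegalArgumentException e) { threw = true; }
            check(threw, "invalid data rejected");                            // special case
            System.out.println("Input-coverage cases passed.");
        }

        static void check(boolean ok, String name) {
            if (!ok) throw new AssertionError("FAILED: " + name);
        }
    }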


Test Case Selection: Program

The objective is to test all functions of each computer program.

• Paths through the computer programs

Program flow graph; check that every path is executed at least once

• Dynamic program analyzers (a counting sketch follows this list)

Count number of times each path is executed

Highlight or color source code

Cannot be used with time-critical software
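
A minimal sketch of what a dynamic program analyzer records, with hand-inserted counters standing in for automatic instrumentation (the function and counters are hypothetical):

    // Hypothetical dynamic-analysis sketch: hand-inserted counters record how
    // often each branch executes; real analyzers instrument code automatically.
    public class BranchCounterSketch {
        static long thenCount = 0, elseCount = 0, loopBodyCount = 0;

        static int sumOfPositives(int[] values) {
            int sum = 0;
            for (int v : values) {
                loopBodyCount++;                          // loop body executed
                if (v > 0) { thenCount++; sum += v; }     // then-branch
                else       { elseCount++; }               // else-branch
            }
            return sum;
        }

        public static void main(String[] args) {
            sumOfPositives(new int[] {3, -1, 4, 0, 5});
            // A counter left at zero reveals a path the test data never exercised.
            System.out.println("loop body: " + loopBodyCount
                    + ", then: " + thenCount + ", else: " + elseCount);
        }
    }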


Program Flow Graph

[Figure: program flow graphs for the if-then-else and loop-while constructs]


Statistical Testing

• Determine the operational profile of the software (a sampling sketch follows this list)

• Select or generate a profile of test data

• Apply test data to system, record failure patterns

• Compute statistical values of metrics under test conditions
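
A hedged sketch of the first and last steps, assuming a made-up operational profile (80% queries, 15% updates, 5% restarts) and a stand-in for the real system under test:

    import java.util.Random;

    // Hypothetical statistical-testing sketch: draw transactions from an
    // assumed operational profile, apply them, and record failures.
    public class StatisticalTestSketch {
        static final Random RNG = new Random(42);   // fixed seed: run is repeatable

        public static void main(String[] args) {
            int failures = 0, trials = 100_000;
            for (int i = 0; i < trials; i++) {
                double r = RNG.nextDouble();        // profile: 80% query, 15% update, 5% restart
                String txn = r < 0.80 ? "query" : (r < 0.95 ? "update" : "restart");
                if (!systemUnderTest(txn)) failures++;
            }
            // Metric computed under test conditions: observed failure rate.
            System.out.printf("observed failure rate: %.5f%n", (double) failures / trials);
        }

        // Stand-in for the real system: returns false when a failure occurs.
        static boolean systemUnderTest(String txn) {
            return !(txn.equals("restart") && RNG.nextDouble() < 0.01);  // rare restart fault
        }
    }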


Statistical Testing

Advantages:

• Can test with very large numbers of transactions
• Can test with extreme cases (high loads, restarts, disruptions)
• Can repeat after system modifications

Disadvantages:

• Uncertainty in operational profile (unlikely inputs)
• Expensive
• Can never prove high reliability


Regression Testing

REGRESSION TESTING IS ONE OF THE KEY TECHNIQUES OF SOFTWARE ENGINEERING

Applied to modified software to provide confidence that modifications behave as intended and do not adversely affect the behavior of unmodified code.

• Basic technique is to repeat entire testing process after every change, however small.


Regression Testing: Program Testing

1. Collect a suite of test cases, each with its expected behavior.

2. Create scripts to run all test cases and compare the results with the expected behavior. (Scripts may be fully automatic or involve human interaction; a sketch follows this list.)

3. When a change is made, however small (e.g., a bug is fixed), add a new test case that illustrates the change (e.g., a test case that revealed the bug).

4. Before releasing the changed code, rerun the entire test suite.
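
A minimal sketch of such a suite, assuming a hypothetical normalize function; note the case added when a tab-handling bug was fixed, so the fix can never silently regress:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical regression-suite sketch: every case records its expected
    // behavior; the whole suite is rerun before any changed code is released.
    public class RegressionSuite {
        // Unit under test: trims and collapses internal whitespace.
        static String normalize(String s) {
            return s.trim().replaceAll("\\s+", " ");
        }

        public static void main(String[] args) {
            Map<String, String> cases = new LinkedHashMap<>();
            cases.put("hello  world", "hello world");     // original case
            cases.put("  padded  ", "padded");            // original case
            cases.put("tab\tseparated", "tab separated"); // added when a tab bug was fixed

            int failed = 0;
            for (Map.Entry<String, String> c : cases.entrySet()) {
                String actual = normalize(c.getKey());
                if (!actual.equals(c.getValue())) {
                    failed++;
                    System.out.println("REGRESSION: input [" + c.getKey()
                            + "] expected [" + c.getValue() + "] got [" + actual + "]");
                }
            }
            System.out.println(failed == 0 ? "Suite passed." : failed + " case(s) failed.");
        }
    }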


Discussion of Pfleeger, Chapter 9

Format:

State a question.

Ask a member of the class to answer.

(Sorry if I pronounce your name incorrectly.)

Provide opportunity for others to comment.

When answering:

Stand up.

Give your name or NetID. Make sure the TA hears it.

Speak clearly so that all the class can hear.


Question 1: Configuration Management

(a) Explain the problem of configuration management.

(b) What techniques are used to avoid software faults in configuration management?


Question 2: Deltas

(a) Explain the term delta in system management.

(b) What is the alternative?

(c) What do you think are the benefits of the two approaches?


Question 3: Function Testing

(a) What is function testing?

(b) What is not tested during the function testing phase?


Question 4: Reliability Testing

(a) Why is testing software reliability different from testing hardware reliability?

(b) Why does six-sigma testing not apply to software?

(c) There is a trade-off between expenditure and software reliability:

(i) What are the implications before release of the software?

(ii) What are the implications after release of the software?


Question 5: Documentation of Testing

Explain the purpose of each of the following:

(a) Test plan

(b) Test specification and evaluation

(c) Test description

(d) Test analysis report


Question 6: Safety Critical Software

A software system fails and several lives are lost. An inquiry discovers that the test plan did not consider the case that caused the failure. Who is responsible:

(a) The testers for not noticing the missing cases?

(b) The test planners for not writing a complete test plan?

(c) The managers for not having checked the test plan?

(d) The customer for not having done a thorough acceptance test?