Evaluation of Safety-Critical Software, David L. Parnas, CACM, June 1990

Page 1

Evaluation of Safety-Critical Software
David L. Parnas, CACM, June 1990

Page 2

Coin toss

How many heads in a row would you see before deciding the coin is “fixed” (rigged to show heads)?
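The coin question is a small hypothesis test. As a sketch (my construction, not from the slide): under the null hypothesis of a fair coin, a run of n heads has probability 0.5**n, so we can pick a significance threshold and find the run length that would make us reject fairness. The threshold of 0.01 here is a hypothetical choice.

```python
# Sketch: smallest run of heads that rejects "the coin is fair"
# at a hypothetical significance level of 0.01.
alpha = 0.01  # assumed significance threshold, not from the slide
n = 0
while 0.5 ** n >= alpha:
    n += 1
print(n)  # 7: a run of 7 heads has probability 1/128 < 0.01 if the coin is fair
```

The same logic drives Table I: each failure-free test is a "toss" that came up clean, and we ask how many clean tosses justify rejecting "this product is unacceptable".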

Page 3

Table I

Table I shows that, if our design target was to have the probability of failure be less than 1 in 1000, performing between 4500 and 5000 tests (randomly chosen from the appropriate test case distribution) without failure would mean that the probability of an unacceptable product passing the test was less than 1 in a hundred.
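The arithmetic behind this row of Table I can be reconstructed (this is my sketch of the standard calculation, not a reproduction of Parnas's table): if an unacceptable product fails each randomly chosen test with probability at least p = 1/1000, it survives N independent tests with probability at most (1 - p)**N, and we want that below 1/100.

```python
import math

p = 1.0 / 1000.0      # minimum per-test failure probability of a bad product
alpha = 1.0 / 100.0   # acceptable chance that a bad product passes

# Smallest N with (1 - p)**N <= alpha.
n = math.ceil(math.log(alpha) / math.log(1.0 - p))
print(n)  # 4603, inside the 4500-5000 range quoted on the slide
```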

Page 4

Table II

Page 5

Parnas’ model

Is this the marble model?

We want 95% confidence that the reliability of the software running for 100 days is more than 80 percent.
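Under the same accept/reject model as Table I (an assumption on my part; the slide does not spell out the calculation), each 100-day run can be treated as one trial. If the true 100-day reliability were only R = 0.8, the chance of N consecutive failure-free runs is R**N, so 95% confidence requires R**N <= 0.05.

```python
import math

r = 0.80           # reliability we want to rule out falling below
confidence = 0.95  # required confidence level

# Smallest number of consecutive failure-free 100-day runs.
n = math.ceil(math.log(1.0 - confidence) / math.log(r))
print(n)  # 14 failure-free 100-day runs, i.e. nearly four years of testing
```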

Page 6

Linear Functions

Some engineers believe one can design black box tests without knowledge of what is inside the box. This is, unfortunately, not completely true. If we know that the contents of a black box exhibit linear behavior, the number of tests needed to make sure it would function as specified could be quite small. If we know that the function can be described by a polynomial of order “N,” we can use that information to determine how many tests are needed. If the function can have a large number of discontinuities, far more tests are needed.
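The polynomial claim can be made concrete: a polynomial of order N is completely determined by N + 1 points, so a black box known to compute a quadratic needs only 3 well-chosen tests. A small sketch (the hidden function is a hypothetical example, not from the slide):

```python
def lagrange_eval(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Hidden "black box", assumed quadratic: f(x) = 2x^2 - 3x + 1.
f = lambda x: 2 * x * x - 3 * x + 1

# Three observations pin down the whole quadratic ...
samples = [(x, f(x)) for x in (0.0, 1.0, 2.0)]

# ... so behavior at every other input is now predictable.
print(lagrange_eval(samples, 5.0))  # matches f(5.0) = 36.0
```

A function with many discontinuities gives us no such interpolation guarantee, which is why it needs far more tests.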

Page 7

FSM model

In effect, loading a program in the machine selects a terminal submachine consisting of all states that can be reached from the initial state. The software can be viewed as a finite state machine described by two very large tables. This model of software allows us to define what we mean by the number of faults in the software; it is the number of entries in the table that specify behavior that would be considered unacceptable.
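A toy illustration of the two-table view (my construction; real program tables are astronomically larger): one table gives the next state for each (state, input) pair, another would give the output. A fault is a table entry whose behavior disagrees with the specification, so the fault count is simply a count of bad entries.

```python
# Specified transition table for a tiny hypothetical machine.
spec_next = {("idle", "start"): "busy", ("idle", "stop"): "idle",
             ("busy", "start"): "busy", ("busy", "stop"): "idle"}

# As-built transition table, with one wrong entry.
impl_next = dict(spec_next)
impl_next[("busy", "stop")] = "busy"   # hypothetical fault: "stop" is ignored

# Number of faults = number of unacceptable table entries.
faults = sum(1 for key in spec_next if impl_next[key] != spec_next[key])
print(faults)  # 1
```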

Page 8

Finite State Machine Model

How does a program map to this model? The faults are the errors in the transition table. How is this useful for estimation?

Page 9

FSM and the triangle?

For the triangle problem, with the control-flow graph (CFG) as the FSM, are Parnas's fault counts always the same as ours?

Is this true for all programs and faults?

Page 10

Information Hiding

Parnas implies that information hiding or object-oriented programming improves the reliability of software.

Is this true for OO?

Page 11

OO and testing

Given two programs (one OO and one non-OO) that do the same thing, is E(Q) always lower in the OO version?

Page 12

Reliability Growth Models

Other portions of the literature are concerned with reliability growth models. These attempt to predict the reliability of the next (corrected) version on the basis of reliability data collected from previous versions. Most assume the failure rate is reduced whenever an error is corrected. They also assume the reductions in failure rates resulting from each correction are predictable. These assumptions are not justified by either theoretical or empirical studies of programs. Reliability growth models may be useful for management and scheduling purposes, but for safety-critical applications one must treat each modification of the program as a new program. Because even small changes can have major effects, we should consider data obtained from previous versions of the program to be irrelevant.
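The "predictable reduction" assumption Parnas criticizes can be made concrete with the classic Jelinski-Moranda growth model (my choice of example; the slide names no specific model): it assumes N initial faults, each contributing an equal amount phi to the failure rate, so every fix lowers the rate by an identical, predictable step.

```python
# Hypothetical parameters; nothing here is estimated from real data.
N = 10        # assumed initial number of faults
phi = 0.02    # assumed per-fault contribution to the failure rate

# Predicted failure rate after i faults have been found and fixed.
rates = [phi * (N - i) for i in range(N + 1)]
print(rates[0], rates[-1])  # starts at 0.2, reaches 0.0 when all faults are fixed
```

Parnas's objection is precisely that real corrections do not behave this way: fixes remove unequal amounts of failure probability and may introduce new faults, so for safety-critical work each corrected version must be evaluated as a new program.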
