TGDC Meeting, July 2011

VVSG 1.1 Reliability
David Flater, Ph.D.

Computer Scientist, Software and Systems Division, ITL
http://vote.nist.gov

Previous Public Review Draft

The reliability benchmarks were made more stringent and traceable to a use case provided by former TGDC member Paul Miller, working with other election officials.

The test method was changed from a standalone Probability Ratio Sequential Test to classical hypothesis testing using all available data:

  A demonstration of non-conformity can easily occur.

  Conclusive results are never guaranteed and are impossible without at least X volume of testing.

The plan was to give a pass to any system that did not demonstrate non-conformity.
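
As a minimal sketch of that earlier decision rule (the benchmark failure probability and significance level here are invented placeholders, not the actual VVSG test parameters):

from scipy.stats import binom

P_BENCH = 1e-4  # hypothetical benchmark probability of failure per operation
ALPHA = 0.05    # assumed significance level

def demonstrates_nonconformity(failures, operations):
    # Reject "the system meets the benchmark" only if this many failures
    # (or more) would be improbable at the benchmark rate: P(X >= failures).
    p_value = binom.sf(failures - 1, operations, P_BENCH)
    return p_value < ALPHA

# Under the earlier plan, any system that did not demonstrate non-conformity
# passed -- even one tested on far too little volume for the test to have power.
print(demonstrates_nonconformity(failures=0, operations=100))  # False -> pass

In other words, a small test volume can never demonstrate non-conformity, so under this plan it would always pass; that is the weakness the EAC objected to on the next slide.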

Response from EAC

To pass without demonstrating conformity is unacceptable.

Testing long enough to demonstrate conformity is not doable and would be of limited validity anyway. (Reliability can't be tested in; it must be built in.)

Move to best practices for quality assurance, reliability engineering, and analysis.

Volume and stress testing is a validation of that work, not a demonstration of reliability in and of itself.

Specific methods of reliability analysis should not be prescribed.

Impact on VVSG 1.1

The reliability benchmarks will be expressed in terms of the probabilities of critical and non-critical failures.

Manufacturers will be required to deliver credible reliability analyses for their systems (e.g., FMEA). The specific methods to be used will not be prescribed.

Hypothesis testing will still be used for accuracy and misfeed rate, but demonstration of conformity will be required (see the sketch after this list).

Incidentally, the maintainability and availability sections will go away.
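
As a minimal sketch of what "demonstration of conformity" means for the misfeed rate (the benchmark rate and significance level are invented, not the VVSG 1.1 values): conformity is demonstrated only by rejecting the hypothesis that the misfeed rate is at or above the benchmark.

from scipy.stats import binom

R_BENCH = 0.002  # hypothetical benchmark misfeed rate per ballot
ALPHA = 0.05     # assumed significance level

def demonstrates_conformity(misfeeds, ballots):
    # Reject "the misfeed rate is at or above the benchmark" only if this
    # few misfeeds (or fewer) would be improbable at the benchmark rate.
    p_value = binom.cdf(misfeeds, ballots, R_BENCH)
    return p_value < ALPHA

# Zero misfeeds in 1,000 ballots: 0.998**1000 ~ 0.135, not significant;
# zero misfeeds in 2,000 ballots: 0.998**2000 ~ 0.018 < 0.05.
print(demonstrates_conformity(misfeeds=0, ballots=1000))  # False
print(demonstrates_conformity(misfeeds=0, ballots=2000))  # True

Unlike the earlier plan, insufficient test volume here yields a failure to demonstrate conformity rather than a pass.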

Limitations

In a reliability analysis, the probability of a software (logic) failure "cannot be determined;"* at best it can be extrapolated from the observed rate of failure or fault correction using a statistical model (see the sketch after the footnote).

The previous reliability tests were strictly hardware-oriented, so this is actually an improvement.

Conformity assessment will require the "expert judgment" of a reliability engineer.

* Clifton A. Ericson II, Hazard Analysis Techniques for System Safety, 2005, Table 13.1 (Hardware/Software FMEA Characteristics).
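
As one illustration of such extrapolation, here is a sketch that fits the Goel-Okumoto software reliability growth model to an invented failure log (both the model choice and the data are assumptions; a reliability engineer would have to justify whatever model is actually used):

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical test log: cumulative software failures observed by time t (hours)
t = np.array([100.0, 200.0, 400.0, 800.0, 1600.0])
cum_failures = np.array([5.0, 8.0, 11.0, 13.0, 14.0])

def goel_okumoto(t, a, b):
    # Mean cumulative failures: a = expected total faults, b = detection rate
    return a * (1.0 - np.exp(-b * t))

(a, b), _ = curve_fit(goel_okumoto, t, cum_failures, p0=(20.0, 0.001))

# Extrapolated failure intensity at the end of testing: lambda(t) = a*b*exp(-b*t)
rate = a * b * np.exp(-b * t[-1])
print(f"Estimated residual failure intensity: {rate:.5f} failures/hour")

The slide's caveat applies: such an estimate is an extrapolation from observed failures and fault correction, not a determined probability.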

Discussion/Questions
