
Verification and Validation of Simulation Models


CSC 524

VERIFICATION AND VALIDATION OF SIMULATION

MODELS

AMUDA, Tosin Joseph

090805009


UNIVERSITY OF LAGOS


Abstract

Simulation models are increasingly used to solve difficult scientific and social problems

and to aid in decision-making. The developers and users of these models, the decision makers

using information obtained from the results of these models, and the individuals affected by

decisions based on such models are all rightly concerned with whether a model and its results

are “correct”.

Consequently, no model can be accepted unless it has passed the tests of validation. It is therefore essential to establish the credibility of a simulation model, which usually involves a twin process: verification and validation. The rest of this article reviews the literature on how to verify and validate simulation models in order to establish a model's credibility to an acceptable level.


Table of Contents

Abstract
Section 1: Introduction
Section 2: Verification
    2.1: Good Programming Practice
Section 3: Validation
    3.1 Face Validity
    3.2 Validation of Model Assumptions
        Structural Assumptions
        Data Assumptions
    3.3 Validating Input-Output Transformations
        Hypothesis Testing
        Model Accuracy as a Range
        Confidence Intervals
Section 4: Conclusion
References


Section 1: Introduction

There is always a need to evaluate and improve the performance of a system that evolves

over time. To study the behaviour of such a system, one must first construct a representation (a close approximation) of it. This representation of the construction and working of a system of interest is known as a model. Experiments are then carried out on the model in order to imitate the operations of the actual system. This process, usually carried out on a computer, is known as simulation. Generally, a model intended for a simulation study is a mathematical model developed with the help of simulation software.

Simulation models are approximate imitations of real-world systems built on several assumptions, and they never exactly reproduce the real-world system. Because of these assumptions and approximations, an important issue in modelling is model validity. A model should therefore be verified and validated to the degree needed for the model's intended purpose or application.

This concern for quantifying and building credibility in simulation models is addressed by

Verification and Validation (V & V). This paper uses the definitions of V & V given in the

classic simulation textbook by Law and Kelton (1991, p.299): "Verification is determining

that a simulation computer program performs as intended, i.e., debugging the computer

program…Validation is concerned with determining whether the conceptual simulation

model (as opposed to the computer program) is an accurate representation of the system

under study". Both verification and validation are processes that accumulate evidence of a model's correctness or accuracy for a specific scenario; thus, V & V cannot prove that a model is correct and accurate for all possible scenarios, but it can provide evidence that the model is sufficiently accurate for its intended use.


Another popular author on V & V in simulation relates the various phases of modelling with V

& V in Figure 1: Sargent (1991, p.38) states "the conceptual model is the

mathematical/logical/verbal representation (mimic) of the problem entity developed for a

particular study; and the computerized model is the conceptual model implemented on a

computer. The conceptual model is developed through an analysis and modelling phase, the

computerized model is developed through a computer programming and implementation

phase, and inferences about the problem entity are obtained by conducting computer

experiments on the computerized model in the experimentation phase".

Figure 1: Simplified Version of the Modeling Process

There is no standard theory on V & V; instead, there exist a number of philosophical theories, statistical techniques, software practices, and so on. However, the emphasis of this

article is on statistical techniques, which may yield reproducible, objective, quantitative data

about the quality of simulation models.

This article is organized as follows. Section 2 discusses verification. Section 3 examines

validation. Section 4 provides conclusions. It is followed by a list of references.


Section 2: Verification

Once the simulation model has been programmed, the analysts/programmers must check if

this computer code contains any programming errors ('bugs') to ensure that the conceptual

model is reflected accurately in the computerized representation. The objective of model

verification is to ensure that the implementation of the model is correct.

Various processes and techniques are used to ensure that the model matches its specifications and assumptions with respect to the model concept. Many common-sense suggestions are

applicable, but none is perfect, for example: 1) general good programming practice such as

object oriented programming, 2) checking of intermediate simulation outputs through tracing

and statistical testing per module, 3) comparing (through statistical tests) final simulation

outputs with analytical results, and 4) animation.

Many software engineering techniques used for software verification are applicable to

simulation model verification.
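The third suggestion, comparing final simulation outputs with analytical results, can be sketched as follows. This is a minimal illustration, not part of the original paper: it assumes a hypothetical M/M/1 queue as the simplified model, with arrival rate lam = 1 and service rate mu = 2, for which the analytical mean wait in queue is Wq = lam / (mu * (mu - lam)).

```python
import random

# Sketch: verify a queueing simulation against a known analytical result.
# Lindley's recursion gives each customer's waiting time in queue for a
# single-server FIFO queue; the simulated mean should approach the
# analytical M/M/1 value Wq = lam / (mu * (mu - lam)).

def mm1_mean_wait(lam, mu, n_customers, seed=1):
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n_customers):
        total += wait
        service = rng.expovariate(mu)          # exponential service time
        interarrival = rng.expovariate(lam)    # gap to the next arrival
        wait = max(0.0, wait + service - interarrival)  # Lindley recursion
    return total / n_customers

lam, mu = 1.0, 2.0
analytical = lam / (mu * (mu - lam))           # Wq = 0.5
simulated = mm1_mean_wait(lam, mu, 200_000)
print(f"simulated={simulated:.3f}  analytical={analytical:.3f}")
```

A full verification step would replicate the run several times and apply a formal statistical test to the difference rather than eyeballing a single estimate.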

2.1: Good Programming Practice

Software engineers have developed numerous procedures for writing good computer

programs and for verifying the resulting software, in general (not specifically in simulation).

Some of the best software engineering practices are: object-oriented programming, formal technical reviews, structured walk-throughs, and correctness proofs.

There are many software engineering testing and quality assurance techniques that can be used to verify a model. These include, but are not limited to: having the model checked by an expert (e.g. a chief programmer), making logic flow diagrams that include each logically possible action, examining the model output for reasonableness under a variety of settings of the input parameters, and using an interactive debugger.
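The logic checks above can also be built into the program itself. The sketch below is a hypothetical illustration (the event loop and its invariants are assumptions, not from any cited source) of how assertions flag impossible states early, complementing an interactive debugger:

```python
# Sketch: invariant assertions as a per-module verification aid.
# In an event-driven simulation, the clock must never run backwards and
# a queue length must never go negative; asserting both catches logic
# bugs at the moment they occur.

def run_events(events):
    """events: list of (time, delta_queue) pairs; returns final queue length."""
    clock, queue_len = 0.0, 0
    for time, delta in sorted(events):
        assert time >= clock, "event scheduled in the past"  # clock invariant
        clock = time
        queue_len += delta
        assert queue_len >= 0, "negative queue length"       # state invariant
    return queue_len

final = run_events([(0.5, +1), (1.2, +1), (2.0, -1)])
print(final)  # 1
```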


Section 3: Validation

Once the simulation model is programmed correctly, we face the next question: is the conceptual simulation model (as opposed to the computer program) an accurate representation of the system under study?

There are many approaches described in the literature that can be used to validate a computer model. The approaches range from subjective reviews to objective statistical tests.

By “objectively,” we mean using some type of mathematical procedure or statistical test, e.g.,

hypothesis tests or confidence intervals. One approach that is commonly used is to have the

model builders determine validity of the model through a series of tests.

Naylor and Finger [1967] formulated a three-step approach to model validation that has been

widely followed:

Step 1. Build a model that has high face validity.

Step 2. Validate model assumptions.

Step 3. Compare the model input-output transformations to corresponding input-output

transformations for the real system.

3.1 Face Validity

A model that has face validity appears to be a reasonable imitation of a real-world system to

people who are knowledgeable about the real-world system. Face validity is tested by having users and people familiar with the system examine model output for reasonableness

and in the process identify deficiencies. An added advantage of having the users involved in

validation is that the model's credibility to the users and the user's confidence in the model

increases. Sensitivity to model inputs can also be used to judge face validity. For example, if

a simulation of a fast food restaurant drive through was run twice with customer arrival rates


of 20 per hour and 40 per hour, then model outputs such as average wait time or maximum number of customers waiting would be expected to increase with the arrival rate.
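This sensitivity check can be sketched as follows. The sketch is illustrative only: it assumes the drive-through behaves as a single-server queue with exponential interarrival and service times, and the service rate of 60 per hour is a hypothetical choice.

```python
import random

# Sketch: a face-validity sensitivity check. Run the (hypothetical)
# drive-through model at two arrival rates; the average wait should
# increase with the arrival rate, otherwise the model lacks face validity.

def avg_wait(arrivals_per_hr, services_per_hr, n=100_000, seed=7):
    rng = random.Random(seed)
    wait, total = 0.0, 0.0
    for _ in range(n):
        total += wait
        wait = max(0.0, wait
                   + rng.expovariate(services_per_hr)   # service time
                   - rng.expovariate(arrivals_per_hr))  # gap to next arrival
    return total / n  # in hours

w20 = avg_wait(20, 60)   # 20 customers/hr
w40 = avg_wait(40, 60)   # 40 customers/hr
print(w20 < w40)  # expect True: wait grows with arrival rate
```

With these rates the server utilization rises from 1/3 to 2/3, so the average wait should clearly increase; a model that failed this comparison would not pass the face-validity check.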

3.2 Validation of Model Assumptions

Assumptions made about a model generally fall into two categories: structural assumptions about how the system works and data assumptions.

Structural Assumptions

Assumptions made about how the system operates and how it is physically arranged are

structural assumptions. For example, how many servers are there in a fast-food drive-through lane, and if there is more than one, how are they utilized? Do the servers work in parallel, where a customer completes a transaction by visiting a single server, or does one server take orders and handle payment while the other prepares and serves the order? Many structural problems

in the model come from poor or incorrect assumptions. If possible, the workings of the actual system should be closely observed to understand how it operates. The system's structure and operation should also be verified with users of the actual system.

Data Assumptions

There must be a sufficient amount of appropriate data available to build a conceptual model

and validate a model. Lack of appropriate data is often the reason attempts to validate a

model fail. Data should be verified to come from a reliable source. A typical error is

assuming an inappropriate statistical distribution for the data. The assumed statistical model

should be tested using goodness of fit tests and other techniques. Examples of goodness of fit

tests are the Kolmogorov–Smirnov test and the chi-square test. Any outliers in the data

should be checked.
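As an illustration, a one-sample Kolmogorov–Smirnov statistic can be computed directly: it is the largest gap between the empirical CDF and the hypothesised CDF. The sample, the hypothesised exponential rate, and the large-sample 5% critical value 1.36/sqrt(n) used below are all assumptions for this sketch.

```python
import math
import random

# Sketch: goodness-of-fit check of interarrival data against a
# hypothesised exponential distribution, using the one-sample
# Kolmogorov-Smirnov statistic D = max gap between empirical and
# hypothesised CDFs.

def ks_statistic_exponential(data, rate):
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        cdf = 1.0 - math.exp(-rate * x)  # exponential CDF at x
        # Compare against the empirical CDF just before and after x.
        d = max(d, abs((i + 1) / n - cdf), abs(i / n - cdf))
    return d

rng = random.Random(3)
sample = [rng.expovariate(2.0) for _ in range(1000)]  # illustrative data
d = ks_statistic_exponential(sample, 2.0)
critical = 1.36 / math.sqrt(len(sample))  # approx. 5% critical value
print("reject exponential fit" if d > critical else "fit not rejected")
```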


3.3 Validating Input-Output Transformations

The model is viewed as an input-output transformation for these tests. The validation test

consists of comparing outputs from the system under consideration to model outputs for the

same set of input conditions. Data recorded while observing the system must be available in

order to perform this test. The model output that is of primary interest should be used as the measure of performance. For example, if the system under consideration is a fast-food drive-through where the input to the model is customer arrival time and the output measure of performance is average customer time in line, then the actual arrival time and time spent in line for customers at the drive-through would be recorded. The model would be run with the actual arrival times, and the model's average time in line would be compared with the actual average time spent in line using one or more tests.

Hypothesis Testing

Statistical hypothesis testing using the t-test can be used as a basis to accept the model as

valid or reject it as invalid.

The hypothesis to be tested is

H0: the model measure of performance = the system measure of performance

versus

H1: the model measure of performance ≠ the system measure of performance.

The test is conducted for a given sample size and level of significance, α. To perform the test, a number n of statistically independent runs of the model are conducted, and an average or expected value, E(Y), with sample standard deviation S, for the variable of interest is produced. Then the test statistic

t0 = (E(Y) − μ0) / (S / √n)

is computed for the given α, n, E(Y), S and the observed value for the system, μ0, and the critical value t_{α/2, n−1} for α and n − 1 degrees of freedom is read from a t-table. If

|t0| > t_{α/2, n−1}

reject H0: the model needs adjustment.
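The computation can be sketched as follows. The replication values, the system mean μ0, and the tabulated critical value t_{0.025, 9} = 2.262 (for α = 0.05 and n − 1 = 9 degrees of freedom) are hypothetical illustrations, not data from the paper.

```python
import math
import statistics

# Sketch: one-sample t-test comparing model output against the observed
# system measure of performance. All numbers are hypothetical.

model_runs = [4.1, 3.8, 4.4, 4.0, 3.9, 4.3, 4.2, 3.7, 4.0, 4.1]  # e.g. avg wait (min)
mu0 = 4.3                                # observed system measure of performance
n = len(model_runs)
mean = statistics.mean(model_runs)       # E(Y)
s = statistics.stdev(model_runs)         # sample standard deviation S
t0 = (mean - mu0) / (s / math.sqrt(n))   # test statistic
t_crit = 2.262                           # t_{alpha/2, n-1} from a t-table

reject = abs(t0) > t_crit
print(f"t0={t0:.2f}, reject H0: {reject}")
```

Here |t0| exceeds the critical value, so H0 is rejected and the model would need adjustment.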

Model Accuracy as a Range

A statistical technique where the amount of model accuracy is specified as a range has

recently been developed. The technique uses hypothesis testing to accept a model if the

difference between a model's variable of interest and a system's variable of interest is within a

specified range of accuracy. A requirement is that both the system data and model data be

approximately Normally Independent and Identically Distributed (NIID). The t-test statistic is

used in this technique. If the mean of the model is μm and the mean of system is μs then the

difference between the model and the system is D = μm − μs. The hypothesis to be tested is whether D lies within the acceptable range of accuracy.

Confidence Intervals

Confidence intervals can be used to evaluate whether a model is "close enough" to a system for some variable of interest. The difference between the known model value, μ0, and the system value, μ, is checked to see whether it is smaller than a value below which the model is considered valid with respect to that variable of interest; this value is denoted by the symbol ε. To perform the test, a number, n, of statistically independent runs of the model are conducted, and a mean or expected value, E(Y), for the simulation output variable of interest Y, with a standard deviation S, is produced. A confidence interval for the difference is then constructed from E(Y), S and n, and the model is accepted as close enough for that variable if the entire interval lies within ±ε.
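The interval check can be sketched as follows, again with hypothetical replication values, system value, and accuracy requirement ε; t_{0.025, 9} = 2.262 is the tabulated critical value for α = 0.05 and 9 degrees of freedom.

```python
import math
import statistics

# Sketch: confidence-interval validation. The model is judged "close
# enough" when the whole interval for the model-system difference lies
# inside [-eps, +eps]. All numbers are hypothetical.

y = [4.1, 3.8, 4.4, 4.0, 3.9, 4.3, 4.2, 3.7, 4.0, 4.1]  # model replications
mu, eps = 4.3, 0.5                 # system value and accuracy requirement
n = len(y)
mean, s = statistics.mean(y), statistics.stdev(y)
half_width = 2.262 * s / math.sqrt(n)      # t_{alpha/2, n-1} * S / sqrt(n)
lo, hi = mean - mu - half_width, mean - mu + half_width
close_enough = max(abs(lo), abs(hi)) <= eps  # entire CI within [-eps, eps]
print(f"CI for difference: [{lo:.2f}, {hi:.2f}]  valid: {close_enough}")
```

Note that a model can fail the equality hypothesis test yet still pass this check, because ε encodes the accuracy actually required rather than exact agreement.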


Section 4: Conclusion

This paper surveyed verification and validation (V & V) of simulation models. It emphasized

statistical techniques that yield reproducible, objective, quantitative data about the quality of

simulation models.

For verification it discussed the following techniques (see Section 2):

1) General good programming practice, such as object-oriented programming;

2) Checking of intermediate simulation outputs through tracing and statistical testing per module;

3) Comparing final simulation outputs with analytical results for simplified simulation

models, using statistical tests;

4) Animation.

For validation it discussed the following techniques (see Section 3):

1) Building a model that has high face validity.

2) Validating model assumptions.

3) Comparing the model input-output transformations to corresponding input-output transformations for the real system.


References

1. Banks, Jerry; Carson, John S.; Nelson, Barry L.; Nicol, David M. (2010). Discrete-Event System Simulation, 5th ed. Upper Saddle River: Pearson Education. ISBN 0136062121.

2. Sargent, Robert G. (2011). "Verification and Validation of Simulation Models." Proceedings of the 2011 Winter Simulation Conference. http://www.informs-sim.org/wsc11papers/016.pdf

3. Carson, John (2002). "Model Verification and Validation." Proceedings of the 2002 Winter Simulation Conference. http://informs-sim.org/wsc02papers/008.pdf

4. Naylor, T. H., and Finger, J. M. (1967). "Verification of Computer Simulation Models." Management Science, Vol. 14, No. 2, pp. B92–B101. Cited in Banks et al. (2010), p. 396. http://mansci.journal.informs.org/content/14/2/B-92

5. Sargent, R. G. (2010). "A New Statistical Procedure for Validation of Simulation and Stochastic Models." Technical Report SYR-EECS-2010-06, Department of Electrical Engineering and Computer Science, Syracuse University, Syracuse, New York.

6. Law, A. M., and Kelton, W. D. (1991). Simulation Modeling and Analysis, 2nd ed. New York: McGraw-Hill.

7. Kleijnen, Jack P. C. (1995). "Theory and Methodology: Verification and Validation of Simulation Models." European Journal of Operational Research 82(1), 145–162.
