Page 1: Practical Strategies for Model Verification

Practical Strategies for Model Verification

March 2015 Webinar

There are many attendees today! To avoid disrupting the Voice over Internet Protocol (VoIP) system, I will mute all. Please use the Questions feature on GoToWebinar, or feel free to email questions to [email protected], which we will answer offline. -- Martin

Presented by Martin Bezener, PhD, Statistics
Stat-Ease, Inc., Minneapolis, MN

[email protected]

*Presentation is posted at www.statease.com/webinar.html

©2015 Stat-Ease, Inc.

Page 2: Practical Strategies for Model Verification

Hello!

Please click on the “raise hand” button if you can hear me!

[Screenshot: the GoToWebinar Raise-Hand button]

Page 3: Practical Strategies for Model Verification

Introduction


Model confirmation is an important step in industrial experimentation.

This step usually receives minimal (if any) attention in practice.

Ignoring this step can result in a model that underperforms when used in future applications.

This step is especially important in predictive modeling.

Model confirmation gives the experimenter confidence that statistical results can be extrapolated to future work.

Page 4: Practical Strategies for Model Verification

Agenda

Three major topics will be discussed:

Part 1: Model checking vs. confirmation

Part 2: Some approaches to confirmation

Part 3: Practical tips and tricks

Page 5: Practical Strategies for Model Verification

Agenda

Part 1: Model checking vs. confirmation

Part 2: Some approaches to confirmation

Part 3: Practical tips and tricks

Page 6: Practical Strategies for Model Verification

Motivating Example: Typical Experimentation Sequence


Formulate Problem and Determine Goals

Design the Experiment (DOE)

Perform the Experiment

Analyze Data and Build Model

Check Model Assumptions

Page 7: Practical Strategies for Model Verification

Motivating Example: The Situation


The Problem: An engineer wants to study a chemical reaction, in particular, the following three factors:

• Time (min): Low = 40, High = 50

• Temperature (deg. C): Low = 80, High = 90

• Catalyst (%): Low = 2, High = 3

The two responses of interest are:

• Conversion (%)
• Activity

Knowledge of the relationship between the factors and the responses will help him better control the chemical process.

Page 8: Practical Strategies for Model Verification

Motivating Example: The Central Composite Design (CCD)


The design consists of 20 runs:

• 6 Center Points

• 8 Factorial (or Cube) Points

• 6 Star (or Axial) Points

12 runs were performed on one machine and 8 runs on another, so the design has two blocks.
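To make the run count concrete, here is a minimal sketch (illustrative Python, not Stat-Ease code) that enumerates the three point types in coded units; the axial distance alpha is left as a parameter, since its value depends on the design criterion (e.g. rotatability or orthogonal blocking):

```python
from itertools import product

import numpy as np

def ccd_points(alpha=1.682, n_center=6):
    """Coded runs of a three-factor CCD: cube, star, and center points."""
    factorial = np.array(list(product([-1, 1], repeat=3)))  # 8 cube points
    axial = np.vstack([sign * alpha * np.eye(3)[i]          # 6 star points
                       for i in range(3) for sign in (-1, 1)])
    center = np.zeros((n_center, 3))                        # 6 center points
    return np.vstack([factorial, axial, center])

print(ccd_points().shape)  # (20, 3)
```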

Page 9: Practical Strategies for Model Verification

Motivating Example: Data Analysis Results


• We only analyze the response Conversion using the Design-Expert® 9 (DX9) software.

• A full quadratic model is fit to the data.

• Main effect A is included since AC is significant.

• Lack-of-fit p-value: 0.8574 – no problems here.

• Question: Are the statistical assumptions satisfied?

Page 10: Practical Strategies for Model Verification

Model Checking vs. Confirmation


What are the assumptions that we must check?

Residuals are Normally distributed.

Residual variance is constant.

Residuals are independent across the runs.
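The next slides make these checks graphically; as a rough numeric companion (a sketch, not the DX9 diagnostics; residuals must be supplied in run order for the autocorrelation check), the same three assumptions can be screened like this:

```python
import numpy as np
from scipy import stats

def residual_checks(residuals, fitted):
    e = np.asarray(residuals, dtype=float)   # residuals in run order
    f = np.asarray(fitted, dtype=float)
    _, p_normal = stats.shapiro(e)                 # small p: non-normal residuals
    _, p_spread = stats.pearsonr(np.abs(e), f)     # small p: non-constant variance
    dw = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)  # near 2: no lag-1 autocorrelation
    return {"shapiro_p": p_normal, "spread_p": p_spread, "durbin_watson": dw}
```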

Page 11: Practical Strategies for Model Verification

Motivating Example: Model Checking

Check Residual Normality:


[Figure: normal probability plot of the externally studentized residuals (Design-Expert 9); points colored by Conversion value, 51.0 to 97.0]

Page 12: Practical Strategies for Model Verification

Motivating Example: Model Checking

Check Residual Variance:


[Figure: externally studentized residuals vs. predicted Conversion (Design-Expert 9); points colored by Conversion value]

Page 13: Practical Strategies for Model Verification

Motivating Example: Model Checking

Check Residual Autocorrelation:


[Figure: externally studentized residuals vs. run number (Design-Expert 9); points colored by Conversion value]

Page 14: Practical Strategies for Model Verification

Motivating Example: Model Checking

Check Model Accuracy:


[Figure: 3D response surface of Conversion (%) vs. time (min) and temperature (deg C) at catalyst = 2, with design points shown above and below the predicted surface]

Page 15: Practical Strategies for Model Verification

Motivating Example: Model Checking

Check Model Accuracy:


[Figure: 3D response surface of Conversion (%) vs. time (min) and temperature (deg C) at catalyst = 2.5, with design points shown above and below the predicted surface]

Page 16: Practical Strategies for Model Verification

Motivating Example: Model Checking


Check Model Accuracy:

Looks good, but…

Page 17: Practical Strategies for Model Verification

Model Checking vs. Confirmation


Why is this not enough?

The model will almost surely fit this particular data better than data collected from future replicates of the experiment.

Therefore, assessing the goodness-of-fit of a model using only the data that was used to build it may produce over-optimistic results. Estimates of the model's predictive accuracy on new data will likewise be too optimistic.

Model confirmation uses new data (data not used to build the model) to assess model adequacy.

Model confirmation also gives the experimenter an idea of how well a model will predict future cases.
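A toy simulation (invented data, not the webinar's example) of the first point: the same fitted polynomial scores better on the data used to build it than on a fresh replicate of the process.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
truth = 1 + 2 * x + 3 * x**2                 # true process

build = truth + rng.normal(0, 1, x.size)     # data used to build the model
fresh = truth + rng.normal(0, 1, x.size)     # future replicate of the experiment

fit = np.polyval(np.polyfit(x, build, deg=5), x)  # deliberately over-flexible fit
rmse = lambda y: np.sqrt(np.mean((y - fit) ** 2))
print(rmse(build), rmse(fresh))              # in-sample error is typically smaller
```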

Page 18: Practical Strategies for Model Verification

Model Checking vs. Confirmation


Formulate Problem and Determine Goals

Design the Experiment (DOE)

Perform the Experiment

Analyze Data and Build Model

Check Model Assumptions

Confirm (or Verify) the Model

Page 19: Practical Strategies for Model Verification

Motivating Example: Optimization


Goal: Maximize Conversion

We’ll use the following limits during optimization:

• 40 < Time < 50

• 80 < Temp < 90

• 2 < Catalyst < 3

Page 20: Practical Strategies for Model Verification

Motivating Example: Optimization Solution


Solution:

• Time = 50
• Temp = 90
• Catalyst = 3

[Figure: contour plot of Conversion (%) vs. time (min) and temperature (deg C) at catalyst = 3, with design points shown]

How do we know the solution is valid?

Page 21: Practical Strategies for Model Verification

Agenda

Part 1: Model checking vs. confirmation

Part 2: Some approaches to confirmation
• No confirmation
• Simple Confirmation
• Concurrent Confirmation

Part 3: Practical tips and tricks

Page 22: Practical Strategies for Model Verification

Agenda

Part 1: Model checking vs. confirmation

Part 2: Some approaches to confirmation
• No confirmation
• Simple Confirmation
• Concurrent Confirmation

Part 3: Practical tips and tricks

Page 23: Practical Strategies for Model Verification

No Confirmation

Do not confirm the model:

The experimenter believes the model is valid, and the consequences of it being wrong are not catastrophic.

The work covers a non-critical process.

The budget for experiments has been depleted.

It is not physically possible to run further experiments.

The primary objective of the experiment is screening.

Not Recommended

Page 24: Practical Strategies for Model Verification

Agenda

Part 1: Model checking vs. confirmation

Part 2: Some approaches to confirmation
• No confirmation
• Simple Confirmation
• Concurrent Confirmation

Part 3: Practical tips and tricks

Page 25: Practical Strategies for Model Verification

Simple Confirmation (Confirm with a Single Observation)

Run one observation at the optimal settings:

The experimenter believes the model is valid.

Being wrong is unacceptable.

The budget and/or time is tight.

The design doesn’t have blocks, or the block effects are small.

If the observation at the optimum is within the prediction interval, then the model has been confirmed. DX9 provides a fast and flexible way to perform model confirmation.

Page 26: Practical Strategies for Model Verification

Simple Confirmation (Confirm with a Single Observation)

Confirming results from the RSM Example:

[Screenshot: DX9 confirmation report for the RSM example, with a predicted value of 94.02]

Page 27: Practical Strategies for Model Verification

Simple Confirmation (Confirm with Multiple Observations)

Run several observations at the optimal settings:

The experimenter believes the model is valid, but the risk of being wrong is unacceptable.

The organization prefers data-driven decisions to informed opinion.

The budget and time allow multiple observations.

The model has blocks.

If the average of “n” observations at the optimum is within the prediction interval, then the model has been confirmed.

Page 28: Practical Strategies for Model Verification

Simple Confirmation (Confirm with Multiple Observations)

Confirming results from the RSM Example using DX9:

[Screenshot: DX9 confirmation report with four confirmation runs: 81.0 and 85.3 on Machine 1, 97.2 and 91.2 on Machine 2]

(The mean of the four runs is (81.0 + 85.3 + 97.2 + 91.2) / 4 = 88.68, the value compared against the prediction interval.)

Page 29: Practical Strategies for Model Verification

Simple Confirmation (Confirm with Multiple Observations)

What to do if there are problems?

The model may be overfit and hence generalize poorly to new data. Try to reduce the model. Also check whether a transform is necessary, using the Box-Cox plot.

There may be outliers or other problematic points in either the confirmation runs or the model-building runs. Such points are not representative of the true underlying process, hence the discrepancy.

The curse of “too much data.” Huge data sets may cause practically irrelevant effects and model irregularities to be detected. This is not usually an issue.

Page 30: Practical Strategies for Model Verification

Simple Confirmation (Confirm with Multiple Observations)

How does DX9 compute prediction intervals?

One observation (n = 1):

$$\hat{y}_0 \pm t_{\alpha/2,\,\mathrm{df}}\; s\,\sqrt{1 + x_0^{T}(X^{T}X)^{-1}x_0}$$

Many observations (n > 1), comparing the mean of the n runs:

$$\hat{y}_0 \pm t_{\alpha/2,\,\mathrm{df}}\; s\,\sqrt{\frac{1}{n} + x_0^{T}(X^{T}X)^{-1}x_0}$$

So as n increases, the width of the PI decreases.
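A minimal sketch of these formulas (plain OLS; X is the model matrix, x0 the model-term vector at the confirmation settings; an illustration, not the DX9 implementation):

```python
import numpy as np
from scipy import stats

def prediction_interval(X, y, x0, n_confirm=1, alpha=0.05):
    """PI for the mean of n_confirm new observations at x0."""
    X = np.asarray(X, float); y = np.asarray(y, float); x0 = np.asarray(x0, float)
    xtx_inv = np.linalg.inv(X.T @ X)
    beta = xtx_inv @ X.T @ y                   # OLS coefficients
    df = X.shape[0] - X.shape[1]
    s2 = np.sum((y - X @ beta) ** 2) / df      # residual mean square
    y0 = x0 @ beta                             # model prediction at x0
    half = stats.t.ppf(1 - alpha / 2, df) * np.sqrt(
        s2 * (1.0 / n_confirm + x0 @ xtx_inv @ x0))
    return y0 - half, y0 + half                # confirm if the mean falls inside
```

With n_confirm = 1 this reduces to the single-observation interval; the 1/n term is what narrows the interval as confirmation runs are added.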

Page 31: Practical Strategies for Model Verification

Simple Confirmation (Confirm with Multiple Observations)

BIG Question: How many confirmation runs at the optima?

The relationship between the probability of detecting an issue (if there actually is one) and the number of confirmation runs will look something like the following:

[Figure: detection probability vs. number of confirmation runs, rising steeply for the first few runs and then leveling off]
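A toy simulation of that curve (assumed one-sigma shift and leverage, not webinar data) reproduces the steep early rise and the flattening that the next slide describes:

```python
import numpy as np

rng = np.random.default_rng(1)
shift, sigma, h, trials = 1.0, 1.0, 0.2, 20_000  # h: assumed leverage at the optimum

for n in range(1, 9):
    half = 2 * sigma * np.sqrt(1 / n + h)         # rough PI half-width for a mean of n
    means = rng.normal(shift, sigma / np.sqrt(n), trials)
    detect = np.mean(np.abs(means) > half)        # mean lands outside the interval
    print(n, round(float(detect), 3))             # climbs quickly, then levels off
```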

Page 32: Practical Strategies for Model Verification

Simple Confirmation (Confirm with Multiple Observations)

Of course more is better!!

Try to perform at the very least two confirmation runs at the optimal design point(s).

The best “bang for your buck” is usually around 3 or 4 runs. Further confirmation runs will only give small, incremental improvements.

Avoid excess confirmation runs. Those extra runs could be put to better use as model-building runs.

Page 33: Practical Strategies for Model Verification

Agenda

Part 1: Model checking vs. confirmation

Part 2: Some approaches to confirmation
• No confirmation
• Simple Confirmation
• Concurrent Confirmation

Part 3: Practical tips and tricks

Page 34: Practical Strategies for Model Verification

Concurrent Confirmation: Why?

Concurrent confirmation is used when it is not practical to perform confirmation after the DOE is complete:

The DOE runs take a very long time and so will any follow-up confirmation experiments. (E.g. establishing a drug’s expiry date.)

Time on the equipment for running the DOE is expensive and/or limited. (E.g. wind-tunnel experiments.)

You only have one shot at experimentation. (E.g. running a DOE on a production line.)

Page 35: Practical Strategies for Model Verification

Concurrent Confirmation: How?

Concurrent confirmation is done by adding confirmation runs (aka verification runs) to the design plan.

The verification runs are added to and randomized with the DOE runs.

The verification runs are not used to estimate the model; only the DOE runs are used. The verification runs are essentially “held back”.

The predicted values of the verification runs are compared to their observed values to confirm model predictions.
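The bookkeeping described above fits in a few lines (a sketch with a hypothetical design matrix and verification flag, not the drug-stability analysis that follows):

```python
import numpy as np

def concurrent_confirmation(X, y, is_verification):
    """Fit on DOE rows only; report the held-back verification rows."""
    X = np.asarray(X, float); y = np.asarray(y, float)
    v = np.asarray(is_verification, bool)
    beta, *_ = np.linalg.lstsq(X[~v], y[~v], rcond=None)  # model from DOE runs only
    pred = X[v] @ beta                                    # predictions for held-back runs
    return pred, y[v]                                     # compare predicted vs. observed
```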

Page 36: Practical Strategies for Model Verification

Drug Stability Characterization: Kinetic Study of Drug Product

How is it done?

Store the drug under varying conditions that test its stability to environmental factors (e.g., moisture, oxygen).

Takes a long time (e.g. months) to complete.

Why is it important?

Provides information needed to establish a drug’s expiry date.

Used to establish manufacturing and packaging strategies to ensure the drug meets its expiry date.

Page 37: Practical Strategies for Model Verification

Drug Stability Characterization: Three Factors in this Study

Three factors affecting degradation rate:

Factor 1: a_w (water activity), where $a_w = p_w/p_0 = \%RH/100$. Expect the rate to increase linearly with a_w.

Factor 2: O2. Expect the rate to increase exponentially with increasing oxygen; use a logarithmic scale, ln(O2).

Factor 3: Temperature. Expect an Arrhenius relationship of rate to temperature; use inverse temperature, 1/T.
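For reference, the Arrhenius relationship behind the 1/T transform (standard kinetics, assumed rather than spelled out on the slide):

$$k = A\,e^{-E_a/(RT)} \quad\Longrightarrow\quad \ln k = \ln A - \frac{E_a}{R}\cdot\frac{1}{T}$$

so ln(k) is linear in 1/T, which is why the rates are later fit as ln(k-TDP) against 1/T.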

Page 38: Practical Strategies for Model Verification

Drug Stability Characterization: The Core Design

[Figure: cube plot of the core design in a_w, O2, and T]

Face Centered Design (FCD) with 15 Runs

Page 39: Practical Strategies for Model Verification

Drug Stability Characterization: FCD Augmented with Verification Runs

[Figure: cube plot of the augmented design in a_w, O2, and T]

• Points to determine the Arrhenius relationship (4 runs)
• Points to evaluate predictions from the core design (4 runs)

Page 40: Practical Strategies for Model Verification

Drug Stability Characterization: Augmented FCD (k, % per week)

The runs span a_w columns 0.05, 0.20, 0.35, and 0.50, with O2 at 0.2, 2, and 21% within each temperature, and a sampling schedule that shortens as temperature rises:

T (°C)   O2 (%)        Sampling schedule
40       0.2, 2, 21    2 wk, 4 wk, 8 wk, 16 wk
50       0.2, 2, 21    1 wk, 2 wk, 4 wk, 8 wk
60       0.2, 2, 21    3 day, 1 wk, 2 wk, 4 wk
70       0.2, 2, 21    1 day, 3 day, 1 wk, 2 wk

Point types (color-coded on the original slide): factorial points; axial and center points; Arrhenius points; a_w = 0.50 points.

Page 41: Practical Strategies for Model Verification

TDP Formation Rate: k (% per week)

[Figure: two panels of TDP % vs. time (weeks) at T = 50 °C and 0.2% O2, one at a_w = 0.35 and one at a_w = 0.05; the fitted rates are k = 0.0961 (a_w = 0.05) and k = 0.3403 (a_w = 0.35)]

(These two rates are circled on the next slide.)

Page 42: Practical Strategies for Model Verification

TDP Degradation Rate: Runs to Validate the Arrhenius Model

Rates k (% per week) at 0.2% O2 (ln O2 = -1.60944):

1/T (K^-1)            a_w = 0.05   a_w = 0.35
0.00319336 (40 °C)    0.0240       0.0511
0.00309454 (50 °C)    0.0961       0.3403
0.00300165 (60 °C)    0.4668       1.4113
0.00291418 (70 °C)    1.6955       6.0500

(The design also includes runs at ln O2 = 0.69315 and 3.04452; the 50 °C run at ln O2 = 0.69315 gave k = 0.1515.)

[Figure: cube plot icon marking the Arrhenius validation runs in a_w, O2, and T]

Page 43: Practical Strategies for Model Verification

TDP Degradation Rate: The Arrhenius Model is Valid!

[Figure: Arrhenius model verification; ln(k-TDP) vs. 1/T at 0.2% O2, with lines for a_w = 0.05 and a_w = 0.35; adjusted R² = 0.9980 and 0.9966 for the two lines]

Page 44: Practical Strategies for Model Verification

TDP Degradation Rate: k (% per week)

Measured rates by 1/T (K^-1) and ln(O2), read across the a_w columns (0.05, 0.20, 0.35, 0.50):

0.00319336 (40 °C): ln O2 = -1.60944 → 0.0240, 0.0511; 0.69315 → 0.0355, 0.0911; 3.04452 → 0.0254, 0.0716
0.00309454 (50 °C): ln O2 = -1.60944 → 0.0961, 0.1735, 0.3403, 0.5100; 0.69315 → 0.1515, 0.2157, 0.3535; 3.04452 → 0.2285, 0.4870
0.00300165 (60 °C): ln O2 = -1.60944 → 0.4668, 1.4113; 0.69315 → 0.9240, 2.5575; 3.04452 → 0.5638, 1.7318
0.00291418 (70 °C): ln O2 = -1.60944 → 1.6955, 6.0500

Page 45: Practical Strategies for Model Verification

TDP Degradation Rate: FCD plus Verification Runs

The verification runs are added to and randomized with the DOE runs.

The verification runs are not used to estimate the model; only the DOE runs are used.

The predicted values of the verification runs are compared to their observed values to confirm model predictions.

Page 46: Practical Strategies for Model Verification

TDP Degradation Rate: ANOVA from the FCD (without validation runs)

Transform: Natural log
ANOVA for Response Surface Linear Model
Analysis of variance table [Partial sum of squares - Type III]

Source      Sum of Squares   df   Mean Square   F Value   p-value (Prob > F)
Model            27.59        3       9.20       847.40      < 0.0001
A-aw              2.37        1       2.37       218.38      < 0.0001
B-ln(O2)          0.11        1       0.11        10.36        0.0082
C-1/T            25.10        1      25.10      2313.46      < 0.0001
Residual          0.12       11       0.011
Cor Total        27.71       14

Std. Dev.   0.10    R-Squared        0.9957
Mean       -1.64    Adj R-Squared    0.9945
C.V. %      6.36    Pred R-Squared   0.9918
PRESS       0.23    Adeq Precision   80.940

Page 47: Practical Strategies for Model Verification

TDP Degradation Rate: Diagnostic Plots (page 1 of 2)

[Figure: normal probability plot of the externally studentized residuals, and externally studentized residuals vs. predicted values]

Page 48: Practical Strategies for Model Verification

TDP Degradation Rate: Diagnostic Plots (page 2 of 2)

Looking at the verification runs on the diagnostic plots we see they fall in line with DOE runs.

Model is confirmed!

[Figure: predicted vs. actual values, with the verification runs falling along the line with the DOE runs]

Page 49: Practical Strategies for Model Verification

Confirmation Strategies Summary

Confirming model predictions:

No confirmation
• Not a recommended strategy.

Simple confirmation
• A single observation is better than no confirmation.
• Multiple observations are better yet.

Concurrent confirmation
• A great strategy when it isn't practical to follow up a completed DOE with additional observations.

Page 50: Practical Strategies for Model Verification

Agenda

Part 1: Model checking vs. confirmation

Part 2: Some approaches to confirmation

Part 3: Practical tips and tricks

Page 51: Practical Strategies for Model Verification

Practical Tips and Tricks


Pitfall: Avoid using R² as a way of “confirming” your model!

R² cannot decrease when terms are added to a model, even if the terms are completely independent of the response.

Hence, R² will always suggest choosing the larger model: a danger of over-fitting!

Some processes naturally produce noisy, hard-to-measure responses. Even great models can have a low R².

Predicted R² is a better option: it gives some degree of model confirmation.
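Predicted R² can be computed without refitting the model, via the standard leave-one-out identity (a sketch of the textbook formula, not DX9 source code):

```python
import numpy as np

def press_and_pred_r2(X, y):
    X = np.asarray(X, float); y = np.asarray(y, float)
    H = X @ np.linalg.inv(X.T @ X) @ X.T           # hat matrix
    e = y - H @ y                                  # ordinary residuals
    press = np.sum((e / (1.0 - np.diag(H))) ** 2)  # leave-one-out (PRESS) residuals
    ss_tot = np.sum((y - y.mean()) ** 2)
    return press, 1.0 - press / ss_tot             # PRESS, predicted R^2
```

Because each PRESS residual is the error at a point the fit never saw, predicted R² behaves like a built-in, cheap form of confirmation with external data.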

Page 52: Practical Strategies for Model Verification

Practical Tips and Tricks


Tip: Always use subject matter knowledge!

If subject-matter knowledge doesn’t suggest that a model should work in practice, reconsider your model!

Model confirmation may suggest that a model is adequate or inadequate, but it doesn’t tell you why that’s the case.

Page 53: Practical Strategies for Model Verification

Practical Tips and Tricks


Tip: When in doubt, choose the smaller model!

Confirmation runs can provide evidence of an over- or under-fit model.

When several models appear to fit the data well, and are confirmed with future runs, it’s usually (but not always) better to select the simpler model.

Page 54: Practical Strategies for Model Verification

Practical Tips and Tricks


Tip: Focus on confirmation techniques that use external data!

Predicted R² and the PRESS statistic give an idea of how well a model will predict future responses.

Use confirmation runs to verify the predicted response(s) at locations of interest (e.g. minima or maxima) in the design space.

Future versions of Design-Expert will likely include new techniques for out-of-sample confirmation.

Page 55: Practical Strategies for Model Verification

Practical Tips and Tricks


Tip: How to handle blocks?

Perform the confirmation runs in several different blocks, assuming you can repeat runs in the same blocks used in the model! Block effects will then average out in the confirmation runs.

If you can perform only one confirmation run, look at the ANOVA, choose the block with the smallest effect, and run the confirmation run in that block; again, this assumes you actually can repeat a run in that block.

Not much can be done if a block is not repeatable (e.g. year).

Page 56: Practical Strategies for Model Verification

How to Get Help


Search publications posted at www.statease.com.

In Stat-Ease software, use Screen Tips, view reports in annotated mode, look for context-sensitive Help (right-click), or search the main Help system.

Explore the Stat-Ease Experiment Design Forum http://forum.statease.com (read only).

E-mail [email protected] for answers from Stat-Ease’s staff of statistical consultants.

Call 612.378.9449 and ask for “statistical help.”

Page 57: Practical Strategies for Model Verification

Support for DOE


The triannual Stat-Teaser newsletter (if you don’t opt out)

Bi-monthly DOE FAQ Alert e-mail (if you don't opt out)
• Subscribe at: www.statease.com/doealertreg.html

StatsMadeEasy blog at www.statsmadeeasy.net.

Page 58: Practical Strategies for Model Verification

Thank you all for joining us today!

As a reminder, this presentation is posted at:

www.statease.com/webinar.html
