Page 1

Experimental Design

• If a process is in statistical control but has poor capability, it will often be necessary to reduce variability.

• Experimental design often offers a more effective way to accomplish this than control charting.

• Control charting is a passive statistical method: we monitor the process and wait for information that may lead to a useful change.

• Experimental design is an active statistical method: we perform a series of tests on a process, making changes in the inputs and observing the corresponding effects on the outputs.

• Applying experimental design at the product/process design stage is also important and can result in improved yield and reduced costs.

Page 2

Guidelines for Experimental Design

• Identify the problem
– Have a clear question you would like to answer
– Suspect that there is something to be gained from experimentation
– It must be feasible to take the process off-line for experimentation

• Choose the factors (inputs) that will be varied in the experiment and their levels (values).
– Put together a team of individuals who understand the process at different levels (engineers, managers, technicians, operators)

– Identify the controllable and uncontrollable factors that may impact process performance.

– Rank the controllable factors (uncontrollable environmental factors are usually just monitored)

– Determine the levels for each controllable factor chosen

• Formulate the hypothesis

• Select the appropriate experimental design

• Conduct the experiments

• Analyze the results

• Draw conclusions

• Take action

Page 3

• Good experimental design should:
– Eliminate known sources of bias
– Guard against unknown sources of bias
– Ensure that the experiment provides useful information about the process without using excessive experimental resources.

Page 4

Experiments with a single factor

• We will begin by studying the simplest experimental design (an experiment with one factor) in detail.

• The data analysis for single-factor designs is similar to that of more complex experiments.

• We will use analysis of variance (ANOVA) as the primary statistical tool for analyzing output from experiments.

Page 5

An Example

• A manufacturer of paper used for making grocery bags is interested in improving the tensile strength of the product. The manufacturing process specs currently call for 10% hardwood concentration, and the paper has an average tensile strength of 15 psi. The process engineer and the operators suspect that tensile strength is a function of the pulp hardwood concentration in the paper. Economic considerations dictate that the hardwood concentration lie between 5% and 20%. The process engineer decides to investigate 4 levels of hardwood concentration: 5%, 10%, 15%, and 20%. She takes six specimens (replicates) at each level, yielding 24 total specimens, which are then tested in random order. The results are as follows:

                          Observations
Concentration (%)    1    2    3    4    5    6
        5            7    8   15   11    9   10
       10           12   17   13   18   19   15
       15           14   18   19   17   16   18
       20           19   25   22   23   18   20
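As a quick sketch (variable names are illustrative, not from the text), the data above can be entered directly and the level means computed; they suggest that strength rises with hardwood concentration:

```python
# Tensile strength (psi) for six replicates at each hardwood concentration.
data = {
    5:  [7, 8, 15, 11, 9, 10],
    10: [12, 17, 13, 18, 19, 15],
    15: [14, 18, 19, 17, 16, 18],
    20: [19, 25, 22, 23, 18, 20],
}

# Average tensile strength at each concentration level.
means = {level: sum(obs) / len(obs) for level, obs in data.items()}
for level, m in sorted(means.items()):
    print(f"{level:>2}% hardwood: mean strength {m:.2f} psi")
```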

Page 6

Null and Alternative Hypothesis

• Null hypothesis: the mean response under each factor level (treatment) is equal, i.e., H0: μ1 = μ2 = … = μa.

• Alternative hypothesis: at least one of the treatment means is different.

Page 7

• ANOVA makes inferences about means by examining the variability in the experiment.

• The total variability can be measured by the total squared deviation of each response from the overall mean. This is termed the total sum of squares (SST).

• ANOVA then partitions SST into two parts: the between-treatment sum of squares (SSB) and the within-treatment sum of squares (SSE).

• SST = SSB + SSE

• SSB measures the differences between the factor-level means and the grand average, and so gives an indication of the effect of the factor levels on the response.

• Differences between observations within each factor level and the factor-level mean are due to random error; therefore the within-treatment sum of squares is also termed the error sum of squares (SSE).
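The partition can be checked numerically on the paper-strength data, here as a sketch using the definitional (deviation) formulas; variable names are illustrative:

```python
# One-way ANOVA sums of squares for the paper-strength data.
data = {
    5:  [7, 8, 15, 11, 9, 10],
    10: [12, 17, 13, 18, 19, 15],
    15: [14, 18, 19, 17, 16, 18],
    20: [19, 25, 22, 23, 18, 20],
}
all_obs = [y for obs in data.values() for y in obs]
grand_mean = sum(all_obs) / len(all_obs)

# SST: squared deviation of every observation from the grand mean.
sst = sum((y - grand_mean) ** 2 for y in all_obs)

# SSB: squared deviation of each level mean from the grand mean,
# weighted by the number of replicates at that level.
ssb = sum(len(obs) * (sum(obs) / len(obs) - grand_mean) ** 2
          for obs in data.values())

# SSE: squared deviation of each observation from its own level mean.
sse = sum((y - sum(obs) / len(obs)) ** 2
          for obs in data.values() for y in obs)

# The partition SST = SSB + SSE holds (up to floating-point rounding).
assert abs(sst - (ssb + sse)) < 1e-8
```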

Page 8

• If SSB is large relative to SSE, the null hypothesis will be rejected (the factor effects are significant).

• Before comparing the sums of squares we must scale them by their degrees of freedom:

• Let a = the number of factor levels
  n = the number of observations (replicates) within each level

Then there are an = N total observations.

SST has N - 1 degrees of freedom.

Since there are a levels of the factor, SSB has a - 1 degrees of freedom.

Within each level there are n observations (replicates), providing n - 1 degrees of freedom for estimating error. Since there are a levels, SSE has a(n - 1) degrees of freedom.

The ratio of a sum of squares to its number of degrees of freedom is termed a mean square.

Page 9

• Once the mean squares are calculated for SSB and SSE, the ratio of mean squares MSfactor/MSerror provides a statistic that follows the F distribution under the null hypothesis. The calculated value is then compared with the critical value to determine the outcome of the hypothesis test.

• The ANOVA table provides a convenient summary of this information:

Source of Variation     Sum of Squares   Degrees of Freedom   Mean Square   F
Between factors         SSB              a - 1                MSfactor      MSfactor/MSerror
Within factors (error)  SSE              a(n - 1)             MSerror
Total                   SST              an - 1
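As a sketch (variable names illustrative), the full table for the paper-strength example can be built with the shortcut formulas, which use the level totals and a correction factor:

```python
# One-way ANOVA table for the paper-strength example.
data = {
    5:  [7, 8, 15, 11, 9, 10],
    10: [12, 17, 13, 18, 19, 15],
    15: [14, 18, 19, 17, 16, 18],
    20: [19, 25, 22, 23, 18, 20],
}
a = len(data)                        # number of factor levels
n = len(next(iter(data.values())))   # replicates per level
N = a * n                            # total observations

grand_total = sum(sum(obs) for obs in data.values())
correction = grand_total ** 2 / N    # correction factor (grand total)^2 / N

sst = sum(y ** 2 for obs in data.values() for y in obs) - correction
ssb = sum(sum(obs) ** 2 / n for obs in data.values()) - correction
sse = sst - ssb

msb = ssb / (a - 1)        # between-level mean square, a - 1 = 3 df
mse = sse / (a * (n - 1))  # error mean square, a(n - 1) = 20 df
f_stat = msb / mse

# F(0.05; 3, 20) is about 3.10 (from an F table), so F is far beyond
# the critical value: reject the null hypothesis.
print(f"F = {f_stat:.2f}")
```

The conclusion is that hardwood concentration significantly affects tensile strength.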

Page 10

Randomized Block Designs

• Blocking involves grouping experimental units which have similar effects on the response variable. That is, we seek to eliminate the effect of extraneous factors within a block so that the between-treatment effect (our main concern) can be more precisely measured.

• Example:

Suppose we wish to investigate the differences in raw materials from three different vendors. Processing will take place on two machines. If we randomly assign raw materials to machines, we will not be able to claim that differences in output are due to vendors, because we have not eliminated the effects of the machines. If we instead form two blocks consisting of the two machines, differences in output within each block represent differences in vendors, whereas differences in output between machines represent differences due to blocking, i.e., whether blocking was successful.

Page 11

• With a blocking design, the variability is partitioned into three parts: the sum of squares due to treatments (SSTR), the sum of squares due to the blocking factor (SSBF), and the error sum of squares (SSE). Successful blocking minimizes variation between observations within a block while maximizing the variation between blocks. Since SSBF is removed from the experimental error, the result is an increase in the precision of the experiment.

• The ANOVA table for a randomized block design would be as follows:

Source of Variation   Sum of Squares   Degrees of Freedom   Mean Square   F
Treatments            SSTR             a - 1                MStreatment   MStreatment/MSerror
Blocks                SSBF             b - 1                MSblock       MSblock/MSerror
Error                 SSE              (a - 1)(b - 1)       MSerror
Total                 SST              ab - 1

Page 12

Example

• We wish to test the effect of four different chemicals on the strength of a particular fabric. It is known that the effect of these chemicals varies considerably across fabric specimens. We take five fabric samples and apply each chemical to each sample in random order. We have now isolated the effect of the chemical within the fairly homogeneous environment of a single fabric sample. The results are as follows:

                 Fabric Sample
Chemical     1     2     3     4     5
   1        1.3   1.6   0.5   1.2   1.1
   2        2.2   2.4   0.4   2.0   1.8
   3        1.8   1.7   0.6   1.5   1.3
   4        3.9   4.4   2.0   4.1   3.4
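As a sketch (variable names illustrative), the randomized block ANOVA for this data partitions the total variability into treatment, block, and error components using the row (chemical) and column (fabric sample) totals:

```python
# Randomized block ANOVA: 4 chemicals (treatments) x 5 fabric samples (blocks).
data = [  # rows: chemicals 1-4; columns: fabric samples 1-5
    [1.3, 1.6, 0.5, 1.2, 1.1],
    [2.2, 2.4, 0.4, 2.0, 1.8],
    [1.8, 1.7, 0.6, 1.5, 1.3],
    [3.9, 4.4, 2.0, 4.1, 3.4],
]
a, b = len(data), len(data[0])   # a treatments, b blocks
N = a * b
grand_total = sum(sum(row) for row in data)
correction = grand_total ** 2 / N

sst = sum(y ** 2 for row in data for y in row) - correction
sstr = sum(sum(row) ** 2 / b for row in data) - correction        # treatments
ssbf = sum(sum(col) ** 2 / a for col in zip(*data)) - correction  # blocks
sse = sst - sstr - ssbf

mstr = sstr / (a - 1)                  # treatment mean square, 3 df
mse = sse / ((a - 1) * (b - 1))        # error mean square, 12 df
f_stat = mstr / mse
print(f"F = {f_stat:.2f}")  # a large F suggests the chemicals differ
```

The very large F ratio relative to its error mean square indicates that the chemicals differ in their effect on fabric strength.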