
Experimental and Causal-Comparative Designs


Page 1: Experimental  and  Causal-Comparative Designs
Page 2: Experimental  and  Causal-Comparative Designs

Purpose

• Examine the possible influences that one factor or condition may have on another factor or condition (cause-and-effect relationships)

• Ideally, this is done by controlling all factors except those whose possible effects are the focus of investigation

Page 3: Experimental  and  Causal-Comparative Designs

What is Experimentation?

• Why do events occur under some conditions and not under others?

• Research methods that answer these questions are called causal methods
  – ex post facto research designs observe what is or what has been; they also have the potential for discovering causality, but the researcher is required to accept the world as found
  – an experiment allows the researcher to systematically alter the variables of interest and observe what changes follow

Page 4: Experimental  and  Causal-Comparative Designs

Experiments

• Studies involving intervention by the researcher beyond that required for measurement

• The researcher manipulates the independent or explanatory variable and then observes whether the hypothesized dependent variable is affected by the intervention

Page 5: Experimental  and  Causal-Comparative Designs

Example of Bystanders and Thieves

• Students were asked to come to an office, where they had an opportunity to see a fellow student steal some money from a receptionist’s desk. A confederate of the experimenter did the stealing. The hypothesis concerned whether people observing a theft would be more likely to report it (1) if they saw the crime alone or (2) if they were in the company of someone else.

Page 6: Experimental  and  Causal-Comparative Designs

Variables in the Study

• Independent - the state of either being alone when observing the theft or being in the company of another person

• Dependent - whether the subjects reported observing the crime

• The results indicated that people were more likely to report the theft if they observed it alone rather than in another person’s company

Page 7: Experimental  and  Causal-Comparative Designs

How did the researchers come to this conclusion?

• First, there must be agreement between the independent and dependent variables
  – the presence or absence of one is associated with the presence or absence of the other
  – more reports of the theft came from lone observers than from paired observers

Page 8: Experimental  and  Causal-Comparative Designs

How did they come to this conclusion?

• Second, the time order of the occurrence of the variables must be considered
  – the independent variable must precede the dependent variable
  – it is unlikely that people could report a theft before observing it

Page 9: Experimental  and  Causal-Comparative Designs

How did they come to this conclusion?

• Third, researchers must be confident that other extraneous variables did not influence the dependent variable
  – the researchers controlled the ability of extraneous variables to confound the planned comparison
  – the event was staged without the observers’ knowledge
  – only the receptionist, observers, and the “criminal” were in the office
  – the same process was repeated with each trial

Page 10: Experimental  and  Causal-Comparative Designs

Conducting an Experiment

• The experiment is the premier scientific methodology for establishing causation

• However, the resourcefulness and creativity of the researcher are needed to make the experiment live up to its potential

• To make it successful, the researcher must plan carefully

Page 11: Experimental  and  Causal-Comparative Designs

Seven Activities to Accomplish

• Select relevant variables

• Specify the level(s) of treatment

• Control the experimental environment

• Choose the experimental design

• Select and assign the subjects

• Pilot-test, revise and test

• Analyze the data

Page 12: Experimental  and  Causal-Comparative Designs

Selecting Relevant Variables

• It is the researcher’s task to translate an amorphous problem into the hypothesis that best states the objectives of the research

• A hypothesis is a relational statement because it describes a relationship between two or more variables

• The researcher must select variables that are the best operational representations of the original concepts

Page 13: Experimental  and  Causal-Comparative Designs

Specifying the Levels of Treatment

• Treatment levels of the independent variable are the various aspects of the treatment condition
  – for example, if education were hypothesized to have an effect on employment stability, it might be divided into high school, college, and graduate levels

• Levels are chosen based on simplicity and common sense

• Alternatively, a control group could provide a base level for comparison

Page 14: Experimental  and  Causal-Comparative Designs

Controlling the Experimental Environment

• The potential for distorting the effect of treatment on the dependent variable must be controlled

• Examples: videotaping instructions, arrangement of the room, time of administration, experimenter’s contact with subjects

Page 15: Experimental  and  Causal-Comparative Designs

Choosing the Experimental Design

• An experimental design serves as a positional and statistical plan to designate the relationships between the experimental treatments and the experimenter’s observations or measurement points

Page 16: Experimental  and  Causal-Comparative Designs

Selecting and Assigning Subjects

• Subjects should represent the population to which the results will be generalized

• Random assignment (a minimal sketch of random assignment follows below)

• Matching - each experimental subject is matched with a control subject
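A minimal sketch of what random assignment could look like in code, assuming a hypothetical pool of 20 subject IDs; the pool size, labels, and seed are illustrative, not from the slides.

```python
import random

# Hypothetical pool of subject IDs (illustrative only).
subjects = [f"S{i:02d}" for i in range(1, 21)]

random.seed(42)            # fixed seed only so the sketch is reproducible
random.shuffle(subjects)   # every subject gets an equal chance at either group

# Split the shuffled pool evenly between the two conditions.
midpoint = len(subjects) // 2
experimental_group = subjects[:midpoint]
control_group = subjects[midpoint:]

print("Experimental:", sorted(experimental_group))
print("Control:     ", sorted(control_group))
```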

Page 17: Experimental  and  Causal-Comparative Designs

Pilot Testing, Revising and Testing

• Pilot test - reveals errors in the design

• Allows refinements before the final test

Page 18: Experimental  and  Causal-Comparative Designs

Analyzing the Data

• If planning and pretesting have occurred, the experimental data will take on an order and structure.

Page 19: Experimental  and  Causal-Comparative Designs

Validity in Experimentation

• There is always a question of whether the results are true

• Internal validity - do the conclusions we draw about the demonstrated experimental relationship truly imply cause?

• External validity - does an observed causal relationship generalize across persons, settings, and times?

Page 20: Experimental  and  Causal-Comparative Designs

Internal Validity

• History - during the time an experiment is taking place, some events may occur that confuse the relationship being studied

• Take a control measurement (O1) of the dependent variable before introducing the manipulation (X); after the manipulation, take an after-measurement (O2) of the dependent variable. The difference between O1 and O2 is the change the manipulation caused, provided no outside event intervened between the two measurements.

Page 21: Experimental  and  Causal-Comparative Designs

Maturation

• Changes occur within the subjects as a function of the passage of time and are not specific to any particular event

• A special concern when the study covers a long time period

• Hunger, boredom, and fatigue are also factors in shorter tests

Page 22: Experimental  and  Causal-Comparative Designs

Testing

• The process of taking a test can affect the scores of a second test

• The experience of taking the first test can have a learning effect that influences the results of the second test

Page 23: Experimental  and  Causal-Comparative Designs

Instrumentation

• Changes in the measuring instrument or procedure between observations

• using different questions at each measurement

• using different observers or interviewers

• observer experience, boredom, fatigue and anticipation of results can all distort the results of separate observations

Page 24: Experimental  and  Causal-Comparative Designs

Selection

• Differential selection of subjects for the experimental and control groups

• The groups must be equivalent in every respect

• If subjects are randomly assigned to the experimental and control groups, the selection problem can largely be overcome

Page 25: Experimental  and  Causal-Comparative Designs

Statistical Regression

• This factor operates especially when groups have been selected on the basis of their extreme scores

• Suppose we take only the workers with the top 25% and bottom 25% of productivity scores

• No matter what is done between O1 and O2, there is a strong tendency for the average of the high scores at O1 to decline at O2 and for the average of the low scores at O1 to increase

• In the second measurement, members of both groups score closer to their long-run mean scores (the simulation sketch below illustrates this)
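A small simulation, under assumed parameters (a stable true productivity score plus independent measurement noise at O1 and O2), illustrating why extreme groups drift toward their long-run means even when nothing is done between the measurements; all numbers are invented.

```python
import random
import statistics

random.seed(1)

# Assumed model: each worker's observed productivity is a stable true score
# plus independent random noise at each measurement occasion.
N = 10_000
true_scores = [random.gauss(100, 10) for _ in range(N)]
o1 = [t + random.gauss(0, 10) for t in true_scores]
o2 = [t + random.gauss(0, 10) for t in true_scores]

# Select the top 25% and bottom 25% of workers by their O1 scores.
order = sorted(range(N), key=lambda i: o1[i])
bottom = order[: N // 4]
top = order[-(N // 4):]

for label, idx in [("Bottom 25% at O1", bottom), ("Top 25% at O1", top)]:
    m1 = statistics.mean(o1[i] for i in idx)
    m2 = statistics.mean(o2[i] for i in idx)
    print(f"{label}: mean O1 = {m1:.1f}, mean O2 = {m2:.1f}")

# With no treatment at all, the top group's mean falls and the bottom
# group's mean rises at O2 -- pure regression toward the mean.
```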

Page 26: Experimental  and  Causal-Comparative Designs

Experiment Mortality

• The composition of the group changes during the test

• Attrition - people drop out

• Because members of the control group are not affected by the testing situation, they are less likely to withdraw

• Diffusion or imitation of treatment - if people in the control and experimental groups talk, the control group may learn of the treatment, eliminating the difference between the groups

Page 27: Experimental  and  Causal-Comparative Designs

Experiment Mortality

• Compensatory equalization - when the experimental treatment is much more desirable, there may be an administrative reluctance to deprive the control group members

• Compensatory rivalry - when members of the control group know they are the control group, this may generate competitive pressures, causing them to try harder

Page 28: Experimental  and  Causal-Comparative Designs

Experiment Mortality

• Resentful demoralization of the disadvantaged - when the treatment is desirable and the experiment is obtrusive, control group members may become resentful of their deprivation and lower their cooperation and output

• Local history - when one assigns all experimental subjects to one group and all control subjects to another, idiosyncratic events may confound the results

Page 29: Experimental  and  Causal-Comparative Designs

External Validity

• Internal validity factors cause confusion about whether the experimental treatment (X) or extraneous factors are the source of observation differences.

• In contrast, external validity is concerned with the interaction of the experimental treatment with other factors and the resulting impact on the ability to generalize to times, settings, or persons

Page 30: Experimental  and  Causal-Comparative Designs

The Reactivity of Testing on X

• The pretest may sensitize subjects so that they respond to the experimental stimulus in a different way

• A before-measurement of the level of knowledge about a company’s ecology programs will often sensitize subjects to the various experimental communication efforts that might then be made about the company

Page 31: Experimental  and  Causal-Comparative Designs

Interaction of Selection and X

• The process by which test subjects are selected

• The population from which one selects subjects may not be the same as the population to which one wishes to generalize the results

Page 32: Experimental  and  Causal-Comparative Designs

Other Reactive Factors

• The experimental settings themselves may have a biasing effect on the subjects’ response to X

• If subjects know they are participating, they may have a tendency to role-play

• External validity may be hard to control because it is a matter of generalization

• Try to secure as many of the internal validity requirements as possible

Page 33: Experimental  and  Causal-Comparative Designs

Experimental Research Designs

• There are many experimental research designs

• They vary widely in their power to control contamination of the relationship between independent and dependent variables

• The most widely accepted designs are grouped by this characteristic of control:
  – preexperiments
  – true experiments
  – field experiments

Page 34: Experimental  and  Causal-Comparative Designs

Key to Design Symbols

• X - represents the introduction of an experimental stimulus (treatment) to a group. The effects of this independent variable are of major interest

• O - an O identifies a measurement or observation activity

• R - an R indicates that the group members have been randomly assigned to a group.

Page 35: Experimental  and  Causal-Comparative Designs

Keys to Timing

• The X’s and O’s in a diagram are read from left to right in temporal order

  O  X  O  O

• X’s and O’s aligned vertically with each other indicate that the stimulus and/or observation take place simultaneously

  O  X
     X

Page 36: Experimental  and  Causal-Comparative Designs

Keys to Selection

• Parallel rows that are not separated by dashed lines indicate that comparison groups have been equalized by the random process

• those separated with a dashed line have not been so equalized

  X  O              X  O
     O             -------
                       O

Page 37: Experimental  and  Causal-Comparative Designs

Seven Activities to Accomplish

• Select relevant variables

• Specify the level(s) of treatment

• Control the experimental environment

• Choose the experimental design

• Select and assign the subjects

• Pilot-test, revise and test

• Analyze the data

Page 38: Experimental  and  Causal-Comparative Designs
Page 39: Experimental  and  Causal-Comparative Designs

Preexperimental Designs

• One-Shot Case Study
• One-Group Pretest-Posttest Design
• Static Group Comparison

• All three are weak in their scientific measurement power because they fail to control the various threats to internal validity. This is especially true of the one-shot case study.

Page 40: Experimental  and  Causal-Comparative Designs

One-Shot Case Study

X   O

• X - treatment or manipulation of the independent variable

• O - observation or measurement of the dependent variable

An example is an employee education campaign about new technologies conducted without prior measurement of employee knowledge. Results would reveal only how much the employees know after the campaign, but there is no way to judge the effectiveness of the campaign. The lack of a pretest and a control group makes this design inadequate for establishing causality.

Page 41: Experimental  and  Causal-Comparative Designs

One-Group Pretest-Posttest Design

O X O

Pretest Manipulation Posttest

Can be used for the educational example, but how well does it control for history? Maturation? Testing effect?

Page 42: Experimental  and  Causal-Comparative Designs

Static Group Comparison

X   O1
--------
    O2

This design provides for two groups, one of which receives the experimental stimulus while the other serves as a control. A forest fire or other natural disaster is the experimental treatment, and the psychological trauma (or property loss) suffered by the residents is the measured outcome. A pretest before the fire would be possible … but. The control group, receiving only the posttest, would consist of residents whose property was spared. The weakest point of this design is that there is no way to be certain that the two groups are equivalent.

Page 43: Experimental  and  Causal-Comparative Designs

True Experimental Designs

• Major deficiency of the preexperimental designs is they fail to provide comparison groups that are equivalent.

• The way to achieve equivalence is through matching and randomization.

• Two classical designs:
  – Pretest-Posttest Control Group Design
  – Posttest-Only Control Group Design

Page 44: Experimental  and  Causal-Comparative Designs

Pretest-Posttest Control Group Design

R O1 X O2

R O3 O4

The effect of the experimental variable is E = ( O2 – O1 ) – ( O4 – O3 )

In this design, the seven major internal validity problems are dealt with fairly well, although some difficulties remain: local history may occur in one group and not the other, people in the test and control groups may communicate, and mortality can differ between groups. (A worked example of the E computation follows.)
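A worked example of the E computation with made-up group means; the scores are purely illustrative.

```python
# Hypothetical group means (illustrative numbers only).
O1, O2 = 52.0, 64.0   # experimental group: pretest, posttest
O3, O4 = 51.0, 55.0   # control group: pretest, posttest

# The control group's change (O4 - O3) estimates what would have happened
# without the treatment (history, maturation, testing effects, etc.).
# Subtracting it from the experimental group's change isolates the treatment effect.
effect = (O2 - O1) - (O4 - O3)
print(f"Experimental change: {O2 - O1:.1f}")
print(f"Control change:      {O4 - O3:.1f}")
print(f"Estimated treatment effect E = {effect:.1f}")   # (12.0 - 4.0) = 8.0
```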

Page 45: Experimental  and  Causal-Comparative Designs

Solomon Four-Group Design

R  O1  X  O2

R  O3      O4

R      X  O5

R          O6

The addition of the two groups that are not pretested provides a distinct advantage. If the researcher finds that O5 and O6 do not differ from the observations in the top two groups, the findings can be generalized to situations in which no pretest is given. The Solomon Four-Group Design thus enhances external validity. (A sketch of the relevant contrasts follows.)
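A sketch, with invented posttest means, of the contrasts a researcher might inspect: if the treatment effect is similar in the pretested (O2 vs O4) and unpretested (O5 vs O6) pairs, pretest sensitization is not a concern.

```python
# Hypothetical posttest means for the four groups (illustrative only).
O2 = 64.0   # pretested,     treated
O4 = 55.0   # pretested,     control
O5 = 63.0   # not pretested, treated
O6 = 54.5   # not pretested, control

treatment_effect_pretested = O2 - O4
treatment_effect_unpretested = O5 - O6
pretest_sensitization = treatment_effect_pretested - treatment_effect_unpretested

print(f"Effect among pretested groups:   {treatment_effect_pretested:.1f}")
print(f"Effect among unpretested groups: {treatment_effect_unpretested:.1f}")
print(f"Apparent pretest x treatment interaction: {pretest_sensitization:.1f}")
# If the two effects are similar, findings can be generalized to
# settings where no pretest is given.
```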

Page 46: Experimental  and  Causal-Comparative Designs

Posttest-Only Control Group Design

R X O1

R O2

In this design the pretest measurements are omitted. Pretests are not really necessary when it is possible to randomize.

Experimental effect is ( O1 – O2 )

Since the subjects are measured only once, the threats of testing and instrumentation are reduced.

Page 47: Experimental  and  Causal-Comparative Designs

Extensions of True Experimental Designs

The designs discussed so far are the classical design forms, but researchers normally use operational extensions of the basic designs. These extensions differ in:

(1) The number of different experimental stimuli that are considered simultaneously by the experimenter

(2) The extent to which assignment procedures are used to increase precision

Page 48: Experimental  and  Causal-Comparative Designs

Factor

• Widely used to denote an independent variable

• May be divided into treatment levels, which represent subgroups

• Active factors – those the experimenter can manipulate by causing a subject to receive one level or another

• Blocking factors – those on which the experimenter can only identify and classify the subject at an existing level (gender, age, organizational rank)

Page 49: Experimental  and  Causal-Comparative Designs

Completely Randomized Design

R  O1  X1  O2

R  O3  X2  O4

R  O5  X3  O6

Experiment: to determine the ideal difference in price between a store’s private brand of vegetables and the national brands. There are three price spreads (treatment levels) of 7, 12, and 17 cents. Eighteen stores are randomly divided into the three treatment groups (6 stores each). The price differential is maintained for a period, and then a tally is made of the sales volume and gross profit of the cans for each group of stores. (A sketch of the random division follows.)
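A minimal sketch of the random division described above, using hypothetical store IDs; the seed is fixed only to make the sketch reproducible.

```python
import random

random.seed(7)  # fixed seed only so the sketch is reproducible

stores = [f"store_{i:02d}" for i in range(1, 19)]   # 18 hypothetical stores
price_spreads = [7, 12, 17]                         # treatment levels, in cents

random.shuffle(stores)
assignment = {
    spread: stores[i * 6:(i + 1) * 6]               # 6 stores per treatment
    for i, spread in enumerate(price_spreads)
}

for spread, group in assignment.items():
    print(f"{spread}-cent spread: {group}")
```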

Page 50: Experimental  and  Causal-Comparative Designs

Randomized Block Design

The critical reason for a randomized block design is that the sample size is small enough that it is risky to depend on random assignment alone. Small samples such as 18 stores are typical in field experiments because of high costs. Another reason for blocking is to learn whether treatments bring different results among various groups of subjects.

Assume there is reason to believe that lower-income families are more sensitive to price differentials than are higher-income families. This factor could seriously distort our results unless we stratify the stores by customer income.

Page 51: Experimental  and  Causal-Comparative Designs

Randomized Block Design

Active Factor – Price Difference; Blocking Factor – Customer Income

Price Difference      High      Medium      Low
7 cents             R  X1        X1          X1
12 cents            R  X2        X2          X2
17 cents            R  X3        X3          X3

The O’s have been omitted. Reading across a row no longer indicates a time sequence; the columns represent the various levels of the blocking factor. Before and after measurements are associated with each of the treatments.

One can measure both main effects and interaction effects. (A sketch of blocked random assignment follows.)
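A sketch of blocked randomization, assuming each hypothetical store carries a known customer-income label; within each income block the stores are shuffled and spread evenly over the three price treatments.

```python
import random
from collections import defaultdict

random.seed(7)

# Hypothetical stores, each tagged with its customer-income block
# (6 stores per block, 18 stores in total; illustrative only).
stores = [(f"store_{b * 6 + i + 1:02d}", income)
          for b, income in enumerate(("high", "medium", "low"))
          for i in range(6)]

price_spreads = [7, 12, 17]  # cents

# Group the stores by income block, then randomize treatments *within*
# each block so every income level sees every price spread equally often.
blocks = defaultdict(list)
for store_id, income in stores:
    blocks[income].append(store_id)

assignment = {}
for income, members in blocks.items():
    random.shuffle(members)
    for i, store_id in enumerate(members):
        assignment[store_id] = (income, price_spreads[i % len(price_spreads)])

for store_id, (income, spread) in sorted(assignment.items()):
    print(f"{store_id}: income={income:<6}  price spread={spread} cents")
```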

Page 52: Experimental  and  Causal-Comparative Designs

Latin Square Design

                      Customer Income
Store Size       High      Medium      Low
Large             X1         X2         X3
Medium            X2         X3         X1
Small             X3         X1         X2

A Latin square may be used when there are two major extraneous factors. Continuing the store example, we decide to block on store size and customer income (9 stores), with one treatment per cell; each price treatment appears exactly once in every row and every column. (A sketch of the construction follows.)

The design assumes there is no interaction between treatments and blocking factors. With the above design we cannot determine the interrelationships among store size, customer income, and price spread (this would require 27 cells).
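A sketch that builds the 3x3 Latin square above by cyclically shifting the treatment list, so each price treatment appears exactly once per store size and once per income level; the labels mirror the table and are otherwise illustrative.

```python
treatments = ["X1", "X2", "X3"]            # 7-, 12-, 17-cent price spreads
store_sizes = ["Large", "Medium", "Small"]  # first blocking factor (rows)
incomes = ["High", "Medium", "Low"]         # second blocking factor (columns)

# Cyclic shifts of the treatment list give a valid Latin square:
# each treatment occurs once per row (store size) and once per column (income).
square = {
    size: {income: treatments[(r + c) % 3] for c, income in enumerate(incomes)}
    for r, size in enumerate(store_sizes)
}

print(f"{'Store size':<10}" + "".join(f"{inc:>8}" for inc in incomes))
for size in store_sizes:
    row = "".join(f"{square[size][inc]:>8}" for inc in incomes)
    print(f"{size:<10}{row}")
```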

Page 53: Experimental  and  Causal-Comparative Designs

Factorial Design

                                  Price Spread
Unit Price Information      7 cents     12 cents     17 cents
Yes                          X1 Y1       X1 Y2        X1 Y3
No                           X2 Y1       X2 Y2        X2 Y3

One misconception is that a researcher can manipulate only one variable at a time. With factorial designs you can deal with more than one simultaneously. In our pricing experiment, we are interested in finding the effect of posting unit prices on the shelf to aid shopper decision making. The design above includes both the price differentials and the unit-pricing factor. This is known as a 2x3 factorial design: one factor has two levels, the other three. Stores are randomly assigned to one of the six treatments. Results can answer the following questions (a sketch of the layout follows):

What are the sales effects of different price spreads between the company and national brands?

What are the sales effects of using unit-price marking on the shelves?

What are the sales-effect interrelations (interactions) between price spread and the presence of unit-price information?
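A sketch of the 2x3 layout with invented cell means, showing how the three questions map onto two main effects and an interaction; all sales figures are made up.

```python
from statistics import mean

unit_price = ["yes", "no"]        # factor 1: unit-price information (2 levels)
spreads = [7, 12, 17]             # factor 2: price spread in cents (3 levels)

# Hypothetical mean sales per cell of the 2x3 design (invented numbers).
sales = {
    ("yes", 7): 120, ("yes", 12): 135, ("yes", 17): 150,
    ("no", 7): 110,  ("no", 12): 118,  ("no", 17): 122,
}

# Main effect of price spread: average over the unit-price factor.
for s in spreads:
    print(f"{s}-cent spread, mean sales: {mean(sales[(u, s)] for u in unit_price):.1f}")

# Main effect of unit-price information: average over the spread factor.
for u in unit_price:
    print(f"unit pricing={u}, mean sales: {mean(sales[(u, s)] for s in spreads):.1f}")

# Interaction: does the benefit of unit pricing change with the spread?
for s in spreads:
    print(f"unit-pricing advantage at {s} cents: {sales[('yes', s)] - sales[('no', s)]}")
```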

Page 54: Experimental  and  Causal-Comparative Designs

Covariance Analysis

• You can directly control extraneous variables through blocking

• It is also possible to apply some degree of indirect statistical control over one or more variables through analysis of covariance

• In our store example, we carried out a completely randomized design, only to discover later a contamination effect from differences in average customer income levels

• With covariance analysis, you can still do some statistical blocking on average customer income even after the experiment has been run (a sketch follows)
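A sketch of such an after-the-fact adjustment using the statsmodels formula API; the data frame, column names, coefficients, and noise are assumptions for illustration, not results from the slides.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated completely randomized experiment: 18 stores, three price
# spreads, plus average customer income as a covariate that also
# influences sales (illustrative data only).
n = 18
df = pd.DataFrame({
    "price_spread": np.repeat([7, 12, 17], n // 3),
    "avg_income": rng.normal(50, 10, n),          # in $1,000s (assumed)
})
df["sales"] = (
    100
    + 1.5 * df["price_spread"]                    # assumed treatment effect
    + 0.8 * df["avg_income"]                      # assumed income effect
    + rng.normal(0, 5, n)
)

# ANCOVA: treatment as a categorical factor, income as the covariate.
model = smf.ols("sales ~ C(price_spread) + avg_income", data=df).fit()
print(model.summary())
```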

Page 55: Experimental  and  Causal-Comparative Designs

Field Experiments: Quasi or Semi Experiments

• In the field you often cannot control enough of the extraneous variables or the experimental treatment to use a true experimental design. Because the stimulus condition occurs in a natural environment, a field experiment is required.

Page 56: Experimental  and  Causal-Comparative Designs

Modern Day Bystander and Thief

• Electronic surveillance to prevent shrinkage due to shoplifting

• A shopper comes to the counter and asks a salesperson behind the counter to see special designer frames. The salesperson, a confederate of the researcher, replies that she will get them from another case and disappears. The thief then selects two pairs of sunglasses from an open display, deactivates the security tags at the counter, and walks out of the store.

Page 57: Experimental  and  Causal-Comparative Designs

Modern Day Bystander and Thief

• 25% of the subjects (store customers) reported the theft upon the return of the salesperson

• 63% reported it when the salesperson asked

• Unlike previous studies, the presence of a second customer did not reduce the willingness to report the theft

• Notice that this study was not possible with a control group, a pretest, or randomization of customers

Page 58: Experimental  and  Causal-Comparative Designs

Nonequivalent Control Group Design

O1 X O2

O3 O4

This differs from the pretest-posttest control group design because the test and control groups are not randomly assigned. There are two varieties. The first is the intact equivalent design, in which membership is naturally assembled (e.g., using different classes in a school). The second, the self-selected experimental group design, uses recruited members and is weaker. A comparison of the pretest results (O1 and O3) is one indication of the degree of equivalence between the groups.

Page 59: Experimental  and  Causal-Comparative Designs

Separate Sample Pretest-Posttest Design

R O1 (X)

R X O2

This design is most applicable when we cannot know when and to whom to introduce the treatment but we can decide when and whom to measure. The bracketed treatment (X) indicates that the experimenter cannot control the treatment. Assume a company is planning an intense campaign to change its employees’ attitudes toward energy conservation. It might draw two random samples of employees, one of which is interviewed about energy-use attitudes before the information campaign. After the campaign, the other sample is interviewed.

Page 60: Experimental  and  Causal-Comparative Designs

Group Time Series Design

• A time series design introduces repeated observations before and after the treatment and allows subjects to act as their own controls

• A single treatment group has before-after measurements as the only controls

• There is also a multiple design, with two or more comparison groups as well as the repeated measurements

• Especially useful where regularly kept records are a natural part of the environment

• The time series approach is also a good way to study unplanned events in an ex post facto manner (a sketch follows this list)

• Example: federal price controls – before and after records
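A sketch of a simple interrupted time-series comparison on simulated monthly records; the series length, intervention point, and level shift are invented for illustration.

```python
import random
from statistics import mean

random.seed(3)

# Simulated monthly records: 12 observations before and 12 after an
# unplanned event (e.g., a price-control policy), with an assumed +8 shift.
before = [100 + random.gauss(0, 3) for _ in range(12)]
after = [108 + random.gauss(0, 3) for _ in range(12)]

pre_mean, post_mean = mean(before), mean(after)
print(f"Mean of the 12 pre-event observations:  {pre_mean:.1f}")
print(f"Mean of the 12 post-event observations: {post_mean:.1f}")
print(f"Apparent level shift after the event:   {post_mean - pre_mean:.1f}")

# A fuller analysis would also check for pre-existing trends and, in the
# multiple design, compare against a second (comparison) series.
```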

Page 61: Experimental  and  Causal-Comparative Designs

Experiments

• Ability to uncover causal relationships

• Provisions for controlling extraneous and environmental variables

• Convenience of creating test situations rather than trying to look for them

• Replicating findings to rule out idiosyncratic or isolated results

• Ability to exploit naturally occurring events

Page 62: Experimental  and  Causal-Comparative Designs

Question to Answer

• Describe how you would operationalize variables for experimental testing in the following research question: What are the performance differences between 10 microcomputers connected in a LAN and one minicomputer with 10 terminals?