Topics Ahead
Week 10-11: Experimental design; Running experiments
Week 12: Survey Design; ANOVA
Week 13: Correlation and Regression; Non-Parametric Statistics
Week 14: Computational Methods; Simulation; Design Research
Week 15: Research Report Writing
Week 16: Final Presentation
Presentations Week 13
One slide on research question
Literature review: structured; holes you will fill in
Your research design: general design and key components
Data: type, sources, possible analysis methods
Week 16:
Everything
Detailed design
Data collection method
Data analysis method
Instruments: survey, experimental protocols, etc.
Experimental Design
Previous Example
New Tradition
Goal? New Tradition = ?
Challenges
Internal validity
How to guarantee what you have observed is true?
External validity
How to guarantee what you have observed can be applied in other situations?
Construct validity
To what extent does your study measure the construct of interest?
Threats to Internal Validity: Biased Samples
Group selection:
Self-identified
Assigned by researchers
Assigned arbitrarily
Are the groups equal?
Choosing groups based on their differences results in groups that are different; the difference should be less than 5%.
How About Matching?
Almost impossible to perfectly match individual participants
No identical participants
Too many relevant factors/variables: some we know, but some we don’t
Matching on pretest scores: selection-by-maturation interactions (participants growing in different ways)
How can you make sure equal scores mean subjects are equal?
So …
Trying to keep everything except the treatment constant is very difficult, if not impossible
Selection is a big problem
Internal validity is threatened.
The only option is to rule out extraneous variables:
Use random assignment
Identify extraneous variables and then try to rule them out
Dilemma
Attempts to rule out threats to internal validity may hurt external validity
Not studying a heterogeneous group of participants
Not studying participants in a naturalistic setting
You Need To
Be cautious about accepting cause-effect statements
If the study is not an experiment, its internal validity is probably threatened by at least one of the eight threats to validity identified by Campbell and Stanley (1963)
Some Threats To Internal Validity
History: events that occur between treatments
Testing: changes resulting from the practice and experience gained in the test
Instrumentation: the change of the way subjects are measured
Regression effects: selection of subjects based on extreme scores
Mortality: the loss of subjects
Selection: groups are selected differently
Maturation: treatment effects are really due to natural growth or development
Selection by maturation interactions: participants growing in different ways
Experimental Study
Experimental designs can rule out most threats to internal validity.
What Is An Experimental Study?
Basic Logic
Start with a hypothesis
Causal relationship
Independent Variable → Dependent Variable
Manipulate the independent variable (IV) and measure the dependent variable (DV)
Have experimental and control groups
Similar, but treated differently
Randomly selected
Statistically analyze the difference
Significant result or not?
Key Issue
Apply treatment on one group but not the other.
The only difference between two groups is the treatment.
If DV shows any difference, it is due to the treatment!
All other factors should be the same, at least in theory.
Task, observation instrument, procedure.
Subjects
HOW CAN WE GUARANTEE SUBJECTS IN TWO GROUPS ARE THE SAME?
Group Assignment
Random assignment:
Select subjects for the two groups from the same subject pool.
Assign them to the two groups randomly.
Without random assignment, you do not have an experiment
Methods for Random Selection
Random table
Pick a series of numbers to sort by
Excel
Generate random numbers and then sort.
Toss a coin
Tails → experimental group
Heads → control group
Using a Random Table
Steps
Using Excel
Generate a set of random numbers
Use the RAND() function
Copy and paste as values (so the numbers do not recompute)
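The same "generate a random number per subject, then sort" idea can be sketched in a few lines of Python. This is only an illustrative sketch, not part of the lecture; the function name and subject IDs are hypothetical.

```python
import random

def random_assignment(subjects, seed=None):
    """Randomly split a subject pool into two equal groups.

    Shuffling the pool is equivalent to attaching a random number to
    each subject and sorting, as in the Excel approach.
    """
    rng = random.Random(seed)       # seed only for reproducibility
    shuffled = subjects[:]          # copy so the original pool is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # (experimental, control)

pool = ["S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08"]
experimental, control = random_assignment(pool, seed=42)
```

Every subject ends up in exactly one group, and which group is determined purely by chance.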
IV, DV, and CONTROL
Data Collection and Analysis
Knowing what to measure: variables
Calculating means: sample means
Comparing sample means: statistical methods
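As a minimal sketch of the "calculate and compare sample means" step, with hypothetical DV scores for the two groups:

```python
from statistics import mean

# Hypothetical DV measurements for each group
experimental_scores = [78, 85, 90, 74, 88]
control_scores = [70, 72, 81, 69, 75]

# Difference between the two sample means
diff = mean(experimental_scores) - mean(control_scores)
```

Whether a difference like this is real or just random noise is what the statistical test (below, a t-test) decides.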
Independent Variable
Treatment
Simple experiments
Treatment: yes and no
Experimental group: applied
Control group: not applied
Treatment: different levels and no
Multiple experimental groups and one control group
Manipulating the IV
Making the treatment observable and significant
Statistical Analysis
Rule out random errors: errors = random errors + systematic errors caused by the treatment
Statistically significant results
You can declare the causal effects.
May not be the same direction you expected
Non-significant results
You didn’t find the effect
But that doesn’t mean the effect doesn’t exist
Type of Experiments
Often Seen Types
Simple experiment
One independent variable
Two groups
Multi-group experiment
One independent variable
Multiple groups (>2)
Factorial design
Two or more independent variables
At least four groups
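Why "at least four groups"? Each combination of IV levels is one group, so with two IVs of two levels each you get 2 × 2 = 4 conditions. A hypothetical 2×2 example (the IV names are made up for illustration):

```python
from itertools import product

# Hypothetical 2x2 factorial design: two IVs with two levels each
interface = ["old", "new"]         # IV 1
training = ["none", "tutorial"]    # IV 2

# Every combination of levels is one experimental condition
groups = list(product(interface, training))
```

With more levels or more IVs the group count multiplies accordingly (e.g., a 3×2 design needs six groups).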
Simple Experiment
t-test
Could be between- or within-subject design
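For a between-subject simple experiment, the two-sample t statistic can be computed from the group means and variances. This is a sketch of Welch's version (which does not assume equal variances); the function name and data are our own, and in practice you would read the p-value from a t distribution or use a statistics package.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic.

    variance() is the sample variance (n-1 denominator).
    A larger |t| means the group means differ by more standard errors.
    """
    na, nb = len(a), len(b)
    se = sqrt(variance(a) / na + variance(b) / nb)  # standard error of the difference
    return (mean(a) - mean(b)) / se

# Hypothetical DV scores for the two groups
t = welch_t([78, 85, 90, 74, 88], [70, 72, 81, 69, 75])
```

The resulting t is then compared against the critical value for the chosen significance level to decide whether the difference is statistically significant.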
Questions Raised by Results
Questions raised by non-significant results:
Enough participants?
Participants homogeneous enough?
Experiment sufficiently standardized?
Data coded carefully?
DV sensitive and reliable enough?
Pilot Study
Testing various aspects of the study:
Subjects
IV
DV
Procedure and protocols
With a smaller group of subjects
With the SAME procedure and protocols
Must have all experimental materials ready before the pilot study
Issues to look at: procedure and protocols (e.g., instructions, tasks)
Data collection method
The number of subjects required