IT3010 Lecture 9 - Experiments

DESCRIPTION

This lecture is about designing and conducting experiments.


IT3010 / TDT39 Research Methodology

Week 9: Experiments


Figure 3.1 in: B. J. Oates, Researching Information Systems and Computing. London: Sage Publications, 2006.

The research process

Experiments

What do you think about when you hear experiment?

Experiment: Definition

• A strategy that investigates cause-and-effect relationships.

• Tries to prove or disprove that a cause-and-effect hypothesis is true:
  – "A causes B", "A increases B's occurrence", "A eliminates B".

• Should include many instances, as opposed to a case study.

• Should include only a few study parameters, as opposed to a case study.

J. McGrath

Experiment: Characteristics

• Precise observations and measurements.
• Pre-test and post-test observations.
• Proving or disproving a hypothesis.
• Identification of causal factors, i.e. one-directional links.
• Explanation and prediction.
• Repetition.

[Diagram: Independent variable (cause) → Dependent variable (effect)]

Conducting an experiment

• Find the hypothesis to be tested.
  – Must be testable and disprovable.

• Find the dependent and independent variable(s).
  – Altering the independent variable changes the dependent variable.
  – A (independent) causes B (dependent).

• Find the control mechanisms.
  – Mechanisms that help you control all contaminating variables.

• Observe and measure.
  – Often quantitative data is collected through structured observations.
  – Remember before and after observations (or control groups).

• Be careful about internal and external validity.

• Document everything so that others can repeat the experiment.
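The before/after, two-group procedure above can be sketched as a short analysis. Everything here is illustrative, not from the lecture: the data are made-up task-completion times, and Welch's t statistic is one common (assumed) choice for comparing the pre/post gains of a treatment group against a control group.

```python
import statistics

# Hypothetical data: task-completion times (seconds) for each subject,
# measured before (pre-test) and after (post-test) the intervention.
control_pre  = [52, 48, 55, 50, 47, 53]
control_post = [51, 49, 54, 50, 46, 52]
treat_pre    = [53, 49, 51, 50, 48, 54]
treat_post   = [44, 41, 45, 43, 40, 46]

def gains(pre, post):
    """Per-subject change from pre-test to post-test."""
    return [after - before for before, after in zip(pre, post)]

def welch_t(x, y):
    """Welch's t statistic for two independent samples."""
    mx, my = statistics.mean(x), statistics.mean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (mx - my) / ((vx / len(x) + vy / len(y)) ** 0.5)

# A strongly negative t suggests the treatment group improved
# (got faster) more than the control group did.
t = welch_t(gains(treat_pre, treat_post), gains(control_pre, control_post))
print(f"Welch t on pre/post gains: {t:.2f}")
```

Comparing *gains* rather than raw post-test scores is what the "before and after observations" bullet buys you: each subject serves as their own baseline.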

Controlling unwanted factors

• Eliminate the factor from your experiment.
  – E.g. via exclusion criteria: "Exclude students with programming skills".

• Hold the factor constant, if you cannot eliminate it.
  – E.g. via inclusion criteria: "Include only seniors 60-65 years old".

• Use large random selection.
  – E.g. in opinion surveys. Let the statistical distribution take care of it.

• Use control groups.
  – Similar groups; the only difference is the change in the independent variable.

• Blind experiments.
  – Control for researcher and subject bias.
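The "large random selection" and "control groups" bullets combine into random assignment: letting chance, not the researcher, decide who lands in which group so that unwanted factors balance out on average. A minimal sketch (the function name and fixed seed are my own choices, shown only for reproducibility):

```python
import random

def assign_groups(participants, seed=42):
    """Randomly split participants into two equal-sized groups.

    Random assignment balances unwanted factors across the groups
    by chance, rather than by the researcher's judgment.
    """
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    shuffled = participants[:]     # don't mutate the caller's list
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Hypothetical participant IDs.
control, treatment = assign_groups(list(range(20)))
```

Note that this balances factors only *in expectation*; with small groups you should still check that the groups look similar on known contaminating variables.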

[Diagram: Cause → Effect, with an unwanted factor also influencing the effect]

Validity threats

• Internal validity: show that results are attributable only to changes in the independent variable. Threats:
  – Differences between experimental and control group.
  – History, i.e. "what has happened in between".
  – Maturation, due to age, practice, boredom etc.
  – Instrumentation, i.e. faulty measurement equipment.
  – Experimental mortality, i.e. changes in the observed groups' composition.
  – Reactivity and experimenter effects, e.g. "behaving correctly".

• External validity: show that your results are generalizable. Threats:
  – Using only special types of participants, e.g. students.
  – Using samples that are not representative of the population.
  – Too few participants.
  – Non-representative test cases.

Types of experiment

• "Pure" lab experiment
  – High control over parameters.
  – Unrealistic settings.

• Quasi-experiments, or field experiments
  – Realistic settings.
  – Free flow of contaminating factors, so it is difficult to draw conclusions.

• Uncontrolled trials
  – Fake experiments.
  – Forget it if you don't have pre-test measurements.
  – Better to use a case study.

Advantages and disadvantages

• Advantages:
  – Well-established method.
  – The only way to show cause-and-effect relationships.
  – Lab experiments avoid the costs associated with field work.

• Disadvantages:
  – Creates artificial situations that don't exist in the IT world.
  – Often impossible to control all the parameters.
  – Difficult to recruit representative samples.
  – Bias can invalidate results.

Next week (46)

• Can we finalize paper presentations?
