
ACADEMIC EMERGENCY MEDICINE • January 1999, Volume 6, Number 1

Research Fundamentals: IV. Choosing a Research Design

KENT N. HALL, MD, RASHMI U. KOTHARI, MD

Abstract. Once a research question or hypothesis has been derived, the investigator must determine which research methodology can best answer his or her question. Prospective, randomized, controlled trials are often considered the sine qua non of research design. However, this study design is not always feasible, and often an alternate design will adequately answer the question at significantly less cost. All research designs have potential advantages and limitations. The decision of which study design to use is often a compromise between science and resources.

This article was prepared by members of the SAEM Research Committee to describe the fundamental research concepts of research design. This paper defines different research methodologies and discusses their different uses, strengths, and weaknesses. It also describes the process of randomization and blinding. Finally, the concept of bias and its remedies is delineated. Key words: clinical trial methods; prospective; retrospective; cohorts; case–control; interventional study designs; bias; research. ACADEMIC EMERGENCY MEDICINE 1999; 6:67–74

BASIC RESEARCH PARADIGMS (TABLE 1)

Most medical research is performed 1) to establish the frequency of a disease or characteristic, 2) to compare two groups, or 3) to establish causation between a therapy or agent and an endpoint or outcome. The research design chosen to fulfill these goals must be able to answer the question posed by the researcher. Study designs can be categorized as observational or interventional. Observational studies, in which the investigator observes events without attempting to influence them, are done to answer the first two types of questions. Following a population over a period of time, and then analyzing the differences between those who do and do not get a specific disease, is an example of an observational study. Observational studies can identify an association between two variables. However, they cannot establish a causal link. For example, the result of an observational study may be that hypertension and stroke are associated with each other, but the statement cannot be accurately made that hypertension causes stroke, based on the result of an observational study alone.

From the Department of Emergency Medicine, University of Cincinnati, Cincinnati, OH (KNH, RUK). Series editor: Roger J. Lewis, MD, PhD, Department of Emergency Medicine, Harbor–UCLA Medical Center, Torrance, CA. Accepted September 11, 1998. Address for correspondence and reprints: Rashmi U. Kothari, MD, University of Cincinnati Medical Center, Department of Emergency Medicine, 231 Bethesda Avenue, Cincinnati, OH 45267-0769. Fax: 513-558-5791; e-mail: [email protected]

Interventional studies, also known as clinical or experimental trials, are used to address the question of causation. In this study type the investigator intervenes in the events being studied by manipulating a specific variable, and observing the effect of this manipulation on a specified outcome. For example, a clinical trial might be designed to investigate the effects of a new drug on outcome after ischemic stroke. In such a trial, the investigators would intervene in the management of the patients by the delivery of the drug or an appropriate placebo in order to establish whether the use of the drug would result in an improved neurologic outcome.

Observational studies can be subdivided into retrospective and prospective studies. The primary difference between these subdivisions is the timing of the measurement of the outcome variable (Table 1). In retrospective studies, the event of interest has already occurred. The investigator divides the study population into subjects and controls based on whether they do or do not have the outcome variable. Comparisons are then made between these two groups. For example, if an investigator wanted to retrospectively determine whether there is an association between smoking and myocardial infarction (MI), he or she would look at groups of patients with and without MI. The prevalence of smoking in both of these samples would then be determined.

In a prospective study, the outcome variable has not yet occurred; the investigator will measure it in the future. The investigator follows the population over time, looking for the presence of the outcome variable.

TABLE 1. Characteristics of Observational and Interventional Research Designs

Case–control (observational), prospective
  Characteristics: Select sample from population at risk with disease. Select sample from population at risk without disease. Measure predictor variables to see if they occur.
  Uses: Study rare conditions. Gives odds ratio.
  Strengths: Short duration. Relatively inexpensive. Relatively small.
  Weaknesses: Bias from sampling two populations. No sequence of events established. Biases: predictor measurements; survivor bias. Only one outcome variable. No information on prevalence, incidence, or excess risk.

Case–control (observational), retrospective
  Characteristics: Select sample from population at risk with disease. Select sample from population at risk without disease. Measure predictor variables that have already occurred.
  Uses: Study rare conditions. Gives odds ratio.
  Strengths: Short duration. Relatively inexpensive. Relatively small.
  Weaknesses: Bias from sampling two populations. No sequence of events established. Biases: predictor measurements; survivor bias. Only one outcome variable. No information on prevalence, incidence, or excess risk.

Cohort (observational), prospective
  Characteristics: Select sample from population. Predictor variables measured. Sample followed. Outcome variable(s) measured. Comparison between cohorts with and without outcome(s) compared.
  Uses: Study several outcomes. Gives incidence, relative risk, excess risk.
  Strengths: Avoids measuring predictors and survival bias. Number of outcome events increases over time. More control over subject selection and measurements.
  Weaknesses: Often requires large sample sizes. Not useful in rare conditions. Can be expensive. Lasts longer.

Cohort (observational), retrospective
  Characteristics: Identify cohort previously assembled. Measure variables that occurred in past. Follow up cohort. Measure outcome variables (now or past). Comparison between cohorts with and without outcome(s) measured.
  Uses: Study several outcomes. Gives incidence, relative risk, excess risk.
  Strengths: Avoids measuring predictors and survival bias. Number of outcome events increases over time. Less expensive. Shorter.
  Weaknesses: Often requires large sample sizes. Not useful in rare conditions. Less control over subject selection and measurements.

Cross-sectional (observational)
  Characteristics: Select sample from population. Measure predictor and outcome variables.
  Uses: Study several outcomes. Exploratory before cohort study. Gives prevalence, and relative prevalence.
  Strengths: Control over subject selection and measurements. Short duration.
  Weaknesses: Frequency of events not established. Biases: predictor measurements and survivor bias. Not useful in rare conditions. No information on incidence or true relative risk.

Interventional
  Characteristics: Select sample from population. Measure baseline variables. Randomize. Apply interventions. Follow cohorts. Measure outcome variable(s).
  Uses: Suggest causation.
  Strengths: Produces strongest evidence of causation. Sometimes gives faster, less expensive answer than observational studies.
  Weaknesses: Often expensive and time-consuming. Some questions unsuitable for experimental design (ethical barriers or outcomes too rare). Standardized interventions different from common practice. Answer very limited question only.

He or she then compares those subjects who develop the outcome variable with those who do not. Applying a prospective design to the above example, the investigator might identify a study population, and divide it into two groups based on the smoking status of each individual. He or she would then follow the entire population over time and record the incidence of MI in smokers and nonsmokers. Types of observational studies include case–control, cohort, and cross-sectional. All experimental (i.e., interventional) studies are prospective.

Establishing the Frequency of a Condition.

Establishing the frequency of a condition is best done using an observational study design, specifically a cross-sectional study. For example, we may want to assess the frequency of acute myocardial infarctions (AMIs) in smokers in our ED population. In a cross-sectional study, the study sample is selected (e.g., the ED population), and the outcome variable (e.g., AMI) and the predictor variable (smoking) are measured simultaneously. Because the data are collected at one point in time, the investigator has no control over extraneous factors. In contrast to a cohort study (which is discussed later), there is no follow-up period in a cross-sectional study.

Cross-sectional studies are effective in determining the prevalence of a disease (the proportion of the population that has a disease at a particular point in time) and the coexistence of associated variables. They do not give us information about disease incidence (the proportion of the population that gets an illness over a specified period of time). Cross-sectional studies can usually be done quickly and avoid the problem of losing patients to follow-up (which can introduce bias in the results). Cross-sectional studies are often preludes to prospective cohort or experimental studies because they are usually quick and relatively inexpensive. They provide a "snapshot" of baseline demographics and clinical characteristics, and can sometimes reveal associations that will allow refinements in the planning of more detailed studies.
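As a concrete illustration of the prevalence/incidence distinction, the short sketch below computes both from invented counts; the ED scenario and all numbers are hypothetical, not data from any study.

    # Hypothetical illustration of prevalence vs. incidence (all counts invented).

    # Cross-sectional snapshot: 1,200 ED patients surveyed on one day,
    # 90 of whom currently have the disease of interest.
    surveyed_today = 1200
    with_disease_today = 90
    prevalence = with_disease_today / surveyed_today   # proportion with disease at one point in time
    print(f"Prevalence: {prevalence:.1%}")             # 7.5%

    # Cohort follow-up: 1,000 initially disease-free patients followed for one year,
    # 40 of whom develop the disease during that year.
    at_risk = 1000
    new_cases_in_year = 40
    incidence = new_cases_in_year / at_risk            # new cases per person at risk over the period
    print(f"One-year incidence: {incidence:.1%}")      # 4.0%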

Cohort studies can also be used to establish the frequency of a disease or event. A cohort is a group of patients with a specific characteristic (e.g., patients with asthma). An investigator may want to answer the question, "Do patients with asthma have concurrent upper respiratory infections (URIs) more often than patients who do not have asthma?" Using a retrospective cohort study design, the investigator might look at all ED records of the cohort of patients with asthma (the outcome variable), and match these records with the cohort of patients without asthma.


The investigator can then review the records to see how often the presence of a URI (the predictor variable) was documented in these two cohorts. The retrospective nature of this study design means the investigator is dependent on documentation of a URI (and an asthma exacerbation) to properly classify patients.

If the same question is answered using a prospective cohort study, the same data are obtained. However, rather than reviewing past cases, the cohorts are defined (e.g., patients with asthma, patients without asthma who are otherwise similar), the parameter of interest (e.g., presence of URI) is established, and the information is collected prospectively. The prospective format is superior to the retrospective design because the investigator is more likely to capture all eligible cases of interest (e.g., the investigator can define exactly what he or she considers an "asthmatic" and then ensure that all cases match this definition). In addition, prospective data collection is more likely to be complete, and the data set is less subject to interpretation. However, a prospective cohort study takes longer and costs more than a retrospective study because the investigator must wait for the cases to present. When studying a rare disease or infrequent clinical condition, a prospective cohort design may be prohibitively expensive and logistically not feasible.

Cohort studies are particularly effective at describing the incidence of disease and analyzing associations between risk factors and outcomes. Though a cohort study may show a temporal link between a predictive factor (e.g., URI) and an endpoint (e.g., asthma exacerbation), it cannot "prove" that the observed factor was the cause of the endpoint. This is an error of interpretation commonly made in the lay press.
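Table 1 notes that cohort designs yield incidence, relative risk, and excess risk. A minimal sketch with invented counts (a hypothetical URI/asthma-exacerbation cohort, not the authors' data) shows how those quantities fall out of the exposed and unexposed groups:

    # Hypothetical cohort: exposed = patients with URI, unexposed = patients without URI.
    # Outcome = asthma exacerbation during follow-up. All counts are invented.
    exposed_total, exposed_events = 500, 60
    unexposed_total, unexposed_events = 500, 30

    incidence_exposed = exposed_events / exposed_total        # 0.12
    incidence_unexposed = unexposed_events / unexposed_total  # 0.06

    relative_risk = incidence_exposed / incidence_unexposed   # 2.0: outcome twice as frequent in exposed group
    excess_risk = incidence_exposed - incidence_unexposed     # 0.06: absolute risk added by the exposure

    print(f"Incidence (exposed):   {incidence_exposed:.2f}")
    print(f"Incidence (unexposed): {incidence_unexposed:.2f}")
    print(f"Relative risk:         {relative_risk:.1f}")
    print(f"Excess risk:           {excess_risk:.2f}")

As the text cautions, a relative risk above 1.0 demonstrates an association, not that the exposure caused the outcome.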

Comparing Groups. Sometimes investigators want to compare different patient groups to determine better ways of medical management. For example, we may wish to determine whether affiliation with a primary care physician has an effect on an asthmatic patient's ability to use a metered dose inhaler (MDI). If we find that patients without a primary care physician are less able to appropriately use a MDI, we could then institute an educational program for these patients when they present to the ED. The best study design to make this comparison is a prospective cohort study. Alternative design choices would be a retrospective cohort study or a case–control study.

Using this example, a prospective cohort study design could be used to compare ED asthma patients who have a primary care physician with those who do not. Patients with asthma would be identified using predefined criteria for inclusion into the study. These patients would then be divided into two cohorts, those with and without a primary care physician. The ability to use a MDI would be determined prospectively, and analysis comparing these two cohorts performed. Because the data points are decided beforehand, and a data collection instrument is specifically developed to prospectively collect the necessary information, the data should be complete. The prospective nature of the study requires a significant investment of time, effort, and dollars.

The same study could be done in a retrospective time frame. In this circumstance, patients would be identified based on their ED discharge diagnosis (asthma) and whether they reported having a primary care physician. Their medical records would be reviewed to assess their ability to use a MDI. Because the data are being collected from previously accrued information, they are subject to bias. For example, if the patient had a primary care physician, but did not declare this on ED admission, he or she would be included in the wrong cohort. Similarly, if the documentation of the patient's ability to use a MDI is not included in the medical record (which is often true), then the outcome for this patient is unknown. The meaning of these missing data must be interpreted by the researcher. Is it because the patient obviously knew how to use the MDI, or because the provider did not evaluate the patient's ability to use it?

This study could also be performed using a case–control design. However, the case–control design is best suited for instances when the disease of interest is unusual. In case–control studies, individuals with (cases) and without (controls) a disease or outcome are identified. The investigator then reviews records or interviews the cases and controls to determine what factor(s) of interest are present in each of these groups. For example, a prospective study comparing asthma patients with patients suffering only from nocturnal asthma might take an inordinately long time to accumulate a meaningful number of patients with nocturnal asthma. With a case–control design, the researcher would define the type of case of interest (e.g., patients with nocturnal asthma), identify those cases, and match them to controls (e.g., other asthmatics) on all important variables (e.g., age, sex, years with asthma symptoms). Because of the retrospective nature of this study, the data are more likely to be inaccurate or incomplete. Further, some of the variables would be open to interpretation because no predefined parameters existed when the data were collected. For example, if the medical record indicates a patient has nocturnal asthma, the question must be asked, "What criteria were used to make this diagnosis?"

Establishing Causation. Thus far, the research methods discussed have centered on establishing associations between groups.


TABLE 2. Glossary of Terms Used in Study Design

Bias: A systematic error in study design that results in variation that distorts the study findings in one direction.

Blinding: A method to decrease bias in which the subject or investigator (single blinding) or both (double blinding) are unaware of the subject's assignment to interventional or control samples.

Cohort design: Individuals are separated into groups (cohorts) based on specific characteristics. Prospective cohort design involves enrolling subjects and measuring predictor variables before the outcome variable has occurred. In retrospective cohort design, the investigator measures predictor variables after the outcome variable has occurred.

Confounding: When a third variable is a cause of both the predictor and outcome variables.

Cross-sectional design: Similar to cohort study design, except that outcome and predictor variables are measured at the same time.

Incidence: The number of new cases of a disease over a period of time divided by the number of people at risk over the same time period.

Interventional design: A special type of cohort study in which the investigator manipulates a predictor variable, and measures the effect of this manipulation on a specific outcome variable.

Odds ratio: An approximation of the relative risk, obtained using a case–control study design. The odds ratio is calculated from the following table:

                         Disease    No Disease
  Risk factor present       a           b
  Risk factor absent        c           d

  Odds ratio = (a × d) / (b × c).

Outcome variable: That endpoint variable identified by the investigator as the measurement of interest used to assess study results.

Prospective design: A study design in which predictor variables are selected and measured before the outcome of interest occurs.

Prevalence: The number of people who have a disease at one point in time divided by the number of people at risk at that point.

Randomization: Technique whereby all study subjects have equal probabilities of being chosen.

Retrospective design: A study design in which the sample is defined and data are collected after the outcome has occurred.

Validity: The degree to which a variable actually represents what it is supposed to be representing. External validity is the degree to which a finding in a study represents the population as a whole. Internal validity is the degree to which a finding from a single experimental study represents the study population within that clinical environment.
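To make the odds-ratio entry above concrete, the sketch below fills the 2 × 2 table with invented counts from a hypothetical case–control study (smoking among MI cases and controls, echoing the earlier example); it simply applies the (a × d) / (b × c) formula and is not real data.

    # Hypothetical case-control counts (a, b, c, d laid out as in Table 2); all numbers invented.
    #                   Disease (cases)   No disease (controls)
    # Risk factor +           a = 80             b = 40
    # Risk factor -           c = 20             d = 60
    a, b, c, d = 80, 40, 20, 60

    odds_ratio = (a * d) / (b * c)           # odds of exposure among cases vs. controls
    print(f"Odds ratio: {odds_ratio:.1f}")   # 6.0 -> exposure is associated with the disease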

In many circumstances, the investigator is instead interested in evaluating a cause–effect relationship between variables.

Only an interventional (experimental) study that controls important variables that may otherwise skew the results (confounders) can establish causation. If ED patients with an asthma exacerbation are treated with magnesium and seem to improve enough so that they do not require hospital admission, it is tempting to speculate that magnesium has had a significant effect on their outcome. However, until an interventional study is performed, where patients are randomly distributed to magnesium and no-magnesium protocols, and confounding variables (such as the severity of the asthma exacerbation) are equally distributed between the two groups, causation cannot be established.

Interventional (experimental) studies can generally be considered as either therapeutic or preventive. Therapeutic trials are conducted on patients with a particular disease to determine the ability of an agent or procedure to diminish symptoms, prevent recurrence, or decrease morbidity or mortality from that disease.


Preventive interventional trials involve the evaluation of whether an applied agent or procedure reduces the risk of developing the disease among individuals without the disease at the time of study enrollment.

Interventional studies are structured similarly to cohort studies. The investigator identifies predictive factors and outcomes of interest. In the above example of the use of magnesium in asthma, the predictive variable is the use of magnesium or placebo as part of a standardized therapeutic regimen. The outcomes of interest might be the need for hospital admission, total time in the ED, and improvement in peak expiratory flow rate (PEFR). Recording of responses is done prospectively; therefore, more complete data are collected for each subject. The major strength of interventional studies is the ability to control many confounding factors. In many ways, interventional studies resemble the controlled experiments done by basic scientists and provide the strongest evidence for cause and effect. In most instances, interventional studies can be done more quickly than prospective observational studies.

Although interventional (experimental) trials offer the strongest support of a causal relationship, they have numerous limitations. They are often expensive and time-consuming, especially if the difference between the treatment and control groups is small or if the disease of interest is rare. In addition, ethical considerations may preclude the evaluation of an exposure or new treatment, and do preclude the withholding of already proven therapies. If, for example, we wanted to test the efficacy of magnesium alone in management of the patient with an acute asthma exacerbation, we would have to withhold β-agonist therapy. Because the benefit from magnesium therapy alone is unknown, while the benefit of β-agonist therapy is well known, this would be unethical.

RANDOMIZATION AND BLINDING

Fundamental to the validity (Table 2) of interventional studies is the establishment of comparable populations (through random sampling or selection of appropriate controls) and blinding of the observers. Randomization is the process of assigning patients to "intervention" and "control" groups in a manner not influenced by the investigator. Techniques of randomization include simple, stratified, and systematic. Simple randomization occurs when patients are randomized into one of the treatment groups without consideration of their clinical situations. By definition, all patients must have equal chances of being selected into each study group. Random number tables are often used to assign patients to particular study groups when simple randomization is used.
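A minimal sketch of simple randomization, assuming a Python random-number generator stands in for a printed random number table; the group labels and the number of enrollees are hypothetical.

    import random

    # Simple randomization: every enrolled patient has the same probability of
    # ending up in either study group, independent of clinical characteristics.
    # Note that group sizes may be unequal by chance.
    groups = ["intervention", "control"]
    patients = [f"patient_{i:03d}" for i in range(1, 21)]   # 20 hypothetical enrollees

    assignments = {p: random.choice(groups) for p in patients}
    for patient, group in assignments.items():
        print(patient, "->", group)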

In stratified randomization, patients are initially placed into strata (groups) based on their clinical characteristics. Characteristics that define these strata may include age, gender, ethnicity, or duration of disease. A random sample is then selected from each stratum. Using this method, all known potentially important subgroups can be represented in the study, and precise estimates of population parameters are derived.
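A sketch of stratified randomization under the same hypothetical setup: patients are first grouped into strata (here, by an invented age cutoff), and assignment is then randomized within each stratum so that both arms are represented in every stratum. The blocked, alternating scheme within strata is an illustrative choice, not the authors' prescription.

    import random
    from collections import defaultdict

    # Hypothetical enrollees with ages; all names and values are invented.
    patients = [("pt01", 25), ("pt02", 71), ("pt03", 34), ("pt04", 68),
                ("pt05", 45), ("pt06", 80), ("pt07", 52), ("pt08", 63)]

    # 1. Place patients into strata based on a clinical characteristic (age here).
    strata = defaultdict(list)
    for name, age in patients:
        strata["age >= 65" if age >= 65 else "age < 65"].append(name)

    # 2. Randomize within each stratum, alternating assignments so both study
    #    arms appear in every stratum (a simple blocked scheme).
    assignments = {}
    for stratum, members in strata.items():
        random.shuffle(members)
        for i, name in enumerate(members):
            assignments[name] = "intervention" if i % 2 == 0 else "control"

    for name, group in sorted(assignments.items()):
        print(name, "->", group)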

In systematic sampling, every nth patient from the potential patient list is selected. This method of sampling allows for random sampling without knowing the exact extent of the population to be studied, and therefore is useful under field conditions. Because of the cyclical nature of this sampling technique, it is not recommended when cyclical trends in the variables of interest occur. For example, if an investigator wanted to document the prevalence of ED visits for alcohol-related illness and injury, and used a sampling time frame that systematically did not include the weekend, the prevalence of the condition of interest would be underestimated. Systematic sampling is a common method of sampling in epidemiologic research.
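A sketch of systematic sampling: every nth entry is taken from a running patient list, which works even when the eventual size of the population is not known in advance. The arrival log and sampling interval below are illustrative assumptions.

    # Systematic sampling: select every nth patient from the arrival log.
    # The log and the interval are hypothetical.
    arrival_log = [f"visit_{i:04d}" for i in range(1, 101)]   # 100 consecutive ED visits
    n = 7                                                     # sampling interval

    sample = arrival_log[n - 1::n]   # every 7th visit: the 7th, 14th, 21st, ...
    print(len(sample), "visits sampled:", sample[:5], "...")

    # Caveat from the text: if the interval lines up with a cyclical pattern
    # (e.g., a schedule that always skips weekends), the sample will be biased.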

There are other methods of randomization, and their descriptions are beyond the scope of this article. It is important to consult an epidemiologist, research design consultant, statistician, or mentor early in the study design process to choose the most relevant randomization strategy for your study.

Blinding refers to the lack of knowledge regarding aspects of the study from the perspective of the subject, the investigator, or both. Double blinding occurs when neither the subject nor the investigator is aware of which treatment group the subject has been randomized to. Sometimes it is not possible to completely blind both the investigator and the subject to the study group assignment. In such cases, single blinding can be used, where either the subject or the investigator is unaware of group assignment. Blinding is one method of decreasing the potential for bias in a study.

BIAS IN STUDY DESIGN

Bias is any systematic error in a study that can result in distortion of the associations under consideration. There are two major categories of bias: selection and observation. Selection bias occurs when the study sample is not representative of the target population of interest. For example, trauma patients managed at a Level 1 trauma center probably do not reflect the population of trauma victims presenting to community hospitals; they may selectively have more severe injuries and thus a higher mortality.


This selection bias can be avoided by rigorous selection criteria defining a study sample that is similar to the population in which the treatment will actually be used.

Another example of selection bias may occur when subjects are not randomly allocated to intervention and control groups in an interventional study. If, in the example of magnesium for asthma management, patients in the magnesium treatment group had a higher initial PEFR, the fact that they had a lower admission rate would not be surprising. In this case, the difference between the groups in the outcome variable (hospital admission) might be because the patients had less severe asthma exacerbations.

Observation or information bias occurs when there is a systematic error in the way data are obtained from various study groups. Examples of observation bias include recall or reporting bias, interviewer bias, and loss of enrolled patients to follow-up. Recall bias occurs when subjects, or their surrogates (e.g., family members), recall events differently because of their experiences. For example, patients with AMI may have vivid memories of events surrounding their AMIs. They may, therefore, be more likely to remember what they were doing immediately preceding their AMIs, and may erroneously identify a particular action as "causing" the infarction.

Interviewer bias, also called observer bias, occurs when the person collecting data is less than objective because of his or her previous experiences, preconceived ideas, prejudices, or knowledge of other outcomes in the case. For example, in performing a retrospective study on risk factors and liver disease, a researcher may review medical records more thoroughly for evidence of alcohol use if the patient is known to have liver disease. Similarly, an investigator who believes that the primary care physician is critical in the successful treatment of asthma may unconsciously look harder for evidence that the patient who identifies a primary care physician understands how to use a MDI.

Bias due to loss of the patient to follow-up occurs in prospective studies when a subject "disappears" from the study for an unknown reason. The effect of this bias depends on the number of patients lost and whether there is a disproportionate loss from either the case or the control group. An example of this type of bias occurs when patients who are being followed for deleterious effects after being triaged out of an ED without being seen cannot be located to determine their subsequent health status. Is the fact they could not be located an indication of a deleterious effect (e.g., did they die), is it because they are so healthy they do not seek other medical care, or does it reflect effects of other variables, such as socioeconomic factors which make it more difficult for the patient to receive medical care?

Bias is best controlled through rigorous study design. Once bias has been introduced, it is possible to control some of its effects in data analysis. However, selection bias cannot be mitigated by analysis, and may completely invalidate an otherwise excellent study. Design features that minimize bias include appropriate choices of study population and sources of data, blinding of subjects and/or investigators to the nature of the experimental group in which the subject is enrolled, and specific features of the data collection process.

Selection of subjects and controls, matched as closely as possible in all characteristics except that being studied (treatment, presence of disease, etc.), greatly reduces the potential for selection bias. This requires the investigator to thoroughly review, and be familiar with, the relevant patient characteristics that may be associated with different outcomes. For example, in evaluating the efficacy of a neuroprotective agent on long-term survival from head trauma, choosing subjects and controls with matched initial Glasgow Coma Scale scores is essential.
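As a sketch of that kind of matching (using the initial Glasgow Coma Scale score from the head-trauma example as the matching variable, with invented subject lists), each treated subject is paired with an untreated control having the same score:

    # Hypothetical matching of treated subjects to controls on initial GCS score.
    # All subject identifiers and scores are invented for illustration.
    treated  = [("t1", 7), ("t2", 10), ("t3", 13)]            # (id, initial GCS)
    controls = [("c1", 13), ("c2", 7), ("c3", 10), ("c4", 7)]

    available = list(controls)
    pairs = []
    for subj_id, gcs in treated:
        # Find an unused control with the same initial GCS score.
        match = next((c for c in available if c[1] == gcs), None)
        if match:
            available.remove(match)
            pairs.append((subj_id, match[0], gcs))

    for subj, ctrl, gcs in pairs:
        print(f"{subj} matched to {ctrl} (initial GCS {gcs})")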

Blinding is an excellent way to decrease observational bias in experimental studies. Knowledge of a participant's treatment status might, consciously or unconsciously, influence the identification or reporting of relevant events by either the patient or the investigator. The likelihood of such bias is directly related to the degree of subjectivity of the outcome under study. If the end point being considered is objective (e.g., death), observation bias is unlikely. If the end point is more subjective (e.g., decreased pain), it is more likely to be biased by a clinician's or patient's knowledge of the assigned study group.

Appropriate construction of data collection instruments and selection and training of study personnel are important methods to decrease bias. Data collection instruments, whether questionnaires, interviews, or physical examinations, should be standardized so that, as nearly as possible, identical data are collected from all subjects, regardless of whether they are "cases" or "controls." In questionnaires and interviewing, closed-ended objective questions are subject to the least bias. Open-ended, subjective questions are prone to recall bias, as well as bias of interpretation by the investigator. Similarly, specific aspects of the physical examination that are important should be included on the data collection instrument. The presence or absence of a finding should be specifically asked, not just presumed. Similarly, if a grading system is to be used (e.g., for heart murmurs), then the definition of the different levels must be clearly delineated for the clinician who performs the examination.


CONCLUSION

Medical research provides a rational basis for medical practice. The first step in any research project is to identify the research question. The next critical step is to choose the study design. Fundamental to this is to answer the following questions:

• What is the nature of the question?
• What are the logistic constraints (money, time, manpower)?
• What ethical issues need to be addressed?

When these questions are answered to the satisfaction of the researcher, then a proper study design can be chosen, which will enhance the investigator's ability to develop and complete a research project that is sound in design.

Recommended Reading

1. Fletcher RH, Fletcher SW, Wagner EH. Clinical Epidemiology: The Essentials. Baltimore: Williams and Wilkins, 1988.
2. Friedman GD. Primer of Epidemiology. New York: McGraw-Hill, 1974.
3. Friedman LM, Furberg CD, DeMets DL. Fundamentals of Clinical Trials, ed 2. Littleton, MA: PSG, 1985.
4. Hayden GF, Kramer MS, Horwitz RI. The case–control study: a practical review for the clinician. JAMA. 1982; 247:326–31.
5. Hennekens CH, Buring JE, Mayrent SL. Epidemiology in Medicine. Boston: Little, Brown, 1987.
6. Hulley SB, Cummings SR. Designing Clinical Research. Baltimore: Williams and Wilkins, 1988.
7. Marks RG. Designing a Research Project. New York: Van Nostrand Reinhold, 1982.
8. Meinert CL. Clinical Trials: Design, Conduct, and Analysis. New York: Oxford University Press, 1986.
9. Riegelman RK. Studying a Study and Testing a Test. Boston: Little, Brown, 1981.


REFLECTIONS

Prepared for Tornado Aftermath

An emergency medicine resident, donned in universal precautions attire, awaits tornado disaster victims after Vanderbilt University Medical Center is alerted of 1001 casualties when a tornado was an uninvited guest at a school picnic on April 16, 1998. Photograph by DONNA JONES BAILEY, Nashville, Tennessee.