Biases
Presenter: Dr Himani  Moderator: Dr P.R. Deshmukh

Framework
Introduction
Terms to understand
Types of bias
Selection bias and types of selection bias
Information bias and types of information bias
How to control bias
Biases specific to case-control studies
Biases specific to cohort studies
Biases specific to clinical trials
Biases specific to screening programmes
Confounding
Slide 3
Introduction
The quality of a clinical study depends on its internal and external validity. A study has internal validity when the reported differences between exposed and unexposed individuals can be attributed only to the exposure under investigation.
Slide 4
[Diagram: internal vs external validity, tracing the research question through the study plan to the actual study, with design, implementation, and inference errors at each step. Truth in the universe: phenomenon of interest = proportion infected by the AIDS virus; target population = all IV drug abusers in San Francisco. Truth in the study: intended variables = proportion with antibodies to the AIDS virus; intended sample = 200 patients in the SFGH clinic in July 1988. Findings in the study: actual measurements = proportion with a positive ELISA test; actual subjects = 100 patients who get studied.]
Slide 5
Sources of Error: Random Error, Systematic Error
Slide 6
Random Error
Random error occurs when the value of a sample measurement diverges, due to chance alone, from the true population value.
Sources of random error:
- Individual biological variation
- Sampling error
- Measurement error
Slide 7
Systematic Error/Bias
Any trend in the collection, analysis, interpretation, publication, or review of data that can lead to conclusions that are systematically different from the truth.
Sources of systematic error:
- The basic measurement technique is wrong
- Variations between observers or subjects
- Systematic differences between the two groups being compared, at the point of selection or when making measurements
Slide 8
Types of bias
- Selection bias
- Information bias/measurement bias
- Bias due to confounding
Slide 9
Examples of Random Error, Bias, and Confounding in the Same Study
Study: in a cohort study, the babies of women who bottle-feed and women who breastfeed are compared.
Observation: the incidence of gastroenteritis, as recorded in medical records, is lower in the babies who are breastfed.
Slide 10
Example of Random Error
By chance, there are more episodes of gastroenteritis in the bottle-fed group in the study sample, producing a type 1 error (when, in truth, breastfeeding is not protective against gastroenteritis). Or, also by chance, no difference in risk is found, producing a type 2 error (when, in truth, breastfeeding is protective against gastroenteritis).
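The chance mechanism described above can be sketched with a small simulation. This is a minimal sketch with hypothetical numbers (`true_risk` and `n_per_group` are illustrative, not from the slides): both groups share the same true risk, yet the sampled counts usually differ.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Hypothetical setup: the true gastroenteritis risk is IDENTICAL (10%)
# in the breastfed and bottle-fed groups, so any difference observed
# in a finite sample is due to chance alone (random error).
true_risk = 0.10
n_per_group = 100

def simulate_cases(n, risk):
    """Count babies who develop gastroenteritis in a group of size n."""
    return sum(random.random() < risk for _ in range(n))

breastfed_cases = simulate_cases(n_per_group, true_risk)
bottlefed_cases = simulate_cases(n_per_group, true_risk)

# Declaring an effect from such a chance difference is a type 1 error;
# missing a real effect in a noisy sample would be a type 2 error.
print(breastfed_cases, bottlefed_cases)
```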
Slide 11
Example of Bias
The medical records of bottle-fed babies only are less complete (perhaps bottle-fed babies go to the doctor less) than those of breastfed babies, and thus record fewer episodes of gastroenteritis in that group only.
Slide 12
Example of confounding
The mothers of breastfed babies are of higher social class, and their babies thus have better hygiene, less crowding, and perhaps other factors that protect against gastroenteritis. Crowding and hygiene are truly protective against gastroenteritis, but we mistakenly attribute their effects to breastfeeding. This is called confounding.
Slide 13
Selection bias
Selection bias is a systematic error resulting from the way subjects are selected into a study, or from their being selectively lost to follow-up. Selection bias can cause an overestimate or an underestimate of the association.
Slide 14
In a study of asbestos exposure and lung cancer, the exposure is distributed among cases and controls in the target population as follows:

            Diseased   Non-diseased
Exposed       100          200
Unexposed     100          400

True OR in the target population = (100*400)/(100*200) = 2
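The cross-product calculation above can be written as a one-line helper (a minimal sketch; the name `odds_ratio` is mine, the counts are the slide's):

```python
# Odds ratio for the asbestos / lung-cancer 2x2 table above.
# Cell layout: a = exposed diseased, b = exposed non-diseased,
#              c = unexposed diseased, d = unexposed non-diseased.
def odds_ratio(a, b, c, d):
    """Cross-product odds ratio (a*d)/(b*c) for a 2x2 table."""
    return (a * d) / (b * c)

true_or = odds_ratio(100, 200, 100, 400)
print(true_or)  # 2.0
```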
Slide 15
If the selection probabilities for all the cells in the table are equal at 90%, the 2x2 table after selection would be:

            Diseased          Non-diseased
Exposed     100*0.90 = 90     200*0.90 = 180
Unexposed   100*0.90 = 90     400*0.90 = 360

OR = (90*360)/(90*180) = 2
Slide 16
If the selection probabilities are unequal and non-proportional, selection bias will occur:

            Diseased          Non-diseased
Exposed     100*0.90 = 90     200*0.90 = 180
Unexposed   100*0.90 = 90     400*0.70 = 280

OR = (90*280)/(90*180) ≈ 1.6
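Both selection scenarios can be checked numerically (a sketch; the helper names are mine, the cell counts and probabilities come from the slides):

```python
def odds_ratio(a, b, c, d):
    """Cross-product odds ratio (a*d)/(b*c) for a 2x2 table."""
    return (a * d) / (b * c)

# Target-population cells from the asbestos example: a, b, c, d.
TARGET = (100, 200, 100, 400)  # true OR = 2

def selected_or(probs):
    """OR after applying a per-cell selection probability to each cell."""
    a, b, c, d = (n * p for n, p in zip(TARGET, probs))
    return odds_ratio(a, b, c, d)

equal = selected_or((0.9, 0.9, 0.9, 0.9))    # equal selection: no bias
unequal = selected_or((0.9, 0.9, 0.9, 0.7))  # unexposed non-diseased under-sampled
print(round(equal, 2), round(unequal, 2))
```

Equal per-cell probabilities cancel in the cross-product, which is why the OR survives proportional selection but not differential selection.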
Slide 17
Self-Selection Bias
A common source of selection bias: volunteer-induced bias. Individuals who volunteer for a study possess different characteristics from the average general population. Example: a case-control study explored the association between a family history of heart disease and the presence of heart disease in subjects. Volunteers were recruited. Subjects with heart disease may be more likely to participate if they have a family history.
Slide 18
True relationship:

             Diseased: Y   Diseased: N
Exposed: Y       300           200
Exposed: N       200           300

True OR = (300*300)/(200*200) = 2.25

With self-selection bias (participation probabilities in parentheses):

             Diseased: Y   Diseased: N
Exposed: Y    240 (80%)     120 (60%)
Exposed: N    120 (60%)     180 (60%)

OR = (240*180)/(120*120) = 3.0
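The self-selection arithmetic can be reproduced directly. A minimal sketch: the true 2x2 cells are read off the slide's biased cells and participation percentages (each biased cell divided by its participation rate), and both odds ratios are recomputed.

```python
def odds_ratio(a, b, c, d):
    """Cross-product odds ratio (a*d)/(b*c) for a 2x2 table."""
    return (a * d) / (b * c)

# Target population (family history = exposure, heart disease = outcome):
# a = 300, b = 200, c = 200, d = 300.
true_or = odds_ratio(300, 200, 200, 300)

# Volunteers with BOTH the disease and a family history participate at
# 80%; everyone else participates at 60% (the slide's percentages).
participation = (0.80, 0.60, 0.60, 0.60)
a, b, c, d = (n * p for n, p in zip((300, 200, 200, 300), participation))
biased_or = odds_ratio(a, b, c, d)

print(round(true_or, 2), round(biased_or, 2))
```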
Slide 19
Berkson's Bias
Hospital selection bias: patients with two concurrent diseases or health problems are more likely to be admitted to hospital than those with a single condition. Example: people who have a peptic ulcer and who also smoke are more likely to be admitted to hospital than people who have only one of the two. A case-control study trying to evaluate the relationship between smoking and peptic ulcer may therefore find a spuriously strong association between the two.
Slide 20
Incidence-Prevalence Bias (Survivorship Bias, Neyman's Bias)
The risk of disease is estimated on the basis of data collected at a given point in time from a series of survivors, rather than on data gathered over a period of time in a group of incident cases. It occurs in case-control and cross-sectional studies. Example: a case-control study to assess the protective effect of physical exercise on MI.
Slide 21
Healthy Worker Effect
A form of selection bias. The general population is often used as the comparison group in occupational studies of mortality, since the data are readily available and the general population is mostly unexposed. Example: a comparison of the health status of military and civilian populations may show better health among soldiers, because unfit persons are excluded at the initial medical examination.
Slide 22
Bias due to loss to follow-up: differential loss to follow-up in a prospective cohort study of oral contraceptives (OC) and thromboembolism (TE).

Without losses:
        TE    Normal
OC+     20     9,980
OC-     10     9,990
RR = 2 (truth)

Final sample, after 40% loss to follow-up:
        TE    Normal
OC+      8     5,980
OC-      8     5,990
RR ≈ 1 (biased)
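The effect of the differential losses can be verified with a relative-risk helper (a sketch; per-arm totals of 10,000 at baseline and 5,988/5,998 at follow-up are inferred by summing the slide's cell counts):

```python
def relative_risk(cases_exp, total_exp, cases_unexp, total_unexp):
    """Risk in the exposed divided by risk in the unexposed."""
    return (cases_exp / total_exp) / (cases_unexp / total_unexp)

# Without losses: 20 TE among 10,000 OC users vs 10 TE among 10,000 non-users.
rr_truth = relative_risk(20, 10_000, 10, 10_000)

# After differential loss to follow-up: 8 TE among 5,988 remaining OC users
# vs 8 TE among 5,998 remaining non-users.
rr_biased = relative_risk(8, 5_988, 8, 5_998)

print(round(rr_truth, 2), round(rr_biased, 2))
```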
Slide 23
Information bias/measurement bias
Arises when the means of obtaining information about the subjects in the study are inadequate.
Types:
- Non-differential misclassification bias
- Differential misclassification bias
Slide 24
Non-differential misclassification bias
Occurs when errors in exposure or outcome status occur with approximately equal frequency in the groups being compared.
1. Equally inaccurate memory of exposures in both groups. Example: a case-control study of heart disease and past activity.
2. Recording and coding errors in records and databases. Example: ICD-9 codes used in discharge summaries.
3. Using surrogate measures of exposure.
4. Non-specific or broad definitions of exposure or outcome. Example: "Do you smoke?" versus "How much, how often, for how long?" to define exposure to tobacco.
Slide 25
Example: a case-control study comparing CAD cases and controls for a history of diabetes.

True relationship:
              CAD   Controls
Diabetes       40      10
No diabetes    60      90
OR = (40*90)/(10*60) = 6

With non-differential misclassification (only half of the diabetics are correctly recorded as such, in both cases and controls):
              CAD   Controls
Diabetes       20       5
No diabetes    80      95
OR = (20*95)/(5*80) = 4.75
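The halving of recorded diabetics can be reproduced (a sketch; the variable names are mine, the counts are the slide's):

```python
def odds_ratio(a, b, c, d):
    """Cross-product odds ratio (a*d)/(b*c) for a 2x2 table."""
    return (a * d) / (b * c)

# True counts: 40 of 100 CAD cases and 10 of 100 controls are diabetic.
true_or = odds_ratio(40, 60, 10, 90)  # 6.0

# Non-differential misclassification: only half of the diabetics are
# recorded as such, equally in cases and controls.
recorded_cases = 40 // 2      # 20 recorded diabetic cases
recorded_controls = 10 // 2   # 5 recorded diabetic controls
biased_or = odds_ratio(recorded_cases, 100 - recorded_cases,
                       recorded_controls, 100 - recorded_controls)
print(true_or, biased_or)  # 6.0 4.75
```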
Slide 26
Effect: with a dichotomous exposure (e.g. smoking vs non-smoking), non-differential misclassification minimizes the differences between groups and causes an underestimate of the effect, i.e. bias toward the null.
Slide 27
Differential misclassification
Occurs when errors in the classification of exposure or outcome are more frequent in one group.
1. Differences in accurately remembering exposures (unequal recall). Example: mothers of children with birth defects will remember drugs taken during pregnancy better.
2. Interviewer or recorder bias. Example: an interviewer has a subconscious belief about the hypothesis.
3. More accurate information in one of the groups. Example: a case-control study with cases from one facility and controls from another, with differences in record keeping.
Slide 28
Recall Bias
People with disease may remember exposures differently (more or less accurately) than those without disease.
To minimize:
- Use a control group that has a different disease
- Use questionnaires constructed to maximize accuracy and completeness
- For socially sensitive questions, such as alcohol and drug use, use a self-administered questionnaire instead of an interviewer
- If possible, assess past exposures from pre-existing records
Slide 29
Interviewer bias
Systematic differences in soliciting, recording, or interpreting information. Minimized by:
- Blinding the interviewers, if possible
- Using standardized questionnaires consisting of closed-ended, easy-to-understand questions
- Training all interviewers to adhere strictly to the question-and-answer format
- Obtaining or verifying data by examining pre-existing records (e.g. medical records or employment records)
Slide 30
Biases in case-control studies
- Selection bias
- Information bias
- Bias due to confounding
Slide 31
Biases in cohort studies
- Selection bias
- Follow-up bias
- Information bias
- Bias due to confounding
- Post hoc bias
Slide 32
Biases in clinical trials
- Selection bias
- Ascertainment bias
- Consent bias
- Dilution bias
- Attrition bias
- Analytical bias
- Publication bias
- Choice of question bias
- Choice of population bias
- Technical bias
- Chance bias
Slide 33
Ascertainment Bias
Occurs when the person reporting the outcome is biased. Example: a homeopathy study of histamine showed an effect when researchers were not blinded to the allocation, but no effect when they were. A multiple sclerosis treatment appeared to be effective when clinicians were unblinded but ineffective when they were blinded.
Slide 34
Consent bias
Occurs when consent to take part in the trial is sought AFTER randomisation. It is most frequently a danger in cluster trials.
Slide 35
Dilution bias
Occurs when the intervention or control group receives the opposite treatment. It affects all trials in which there is non-adherence to the intervention. For example, in a trial of calcium and vitamin D, about 4% of the controls were taking the treatment and 35% of the intervention group stopped taking their treatment. This dilutes any apparent treatment effect.
Slide 36
Attrition Bias
Most trials lose participants after randomisation. This can cause bias, particularly if attrition differs between the groups. If a treatment has side effects, drop-out may be higher among the less well participants, which can make a treatment appear to be effective when it is not. Some of the problems of attrition bias can be avoided by using intention-to-treat analysis, in which we keep as many of the patients in the study as possible, even if they are no longer on treatment.
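A minimal sketch of why intention-to-treat analysis helps, with fully hypothetical counts (none of these numbers appear in the slides): dropping the non-adherent, sicker patients flatters the treatment, while analysing everyone in their randomised arm does not.

```python
# Hypothetical trial: 100 patients randomised per arm; 20 of the treated
# stop taking the drug (and, being less well, have more events).
treated_adherent_events, treated_adherent_n = 8, 80
treated_dropout_events, treated_dropout_n = 6, 20
control_events, control_n = 15, 100

# Per-protocol analysis: only the adherent treated patients are counted,
# silently discarding the sicker dropouts.
per_protocol_risk = treated_adherent_events / treated_adherent_n

# Intention-to-treat: every randomised patient stays in their arm.
itt_risk = (treated_adherent_events + treated_dropout_events) / (
    treated_adherent_n + treated_dropout_n)

control_risk = control_events / control_n
print(per_protocol_risk, itt_risk, control_risk)  # 0.1 0.14 0.15
```

Per-protocol makes the treatment look protective (0.10 vs 0.15), while ITT shows almost no effect (0.14 vs 0.15).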
Slide 37
Biases in screening programmes Volunteer bias Lead time bias
Length time bias Overdiagnosis bias
Slide 38
Lead time bias
[Figure: natural history of disease in a hypothetical patient with colon cancer]
Slide 39
Length time bias
A form of selection bias. Length time bias can occur when the lengths of intervals are analysed by selecting intervals that occupy randomly chosen points in time or space. Example: a fast-growing tumour has a shorter incubation period than a slow-growing tumour, so screening at a random point in time preferentially detects the slow-growing tumours.
Slide 40
Overdiagnosis Bias
Persons who initiate a screening programme often have almost unlimited enthusiasm for it. Even cytologists reading Pap smears may become so enthusiastic that they tend to over-read the smears (false-positive readings). Consequently, the abnormal group will be diluted with women who are free of disease. If normal individuals in the screened group are more likely to be diagnosed as positive than normal individuals in the unscreened group (e.g. identified as having cancer when in reality they do not), one can get a false impression of increased rates of detection and diagnosis of early-stage disease as a result of screening.
Slide 41
How to control bias
Selection bias:
- Sampling the cases and controls in the same way
- Matching
- Randomization
- Using a population-based sample
Slide 42
Control of measurement bias
- Development of explicit, objective criteria for measuring environmental characteristics and health outcomes
- Careful, consistent data collection, for example through the use of standardized instruments; objective, closed-ended questionnaires; and valid instruments
- Careful, consistent use of data instruments, for example through standardized training and instruction manuals, and blinding to the extent possible
- Development and application of quality control/quality assurance procedures
- Use of multiple sources of data
- Data cleaning and coding
- Analysis and adjustment, if necessary, to take account of measurement bias
Slide 43
Confounding
A mixing or blurring of effects: the researcher attempts to relate an exposure to an outcome but actually measures the effect of a third factor, termed the confounding variable. A confounding variable is associated with the exposure and affects the outcome, but is not an intermediate link in the chain of causation between exposure and outcome.
Slide 44
For a variable to be a confounder:
- It should be a known risk factor for the disease or outcome
- It should be associated with the exposure
- It should not be in the direct causal chain between the exposure and the outcome
- It should be differentially distributed between the two groups
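These criteria can be illustrated by stratifying on a suspected confounder. A minimal sketch with fully hypothetical counts (age as the confounder; none of these numbers are from the slides): within each age stratum the exposure shows no effect (OR = 1), yet pooling the strata yields a crude OR above 2, because age is associated with both the exposure and the outcome.

```python
def odds_ratio(a, b, c, d):
    """Cross-product odds ratio (a*d)/(b*c) for a 2x2 table."""
    return (a * d) / (b * c)

# Hypothetical stratum tables (a, b, c, d) = (exposed diseased,
# exposed healthy, unexposed diseased, unexposed healthy).
young = (5, 95, 10, 190)   # exposure and disease both rare in the young
old = (40, 60, 20, 30)     # exposure and disease both common in the old

# Collapsing the strata mixes the age effect into the exposure effect.
crude = tuple(sum(cells) for cells in zip(young, old))

print(odds_ratio(*young), odds_ratio(*old), round(odds_ratio(*crude), 2))
```

The stratum-specific ORs agree with each other but not with the crude OR, which is the classic signature of confounding rather than effect modification.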
Slide 45
Hypothetical case-control study to evaluate the association between HTN and CHD:

Exposed (HTN)   Cases (CHD+)   Controls (CHD-)
Yes                  30              18
No                   70              82
Total               100             100

OR = (30*82)/(18*70) ≈ 1.95

Table: distribution of cases and controls by age
Age (yr)   Cases (CHD+)   Controls (CHD-)

Age   Total   Exposed (HTN+)   Not exposed (HTN-)   % exposed (% HTN+)