Do Researchers Practice What They Preach?
Unjustified Causal Language in Psychological Scientists’ Descriptions of Their Work
April Bleske-Rechek, Michaela M. Gunseor, Carly M. Murray,
Jenna R. Maly, Mark C. Priebe, and Stephanie R. Darling
University of Wisconsin-Eau Claire
Abstract
People’s tendency to conflate correlation with causation has been described as a “gap” in the
cognitive rules and procedures that guide judgment and decision-making (Stanovich, 2009).
Science journalists have been accused of confusing correlation and causation (Resnick, 2016),
but scientists themselves may succumb to the confusion (Robinson et al., 2007). Here, we
investigated the frequency with which researchers in psychology use unjustified causal language
in descriptions of their scholarly work. We recorded causal language in poster submissions listed
in the 2015 Association for Psychological Science (APS) convention program (N = 660). Causal
language was coded as justified if the research design included an imposed manipulation of the
causal variable referenced. On average, 59% [95% CI: 55%, 63%] of submissions used causal
language, but only 48% [43%, 53%] of those that used causal language described a research
design that justified it. Because the standard for poster acceptance at APS is not rigorous, our
data may overestimate use of unjustified causal language among psychological researchers;
however, because we used very conservative criteria for designating language as causal, our data
may actually underestimate psychological researchers’ use of unjustified causal language. We
discuss possible explanations for unjustified use of causal language, such as conventional habits,
lack of experience with scientific writing, inadequate oversight (by reviewers or research
mentors), and outright confusion. Overall, our findings imply that researchers themselves may
mistake correlation for causation.
Keywords: correlation; causation; psychological science; mindware; spin
Do Researchers Practice What They Preach?
Unjustified Causal Language in Psychological Scientists’ Descriptions of Their Work
A core tenet of introductory material in the social sciences, and in particular psychology,
is that “correlation does not imply causation” (Stanovich, 2010). In fact, the ability to distinguish
between correlation and causation is a foundational outcome of scientific inquiry in the
undergraduate psychology major (American Psychological Association, 2013). However, the
tendency to conflate correlation with causation is a persistent bias of human cognition
(Stanovich, 2009), and research suggests that typical adults (Bleske-Rechek, Morrison, &
Heidtke, 2015), as well as those who report for the media (Yavchitz et al., 2012), commonly
succumb to the bias. In the current study, we investigate whether psychological scientists
accurately distinguish between correlation and causation in scholarly descriptions of their
research.
The human inclination to conflate correlation with causation has been described as a
mindware “gap” (Stanovich, 2009) that “leads us astray practically every day” (Dobelli, 2013, p.
110). Incorrect inferences of causality from contiguous or associated events can impair judgment
and decision-making, as evidenced by the fallout from the widely held, but mistaken, belief that
vaccines cause symptoms of autism (Daley & Glanz, 2011); as such, a clear understanding of the
distinction between correlation and causation is a central element of both general scientific
literacy (Anelli, 2011) and psychological science literacy (APA, 2013; McGovern, Furumoto,
Halpern, Kimble, & McKeachie, 1991). Introductory psychology instructors and textbook writers
commonly address the distinction between correlation and causation (Boneau, 1990). Students
trained in psychology are taught that true experiments (and natural experiments; Rutter, 2007)
are the gold standard for making causal claims about behavior (APA, 2013; Leary, 2012;
Hatfield, Faunce, & Job, 2006), and that observed correlations are merely one step in the path
toward causal inference.
In the social sciences, consistent application of the distinction between correlation and
causation is not easy and likely requires extensive practice. First, factors that often hold implicit
causal status, such as socioeconomic status and family structure, are not generally amenable to
true manipulation. Second, people tend to have intuitive notions about causes of human behavior
(Nisbett & Wilson, 1977) that are likely to override their reflective reasoning, especially when
the issue is one they feel strongly about. In one set of studies, for example, researchers asked
people to read a hypothetical summary of a psychological study on two variables (e.g., video
game playing and aggressiveness) and then draw inferences about the relationship between the
two variables based on what they read about the study description and findings. People
incorrectly inferred causality from descriptions of non-experimental research as often as they
correctly inferred causality from descriptions of experimental research (Bleske-Rechek et al.,
2015); moreover, the participants drew causal inferences more frequently for causal statements
that coincided with common intuitive beliefs (video game playing causes an increase in
aggressiveness) than for causal statements that were less intuitive (aggressiveness causes an
increase in video game playing).
One exacerbating source of confusion about correlation and causation might be the
media. In one informal survey about what journalists most often get wrong (Resnick, 2016),
social scientists’ most frequent complaint was that journalists jump from correlational evidence
to causal inference. Some psychology professors even maintain and trade examples of media
headlines and blurbs that include unsupported causal claims (e.g., see Jon Mueller’s website at
http://jfmueller.faculty.noctrl.edu/100/correlation_or_causation.htm). Systematic research on
news stories related to health products and procedures, in particular, has documented that only
35% of stories do a satisfactory job of reviewing the study method and pointing out limitations of
non-experimental designs (Schwitzer, 2008). Other studies in health and medicine suggest that
news headlines commonly “spin” scientific results (Yavchitz et al., 2012) and make exaggerated
causal claims (Cooper, Lee, Goldacre, & Sanders, 2011) that are then perpetuated (Sumner et al.,
2014). For example, one study of 130 news items from Google Health News found that nearly
50% claimed a causal effect despite a non-randomized study design (Haneef, Lazarus, Ravaud,
Yavchitz, & Boutron, 2015).
Scientists themselves may not be immune to the conflation of correlation with causation.
Press releases, which are often reviewed by primary researchers and serve as templates for
journalists’ news stories (Taylor et al., 2015), have been criticized for exaggerating positive
findings and failing to acknowledge limitations of uncontrolled studies (Woloshin, Schwartz,
Casella, Kennedy, & Larson, 2009; Yavchitz et al., 2012). Researchers in education have been
criticized for using inappropriate causal statements in their articles (Reinhart, Haring, Levin,
Patall, & Robinson, 2013; Robinson, Levin, Thomas, Pituch, & Vaughn, 2007), and counseling
psychologists have been criticized for utilizing counseling practices that have not been supported
by randomized experimental designs (Ray et al., 2011). However, no study has systematically
investigated the frequency with which psychological scientists, who are explicitly trained about
causal versus non-causal claims, conflate correlation with causation. Thus, we undertook the
current study to investigate the frequency with which psychological scientists make causal
inferences in scholarly descriptions of their work, and the frequency with which their causal
statements are clearly justified by their findings and research design.
Method
Sample
We reviewed a total of 660 accepted poster submissions from the 2015 Association for
Psychological Science (APS) convention program. The entire 2015 convention program was
available online at http://www.psychologicalscience.org/index.php/convention/2015-convention-
program#.VxfIH_krLcs. Each submission included a title, abstract (≤ 50 words), and supporting
summary (≤ 500 words). The 2015 APS Convention held 21 general poster sessions, with each
session containing between 120 and 140 accepted submissions, for a total of nearly 3,000
submissions. For Poster Session 1, we coded all 136 submissions together as we established the
coding rules. For Poster Sessions 2 through 21, we sampled randomly by reviewing every fifth
submission (5, 10, 15, etc.). We anticipated that we would obtain a large and representative
sample of the population by reviewing every fifth poster submission. Although we encountered
clusters of submissions about a given topic (e.g., mental health issues among members of the
military), the content and discipline of the submissions (developmental, cognitive, neuroscience,
etc.) varied both within and across sessions.
Procedure
We followed a coding procedure similar to that utilized by Robinson et al. (2007); that is,
each submission was coded independently by multiple members of the research team, and we
used a random sample of submissions to check on coder agreement. After recording each poster
number and full title, we coded each submission for use of causal language in the title, abstract,
and supporting summary; if causal language was present, we recorded whether the research
design and findings supported that language.
Coding rules. Each section of each submission was reviewed for use of causal language
and coded as “yes” for use of causal language if the authors used words such as influence, effect,
produce, lead, boost, etc., in the description of their primary study. Table 1 shows the observed
words and phrases that we designated as causal. Each section of each submission was coded as
“no” if (1) there was no causal language; (2) the causal language was uncertain (preceded by
“may” or “perhaps”); (3) the causal language was posed in the form of a question; (4) the causal
language was used in the authors’ description of past studies that provided a rationale for the
primary study; (5) the causal language was used in discussion of the potential implications of the
primary study’s findings; or (6) the causal language was used in a statistical sense (e.g., “main
effect of”). Finally, the words “modulate,” “mediate” and “moderate” were not designated as
causal, nor were ambiguous phrases such as “role in” and “contribute to.”
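To make the word-based part of these rules concrete, the sketch below flags candidate causal words in a single section of a submission. It is only an illustrative pre-screen written for this description, not the authors' procedure (coding was done by human readers): the word list is a small subset drawn from Table 1, it matches exact word forms only, and a human coder would still need to apply exclusion rules (1) through (6).

```python
import re

# Illustrative subset of causal words (see Table 1 for the full set observed);
# exact word forms only, so derivatives such as "effects" or "influenced" are missed.
CAUSAL_WORDS = {"influence", "effect", "affect", "impact", "produce",
                "lead", "led", "boost", "improve", "reduce", "enhance", "promote"}
# Words explicitly excluded from the causal designation under the coding rules.
EXCLUDED_WORDS = {"modulate", "mediate", "moderate"}

def candidate_causal_words(section_text: str) -> list[str]:
    """Return candidate causal words found in one section (title, abstract, or
    supporting summary). Exclusions such as hedged language ("may", "perhaps"),
    questions, statistical senses ("main effect of"), and causal language about
    past studies or implications still require a human coder's judgment."""
    tokens = re.findall(r"[a-z]+", section_text.lower())
    return [t for t in tokens if t in CAUSAL_WORDS and t not in EXCLUDED_WORDS]

print(candidate_causal_words("We examined the influence of sleep quality on memory."))
# ['influence']
```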
For any submission that included causal language in one or more sections, we
subsequently coded whether or not the causal language was justified. The language was coded as
justified if (1) the researchers’ design included an imposed manipulation and the causal words
pertained to the manipulated variable; (2) the researchers used a longitudinal ‘pre-post’ design
involving a comparison group; (3) the research involved a single-subject design (e.g., ABAB
design); or (4) the research was a genetically informed design (e.g., twin or adoption design) that
used causal language pertaining to genetic versus environmental influences. The causal language
was coded as unjustified if (1) the researchers’ design included an imposed manipulation, but
used causal words pertaining to either a subject variable or a variable that was not manipulated;
(2) the researchers’ design was cross-sectional or qualitative and did not include an imposed
manipulation; (3) the researchers utilized a longitudinal (pre-post) design without a control group
comparison; or (4) the researchers included too little information about the study design to
determine whether the presumed causal variable had been manipulated.
Coding procedure. Five members of the research team (hereafter members A, B, C, D,
and E) piloted and finalized the coding rules by coding all submissions from Poster Session 1 (n
= 136) together. For each of the remaining even-numbered sessions (sessions 2, 4, 6 … 20), the
first author (Member A) and Members B and C independently coded every fifth submission (i.e.,
#5, 10, 15, etc.). For each of the remaining odd-numbered sessions (sessions 3, 5, 7 … 21),
Members A, D, and E independently coded every fifth submission. Thus, every submission was
coded by the first author and two other researchers. Each week the first author and the other
researchers compared the ratings they gave to the submissions they had been assigned to code
over the previous week. All discrepant judgments were resolved via discussion. The most
common discrepancy in judgments involved cases in which the study included an imposed
manipulation, but the causal language used was in reference to non-manipulated subject
variables. To conclude the coding, two members of the original team (A and D) and a new
member (F) went through all submissions to record the number of causal words and phrases used
in each submission and the specific causal words and phrases used in each section (title, abstract,
and supporting summary).
Coder agreement. Because we did not track coder agreement during the original coding
procedure, we compiled an independent sample of poster submissions in order to check percent
agreement. To ensure that our agreement check was on a sample of submissions not yet coded by
any members of the team, we drew two numbers out of a hat to determine which poster sessions
we would use, and then we coded every fifth submission, but this time using #1, 6, 11, etc. Then,
we systematically checked pairwise agreement between the first author and each of the other
raters. Pairwise agreements for inclusion of causal language ranged from 75% to 100% (mean κ = .74); pairwise agreements for justified use of causal language ranged from 70% to 100% (mean κ = .63).
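For readers unfamiliar with these agreement statistics, the sketch below shows one standard way to compute pairwise percent agreement and Cohen's kappa from two coders' yes/no judgments; the codes in the example are invented for illustration and are not our data.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of items on which two coders gave the same code."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Invented example: two coders' "yes"/"no" codes for causal language in 10 submissions.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no",  "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
print(percent_agreement(a, b), cohens_kappa(a, b))  # 0.9 0.8
```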
Results
Complete data files are publicly available on the Open Science Framework at
https://osf.io/ktv97. Following recommendations in The New Statistics (Cumming, 2012), Figure
1 shows a point estimate and 95% confidence interval for the percent of submissions in each
poster session that used causal language. Averaged across the 21 poster sessions, 59% [95% CI:
55%, 63%] of accepted submissions contained cause-and-effect language in either the title,
abstract, or supporting summary. As shown in Figure 1, the poster sessions varied from 46% to
77% in the percentage using causal language. Despite the variability in point estimates, all
confidence intervals included the overall mean of 59%, and all confidence intervals overlapped
with each other. Of the accepted submissions, 32% contained causal language in the title, 36%
contained causal language in the short abstract, and 49% contained causal language in the
supporting summary. Table 1 illustrates that the submissions included a wide variety of causal
phrases, with the most common phrases including derivatives of effect, influence, and impact.
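As an illustration of how such interval estimates can be computed, the sketch below gives a normal-approximation (Wald) 95% confidence interval for the proportion of submissions in one session that used causal language. The counts are invented, and the original analysis may have used a different interval method (e.g., a Wilson score interval); the text does not specify.

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """Point estimate and normal-approximation (Wald) 95% CI for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Invented example: 16 of 27 sampled submissions in a session used causal language.
p, lo, hi = proportion_ci(16, 27)
print(f"{p:.0%} [95% CI: {lo:.0%}, {hi:.0%}]")  # e.g., 59% [95% CI: 41%, 78%]
```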
Figure 2 shows a point estimate and 95% confidence interval of the percent of posters in
each session whose causal language was coded as unjustified. Averaged across the 21 poster
sessions, 52% [95% CI: 47%, 57%] of those with causal language were coded as unjustified in
their use. The point estimates from poster session to poster session ranged from 19% to 73%. Of
the 21 confidence intervals, 16 included the overall mean of 52%. Not one confidence interval
included zero. Of the poster submissions with causal language in the title, 46% were unjustified;
of those with causal language in the abstract, 45% were unjustified; and of those with causal
language in the supporting summary, 51% were unjustified. Table 2 provides examples of causal
language we observed and the explanation for our rating of each as justified or unjustified.
Discussion
A core tenet of training in psychological science is that correlation does not equal
causation. However, the findings of this study suggest that psychological researchers do not
consistently apply that core tenet in scholarly descriptions of their work. We reviewed 660 poster
submissions accepted to the 2015 convention of the premier scientific organization of
psychology, APS; we found that over half of the submissions included causal language, yet less
than half of the submissions that included causal language were justified in its use. Words and
phrases such as “effect of,” “affect,” “influence of,” and “impact of” were especially common,
and they were used inappropriately in the context of relationships between measured variables.
On one hand, our numbers may overestimate the frequency with which unjustified causal
language is used among psychological scientists. Although APS is the premier professional
organization for psychological scientists, the standard of acceptance for poster submissions at
APS is far lower than it is for papers (talks). Further, undergraduate students and graduate
students are more likely to be primary authors of posters as opposed to papers, and they may
have less experience with research design and scientific writing than established scholars do.
Thus, we expect that unjustified causal language would be less prevalent in convention symposia
talks and in peer-reviewed journal articles. However, regardless of standards of acceptance and
the educational level of the authors, the fact remains that the research presented at these poster
sessions is brought to the public eye and recorded on the authors’ professional curriculum vitae
as evidence of scholarly productivity and professional development. Moreover, if the poster
submissions were generated by junior scientists or graduate students, they ideally would have
been reviewed by a research adviser. Thus, our findings raise the question of whether research
advisers are making similar errors themselves, or whether they are just not taking advantage of
key teaching moments with their students.
On the other hand, our numbers may underestimate the actual frequency with which
causal language is used without justification. Under our coding scheme, we did not designate
relevant words as causal if they were posed as a question or qualified by modal words such as
“may.” For example, the title “Does smiling promote longevity?” would not qualify as causal
under our coding scheme, but many readers would likely still interpret it as causal. Other authors
used “may” and “perhaps” throughout their submission; they were not coded as using causal
language under our definition, but research suggests that readers perceive such language as
causal (Adams et al., 2016). We also did not denote words such as “moderate” and “modulate”
as causal because, although they are sometimes used in a causal sense, they have a history of use
in the context of manipulated and measured (non-manipulated) variables (Baron & Kenny,
1986). Indeed, in our sample, these words were used in 9% of submissions that described a true
experiment with a manipulated variable and 12% of submissions that described a study with
measured (non-manipulated) variables.
We are particularly concerned that about one half of the titles that included causal
language were coded as unjustified. Some people attend to only a study title to draw inferences
about an entire study. Journalists, for example, use titles to help them draft headlines and frame
their research blurbs, which is alarming given evidence that misleading headlines affect how
readers process subsequent information (Ecker, Lewandowsky, Chang, & Pillai, 2014). That
said, nearly half of the abstracts with causal language were coded as unjustified, as well. The
abstract is the most valuable part of a research paper, and many scientists admit to reading solely
the abstract or using only the abstract to decide whether they will cite a paper (Hamby,
2015). If findings are over-interpreted in the abstract, those findings might be over-interpreted by
others who read only the abstract.
We did not track potential correlates of using unjustified causal language, such as each
submission’s sub-discipline within psychology or the researchers’ native language, academic
status, or institutional prestige. For each of these variables, there was substantial noise. Any
given poster submission fell into multiple sub-disciplines at once; many submissions included
authors from multiple institutions of varying prestige; and the author order as listed in the
program was not always the actual author order, so sometimes it was not possible to ascertain the
academic status of the first author. In future research with formally published materials from
domain-specific journals, we aim to include variables such as sub-discipline and academic status
as potential correlates of use of unjustified causal language.
The findings of the current study suggest that educational researchers (Robinson et al., 2007; Reinhart et al., 2013) are not alone in using causal language to describe non-causal data; psychological researchers do so as well, at least in the relatively informal setting of a poster session. Hence, some of the unjustified causal language that appears in the news may originate with the scientists themselves. In other words, the "spin" that shows up in press releases and news articles might sometimes begin with the researchers, just as others have speculated (Haneef et al., 2015; Yavchitz et al., 2012). One avenue for future research is to look at published research in
psychology journals in conjunction with their press releases and news stories. Similar to how
research in the medical domain has shown that press release quality is positively associated with
the quality of subsequent news stories (Sumner et al., 2014), we expect that press release quality
is positively associated with accurate descriptions in original journal articles. Science journalists
are charged with creating catchy and readable news stories, but that must be balanced against a
carefully crafted description of the research. The best way to maximize the chance for balance is
to ensure the original scientific publication is accurate.
Although our data demonstrate that scientists are not immune to the tendency to misuse
causal language, they do not clarify the reasons for this misuse. Previous research implies that
confusion may play a role (Adams et al., 2016; Bleske-Rechek et al., 2015), such that
psychological researchers may not have a complete understanding of the conditions required to
infer causality (Robinson et al., 2007). However, confusion may not be the only explanation. In
our sample, for example, there may have been a lack of experience with scientific writing or
inadequate oversight by reviewers and research mentors. In addition, it is possible that some
scientists have shorthand habits for discussing their findings with others and carry that shorthand over into their writing. Or, perhaps some scientists use unjustified causal language
purposefully to enhance the apparent importance of their research (Robinson et al., 2007).
Regardless of the processes involved, we propose that systematic efforts be put in place to
remedy this communication error. Such efforts should be directed not only at the lay public, who
have to independently navigate the implications of research headlines and blurbs they are
exposed to on a daily basis, but also at scientists, who are responsible for communicating what
their findings could mean, do mean, and do not mean (Lilienfeld, 2002).
Declaration of Conflicting Interests
The authors declared no conflicts of interest with respect to the authorship or the publication of
this article.
References
Adams, R. C., Sumner, P., Stonkute, S., Williams, A., Boivin, K., Chambers, C., & Bott, L.
(2016). How readers understand causal and correlational expressions used in news
headlines. Journal of Experimental Psychology: Applied. Advance online publication.
doi:10.1037/xap0000100
American Psychological Association. (2013). APA guidelines for the undergraduate psychology
major: Version 2.0. Retrieved from
http://www.apa.org/ed/precollege/undergrad/index.aspx
Anelli, C. (2011). Scientific literacy: What is it, are we teaching it, and does it matter? American
Entomologist, 57, 235-243.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social
psychological research: Conceptual, strategic, and statistical considerations. Journal of
Personality and Social Psychology, 51, 1173-1182.
Bleske-Rechek, A., Morrison, K., & Heidtke, L. (2015). Causal inference from descriptions of
experimental and non-experimental research: Public understanding of correlation-versus-
causation. Journal of General Psychology, 142, 48-70.
Boneau, C. A. (1990). Psychological literacy: A first approximation. American Psychologist, 45,
891-900.
Cooper, B. E. J., Lee, W. E., Goldacre, B. M., & Sanders, T. A. B. (2011). The quality of the
evidence for dietary advice given in UK national newspapers. Public Understanding of
Science, 21, 664-673.
Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and
meta-analysis. New York, NY, USA: Taylor & Francis Group.
Daley, M. F., & Glanz, J. M. (2011, September 1). Straight talk about vaccination. Scientific
American. Retrieved August 23, 2016, from
http://www.scientificamerican.com/article/straight-talk-about-vaccination/
Dobelli, R. (2013). The art of thinking clearly. New York, NY: Harper Collins.
Ecker, U. K., Lewandowsky, S., Chang, E. P., & Pillai, R. (2014). The effects of subtle
misinformation in news headlines. Journal of Experimental Psychology: Applied, 20,
323-335.
Hamby, S. (2015). On scientific writing in the information era: Tailoring papers for internet
searching and other 21st century realities. Psychology of Violence, 5, 103-111.
Haneef, R., Lazarus, C., Ravaud, P., Yavchitz, A., & Boutron, I. (2015). Interpretation of results
of studies evaluating an intervention highlighted in Google health news: A cross-
sectional study of news. PLoS ONE, 10, e0140889. doi:10.1371/journal.pone.0140889
Hatfield, J., Faunce, G. J., & Job, R. F. (2006). Avoiding confusion surrounding the phrase
‘correlation does not imply causation.’ Teaching of Psychology, 33, 49-51.
Leary, M. R. (2012). Introduction to behavioral research methods (6th ed.). Boston, MA, USA:
Pearson.
Lilienfeld, S. O. (2002). When worlds collide: Social science, politics, and the Rind et al. (1998)
child sexual abuse meta-analysis. American Psychologist, 57, 176-188.
McGovern, T. V., Furumoto, L., Halpern, D. F., Kimble, G. A., & McKeachie, W. J. (1991).
Liberal education, study in depth, and the arts and sciences major—Psychology.
American Psychologist, 46, 598-605.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on
mental processes. Psychological Review, 84, 231-259.
Norris, S. P., Phillips, L. M., & Korpan, C. A. (2003). University students’ interpretation of
media reports of science and its relationship to background knowledge, interest, and
reading difficulty. Public Understanding of Science, 12, 123-145.
Ray, D. C., Hull, D. M., Thacker, A. J., Pace, L. S., Swan, K. L., Carlson, S. E., & Sullivan, J.
M. (2011). Research in counseling: A 10-year review to inform practice. Journal of
Counseling & Development, 89, 349–359.
Reinhart, A. L., Haring, S. H., Levin, J. R., Patall, E. A., & Robinson, D. H. (2013). Models of
not-so-good behavior: Yet another way to squeeze causality and recommendations for
practice out of correlational data. Journal of Educational Psychology, 105, 241–247.
Resnick, B. (2016, January 22). What journalists get wrong about social science, according to 20
scientists. Vox: Science & Health. Retrieved from http://www.vox.com/science-and-
health/2016/1/22/10811320/journalists-social-science.
Robinson, D. H., Levin, J. R., Thomas, G. D., Pituch, K. A., & Vaughn, S. (2007). The incidence
of ‘causal’ statements in teaching-and-learning research journals. American Educational
Research Journal, 44, 400–413.
Rutter, M. (2007). Proceeding from observed correlation to causal inference. Perspectives on
Psychological Science, 2, 377-395.
Shermer, M. (2011). The believing brain. New York, NY, USA: Times Books.
Stanovich, K. E. (2009). What intelligence tests miss: The psychology of rational thought. New
Haven, CT, USA: Yale University.
Stanovich, K. E. (2010). How to think straight about psychology (9th ed.). Boston, MA, USA:
Allyn & Bacon.
Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Venetis, C. A., Davies, A., . . .
Chambers, C. D. (2014). The association between exaggeration in health-related science
news and academic press releases: Retrospective observational study. British Medical
Journal, 349, g7015.
Taylor, J. W., Long, M., Ashley, E., Denning, A., Gout, B., Hansen, K., …Newton, P. M.
(2015). When medical news comes from press releases: A case study of pancreatic cancer
and processed meat. PLoS ONE, 10, e0127848. doi:10.1371/journal.pone.0127848
Woloshin, S., Schwartz, L. M., Casella, S. L., Kennedy, A. T., & Larson, R. J. (2009). Press
releases by academic medical centers: Not so academic? Annals of Internal Medicine,
150, 613-618.
Yavchitz, A., Boutron, I., Bafeta, A., Marroun, I., Charles, P., Mantz, J., & Ravaud, P. (2012).
Misrepresentation of randomized controlled trials in press releases and news coverage: A
cohort study. PLOS Medicine, 9, e1001308.
Table 1
The Specific Causal Words/Phrases in 393 Submissions Coded as Using Causal Language
                                 Number of Times Observed In…
Phrase*                      Title    Abstract    Supporting Summary
Effect/Affect                   92          91                   213
Influence                       33          59                   137
Impact                          22          28                    75
**Increase/Decrease              8          19                    58
Lead/Led                         3          21                    44
Improve                          4          13                    26
Reduce                           3          15                    25
Benefit                          3           7                    23
Enhance                          6           7                     9
Protect/Buffer                   3           4                    14
Promote                          2           5                    10
Due to/Because of                0           5                     4
Note. A total of 393 accepted submissions used causal language in at least one section of their submission. The numbers here sum up to more than 393 because some submissions included causal language in multiple sections of the submission or multiple, different causal words and phrases.
*A variety of other words and phrases were observed occasionally: abolish, activate, aided by, alter, amplify, antidote for, attenuate, as a barrier to, boost, as a consequence of, avoid, cause, create, dampen, delayed by, depend on, determine, dilute, diminish, discourage, disrupt, drive, elicit, eliminate, emerge from, enable, equalize, evoke, exacerbate, facilitate, foster, generate, give, harm, help, impair, impede, impel, induce, inhibit, interfere, legitimize, magnify, make, minimize, mitigate, override, place at risk, preserve, produce, react to, redirect, relieve, rely on, resolve, result in/of, reverse, role of "X" on "Y," source of, stimulate, strengthen, suppress, threaten, yield.
**These words were coded as causal when used as verbs (e.g., "…analytical thinking increases acceptance of secular ideas…").
Table 2
Sample Cases of Causal Language
In Titles:

Example: The Influence of Eldercare Arrangement Characteristics on Work Accommodations
Rating: Unjustified
Explanation: In the study, eldercare arrangement characteristics were measured, not manipulated.

Example: Increasing Character Size and Length of Presentation Improves Both Accuracy and Reaction Time of a Dynamic Visual Acuity Task
Rating: Justified
Explanation: In the study, stimuli character size and length of presentation were manipulated.

In Abstracts:

Example: "…examined the influence of culture on emotional and neuroendocrine responses…"
Rating: Unjustified
Explanation: In this phrase, culture refers to a self-reported cultural orientation (individualistic or collectivistic).

Example: "…these responses to victims are not closely related and are affected in different ways by victim culpability and misfortune severity."
Rating: Justified
Explanation: The study used vignettes about a house fire, in which the researchers manipulated their portrayal of the target victim's culpability and extent of fire damage.

In Supporting Summaries:

Example: "…we show that situational factors - such as incumbency and seat competitiveness – sometimes override the effects of long-lasting beliefs – such as party affiliation – on the use of moral appeals."
Rating: Unjustified
Explanation: In the study, incumbency, seat competitiveness, and party affiliation were measured, not manipulated variables.

Example: "….pro-positive attentional training can redirect attention among depressed individuals…"
Rating: Justified
Explanation: In this pre-post intervention study, depressed individuals were randomly assigned to either a control group or pro-positive training group.
Figure 1. Percent of submissions in each poster session that used causal language. Each dot
represents a point estimate and each horizontal line its corresponding 95% CI. The dashed
vertical line represents the overall average (59%).
Figure 2. Of poster submissions using causal language, the percent in each poster session that
were considered unjustified in their use of that language. Each dot represents a point estimate
and each horizontal line its corresponding 95% CI. The dashed vertical line represents the overall
average (52%).