Practical, applied advice for understanding and writing about statistics in science journalism.
Writing about P-values in science journalism
Regina Nuzzo, Ph.D. Freelance Science Writer
Statistics Professor, Gallaudet University @ReginaNuzzo
National Association of Science Writers, Columbus, Ohio, October 18, 2014
I can’t even.
Writing about P-values in science journalism:
P-values are the hurdle to publication – and thus media attention. They’re worth understanding.
P-values: A bad romance. P-values can never satisfy our needs, but we keep coming back to them.
What do the statistics mean?
What does the result imply?
How plausible is the conclusion?
How newsworthy is the study?
Science writers’ needs:
“What does the P-value really mean?”
“We are 95% confident that the effect is true.”
“There is a 5% chance that the findings are due to chance.”
“. . . the mathematic probability of his findings being a statistical fluke are one in 74 billion.”
“. . . there was a 3.9% probability that chance accounted for the difference.”
“ . . it has just a 0.00003% probability that the result is due to chance.”
“By convention, a p-value higher than 0.05 usually indicates that the results of the study, however good or bad, were probably due only to chance.”
“On the plus side, if a newspaper column runs 20 times, I guess it’s ok for it to be wrong once—we still have 95% confidence in it, right?”
Andrew Gelman, Professor of Statistics, Columbia University. http://andrewgelman.com
“By convention, a p-value higher than 0.05 usually indicates that the results of the study, however good or bad, were not reliably different enough from random chance.”
“A p-value indicates how unusual a result would be, if it were only a chance occurrence.”
“By convention, journal editors reject papers unless they report a p-value less than 0.05.”
“The finding was fairly inconsistent with random chance.”
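That correct reading can be made concrete with a quick simulation. This is a sketch with made-up numbers (a coin-flip study, not an example from the talk): simulate the chance-only world many times and ask how often it produces a result at least as extreme as the one observed.

```python
import random

random.seed(42)

# Made-up example: we observed 60 heads in 100 flips and want to know
# how unusual that would be if only chance (a fair coin) were at work.
observed_heads = 60
n_flips = 100
n_sims = 20_000

# Count simulated chance-only experiments that are at least as extreme
# as the observed result.
extreme = sum(
    sum(random.random() < 0.5 for _ in range(n_flips)) >= observed_heads
    for _ in range(n_sims)
)

p_value = extreme / n_sims
print(f"P is roughly {p_value:.3f}")
```

A P of roughly 0.03 says only this: a result this extreme would turn up in about 3% of chance-only experiments. It does not say there is a 3% probability that chance produced this particular finding.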
“What does the result imply?”
Size matters.
1. Report the actual effect.
2. Probe researchers. Ask: “What is the effect size?” “What is the confidence interval?” “What is the R-squared?” *
* R-squared is surprisingly easy.
“ . . . by Nature's calculation the split-second attitudes explained only about 2% of the differences in people’s happiness”
“ . . . this effect remained significant controlling for all covariates [B = 0.14, SE = 0.06, t(232) = 2.15, P = 0.032; effect size r = 0.14].”
0.14 * 0.14 = 0.0196 = 1.96%
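The arithmetic above, spelled out. The r = 0.14 comes from the study quoted; nothing else is assumed:

```python
# Square a reported effect size r to get R-squared, the share of
# variance explained; then read it as a percentage.
r = 0.14
r_squared = r ** 2
print(f"R-squared = {r_squared:.4f}, i.e. about {r_squared * 100:.1f}% of the variance")
```

Which is how a “significant” P = 0.032 effect turns out to explain only about 2% of the differences in people’s happiness.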
“How plausible is the conclusion?”
“Extraordinary claims require extraordinary evidence.”
Ask: “How plausible was the hypothesis in the first place?” “What other evidence supports this?” “Putting the data aside, did you have a prior reason to think this would be important?”
P-values are not always strong evidence.
Nuzzo, R. Scientific method: statistical errors. Nature 2014, 506:150–152.
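A back-of-the-envelope calculation shows why prior plausibility matters. The alpha, power, and prior values below are illustrative assumptions, not figures from the talk:

```python
# Of all results that reach P < alpha, what fraction are false alarms?
# That depends heavily on how plausible the hypothesis was to begin with.
alpha = 0.05   # significance threshold (assumed)
power = 0.80   # chance a real effect is detected (assumed)

for prior in (0.5, 0.1, 0.01):
    true_hits = power * prior           # real effects found significant
    false_hits = alpha * (1 - prior)    # chance-only results found significant
    false_alarm_rate = false_hits / (true_hits + false_hits)
    print(f"prior {prior:>4}: {false_alarm_rate:.0%} of significant findings are false alarms")
```

For a long-shot hypothesis (prior around 1%), most findings that clear P < 0.05 are flukes; extraordinary claims really do need more than a single significant P-value.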
“How newsworthy is the study?”
Use your judgment: Did they set out to study this – or did they just stumble upon the finding? Did they cherry pick their results – or did they disclose all their findings and methods? Was this “exploratory” or “validating”?
P-Hacking:
Lots of p-values in the tables – but only a few barely below 0.05. Abstract talks about an incidental finding – but ignores what they set out to study in the first place.
“Exploiting -- perhaps unconsciously -- researcher degrees of freedom until p<.05.”
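The definition above can be demonstrated with a toy simulation. The setup is assumed for illustration: 20 unrelated outcomes per study, all with the null true, so each p-value is uniform on (0, 1), as p-values are under a true null.

```python
import random

random.seed(1)

# Each simulated "study" measures 20 unrelated outcomes with no real
# effects anywhere. Reporting whichever outcome dips below 0.05 makes
# spurious discoveries routine: 1 - 0.95**20 is about 64%.
n_studies = 10_000
n_outcomes = 20

hacked = sum(
    any(random.random() < 0.05 for _ in range(n_outcomes))
    for _ in range(n_studies)
)

rate = hacked / n_studies
print(f"{rate:.0%} of no-effect studies still produce at least one P < 0.05")
```

Which is why a table full of p-values with only one or two barely below 0.05 should raise an eyebrow.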
Help on the Horizon
Science Journalists!
Prob(your attention is appreciated) > 0
Thank you!
References:
http://www.dailytelegraph.com.au/proof-we-all-have-psychic-powers/story-e6freuy9-1225955980141
http://online.wsj.com/articles/SB125511780864976689
http://www.nature.com/news/physicists-find-new-particle-but-is-it-the-higgs-1.10932
http://www.nytimes.com/2013/03/12/science/putting-a-value-to-real-in-medical-research.html
http://andrewgelman.com/2013/03/12/misunderstanding-the-p-value/
http://www.nature.com/news/newlyweds-gut-feelings-predict-marital-happiness-1.14261
http://www.nature.com/news/scientific-method-statistical-errors-1.14700
http://www.urbandictionary.com/define.php?term=p-hacking