
Special Issue Paper

Received 14 March 2012, Accepted 15 March 2012. Published online in Wiley Online Library (wileyonlinelibrary.com) DOI: 10.1002/sim.5402

Improving the reporting of randomised trials: the CONSORT Statement and beyond‡

Douglas G. Altman,a*† David Moherb and Kenneth F. Schulzc

An extensive and growing number of reviews of the published literature demonstrate that health research publications have frequent deficiencies. Of particular concern are poor reports of randomised trials, which make it difficult or impossible for readers to assess how the research was conducted, to evaluate the reliability of the findings, or to place them in the context of existing research evidence. As a result, published reports of trials often cannot be used by clinicians to inform patient care or to inform public health policy, and the data cannot be included in systematic reviews. Reporting guidelines are designed to identify the key information that researchers should include in a report of their research. We describe the history of reporting guidelines for randomised trials culminating in the CONSORT Statement in 1996. We detail the subsequent development and extension of CONSORT and consider related initiatives aimed at improving the reliability of the medical research literature. Copyright © 2012 John Wiley & Sons, Ltd.

‘Results from randomized controlled trials (RCTs) can have an immediate impact on patient care. Accurate and complete reporting is essential to determine whether trial design, conduct, and analyses are scientifically creditable.’ [1]

‘Without accessible and usable reports, research cannot help patients and their clinicians.’ [2]

‘By itself, accurate, transparent reporting doesn’t make good science. Knowing that editors expect a high standard of accuracy and transparency in reports of finished research can, however, encourage researchers to do a better job in planning and carrying out the research in the first place. Accurate, transparent reporting is like turning the light on before you clean up a room: It doesn’t clean it for you, but does tell you where the problems are.’ [3]

1. Introduction

Medical research is conducted to advance scientific knowledge and directly or indirectly lead to improvements in the treatment or prevention of disease. Each research study should have the potential to add usefully to existing knowledge.

Publications are the predominant means of communicating research findings. A research report is usually the only tangible evidence that the study was conducted, and of what was found. Indeed, research that is not published or made available to interested readers might as well not have been conducted. Sadly, a high proportion of research studies, including clinical trials, never result in any substantive publication [4, 5].

aCentre for Statistics in Medicine, University of Oxford, Wolfson College, Linton Road, Oxford, U.K.
bOttawa Hospital Research Institute, Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Canada

cFHI360, Research Triangle Park, NC 27709, U.S.A.
*Correspondence to: Douglas G. Altman, Centre for Statistics in Medicine, University of Oxford, Wolfson College, Linton Road, Oxford, U.K.

†E-mail: [email protected]
‡Developed from a presentation at a workshop ‘Clinical trials: past, present and future’, Division of Cardiovascular Sciences, NHLBI, Bethesda, MD, USA (Sept 2010).

Copyright © 2012 John Wiley & Sons, Ltd. Statist. Med. 2012

D. G. ALTMAN, D. MOHER AND K. F. SCHULZ

Clinicians read articles to learn how to treat their patients better; patients read articles to help make decisions about treatment options. Researchers read articles to judge the impact on their own ongoing and future research, and perhaps to help them plan a similar study. In the future, systematic reviewers will read those same articles to discover whether the study was relevant to their research question and, if so, to extract the necessary information about the methods and findings for the study to be incorporated into a meta-analysis. These and other readers of a research report need a clear understanding of exactly what was conducted; they need to know if they can rely on the findings, for which they need a good understanding of what was carried out.

Publication alone is insufficient; critical appraisal is essential when deciding whether clinical trial findings should influence healthcare. To have value, a journal article must contain certain essential information about the research — both what was done and what was found. Before readers can consider the possible impact of the new findings they need to be convinced that the study findings are reliable: ‘Assessment of reliability of published articles is a necessary condition for the scientific process’ [6]. One key role of peer review, therefore, is to ensure that readers can judge the reliability of the findings. Unfortunately, as discussed below, despite peer review far too many published research reports lack the essential information that readers need.

Concern about the completeness of research reports is a relatively recent phenomenon, linked to the rise of systematic reviews [7]. However, there are some early examples of the recognition of the importance of how research findings are communicated. One of the earliest we know of was the statistician/anatomist Donald Mainland, who devoted a whole chapter of his 1938 textbook to ‘Publication of data and results’ [8]. Indeed, in one of Mainland’s earliest methodological publications (in 1934) he had commented on the importance of how numerical results were presented (Box 1).

Scientific manuscripts should present sufficient data so that the reader can fully evaluate the information and reach his or her own conclusions about the results. Is the study relevant? Is it reliable? Authors have a social obligation to ensure that patients’ voluntary participation in clinical trials leads to a published report, and that the report provides the necessary information. Editors and peer reviewers should encourage and facilitate that process, to maximise the benefit of the research for future patients and other stakeholders.

Although the broad principles apply to many types of health care research, all of the above considerations are especially important for reporting of RCTs because these have the greatest potential for directly affecting the care of patients. Indeed, as Drummond Rennie observed, ‘The whole of medicine depends on the transparent reporting of clinical trials’ [12].

2. What should be included in a report of a randomized trial?

While there is obviously important information in the Introduction of a journal article, and even sometimes in the Discussion, the prime concern relates to what should appear in the Methods and Results sections.

One approach to considering what is ‘adequate’ is that the paper should contain enough information to allow other researchers to repeat the study; it should in principle be reproducible [13]. The International Committee of Medical Journal Editors’ (ICMJE) guidance has included the following statement since 1988:


‘Describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to verify the reported results.’ [14]

It is very surprising, however, that the ICMJE guidelines do not extend the same broad principle of reproducibility to other key study aspects. A report of a research study should include in the Methods full details of all key aspects of how the study was carried out. Reports of RCTs in particular should include full details of the participants and interventions and precise definitions of all outcomes (and how they were measured), among other issues. In essence, it should be the aspiration that the study could be replicated based on the information provided. Likewise, the Results section should include a description and enumeration of the trial participants, describe attrition, and present the main findings (corresponding to the prespecified plan included in the protocol). Among the results we should expect to see are analyses of the prespecified primary and secondary outcomes. A guiding principle here is that the summarised results should be presented in adequate detail for the numerical results to be incorporated into a future meta-analysis.
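The principle that summarised results should be usable in a future meta-analysis can be made concrete. For a continuous outcome, a meta-analyst typically needs the per-arm sample size, mean, and standard deviation; from these an effect estimate and its variance can be derived. The following sketch (the function and the illustrative numbers are ours, not from the paper) shows the minimal arithmetic:

```python
import math

def mean_difference(n1, m1, sd1, n2, m2, sd2):
    """Mean difference, its variance, and a 95% CI from per-arm summary
    data -- the minimum a trial report must provide for this outcome to
    enter an inverse-variance meta-analysis."""
    md = m1 - m2
    var = sd1 ** 2 / n1 + sd2 ** 2 / n2
    half_width = 1.96 * math.sqrt(var)
    return md, var, (md - half_width, md + half_width)

# A report giving only 'no significant difference (P > 0.05)' supplies
# none of these numbers, so the trial is lost to the meta-analysis.
md, var, ci = mean_difference(n1=60, m1=5.2, sd1=1.9, n2=58, m2=4.6, sd2=2.1)
```

A report stating only a P value, or a difference without any measure of spread, cannot be converted into these quantities, which is exactly the operational failure described in Section 3.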

3. What do we mean by poor reporting?

Poor reporting, more accurately ‘bad reporting’, is primarily when a key aspect of the methods or results is missing, incomplete or ambiguous. The terms ambiguous and incomplete are not fully distinct; for example, a brief statement that a trial was double blind with no further information is clearly inadequate, and may be seen as incomplete and thus also ambiguous. Authors should clarify who was blinded, and how [15]. By contrast, presenting an analysis of adverse effects solely by a statement like ‘no significant difference was seen between the two groups’ is clearly incomplete. An operational definition of complete presentation of results is that the data can be incorporated into a future meta-analysis. As noted already, systematic reviewers frequently encounter major problems when trying to ascertain the methods and results of randomised trials. Box 2 shows some illustrative examples of such problems.

Poor reporting practices include misrepresentation of the study. Examples include mislabelling a study as a randomised trial when it was not (or vice versa), misinterpreting the results, and drawing incorrect and thus misleading inferences. Many of these manoeuvres can be summarised as ‘spin’, in which researchers misdirect readers by rhetorical devices or by focusing interpretation on nonprimary analyses. Spin has been found in a high proportion of reports of RCTs in which all primary outcomes showed nonsignificant results [20].

Before considering how we can improve the quality of publications, we give a brief overview of the empirical evidence of poor reporting of randomised trials.

4. Evidence of poor reporting

There is a massive amount of evidence, published regularly, that many published reports of the findings of randomized trials omit vital information — we cannot tell exactly how the research was conducted. Similar problems have been seen in numerous areas of medical research [21–24], and indeed RCTs may well be reported somewhat better than observational studies. However, it is RCTs that have the most potential to directly impact patients [1, 25], and so it is vital that these important studies are reported adequately.


4.1. Reporting methods and findings

A huge number of reviews of published reports of RCTs have appeared, with the rate escalating in recent years. Dechartres et al. reviewed 177 literature reviews published from 1987 to 2007, 58% of which were published after 2002 [26]. The rate seems to have escalated further in the last few years.

Table I shows an illustrative comparison of the reporting of key items in two reviews of trial reports indexed in PubMed in 2000 and 2006. Not only are the percentages with adequate reporting mostly below 50%, which is appalling, but the improvement over time was quite modest. Furthermore, random sequence generation and allocation concealment, aspects that make a randomized trial unique, were poorly described in about 2/3 and 3/4, respectively, of trial reports. Other reviews have likewise found this key information to be the worst reported. Another example of poor reporting for randomised trials relates to providing adequate details of treatments received such that they could be used by clinicians. Glasziou et al. assessed descriptions of treatments in 80 published articles (55 randomised trials and 25 systematic reviews). Critical elements of the interventions were missing in 41 of those studies [25]. Similar concerns have been expressed for cancer trials [27].
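As an aside, the denominators reported in Table I allow the ‘modest’ improvement to be quantified. A rough two-proportion z-test for allocation concealment (counts reconstructed from the rounded percentages, so this is illustrative only) suggests the change from 18% to 25% is statistically detectable yet still leaves about three-quarters of reports inadequate:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test with pooled variance (normal approximation).
    Returns the z statistic and a two-sided P value."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))
    return z, p_two_sided

# Allocation concealment: 18% of 519 reports (2000) vs 25% of 616 (2006);
# counts are reconstructed from the rounded percentages in Table I.
# z comes out near 2.9 (two-sided P about 0.004): a real but small gain.
z, p = two_prop_z(x1=round(0.18 * 519), n1=519, x2=round(0.25 * 616), n2=616)
```

A statistically detectable change is not the same as an adequate literature: three in four reports still failed to describe allocation concealment in 2006.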

4.2. Selective reporting

The medical research literature is the evidence base for clinical practice. Deficiencies in that evidence base affect the health of large numbers of people; those deficiencies are largely avoidable. Of particular concern is reporting that is driven by the study findings — in particular, the preferential nonreporting of nonsignificant results.

There has been longstanding concern about the nonpublication of results of some trials (often confusingly called ‘publication bias’; we prefer the term ‘nonpublication bias’), for which there is considerable empirical evidence [28, 29]. A recent study showed that about 20% of RCTs in stroke over a 50-year period had never been published; other studies have found higher rates of nonpublication [28].

However, this is not the only concern. Evidence has also begun to accumulate of biased reporting within publications. Dwan et al. reviewed 16 cohort studies that assessed study publication bias and outcome reporting bias in randomised controlled trials [4]. Those studies together show strong evidence that trials with positive (i.e. statistically significant) results were more likely to be published and outcomes that were statistically significant were more likely to be fully reported. Most disturbing is the evidence of frequent discrepancies between publications and original trial protocols: 40%–62% of studies had at least one primary outcome changed, newly introduced or omitted [30, 31]. Selective reporting leads to biased information available for inclusion in systematic reviews and may harm patients. Inadequate reporting may also hinder clinical practice directly. In a review of trials of highly active antiretroviral therapy (HAART) published in 2000–2008, only about a third (16/49) reported adverse events (AEs) with no preselection; the others all reported only some of the AEs, such as the most frequent, those with P < 0.05, or simply ‘selected’ AEs. The authors wrote:

‘These facts obstruct our ability to choose HAART based on currently published data.’ And they observed that ‘Authors and editors should ensure that reporting of AEs in HAART trials follows the CONSORT guidelines for reporting on harms in randomized trials.’ [32]

Reporting guidelines such as CONSORT, discussed below, may help to identify or prevent selective publication within studies, especially in conjunction with greater availability of trial protocols and trial registration. We discuss those topics later. Reporting guidelines cannot, of course, affect those studies that are never published.
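The mechanism of outcome reporting bias described above is easy to demonstrate by simulation: if outcomes are measured under a true null effect but only those reaching P < 0.05 are reported in full, the reported effects substantially overstate the truth. A hedged sketch (the parameters are arbitrary, chosen only to illustrate the point):

```python
import random
import statistics

random.seed(1)

def simulated_trial(n=50):
    """One outcome under a true null effect: difference in means between
    two arms of standard-normal data, with its approximate z value."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
    return diff, diff / se

all_effects, reported_effects = [], []
for _ in range(2000):
    diff, z = simulated_trial()
    all_effects.append(abs(diff))
    if abs(z) > 1.96:          # only 'significant' outcomes get reported
        reported_effects.append(abs(diff))

# The selectively reported subset overstates the (truly null) effect size.
bias_ratio = statistics.mean(reported_effects) / statistics.mean(all_effects)
```

The selectively reported effects are, on average, several times larger than the full set, even though every simulated effect is pure noise; a meta-analysis built only on the reported subset would inherit that exaggeration.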

Table I. Reporting of key aspects of trial methods (% of articles) in reviews of reports of RCTs indexed on PubMed in 2000 [33] and 2006 [34].

                                       Dec 2000    Dec 2006
                                       (N = 519)   (N = 616)
Defined primary outcome(s)               45%         53%
Sample size calculation                  27%         45%
Method of random sequence generation     21%         34%
Method of allocation concealment         18%         25%
Whether blinded                          40%         41%


5. The CONSORT Statement

Poor reporting indicates a collective failure of authors, peer reviewers, and editors on a truly massive scale. Clearly there may be a lack of awareness of the key information that should be included in the report of an RCT. Thus guidelines might help to educate and so lead to more complete reports of RCTs.

There were occasional early calls for better reporting of RCTs (see Box 3), but the few early guidelines for RCT publications [35, 36] sadly had little impact. Serious attempts to write guidelines specifically restricted to the reporting of research studies began in the 1990s. In 1994 two almost contemporary publications in leading general medical journals (SORT and Asilomar) presented guidelines for reporting randomised controlled trials [37, 38]. They had each arisen from independent meetings of groups of researchers and editors concerned with improving the standard of reporting. Although they had overlapping content, there were some notable differences in emphasis.

At the suggestion of Drummond Rennie, Deputy Editor of the Journal of the American Medical Association, representatives from both groups met in 1996, in Chicago, USA. Their remit was to merge the best of the SORT and Asilomar proposals into a single, coherent evidence-based recommendation. A single recommendation would have a better likelihood of appealing to journals and thus improved dissemination. The meeting resulted in the CONsolidated Standards Of Reporting Trials (CONSORT) Statement, published in 1996 [41]. The CONSORT Statement comprised a checklist and flow diagram for reporting the results of an RCT. CONSORT is intended to facilitate the complete and transparent reporting of RCTs and aid in their critical appraisal and interpretation. It can therefore be useful for authors, peer reviewers, editors, and systematic reviewers.

The rationale for including items in the checklist was that they were all necessary to evaluate the study — readers need this information to be able to judge the reliability and relevance of the findings. Inclusion of items was based on relevant empirical evidence whenever possible. The checklist is seen as the minimum set of information. It should not be taken to indicate that items not included should not be reported. Clearly any important information about the trial should be reported, whether or not it is specifically addressed in the checklist.

The flow diagram shows the passage of trial participants through the trial from recruitment to final analysis (Figure 1); it has proved to be the most widely adopted of the CONSORT recommendations.
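One practical consequence of the flow diagram is that the counts it requires must be internally consistent, which lends itself to a simple mechanical check. The sketch below is our own illustration (the stage names follow the diagram, but the function and the example numbers are not part of CONSORT):

```python
def check_flow(assessed, excluded, randomised, arms):
    """Verify that CONSORT flow-diagram counts are internally consistent.
    `arms` maps arm name -> dict with allocated, lost_to_follow_up, analysed."""
    problems = []
    if assessed - excluded != randomised:
        problems.append("assessed - excluded != randomised")
    if sum(a["allocated"] for a in arms.values()) != randomised:
        problems.append("allocated numbers do not sum to randomised")
    for name, a in arms.items():
        if a["analysed"] > a["allocated"] - a["lost_to_follow_up"]:
            problems.append(f"{name}: analysed exceeds those followed up")
    return problems

# Hypothetical two-arm trial: the control arm's numbers do not add up.
flow = check_flow(
    assessed=460, excluded=100, randomised=360,
    arms={
        "intervention": dict(allocated=180, lost_to_follow_up=12, analysed=168),
        "control":      dict(allocated=180, lost_to_follow_up=15, analysed=170),
    },
)
```

Discrepancies of exactly this kind (more participants analysed than can be accounted for) are what the flow diagram makes visible to readers and reviewers.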

5.1. Updating the CONSORT Statement

At meetings in 1999 and 2000 the CONSORT Group worked on a revision of the original CONSORT checklist and flow diagram, taking account of new methodological studies. The Group discussed the merits of including each item in light of current evidence, and determined, by consensus, the changes to be made in the revision. As when developing the original CONSORT Statement, the intention was to keep only those items deemed fundamental to reporting standards for an RCT. Some items, while not regarded as essential, could well be highly desirable and should still be included in a report of a randomized controlled trial even though they are not included in CONSORT. Such items included approval by an institutional ethics review board and sources of funding for the trial [42].

The revised CONSORT Statement was published in 2001. Without precedent, it appeared simultaneously in three prestigious international medical journals [43–45]. The 2001 CONSORT checklist had more structure than the 1996 checklist: the items were numbered from 1 to 22, and each item had a brief term indicating its topic (such as ‘Blinding’ or ‘Baseline data’). Besides a major revision of the checklist, there were also small changes to the flow diagram. A major new initiative was an accompanying detailed explanatory paper.

Figure 1. Flow diagram of the progress through the phases of a parallel randomised trial of two groups (that is, enrolment, intervention allocation, follow-up and data analysis).

One feature of the original CONSORT Statement in 1996 [41] was the near absence of explanation of the concepts or justification for the importance of specific information being needed in reports of randomized trials. It was recommended that the value of the CONSORT Statement, and probably also its acceptability, could be enhanced by the development of a second publication that clarified the scientific background and explained why each issue was important [46].

Therefore, when the CONSORT Statement was revised in 1999, the opportunity was taken to develop, in parallel to the revised checklist, a detailed explanatory document. The resulting 32-page Explanation and Elaboration (E&E) document was published simultaneously with the revised CONSORT Statement in the Annals of Internal Medicine in 2001 [47]. It was recognized as an important innovation and the idea has subsequently been taken up by other reporting guideline groups.

The E&E document addressed each checklist item individually. For each item, key methodological issues were explained and a summary of the empirical evidence about the importance of reporting that item was provided. Examples of clear reporting were also given for each checklist item.

Since the revision in 2001, the evidence base to inform CONSORT grew considerably; empirical data highlighted new concerns regarding the reporting of randomized controlled trials, such as selective outcome reporting [4, 30]. A further update of the CONSORT recommendations was initiated at a meeting in 2007. In 2010 the third version of CONSORT, ‘CONSORT 2010’, was published in nine journals [15], along with an updated E&E manuscript [48]. The CONSORT Group recommends that the CONSORT Statement checklist is read in conjunction with the accompanying E&E document.

CONSORT 2010 added several new items asking authors to: describe the trial design they used (e.g., parallel) and the allocation ratio; address any important changes to methods after trial commencement, with a discussion of reasons; report any changes to the primary and secondary outcome (endpoint) measures after the trial commenced; describe why the trial ended or was stopped; give information about their trial’s registration; indicate the availability of the trial’s protocol; and provide information about the trial’s funding. Many smaller changes that were made to clarify the checklist were detailed in the paper [15].

CONSORT is an evolving tool; its content needs to be reassessed periodically [49]. The CONSORT Group met in September 2011 and initiated discussions that will in due course lead to another revision of the recommendations.

6. CONSORT extensions

When the 2001 update of CONSORT was being prepared, it was decided to clarify that the main focus was on two-arm parallel group trials. Of course, the guidance is largely relevant to all RCTs, but other designs might have additional issues that should be addressed. Extensions were planned for six trial designs, but as yet only two have been published — for cluster randomised trials [50] and for noninferiority and equivalence trials [51]. Both of those are currently being updated to take account of the changes in CONSORT 2010.

These design-specific extensions generally lead to additions to the checklist items, almost always to amplify the existing item but occasionally addressing some quite new element. They may require modification of the flow diagram — certainly modifications will be needed for crossover trials when that extension is completed. Further extensions will address multi-arm parallel group trials, crossover trials, factorial trials, within-person randomised trials, and N-of-1 trials.

Two other extensions of CONSORT affect almost all trials. They relate to the reporting of harms [52] and the content of abstracts of reports of trial findings. There are also several reporting guidelines based on CONSORT that were developed independently of the CONSORT Group [53, 54].

7. Impact of and adherence to CONSORT

The impact of the CONSORT Statement can be measured in a number of ways. It may be helpful to borrow the classification of Pathman et al. [55], who considered four stages in adoption of guidelines: awareness, agreement, adoption and adherence. Although awareness of CONSORT has grown steadily and is now quite high, adherence remains well below a desirable level, as evidenced by Table I.

7.1. Endorsement by journals and other bodies

Support for and dissemination of the CONSORT recommendations has grown steadily since 1996. By 2011 more than 600 journals, published around the world and in many languages, had explicitly supported the Statement (‘adoption’). Many other healthcare journals support it without our knowledge. Moreover, many hundreds more have implicitly supported it through the endorsement of the Statement by the International Committee of Medical Journal Editors (www.icmje.org). Other prominent editorial groups, the Council of Science Editors and the World Association of Medical Editors, also officially support CONSORT.

Although this endorsement is substantial, many more journals should endorse CONSORT and encourage adherence to it. Among 165 high impact journals, only 38% mentioned CONSORT in their ‘Instructions to Authors’ in 2007 and, moreover, only 37% of those stated that adhering was a requirement [56]. Editors of those same 165 journals were surveyed. Of the 64 (39%) who responded, only 62% said they required adherence to the CONSORT Statement, only 41% reported incorporating the statement into their peer-review process, and only 47% reported incorporating it into their editorial process [56].

7.2. The impact of CONSORT on the quality of trial reports

A remarkable number of studies have considered the quality of reporting of RCTs in relation to CONSORT. An earlier review [57] found eight, but a recent update has considered 50 articles [58]. These mainly considered the reporting of CONSORT items before and after the 2001 CONSORT Statement, or compared reporting in journals that have or have not endorsed CONSORT. Those studies are somewhat problematic as confounding cannot be eliminated and they could not consider directly the use of CONSORT for individual articles. Nevertheless, the overall evidence does support the idea that use of CONSORT is associated with somewhat better reporting for most items considered. That good news has to be tempered by the fact that the medical literature clearly remains far short of the ideal in which all trials are well reported. The improvements seen so far are modest (for example, see Table I), and in particular it remains shocking that less than half of reports of RCTs specify how participants were randomised to interventions. We should perhaps not be too surprised that passive dissemination through publications has a small effect. Even among journals that support CONSORT, as yet few require compliance.

7.3. The influence of CONSORT on other reporting guidelines

The CONSORT model has been adopted by several other groups. Indeed, the QUOROM Statement was developed after a meeting held in 1996, only a few months after the initial CONSORT Statement was published, and the meeting to develop MOOSE was held in 1997 [59]. The production of guidelines, especially for narrow topics, has increased in recent years. By October 2011 over 140 reporting guidelines had been identified (www.equator-network.org). Table II indicates some of the more general guidelines for medical research. Experience gained in developing CONSORT and several other guidelines led to the development of recommendations for future guideline developers.

Several reporting guidelines have followed the CONSORT model and produced long E&E papers to accompany a new reporting guideline [60–63]. Because guideline developers usually wish to publish the reporting guideline and the E&E paper simultaneously, the lengthy process of E&E development has meant in some instances a long delay in the publication of the recommendations.

CONSORT has also been the basis for guidelines for nonmedical experimental studies, such as REFLECT for research on livestock [64] and guidelines for software engineering [65].

8. Beyond CONSORT — related initiatives

8.1. Publication of trial protocols

‘Electronic publication of a protocol could be simply the first element in a sequence of “threaded” electronic publications, which continues with reports of the resulting research (published in sufficient detail to meet some of the criticisms of less detailed reports published in print journals), followed by deposition of the complete data set’ [74]. Ten years on, threaded publications are much more a reality [75]. Publication of trial protocols has been widely called for as a way to aid the detection of selective reporting and also to deter it [30, 76]. The forthcoming SPIRIT guidelines lay out recommendations for the content of a clinical trial protocol [77].

8.2. Trial registration

The prevalence and consequences of nonpublication of entire trials, and of selective reporting of outcomes within trials and from among multiple analyses, have been well documented [4, 28]. Selective reporting of research is unethical [78, 79]. Yet despite repeated calls over more than 25 years to register clinical trials at their inception, with a unique trial identification number, and to record essential details about the trial, registration became a reality only when provoked by legislation following a medicolegal case of withholding data. The World Health Organisation states that ‘the registration of all interventional trials is a scientific, ethical and moral responsibility’ (www.who.int/ictrp/en). By registering a randomised trial, authors typically report a minimal set of information and obtain a unique trial registration number.

Table II. Reporting guidelines for major types of research study.

Reporting guideline             Year of publication   Scope
CONSORT [41, 43, 47, 48, 66]    1996, 2001, 2010      Randomised trials
QUOROM [67]                     1999                  Systematic reviews/meta-analyses of randomised trials
MOOSE [59]                      2000                  Systematic reviews of observational studies
STARD [62, 68]                  2003                  Diagnostic test accuracy studies
REMARK [63, 69]                 2005                  Prognostic studies of tumour markers
STROBE [60, 70]                 2007                  Observational epidemiological studies
PRISMA [61, 71]                 2009                  Systematic reviews/meta-analyses of randomised trials (replaces QUOROM)
ARRIVE [72]                     2010                  Animal research

For a more comprehensive list see Simera et al. [73] or www.equator-network.org.

Copyright © 2012 John Wiley & Sons, Ltd. Statist. Med. 2012

D. G. ALTMAN, D. MOHER AND K. F. SCHULZ

In 2004 the ICMJE introduced a policy that they would consider trials for publication only if they had been registered before the enrolment of the first participant. Although uptake is improving, and registration is extending to the reporting of results [80, 81], empirical evidence has shown that registration remains imperfect [82, 83] and, further, that important discrepancies between the registry entry and the subsequent journal publication are common [84].

8.3. Data publication/sharing

Data sharing is a general term that includes a spectrum of possibilities, from allowing bona fide researchers limited access to some of the data, perhaps conditional on providing a clear study plan, through to publication of the full data set for anyone to access. Sharing data is not a new idea — see Box 4. One group has suggested that ‘Authors would be more careful if they knew that anyone would be able to perform a truly independent analysis of their results.’ [85]

Practical guidance is available in the form of a minimum standard for anonymising (de-identifying) data for the purposes of publication in a peer reviewed biomedical journal or sharing with other researchers [88, 89]. That guidance also includes basic advice on file preparation. Examples are emerging of full data sets published in an online journal [90].
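To illustrate the kind of de-identification step such guidance addresses (a minimal sketch only; the field names, records, and reference date below are hypothetical, and this is not the procedure specified in the cited guidance), one might remove direct identifiers and coarsen a date of birth to an age in years before sharing:

```python
from datetime import date

# Hypothetical participant records containing a direct identifier (name)
# and an indirect identifier (exact date of birth)
records = [
    {"name": "A. Smith", "dob": date(1950, 3, 2), "arm": "intervention", "outcome": 1},
    {"name": "B. Jones", "dob": date(1947, 11, 20), "arm": "control", "outcome": 0},
]

REFERENCE = date(2010, 1, 1)  # fixed reference date for computing age

def deidentify(record):
    """Drop direct identifiers and coarsen date of birth to age in whole years."""
    cleaned = {k: v for k, v in record.items() if k not in ("name", "dob")}
    cleaned["age_years"] = (REFERENCE - record["dob"]).days // 365
    return cleaned

shareable = [deidentify(r) for r in records]
print(shareable)
# [{'arm': 'intervention', 'outcome': 1, 'age_years': 59},
#  {'arm': 'control', 'outcome': 0, 'age_years': 62}]
```

Coarsening (age rather than exact birth date) is the point of the sketch: the released value still supports analysis while being less identifying when combined with other fields.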

The ultimate level of transparency in research goes beyond even publication of the raw data to include publication also of the statistical code that generated the analyses in publications, and indeed unpublished analyses. Such ‘reproducible research’ [13, 91] is feasible but as yet extremely rare in clinical research.

Further transparency comes from explicit linking of related articles, as discussed above.

9. Concluding remarks

Growing evidence shows widespread deficiencies in the reporting of health research studies. Problematic issues include (but are not limited to) nonreporting or delayed reporting of entire studies; omission of crucial information in the description of research methods and interventions; selective reporting of only some outcomes; presenting data and graphs in confusing and misleading ways; and omissions from or misinterpretation of results in abstracts. These deficiencies have serious consequences for clinical practice, research, policymaking, and ultimately for patients [92]. Poor reporting also has a financial impact; it is a waste of money for funders, and for society, if the research cannot be understood and used by clinicians and others [2]. The body of evidence of poor reporting continues to accumulate, and shows only rare pockets of excellence, yet evidence of efforts to improve the situation is sparse. Guidelines and checklists aim to guide researchers towards providing accounts of their research that will benefit patients [73]. The number of reporting guidelines has grown considerably in recent years.

The first widely known guideline was the CONSORT Statement, most recently updated in 2010 [15], providing recommendations for reporting the results of RCTs. CONSORT urges completeness, clarity, and transparency of reporting to reflect the actual trial design, conduct, and results. CONSORT is about reporting, not conduct. Thus the Statement does not include recommendations for designing and conducting RCTs. The checklist items should elicit clear pronouncements of what the authors did, and how, but do not contain any judgments on what the authors should have done. CONSORT 2010 should not be used as an instrument to evaluate the quality of an RCT. Nor is it appropriate to use the checklist to construct a ‘quality score’ [15]. Other reporting guidelines should be treated in the same way.

CONSORT and other reporting guidelines are primarily aimed at authors, but they are also a valuable resource for peer reviewers and editors. Thorough peer review is very time-consuming and requires experience and skills across a wide range of aspects. What is clearly needed is that, even if a reviewer fails to identify a weakness in the study, deficiencies have the potential to be detected at a later date — that is, after publication — if a suitably experienced person re-examines the paper. Transparent reporting can thus compensate for the inevitable fallibility of peer review. Currently, systematic reviewers spend far too long trying to work out how a study was conducted and what the major findings were (see Box 2). Much data cannot be used because of inadequate reporting.

The CONSORT Group’s current emphasis is on endorsement and, especially, adherence; more journals should use an effective strategy to improve reporting. We are working on a knowledge translation strategy to better understand the barriers and facilitators to implementing CONSORT in journals.

Efforts to improve reporting need to be supported by greater awareness among research organisations and research funders of the need to maximise the value of research by ensuring that all RCTs are published; they should require that research findings are published in a way that allows them to benefit current and future patients. As yet such support has not been forthcoming.

References

1. Dancey JE. From quality of publication to quality of care: translating trials to practice. Journal of the National Cancer Institute 2010; 102:670–671.
2. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009; 374:86–89.
3. Davidoff F. News from the International Committee of Medical Journal Editors. Annals of Internal Medicine 2000; 133:229–231.
4. Dwan K, Altman DG, Arnaiz JA, Bloom J, Chan AW, Cronin E, Decullier E, Easterbrook PJ, Von Elm E, Gamble C, Ghersi D, Ioannidis JP, Simes J, Williamson PR. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One 2008; 3:e3081.
5. Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database of Systematic Reviews 2011; 1:MR000031.
6. Ziman JM. Reliable Knowledge: An Exploration of the Grounds for Belief in Science. Cambridge University Press: Cambridge, 1978.
7. Chalmers TC. Clinical trial quality needs to be improved to facilitate metaanalyses. Online Journal of Current Clinical Trials 1993; Doc No 89. [1541 words; 1514 paragraphs].
8. Mainland D. The Treatment of Clinical and Laboratory Data. Oliver & Boyd: Edinburgh, 1938.
9. Mainland D. Chance and the blood count. Canadian Medical Association Journal 1934; 30:656–658.
10. Feynman R. Cargo cult science. Engineering Science 1974; 37:10–13.
11. O’Fallon JR, Duby SD, Salsburg DS, et al. Should there be statistical guidelines for medical research papers? Biometrics 1978; 34:687–695.
12. Rennie D. CONSORT revised – improving the reporting of randomized trials. Journal of the American Medical Association 2001; 285:2006–2007.
13. Laine C, Goodman SN, Griswold ME, Sox HC. Reproducible research: moving toward research the public can really trust. Annals of Internal Medicine 2007; 146:450–453.
14. International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals: writing and editing for biomedical publication. Available from: http://www.icmje.org/urm_full.pdf (Accessed 8 August 2012).
15. Schulz KF, Altman DG, Moher D. CONSORT 2010 Statement: updated guidelines for reporting parallel group randomized trials. Annals of Internal Medicine 2010; 152:726–732.
16. Meuffels DE, Reijman M, Scholten RJ, Verhaar JA. Computer assisted surgery for knee ligament reconstruction. Cochrane Database of Systematic Reviews 2011; 6:CD007601.
17. Gordon M, Findley R. Educational interventions to improve handover in health care: a systematic review. Medical Education 2011; 45:1081–1089.
18. Casas JP, Kwong J, Ebrahim S. Telemonitoring for chronic heart failure: not ready for prime time. Cochrane Database of Systematic Reviews 2010; 8:ED000008.
19. Yeung CA. New guidelines for trial reporting – CONSORT 2010. Available from: http://www.bmj.com/rapid-response/2011/11/02/new-guidelines-trial-reporting---consort-2010 (Accessed 8 August 2012).
20. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. Journal of the American Medical Association 2010; 303:2058–2064.
21. Smidt N, Rutjes AW, van der Windt DA, Ostelo RW, Reitsma JB, Bossuyt PM, Bouter LM, de Vet HC. Quality of reporting of diagnostic accuracy studies. Radiology 2005; 235:347–353.
22. Mallett S, Timmer A, Sauerbrei W, Altman DG. Reporting of prognostic studies of tumour markers: a review of published articles in relation to REMARK guidelines. British Journal of Cancer 2010; 102:173–180.
23. Collins GS, Mallett S, Omar O, Yu LM. Developing risk prediction models for type 2 diabetes: a systematic review of methodology and reporting. BMC Medicine 2011; 9:103.
24. Pocock SJ, Collier TJ, Dandreo KJ, de Stavola BL, Goldman MB, Kalish LA, Kasten LE, McCormack VA. Issues in the reporting of epidemiological studies: a survey of recent practice. BMJ 2004; 329:883.
25. Glasziou P, Meats E, Heneghan C, Shepperd S. What is missing from descriptions of treatment in trials and reviews? BMJ 2008; 336:1472–1474.
26. Dechartres A, Charles P, Hopewell S, Ravaud P, Altman DG. Reviews assessing the quality or the reporting of randomized controlled trials are increasing over time but raised questions about how quality is assessed. Journal of Clinical Epidemiology 2011; 64:136–144.
27. Duff JM, Leather H, Walden EO, LaPlant KD, George TJ Jr. Adequacy of published oncology randomized controlled trials to provide therapeutic details needed for clinical application. Journal of the National Cancer Institute 2010; 102:702–705.
28. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, Hing C, Kwok CS, Pang C, Harvey I. Dissemination and publication of research findings: an updated review of related biases. Health Technology Assessment 2010; 14:iii, ix-xi, 1–193.
29. Rothstein HR, Sutton AJ, Borenstein M. Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. John Wiley & Sons, Ltd.: Chichester, 2005.
30. Chan AW, Hrobjartsson A, Haahr MT, Gotzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. Journal of the American Medical Association 2004; 291:2457–2465.
31. Chan AW, Krleza-Jeric K, Schmid I, Altman DG. Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. Canadian Medical Association Journal 2004; 171:735–740.
32. Chowers MY, Gottesman BS, Leibovici L, Pielmeier U, Andreassen S, Paul M. Reporting of adverse events in randomized controlled trials of highly active antiretroviral therapy: systematic review. Journal of Antimicrobial Chemotherapy 2009; 64:239–250.
33. Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet 2005; 365:1159–1162.
34. Hopewell S, Dutton S, Yu LM, Chan AW, Altman DG. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ 2010; 340:c723.
35. Grant A. Reporting controlled trials. British Journal of Obstetrics and Gynaecology 1989; 96:397–400.
36. Squires BP, Elmslie TJ. Reports of randomized controlled trials: what editors want from authors and peer reviewers. Canadian Medical Association Journal 1990; 143:381–382.
37. Asilomar Working Group on Recommendations for Reporting of Clinical Trials in the Biomedical Literature. Call for comments on a proposal to improve reporting of clinical trials in the biomedical literature. Annals of Internal Medicine 1994; 121:894–895.
38. The Standards of Reporting Trials Group. A proposal for structured reporting of randomized controlled trials. Journal of the American Medical Association 1994; 272:1926–1931.
39. Daniels M. Scientific appraisement of new drugs in tuberculosis. American Review of Tuberculosis 1950; 61:751–756.
40. DerSimonian R, Charette LJ, McPeek B, Mosteller F. Reporting on methods in clinical trials. New England Journal of Medicine 1982; 306:1332–1337.
41. Begg C, Cho M, Eastwood S, Horton R, Moher D, Olkin I, Pitkin R, Rennie D, Schulz KF, Simel D, Stroup DF. Improving the quality of reporting of randomized controlled trials. The CONSORT statement. Journal of the American Medical Association 1996; 276:637–639.
42. Chalmers I. Current Controlled Trials: an opportunity to help improve the quality of clinical research. Current Controlled Trials in Cardiovascular Medicine 2000; 1:3–8.
43. Moher D, Schulz KF, Altman D. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Journal of the American Medical Association 2001; 285:1987–1991.
44. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 2001; 357:1191–1194.
45. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. Annals of Internal Medicine 2001; 134:657–662.
46. Altman DG. Better reporting of randomised controlled trials: the CONSORT statement. BMJ 1996; 313:570–571.
47. Altman DG, Schulz KF, Moher D, Egger M, Davidoff F, Elbourne D, Gøtzsche PC, Lang T. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Annals of Internal Medicine 2001; 134:663–694.
48. Moher D, Hopewell S, Schulz KF, Montori V, Gotzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG. CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340:c869.
49. Moher D. CONSORT: an evolving tool to help improve the quality of reports of randomized controlled trials. Consolidated Standards of Reporting Trials. Journal of the American Medical Association 1998; 279:1489–1491.
50. Campbell MK, Elbourne DR, Altman DG. CONSORT statement: extension to cluster randomised trials. BMJ 2004; 328:702–708.
51. Piaggio G, Elbourne DR, Altman DG, Pocock SJ, Evans SJ. Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement. Journal of the American Medical Association 2006; 295:1152–1160.
52. Ioannidis JPA, Evans SJW, Gotzsche PC, O’Neill RT, Altman DG, Schulz KF, Moher D, the CONSORT Group. Improving the reporting of harms in randomized trials: expansion of the CONSORT statement. Annals of Internal Medicine 2004.
53. Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, Schulz KF. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet 2008; 371:281–283.
54. Hopewell S, Clarke M, Moher D, Wager E, Middleton P, Altman DG, Schulz KF. CONSORT for reporting randomized controlled trials in journal and conference abstracts: explanation and elaboration. PLoS Medicine 2008; 5:e20.
55. Pathman DE, Konrad TR, Freed GL, Freeman VA, Koch GG. The awareness-to-adherence model of the steps to clinical guideline compliance. The case of pediatric vaccine recommendations. Medical Care 1996; 34:873–889.
56. Hopewell S, Altman DG, Moher D, Schulz KF. Endorsement of the CONSORT Statement by high impact factor medical journals: a survey of journal editors and journal ‘Instructions to Authors’. Trials 2008; 9:20.
57. Plint AC, Moher D, Morrison A, Schulz K, Altman DG, Hill C, Gaboury I. Does the CONSORT checklist improve the quality of reports of randomised controlled trials? A systematic review. Medical Journal of Australia 2006; 185:263–267.
58. Turner L, Shamseer L, Altman DG, Weeks L, Kober T, Dias S, Schulz KF, Plint AC, Moher D. Does adherence to the CONsolidated Standards Of Reporting Trials (CONSORT) Statement influence the quality of reporting of randomised controlled trials: an updated systematic review. Cochrane Database of Systematic Reviews, in press.
59. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, Moher D, Becker BJ, Sipe TA, Thacker SB. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. Journal of the American Medical Association 2000; 283:2008–2012.
60. Vandenbroucke JP, von Elm E, Altman DG, Gotzsche PC, Mulrow CD, Pocock SJ, Poole C, Schlesselman JJ, Egger M. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Annals of Internal Medicine 2007; 147:W163–194.
61. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Annals of Internal Medicine 2009; 151:W65–94.
62. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, Moher D, Rennie D, de Vet HC, Lijmer JG. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clinical Chemistry 2003; 49:7–18.
63. Altman DG, McShane LM, Sauerbrei W, Taube SE. Reporting recommendations for tumor marker prognostic studies (REMARK): explanation and elaboration. BMC Medicine 2012; 10:51.
64. Sargeant JM, O’Connor AM, Gardner IA, Dickson JS, Torrence ME. The REFLECT statement: reporting guidelines for randomized controlled trials in livestock and food safety: explanation and elaboration. Zoonoses and Public Health 2010; 57:105–136.
65. Jedlitschka A, Pfahl D. Reporting guidelines for controlled experiments in software engineering. Proceedings of the 2005 International Symposium on Empirical Software Engineering (ISESE), Noosa Heads, Queensland, Australia, 2005; 92–101.
66. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. PLoS Medicine 2010; 7:e1000251.
67. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of reporting of meta-analyses. Lancet 1999; 354:1896–1900.
68. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, Lijmer JG, Moher D, Rennie D, de Vet HC. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD Initiative. Radiology 2003; 226:24–28.
69. McShane LM, Altman DG, Sauerbrei W, Taube SE, Gion M, Clark GM. Reporting recommendations for tumor marker prognostic studies (REMARK). Journal of the National Cancer Institute 2005; 97:1180–1184.
70. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. PLoS Medicine 2007; 4:e296.
71. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009; 339:b2535.
72. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biology 2010; 8:e1000412.
73. Simera I, Moher D, Hoey J, Schulz KF, Altman DG. A catalogue of reporting guidelines for health research. European Journal of Clinical Investigation 2010; 40:35–53.
74. Chalmers I, Altman DG. How can medical journals help prevent poor medical research? Some opportunities presented by electronic publishing. Lancet 1999; 353:490–493.
75. Altman DG, Furberg CD, Grimshaw JM, Rothwell PM. Lead editorial: Trials – using the opportunities of electronic publishing to improve the reporting of randomised trials. Trials 2006; 7:6.
76. Chan AW. Bias, spin, and misreporting: time for full access to trial protocols and results. PLoS Medicine 2008; 5:e230.
77. Chan A-W, Tetzlaff J, Altman DG, Gøtzsche PC, Hróbjartsson A, Krleza-Jeric K, Laupacis A, Moher D. The SPIRIT initiative: defining standard protocol items for randomized trials [conference abstract]. German Journal for Evidence and Quality in Health Care 2008; 102:S27.
78. Strech D. Normative arguments and new solutions for the unbiased registration and publication of clinical trials. Journal of Clinical Epidemiology 2012; 65:276–281.
79. Antes G, Chalmers I. Under-reporting of clinical trials is unethical. Lancet 2003; 361:978–979.
80. Tse T, Williams RJ, Zarin DA. Reporting “basic results” in ClinicalTrials.gov. Chest 2009; 136:295–303.
81. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database – update and key issues. New England Journal of Medicine 2011; 364:852–860.
82. Viergever RF, Ghersi D. The quality of registration of clinical trials. PLoS One 2011; 6:e14701.
83. Dekkers OM, Soonawala D, Vandenbroucke JP, Egger M. Reporting of noninferiority trials was incomplete in trial registries. Journal of Clinical Epidemiology 2011; 64:1034–1038.
84. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. Journal of the American Medical Association 2009; 302:977–984.
85. Hartung J, Cottrell JE, Giffin JP. Absence of evidence is not evidence of absence. Anesthesiology 1983; 58:298–300.
86. Galton F. Biometry. Biometrika 1901; 1:7–10.
87. Dunn HL. Application of statistical methods in physiology. Physiological Reviews 1929; 9:275–398.
88. Hrynaszkiewicz I, Norton ML, Vickers AJ, Altman DG. Preparing raw clinical data for publication: guidance for journal editors, authors, and peer reviewers. Trials 2010; 11:9.
89. Hrynaszkiewicz I, Norton ML, Vickers AJ, Altman DG. Preparing raw clinical data for publication: guidance for journal editors, authors, and peer reviewers. BMJ 2010; 340:c181.
90. Sandercock PA, Niewada M, Czlonkowska A. The International Stroke Trial database. Trials 2011; 12:101.
91. Peng RD, Dominici F, Zeger SL. Reproducible epidemiologic research. American Journal of Epidemiology 2006; 163:783–789.
92. Simera I, Altman DG. Writing a research article that is “fit for purpose”: EQUATOR Network and reporting guidelines. Evidence Based Medicine 2009; 14:132–134.