How to Report a Research Study
Paul Cronin, MD, MS, James V. Rawson, MD, Marta E. Heilbrun, MD, MS, Janie M. Lee, MD, MSc, Aine M. Kelly, MD, MS, MA, Pina C. Sanelli, MD, MPH, Brian W. Bresnahan, PhD, Angelisa M. Paladin, MD
Incomplete reporting hampers the evaluation of results and bias in clinical research studies. Guidelines for reporting study design and
methods have been developed to encourage authors and journals to include the required elements. Recent efforts have been made to standardize the reporting of clinical health research, including clinical guidelines. In this article, the reporting of diagnostic test accuracy
studies, screening studies, therapeutic studies, systematic reviews and meta-analyses, cost-effectiveness assessments (CEA), recom-
mendations and/or guidelines, and medical education studies is discussed. The available guidelines, many of which can be found at
the Enhancing the QUAlity and Transparency Of health Research network, on how to report these different types of health research are also discussed. We also hope that this article can be used in academic programs to educate faculty and trainees about the available resources to improve our health research.
Key Words: Cost-effectiveness assessments; diagnostic test accuracy; enhancing the quality and transparency of health research; guidelines; medical education; recommendations; reporting; screening; systematic reviews and meta-analyses; therapy.
©AUR, 2014
This article is the first in a series of two articles that will
review how to report and how to critically appraise
research in health care. In this article, the reporting
of diagnostic test accuracy and screening studies, therapeutic
studies, systematic reviews and meta-analyses, cost-effective-
ness studies, recommendations and/or guidelines, and medi-
cal education studies is discussed. The available guidelines
on how to report these different types of health research are
also discussed. The second article will review the evolution
of standardization of critical appraisal techniques for health
research.
Recent efforts have been made to standardize both the
reporting and the critical appraisal of clinical health research
including clinical guidelines. In 2006, Enhancing the
QUAlity and Transparency Of health Research (EQUATOR)
network was formed to improve the quality of reporting
health research (1). Recognizing the need to critically assess
the methodological quality of studies and the widespread
deficiencies and lack of standardization in primary research
reporting, the network brought together international stake-
holders including editors, peer reviewers, and developers of
guidelines to improve both the quality of research publications
and the quality of the research itself (1). Many of the
Acad Radiol 2014; 21:1088–1116
From the Division of Cardiothoracic Radiology, Department of Radiology, University of Michigan Hospitals, B1 132G Taubman Center/5302, 1500 East Medical Center Drive, Ann Arbor, MI 48109 (P.C., A.M.K.); Department of Radiology and Imaging, Medical College of Georgia, Georgia Regents University, Augusta, Georgia (J.V.R.); Department of Radiology, University of Utah School of Medicine, Salt Lake City, Utah (M.E.H.); Department of Radiology, University of Washington, Seattle, Washington (J.M.L., B.W.B., A.M.P.); and Department of Radiology, Weill Cornell Medical College/New York Presbyterian Hospital, New York, New York (P.C.S.). Received March 24, 2014; accepted April 30, 2014. Address correspondence to: P.C. e-mail: [email protected]
©AUR, 2014
http://dx.doi.org/10.1016/j.acra.2014.04.016
presentations at the joint Radiological Alliance for Health
Service Research/Alliance of Clinical-Educators in Radio-
logy session at the 2013 Association of University Radiologists
annual meeting highlighted reporting guidelines available at
the EQUATOR network (1). The EQUATOR network goals
are raising awareness of the crucial importance of accurate and
complete reporting of research; becoming the recognized
global center providing resources, education, and training
relating to the reporting of health research and use of reporting
guidelines; assisting in the development, dissemination, and
implementation of reporting guidelines; monitoring the status
of the quality of reporting across health research literature; and
conducting research evaluating or pertaining to the quality of
reporting (1). The desired result of these goals is to improve
the quality of health care research reporting, which subsequently
improves patient care.
The EQUATOR Network Resource Centre provides up-
to-date resources related to health research reporting mainly
for authors of research articles; journal editors and peer re-
viewers; and reporting guideline developers to enable better
reporting, reviewing, and editing. Within the library for
health research reporting, the EQUATOR network has
developed and maintains a digital library that provides publi-
cations related to writing research articles; reporting guide-
lines and guidance on scientific writing; the use of reporting
guidelines in editorial and peer review processes; the develop-
ment of reporting guidelines; and evaluations of the quality of
reporting. The library contains comprehensive lists of the
available reporting guidelines, listed by study type. These
include experimental studies, observational studies, diagnostic
accuracy studies, biospecimen reporting, reliability and agree-
ment studies, systematic reviews and meta-analyses, qualita-
tive research, mixed-methods studies, economic evaluations,
and quality improvement studies. The network has developed
several standards for reporting research including the
TABLE 1. STARD Items and Explanation (13–28)
Item
Title/Abstract/Key words
1 Identify the article as a study of diagnostic
accuracy
Use the term ‘‘diagnostic accuracy’’ in the title or abstract.
In 1991, the National Library of Medicine’s MEDLINE database introduced a specific keyword (MeSH heading) for
diagnostic studies: ‘‘Sensitivity and Specificity.’’
Introduction
2 State the research questions or study aims, such as
estimating diagnostic accuracy or comparing
accuracy between tests or across participant groups.
Describe the scientific background, previous work on the subject, the remaining uncertainty, and, hence, the
rationale for their study.
Clearly specified research questions help the readers to judge the appropriateness of the study design and data
analysis.
Methods Participants
3 Describe the study population: The inclusion and
exclusion criteria, setting and locations where
data were collected.
Diagnostic accuracy studies describe the behavior of a test under particular circumstances and should report its
inclusion and exclusion criteria for selecting the study population. The spectrum of the target disease can vary
and affect test performance.
4 Describe participant recruitment and sampling: How
eligible patients are identified.
Was recruitment based on presenting symptoms, results from previous tests, or the fact that the participants had
received the index tests or the reference standard?
Describe how eligible subjects were identified and whether the study enrolls consecutive or random sampling of
patients. Study designs are likely to influence the spectrum of disease represented.
5 Describe participant sampling: Was the study population a consecutive series of participants defined by the selection criteria in items 3 and 4? If
not, specify how participants were further selected.
6 Describe data collection: Was data collection
planned before the index test and reference
standard were performed (prospective study) or
after (retrospective study)?
Prospective data collection has many advantages: better data control, additional checks for data integrity and
consistency, and a level of clinical detail appropriate to the problem. As a result, there will be fewer missing or
uninterpretable data items.
Retrospective data collection starts after patients have undergone the index test and the reference standard and
often relies on chart review. Studies with retrospective data collection may reflect routine clinical practice
better than a prospective study, but also may fail to identify all eligible patients or to provide data of high quality.
Methods Test methods
7 Describe the reference standard and its rationale. The reference standard is used to distinguish patients with and without disease. When it is not possible to subject
all patients to the reference standard for practical or ethical reasons, a composite reference standard is an
alternative. The components may reflect different definitions or strategies for disease diagnosis.
8 Describe technical specifications of material and
methods involved including how and when
measurements were taken, and/or cite references
for index tests and reference standard.
Describe the methods involved in the execution of index test and reference standard in sufficient detail to allow
other researchers to replicate the study. Differences in the execution of the index test and reference standard
are a potential source of variation in diagnostic accuracy.
The description should cover the full test protocol including the specification of materials and instruments
together with their instructions for use.
9 Describe the definitions and rationale for the units,
thresholds and/or categories of the index tests and
reference standard.
Test results can be truly dichotomous (eg, present or absent), have multiple categories or be continuous. Clearly
describe how and when category boundaries are used.
10 Describe the number, training, and expertise of the
persons executing and reading the index tests
and the reference standard.
Variability in the manipulation, processing, or reading of the index test or reference standard will affect diagnostic
accuracy.
Professional background, expertise, and prior training to improve interpretation and to reduce interobserver
variation all affect the quality of reading.
Academic Radiology, Vol 21, No 9, September 2014: How to Report a Research Study
11 Describe whether or not the readers of the index
tests and reference standard were blind (masked)
to the results of the other test and describe any
other clinical information available to the readers.
Knowledge of the results of the reference standard can influence the reading of the index test, and vice versa,
leading to inflated measures of diagnostic accuracy.
Methods Statistical methods
12 Describe methods for calculating or comparing
measures of diagnostic accuracy, and the statistical
methods used to quantify uncertainty (eg, 95%
confidence intervals).
Sensitivity, specificity, PPV, NPV, ROC, likelihood ratio and odds ratio.
13 Describe methods for calculating test reproducibility,
if done.
Reproducibility of the index test and reference standard varies. Poor reproducibility adversely affects diagnostic
accuracy. If possible, authors should evaluate the reproducibility of the test methods used in their study and
report their procedure to do so.
Results Participants
14 Report when study was performed, including
beginning and end dates of recruitment.
Technology behind many tests advances continuously, leading to improvements in diagnostic accuracy. There
may be a considerable gap between the dates of the study and the publication date of the study report.
15 Report Clinical and demographic characteristics of
the study population.
Description of the demographic and clinical characteristics are usually presented in a table, such as age, sex,
spectrum of presenting symptoms, comorbidity, current treatments, recruitment centers.
16 Report the number of participants satisfying the
criteria for inclusion who did or did not undergo
the index tests and/or the reference standard.
Describe why participants failed to receive either test.
Flow diagram is strongly recommended.
Results Test results
17 Report time interval between the index tests and
the reference standard, and any treatment
administered in between.
When delay occurs between doing the index test and the reference standard the condition of the patient may
change, leading to worsening or improvement of the disease. Similar concerns apply if treatment is started after
doing the index test but before doing the reference standard.
18 Report distribution of severity of disease (define
criteria).
Demographic and clinical features of the study population can affect measures of diagnostic accuracy. Many
diseases are not pure dichotomous states but cover a continuum, ranging from minute pathological changes to
advanced clinical disease. Test sensitivity is often higher in studies with a higher proportion of patients with
more advanced stages of the target condition.
19 Report a cross tabulation of the results of the index
tests (including indeterminate and missing results)
by the results of the reference standard; for
continuous results, the distribution of the test
results by the results of the reference standard.
Cross tabulations of test results in categories and graphs of distributions of continuous results are essential to
allow scientific colleagues to (re)calculate measures of diagnostic accuracy or to perform alternative analyses,
including meta-analysis.
20 Report any adverse events from performing the
index tests or the reference standard.
Not all tests are safe. Measuring and reporting of adverse events in studies of diagnostic accuracy can provide
additional information about the clinical usefulness of a particular test.
Results Estimates
21 Report estimates of diagnostic accuracy and
measures of statistical uncertainty (eg, 95%
confidence intervals).
Report a value of how well the test results correspond with the reference standard. The values presented in the
report should be taken as estimates with some variation. Many journals require or strongly encourage the use of
confidence intervals as measures of precision. A 95% confidence interval is conventional.
TABLE 2. Checklist for Reporting the Reference Case Cost-Effectiveness Analysis (61)
Framework
1 Background of the problem
2 General framing and design of the analysis
3 Target population for intervention
4 Other program descriptors (eg, care setting, model of
delivery, timing of intervention)
5 Description of comparator programs
6 Boundaries of the analysis
7 Time horizon
8 Statement of the perspective of the analysis
Data and Methods
9 Description of event pathway
10 Identification of outcomes of interest in analysis
11 Description of model used
12 Modeling assumptions
13 Diagram of event pathway (model)
14 Software used
15 Complete description of estimates of effectiveness,
resource use, unit costs, health states, and
quality-of-life weights and their sources
16 Methods for obtaining estimates of effectiveness, costs,
and preferences
17 Critique of data quality
18 Statement of year of costs
19 Statement of method used to adjust costs for inflation
20 Statement of type of currency
21 Source and methods for obtaining expert judgment
22 Statement of discount rates
Results
23 Results of model validation
24 Reference case results (discounted at 3% and
undiscounted): total costs and effectiveness,
incremental costs and effectiveness, and incremental
cost-effectiveness ratios
25 Results of sensitivity analyses
26 Other estimates of uncertainty, if available
27 Graphical representation of cost-effectiveness results
28 Aggregate cost and effectiveness information
29 Disaggregated results, as relevant
30 Secondary analyses using 5% discount rate
31 Other secondary analyses, as relevant
Discussion
32 Summary of reference case results
33 Summary of sensitivity of results to assumptions and
uncertainties in the analysis
34 Discussion of analysis assumptions having important
ethical implications
35 Limitations of the study
36 Relevance of study results for specific policy questions
or decisions
37 Results of related cost-effectiveness analyses
38 Distributive implications of an intervention
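Several of the checklist items above (statement of discount rates, reference case results discounted at 3%, incremental cost-effectiveness ratios, and the 5% secondary analysis) can be illustrated with a short sketch. All cost and effectiveness streams below are hypothetical and chosen only to show the arithmetic.

```python
# Sketch of discounting and incremental cost-effectiveness ratios (ICERs).
# The per-patient yearly costs and QALYs are hypothetical.

def present_value(stream, rate):
    """Discount a yearly stream; year 0 is undiscounted."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

# Hypothetical per-patient yearly costs and QALYs for two strategies
cost_new, qaly_new = [12000, 2000, 2000], [0.80, 0.80, 0.80]
cost_std, qaly_std = [5000, 1500, 1500], [0.70, 0.70, 0.70]

for rate in (0.03, 0.05):  # reference case 3%; 5% as a secondary analysis
    d_cost = present_value(cost_new, rate) - present_value(cost_std, rate)
    d_qaly = present_value(qaly_new, rate) - present_value(qaly_std, rate)
    print(f"rate={rate:.0%}  ICER = {d_cost / d_qaly:,.0f} per QALY gained")
```

The incremental ratio compares each strategy to its comparator on both discounted costs and discounted effects, which is why the checklist asks for total, incremental, and undiscounted results separately.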
TABLE 1. (continued) STARD Items and Explanation (13–28)
22 Report how indeterminate results, missing data, and outliers of the index tests were handled.
Uninterpretable, indeterminate, and intermediate test results pose a problem in the assessment of a diagnostic test. By itself, the frequency of these test results is an important indicator of the overall usefulness of the test. Furthermore, ignoring such test results can produce biased estimates of diagnostic accuracy if these results occur more frequently in patients with disease than in those without, or vice versa.
23 Report estimates of variability of diagnostic accuracy between subgroups of participants, readers, or centers, if done.
Since variability is the rule rather than the exception, researchers should explore possible sources of heterogeneity in results, within the limits of the available sample size. The best practice is to plan subgroup analyses before the start of the study.
24 Report estimates of test reproducibility, if done.
Report all measures of test reproducibility performed during the study. For quantitative analytical methods, report the coefficient of variation (CV).
Discussion
25 Discuss the clinical applicability of the study findings.
Provide a general interpretation of the results in the context of current evidence and its applicability in practice. Clearly define the methodological shortcomings of the study, how they potentially affected the results, and approaches to limit their significance. Discuss differences between the context of the study and other settings and patient groups in which the test is likely to be used. Provide future direction for this work in advancing clinical practice or research in this field.
NPV, negative predictive value; PPV, positive predictive value; ROC, receiver operating characteristic.
consolidated standards of reporting trials (CONSORT)
checklist and flow diagram for randomized control trials
(RCTs) (2–12); the transparent reporting of evaluations
TABLE 3. International Society for Pharmacoeconomics and Outcomes Research Randomized Control Trial Cost-Effectiveness Analysis (ISPOR RCT-CEA) Task Force Report of Core Recommendations for Conducting Economic Analyses Alongside Clinical Trials (62)
Trial design
1 Trial design should reflect effectiveness rather than efficacy when possible.
2 Full follow-up of all patients is encouraged.
3 Describe power and ability to test hypotheses, given the trial sample size.
4 Clinical end points used in economic evaluations should be disaggregated.
5 Direct measures of outcome are preferred to use of intermediate end points.
Data elements
6 Obtain information to derive health state utilities directly from the study population.
7 Collect all resources that may substantially influence overall costs; these include those related and unrelated to the intervention.
Database design and management
8 Collection and management of the economic data should be fully integrated into the clinical data.
9 Consent forms should include wording permitting the collection of economic data, particularly when it will be gathered from third-
party databases and may include pre- and/or post-trial records.
Analysis
10 The analysis of economic measures should be guided by a data analysis plan and hypotheses that are drafted prior to the onset of
the study.
11 All cost-effectiveness analyses should include the following: an intention-to-treat analysis; common time horizon(s) for
accumulating costs and outcomes; a within-trial assessment of costs and outcomes; an assessment of uncertainty; a common
discount rate applied to future costs and outcomes; an accounting for missing and/or censored data.
12 Incremental costs and outcomes should be measured as differences in arithmetic means, with statistical testing accounting for
issues specific to these data (eg, skewness, mass at zero, censoring, construction of QALYs).
13 Imputation is desirable if there is a substantial amount of missing data. Censoring, if present, should also be addressed.
14 One or more summary measures should be used to characterize the relative value of the intervention.
15 Examples include ratio measures, difference measures, and probability measures (eg, cost-effectiveness acceptability curves).
16 Uncertainty should be characterized. Account for uncertainty that stems from sampling, fixed parameters such as unit costs and
the discount rate, and methods to address missing data.
17 Threats to external validity—including protocol-driven resource use, unrepresentative recruiting centers, restrictive inclusion and
exclusion criteria, and artificially enhanced compliance—are best addressed at the design phase.
18 Multinational trials require special consideration to address intercountry differences in population characteristics and treatment
patterns.
19 When models are used to estimate costs and outcomes beyond the time horizon of the trial, good modeling practices should be
followed.
Models should reflect the expected duration of the intervention on costs and outcomes.
20 Subgroup analyses based on prespecified clinical and economic interactions, when found to be significant ex post, are
appropriate. Ad hoc subgroup analysis is discouraged.
Reporting the results
21 Minimum reporting standards for cost-effectiveness analyses should be adhered to for those conducted alongside clinical trials.
22 The cost-effectiveness report should include a general description of the clinical trial and key clinical findings.
23 Reporting should distinguish economic data collected as part of the trial vs. data not collected as part of the trial.
24 The amount of missing data should be reported. If imputation methods are used, the method should be described.
25 Methods used to construct and compare costs and outcomes, and to project costs and outcomes beyond the trial period should
be described.
26 The results section should include summaries of resource use, costs, and outcome measures, including point estimates and
measures of uncertainty. Results should be reported for the time horizon of the trial, and for projections beyond the trial (if
conducted).
27 Graphical displays are recommended for results not easily reported in tabular form (eg, cost-effectiveness acceptability curves,
joint density of incremental costs and outcomes).
QALYs, quality-adjusted life years.
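Items 12, 15, and 16 of the task force report can be illustrated with a minimal sketch: incremental costs and outcomes computed as differences in arithmetic means, with sampling uncertainty summarized by a bootstrap cost-effectiveness acceptability curve (CEAC). The per-patient data below are simulated and every parameter value is hypothetical.

```python
# Sketch: difference in arithmetic means plus a bootstrap CEAC.
# All per-patient (cost, QALY) data are simulated for illustration.
import random

random.seed(0)
n = 200
new = [(random.gauss(15000, 4000), random.gauss(2.3, 0.5)) for _ in range(n)]
std = [(random.gauss(8000, 3000), random.gauss(2.0, 0.5)) for _ in range(n)]

def mean_diffs(a, b):
    """Incremental cost and effect as differences in arithmetic means."""
    dc = sum(c for c, _ in a) / len(a) - sum(c for c, _ in b) / len(b)
    de = sum(q for _, q in a) / len(a) - sum(q for _, q in b) / len(b)
    return dc, de

def ceac_point(wtp, n_boot=1000):
    """Probability the new strategy is cost-effective at willingness-to-pay
    wtp: the share of bootstrap replicates with net monetary benefit > 0."""
    wins = 0
    for _ in range(n_boot):
        a = [random.choice(new) for _ in range(n)]
        b = [random.choice(std) for _ in range(n)]
        dc, de = mean_diffs(a, b)
        wins += (wtp * de - dc) > 0
    return wins / n_boot

for wtp in (10000, 30000, 50000, 100000):
    print(f"WTP {wtp:>6}: P(cost-effective) = {ceac_point(wtp):.2f}")
```

Plotting these probabilities against willingness-to-pay gives the acceptability curve the task force recommends for results that do not tabulate easily.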
with nonrandomized designs (TREND) checklist for
nonrandomized trials; the standards for the reporting of
diagnostic accuracy studies (STARD) checklist and flow
diagram for diagnostic test accuracy studies (13–28); the
strengthening the reporting of observational studies in
epidemiology (STROBE) checklists for cohort, case–control,
and cross-sectional studies; the preferred reporting items of
systematic reviews and meta-analyses (PRISMA) checklist
and flow diagram for systematic reviews and meta-analyses
(29); the consolidated criteria for reporting qualitative research
(COREQ) and enhancing transparency in reporting the
synthesis of qualitative research (ENTREQ) checklists for
reporting qualitative research; standards for quality improve-
ment reporting excellence (SQUIRE) checklist for quality
TABLE 4. Standards for Developing Trustworthy Clinical Practice Guidelines
Standard 1 Establishing transparency
Standard 2 Management of conflict of interest
Standard 3 Guideline development group composition
Standard 4 Clinical practice guideline-systematic review
intersection
Standard 5 Establishing evidence foundations for and rating
strength of recommendations
Standard 6 Articulations of recommendations
Standard 7 External review
Standard 8 Updating
Based on Clinical Practice Guidelines We Can Trust, Institute of Medicine, National Academies Press, 2011 (64).
improvement studies; consolidated health economic evalua-
tion reporting standards (CHEERS) for health economics
studies (30–39); and statement on reporting of evaluation
studies in health informatics (STARE-HI) for studies of
health informatics.
HOW TO REPORT STUDIES
Screening Studies and Diagnostic Test AccuracyStudies
There are no specific EQUATOR network recommendations
for reporting screening studies. However, in general, report-
ing screening studies should incorporate the items important
to diagnostic test accuracy studies. Screening is the application
of a test to detect a disease in an individual who has no known
signs or symptoms. The purpose of screening is to prevent or
delay the development of advanced disease through earlier
detection, enabling treatment that is both less morbid and
more effective.
Screening is distinct from other diagnostic tests in that pa-
tients undergoing screening are asymptomatic, and the chance
of having the disease of interest is lower in asymptomatic pa-
tients compared to those presenting with symptoms. For a
screening study, the most important factors to consider are
the characteristics of the population to be screened, the
screening regimens being compared, the diagnostic test per-
formance of the screening test or tests, and the outcome mea-
sure selected. Additional considerations are the diagnostic
consequences which occur during a patient’s screening
episode, such as additional downstream testing for those
with positive test results and follow-up monitoring of those
with negative test results.
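The effect of this lower pretest probability can be made concrete with Bayes' theorem: the same test yields a much lower positive predictive value in an asymptomatic screening population than in a symptomatic referral population. The sensitivity, specificity, and prevalence values in the sketch below are hypothetical, chosen only for illustration.

```python
# Illustrative only: how prevalence drives positive predictive value (PPV)
# for a fixed test. Sensitivity/specificity values are hypothetical.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """PPV via Bayes' theorem: P(disease | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.90, 0.95  # hypothetical test characteristics

# Same test, symptomatic vs. screening (asymptomatic) population:
print(ppv(sens, spec, prevalence=0.30))   # symptomatic referral, ~0.885
print(ppv(sens, spec, prevalence=0.005))  # screening population, ~0.083
```

With 0.5% prevalence, roughly 11 of every 12 positive results are false positives, which is why downstream testing after a positive screen deserves explicit reporting.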
Studies of diagnostic tests evaluate a test for diagnosing a
disease by comparing the test in patients with and without
disease using a reference standard. A diagnostic test accuracy
study provides evidence on how well a test correctly identifies
or rules out disease and informs subsequent decisions about
treatment for clinicians, their patients, and health care pro-
viders. An example would be an assessment of the test accu-
racy of computed tomography pulmonary angiography to
detect pulmonary embolism (PE) in patients with suspected
PE such as the PIOPED II trial (40). A frequently recommen-
ded and used guideline for the reporting of diagnostic test
accuracy research is STARD (13–28). The objective of
the STARD initiative is to improve the accuracy and
completeness of reporting of studies of diagnostic accuracy,
to allow readers to assess the potential for bias in the study
(internal validity), and to evaluate its generalizability
(external validity) (41). The STARD statement consists of a
checklist of 25 items and recommends the use of a flow dia-
gram which describes the design of the study and the flow of
patients (Appendix Table 1). Table 1 outlines all 25 items with
an explanation of each item. More than 200 biomedical
journals encourage the use of the STARD statement in their
instructions for authors (41). Its main advantage includes a
systematic approach to addressing the key components of
the study design, conduct, and analysis of diagnostic accuracy
studies for complete and accurate reporting. A checklist and
flowchart guide the author and/or reviewer to ensure that
all key components are addressed. Flaws in study design can
lead to biased, optimistic estimates of diagnostic accuracy.
Exaggerated and biased results from poorly designed and re-
ported diagnostic studies could ultimately lead to erroneous
practices in clinical care and inflated health care costs. Using
the STARD criteria for complete and accurate reporting
allows the reader to detect the potential for bias in the study
(internal validity) and to assess the generalizability and appli-
cability of the results (external validity).
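As a sketch of what STARD items 12 and 21 call for, the accuracy measures and their 95% confidence intervals can be computed directly from the reported cross tabulation. The 2 × 2 counts below are hypothetical, and the simple Wald interval is used only for illustration; Wilson or exact intervals are often preferable for small samples.

```python
# Sketch: diagnostic accuracy estimates with 95% CIs from a 2x2 table
# (STARD items 12 and 21). All counts are hypothetical.
import math

def proportion_ci(successes: int, total: int, z: float = 1.96):
    """Wald 95% confidence interval for a proportion (simple textbook form)."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical cross tabulation: index test vs. reference standard
tp, fp, fn, tn = 90, 10, 15, 185

sens, sens_lo, sens_hi = proportion_ci(tp, tp + fn)
spec, spec_lo, spec_hi = proportion_ci(tn, tn + fp)
ppv, ppv_lo, ppv_hi = proportion_ci(tp, tp + fp)
npv, npv_lo, npv_hi = proportion_ci(tn, tn + fn)

print(f"Sensitivity {sens:.2f} (95% CI {sens_lo:.2f}-{sens_hi:.2f})")
print(f"Specificity {spec:.2f} (95% CI {spec_lo:.2f}-{spec_hi:.2f})")
print(f"PPV {ppv:.2f} (95% CI {ppv_lo:.2f}-{ppv_hi:.2f})")
print(f"NPV {npv:.2f} (95% CI {npv_lo:.2f}-{npv_hi:.2f})")
```

Reporting the full cross tabulation, as item 19 requires, is what allows readers to recompute these estimates or pool them in a meta-analysis.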
There are issues with STARD. Imaging diagnostic test
technology may change faster than other diagnostic accuracy
test technology. Therefore, the generation of imaging test
technology may be very important; however, this information
is not built into STARD. There are practical issues with
diagnostic imaging tests. The reference standard is considered to
be the best available method for establishing the presence or
absence of the disease. The reference standard can be a single
method, or a combination of methods, to establish the pres-
ence of disease. It can include laboratory tests, imaging tests,
pathology, but also dedicated clinical follow-up of subjects.
Although it is preferred to use the same reference standard
in all patients in a study, this may not always be possible
with diagnostic imaging tests. When multiple criteria are
used for the reference standard, it is important to describe
its rationale, patient selection, and application in the study
design to avoid bias.
Diagnostic tests are developed and improved at a fast pace but
frequently increase health care costs. It is no longer acceptable in
this era of evidence-based health care decision making to omit
critical information required for the readership, regulatory
bodies, and insurers to determine the value of a diagnostic
test. This is particularly important, because the evidence stan-
dards for regulatory test approval are unfamiliar to many in
the medical and clinical research community and do not readily
crosswalk to the elements embedded in STARD (42). However,
the demand for diagnostics will likely increase as health care
moves to more personalized medicine.
TABLE 5. Reporting Guidelines by Research Study Design, Acronym, Web site URL, and Bibliographic Reference
Research
Study Design
Reporting Guidelines
Provided For
Reporting Guideline
Acronym
Reporting Guideline Web
Site URL
Full Text if
Available Full Bibliographic Reference
Diagnostic test accuracy Studies of diagnostic
accuracy
STARD http://www.stard-
statement.org/
Full-text PDF documents of
the STARD Statement,
checklist, flow diagram
and the explanation and
elaboration document
Bossuyt PM, Reitsma JB, Bruns DE,
Gatsonis CA, Glasziou PP, Irwig LM,
Lijmer JG, Moher D, Rennie D, de Vet
HC. Towards complete and accurate
reporting of studies of diagnostic
accuracy: the STARD initiative.
Standards for Reporting of Diagnostic
Accuracy.
Clin Chem. 2003; 49(1):1–6.
PMID: 12507953 (17).
BMJ. 2003; 326(7379):41–44.
PMID: 12511463 (14).
Radiology. 2003; 226(1):
24–28. PMID: 12511664
(13).
Ann Intern Med. 2003;
138(1):40–44. PMID:
12513043 (27).
Am J Clin Pathol. 2003;
119(1):18–22.
PMID: 12520693 (15).
Clin Biochem. 2003; 36(1):
2–7.
PMID: 12554053 (16).
Clin Chem Lab Med. 2003;
41(1):68–73.
PMID: 12636052 (17).
Clinical trials, experimental
studies
Parallel group randomised
trials
CONSORT http://www.consort-
statement.org/
Full-text PDF documents of
the CONSORT 2010
Statement, CONSORT
2010 checklist,
CONSORT 2010 flow
diagram, and the
CONSORT 2010
Explanation and
Elaboration document
Schulz KF, Altman DG, Moher D, for the
CONSORT Group. CONSORT 2010
Statement: updated guidelines for
reporting parallel group randomised
trials.
Ann Int Med. 2010;
152(11):726–32.
PMID: 20335313 (8).
BMC Medicine. 2010; 8:18.
PMID: 20334633 (7).
BMJ. 2010; 340:c332. PMID:
20332509 (5).
J Clin Epidemiol. 2010; 63(8):
834–40. PMID: 20346629
(9).
Lancet. 2010; 375(9721):
1136 supplementary
webappendix.
Obstet Gynecol. 2010;
115(5):1063–70.
PMID: 20410783 (10).
Open Med. 2010; 4(1):60–68.
PLoS Med. 2010; 7(3):
e1000251. PMID:
20352064 (12).
Trials. 2010; 11:32. PMID:
20334632 (6).
Trials assessing
nonpharmacologic
treatments
CONSORT
nonpharmacological
treatment
interventions
http://www.consort-
statement.org/
extensions/interventions/
non-pharmacologic-
treatment-interventions/
The full text of the extension
for trials assessing
nonpharmacologic
treatments
Boutron I, Moher D, Altman DG, Schulz
K, Ravaud P,
for the CONSORT group. Methods
and Processes
of the CONSORT Group: example of
an extension
Ann Intern Med. 2008:
W60–W67. PMID:
18283201 (75).
for trials assessing nonpharmacologic
treatments.
Cluster randomised trials CONSORT Cluster http://www.consort-
statement.org/
extensions/designs/
cluster-trials/
The full text of the extension
for cluster randomised
trials
Campbell MK, Piaggio G, Elbourne DR,
Altman DG; for the CONSORT Group.
Consort 2010 statement: extension to
cluster randomised trials.
BMJ. 2012; 345:e5661.
PMID: 22951546 (51).
Reporting randomised trials
in journal and conference
abstracts
CONSORT for abstracts http://www.consort-
statement.org/
extensions/data/
abstracts/
The full text of the extension
for journal and
conference abstracts
Hopewell S, ClarkeM,Moher D,Wager E,
Middleton P, Altman DG, Schulz KF,
the CONSORT Group. CONSORT for
reporting randomised trials in journal
and conference abstracts.
Lancet. 2008; 371(9609):
281–283. PMID: 18221781
(76).
Reporting of pragmatic trials
in health care
CONSORT pragmatic
trials
http://www.consort-
statement.org/
extensions/designs/
pragmatic-trials/
The full text of the extension
for pragmatic trials in
health care
Zwarenstein M, Treweek S, Gagnier JJ,
AltmanDG, Tunis S, HaynesB, Oxman
AD, Moher D; CONSORT group;
Pragmatic Trials in Healthcare
(Practihc) group. Improving the
reporting of pragmatic trials: an
extension of the CONSORT
statement.
BMJ. 2008; 337:a2390.
PMID: 19001484 (77).
Reporting of harms in
randomized trials
CONSORT Harms http://www.consort-
statement.org/
extensions/data/harms/
Ioannidis JPA, Evans SJW, Gotzsche
PC, O’Neill RT, Altman DG, Schulz K,
Moher D, for the CONSORT Group.
Better Reporting of Harms in
Randomized Trials: An Extension of
the CONSORT Statement.
Ann Intern Med. 2004; 141
(10):781–788. PMID:
15545678 (78).
Patient-reported outcomes
in randomized trials
CONSORT-PRO http://www.consort-
statement.org/
extensions/data/pro/
The full text of the extension
for patient reported
outcomes (PROs)
Calvert M, Blazeby J, Altman DG, Revicki
DA, Moher D, Brundage MD;
CONSORT PRO Group. Reporting of
patient-reported outcomes in
randomized trials: the CONSORTPRO
extension.
JAMA. 2013; 309(8):814–
822. PMID; 23443445
(79).
Reporting of noninferiority
and equivalence
randomized trials
CONSORT noninferiority http://www.consort-
statement.org/
extensions/designs/non-
inferiority-and-
equivalence-trials/
The full text of the extension
for noninferiority and
equivalence randomized
trials
Piaggio G, Elbourne DR, Pocock SJ,
Evans SJ, Altman DG; CONSORT
Group. Reporting of noninferiority and
equivalence randomized trials:
extension of the CONSORT 2010
statement.
JAMA. 2012; 308(24):2594–
2604. PMID: 23268518
(80).
Defining standard protocol
items for clinical trials
SPIRIT http://www.spirit-
statement.org/
The full text of the SPIRIT
2013 Statement
Chan A-W, Tetzlaff JM, Altman DG,
Laupacis A, Gøtzsche PC, Krle�za-
Jeri�c K, Hr�objartsson A, Mann H,
Dickersin K, Berlin J, Dor�e C,
Parulekar W, Summerskill W, Groves
T, Schulz K, Sox H, Rockhold FW,
Rennie D, Moher D. SPIRIT 2013
Statement: defining standard protocol
items for clinical trials.
Ann Intern Med. 2013;
158(3):200–207. PMID:
23295957 (81).
Systematic reviews/
meta-analyses/HTA
Systematic reviews and
meta-analyses
PRISMA http://www.prisma-
statement.org/
Full-text PDF documents of
the PRISMA Statement,
checklist, flow diagram
and the PRISMA
Moher D, Liberati A, Tetzlaff J, Altman
DG, The PRISMA Group. Preferred
Reporting Items for Systematic
PLoS Med. 2009; 6(7):
e1000097. PMID:
19621072 (82).
BMJ. 2009; 339:b2535.
(Continued on next page)
Academic
Radiology,Vol21,No9,September2014
HOW
TO
REPORTARESEARCHSTUDY
1095
TABLE 5. (continued) Reporting Guidelines by Research Study Design, Acronym, Web site URL, and Bibliographic Reference
Research
Study Design
Reporting Guidelines
Provided For
Reporting Guideline
Acronym
Reporting Guideline Web
Site URL
Full Text if
Available Full Bibliographic Reference
Explanation and
Elaboration
Reviews and Meta-Analyses: The
PRISMA Statement.
PMID: 19622551.
Ann Intern Med. 2009;
151(4):264–269, W64.
PMID: 19622511 (83).
J Clin Epidemiol. 2009;
62(10):1006–1012. PMID:
19631508 (84).
Open Med. 2009; 3(3);
123–130
Reporting systematic
reviews in journal and
conference abstracts
PRISMA for Abstracts Beller EM, Glasziou PP, Altman DG,
Hopewell S, Bastian H, Chalmers I,
Gøtzsche PC, Lasserson T, Tovey D;
PRISMA for Abstracts Group.
PRISMA for Abstracts: reporting
systematic reviews in journal and
conference abstracts.
PLoS Med. 2013; 10(4):
e1001419. PMID:
23585737 (85).
Meta-analysis of individual
participant data
Riley RD, Lambert PC, Abo-Zaid G.
Meta-analysis of individual participant
data: rationale, conduct, and
reporting.
BMJ. 2010; 340:c221. PMID
20139215 (86).
Economic evaluations Economic evaluations of
health interventions
CHEERS http://www.ispor.org/
taskforces/Economic
PubGuidelines.asp
Information about the
CHEERS Statement and a
full-text PDF copy of the
CHEERS checklist
Husereau D, Drummond M, Petrou S,
Carswell C, Moher D, Greenberg D,
Augustovski F, Briggs AH, Mauskopf
J, Loder E. Consolidated Health
Economic Evaluation Reporting
Standards (CHEERS) statement
Eur J Health Econ. 2013;
14(3):367–372. PMID:
23526140 (30).
Value Health. 2013; 16(2):
e1–e5. PMID: 23538200
(31).
Clin Ther. 2013; 35(4):
356–363. PMID: 23537754
(32).
Cost Eff Resour Alloc. 2013;
11(1):6. PMID: 23531194
(33).
BMC Med. 2013; 11:80.
PMID: 23531108 (34).
BMJ. 2013; 346:f1049.
PMID: 23529982 (35).
Pharmacoeconomics. 2013;
31(5):361–367. PMID:
23529207 (36).
J Med Econ. 2013; 16(6):
713–719. PMID: 23521434
(37).
Int J Technol Assess Health
Care. 2013; 29(2):117–
122. PMID: 23587340
(38).
BJOG. 2013; 120(6):765–
CRONIN
ETAL
Academic
Radiology,Vol21,No9,September2014
1096
770. PMID: 23565948
(39).
Clinical trials, diagnostic
accuracy studies,
experimental studies,
observational studies
Narrative in reports of
medical research
Schriger DL. Suggestions for improving
the reporting of clinical research: the
role of narrative.
Ann Emerg Med. 2005;
45(4):437–443. PMID:
15795727 (87).
Qualitative research Qualitative research
interviews and focus
groups
COREQ Tong A, Sainsbury P, Craig J.
Consolidated criteria for reporting
qualitative research (COREQ): a 32-
item checklist for interviews and focus
groups.
Int J Qual Health Care. 2007;
19(6):349–357. PMID
17872937
Observational studies For completeness,
transparency and data
analysis in case reports
and data from the point of
care.
CARE http://www.care-statement.
org/
The CARE checklist and the
CAREwriting template for
authors
Gagnier JJ, Kienle G, Altman DA, Moher
D, Sox H, Riley D; the CARE Group.
The CARE Guidelines: consensus-
based clinical case reporting
guideline development.
BMJ Case Rep. 2013; http://
dx.doi.org/10.1136/bcr-
2013-201554
PMID: 24155002 (88).
Global Adv Health Med.
2013; 10.7453/gahmj.
2013.008
Dtsch Arztebl Int. 2013;
110(37):603–608.
PMID: 24078847 Full-text
in English/Full-text in
German
J Clin Epidemiol. 2013. Epub
ahead of print. PMID:
24035173 (89).
J Med Case Rep. 2013; 7(1):
223. PMID: 24228906
(90).
J Diet Suppl. 2013; 10(4):
381–90. PMID: 24237192
(91).
Reliability and agreement
studies
Reliability and agreement
studies
GRRAS Kottner J, Audig�e L, Brorson S, Donner
A, Gajeweski BJ, Hr�objartsson A,
Robersts C, Shoukri M, Streiner DL.
Guidelines for reporting reliability and
agreement studies (GRRAS) were
proposed.
J Clin Epidemiol. 2011;
64(1):96–106 PMID:
21130355 (92).
Int J Nurs Stud. 2011;
48(6):661–671. PMID:
21514934 (93).
Qualitative research,
systematic reviews/
meta-analyses/HTA
Synthesis of qualitative
research
ENTREQ Tong A, Flemming K, McInnes E, Oliver
S, Craig J. Enhancing transparency in
reporting the synthesis of qualitative
research: ENTREQ.
BMC Med Res Methodol.
2012; 12(1):181. PMID
23185978 (74).
Qualitative research Qualitative research
interviews and focus
groups
COREQ http://intqhc.oxfordjournals.
org/content/19/6/349.
long
Full text Tong A, Sainsbury P, Craig J.
Consolidated criteria for reporting
qualitative research (COREQ): a 32-
item checklist for interviews and focus
groups.
Int J Qual Health Care. 2007;
19(6):349–357. PMID:
17872937 (73).
Mixed-methods studies Mixed methods studies in
health services research
GRAMMS O’Cathain A, Murphy E, Nicholl J. The
quality of mixed methods studies in
health services research.
J Health Serv Res Policy.
2008; 13(2):92–98. PMID:
18416914 (94).
(Continued on next page)
Academic
Radiology,Vol21,No9,September2014
HOW
TO
REPORTARESEARCHSTUDY
1097
TABLE 5. (continued) Reporting Guidelines by Research Study Design, Acronym, Web site URL, and Bibliographic Reference
Research
Study Design
Reporting Guidelines
Provided For
Reporting Guideline
Acronym
Reporting Guideline Web
Site URL
Full Text if
Available Full Bibliographic Reference
Quality improvement
studies
Quality improvement in
health care
SQUIRE http://squire-statement.org/ Davidoff F, Batalden P, Stevens D,
Ogrinc G, Mooney S. Publication
guidelines for quality improvement in
health care: evolution of the SQUIRE
project.
Qual Saf Health Care. 2008;
17 Suppl 1:i3-i9. PMID:
18836063 (95).
BMJ. 2009; 338:a3152.
PMID: 19153129 (96).
Jt Comm J Qual Patient Saf.
2008; 34(11):681–687.
PMID: 19025090 (97).
Ann Intern Med. 2008;
149(9):670–676. PMID:
18981488 (98).
J Gen Intern Med. 2008;
23(12):2125–2130. PMID:
18830766 (99)
Health informatics Evaluation studies in health
informatics
STARE-HI Talmon J, Ammenwerth E, Brender J, de
Keizer N, Nykanen P, Rigby M.
STARE-HI - Statement on reporting of
evaluation studies in Health
Informatics.
Int JMed Inform. 2009; 78(1):
1–9. PMID: 18930696
(100).
CARE, case reports; CHEERS, consolidated health economic evaluation reporting standards; CONSORT, consolidated standards of reporting trials; COREQ, consolidated criteria for report-
ing qualitative research; ENTREQ, enhancing transparency in reporting the synthesis of qualitative research; GRAMMS, good reporting of a mixed-methods study; GRRAS, guidelines for re-
porting reliability and agreement studies; HTA, health technology assessment; PRISMA, preferred reporting items for systematic reviews and meta-analyses; SPIRIT, standard protocol items:
recommendations for interventional trials; SQUIRE, standards for quality improvement reporting excellence; STARD, standards for reporting of diagnostic accuracy; STARE-HI, statement on
reporting of evaluation studies in health informatics.
Therapeutic Studies
The double-blind RCT is the reference standard approach
to trial design and is most comprehensively reported by
following the CONSORT Statement. The CONSORT
Statement is an evidence-based, minimum set of recommen-
dations for reporting RCTs. It offers a standard way for au-
thors to prepare reports of trial findings, facilitating their
complete and transparent reporting, and aiding their critical
appraisal and interpretation. The CONSORT Statement
comprises a 25-item checklist and a flow diagram (Appendix
Table 2), along with some brief descriptive text. The checklist
items focus on reporting how the trial was designed, analyzed,
and interpreted; the flow diagram displays the progress of all
participants through the trial (43). The checklist items pertain
to the content of the title, abstract, introduction, methods, re-
sults, discussion, and other information. The flow diagram is
intended to depict the passage of participants through an
RCT. The revised flow diagram depicts information from
four stages of a trial (enrollment, intervention allocation,
follow-up, and analysis). The diagram explicitly shows the
number of participants, for each intervention group, included
in the primary data analysis (43). The most up-to-date revision of the CONSORT Statement is CONSORT 2010 (2–12).
Randomization is thought to improve the reliability and validity of study results by mitigating selection bias. For example, suppose both therapy A and therapy B are options for treating a disease, and it is known that treatment is less effective in older patients. A study is performed that shows therapy A to be more effective, but the mean age of patients undergoing therapy A is 10 years less than the mean age in the therapy B arm, and this difference is statistically significant. Given this result, the conclusion that therapy A is better than therapy B is tempered by the knowledge that any benefit may be due to the selection of younger patients for therapy A. With randomization, the ages of the therapy groups should not be statistically different. Thus, if a benefit of therapy A persists, the conclusion that therapy A is more effective has more validity. Adherence to the principles in the CONSORT Statement is intended to improve study reporting by making selection, randomization, and assignment criteria explicit, ensuring that it is the treatments, rather than patient factors, that drive the results.
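The confounding scenario above can be made concrete with a small simulation (the ages, cutoffs, and sample sizes here are purely hypothetical, not drawn from any study in this article):

```python
import random

random.seed(42)

# 1000 hypothetical patients aged 40-80 years.
ages = [random.uniform(40, 80) for _ in range(1000)]

# Biased allocation: younger patients are preferentially given therapy A.
biased_a = [a for a in ages if a < 60]
biased_b = [a for a in ages if a >= 60]

# Randomized allocation: a coin flip balances age across the arms.
random.shuffle(ages)
rand_a, rand_b = ages[:500], ages[500:]

def mean(xs):
    return sum(xs) / len(xs)

# The biased split produces a large age gap between arms; randomization
# leaves only a small chance difference.
print(mean(biased_a) - mean(biased_b))
print(mean(rand_a) - mean(rand_b))
```

Under the biased allocation, the arms differ in mean age by roughly 20 years, so any apparent benefit of therapy A is confounded with age; after randomization, the age difference is negligible and a persisting benefit is more credibly attributed to the therapy itself.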
To assess the strengths and limitations of RCTs, readers
need and deserve to know the quality of their methods. Pre-
vious studies have shown that reports of low-quality RCTs,
compared to reports of higher quality ones, overestimate the
effectiveness of interventions by about 30% across a variety
of health care conditions. Advantages of the CONSORT guidelines are increased transparency, allowing the reader to better assess the strengths and limitations of an RCT, and a reduced risk of overestimating treatment effects (44).
However, the CONSORT Statement is rarely directly applicable to therapeutic trials in radiology, as these are usually nonpharmacologic and/or technical studies, including percutaneous and endovascular interventions. These studies have specific issues that introduce bias into the reporting of results, including challenges from impossible or partial blinding, clustering, the experience of providers and centers, and both patient and provider willingness to undergo randomization (45). Thus, the CONSORT Statement has been specifically modified to guide reporting of nonpharmacologic
treatments (46). In addition, modifications have been made
to promote systematic reporting for cohort and comparative
study designs, using the principles from the CONSORT
Statement (45,47).
The most critical modifications to the CONSORT
Statement for reporting interventional and therapeutic study
results relate to treatment and provider details. Precise descrip-
tions of the experimental treatment and comparator should be
reported (48). In addition, descriptions of how, why, and
when treatment is modified help to demonstrate sufficient
separation of study arms, as study arm crossover may limit
the conclusions that can be drawn from interventional trials
(49). Although rarely included, it is recommended that trials
describe in detail the volume of experience of providers, as pa-
tient outcomes are directly related to experience (48,50).
Correlations between patients can be introduced on the basis of undocumented similarities in process, for instance by a single provider or practice location. Clustering is the situation in which the subjects in one trial arm are more like each other than the subjects in another arm, reducing statistical power and complicating the interpretation of study results. For example, if therapy A is offered only in a homogeneous suburban community and therapy B is offered only in a high-density urban center, the subjects who undergo therapy A will tend to be like each other but unlike those undergoing therapy B. If the outcome is better in the population undergoing therapy A, fewer patients will appear to be required to demonstrate a statistically significant benefit of therapy A compared to therapy B. However, the apparent benefit may be due to the similarity of patients within each group rather than to the therapy itself. To overcome this clustering, it is necessary to recruit more patients and explicitly account for the similarities within each group. This accounting should be described in the sample size calculation and statistical reporting. Alternatively, if both therapy A and therapy B were offered in each center and subjects were randomly assigned to either therapy, the sample size could be reduced.
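The extra recruitment needed to offset clustering is commonly quantified with the design effect, DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. A minimal sketch (the numbers are illustrative, not from any trial discussed here):

```python
def design_effect(cluster_size: float, icc: float) -> float:
    """Variance inflation from cluster randomization: DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1.0) * icc

def required_subjects(n_individual: float, cluster_size: float, icc: float) -> float:
    """Person-level sample size inflated for clustering (round up in practice)."""
    return n_individual * design_effect(cluster_size, icc)

# Illustrative numbers: 400 subjects would suffice under individual
# randomization; with clusters of 20 and an ICC of 0.05 the design effect is
# 1.95, so nearly twice as many subjects are needed.
print(design_effect(20, 0.05))
print(required_subjects(400, 20, 0.05))
```

Even a small ICC inflates the required sample substantially when clusters are large, which is why the CONSORT cluster extension asks that this adjustment be reported explicitly.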
The main CONSORT Statement is based on the ‘‘stan-
dard’’ two-group parallel design. However, there are several
different types of randomized trials, some of which have
different designs, interventions, and data. To help improve
the reporting of these trials, the CONSORT Group has
been involved in extending and modifying the main
CONSORT Statement for application in these various areas
including design extensions such as cluster trials, noninferior-
ity and equivalence trials and pragmatic trials; intervention ex-
tensions for nonpharmacological treatment interventions; and
data extensions for patient-reported outcomes and harms.
Future directions for CONSORT include developing new extensions and updating the various existing extensions to reflect the 2010 checklist. For additional details, it is
recommended that the reader review the Extended CON-
SORT statement, which may be found on the EQUATOR
network (1,51).
Meta-analyses
A systematic review sums up the best available research on a
specific question, synthesizing the results of several studies.
A meta-analysis is a quantitative statistical analysis of two or
more separate but similar experiments or studies to test the
pooled data for statistical significance (52). Systematic reviews
and meta-analyses have become increasingly important in
health care. Clinicians read them to keep up to date with their
field (53,54). When reporting a meta-analysis of diagnostic
test accuracy studies, important domains are problem formu-
lation, data acquisition, quality appraisal of eligible studies, sta-
tistical analysis of quantitative data, and clinical interpretation
of the evidence. With regard to problem formulation, it is
important to define the question and objective of the review
and establish criteria for including studies in the review. For
data acquisition, a literature search should be conducted to
retrieve the relevant literature. For the quality appraisal of
eligible studies, variables of interest should be extracted
from the data. Studies should be assessed for quality and applicability to the clinical problem at hand. The evidence should be summarized qualitatively and, if appropriate, quantitatively, that is, by performing a meta-analysis. With regard to the statistical analysis of quantitative data, diagnostic accuracy should be estimated, the data displayed, and heterogeneity and publication bias assessed. The robustness of the estimates of diagnostic accuracy should be assessed using sensitivity analyses (if applicable), and heterogeneity in test accuracy should be explored and explained using subgroup analyses (if applicable). For the clinical interpretation of the evidence, there should be a graphic display of how the evidence alters the pretest probability, yielding the post-test probability. This is a lot to report.
Thankfully, there is a guideline for reporting systematic re-
views and meta-analyses, PRISMA (29).
In 1996, to address the suboptimal reporting of meta-analyses, an international group developed a guideline called the Quality Of Reporting Of Meta-analyses (QUOROM) Statement, which focused on the reporting of meta-analyses of RCTs (55). In 2009, the guideline was updated to address several
conceptual and practical advances in the science of systematic
reviews and was renamed PRISMA. The aim of the PRISMA
Statement is to help authors report a wide array of systematic
reviews to assess the benefits and harms of a health care inter-
vention. PRISMA focuses on ways in which authors can
ensure the transparent and complete reporting of systematic
reviews and meta-analyses and has adopted the definitions of
systematic review and meta-analysis used by the Cochrane
Collaboration (56). PRISMA is an evidence-based minimum set of items for reporting systematic reviews and meta-analyses, the aim of which is to help authors improve their reporting (29).
PRISMA focuses on randomized trials, but PRISMA can
also be used as a basis for reporting systematic reviews of other
types of research, particularly evaluations of interventions.
The PRISMA Statement consists of a 27-item checklist and
a four-phase flow diagram (Appendix Table 3). The advan-
tages of using PRISMA when reporting a systematic review
and meta-analysis are an inclusion of assessments of bias,
such as publication or small sample size bias, and heterogeneity.
This is important as there is overwhelming evidence for the
existence of these biases and their impact on the results of
systematic reviews (57). Even when the possibility of publica-
tion bias is assessed, there is no guarantee that systematic re-
viewers have assessed or interpreted it appropriately, and the
absence of reporting such an assessment does not necessarily
indicate that it was not done. However, reporting an assess-
ment of possible publication bias is likely to be a marker of
the thoroughness of the conduct of the systematic review (57).
A limitation of PRISMA with regard to imaging studies is that it is primarily designed for the reporting of systematic reviews and meta-analyses of therapeutic studies and RCTs, which are not the predominant research designs in radiology. Another issue is the methodological differences in performing and reporting systematic reviews and meta-analyses of diagnostic imaging accuracy studies compared to therapeutic studies. Therefore, some of the PRISMA items are not applicable to the evidence synthesis of diagnostic imaging accuracy studies. PRISMA is useful but can be difficult to apply in diagnostic imaging accuracy studies because of the quality and variability of the studies available. Systematic reviews and meta-analyses of diagnostic imaging accuracy studies have inherently high heterogeneity, and it is uncertain whether the tests for heterogeneity listed in PRISMA are applicable to these systematic reviews and meta-analyses. Developing a
new reporting guideline based on the current PRISMA, but
designed for reporting systematic reviews and meta-analyses
of diagnostic imaging accuracy studies would be a helpful
future direction. This reporting guideline would have to
take into account the methodological differences of reporting
systematic reviews and meta-analyses of diagnostic imaging
accuracy studies. However, PRISMA is a reasonable tool to
ensure the transparent and complete reporting of systematic
reviews and meta-analyses.
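One common way reviewers summarize the heterogeneity that PRISMA asks them to assess is the I² statistic, derived from Cochran's Q. A minimal sketch with invented study data (the effect sizes and variances are illustrative only):

```python
def cochran_q(effects, variances):
    """Cochran's Q: weighted sum of squared deviations from the fixed-effect pooled mean."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))

def i_squared(effects, variances):
    """I^2 = max(0, (Q - df) / Q): fraction of variability beyond chance."""
    q = cochran_q(effects, variances)
    degrees_freedom = len(effects) - 1
    return max(0.0, (q - degrees_freedom) / q) if q > 0 else 0.0

# Two strongly discordant hypothetical studies (log odds ratios and variances):
# most of the observed variability reflects between-study heterogeneity.
print(i_squared([0.10, 0.90], [0.04, 0.04]))
```

When I² is high, as it typically is for diagnostic imaging accuracy studies, a fixed-effect summary is hard to justify, which is part of the difficulty of applying PRISMA's heterogeneity items in this setting.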
Cost-Effectiveness Assessments
Economic evaluations of health interventions pose a particular
challenge for reporting because substantial information on
costs, outcomes, and health systems must be conveyed to allow
scrutiny of findings. Despite a growth in published health eco-
nomic reports, existing reporting guidelines are not widely
adopted. Challenges include having to assess international
studies with differing systems and cost structures, studies
assessing multiple end points and multiple stakeholder per-
spectives, and alternative time horizons. Consolidating and
updating existing guidelines and promoting their use in an
efficient manner also add complexity. A checklist is one way
Academic Radiology, Vol 21, No 9, September 2014 HOW TO REPORT A RESEARCH STUDY
to help authors, editors, and peer reviewers use guidelines to
improve reporting (30–39).
In health care, if newer innovations are more expensive, researchers may conduct a CEA and/or a specific type of CEA, a cost–utility analysis. These are derivatives of cost–benefit analysis (CBA) and have increased in use in health care and in imaging over time (58,59). Elixhauser et al. summarized
cost–benefit and CEA studies between 1979 and 1993 (60).
CEA researchers generally have not measured benefit effects
using monetary metrics, as done in traditional CBAs. In
1993, the US Panel on Cost-Effectiveness analyses in Health
and Medicine was convened to recommend standards. Their
recommendations for standard reporting of reference case an-
alyses are presented in Table 2 (61). The panel recommended
that reports of cost-effectiveness should allow determination of
whether the results can be juxtaposed with those of other
CEAs. Elements to include in a journal report are summarized
in the checklist (61).
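Comparisons across CEAs of the kind the panel recommends usually rest on the incremental cost-effectiveness ratio (ICER). A minimal sketch with hypothetical costs and effects (the dollar and QALY figures are invented for illustration):

```python
def icer(cost_new: float, cost_old: float, effect_new: float, effect_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra unit of health
    effect, typically dollars per quality-adjusted life year (QALY) gained."""
    delta_cost = cost_new - cost_old
    delta_effect = effect_new - effect_old
    if delta_effect == 0:
        raise ValueError("no incremental effect: the ICER is undefined")
    return delta_cost / delta_effect

# Hypothetical comparison: a new strategy costing $12,000 vs an old one at
# $8,000, yielding 6.2 vs 6.0 QALYs, gives about $20,000 per QALY gained.
print(icer(12000, 8000, 6.2, 6.0))
```

Reporting the costs and effects of each strategy separately, as the checklist requires, is what allows a reader to recompute and juxtapose ICERs across studies.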
The International Society for Pharmacoeconomics and
Outcomes Research (ISPOR) quality improvement in cost-
effectiveness research task force recommended that the society
help make guidelines available to authors and reviewers to
support the quality, consistency, and transparency of health
economic and outcomes research reporting in the biomedical
literature. The concept for this task force arose in response to the results of a survey of medical journal editors. The
survey revealed that the vast majority of respondents (either
submitting authors or reviewers) used no guidelines, require-
ments, or checklists for health economics or outcomes
research. Most respondents also indicated that they would
be willing to incorporate guidelines if they were made avail-
able by a credible professional health economics and outcomes
research (HEOR) organization. Although there are a number
of different health economic guidelines and checklists avail-
able in the public domain for conduct and reporting
HEOR, they are not widely used in biomedical publishing.
However, there appears to be fairly broad consensus among
available recommendations and guidelines, as well as stability
of content for several years with regard to reporting of
HEOR. Under its mandate to examine issues in the quality improvement of cost-effectiveness research (CER), the ISPOR RCT-CEA Task Force developed its report of core recommendations for conducting economic analyses alongside clinical trials, which is provided
in Table 3 (62). The CHEERS Statement attempts to opti-
mize the reporting of health economic evaluations and
consolidate and update previous health economic evaluation
guidelines into one current, useful reporting guidance
(Appendix Table 4) (30–39). The primary audiences for the
CHEERS statement are researchers reporting economic
evaluations and the editors and peer reviewers assessing
them for publication. The advantage of CHEERS is that it consolidates and updates existing guidelines and promotes their use in a user-friendly manner. CHEERS should lead to better reporting and, ultimately, better health decisions (30–39).
Recommendations and/or Guidelines
Guidelines represent a bridge between research and clinical practice and were defined in 1990 by the Institute of Medicine Committee to Advise the Public Health Service on Clinical Practice Guidelines (Clinical Practice Guidelines: Directions for a New Program. Washington, DC: National Academy Press; 1990):
Practice guidelines are systematically developed state-
ments to assist practitioner and patient decisions
about appropriate health care for specific clinical
circumstances.
Guidelines, like all research publications, can be formatted
in many ways. In 2011, the US Institute of Medicine pub-
lished eight standards for developing guidelines, summarized
in Table 4 (63). These standards include the expectation that recommendations detail precise actions and the circumstances under which they should be performed. Of note, these standards also require a
review of the guidelines with scientific and/or clinical experts
and organizations, patients, and representatives of the public.
The National Guideline Clearinghouse is hosted by the
Agency for Healthcare Research and Quality (AHRQ) and contains over 2600 guidelines, including guidelines by the
American College of Radiology (64). The guidelines in the
database come from multiple sources. Overlapping guidelines
from different organizations may have different intended out-
comes or be designed for a different patient population. Thus,
some of the guidelines in the National Guideline Clearing-
house contradict each other, for example, breast cancer
screening guidelines.
Medical Education Studies
Types of medical education studies include curricular innova-
tions which may follow the Kern six-step process (including
problem identification, targeted needs assessment, goals and
objectives, deciding educational strategies, implementation,
and evaluation); consensus conference proceedings, identi-
fying and addressing knowledge gaps which may use a formal
process to achieve consensus such as the Delphi method; qual-
itative research studies; quantitative research studies; and
mixed-methods research studies (65,66). Curricular innovations can be assessed subjectively, through learner satisfaction or self-reported confidence, or objectively, through knowledge, skills, attitudes, behaviors, or performance (65). Educational
studies can be descriptive, such as case reports/case series,
correlational (ecologic) studies, or cross-sectional studies.
They can also be analytical, such as case–control studies,
cohort/prospective studies, or RCTs (67). Medical education
study designs include true experimental designs, such as pro-
spective cohort studies which have a pretest or post-test with
control group design; the Solomon four-group design, which
has two intervention and two control groups, half of each
group taking the pretest and all taking the post-test; and
the post-test only with control group design. Quasi-
experimental designs include time series design (repeated
testing of the same group), nonequivalent control group
(comparison group) design, separate sample pretest or post-
test design, and separate sample pretest or post-test with con-
trol group design. Pre-experimental designs include one
group pretest or post-test design, static group comparison
design (equivalent to cross-sectional studies), and case studies
(68). Most often, educational case reports describe new
curricula, but case reports of unusual educational problems
may warrant publication as well. Correlational (ecologic)
studies examine associations between exposures and an
outcome with the unit of analysis being greater than the indi-
viduals exposed such as a geographic region. Although there
are several types of cross-sectional study designs, the most
common in medical education research are survey studies.
Previous authors have looked at the subject matter of educational research projects and found that the majority of publications focused on subjective measures such as trainee assessment and satisfaction (69). Less often, medical education research has focused on faculty performance. Educational projects can focus on aspects of professionalism such as ethics, morality, tenure, career choice, and promotion, but these types of projects are relatively uncommon. The impact of education on patient clinical outcomes should be studied but is difficult to assess, both because of the distance between the education received and clinical practice and because of confounding factors that affect medical practitioners. The behavior of patients as a result of an educational intervention could also be studied, such as the impact of patient education efforts on the course of chronic illness, but this type of study is rare. Similarly, cost-effectiveness studies of education and teaching are difficult to carry out because of the difficulty of quantifying the costs and effectiveness of teaching and educational interventions
(69). A conceptual framework is a way of thinking about and
framing the research question for a study, representing how
educational theories, models, and systems work. The framework used to guide a study will determine which research aspects to focus on. Well-designed studies will pose the research question in the context of the conceptual framework being used, such as established and validated educational theories, models, and evidence-based medical education guidelines (70).
Bordage et al. found that the main weaknesses of medical
educational research were a sample size too small or biased;
data instrument either inappropriate or suboptimal or insuffi-
ciently described; insufficient data presented; inaccurate or
inconsistent data reported; defective tables or figures;
text too difficult to follow or understand; insufficient or
incomplete problem statement; statistical analysis that was inappropriate, incomplete, or insufficiently described; overinterpretation of results; and a review of the literature that was inadequate, incomplete, inaccurate, or outdated (71).
Conversely, good-quality medical educational research had a
well-designed study; a well-written manuscript; practical,
useful implications; a sample size that was sufficiently large
to detect an effect; a well-stated and -formulated problem; a
research question that is novel, important and/or timely, and
relevant and/or critical; a review of the literature that is
thoughtful and/or focused and/or up to date; interpretation
of the results that took into account the limitations of the
study; and/or had a unique approach to data analysis (71).
There are no specific EQUATOR network recommenda-
tions for reporting medical educational research studies. Seven
domains for improving the reporting of methods and results in
educational clinical trials have been proposed by Stiles et al.
(72). These include the introduction and background;
outcome measures; sample selection; interventions; statistical
plan; adverse events; and results (72).
Qualitative research interviews and focus groups can be a
part of medical education research. Qualitative research ex-
plores complex phenomena encountered by clinicians, health
care providers, policy makers, and consumers. A checklist for the explicit and comprehensive reporting of qualitative studies (in-depth interviews and focus groups) has been developed: the 32-item COREQ checklist (Appendix Table 5) (73). The checklist items
are grouped into three domains: research team and reflexivity;
study design; and data analysis and reporting. This can be
found at the EQUATOR network. The advantage of COREQ is that it can help researchers to report important aspects of the research team, study methods, context of the study, findings, analysis, and interpretations. It can also enable readers to identify poorly designed studies and inadequate reporting, which can lead to inappropriate application of qualitative research in decision making, health care, health policy, and future research (73).
In addition, there is a guideline for reporting syntheses of multiple qualitative studies. Such syntheses can pull together data across
different contexts, generate new theoretical or conceptual
models, identify research gaps, and provide evidence for the
development, implementation, and evaluation of health interventions. This guideline, ENTREQ, is a 21-item checklist (Appendix Table 6), also available at the EQUATOR network (74).
CONCLUSIONS
The reporting of diagnostic test accuracy studies, screening
studies, therapeutic studies, systematic reviews and meta-
analyses, cost-effectiveness studies, recommendations and/or
guidelines, and medical education studies is discussed in this
article. The available guidelines, which can be found at the
EQUATOR network, are summarized in Table 5. We also
hope that this article can be used in academic programs to educate faculty and trainees about the resources available at the EQUATOR network to improve our health research.
REFERENCES
1. The EQUATOR Network website. http://www.equator-network.org/.
Accessed December 21, 2013.
2. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and
elaboration: updated guidelines for reporting parallel group randomised
trials. BMJ 2010; 340:c869.
Academic Radiology, Vol 21, No 9, September 2014 HOW TO REPORT A RESEARCH STUDY
3. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and
elaboration: updated guidelines for reporting parallel group randomised
trials. J Clin Epidemiol 2010; 63(8):e1–37.
4. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and
elaboration: updated guidelines for reporting parallel group randomised
trials. Int J Surg 2012; 10(1):28–55.
5. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. BMJ 2010;
340:c332.
6. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. Trials 2010; 11:32.
7. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. BMC Med
2010; 8:18.
8. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomized trials. Ann Intern
Med 2010; 152(11):726–732.
9. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. J Clin Epidemiol
2010; 63(8):834–840.
10. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomized trials. Obstet Gynecol
2010; 115(5):1063–1070.
11. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. Int J Surg
2011; 9(8):672–677.
12. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated
guidelines for reporting parallel group randomised trials. PLoS Med
2010; 7(3):e1000251.
13. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: The STARD Initiative.
Radiology 2003; 226(1):24–28.
14. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: the STARD initiative.
BMJ 2003; 326(7379):41–44.
15. Bossuyt PM, Reitsma JB, Bruns DE, et al. Toward complete and accurate
reporting of studies of diagnostic accuracy. The STARD initiative. Am J
Clin Pathol 2003; 119(1):18–22.
16. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: the STARD initiative. Clin
Biochem 2003; 36(1):2–7.
17. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: the STARD initiative. Clin
Chem Lab Med 2003; 41(1):68–73.
18. Bossuyt PM, Reitsma JB, Bruns DE, et al. [Reporting studies of diag-
nostic accuracy according to a standard method; the Standards for Re-
porting of Diagnostic Accuracy (STARD)]. Ned Tijdschr Geneeskd 2003;
147(8):336–340.
19. Bossuyt PM, Reitsma JB, Bruns DE, et al. Toward complete and accurate
reporting of studies of diagnostic accuracy: the STARD initiative. Acad
Radiol 2003; 10(6):664–669.
20. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: the STARD initiative.
AJR Am J Roentgenol 2003; 181(1):51–55.
21. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: the STARD initiative. Ann
Clin Biochem 2003; 40(Pt 4):357–363.
22. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: the STARD initiative. Clin
Radiol 2003; 58(8):575–580.
23. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: the STARD initiative. The
Standards for Reporting of Diagnostic Accuracy Group. Croat Med J
2003; 44(5):635–638.
24. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: the STARD initiative.
Fam Pract 2004; 21(1):4–10.
25. Bossuyt PM, Reitsma JB, Bruns DE, et al. The STARD statement for re-
porting studies of diagnostic accuracy: explanation and elaboration.
Clin Chem 2003; 49(1):7–18.
26. Bossuyt PM, Reitsma JB, Bruns DE, et al. The STARD statement for re-
porting studies of diagnostic accuracy: explanation and elaboration.
Ann Intern Med 2003; 138(1):W1–W12.
27. Bossuyt PM, Reitsma JB, Bruns DE, et al. Towards complete and accu-
rate reporting of studies of diagnostic accuracy: The STARD Initiative.
Ann Intern Med 2003; 138(1):40–44.
28. Pai M, Sharma S. Better reporting of studies of diagnostic accuracy. In-
dian J Med Microbiol 2005; 23(4):210–213.
29. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for report-
ing systematic reviews and meta-analyses of studies that evaluate health
care interventions: explanation and elaboration. PLoS Med 2009; 6(7):
e1000100.
30. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. Eur J
Health Econ 2013; 14(3):367–372.
31. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. Value
Health 2013; 16(2):e1–e5.
32. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. Clin Ther
2013; 35(4):356–363.
33. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. Cost Eff
Resour Alloc 2013; 11(1):6.
34. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. BMC Med
2013; 11:80.
35. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. BMJ
2013; 346:f1049.
36. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. Pharma-
coeconomics 2013; 31(5):361–367.
37. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. J Med
Econ 2013; 16(6):713–719.
38. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. Int J Tech-
nol Assess Health Care 2013; 29(2):117–122.
39. Husereau D, Drummond M, Petrou S, et al. Consolidated Health Eco-
nomic Evaluation Reporting Standards (CHEERS) statement. BJOG
2013; 120(6):765–770.
40. Stein PD, Fowler SE, Goodman LR, et al. Multidetector computed tomog-
raphy for acute pulmonary embolism. N Engl J Med 2006; 354(22):
2317–2327.
41. The STARD website. http://www.stard-statement.org/. Accessed
February 7, 2014.
42. Johnston KC, Holloway RG. There is nothing staid about STARD: prog-
ress in the reporting of diagnostic accuracy studies. Neurology 2006;
67(5):740–741.
43. The CONSORT website. http://www.consort-statement.org/. Accessed
February 7, 2014.
44. Moher D, Jones A, Lepage L, et al. Use of the CONSORT statement and
quality of reports of randomized trials: a comparative before-and-after
evaluation. JAMA 2001; 285(15):1992–1995.
45. Reeves BC, Gaus W. Guidelines for reporting non-randomised studies.
Forsch Komplementarmed Klass Naturheilkd 2004; 11(Suppl 1):46–52.
46. Boutron I, Moher D, Altman DG, et al. Extending the CONSORT state-
ment to randomized trials of nonpharmacologic treatment: explanation
and elaboration. Ann Intern Med 2008; 148(4):295–309.
47. Reeves BC. A framework for classifying study designs to evaluate health
care interventions. Forsch Komplementarmed Klass Naturheilkd 2004;
11(Suppl 1):13–17.
48. Salem R, Lewandowski RJ, Gates VL, et al. Research reporting standards
for radioembolization of hepatic malignancies. J Vasc Interv Radiol 2011;
22(3):265–278.
49. Kallmes DF, Comstock BA, Heagerty PJ, et al. A randomized trial of ver-
tebroplasty for osteoporotic spinal fractures. N Engl J Med 2009; 361(6):
569–579.
50. Jacquier I, Boutron I, Moher D, et al. The reporting of randomized clinical
trials using a surgical intervention is in need of immediate improvement: a
systematic review. Ann Surg 2006; 244(5):677–683.
51. Campbell MK, Piaggio G, Elbourne DR, et al. Consort 2010 statement:
extension to cluster randomised trials. BMJ 2012; 345:e5661.
52. Davey J, Turner RM, Clarke MJ, et al. Characteristics of meta-analyses
and their component studies in the Cochrane database of systematic
reviews: a cross-sectional, descriptive analysis. BMC Med Res Methodol
2011; 11:160.
53. Oxman AD, Cook DJ, Guyatt GH. Users’ guides to the medical literature.
VI. How to use an overview. Evidence-Based Medicine Working Group.
JAMA 1994; 272(17):1367–1371.
54. Swingler GH, Volmink J, Ioannidis JP. Number of published systematic
reviews and global burden of disease: database analysis. BMJ 2003;
327(7423):1083–1084.
55. Moher D, Cook DJ, Eastwood S, et al. Improving the quality of reports of
meta-analyses of randomised controlled trials: the QUOROM statement.
Quality of reporting of meta-analyses. Lancet 1999; 354(9193):1896–1900.
56. Green S, Higgins J, eds. Glossary. Cochrane handbook for systematic reviews of interventions 4.2.5. The Cochrane Collaboration. Available at: http://www.cochrane.org/resources/glossary.htm; 2005. Accessed February 21, 2014.
57. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for system-
atic reviews and meta-analyses: the PRISMA statement. BMJ 2009; 339:
b2535.
58. Otero HJ, Rybicki FJ, Greenberg D, et al. Twenty years of cost-
effectiveness analysis in medical imaging: are we improving? Radiology
2008; 249(3):917–925.
59. Cost-effectiveness analysis registry. https://research.tufts-nemc.org/cear4/. Accessed March 2, 2014.
60. Elixhauser A, Luce BR, Taylor WR, et al. Health care CBA/CEA: an update on the growth and composition of the literature. Med Care 1993; 31(7 Suppl):JS1–11, JS8–149.
61. Siegel JE, Weinstein MC, Russell LB, et al. Recommendations for report-
ing cost-effectiveness analyses. Panel on Cost-Effectiveness in Health
and Medicine. JAMA 1996; 276(16):1339–1341.
62. Ramsey S, Willke R, Briggs A, et al. Good research practices for cost-
effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA
Task Force report. Value Health 2005; 8(5):521–533.
63. Institute of Medicine website. http://www.nap.edu/catalog.php?record_id=13058. Accessed February 7, 2014.
64. National Guidelines Clearinghouse. http://www.guideline.gov. Accessed
May 3, 2010.
65. Yarris LM, Deiorio NM. Education research: a primer for educators in
emergency medicine. Acad Emerg Med 2011; 18(Suppl 2):S27–S35.
66. Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical
education. Acad Med 2004; 79(10):955–960.
67. Carney PA, Nierenberg DW, Pipas CF, et al. Educational epidemiology:
applying population-based design and analytic approaches to study
medical education. JAMA 2004; 292(9):1044–1050.
68. Lynch DC, Whitley TW, Willis SE. A rationale for using synthetic designs in
medical education research. Adv Health Sci Educ Theory Pract 2000;
5(2):93–103.
69. Prystowsky JB, Bordage G. An outcomes research perspective on med-
ical education: the predominance of trainee assessment and satisfaction.
Med Educ 2001; 35(4):331–336.
70. Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ
2009; 43(4):312–319.
71. Bordage G. Reasons reviewers reject and accept manuscripts: the
strengths and weaknesses in medical education reports. Acad Med
2001; 76(9):889–896.
72. Stiles CR, Biondo PD, Cummings G, et al. Clinical trials focusing on can-
cer pain educational interventions: core components to include during
planning and reporting. J Pain Symptom Manage 2010; 40(2):301–308.
73. Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualita-
tive research (COREQ): a 32-item checklist for interviews and focus
groups. Int J Qual Health Care 2007; 19(6):349–357.
74. Tong A, Flemming K, McInnes E, et al. Enhancing transparency in report-
ing the synthesis of qualitative research: ENTREQ. BMC Med Res Meth-
odol 2012; 12:181.
75. Boutron I, Moher D, Altman DG, et al. Methods and processes of the
CONSORT Group: example of an extension for trials assessing nonphar-
macologic treatments. Ann Intern Med 2008; 148(4):W60–W66.
76. Hopewell S, Clarke M, Moher D, et al. CONSORT for reporting rando-
mised trials in journal and conference abstracts. Lancet 2008;
371(9609):281–283.
77. Zwarenstein M, Treweek S, Gagnier JJ, et al. Improving the reporting of
pragmatic trials: an extension of the CONSORT statement. BMJ 2008;
337:a2390.
78. Ioannidis JP, Evans SJ, Gotzsche PC, et al. Better reporting of harms in
randomized trials: an extension of the CONSORT statement. Ann Intern
Med 2004; 141(10):781–788.
79. Calvert M, Blazeby J, Altman DG, et al. Reporting of patient-reported out-
comes in randomized trials: the CONSORT PRO extension. JAMA 2013;
309(8):814–822.
80. Piaggio G, Elbourne DR, Pocock SJ, et al. Reporting of noninferiority and
equivalence randomized trials: extension of the CONSORT 2010 state-
ment. JAMA 2012; 308(24):2594–2604.
81. Chan AW, Tetzlaff JM, Altman DG, et al. SPIRIT 2013 statement: defining
standard protocol items for clinical trials. Ann Intern Med 2013; 158(3):
200–207.
82. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for system-
atic reviews and meta-analyses: the PRISMA statement. PLoS Med
2009; 6(7):e1000097.
83. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for system-
atic reviews and meta-analyses: the PRISMA statement. Ann Intern Med
2009; 151(4):264–269, W64.
84. Moher D, Liberati A, Tetzlaff J, et al. Preferred reporting items for system-
atic reviews and meta-analyses: the PRISMA statement. J Clin Epidemiol
2009; 62(10):1006–1012.
85. Beller EM, Glasziou PP, Altman DG, et al. PRISMA for abstracts: report-
ing systematic reviews in journal and conference abstracts. PLoS Med
2013; 10(4):e1001419.
86. Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual partici-
pant data: rationale, conduct, and reporting. BMJ 2010; 340:c221.
87. Schriger DL. Suggestions for improving the reporting of clinical research:
the role of narrative. Ann Emerg Med 2005; 45(4):437–443.
88. Gagnier JJ, Kienle G, Altman DG, et al. The CARE guidelines: consensus-
based clinical case reporting guideline development. BMJ Case Rep
2013; 2013.
89. Gagnier JJ, Kienle G, Altman DG, et al. The CARE guidelines: consensus-
based clinical case report guideline development. J Clin Epidemiol 2014;
67(1):46–51.
90. Gagnier JJ, Kienle G, Altman DG, et al. The CARE guidelines: consensus-
based clinical case reporting guideline development. J Med Case Rep
2013; 7(1):223.
91. Gagnier JJ, Kienle G, Altman DG, et al. The CARE guidelines: consensus-
based clinical case report guideline development. J Diet Suppl 2013;
10(4):381–390.
92. Kottner J, Audige L, Brorson S, et al. Guidelines for Reporting Reliability
and Agreement Studies (GRRAS) were proposed. J Clin Epidemiol 2011;
64(1):96–106.
93. Kottner J, Audige L, Brorson S, et al. Guidelines for Reporting Reliability
and Agreement Studies (GRRAS) were proposed. Int J Nurs Stud 2011;
48(6):661–671.
94. O’Cathain A, Murphy E, Nicholl J. The quality of mixed methods studies in
health services research. J Health Serv Res Policy 2008; 13(2):92–98.
95. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for quality
improvement in health care: evolution of the SQUIRE project. Qual Saf
Health Care 2008; 17(Suppl 1):i3–i9.
96. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for quality
improvement studies in health care: evolution of the SQUIRE project.
BMJ 2009; 338:a3152.
97. Davidoff F, Batalden PB, Stevens DP, et al. Development of the SQUIRE
Publication Guidelines: evolution of the SQUIRE project. Jt Comm J Qual
Patient Saf 2008; 34(11):681–687.
98. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for
improvement studies in health care: evolution of the SQUIRE Project.
Ann Intern Med 2008; 149(9):670–676.
99. Davidoff F, Batalden P, Stevens D, et al. Publication guidelines for quality
improvement studies in health care: evolution of the SQUIRE project. J
Gen Intern Med 2008; 23(12):2125–2130.
100. Talmon J, Ammenwerth E, Brender J, et al. STARE-HI–statement on re-
porting of evaluation studies in health informatics. Int J Med Inform
2009; 78(1):1–9.
APPENDIX TABLE 1. STARD checklist for reporting of studies of diagnostic accuracy (13–28).
Section and Topic Item# On Page#
Title/Abstract/Keywords 1 Identify the article as a study of diagnostic accuracy (recommend MeSH heading
‘sensitivity and specificity’).
Introduction 2 State the research questions or study aims, such as estimating diagnostic accuracy or
comparing accuracy between tests or across participant groups.
Methods
Participants 3 The study population: The inclusion and exclusion criteria, setting and locations where
data were collected.
4 Participant recruitment: Was recruitment based on presenting symptoms, results from
previous tests, or the fact that the participants had received the index tests or the
reference standard?
5 Participant sampling: Was the study population a consecutive series of participants
defined by the selection criteria in item 3 and 4? If not, specify how participants were
further selected.
6 Data collection: Was data collection planned before the index test and reference standard
were performed (prospective study) or after (retrospective study)?
Test methods 7 The reference standard and its rationale.
8 Technical specifications of material and methods involved including how and when
measurements were taken, and/or cite references for index tests and reference
standard.
9 Definition of and rationale for the units, cut-offs and/or categories of the results of the
index tests and the reference standard.
10 The number, training and expertise of the persons executing and reading the index tests
and the reference standard.
11 Whether or not the readers of the index tests and reference standard were blind (masked)
to the results of the other test and describe any other clinical information available to the
readers.
Statistical methods 12 Methods for calculating or comparing measures of diagnostic accuracy, and the statistical
methods used to quantify uncertainty (e.g. 95% confidence intervals).
13 Methods for calculating test reproducibility, if done.
Results
Participants 14 When study was performed, including beginning and end dates of recruitment.
15 Clinical and demographic characteristics of the study population (at least information on
age, gender, spectrum of presenting symptoms).
16 The number of participants satisfying the criteria for inclusion who did or did not undergo
the index tests and/or the reference standard; describe why participants failed to
undergo either test (a flow diagram is strongly recommended).
Test results 17 Time-interval between the index tests and the reference standard, and any treatment
administered in between.
18 Distribution of severity of disease (define criteria) in those with the target condition; other
diagnoses in participants without the target condition.
19 A cross tabulation of the results of the index tests (including indeterminate and missing
results) by the results of the reference standard; for continuous results, the distribution
of the test results by the results of the reference standard.
20 Any adverse events from performing the index tests or the reference standard.
Estimates 21 Estimates of diagnostic accuracy and measures of statistical uncertainty (e.g. 95%
confidence intervals).
22 How indeterminate results, missing data and outliers of the index tests were handled.
23 Estimates of variability of diagnostic accuracy between subgroups of participants,
readers or centers, if done.
24 Estimates of test reproducibility, if done.
Discussion 25 Discuss the clinical applicability of the study findings.
STARD, standards for reporting of diagnostic accuracy.
APPENDIX TABLE 2. CONSORT 2010 checklist of information to include when reporting a randomized trial (2–12).
Section/Topic    Item No    Checklist Item    Reported on Page No
Title and abstract
1a Identification as a randomised trial in the title
1b Structured summary of trial design, methods, results, and conclusions
Introduction
Background and objectives 2a Scientific background and explanation of rationale
2b Specific objectives or hypotheses
Methods
Trial design 3a Description of trial design (such as parallel, factorial) including allocation ratio
3b Important changes to methods after trial commencement (such as eligibility criteria), with reasons
Participants 4a Eligibility criteria for participants
4b Settings and locations where the data were collected
Interventions 5 The interventions for each group with sufficient details to allow replication, including how and when they were actually
administered
Outcomes 6a Completely defined pre-specified primary and secondary outcome measures, including how and when they were assessed
6b Any changes to trial outcomes after the trial commenced, with reasons
Sample size 7a How sample size was determined
7b When applicable, explanation of any interim analyses and stopping guidelines
Randomisation:
Sequence generation 8a Method used to generate the random allocation sequence
8b Type of randomisation; details of any restriction (such as blocking and block size)
Allocation concealment
mechanism
9 Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps
taken to conceal the sequence until interventions were assigned
Implementation 10 Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions
Blinding 11a If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes)
and how
11b If relevant, description of the similarity of interventions
Statistical methods 12a Statistical methods used to compare groups for primary and secondary outcomes
12b Methods for additional analyses, such as subgroup analyses and adjusted analyses
Results
Participant flow (a diagram
is strongly recommended)
13a For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the
primary outcome
13b For each group, losses and exclusions after randomisation, together with reasons
Recruitment 14a Dates defining the periods of recruitment and follow-up
14b Why the trial ended or was stopped
Baseline data 15 A table showing baseline demographic and clinical characteristics for each group
Numbers analysed 16 For each group, number of participants (denominator) included in each analysis and whether the analysis was by original assigned
groups
Outcomes and estimation 17a For each primary and secondary outcome, results for each group, and the estimated effect size and its precision (such as 95%
confidence interval)
17b For binary outcomes, presentation of both absolute and relative effect sizes is recommended
Ancillary analyses 18 Results of any other analyses performed, including subgroup analyses and adjusted analyses, distinguishing pre-specified from
exploratory
Harms 19 All important harms or unintended effects in each group (for specific guidance see CONSORT for harms [28])
Discussion
Limitations 20 Trial limitations, addressing sources of potential bias, imprecision, and, if relevant, multiplicity of analyses
Generalisability 21 Generalisability (external validity, applicability) of the trial findings
Interpretation 22 Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence
Other information
Registration 23 Registration number and name of trial registry
Protocol 24 Where the full trial protocol can be accessed, if available
Funding 25 Sources of funding and other support (such as supply of drugs), role of funders
CONSORT 2010 flow diagram
APPENDIX TABLE 3. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist (29).
Section/topic    #    Checklist Item    Reported on Page#
Title
Title 1 Identify the report as a systematic review, meta-analysis, or both.
Abstract
Structured summary 2 Provide a structured summary including, as applicable: background; objectives; data sources; study eligibility criteria,
participants, and interventions; study appraisal and synthesis methods; results; limitations; conclusions and implications
of key findings; systematic review registration number.
Introduction
Rationale 3 Describe the rationale for the review in the context of what is already known.
Objectives 4 Provide an explicit statement of questions being addressed with reference to participants, interventions, comparisons,
outcomes, and study design (PICOS).
Methods
Protocol and registration 5 Indicate if a review protocol exists, if and where it can be accessed (e.g., Web address), and, if available, provide registration
information including registration number.
Eligibility criteria 6 Specify study characteristics (e.g., PICOS, length of follow-up) and report characteristics (e.g., years considered, language,
publication status) used as criteria for eligibility, giving rationale.
Information sources 7 Describe all information sources (e.g., databases with dates of coverage, contact with study authors to identify additional
studies) in the search and date last searched.
Search 8 Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated.
Study selection 9 State the process for selecting studies (i.e., screening, eligibility, included in systematic review, and, if applicable, included
in the meta-analysis).
Data collection process 10 Describe method of data extraction from reports (e.g., piloted forms, independently, in duplicate) and any processes for
obtaining and confirming data from investigators.
Data items 11 List and define all variables for which data were sought (e.g., PICOS, funding sources) and any assumptions and
simplifications made.
Risk of bias in individual studies 12 Describe methods used for assessing risk of bias of individual studies (including specification of whether this was done at
the study or outcome level), and how this information is to be used in any data synthesis.
Summary measures 13 State the principal summary measures (e.g., risk ratio, difference in means).
Synthesis of results 14 Describe the methods of handling data and combining results of studies, if done, including measures of consistency (e.g., I²)
for each meta-analysis.
Risk of bias across studies 15 Specify any assessment of risk of bias that may affect the cumulative evidence (e.g., publication bias, selective reporting
within studies).
Additional analyses 16 Describe methods of additional analyses (e.g., sensitivity or subgroup analyses, meta-regression), if done, indicating which
were pre-specified.
Results
Study selection 17 Give numbers of studies screened, assessed for eligibility, and included in the review, with reasons for exclusions at each
stage, ideally with a flow diagram.
Study characteristics 18 For each study, present characteristics for which data were extracted (e.g., study size, PICOS, follow-up period) and
provide the citations.
Risk of bias within studies 19 Present data on risk of bias of each study and, if available, any outcome level assessment (see item 12).
Results of individual studies 20 For all outcomes considered (benefits or harms), present, for each study: (a) simple summary data for each intervention group; (b) effect estimates and confidence intervals, ideally with a forest plot.
Synthesis of results 21 Present results of each meta-analysis done, including confidence intervals and measures of consistency.
Risk of bias across studies 22 Present results of any assessment of risk of bias across studies (see Item 15).
Additional analysis 23 Give results of additional analyses, if done (e.g., sensitivity or subgroup analyses, meta-regression [see Item 16]).
Discussion
Summary of evidence 24 Summarize the main findings including the strength of evidence for each main outcome; consider their relevance to key
groups (e.g., healthcare providers, users, and policy makers).
Limitations 25 Discuss limitations at study and outcome level (e.g., risk of bias), and at review-level (e.g., incomplete retrieval of identified
research, reporting bias).
Conclusions 26 Provide a general interpretation of the results in the context of other evidence, and implications for future research.
Funding
Funding 27 Describe sources of funding for the systematic review and other support (e.g., supply of data); role of funders for the
systematic review.
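PRISMA items 13, 14, and 20 ask authors to report the summary measure used, a consistency measure such as I² for each meta-analysis, and per-study effect estimates with confidence intervals. As a rough illustration only (not part of the PRISMA statement itself), the sketch below pools hypothetical risk ratios with inverse-variance fixed-effect weighting and computes Cochran's Q and I²; all counts, and the choice of a fixed-effect model, are assumptions made for this example.

```python
import math

# Hypothetical 2x2 counts per study: events/total in treatment (et/nt) and control (ec/nc).
studies = [
    {"et": 12, "nt": 100, "ec": 20, "nc": 100},
    {"et": 30, "nt": 250, "ec": 45, "nc": 240},
    {"et": 8,  "nt": 80,  "ec": 15, "nc": 85},
]

def log_rr_and_var(s):
    """Log risk ratio and its large-sample variance for one study."""
    rr = (s["et"] / s["nt"]) / (s["ec"] / s["nc"])
    var = 1 / s["et"] - 1 / s["nt"] + 1 / s["ec"] - 1 / s["nc"]
    return math.log(rr), var

# Inverse-variance fixed-effect pooling of the log risk ratios.
logs, variances = zip(*(log_rr_and_var(s) for s in studies))
weights = [1 / v for v in variances]
pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
se = math.sqrt(1 / sum(weights))

# Cochran's Q and the I-squared consistency measure (PRISMA item 14).
q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
df = len(studies) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled RR: {math.exp(pooled):.2f}")
print(f"95% CI: {math.exp(pooled - 1.96 * se):.2f} to {math.exp(pooled + 1.96 * se):.2f}")
print(f"I-squared: {i2:.1f}%")
```

The same per-study log risk ratios and confidence intervals are what a forest plot (PRISMA item 20) would display, one row per study plus the pooled estimate.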
APPENDIX TABLE 4. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) checklist. Items to include when reporting economic evaluations of health interventions (30–39).
Section/Item | Item No. | Recommendation | Reported on Page No./Line No.
Title and abstract
Title 1 Identify the study as an economic evaluation or use more specific terms such as "cost-effectiveness analysis", and
describe the interventions compared.
Abstract 2 Provide a structured summary of objectives, perspective, setting, methods (including study design and inputs), results
(including base case and uncertainty analyses), and conclusions.
Introduction
Background and objectives 3 Provide an explicit statement of the broader context for the study.
Present the study question and its relevance for health policy or practice decisions.
Methods
Target population and subgroups 4 Describe characteristics of the base case population and subgroups analysed, including why they were chosen.
Setting and location 5 State relevant aspects of the system(s) in which the decision(s) need(s) to be made.
Study perspective 6 Describe the perspective of the study and relate this to the costs being evaluated.
Comparators 7 Describe the interventions or strategies being compared and state why they were chosen.
Time horizon 8 State the time horizon(s) over which costs and consequences are being evaluated and say why appropriate.
Discount rate 9 Report the choice of discount rate(s) used for costs and outcomes and say why appropriate.
Choice of health outcomes 10 Describe what outcomes were used as the measure(s) of benefit in the evaluation and their relevance for the type of analysis
performed.
Measurement of effectiveness 11a Single study-based estimates: Describe fully the design features of the single effectiveness study and why the single study
was a sufficient source of clinical effectiveness data.
11b Synthesis-based estimates: Describe fully the methods used for identification of included studies and synthesis of clinical
effectiveness data.
Measurement and valuation of preference-based outcomes 12 If applicable, describe the population and methods used to elicit preferences for outcomes.
Estimating resources and costs 13a Single study-based economic evaluation: Describe approaches used to estimate resource use associated with the
alternative interventions. Describe primary or secondary research methods for valuing each resource item in terms of its
unit cost. Describe any adjustments made to approximate to opportunity costs.
13b Model-based economic evaluation: Describe approaches and data sources used to estimate resource use associated with
model health states. Describe primary or secondary research methods for valuing each resource item in terms of its unit
cost. Describe any adjustments made to approximate to opportunity costs.
Currency, price date, and conversion 14 Report the dates of the estimated resource quantities and unit costs. Describe methods for adjusting estimated unit costs to
the year of reported costs if necessary. Describe methods for converting costs into a common currency base and the
exchange rate.
Choice of model 15 Describe and give reasons for the specific type of decision-analytical model used. Providing a figure to show model
structure is strongly recommended.
Assumptions 16 Describe all structural or other assumptions underpinning the decision-analytical model.
Analytical methods 17 Describe all analytical methods supporting the evaluation. This could include methods for dealing with skewed, missing, or
censored data; extrapolation methods; methods for pooling data; approaches to validate or make adjustments (such as
half cycle corrections) to a model; and methods for handling population heterogeneity and uncertainty.
Results
Study parameters 18 Report the values, ranges, references, and, if used, probability distributions for all parameters. Report reasons or sources
for distributions used to represent uncertainty where appropriate.
Providing a table to show the input values is strongly recommended.
Incremental costs and outcomes 19 For each intervention, report mean values for the main categories of estimated costs and outcomes of interest, as well as
mean differences between the comparator groups. If applicable, report incremental cost-effectiveness ratios.
Characterising uncertainty 20a Single study-based economic evaluation: Describe the effects of sampling uncertainty for the estimated incremental cost
and incremental effectiveness parameters, together with the impact of methodological assumptions (such as discount
rate, study perspective).
20b Model-based economic evaluation: Describe the effects on the results of uncertainty for all input parameters, and
uncertainty related to the structure of the model and assumptions.
Characterising heterogeneity 21 If applicable, report differences in costs, outcomes, or cost-effectiveness that can be explained by variations between
subgroups of patients with different baseline characteristics or other observed variability in effects that are not reducible
by more information.
Discussion
Study findings, limitations, generalisability, and current knowledge 22 Summarise key study findings and describe how they support the conclusions reached. Discuss limitations and the
generalisability of the findings and how the findings fit with current knowledge.
Other
Source of funding 23 Describe how the study was funded and the role of the funder in the identification, design, conduct, and reporting of the
analysis. Describe other non-monetary sources of support.
Conflicts of interest 24 Describe any potential for conflict of interest of study contributors in accordance with journal policy. In the absence of a
journal policy, we recommend authors comply with International Committee of Medical Journal Editors
recommendations.
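CHEERS items 9 and 19 ask authors to report the discount rate applied to costs and outcomes and, where applicable, incremental cost-effectiveness ratios. As a minimal sketch outside the checklist itself, the example below discounts hypothetical yearly cost and QALY streams for two strategies and computes the ICER; every number, including the 3% annual rate, is an illustrative assumption rather than a recommended value.

```python
# Discount yearly cost and QALY streams (CHEERS item 9) and compute the
# incremental cost-effectiveness ratio (CHEERS item 19).
# All inputs are hypothetical, for illustration only.

def present_value(stream, rate):
    """Discount a yearly stream; year 0 is undiscounted."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

RATE = 0.03  # assumed annual discount rate for this example

# Hypothetical per-year costs (USD) and QALYs for a new strategy vs. usual care.
costs_new, qalys_new = [12000, 2000, 2000], [0.85, 0.84, 0.83]
costs_old, qalys_old = [5000, 1500, 1500], [0.80, 0.78, 0.76]

delta_cost = present_value(costs_new, RATE) - present_value(costs_old, RATE)
delta_qaly = present_value(qalys_new, RATE) - present_value(qalys_old, RATE)
icer = delta_cost / delta_qaly  # incremental cost per QALY gained

print(f"incremental cost: ${delta_cost:,.0f}")
print(f"incremental QALYs: {delta_qaly:.3f}")
print(f"ICER: ${icer:,.0f} per QALY gained")
```

Reporting the discounted streams and both numerator and denominator of the ICER, as the checklist requires, lets readers re-derive the ratio under a different discount rate or time horizon.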
APPENDIX TABLE 5. Consolidated criteria for reporting qualitative studies (COREQ): 32-item checklist (73).
No. Item Guide Questions/Description
Domain 1: Research team and reflexivity
Personal characteristics
1. Interviewer/facilitator Which author/s conducted the interview or focus group?
2. Credentials What were the researcher’s credentials? e.g. PhD, MD
3. Occupation What was their occupation at the time of the study?
4. Gender Was the researcher male or female?
5. Experience and training What experience or training did the researcher have?
Relationship with participants
6. Relationship established Was a relationship established prior to study commencement?
7. Participant knowledge of the interviewer What did the participants know about the researcher? e.g. personal goals, reasons
for doing the research
8. Interviewer characteristics What characteristics were reported about the interviewer/facilitator? e.g. Bias,
assumptions, reasons and interests in the research topic
Domain 2: Study design
Theoretical framework
9. Methodological orientation and Theory What methodological orientation was stated to underpin the study? e.g. grounded
theory, discourse analysis, ethnography, phenomenology, content analysis
Participant selection
10. Sampling How were participants selected? e.g. purposive, convenience, consecutive,
snowball
11. Method of approach How were participants approached? e.g. face-to-face, telephone, mail, email
12. Sample size How many participants were in the study?
13. Non-participation How many people refused to participate or dropped out? Reasons?
Setting
14. Setting of data collection Where was the data collected? e.g. home, clinic, workplace
15. Presence of non-participants Was anyone else present besides the participants and researchers?
16. Description of sample What are the important characteristics of the sample? e.g. demographic data, date
Data collection
17. Interview guide Were questions, prompts, guides provided by the authors? Was it pilot tested?
18. Repeat interviews Were repeat interviews carried out? If yes, how many?
19. Audio/visual recording Did the research use audio or visual recording to collect the data?
20. Field notes Were field notes made during and/or after the interview or focus group?
21. Duration What was the duration of the interviews or focus group?
22. Data saturation Was data saturation discussed?
23. Transcripts returned Were transcripts returned to participants for comment and/or correction?
Domain 3: Analysis and findings
Data analysis
24. Number of data coders How many data coders coded the data?
25. Description of the coding tree Did authors provide a description of the coding tree?
26. Derivation of themes Were themes identified in advance or derived from the data?
27. Software What software, if applicable, was used to manage the data?
28. Participant checking Did participants provide feedback on the findings?
Reporting
29. Quotations presented Were participant quotations presented to illustrate the themes/findings? Was each
quotation identified? e.g. participant number
30. Data and findings consistent Was there consistency between the data presented and the findings?
31. Clarity of major themes Were major themes clearly presented in the findings?
32. Clarity of minor themes Is there a description of diverse cases or discussion of minor themes?
APPENDIX TABLE 6. Enhancing transparency in reporting the synthesis of qualitative research: the ENTREQ statement (74)
No. Item Guide and Description
1 Aim State the research question the synthesis addresses.
2 Synthesis methodology Identify the synthesis methodology or theoretical framework which underpins the synthesis, and
describe the rationale for choice of methodology (e.g. meta-ethnography, thematic synthesis,
critical interpretive synthesis, grounded theory synthesis, realist synthesis, meta-aggregation,
meta-study, framework synthesis).
3 Approach to searching Indicate whether the search was pre-planned (comprehensive search strategies to seek all available
studies) or iterative (searching until theoretical saturation is achieved).
4 Inclusion criteria Specify the inclusion/exclusion criteria (e.g. in terms of population, language, year limits, type of
publication, study type).
5 Data sources Describe the information sources used (e.g. electronic databases (MEDLINE, EMBASE, CINAHL,
psycINFO, Econlit), grey literature databases (digital thesis, policy reports), relevant organisational
websites, experts, information specialists, generic web searches (Google Scholar) hand searching,
reference lists) and when the searches were conducted; provide the rationale for using the data sources.
6 Electronic search strategy Describe the literature search (e.g. provide electronic search strategies with population terms, clinical
or health topic terms, experiential or social phenomena related terms, filters for qualitative research,
and search limits).
7 Study screening methods Describe the process of study screening and sifting (e.g. title, abstract and full text review, number of
independent reviewers who screened studies).
8 Study characteristics Present the characteristics of the included studies (e.g. year of publication, country, population,
number of participants, data collection, methodology, analysis, research questions).
9 Study selection results Identify the number of studies screened and provide reasons for study exclusion (e.g., for
comprehensive searching, provide numbers of studies screened and reasons for exclusion
indicated in a figure/flowchart; for iterative searching describe reasons for study exclusion and
inclusion based on modifications to the research question and/or contribution to theory
development).
10 Rationale for appraisal Describe the rationale and approach used to appraise the included studies or selected findings (e.g.
assessment of conduct (validity and robustness), assessment of reporting (transparency),
assessment of content and utility of the findings).
11 Appraisal items State the tools, frameworks and criteria used to appraise the studies or selected findings (e.g. Existing
tools: CASP, QARI, COREQ, Mays and Pope; reviewer developed tools; describe the domains
assessed: research team, study design, data analysis and interpretations, reporting).
12 Appraisal process Indicate whether the appraisal was conducted independently by more than one reviewer and if
consensus was required.
13 Appraisal results Present results of the quality assessment and indicate which articles, if any, were weighted or excluded
based on the assessment and give the rationale.
14 Data extraction Indicate which sections of the primary studies were analysed and how the data were extracted from
the primary studies (e.g. all text under the headings "results/conclusions" was extracted
electronically and entered into computer software).
15 Software State the computer software used, if any.
16 Number of reviewers Identify who was involved in coding and analysis.
17 Coding Describe the process for coding of data (e.g. line by line coding to search for concepts).
18 Study comparison Describe how comparisons were made within and across studies (e.g. subsequent studies were
coded into pre-existing concepts, and new concepts were created when deemed necessary).
19 Derivation of themes Explain whether the process of deriving the themes or constructs was inductive or deductive.
20 Quotations Provide quotations from the primary studies to illustrate themes/constructs, and identify whether the
quotations were participant quotations or the author's interpretation.
21 Synthesis output Present rich, compelling and useful results that go beyond a summary of the primary studies (e.g. new
interpretation, models of evidence, conceptual models, analytical framework, development of a
new theory or construct).