

Explanation and elaboration of the SQUIRE (Standards for Quality Improvement Reporting Excellence) Guidelines, V.2.0: examples of SQUIRE elements in the healthcare improvement literature

Daisy Goodman,1 Greg Ogrinc,2 Louise Davies,3 G Ross Baker,4 Jane Barnsteiner,5 Tina C Foster,1 Kari Gali,6 Joanne Hilden,7 Leora Horwitz,8 Heather C Kaplan,9 Jerome Leis,10 John C Matulis,11 Susan Michie,12 Rebecca Miltner,13 Julia Neily,14 William A Nelson,15 Matthew Niedner,16 Brant Oliver,17 Lori Rutman,18 Richard Thomson,19 Johan Thor20

▸ Additional material is published online only. To view please visit the journal online (http://dx.doi.org/10.1136/bmjqs-2015-004480)

For numbered affiliations see end of article.

Correspondence to Dr Daisy Goodman, Department of Obstetrics and Gynecology, Dartmouth Hitchcock Medical Center, 1 Hospital Drive, Lebanon, New Hampshire 03766, USA; [email protected]

Received 10 June 2015
Revised 5 February 2016
Accepted 29 February 2016

▸ http://dx.doi.org/10.1136/bmjqs-2015-004411

To cite: Goodman D, Ogrinc G, Davies L, et al. BMJ Qual Saf Published Online First: [please include Day Month Year] doi:10.1136/bmjqs-2015-004480

ABSTRACT
Since its publication in 2008, SQUIRE (Standards for Quality Improvement Reporting Excellence) has contributed to the completeness and transparency of reporting of quality improvement work, providing guidance to authors and reviewers of reports on healthcare improvement work. In the interim, enormous growth has occurred in understanding factors that influence the success, and failure, of healthcare improvement efforts. Progress has been particularly strong in three areas: the understanding of the theoretical basis for improvement work; the impact of contextual factors on outcomes; and the development of methodologies for studying improvement work. Consequently, there is now a need to revise the original publication guidelines. To reflect the breadth of knowledge and experience in the field, we solicited input from a wide variety of authors, editors and improvement professionals during the guideline revision process. This Explanation and Elaboration document (E&E) is a companion to the revised SQUIRE guidelines, SQUIRE 2.0. The product of collaboration by an international and interprofessional group of authors, this document provides examples from the published literature, and an explanation of how each reflects the intent of a specific item in SQUIRE. The purpose of the guidelines is to assist authors in writing clearly, precisely and completely about systematic efforts to improve the quality, safety and value of healthcare services. Authors can explore the SQUIRE statement, this E&E and related documents in detail at http://www.squire-statement.org.

RESEARCH AND REPORTING METHODOLOGY

BMJ Quality & Safety Online First, published on 27 April 2016 as 10.1136/bmjqs-2015-004480

Copyright Article author (or their employer) 2016. Produced by BMJ Publishing Group Ltd under licence.

Goodman D, et al. BMJ Qual Saf 2016;0:1–24. doi:10.1136/bmjqs-2015-004480

BACKGROUND
The past two decades have seen a proliferation in the number and scope of reporting guidelines in the biomedical literature. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines are intended as a guide to authors reporting on systematic, data-driven efforts to improve the quality, safety and value of healthcare. SQUIRE was designed to increase the completeness and transparency of reporting of quality improvement work, and since its publication in 2008,1 has contributed to the development of this body of literature by providing a guide to authors, editors, reviewers, educators and other stakeholders.

An Explanation and Elaboration document (E&E) was published in 2008 alongside the original SQUIRE guidelines, which we will refer to as SQUIRE 1.0.1 The goal of the E&E was to help authors interpret the guidelines by explaining the rationale behind each item, and providing examples of how the items might be used. The concurrent publication of an explanatory document is consistent with the practices used in developing and disseminating other scientific reporting guidelines.2–5

Evolution of the field
The publication of guidelines for quality improvement reports (QIRs) in 19996 laid the foundation for reporting about systematic efforts to improve the quality, safety and value of healthcare. A goal of the QIRs was to share and promote good practice through brief descriptions of quality improvement projects. The publication of SQUIRE 1.0 represented a transition from primarily reporting outcomes to the reporting of both what was done to improve healthcare and the study of that work. SQUIRE guided authors in describing the design and impact of an intervention, how the intervention was implemented, and the methods used to assess the internal and external validity of the study's findings, among other details.1

Since the publication of SQUIRE 1.0, enormous progress has occurred in understanding what influences the success (or lack of success) in healthcare improvement efforts. Scholarly publications have described the importance of theory in healthcare improvement work, as well as the adaptive interaction between interventions and contextual elements, and a variety of designs for drawing inferences from that work.7–12 Despite considerable progress in understanding, managing and studying these areas, much work remains to be done. The need for publication guidelines that can assist authors in writing transparently and completely about improvement work is at least as important now as it was in 2008.

SQUIRE 1.0 has been revised to reflect this progress in the field through an iterative process which has been described in detail elsewhere.13 14 The goal of this revision was to make SQUIRE simpler, clearer, more precise, easier to use and even more relevant to a wide range of approaches to improving healthcare. To this end, a diverse group of stakeholders came together to develop the content and format of the revised guidelines. The revision process included an evaluation of SQUIRE 1.0,13 two consensus conferences and pilot testing of a draft of the guidelines, ending with a public comment period to engage potential SQUIRE users not already involved in writing about quality improvement (ie, students, fellows and front-line staff engaged in improvement work outside of academic medical centres).

USING THE SQUIRE EXPLANATION AND ELABORATION DOCUMENT
This Explanation and Elaboration document is designed to support authors in the use of the revised SQUIRE guidelines by providing representative examples of high-quality reporting of SQUIRE 2.0 content items, followed by analysis of each item, and consideration of the features of the chosen example that are consistent with the item's intent (table 1). Each sequential sub-subsection of this E&E document was written by a contributing author or authors, chosen for their expertise in that area. Contributors are from a variety of disciplines and professional backgrounds, reflecting a wide range of knowledge and experience in healthcare systems in Sweden, the UK, Canada and the USA.

SQUIRE 2.0 is intended for reports that describe systematic work to improve the quality, safety and value of healthcare, using a range of methods to establish the association between observed outcomes and intervention(s). SQUIRE 2.0 applies to the reporting of qualitative and quantitative evaluations of the nature and impact of interventions intended to improve healthcare, with the understanding that the guidelines may be adapted as needed for specific situations. When appropriate, SQUIRE can, and should, be used in conjunction with other publication guidelines such as the TIDieR guidelines (Template for Intervention Description and Replication Checklist and Guide).2

While we recommend that authors consider every SQUIRE item in the writing process, some items may not be relevant for inclusion in a particular manuscript. The addition of a glossary of key terms, linked to SQUIRE and the E&E, and interactive electronic resources (http://www.squire-statement.org) provide further opportunity to engage with SQUIRE 2.0 on a variety of levels.

EXPLANATION AND ELABORATION OF SQUIRE GUIDELINE ITEMS
Title and abstract
Title
Indicate that the manuscript concerns an initiative to improve healthcare (broadly defined to include the quality, safety, effectiveness, patient-centredness, timeliness, cost, efficiency, and equity of healthcare, or access to it).

Example 1

Reducing post-caesarean surgical wound infection rate: an improvement project in a Norwegian maternity clinic.15

Example 2

Large scale organizational intervention to improvepatient safety in four UK hospitals: mixed methodevaluation.16

Explanation

The title of a healthcare improvement report should indicate that it is about an initiative to improve safety, value and/or quality in healthcare, and should describe the aim of the project and the context in which it occurred. Because the title of a paper provides the first introduction of the work, it should be both descriptive and simply written to invite the


Table 1 SQUIRE (Standards for QUality Improvement Reporting Excellence), V.2.0

Notes to authors
▸ The SQUIRE guidelines provide a framework for reporting new knowledge about how to improve healthcare.
▸ The SQUIRE guidelines are intended for reports that describe system level work to improve the quality, safety and value of healthcare, and used methods to establish that observed outcomes were due to the intervention(s).
▸ A range of approaches exists for improving healthcare. SQUIRE may be adapted for reporting any of these.
▸ Authors should consider every SQUIRE item, but it may be inappropriate or unnecessary to include every SQUIRE element in a particular manuscript.
▸ The SQUIRE glossary (http://qualitysafety.bmj.com/content/early/2015/09/10/bmjqs-2015-004411.full) contains definitions of many of the keywords in SQUIRE.
▸ The Explanation and Elaboration document provides specific examples of well written SQUIRE items, and an in-depth explanation of each item.
▸ Please cite SQUIRE when it is used to write a manuscript.

Title and abstract
1. Title: Indicate that the manuscript concerns an initiative to improve healthcare (broadly defined to include the quality, safety, effectiveness, patient-centredness, timeliness, cost, efficiency and equity of healthcare).
2. Abstract: a. Provide adequate information to aid in searching and indexing. b. Summarise all key information from various sections of the text using the abstract format of the intended publication or a structured summary such as: background, local problem, methods, interventions, results, conclusions.

Introduction (Why did you start?)
3. Problem description: Nature and significance of the local problem.
4. Available knowledge: Summary of what is currently known about the problem, including relevant previous studies.
5. Rationale: Informal or formal frameworks, models, concepts, and/or theories used to explain the problem, any reasons or assumptions that were used to develop the intervention(s), and reasons why the intervention(s) was expected to work.
6. Specific aims: Purpose of the project and of this report.

Methods (What did you do?)
7. Context: Contextual elements considered important at the outset of introducing the intervention(s).
8. Intervention(s): a. Description of the intervention(s) in sufficient detail that others could reproduce it. b. Specifics of the team involved in the work.
9. Study of the intervention(s): a. Approach chosen for assessing the impact of the intervention(s). b. Approach used to establish whether the observed outcomes were due to the intervention(s).
10. Measures: a. Measures chosen for studying processes and outcomes of the intervention(s), including rationale for choosing them, their operational definitions, and their validity and reliability. b. Description of the approach to the ongoing assessment of contextual elements that contributed to the success, failure, efficiency and cost. c. Methods employed for assessing completeness and accuracy of data.
11. Analysis: a. Qualitative and quantitative methods used to draw inferences from the data. b. Methods for understanding variation within the data, including the effects of time as a variable.
12. Ethical considerations: Ethical aspects of implementing and studying the intervention(s) and how they were addressed, including, but not limited to, formal ethics review and potential conflict(s) of interest.

Results (What did you find?)
13. Results: a. Initial steps of the intervention(s) and their evolution over time (eg, timeline diagram, flow chart or table), including modifications made to the intervention during the project. b. Details of the process measures and outcome. c. Contextual elements that interacted with the intervention(s). d. Observed associations between outcomes, interventions and relevant contextual elements. e. Unintended consequences such as unexpected benefits, problems, failures or costs associated with the intervention(s). f. Details about missing data.

Discussion (What does it mean?)
14. Summary: a. Key findings, including relevance to the rationale and specific aims. b. Particular strengths of the project.
15. Interpretation: a. Nature of the association between the intervention(s) and the outcomes. b. Comparison of results with findings from other publications. c. Impact of the project on people and systems. d. Reasons for any differences between observed and anticipated outcomes, including the influence of context. e. Costs and strategic trade-offs, including opportunity costs.
16. Limitations: a. Limits to the generalisability of the work. b. Factors that might have limited internal validity such as confounding, bias or imprecision in the design, methods, measurement or analysis. c. Efforts made to minimise and adjust for limitations.
17. Conclusions: a. Usefulness of the work. b. Sustainability.

Continued


reader to learn more about the project. Both examples given above do this well.

Authors should consider using terms which allow the reader to identify easily that the project is within the field of healthcare improvement, and/or state this explicitly as in the examples above. This information also facilitates the correct assignment of medical subject headings (MeSH) in the National Library of Medicine's Medline database. In 2015, healthcare improvement-related MeSH terms include: Health Care Quality, Access, and Evaluation; Quality Assurance; Quality Improvement; Outcome and Process Assessment (Health Care); Quality Indicators, Health Care; Total Quality Management; Safety Management (http://www.nlm.nih.gov/mesh/MBrowser.html). Sample keywords which might be used in connection with improvement work include Quality, Safety, Evidence, Efficacy, Effectiveness, Theory, Interventions, Improvement, Outcomes, Processes and Value.

Abstract
A. Provide adequate information to aid in searching and indexing
B. Summarise all key information from various sections of the text using the abstract format of the intended publication or a structured summary such as: background, local problem, methods, interventions, results, conclusions

Example

Background: Pain assessment documentation was inadequate because of the use of a subjective pain assessment strategy in a tertiary level IV neonatal intensive care unit [NICU]. The aim of this study was to improve consistency of pain assessment documentation through implementation of a multidimensional neonatal pain and sedation assessment tool. The study was set in a 60-bed level IV NICU within an urban children's hospital. Participants included NICU staff, including registered nurses, neonatal nurse practitioners, clinical nurse specialists, pharmacists, neonatal fellows, and neonatologists.

Methods: The Plan Do Study Act method of quality improvement was used for this project. Baseline assessment included review of patient medical records 6 months before the intervention. Documentation of pain assessment on admission, routine pain assessment, reassessment of pain after an elevated pain score, discussion of pain in multidisciplinary rounds, and documentation of pain assessment were reviewed. Literature review and listserv query were conducted to identify neonatal pain tools.

Intervention: Survey of staff was conducted to evaluate knowledge of neonatal pain and also to determine current healthcare providers' practice as related to identification and treatment of neonatal pain. A multidimensional neonatal pain tool, the Neonatal Pain, Agitation, and Sedation Scale [N-PASS], was chosen by the staff for implementation.

Results: Six months and 2 years following education on the use of the N-PASS and implementation in the NICU, a chart review of all hospitalized patients was conducted to evaluate documentation of pain assessment on admission, routine pain assessment, reassessment of pain after an elevated pain score, discussion of pain in multidisciplinary rounds, and documentation of pain assessment in the medical progress note. Documentation of pain scores improved from 60% to 100% at 6 months and remained at 99% 2 years following implementation of the N-PASS. Pain score documentation with ongoing nursing assessment improved from 55% to greater than 90% at 6 months and 2 years following the intervention. Pain assessment documentation following intervention for an elevated pain score was 0% before implementation of the N-PASS and improved slightly to 30% 6 months and 47% 2 years following implementation.

Conclusions: Identification and implementation of a multidimensional neonatal pain assessment tool, the N-PASS, improved documentation of pain in our unit. Although improvement in all quality improvement monitors was noted, additional work is needed in several key areas, specifically documentation of reassessment of pain following an intervention for an elevated pain score.

Keywords: N-PASS, neonatal pain, pain scores, quality improvement17

Explanation

The purpose of an abstract is twofold. First, to summarise all key information from various sections of the text using the abstract format of the intended publication or a structured summary of the background,

Table 1 Continued

17. Conclusions (continued)
c. Potential for spread to other contexts.
d. Implications for practice and for further study in the field.
e. Suggested next steps.

Other information
18. Funding: Sources of funding that supported this work. Role, if any, of the funding organisation in the design, implementation, interpretation and reporting.


specific problem to be addressed, methods, interventions, results, conclusions; and second, to provide adequate information to aid in searching and indexing.

The abstract is meant to be both descriptive, indicating the purpose, methods and scope of the initiative, and informative, including the results, conclusions and recommendations. It needs to contain sufficient information about the article to allow a reader to quickly decide if it is relevant to their work and if they wish to read the full-length article. Additionally, many online databases such as Ovid and CINAHL use abstracts to index the article, so it is important to include keywords and phrases that will allow for quick retrieval in a literature search. The example given includes these.

Journals have varying requirements for the format, content, length and structure of an abstract. The above example illustrates how the important components of an abstract can be effectively incorporated in a structured abstract. It is clear that it is a healthcare improvement project. Some background information is provided, including a brief description of the setting and the participants, and the aim/objective is clearly stated. The methods section describes the strategies used for the interventions, and the results section includes data that delineate the impact of the changes. The conclusion section provides a succinct summary of the project, what led to its success and lessons learned. This abstract is descriptive and informative, allowing readers to determine whether they wish to investigate the article further.

Introduction
Problem description
Nature and significance of the local problem.

Available knowledge
Summary of what is currently known about the problem, including relevant previous studies.

Example

Central venous access devices place patients at risk for bacterial entry into the bloodstream, facilitate systemic spread, and contribute to the development of sepsis. Rapid recognition and antibiotic intervention in these patients, when febrile, are critical. Delays in time to antibiotic [TTA] delivery have been correlated with poor outcomes in febrile neutropenic patients.2 TTA was identified as a measure of quality of care in pediatric oncology centers, and a survey reported that most centers used a benchmark of <60 minutes after arrival, with >75% of pediatric cancer clinics having a mean TTA of <60 minutes…

The University of North Carolina [UNC] Hospitals ED provides care for ∼65 000 patients annually, including 14 000 pediatric patients aged <19 years. Acute management of ambulatory patients who have central lines and fever often occurs in the ED. Examination of a 10-month sample revealed that only 63% of patients received antibiotics within 60 minutes of arrival…18

Explanation

The introduction section of a quality improvement article clearly identifies the current relevant evidence, the best practice standard based on the current evidence, and the gap in quality. A quality gap describes the difference between practice at the local level and the achievable evidence-based standard. The authors of this article describe the problem and identify the quality gap by stating that "Examination of a 10-month sample revealed only 63% of the patients received antibiotics within 60 minutes of arrival" against the benchmark of <60 minutes, and that delays in delivering antibiotics led to poorer outcomes.18 The timing of antibiotic administration at the national level compared with the local level provides an achievable standard of care, which helps the authors determine the goal for their antibiotic administration improvement project.

Providing a summary of the relevant evidence and what is known about the problem provides background and support for the improvement project and increases the likelihood of sustainable success. The contextual information provided by describing the local system clarifies the project and reflects upon how suboptimal care with antibiotic administration negatively impacts quality. Missed diagnoses, delayed treatments, increased morbidity and increased costs are associated with a lack of quality, having relevance and implications at both the local and national levels.

Improvement work can also be done on a national or regional level. In this case, the term 'local' in the SQUIRE guidelines should be interpreted more generally as the specific problem to be addressed. For example, Murphy et al describe a national initiative addressing a healthcare quality issue.19 The introduction section in this article also illuminates current relevant evidence, best practice based on the current evidence, and the gap in quality. However, the quality gap reported here is the difference in knowledge of statin use for patients at high risk of cardiovascular morbidity and mortality in Ireland compared with European clinical guidelines: "Despite strong evidence and clinical guidelines recommending the use of statins for secondary prevention, a gap exists between guidelines and practice … A policy response that strengthens secondary prevention, and improves risk assessment and shared decision-making in the primary prevention of CVD [cardiovascular disease] is required."19

Improvement work can also address a gap in knowledge, rather than quality. For example, work might be done to develop tools to assess patient experience for


quality improvement purposes.20 Interventions to improve patient experience, or to enhance team communication about patient safety,21 may also address quality problems, but in the absence of an established, evidence-based standard.

Rationale
Informal or formal frameworks, models, concepts, and/or theories used to explain the problem, any reasons or assumptions that were used to develop the intervention(s), and reasons why the intervention(s) was expected to work.

Example 1

The team used a variety of qualitative methods … to understand sociotechnical barriers. At each step of collection, we categorised data according to the FITT ['Fit between Individuals, Task, and Technology'] model criteria … Each component of the activity system [ie, user, task and technology] was clearly defined and each interface between components was explored by drawing from several epistemological disciplines including the social and cognitive sciences. The team designed interventions to address each identified FITT barrier… By striving to understand the barriers affecting activity system components and the interfaces between them, we were able to develop a plan that addressed user needs, implement an intervention that articulated with workflow, study the contextual determinants of performance, and act in alignment with stakeholder expectations.22

Example 2

…We describe the development of an intervention to improve medication management in multimorbidity by general practitioners (GPs), in which we applied the steps of the BCW [Behaviour Change Wheel]23 to enable a more transparent implementation of the MRC [Medical Research Council] framework for design and evaluation of complex interventions…

…we used the COM-B [capability, opportunity, motivation—behaviour] model to develop a theoretical understanding of the target behaviour and guide our choice of intervention functions. We used the COM-B model to frame our qualitative behavioural analysis of the qualitative synthesis and interview data. We coded empirical data relevant to GPs’ …capabilities, …opportunities and …motivations to highlight why GPs were or were not engaging in the target behaviour and what needed to change for the target behaviour to be achieved.

The BCW incorporates a comprehensive panel of nine intervention functions, shown in figure 1, which were drawn from a synthesis of 19 frameworks of behavioural-intervention strategies. We determined which intervention functions would be most likely to effect behavioural change in our intervention by mapping the individual components of the COM-B behavioural analysis onto the published BCW linkage matrices…24

Explanation

The label ‘rationale’ for this guideline item refers to the reasons the authors have for expecting that an intervention will ‘work.’ A rationale is always present in the heads of researchers; however, it is important to make this explicit and communicate it in healthcare quality improvement work. Without this, learning from empirical studies may be limited and opportunities for accumulating and synthesising knowledge across studies restricted.8

Authors can express a rationale in a variety of ways, and in more than one way in a specific paper. These include providing an explanation, specifying underlying principles, hypothesising processes or mechanisms of change, or producing a logic model (often in the form of a diagram) or a programme theory. The rationale may draw on a specific theory with clear causal links between constructs or on a general framework which indicates potential mechanisms of change that an intervention could target.

A well developed rationale allows the possibility of evaluating not just whether the intervention had an effect, but how it had that effect. This provides a basis for understanding the mechanisms of action of the intervention, and how it is likely to vary across, for example, populations, settings and targets. An explicit rationale leads to specific hypotheses about mechanisms and/or variation, and testing these hypotheses provides valuable new knowledge, whether or not they are supported. This knowledge lays the foundation for optimising the intervention, accumulating evidence about mechanisms and variation, and advancing theoretical understanding of interventions in general.

The first example shows how a theory (the ‘Fit between Individuals, Task and Technology’

Figure 1 Behaviour change wheel (adapted from Sinnott et al24 and Michie et al25).


framework) can identify and clarify the social and technological barriers to healthcare improvement work. The study investigated engagement with a computerised system to support decisions about postoperative deep vein thrombosis (DVT) prophylaxis: use of the framework led to 11 distinct barriers being identified, each associated with a clearly specified intervention which was undertaken.

The second example illustrates the use of an integrative theoretical framework for intervention development.25 The authors used an integrative framework rather than a specific theory/model/framework. This was in order to start with as comprehensive a framework as possible, since many theories of behaviour change are partial. This example provides a clear description of the framework and how analysing the target behaviour using an integrative theoretical model informed the selection of intervention content.

Interventions may be effective without the effects being brought about by changes identified in the hypothesised mechanisms; on the other hand, they may activate the hypothesised mechanisms without changing behaviour. The knowledge gained through a theory-based evaluation is essential for understanding processes of change and, hence, for developing more effective interventions. This paper also cited evidence for, and examples of, the utility of the framework in other contexts.

Specific aims
Purpose of the project and of this report

Example

The collaborative quality improvement [QI] project described in this article was conducted to determine whether care to prevent postoperative respiratory failure as addressed by PSI 11 [Patient Safety Indicator #11, a national quality indicator] could be improved in a Virtual Breakthrough Series [VBTS] collaborative…26

Explanation

The specific aim of a project describes why it was conducted, and the goal of the report. It is essential to state the aims of improvement work clearly, completely and precisely. Specific aims should align with the nature and significance of the problem, the gap in quality, safety and value identified in the introduction, and reflect the rationale for the intervention(s). The example given makes it clear that the goal of this multisite initiative was to improve care and reduce postoperative respiratory failure by using a virtual breakthrough series.

When appropriate, the specific aims section of a report about healthcare improvement work should state that both process and outcomes will be assessed. Focusing only on assessment of outcomes ignores the possibility that clinicians may not have adopted the desired practice, or did not adopt it effectively, during the study period. Changing care delivery is the foundation of improvement work and should also be measured and reported. In the subsequent methods section, the example presented here also describes the process measures used to evaluate the VBTS.

METHODS
Context
Contextual elements considered important at the outset of introducing the intervention(s)

Example 1

CCHMC [Cincinnati Children’s Hospital Medical Center] is a large, urban pediatric medical center and the Bone Marrow Transplant [BMT] team performs 100 to 110 transplants per year. The BMT unit contains 24 beds and 60–70% of the patients on the floor are on cardiac monitors… The clinical providers… include 14 BMT attending physicians, 15 fellows, 7 NPs [nurse practitioners], and 6 hospitalists… The BMT unit employs ∼130 bedside RNs [registered nurses] and 30 PCAs [patient care assistants]. Family members take an active role…27

Example 2

Pediatric primary care practices were recruited through the AAP QuIIN [American Academy of Pediatrics Quality Improvement Innovation Network] and the Academic Pediatric Association’s Continuity Research Network. Applicants were told that Maintenance of Certification [MOC] Part 4 had been applied for, but was not assured. Applicant practices provided information on their location, size, practice type, practice setting, patient population and experience with quality improvement [QI] and identified a 3-member physician-led core improvement team. … Practices were selected to represent diversity in practice types, practice settings, and patient populations. In each selected practice the lead core team physician and in some cases the whole practice had previous QI experience… table 1 summarizes practice characteristics for the 21 project teams.28

Explanation

Context is known to affect the process and outcome of interventions to improve the quality of healthcare.29 This section of a report should describe the contextual factors that authors considered important at the outset of the improvement initiative. The goal of including information on context is twofold. First, describing the context in which the initiative took place is necessary to assist readers in understanding whether the intervention is likely to ‘work’ in their local environment, and, more broadly, the generalisability of the findings. Second, it enables the researchers to examine the role of context as a moderator of successful intervention(s). Specific and relevant elements of context thought to optimise the likelihood of success should be addressed in the design of the intervention, and plans should be made a priori to


measure these factors and examine how they interact with the success of the intervention.

Describing the context within the methods section orients the reader to where the initiative occurred. In single-centre studies, this description usually includes information about the location, patient population, size, staffing, practice type, teaching status, system affiliation and relevant processes in place at the start of the intervention, as is demonstrated in the first example by Dandoy et al27 reporting a QI effort to reduce monitor alarms. Similar information is also provided in aggregate for multicentre studies. In the second example by Duncan et al,28 a table is used to describe the practice characteristics of the 21 participating paediatric primary care practices, and includes information on practice type, practice setting, practice size, patient characteristics and use of an electronic health record. This information can be used by the reader to assess whether his or her own practice setting is similar enough to the practices included in this report to enable extrapolation of the results. The authors state that they selected practices to achieve diversity in these key contextual factors. This was likely done so that the team could assess the effectiveness of the interventions in a range of settings and increase the generalisability of the findings.

Any contextual factors believed a priori to impact the success of the intervention should be specifically discussed in this section. Although the authors’ rationale is not explicitly stated, the example suggests that they had specific hypotheses about key aspects of a practice’s context that would impact implementation of the interventions. They addressed these contextual factors in the design of their study in order to increase the likelihood that the intervention would be successful. For example, they stated specifically that they selected practices with previous healthcare improvement experience and strong physician leadership. In addition, the authors noted that practices were recruited through an existing research consortium, indicating their belief that project sponsorship by an established external network could impact success of the initiative. They also noted that practices were made aware that American Board of Pediatrics Maintenance of Certification Part 4 credit had been applied for but not assured, implying that the authors believed incentives could impact project success. While addressing context in the design of the intervention may increase the likelihood of success, these choices limit the generalisability of the findings to other similar practices with prior healthcare improvement experience, strong physician leadership and available incentives.

This example could have been strengthened by using a published framework such as the Model for Understanding Success in Quality (MUSIQ),10 the Consolidated Framework for Implementation Research (CFIR),29 or the Promoting Action on Research Implementation in Health Services (PARiHS) model30 to identify the subset of relevant contextual factors that would be examined.10 11 The use of such frameworks is not a requirement but a helpful option for approaching the issue of context. The relevance of any particular framework can be determined by authors based on the focus of their work: MUSIQ was developed specifically for microsystem or organisational QI efforts, whereas CFIR and PARiHS were developed more broadly to examine implementation of evidence or other innovations.

If elements of context are hypothesised to be important, but are not going to be addressed specifically in the design of the intervention, plans to measure these contextual factors prospectively should be made during the study design phase. In these cases, measurement of contextual factors should be clearly described in the methods section, data about how contextual factors interacted with the interventions should be included in the results section, and the implications of these findings should be explored in the discussion. For example, if the authors of the examples above had chosen this approach, they would have measured participating teams’ prior healthcare improvement experience and looked for differences in successful implementation based on whether practices had prior experience or not. In cases where context was not addressed prospectively, authors are still encouraged to explore the impact of context on the results of intervention(s) in the discussion section.

Intervention(s)
A. Description of the intervention(s) in sufficient detail that others could reproduce it
B. Specifics of the team involved in the work

Example 1

We developed the I-PASS Handoff Bundle through an iterative process based on the best evidence from the literature, our previous experience, and our previously published conceptual model. The I-PASS Handoff Bundle included the following seven elements: the I-PASS mnemonic, which served as an anchoring component for oral and written handoffs and all aspects of the curriculum; a 2-hour workshop [to teach TeamSTEPPS teamwork and communication skills, as well as I-PASS handoff techniques], which was highly rated; a 1-hour role-playing and simulation session for practicing skills from the workshop; a computer module to allow for independent learning; a faculty development program; direct-observation tools used by faculty to provide feedback to residents; and a process-change and culture-change campaign, which included a logo, posters, and other materials to ensure program adoption and sustainability. A detailed description of all curricular elements and the I-PASS mnemonic have been published elsewhere and are provided in online supplementary appendix table,


available with the full text of this article at NEJM.org. I-PASS is copyrighted by Boston Children’s Hospital, but all materials are freely available.

Each site integrated the I-PASS structure into oral and written handoff processes; an oral handoff and a written handoff were expected for every patient. Written handoff tools with a standardized I-PASS format were built into the electronic medical record programs [at seven sites] or word-processing programs [at two sites]. Each site also maintained an implementation log that was reviewed regularly to ensure adherence to each component of the handoff program.21

Example 2

All HCWs [healthcare workers] on the study units, including physicians, nurses and allied health professionals, were invited to participate in the overall study of the RTLS [real-time location system] through presentations by study personnel. Posters describing the RTLS and the study were also displayed on the participating units… Auditors wore white lab coats as per usual hospital practice and were not specifically identified as auditors but may have been recognisable to some HCWs. Auditors were blinded to the study hypothesis and conducted audits in accordance with the Ontario Just Clean Your Hands programme.31

Explanation

In the same way that reports of basic science experiments provide precise details about the quantity, specifications and usage of reagents, equipment, chemicals and materials needed to run an experiment, so too should the description of the healthcare improvement intervention include or reference enough detail that others could reproduce it. Improvement efforts are rarely unimodal and descriptions of each component of the intervention should be included. For additional guidance regarding the reporting of interventions, readers are encouraged to review the TIDieR guidelines: http://www.ncbi.nlm.nih.gov/pubmed/24609605.

In the first example above21 about the multisite I-PASS study to improve paediatric handoff safety, the authors describe seven different elements of the intervention, including a standardised mnemonic, several educational programmes, a faculty development programme, observation/feedback tools and even the publicity materials used to promote the intervention. Every change that could have contributed to the observed outcome is noted. Each element is briefly described and a reference to a more detailed description provided so that interested readers can seek more information. In this fashion, complete information about the intervention is made available, yet the full details do not overwhelm this report. Note that not all references are to peer-reviewed literature: some are to curricular materials on the website MedEdPortal (https://www.mededportal.org), and others are to online materials.

The online supplementary appendix available with this report summarises key elements of each component, which is another option to make details available to readers. The authors were careful to note situations in which the intervention differed across sites. At two sites the written handoff tool was built into word-processing programmes, not the electronic medical record. Since interventions are often unevenly applied or taken up, variation in the application of intervention components across units, sites or clinicians is reported in this section where applicable.

The characteristics of the team that conducted the intervention (for instance, type and level of training, degree of experience, and administrative and/or academic position of the personnel leading workshops) and/or the personnel to whom the intervention was applied should be specified. Often the influence of the people involved in the project is as great as the project components themselves. The second example above,31 from an elegant study of the Hawthorne effect on hand hygiene rates, succinctly describes both the staff that were being studied and characteristics of the intervention personnel: the auditors tracking hand hygiene rates.

Study of the intervention
A. Approach chosen for assessing the impact of the intervention(s)
B. Approach used to establish whether the observed outcomes were due to the intervention(s)

Example 1

The nonparametric Wilcoxon-Mann-Whitney test was used to determine differences in OR use among Radboud UMC [University Medical Centre] and the six control UMCs together as a group. To measure the influence of the implementation of new regulations about cross functional teams in May 2012 in Radboud UMC, a [quasi-experimental] time-series design was applied and multiple time periods before and after this intervention were evaluated.32
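The U statistic at the heart of the Wilcoxon-Mann-Whitney test quoted above can be computed directly from its definition. A minimal pure-Python sketch, using hypothetical utilisation figures rather than the study’s data:

```python
# Minimal sketch of the Wilcoxon-Mann-Whitney (Mann-Whitney U) statistic.
# The utilisation figures below are hypothetical illustration data,
# not taken from the study quoted above.

def mann_whitney_u(sample_a, sample_b):
    """Count, over all pairs, how often a value in sample_a exceeds one
    in sample_b (ties count one half). A U far from n_a*n_b/2 suggests
    the two groups differ in location."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

# Hypothetical monthly OR utilisation (%) at an intervention site
# versus a pooled control group.
intervention = [78, 81, 83, 85, 88]
control = [74, 76, 79, 80, 82]

u_stat = mann_whitney_u(intervention, control)
null_centre = len(intervention) * len(control) / 2  # expected U if no difference
```

In practice a statistical package would also supply the p value (eg, via the exact null distribution or a normal approximation); the sketch only shows what the statistic counts.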

Example 2

To measure the perceptions of the intervention on patients and families and its effect on transition outcomes, a survey was administered in the paediatric cystic fibrosis clinic at the start of the quality improvement intervention and 18 months after the rollout process. The survey included closed questions on demographics and the transition materials [usefulness of guide and notebook, actual use of notebook and guide, which specific notebook components were used in clinic and at home]. We also elicited open-ended feedback…

A retrospective chart review assessed the ways patients transferred from the paediatric to adult clinic before and after the transition programme started. In addition, we evaluated differences in BMI [body mass index] and hospitalizations 1 year after transfer to the adult centre.33


Explanation

Broadly, the study of the intervention is the reflection upon the work that was done, its effects on the systems and people involved, and an assessment of the internal and external validity of the intervention. Addressing this item will be greatly facilitated by the presence of a strong rationale, because when authors are clear about why they thought an intervention should work, the path to assessing the what, when, why and how of success or failure becomes easier.

The study of the intervention may at least partly (but not only) be accomplished through the study design used. For example, a stepped wedge design or comparison control group can be used to study the effects of the intervention. Other examples of ways to study the intervention include, but are not limited to, stakeholder satisfaction surveys around the intervention, focus groups or interviews with involved personnel, evaluations of the fidelity of implementation of an intervention, or estimation of unintended effects through specific analyses. The aims and methods for this portion of the work should be clearly specified. The authors should indicate whether these evaluative techniques were performed by the authors themselves, or an outside team, and what the relationship was between the authors and the evaluators. The timing of the ‘study of the intervention’ activities relative to the intervention should be indicated.

In the first example,32 the cross-functional team study, the goal was to improve utilisation of operating room time by having a multidisciplinary, interprofessional group proactively manage the operating room schedule. This project used a prespecified study design to study the intervention, including an intervention and a control group. They assessed whether the observed outcomes were due to the intervention or some other cause (internal validity) by comparing operating room utilisation over time at the intervention site to utilisation at the control site. They understood the possible confounding effects of system-wide changes to operating room policies, and planned their analysis to account for this by using a quasi-experimental time series design. The authors used statistical results to determine the validity of their findings, suggesting that the decrease in variation in use was indicative of organisational learning.

In a subsequent section of this report, the authors also outlined an evaluation they performed to make sure that improved efficiency of the operating room was not associated with adverse changes in operative mortality or complication rates. This is an example of how an assessment of unintended impact of the intervention, an important component of studying the intervention, might be completed. An additional way to assess impact in this particular study might have been to obtain information from staff on their impressions of the programme, or to assess how cross-functional teams were implemented at this particular site.

In the second example,33 a programme to improve the transition from paediatric to adult cystic fibrosis care was implemented and evaluated. The authors used a robust theoretical framework to help develop their work in this area, and its presence supported their evaluative design by showing whose feedback would be needed in order to determine success: healthcare providers, patients and their families. In this paper, the development of the intervention incorporated the principle of studying it through PDSA cycles, which were briefly reported to give the reader a sense of the validity of the intervention. Outcomes of the intervention were assessed by testing how patients’ physical parameters changed over time before and after the intervention. To test whether these changes were likely to be related to the implementation of the new transition programme, patients and families were asked to complete a survey, which demonstrated the overall utility of the intervention to the target audience of families and patients. The survey also helped support the assertion that the intervention was the reason patient outcomes improved by testing whether people actually used the intervention materials as intended.

Measures
A. Measures chosen for studying processes and outcomes of the intervention(s), including rationale for choosing them, their operational definitions, and their validity and reliability
B. Description of the approach to the ongoing assessment of contextual elements that contributed to the success, failure, efficiency, and cost of the improvement
C. Methods employed for assessing completeness and accuracy of data

Example

Improvement in culture of safety and ‘transformative’ effects—Before and after surveys of staff attitudes in control and SPI1 [the Safer Patients Initiative, phase 1] hospitals were conducted by means of a validated questionnaire to assess staff morale, attitudes, and aspects of culture [the NHS National Staff Survey]…

Impact on processes of clinical care—To identify any improvements, we measured error rates in control and SPI1 hospitals by means of explicit [criterion based] and separate holistic reviews of case notes. The study group comprised patients aged 65 or over who had been admitted with acute respiratory disease: this is a high risk group to whom many evidence based guidelines apply and hence where significant effects were plausible.

Improving outcomes of care—We reviewed case notes to identify adverse events and mortality and assessed any improvement in patients’ experiences by using a


validated measure of patients’ satisfaction [the NHS patient survey]…

To control for any learning or fatigue effects, or both, in reviewers, case notes were scrambled to ensure that they were not reviewed entirely in series. Agreement on prescribing error between observers was evaluated by assigning one in 10 sets of case notes to both reviewers, who assessed cases in batches, blinded to each other’s assessments, but compared and discussed results after each batch.16

Explanation

Studies of healthcare improvement should document both planned and actual changes to the structure and/or process of care, and the resulting intended and/or unintended (desired or undesired) changes in the outcome(s) of interest.34 While measurement is inherently reductionistic, those evaluating the work can provide a rich view by combining multiple perspectives through measures of clinical, functional, experiential, and cost outcome dimensions.35–37

Measures may be routinely used to assess healthcare processes or designed specifically to characterise the application of the intervention in the clinical process. Either way, evaluators also need to consider the influence of contextual factors on the improvement effort and its outcomes.7 38 39 This can be accomplished through a mixed method design which combines data from quantitative measurement, qualitative interviews and ethnographical observation.40–43 In the study described above, triangulation of complementary data sources offers a rich picture of the phenomena under study, and strengthens confidence in the inferences drawn.

The choice of measures and type of data used will depend on the particular nature of the initiative under study, on data availability, feasibility considerations and resource constraints. The trustworthiness of the study will benefit from insightful reporting of the choice of measures and the rationale for choosing them. For example, in assessing ‘staff morale, attitudes, and aspects of culture that might be affected’ by the SPI1, the evaluators selected the 11 most relevant of the 28 survey questions in the NHS Staff Survey questionnaire and provided references to detailed documentation for that instrument. To assess patient safety, the authors’ approach to reviewing case notes ‘was both explicit (criterion based) and implicit (holistic) because each method identifies a different spectrum of errors’.16

Ideally, measures would be perfectly valid, reliable, and employed in research with complete and accurate data. In practice, such perfection is impossible.42 Readers will benefit from reports of the methods employed for assessing the completeness and accuracy of data, so they can critically appraise the data and the inferences made from them.
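Duplicate-review agreement of the kind described in the measures example above (one in ten sets of case notes assessed by both blinded reviewers) is often summarised with a chance-corrected statistic. SQUIRE does not prescribe any particular statistic; the sketch below computes Cohen’s kappa on hypothetical ratings.

```python
# Sketch: Cohen's kappa for agreement between two blinded reviewers.
# The ratings are hypothetical (1 = error identified, 0 = no error);
# kappa corrects raw percent agreement for agreement expected by chance.
from collections import Counter

def cohens_kappa(rater1, rater2):
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    # Chance agreement: product of each rater's marginal frequencies.
    expected = sum(freq1[c] * freq2[c] for c in freq1) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical double-coded subset of case notes.
reviewer_a = [1, 1, 0, 0, 1, 0, 0, 1]
reviewer_b = [1, 0, 0, 0, 1, 0, 1, 1]

kappa = cohens_kappa(reviewer_a, reviewer_b)
```

Reporting the agreement statistic alongside the double-coding procedure lets readers judge the reliability of the error measure for themselves.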

Analysis
A. Qualitative and quantitative methods used to draw inferences from the data
B. Methods for understanding variation within the data, including the effects of time as a variable

Example 1

We used statistical process control with our primary process measure of family-activated METs [Medical Emergency Teams] displayed on a u-chart. We used established rules for differentiating special versus common cause variation for this chart. We next calculated the proportion of family-activated versus clinician-activated METs which was associated with transfer to the ICU within 4 h of activation. We compared these proportions using χ2 tests.44
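The two techniques named in this excerpt can be sketched numerically. The counts below are hypothetical, not data from the study: a u-chart places 3-sigma limits around a rate of events per unit of exposure (with limits that widen for smaller denominators), and the χ2 statistic compares two proportions in a 2×2 table.

```python
# Sketch of the two methods named above, on hypothetical data:
# (1) centre line and control limits for a u-chart (events per
#     opportunity, Poisson-based 3-sigma limits);
# (2) the Pearson chi-squared statistic for a 2x2 table.
import math

def u_chart_limits(counts, opportunities):
    """Return (u_bar, [(lcl, ucl), ...]): the centre line and per-subgroup
    limits u_bar +/- 3*sqrt(u_bar / n_i), floored at zero."""
    u_bar = sum(counts) / sum(opportunities)
    limits = []
    for n in opportunities:
        half_width = 3 * math.sqrt(u_bar / n)
        limits.append((max(0.0, u_bar - half_width), u_bar + half_width))
    return u_bar, limits

def chi_squared_2x2(a, b, c, d):
    """Chi-squared statistic for the table [[a, b], [c, d]] (eg, ICU
    transfer vs not, by family- vs clinician-activated MET)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical monthly family-activated MET counts per 1000 patient-days.
counts = [2, 3, 1, 4]
patient_days = [1.0, 1.2, 0.9, 1.1]  # in thousands
u_bar, limits = u_chart_limits(counts, patient_days)

# Hypothetical 2x2 table: ICU transfer within 4 h by activation source.
chi2 = chi_squared_2x2(10, 20, 30, 40)
```

A point outside its (lcl, ucl) pair would be flagged as special cause variation; the χ2 value would then be compared against the chi-squared distribution with one degree of freedom.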

Example 2

The CDMC [Saskatchewan Chronic Disease Management Collaborative] did not establish a stable baseline upon which to test improvement; therefore, we used line graphs to examine variation occurring at the aggregate level [data for all practices combined] and linear regression analysis to test for statistically significant slope [alpha=0.05]. We used small multiples, rational ordering and rational subgrouping to examine differences in the level and rate of improvement between practices.

We examined line graphs for each measure at the practice level using a graphical analysis technique called small multiples. Small multiples repeat the same graphical design structure for each ‘slice’ of the data; in this case, we examined the same measure, plotted on the same scale, for all 33 practices simultaneously in one graphic. The constant design allowed us to focus on patterns in the data, rather than the details of the graphs. Analysis of this chart was subjective; the authors examined it visually and noted, as a group, any qualitative differences and unusual patterns.

To examine these patterns quantitatively, we used a rational subgrouping chart to plot the average month-to-month improvement for each practice on an Xbar-S chart.45
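The quantitative steps in this example can be sketched with made-up practice data: an aggregate least-squares slope over time (the example tested the slope at alpha=0.05) and one average month-to-month improvement per practice, the subgroup summaries an Xbar-S chart would display. Significance testing is omitted from the sketch.

```python
# Sketch of the quantitative analyses described above, on made-up data:
# an aggregate least-squares slope over time, and each practice's average
# month-to-month improvement (the subgroup means an Xbar-S chart plots).

def ols_slope(y):
    """Least-squares slope of y against time points 0, 1, 2, ..."""
    n = len(y)
    x_mean = (n - 1) / 2
    y_mean = sum(y) / n
    num = sum((x - x_mean) * (yi - y_mean) for x, yi in enumerate(y))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

def monthly_improvement(series):
    """Average month-to-month change for one practice's measure."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas)

# Hypothetical monthly measure for three practices (% patients at target).
practices = {
    "practice_A": [40, 44, 47, 52],
    "practice_B": [55, 55, 58, 60],
    "practice_C": [30, 29, 33, 34],
}

# Aggregate trend: mean across practices at each month, then the slope.
aggregate = [sum(vals) / len(vals) for vals in zip(*practices.values())]
slope = ols_slope(aggregate)

# Rational subgrouping: one average improvement value per practice.
subgroup_means = {name: monthly_improvement(s) for name, s in practices.items()}
```

Plotting each practice’s series on identically scaled axes (the small multiples step) and charting `subgroup_means` with their within-practice spread would reproduce the visual analyses the authors describe.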

Example 3

Key informant interviews were conducted with staff from 12 community hospital ICUs that participated in a cluster randomized control trial [RCT] of a QI intervention using a collaborative approach. Data analysis followed the standard procedure for grounded theory. Analyses were conducted using a constant comparative approach. A coding framework was developed by the lead investigator and compared with a secondary analysis by a coinvestigator to ensure logic and breadth. As there was close agreement for the basic themes and coding decisions, all interviews were then coded to determine recurrent themes and the relationships between themes. In addition, ‘deviant’ or ‘negative’ cases [events or themes that ran counter to emerging propositions] were noted. To ensure that the

Research and reporting methodology

Goodman D, et al. BMJ Qual Saf 2016;0:1–24. doi:10.1136/bmjqs-2015-004480 11

on 22 May 2018 by guest. P

rotected by copyright.http://qualitysafety.bm

j.com/

BM

J Qual S

af: first published as 10.1136/bmjqs-2015-004480 on 13 A

pril 2016. Dow

nloaded from

Page 12: Explanation and elaboration of the SQUIRE (Standards for …qualitysafety.bmj.com/content/qhc/early/2016/04/27/bmjqs... · between interventions and contextual elements, and a variety

analyses were systematic and valid, several commonqualitative techniques were employed including con-sistent use of the interview guide, audiotaping andindependent transcription of the interview data,double coding and analysis of the data and triangula-tion of investigator memos to track the course of ana-lytic decisions.46

Explanation

Various types of problems addressed by healthcare improvement efforts may make certain types of solutions more or less effective. Not every problem can be solved with one method, yet a problem often suggests its own best solution strategy. Similarly, the analytical strategy described in a report should align with the rationale, project aims and data constraints. Many approaches are available to help analyse healthcare improvement, including qualitative approaches (eg, fishbone diagrams in root cause analysis, structured interviews with patients/families, Gemba walks) or quantitative approaches (eg, time series analysis, traditional parametric and non-parametric testing between groups, logistic regression). Often the most effective analytical approach occurs when quantitative and qualitative data are used together. Examples of this might include value stream mapping, where a process is graphically outlined with quantitative cycle times denoted; a spaghetti map linking geography to quantitative physical movements; or annotations on a statistical process control (SPC) chart to allow for temporal insights between time series data and changes in system contexts.

In the first example, by Brady et al,44 family-activated medical emergency teams (MET) are evaluated. The combination of three methods (statistical process control, a Pareto chart and χ2 testing) makes for an effective and efficient analysis. The choice of analytical methods is described clearly and concisely. The reader knows what to expect in the results section and why these methods were chosen. The selection of control charts gives statistically sound control limits that capture variation over time. The control limits give expected limits for natural variation, whereas statistically based rules make clear any special cause variation. This analytical methodology is strongly suited both for the prospective monitoring of healthcare improvement work and for the subsequent reporting as a scientific paper. Depending on the type of intervention under scrutiny, complementary types of analyses may be used, including qualitative methods.

The MET analysis also uses a Pareto chart to analyse differences in characteristics between clinician-initiated and family-initiated MET activations. Finally, specific comparisons between subgroups, where time is not an essential variable, are augmented with traditional biostatistical approaches, such as χ2 testing. This example, with its one-paragraph description of analytical methods (control charts, Pareto charts and basic biostatistics), is easily understandable and clearly written, so that it is accessible to front-line healthcare professionals who might wish to use similar techniques in their work.

Every analytical method also has constraints, and the reason for choosing each method should be explained by authors. The second example, by Timmerman et al,45 presents a more complex analysis of the data processes involved in a multicentre improvement collaborative. The authors provide a clear rationale for selecting each of their chosen approaches. Principles of healthcare improvement analytics are turned inwards to understand more deeply the strengths and weaknesses of the way in which primary data were obtained, rather than interpretation of the clinical data itself. In this example,45 rational subgrouping of participating sites is undertaken to understand how individual sites contribute to variation in the process and outcome measures of the collaborative. Control charts have inherent constraints, such as the requisite number of baseline data points needed to establish preliminary control limits. Recognising this, Timmerman et al used linear regression to test for the statistical significance in the slopes of aggregate data, and used run charts for graphical representation of the data to enhance understanding.

Donabedian said, “Measurement in the classical sense—implying precision in quantification—cannot reasonably be expected for such a complex and abstract object as quality.”47 In contrast to the what, when and how much of quantitative, empirical approaches to data, qualitative analytical methods strive to illuminate the how and why of behaviour and decision making, be it of individuals or complex systems. In the third example, by Dainty et al, grounded theory is applied to improvement work wherein the data from structured interviews are used to gain insight into and generate hypotheses about the causative or moderating forces in multicentre quality improvement collaboratives, including how they contribute to actual improvement. Themes were elicited using multiple qualitative methods, including a structured interview process, audiotaping with independent transcription, comparison of analyses by multiple investigators, and recurrence frequencies of constructs.47

In all three example papers, the analytical methods selected are clearly described and appropriately cited, affording readers the ability to understand them in greater detail if desired. In the first two, SPC methods are employed in divergent ways that are instructive regarding the versatility of this analytical method. All three examples provide a level of detail which further supports replication.

Ethical considerations

Ethical aspects of implementing and studying the intervention(s) and how they were addressed, including, but not limited to, formal ethics review and potential conflict(s) of interest.

Example

Close monitoring of [vital] signs increases the chance of early detection of patient deterioration, and when followed by prompt action has the potential to reduce mortality, morbidity, hospital length of stay and costs. Despite this, the frequency of vital signs monitoring in hospital often appears to be inadequate…Therefore we used our hospital’s large vital signs database to study the pattern of the recording of vital signs observations throughout the day and examine its relationship with the monitoring frequency component of the clinical escalation protocol…The large study demonstrates that the pattern of recorded vital signs observations in the study hospital was not uniform across a 24 h period…[the study led to] identification of the failure of our staff in our study to follow a clinical vital signs monitoring protocol…

Acknowledgements The authors would like to acknowledge the cooperation of the nursing and medical staff in the study hospital.

Competing interests VitalPAC is a collaborative development of The Learning Clinic Ltd [TLC] and Portsmouth Hospitals NHS Trust [PHT]. PHT has a royalty agreement with TLC to pay for the use of PHT intellectual property within the VitalPAC product. Professor Prytherch and Drs Schmidt, Featherstone and Meredith are employed by PHT. Professor Smith was an employee of PHT until 31 March 2011. Dr Schmidt and the wives of Professors Smith and Prytherch are shareholders in TLC. Professors Smith and Prytherch and Dr Schmidt are unpaid research advisors to TLC. Professors Smith and Prytherch have received reimbursement of travel expenses from TLC for attending symposia in the UK.

Ethics approval Local research ethics committee approval was obtained for this study from the Isle of Wight, Portsmouth and South East Hampshire Research Ethics Committee [study ref. 08/02/1394].48

Explanation

SQUIRE 2.0 provides guidance to authors of improvement activities in reporting on the ethical implications of their work. Those reading published improvement reports should be assured that potential ethics issues have been considered in the design, implementation and dissemination of the activity. The example given highlights key ethical issues that may be reported by authors, including whether or not independent review occurred, and any potential conflicts of interest.49–56 These issues are directly described in the quoted sections.

Expectations for the ethical review of research and improvement work vary between countries57 and may also vary between institutions. At some institutions, both quality improvement and human subject research are reviewed using the same mechanism. Other institutions designate separate review mechanisms for human subject research and quality improvement work.56 In the example above, from the UK, Hands et al48 report that the improvement activity described was reviewed and approved by a regional research ethics committee. In another example, from the USA, the authors of a report describing a hospital-wide improvement activity to increase the rate of influenza vaccinations indicate that their work was reviewed by the facility’s quality management office.58

Avoiding potential conflict of interest is as important in improvement work as it is in research. The authors in the example paper indicate the presence or absence of potential conflicts of interest under the heading ‘Competing Interests.’ Here, the authors provide the reader with clear and detailed information concerning any potential conflict of interest.

Both the original and SQUIRE 2.0 guidelines stipulate that reports of interventions to improve the safety, value or quality of healthcare should explicitly describe how potential ethical concerns were reviewed and addressed in development and implementation of the intervention. This is an essential step for ensuring the integrity of efforts to improve healthcare, and should therefore be explicitly described in published reports.

RESULTS

Results: evolution of the intervention and details of process measures

A. Initial steps of the intervention(s) and their evolution over time (eg, timeline diagram, flow chart or table), including modifications made to the intervention during the project

B. Details of the process measures and outcome

Example

Over the course of this initiative, 479 patient encounters that met criteria took place. TTA [time to antibiotic] delivery was tracked, and the percentage of patients receiving antibiotics within 60 minutes of arrival increased from 63% to 99% after 8 months, exceeding our goal of 90% [figure 1]… Control charts demonstrated that antibiotic administration was reliably <1 hour by phase III and has been sustained for 24 months since our initiative goal was first met in June 2011.

Key improvement areas and specific interventions for the initiative are listed in [figure 2]. During phase I, the existing processes for identifying and managing febrile patients with central lines were mapped and analyzed. Key interventions that were tested and implemented included revision of the greeter role to include identification of patients with central lines presenting with fever and notification of the triage nurse, designation of chief complaint as “fever/central line,” re-education and re-emphasis of triage acuity as 2 for these patients, and routine stocking of the Pyxis machine….

In phase II, strategies focused on improving performance by providing data and other information for learning, using a monthly newsletter, public sharing of aggregate compliance data tracking, individual reports of personal performance, personal coaching of non-compliant staff, and rewards for compliance…

In phase III, a management guideline with key decision elements was developed and implemented [figure 3]. A new patient identification and initial management process was designed based on the steps, weaknesses, and challenges identified in the existing process map developed in phase I. This process benefited from feedback from frontline ED staff and the results of multiple PDSA cycles during phases I and II….

During the sustainability phase, data continued to be collected and reported to monitor ongoing performance and detect any performance declines should they occur…18

Explanation

Healthcare improvement work is based on a rationale, or hypothesis, as to what intervention will have the desired outcome(s) in a given context. Over time, as a result of the interaction between interventions and context, these hypotheses are re-evaluated, resulting in modifications or changes to the interventions. Although the mechanism by which this occurs should be included in the methods section of a report, the resulting transformation of the intervention over time rightfully belongs under results. The results section should therefore describe both this evolution and its associated outcomes.

When publishing this work, it is important that the reader has specific information about the initial interventions and how they evolved. This can be in the form of tables and figures in addition to text. In the example above, interventions are described in phases: I, II, III and a sustainability phase, and information provided as to why they evolved and how various roles were impacted (figure 2). This level of detail allows readers to imagine how these interventions and staff roles might be adapted in the context of their own institutions, as an intervention which is successful in one organisation may not be in another.

It is important to report the degree of success achieved in implementing an intervention in order to assess its fidelity, for example, the proportion of the time that the intervention actually occurred as intended. In the example above, the goal of delivering antibiotics within an hour of arrival, a process measure, is expressed in terms of the percentage of total patients for whom it was achieved. The first chart (figure 3) shows the sustained improvement in this measure over time. The second chart (figure 4) illustrates the resulting decrease in variation as the interventions evolved and took hold. The charts are annotated to show the phases of evolution of the project, to enable readers to see where each intervention fits in relationship to project results over time.

Figure 2 Evolution of the interventions (adapted from Jobson et al18).

Figure 3 Statistical process control chart showing antibiotic delivery within 60 min of arrival, with annotation (from Jobson et al18).
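Process measures expressed as a percentage of eligible patients, as in this example, are typically monitored on a p-chart, whose control limits depend on the monthly denominator. A minimal sketch, with invented monthly counts rather than Jobson et al's data:

```python
import math

def p_limits(pbar, n):
    """3-sigma limits for one subgroup of a p-chart.

    pbar is the overall proportion meeting the process measure
    (eg, antibiotics within 60 min); n is that month's number of
    eligible encounters. Limits are clipped to the [0, 1] range.
    """
    sigma = math.sqrt(pbar * (1 - pbar) / n)
    return max(0.0, pbar - 3 * sigma), min(1.0, pbar + 3 * sigma)

# Hypothetical (met target, eligible) counts for four months.
monthly = [(38, 60), (45, 60), (52, 60), (57, 60)]
pbar = sum(met for met, _ in monthly) / sum(n for _, n in monthly)

for met, n in monthly:
    lcl, ucl = p_limits(pbar, n)
    flag = "" if lcl <= met / n <= ucl else "  <-- special cause"
    print(f"p = {met / n:.2f}, limits [{lcl:.2f}, {ucl:.2f}]{flag}")
```

Points outside the limits (or runs violating the usual SPC rules) signal special cause variation, which is what the annotated phases on the example's charts are meant to explain.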

Results: contextual elements and unexpected consequences

A. Contextual elements that interacted with the interventions

B. Observed associations between outcomes, interventions and relevant contextual factors

C. Unintended consequences such as benefits, harms, unexpected results, problems or failures associated with the intervention(s)

Example

Quantitative results

In terms of QI efforts, two-thirds of the 76 practices [67%] focused on diabetes and the rest focused on asthma. Forty-two percent of practices were family medicine practices, 26% were pediatrics, and 13% were internal medicine. The median percent of patients covered by Medicaid and with no insurance was 20% and 4%, respectively. One-half of the practices were located in rural settings and one-half used electronic health records. For each diabetes or asthma measure, between 50% and 78% of practices showed improvement [ie, a positive trend] in the first year.

Tables 2 and 3 show the associations of leadership with clinical measures and with practice change scores for implementation of various tools, respectively. Leadership was significantly associated with only 1 clinical measure, the proportion of patients having nephropathy screening [OR=1.37: 95% CI 1.08 to 1.74]. Inclusion of practice engagement reduced these odds, but the association remained significant. The odds of making practice changes were greater for practices with higher leadership scores at any given time [ORs=1.92–6.78]. Inclusion of practice engagement, which was also significantly associated with making practice changes, reduced these odds [ORs=2.41 to 4.20], but the association remained significant for all changes except for registry implementation.
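The odds ratios and confidence intervals quoted above come from the authors' regression models. For orientation, an unadjusted odds ratio and its Wald 95% CI can be computed directly from a 2×2 table, as sketched below; the counts and the row/column labels are entirely hypothetical, and this crude OR omits the practice-engagement adjustment the authors applied.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with Wald 95% CI from a 2x2 table.

    Illustrative layout (labels hypothetical):
        a = high leadership, change made   b = high leadership, no change
        c = low leadership, change made    d = low leadership, no change
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts for illustration only.
print(odds_ratio_ci(40, 60, 25, 75))
```

A CI excluding 1.0, as in the nephropathy screening result quoted above, is what "significantly associated" refers to.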

Qualitative results

Among the 12 practices interviewed, 5 practices had 3 or fewer clinicians and 7 had 4 or more [range=1–32]. Seven practices had high ratings of practice change by the coach. One-half were NCQA [National Committee for Quality Assurance] certified as a patient-centered medical home. These practices were similar to the quantitative analysis sample except for higher rates of electronic health record use and Community Care of North Carolina Medicaid membership…

Leadership-related themes from the focus groups included having [1] someone with a vision about the importance of the work, [2] a middle manager who implemented the vision, and [3] a team who believed in and were engaged in the work.…Although the practice management provided the vision for change, patterns emerged among the practices that suggested leaders with a vision are a necessary, but not sufficient condition for successful implementation.

Leading from the middle

All practices had leaders who initiated the change, but practices with high and low practice change ratings reported very different ‘operational’ leaders. Operational leaders in practices with low practice change ratings were generally the same clinicians, practice managers, or both who introduced the change. In contrast, in practices with high practice change ratings, implementation was led by someone other than the lead physician or top manager.59

Explanation

One of the challenges in reporting healthcare improvement studies is the effect of context on the success or failure of the intervention(s). The most commonly reported contextual elements that may interact with interventions are structural variables including organisational/practice type, volume, payer mix, electronic health record use and geographical location. Other contextual elements associated with


healthcare improvement success include top management leadership, organisational structure, data infrastructure/information technology, physician involvement in activities, motivation to change and team leadership.60 In this example, the authors provided descriptive information about the structural elements of the individual practices, including type of practice, payer mix, geographical setting and use of electronic health records. The authors noted variability in improvement in diabetes and asthma measures across the practices, and examined how characteristics of practice leadership affected the change process for an initiative to improve diabetes and asthma care. Practice leadership was measured monthly by the community based practice coach at each site. For analyses, these scores were reduced into low (0–1) and high (2–3) groups. Practice change ratings were also assigned by the practice coaches indicating the degree of implementation and use of patient registries, care templates, protocols and patient self-management support tools. Local leadership showed no association with most of the clinical measures; however, local leadership involvement was significantly associated with implementation of the process tools used to improve outcomes. The authors use tables to display these associations clearly to the reader.

In addition, the authors use the information from the coaches’ ratings to further explore this concept of practice leadership. The authors conducted semistructured focus group interviews for a sample of 12 of the 76 practices based on improvement in clinical measures and improvement in practice change score. Two focus groups were conducted in each practice including one with practice clinicians and administrators and one with front-line staff. Three themes emerged from these interviews that explicated the concept of practice leadership in these groups. While two of the themes reflect contextual elements that are often cited in the literature (visionary leader and engaged team), the authors addressed an unexpected theme about the role of the middle (operational) manager. This operational leader was often reported to be a nurse or nurse practitioner with daily interactions with physicians and staff, who appeared to be influential in facilitating change. The level of detail provided about the specifics of practice leadership can be useful to readers who are engaged in their own improvement work. Although no harms or failures related to the work were described, transparent reporting of negative results is as important as reporting successful ones.

In this example, the authors used a mixed methods approach in which practice leadership and engagement was quantitatively rated by improvement coaches as well as qualitatively evaluated using focus groups. The use of qualitative methods enhanced understanding of the context of practice leadership. This mixed methods approach is not a requirement for healthcare improvement studies as the influence of contextual elements can be assessed in many ways. For example, Cohen et al simply describe the probable impact of the 2009 H1N1 pandemic on their work to increase influenza vaccination rates in hospitalised patients,58 providing important contextual information to assist the reader’s understanding of the results.

Figure 4 Statistical process control chart showing time to administration of antibiotics, with annotation (adapted from Jobson et al18).

Results: missing data

F. Details about missing data

Example 1

We successfully contacted 69% [122/178] of patients/families in the postimplementation group…Among the remaining 56 patients [31%] for whom no phone or E-mail follow-up was obtained, 34 had another encounter in our hospital on serial reviews of their medical record. Nine patients were evaluated in a cardiology clinic and 7 in a neurology clinic. As a result of these encounters, there were no patients ultimately diagnosed with a cardiac or neurologic condition.61

Example 2

We identified 328 patients as under-immunized between September 2009 and September 2010. We fully immunized 194 [59%] of these patients by September 2010…We failed to recover missing outside immunization records on 15 patients [5%]. The remaining 99 patients [30%] refused vaccines, transferred care, or were unreachable by phone or mail. For the 194 patients we fully immunized, we made 504 [mean 2.6] total outreach attempts for care coordination. We immunized 176 [91%] of these patients by age 24 months. For the 20 patients who remained under-immunized, we made 113 [mean 5.7] total outreach attempts for care coordination. We continued attempting outreach to immunize these patients even after their second birthday.62
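Accounting for missing data of this kind is largely a matter of making the dispositions sum to the denominator. The sketch below cross-checks the figures quoted in the second example; every number is taken from the quote itself.

```python
# Figures quoted in the under-immunization example (ref 62).
identified = 328
fully_immunized = 194        # reported as 59%
records_missing = 15         # reported as 5%
refused_or_unreachable = 99  # reported as 30%
still_under_immunized = 20   # the 20 patients followed past age 2

# The four dispositions should account for every identified patient.
assert (fully_immunized + records_missing
        + refused_or_unreachable + still_under_immunized) == identified

print(round(fully_immunized / identified * 100))        # 59
print(round(records_missing / identified * 100))        # 5
print(round(refused_or_unreachable / identified * 100)) # 30
print(round(504 / fully_immunized, 1))                  # mean outreach attempts, 2.6
print(round(176 / fully_immunized * 100))               # immunized by 24 months, 91
print(round(113 / still_under_immunized, 1))            # mean attempts, 5.7
```

The same tally works for the first example: 122 contacted plus 56 not contacted equals the 178 patients in the postimplementation group.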

Explanation

Whenever possible, the results section of a healthcare improvement paper should account for missing data. Doing so enables the reader to understand potential biases in the analysis, and may add important context to the study findings. It is important for authors to clearly state why data are missing, such as technical problems or errors in data entry, attrition of participants from an improvement initiative over time, or patients lost to follow-up. Efforts made by the team to recover the data should be described, and any available details about the missing data provided.

In the first example,61 the improvement team was unable to contact 56 patients for phone or email follow-up (ie, why the data are missing). To account for this missing data, the team performed serial reviews of medical records. In doing so, they were able to report patient information relevant to the study outcomes. In the second example,62 the authors also clearly state the reasons for missing data (failure to recover outside records, transfers of care, unreachable by phone or email). In addition, they give details about the number of outreach attempts made for specific patient groups. Providing a detailed description of missing data allows for a more accurate interpretation of study findings.

DISCUSSION

Summary

A. Key findings, including relevance to the rationale and specific aims

B. Particular strengths of the project

Example

In our 6-year experience with family-activated METs [Medical Emergency Teams], families uncommonly activated METs. In the most recent and highest-volume year, families called 2.3 times per month on average. As a way of comparison, the hospital had an average of 8.7 accidental code team activations per month over this time. This required an urgent response from the larger team. Family activation less commonly resulted in ICU transfer than clinician-activated METs, although 24% of calls did result in transfers. This represents a subset of deteriorating patients that the clinical team may have missed. In both family-activated and clinician-activated MET calls, clinical deterioration was a common cause of MET calls. Families more consistently identified their fear that the child’s safety was at risk, a lack of response from the clinical team, and that the interaction between team and family had become dismissive. To our knowledge, this study is the largest study of family-activated METs to date, both in terms of count of calls and length of time observed. It is also the first to compare reasons for MET calls from families with matched clinician-activated calls.44
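The subgroup comparisons summarised here (eg, ICU transfer after family- versus clinician-activated calls) rest on the χ2 testing named earlier in this section. A minimal sketch of the Pearson χ2 statistic for a contingency table, using invented counts rather than the study's data:

```python
def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table.

    table[i][j] are observed counts; for illustration, rows might be
    family vs clinician activation and columns ICU transfer yes/no
    (labels hypothetical). For a 2x2 table the statistic is referred
    to a chi-square distribution with 1 degree of freedom.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Invented counts: [transferred, not transferred] per activation type.
print(chi_square([[24, 76], [35, 65]]))
```

For a 2×2 table, a statistic above roughly 3.84 corresponds to p < 0.05; at sample sizes this small, an exact test might be preferred, which is the kind of trade-off authors should state, as the Explanation above argues.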

Explanation

Although often not called out with a specific subheading, the ‘summary’ of a report on healthcare improvement most often introduces and frames the ‘discussion’ section. While the first paragraph should be a focused summary of the most critical findings, the majority of the project’s results should be contained in the results section. The goal of the summary is to capture the major findings and bridge the discussion to a more nuanced exploration of those findings. Exactly where the summary section ends is far less important than how it sets up the reader to explore and reflect on the ensuing discussion.

The example above gives a clear and concise statement of the study’s strengths and distinctive features. This summary recaps quantitative findings (families called METs relatively infrequently and fewer of their calls resulted in intensive care unit (ICU) transfers), and introduces a subsequent discussion of concerns identified by families which might not be visible to clinicians, including ways in which ‘family activation of an MET may improve care without reducing MET-preventable codes outside of the ICU’.44 This conveys an important message and bridges to a discussion of the existing literature and terminology. Providing a focused summary in place of an exhaustive re-statement of project results appropriately introduces the reader to the discussion section and a more thorough description of the study’s findings and implications.

The authors go on to relate these main findings back to the nature and significance of the problem and the specific aims previously outlined in the introduction section, specifically (emphasis added) ‘To evaluate the burden of family activation on the clinicians involved…to better understand the outcome of METs, and to begin to understand why families call METs’.44

Another approach in structuring the summary component of the discussion is to succinctly link results to the relevant processes in the development of the associated interventions. This approach is illustrated by Beckett et al in a recent paper about decreasing cardiac arrests in the acute hospital setting,63 “Key to this success has been the development of a structured response to the deteriorating patient. Following the implementation of reliable EWS [early warning systems] across the AAU [Acute Admissions Unit] and ED [Emergency Department], and the recognition and response checklists, plus weekly safety meetings in the AAU at SRI [Stirling Royal Infirmary], there was an immediate fall in the number of cardiac arrests, which was sustained thereafter.”63 This linkage serves to reintroduce the reader to some of the relevant contextual elements which can subsequently be discussed in more detail as appropriate. Importantly, it also serves to frame the interpretive section of the discussion, which focuses on comparison of results with findings from other publications, and further evaluating the project’s impact.

Interpretation

A. Nature of the association between the intervention(s) and the outcomes

B. Comparison of results with findings from other publications

C. Impact of the project on people and systems

D. Reasons for any differences between observed and anticipated outcomes, including the influence of context

E. Costs and strategic trade-offs, including opportunity costs

Example 1

(a) After QI interventions, the percentage of patients attending four or more clinic visits significantly improved, and in 2012 we met our goal of 90% of patients attending four or more times a year. A systematic approach to scheduling processes, timely rescheduling of patients who missed appointments and monitoring of attendance resulted in a significant increase in the number of patients who met the CFF national recommendation of four or more visits per year.64

(b) Although the increase in the percentage of patients with greater than 25th centile for BMI/W-L from 80% to 82% might seem small, it represents a positive impact on a few more patients and provides more opportunities for improvement. Our data are in agreement with Johnson et al. (2003), who reported that frequent monitoring among other interventions made possible due to patients being seen more in clinic was associated with improved outcomes in CF.64

(c) We learned that families are eager to have input and be involved…participation in the [learning and leadership collaborative] resulted in a positive culture change at the ACH CF Care Center regarding the use of QI methods.64

(d) We noticed our clinic attendance started to improve before the [intervention] processes were fully implemented. We speculate this was due to the heightened awareness of our efforts by patients, families and our CF team.64

(e) Replication of these processes could be hindered by lack of personnel, lack of buy-in by the hospital administration and lack of patient/family involvement….barriers to attendance included rising fuel costs, transportation limitations, child care issues, missed workdays by caregivers and average low-income population.64

Example 2

The direct involvement of patients and families…allowed us to address the social and medical barriers to adherence. Their input was invaluable since they live with the treatment burden that is a daily part of CF care…the in-clinic patient demonstration gave staff the ability to upgrade or replace equipment that was not functioning.65

We found that following a simple algorithm helped to maintain consistency in our program…the simplicity of this program makes it easily incorporated into routine CF clinic visits.65

Explanation

In the first example, Berlinski et al64 describe the implications of their improvement efforts by highlighting that they increased the proportion of CF patients receiving four clinic visits a year and achieved secondary improvements on a nutritional outcome and on the culture of their context. The authors also offer alternative explanations for outcomes, including factors which might have confounded the asserted relationship between intervention and outcome, namely that performance on the primary outcome began to improve well before implementation of the intervention. This provides insight into what the actual drivers of the outcome might have been, and can be very helpful to others seeking to replicate or modify the intervention. Finally, their comparison of their results with those of a similar study provides a basis for considerations of feasibility, sustainability, spread and replication of the intervention.

Research and reporting methodology

18 Goodman D, et al. BMJ Qual Saf 2016;0:1–24. doi:10.1136/bmjqs-2015-004480

BMJ Qual Saf: first published as 10.1136/bmjqs-2015-004480 on 13 April 2016. Downloaded from http://qualitysafety.bmj.com/ on 22 May 2018 by guest. Protected by copyright.


The second example, from Zanni et al,65 found that the simplicity of their intervention could maximise ease of implementation, suggesting that costs and trade-offs are likely to be minimal for replication in similar contexts. Conversely, Berlinski et al64 cite barriers to replicating and sustaining their work, including staffing, leadership, population socioeconomic characteristics and informatics issues, each of which could present cost or trade-off considerations that leadership will need to weigh to support implementation and sustainability. Additionally, both Berlinski et al and Zanni et al observe that patient and family involvement in the planning and intervention process simultaneously improved the context and effectiveness of the intervention.

Limitations
A. Limits to the generalisability of the work
B. Factors that might have limited internal validity such as confounding, bias, or imprecision in the design, methods, measurement, or analysis
C. Efforts made to minimise and adjust for limitations

Example 1

Our study had several limitations. Our study of family MET activations compared performance with our historical controls, and we were unable to adjust for secular trends or unmeasured confounders. Our improvement team included leaders of our MET committee and patient safety, and we are not aware of any ongoing improvement work or systems change that might have affected family MET calls. We performed our interventions in a large tertiary care children's hospital with a history of improvement in patient safety and patient-centred and family-centred care.

Additionally, it is uncertain and likely very context-dependent as to what is the 'correct' level of family-activated METs. This may limit generalizability to other centres, although the consistently low rate of family MET calls in the literature in a variety of contexts should reduce concerns related to responding team workload. We do not have process measures of how often MET education occurred for families and of how often families understood this information or felt empowered to call. This results in a limited understanding of the next best steps to improve family calling. Our data were collected in the course of clinical care with chart abstraction from structured clinical notes. Given this, it is possible that notes were not written for family MET calls that were judged 'non-clinical.' From our knowledge of the MET system, we are confident such calls are quite few, but we lack the data to quantify this. Our chart review for the reasons families called did not use a validated classification tool as we do not believe one exists. This is somewhat mitigated by our double independent reviews that demonstrated the reliability of our classification scheme.44

Example 2

Our study has a number of important limitations. Our ethnographic visits to units were not longitudinal, but rather snapshots in time; changes in response to the program could have occurred after our visits. We did not conduct a systematic audit of culture and practices, and thus some inaccuracies in our assessments may be present. We did not evaluate possible modifiers of effect of factors such as size of unit, number of consultants and nurses, and other environmental features. We had access to ICUs' reported infection rates only if they provided them directly to us; for information governance reasons, these rates could not be verified. It is possible that we have offered too pessimistic an interpretation of whether Matching Michigan 'worked': the quantitative evaluation may have under-estimated the effects of the program [or over-estimated the secular trend], since the 'waiting' clusters were not true controls that were unexposed to the interventions. …66

Explanation

The limitations section offers an opportunity to present potential weaknesses of the study, explain the choice of methods, measures and intervention, and examine why results may not be generalisable beyond the context in which the work occurred. In the first example, a study of family activated METs, Brady et al identified a number of issues that might influence internal validity and the extent to which their findings are generalisable to other hospitals. The success of METs, and the participation of family members in calling these teams, may depend on contextual attributes such as leadership involvement. Although few hospitals have implemented family activated METs, the growing interest in patient and family engagement may also contribute to a broader use of this intervention. There are no data available to assess the secular trends in these practices that might suggest the observed changes resulted from external factors. There were few family activated MET calls. This positive result may stem from family education, but the authors report that they had limited data on such education. The lack of a validated tool to capture chart review information is noted as a potential weakness since some non-clinical MET calls might not have been recorded in the chart. The authors also note that the observed levels of family activated MET calls are consistent with other literature.

The impact of improvement interventions often varies with context, but the large number of potential factors to consider requires that researchers focus on a limited set of contextual measures they believe may influence success and future adaptation and spread. In the second example given, Dixon-Woods et al assessed variation in results of the implementation of the central line bundle to reduce catheter-related bloodstream infections in English ICUs.66 While English units made improvements, the results were not as impressive as in the earlier US experience. The researchers point to the prior experiences of staff in the English ICUs in several infection control campaigns as contributing to this difference. Many English clinicians viewed the new programme as redundant, believing this was a problem already solved. The research team also notes that some of the English ICUs did not have an organisational culture that supported consistent implementation of the required changes.

Dixon-Woods et al relied on quantitative data on clinical outcomes as well as observation and qualitative interviews with staff. However, as they report, their study had several limitations. Their visits to the units were not longitudinal, so changes could have been made in some units after the researchers' observations. They did not carry out systematic audits of culture and practices that might have revealed additional information, nor did they assess the impact of local factors including the size of the unit, the number of doctors and nurses, and other factors that might have affected the capability of the unit to implement new practices. Moreover, while the study included controls, there was considerable public and professional interest in these issues, which may have influenced performance and reduced the relative impact of the intervention. The authors' report66 of the context and limitations is crucial to assist the reader in assessing their results, and in identifying factors that might influence results of similar interventions elsewhere.

Conclusions
A. Usefulness of the work
B. Sustainability
C. Potential for spread to other contexts
D. Implications for practice and for further study in the field
E. Suggested next steps

Example

We have found that average paediatric nurse staffing ratios are significantly associated with hospital readmission for children with common medical and surgical conditions. To our knowledge, this study is the first to explicitly examine and find an association between staffing ratios and hospital readmission in paediatrics… Our findings have implications for hospital administrators given the national emphasis on reduction of readmissions by payers. The role of nursing care in reducing readmissions has traditionally focused on nurse-led discharge planning programmes in the inpatient setting and nurse-directed home care for patients with complex or chronic conditions.

While these nurse-oriented interventions have been shown to significantly reduce readmissions, our findings suggest that hospitals might also achieve reductions in readmission by focusing on the number of patients assigned to nurses. In paediatrics, limiting nurses' workloads to four or fewer patients appears to have benefits in reducing readmissions.

Further, hospitals are earnestly examining their discharge processes and implementing quality improvement programmes aimed at preparing patients and families to manage health condition(s) beyond the hospital. Quality improvement initiatives to improve inpatient care delivery often depend upon the sustained efforts of front-line workers, particularly nurses. Prior research shows that hospitals with better nurse staffing ratios deliver Joint Commission-recommended care for key conditions more reliably, highlighting the inter-relationship of nurse staffing levels and quality improvement success.

The sustainability of quality improvement initiatives related to paediatric readmission may ultimately depend on nurses' ability to direct meaningful time and attention to such efforts.67

Explanation

The conclusion of a healthcare improvement paper should address the overall usefulness of the work, including its potential for dissemination and implications for the field, both in terms of practice and policy. It may be included as a separate section in or after the discussion section, or these components may be incorporated within a single overall discussion section.

The authors of this report highlight the usefulness of their research with reference to 'the national [US] emphasis on reduction of readmission by payers'. They also refer throughout the paper to the debates and research around appropriate nurse staffing levels, and the impact of nurse staffing levels on the sustainability of quality improvement initiatives in general, with reference to the key role of nurses in improving care and evidence that nursing staff levels are associated with delivery of high quality care. Although the authors don't refer directly to potential for spread to other contexts, the generalisability of their findings is discussed in a separate section of the discussion (not included here).

In this example, the authors refer to 'implications for hospital administrators' because their findings 'suggest that hospitals might also achieve reduction in readmissions by focusing on the number of patients assigned to nurses'. They also observe that these findings speak to 'the validity of the California minimum staffing ratio for paediatric care'. Perhaps they could have suggested more in terms of implications for policy, for example what their findings might mean for the potential of payer organisations to influence nurse staffing levels through their contracts, or for broader government legislation on nurse-patient ratios. However, in their discussion they also recognise the limitations of a single study to inform policy decisions.


The need for further study is emphasised in the wider discussion section. The authors note that 'more research is needed to better understand the reasons for children's readmissions and thus identify which ones are potentially preventable', calling for 'additional research on both paediatric readmission measures and the relationship between nursing care delivery, nurse staffing levels and readmissions'. In writing about healthcare improvement, it is important that the authors' conclusions are appropriately related to their findings, reflecting their validity and generalisability, and their potential to inform practice. In this case, direct recommendations to change practice are appropriately withheld given the need for further research.

Funding
Sources of funding that supported this work. Role, if any, of the funding organisation in the design, implementation, interpretation and reporting.

Example

Funding/Support: This research was funded by the Canadian Institutes of Health Research, the Ontario Ministry of Health and Long-Term Care, the Green Shield Canada Foundation, the University of Toronto Department of Medicine, and the Academic Funding Plan Innovation Fund.

Role of the Funder/Sponsor: None of the funders or sponsors had any role in the design of the study; the conduct of the study; the collection, management, analysis or interpretation of the data; the preparation, review, or approval of the manuscript; or the decision to submit the manuscript for publication.68

Explanation

Sources of funding for quality improvement should be clearly stated at the end of a manuscript in similar fashion to other scholarly reports. Any organisation, institution or agency that contributed financially to any part of the project should be listed. In this example, funding was received from multiple sources including government, university and foundation granting agencies.

Due to their financial interest in the quality improvement project, funding sources have the potential to introduce bias in favour of exaggerated effect size. The role of each funding source should be described in sufficient detail, as in the example above, to allow readers to assess whether these external parties may have influenced the reporting of improvement outcomes. A recent paper by Trautner et al provides a similar approach.69

SUMMARY AND CONCLUSIONS REGARDING THIS E&E
The SQUIRE 2.0 E&E is intended to help authors 'operationalise' SQUIRE in their reports of systematic efforts to improve the quality, safety and value of healthcare. Given the rapid growth in healthcare improvement over the past two decades, it is imperative to promote the sharing of successes and failures to inform further development of the field. The E&E provides guidance about how to use SQUIRE as a structure for writing and can be a starting point for ongoing dialogue about key concepts that are addressed in the guidelines. We hope that SQUIRE 2.0 will challenge authors to write better and to think more clearly about the role of formal and informal theory, the interaction between context, interventions and outcomes, and methods for studying improvement work. Due to space considerations, we have been able to cite only a few of many possible examples from the literature for each guideline section. To further explore these key concepts in healthcare improvement, we recommend both the complete articles cited by the authors of this E&E as well as their secondary references. To promote the spread and sustainability of SQUIRE 2.0, the guidelines, this E&E and the accompanying glossary are accessible on the SQUIRE website (http://www.squire-statement.org). The website also links the viewer to resources such as screencasts and opportunities to discuss key concepts through an interactive forum.

Since the publication of SQUIRE 1.0 in 2008,1 there has been an enormous increase in the number and complexity of published reports about healthcare improvement. We hope that the time spent in the evaluation and careful development of SQUIRE 2.0 and this E&E will contribute to a new chapter in scholarly writing about healthcare improvement. We look forward to the continued growth of the field and the further evolution of SQUIRE as we deepen our understanding of how to improve the quality, safety and value of healthcare.

Author affiliations
1Department of Obstetrics and Gynecology, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
2Department of Education, Veterans Health Administration, White River Jct, Vermont, USA
3Department of Surgery, Veterans Health Administration, White River Jct, Vermont, USA
4Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
5Department of Nursing, University of Pennsylvania, Philadelphia, Pennsylvania, USA
6Department of Pediatrics, Cleveland Clinic Children's Hospital, Cleveland, Ohio, USA
7Department of Pediatrics, Colorado Children's Hospital, Aurora, Colorado, USA
8Division of Healthcare Delivery Science, New York University, New York, USA
9Perinatal Institute and James M. Anderson Center for Health Systems Excellence, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
10Department of Medicine, University of Toronto, Toronto, Ontario, Canada
11Department of Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
12Department of Clinical, Educational and Health Psychology, University College, London, UK
13Department of Nursing, University of Alabama, Birmingham, Alabama, USA


14National Center for Patient Safety, Veterans Health Administration, White River Junction, NH, USA
15Department of Psychiatry and Community and Family Medicine, Geisel School of Medicine, Hanover, New Hampshire, USA
16Department of Pediatrics, University of Michigan Medical Center, Ann Arbor, Michigan, USA
17Department of Nursing, MGH Institute of Health Professions, Boston, Massachusetts, USA
18Department of Pediatrics, Seattle Children's Hospital, Seattle, Washington, USA
19Institute of Health and Society, Newcastle University, Newcastle upon Tyne, UK
20Jönköping Academy for Improvement of Health and Welfare, Jönköping University, Jönköping, Sweden

Correction notice This article has been updated since it was published Online First. The names of two authors have been revised.

Twitter Follow Leora Horwitz at @leorahorwitzmd and JohanThor at @johanthor1

Collaborators Frank Davidoff, MD, Editor Emeritus, Annals of Internal Medicine, and Adjunct Professor at The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire, USA, [email protected]; Paul Batalden, MD, Active Emeritus Professor, Pediatrics and Community and Family Medicine, Geisel School of Medicine at Dartmouth, The Dartmouth Institute for Health Policy and Clinical Practice, Hanover, New Hampshire, USA, [email protected]; David Stevens, MD, Adjunct Professor, The Dartmouth Institute for Health Policy and Clinical Practice, Hanover, NH, USA, Editor Emeritus, BMJ Quality and Safety, London, UK, Senior Fellow, Institute for Healthcare Improvement, Cambridge, MA, USA, [email protected]. Dr. Davidoff contributed substantially to the editing of the paper but did not serve as an author. Drs. Davidoff, Stevens and Batalden all contributed substantially to the final version of the SQUIRE 2.0 Guidelines, which provides the framework for this manuscript, and offered comments and guidance during the writing process.

Contributors All listed authors contributed substantially to the writing of the manuscript. Each author was assigned a specific section, and was the primary author for that section, with one exception: TCF and JCM coauthored the section entitled 'Summary.' Each author also had an opportunity to review the final manuscript prior to its submission. The corresponding author and guarantor of this project, DG, was responsible for coordinating submissions and for the primary structuring and editing of this manuscript. GO and LD contributed substantially to both editing and to the conceptualization of the manuscript, and to the purpose and structure of included sections. LD also authored the section entitled 'Study of the Intervention.'

Funding The revision of the SQUIRE guidelines was supported by funding from the Robert Wood Johnson Foundation (grant number 70024) and the Health Foundation (grant number 7099).

Competing interests None declared.

Provenance and peer review Not commissioned; internally peerreviewed.

Open Access This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

REFERENCES
1 Ogrinc G, Mooney SE, Estrada C, et al. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care 2008;17(Suppl 1):i13–32.

2 Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. BMJ 2014;348:g1687.

3 Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 Explanation and Elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869.

4 von Elm E. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies. Ann Intern Med 2007;147:573.

5 Bossuyt PM. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Clin Chem 2003;49:1–6.

6 Moss F, Thompson R. A new structure for quality improvement reports. Qual Saf Health Care 1999;8:76.

7 Harvey G, Jas P, Walshe K, et al. Analysing organisational context: case studies on the contribution of absorptive capacity theory to understanding inter-organisational variation in performance improvement. BMJ Qual Saf 2015;24:48–55.

8 Davidoff F, Dixon-Woods M, Leviton L, et al. Demystifying theory and its use in improvement. BMJ Qual Saf 2015;24:228–38.

9 Portela MC, Pronovost PJ, Woodcock T, et al. How to study improvement interventions: a brief overview of possible study types. BMJ Qual Saf 2015;24:325–36.

10 Kaplan HC, Provost LP, Froehle CM, et al. The Model for Understanding Success in Quality (MUSIQ): building a theory of context in healthcare quality improvement. BMJ Qual Saf 2012;21:13–20.

11 Bamberger R, ed. Perspectives on context. The Health Foundation, 2014.

12 Dixon-Woods M. The problem of appraising qualitative research. Qual Saf Health Care 2004;13:223–5.

13 Ogrinc G, Davies L, Goodman D, et al. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf 2015; doi:10.1136/bmjqs-2015-004411

14 Davies L, Donnelly KZ, Goodman DJ, et al. Findings from a novel approach to publication guideline revision: user road testing of a draft version of SQUIRE 2.0. BMJ Qual Saf 2015; doi:10.1136/bmjqs-2015-004117

15 Dyrkorn OA, Kristoffersen M, Walberg M. Reducing post-caesarean surgical wound infection rate: an improvement project in a Norwegian maternity clinic. BMJ Qual Saf 2012;21:206–10.

16 Benning A, Ghaleb M, Suokas A, et al. Large scale organisational intervention to improve patient safety in four UK hospitals: mixed method evaluation. BMJ 2011;342:d195.

17 Reavey DA, Haney BM, Atchison L, et al. Improving pain assessment in the NICU: a quality improvement project. Adv Neonatal Care 2014;14:144–53.

18 Jobson M, Sandrof M, Valeriote T, et al. Decreasing time to antibiotics in febrile patients with central lines in the emergency department. Pediatrics 2015;135:e187–95.

19 Murphy C, Bennett K, Fahey T, et al. Statin use in adults at high risk of cardiovascular disease mortality: cross-sectional analysis of baseline data from The Irish Longitudinal Study on Ageing (TILDA). BMJ Open 2015;5:e008017.


20 Homa K, Sabadosa KA, Nelson EC, et al. Development and validation of a cystic fibrosis patient and family member experience of care survey. Qual Manag Health Care 2013;22:100–16.

21 Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med 2014;371:1803–12.

22 Lesselroth BJ, Yang J, McConnachie J, et al. Addressing the sociotechnical drivers of quality improvement: a case study of post-operative DVT prophylaxis computerised decision support. BMJ Qual Saf 2011;20:381–9.

23 Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci 2011;6:42.

24 Sinnott C, Mercer SW, Payne RA, et al. Improving medication management in multimorbidity: development of the MultimorbiditY COllaborative Medication Review And DEcision Making (MY COMRADE) intervention using the Behaviour Change Wheel. Implement Sci 2015;10:132.

25 Michie S, Atkins L, West R. The behaviour change wheel: a guide to designing interventions. London: Silverback Publishing, 2014.

26 Zubkoff L, Neily J, Mills PD, et al. Using a virtual breakthrough series collaborative to reduce postoperative respiratory failure in 16 Veterans Health Administration hospitals. Jt Comm J Qual Patient Saf 2014;40:11–20.

27 Dandoy CE, Davies SM, Flesch L, et al. A team-based approach to reducing cardiac monitor alarms. Pediatrics 2014;134:e1686–94.

28 Duncan PM, Pirretti A, Earls MF, et al. Improving delivery of Bright Futures preventive services at the 9- and 24-month well child visit. Pediatrics 2014;135:e178–86.

29 Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50.

30 Rycroft-Malone J, Seers K, Chandler J, et al. The role of evidence, context, and facilitation in an implementation trial: implications for the development of the PARIHS framework. Implement Sci 2013;8:28.

31 Srigley JA, Furness CD, Baker GR, et al. Quantification of the Hawthorne effect in hand hygiene compliance monitoring using an electronic monitoring system: a retrospective cohort study. BMJ Qual Saf 2014;23:974–80.

32 van Veen-Berkx E, Bitter J, Kazemier G, et al. Multidisciplinary teamwork improves use of the operating room: a multicenter study. J Am Coll Surg 2015;220:1070–6.

33 Okumura MJ, Ong T, Dawson D, et al. Improving transition from paediatric to adult cystic fibrosis care: programme implementation and evaluation. BMJ Qual Saf 2014;23(Suppl 1):i64–72.

34 Donabedian A. Evaluating the quality of medical care. Milbank Q 2005;83:691–729.

35 Provost L, Murray S. The health care data guide: learning from data for improvement. 1st edn. San Francisco, CA: Jossey-Bass, 2011.

36 Nelson EC, Mohr JJ, Batalden PB, et al. Improving health care, Part 1: the clinical value compass. Jt Comm J Qual Improv 1996;22:243–58.

37 Lloyd RC. Quality health care: a guide to developing and using indicators. 1st edn. Sudbury, MA: Jones and Bartlett Publishers, 2004.

38 Pettigrew A, Whipp R. Managing change for competitive success. Cambridge, MA: Blackwell.

39 Pawson R, Tilley N. Realistic evaluation. London: Sage, 1997.
40 Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs: principles and practices. Health Serv Res 2013;48(Pt 2):2134–56.

41 Yin RK. Case study research: design and methods. 4th edn. Los Angeles, CA: Sage, 2009. (Applied social research methods).

42 Robson C. Real world research: a resource for users of social research methods in applied settings. 3rd edn. Chichester: Wiley, 2011.

43 Creswell JW. Research design: qualitative, quantitative, and mixed methods approaches. 3rd edn. Thousand Oaks, CA: Sage, 2009.

44 Brady PW, Zix J, Brilli R, et al. Developing and evaluating the success of a family activated medical emergency team: a quality improvement report. BMJ Qual Saf 2015;24:203–11.

45 Timmerman T, Verrall T, Clatney L, et al. Taking a closer look: using statistical process control to identify patterns of improvement in a quality-improvement collaborative. BMJ Qual Saf 2010;19:e19.

46 Dainty KN, Scales DC, Sinuff T, et al. Competition in collaborative clothing: a qualitative case study of influences on collaborative quality improvement in the ICU. BMJ Qual Saf 2013;22:317–23.

47 Donabedian A. Explorations in quality assessment and monitoring. Ann Arbor, MI: Health Administration Press, 1980.

48 Hands C, Reid E, Meredith P, et al. Patterns in the recording of vital signs and early warning scores: compliance with a clinical escalation protocol. BMJ Qual Saf 2013;22:719–26.

49 Baily MA, Bottrell MM, Lynn J, et al. Special report: the ethics of using QI methods to improve health care quality and safety. Hastings Cent Rep 2006;36:S1–40.

50 Bellin E, Dubler NN. The quality improvement–research divide and the need for external oversight. Am J Public Health 2001;91:1512–17.

51 Lynn J. The ethics of using quality improvement methods in health care. Ann Intern Med 2007;146:666.

52 Perneger TV. Why we need ethical oversight of quality improvement projects. Int J Qual Health Care 2004;16:343–4.

53 Taylor HA, Pronovost PJ, Sugarman J. Ethics, oversight and quality improvement initiatives. Qual Saf Health Care 2010;19:271–4.

54 Fox E, Tulsky JA. Recommendations for the ethical conduct of quality improvement. J Clin Ethics 2005;16:61–71.

55 O’Kane M. Do patients need to be protected from quality improvement? In: Jennings B, Baily M, Bottrell M, eds. Health care quality improvement: ethical and regulatory issues. Garrison, NY: The Hastings Center, 2007:89–99.

56 Dixon N. Ethics and clinical audit and quality improvement (QI): a guide for NHS organisations. London: Healthcare Quality Improvement Partnership, 2011.

57 Dixon N. Proposed standards for the design and conduct of a national clinical audit or quality improvement study. Int J Qual Health Care 2013;25:357–65.

58 Cohen ES, Ogrinc G, Taylor T, et al. Influenza vaccination rates for hospitalised patients: a multiyear quality improvement effort. BMJ Qual Saf 2015;24:221–7.

59 Donahue KE, Halladay JR, Wise A, et al. Facilitators of transforming primary care: a look under the hood at practice leadership. Ann Fam Med 2013;11(Suppl 1):S27–33.

60 Kaplan HC, Brady PW, Dritz MC, et al. The influence of context on quality improvement success in health care: a systematic review of the literature. Milbank Q 2010;88:500–59.


61 Guse SE, Neuman MI, O’Brien M, et al. Implementing a guideline to improve management of syncope in the emergency department. Pediatrics 2014;134:e1413–21.

62 Bottino CJ, Cox JE, Kahlon PS, et al. Improving immunization rates in a hospital-based primary care practice. Pediatrics 2014;133:e1047–54.

63 Beckett DJ, Inglis M, Oswald S, et al. Reducing cardiac arrests in the acute admissions unit: a quality improvement journey. BMJ Qual Saf 2013;22:1025–31.

64 Berlinski A, Chambers MJ, Willis L, et al. Redesigning care to meet national recommendation of four or more yearly clinic visits in patients with cystic fibrosis. BMJ Qual Saf 2014;23(Suppl 1):i42–9.

65 Zanni RL, Sembrano EU, Du DT, et al. The impact of re-education of airway clearance techniques (REACT) on adherence and pulmonary function in patients with cystic fibrosis. BMJ Qual Saf 2014;23(Suppl 1):i50–5.

66 Dixon-Woods M, Leslie M, Tarrant C, et al. Explaining Matching Michigan: an ethnographic study of a patient safety program. Implement Sci 2013;8:70.

67 Tubbs-Cooley HL, Cimiotti JP, Silber JH, et al. An observational study of nurse staffing ratios and hospital readmission among children admitted for common conditions. BMJ Qual Saf 2013;22:735–42.

68 Dhalla IA, O’Brien T, Morra D, et al. Effect of a postdischarge virtual ward on readmission or death for high-risk patients. JAMA 2014;312:1305.

69 Trautner BW, Grigoryan L, Petersen NJ, et al. Effectiveness of an antimicrobial stewardship approach for urinary catheter-associated asymptomatic bacteriuria. JAMA Intern Med 2015;175:1120.
