
• Systematic reviews have increasingly replaced traditional narrative reviews and expert commentaries as a way of summarising research evidence.

• Systematic reviews attempt to bring the same level of rigour to reviewing research evidence as should be used in producing that research evidence in the first place.

• Systematic reviews should be based on a peer-reviewed protocol so that they can be replicated if necessary.

• High quality systematic reviews seek to:
  • Identify all relevant published and unpublished evidence
  • Select studies or reports for inclusion
  • Assess the quality of each study or report
  • Synthesise the findings from individual studies or reports in an unbiased way
  • Interpret the findings and present a balanced and impartial summary of the findings with due consideration of any flaws in the evidence.

• Many high quality peer-reviewed systematic reviews are available in journals as well as from databases and other electronic sources.

• Systematic reviews may examine quantitative or qualitative evidence; put simply, when two or more types of evidence are examined within one review it is called a mixed-method systematic review.

• Systematic reviewing techniques are in a period of rapid development. Many systematic reviews still look at clinical effectiveness, but methods now exist to enable reviewers to examine issues of appropriateness, feasibility and meaningfulness.

• Not all published systematic reviews have been produced with meticulous care; therefore, the findings may sometimes mislead. Interrogating published reports by asking a series of questions can uncover deficiencies.


    What is...? series Second edition

For further titles in the series, visit: www.whatisseries.co.uk

Pippa Hemingway PhD BSc (Hons) RGN RSCN, Research Fellow in Systematic Reviewing, School of Health and Related Research (ScHARR), University of Sheffield

Nic Brereton PhD BSc (Hons), Health Economist, NB Consulting Services, Sheffield

What is a systematic review?

    Supported by sanofi-aventis

    Date of preparation: April 2009 NPR09/1111

    Evidence-based medicine

Why systematic reviews are needed

The explosion in medical, nursing and allied healthcare professional publishing within the latter half of the 20th century (perhaps 20,000 journals and upwards of two million articles per year), which continues well into the new millennium, makes keeping up with primary research evidence an impossible feat. There has also been an explosion in internet access to articles, sometimes creating an awe-inspiring number of hits to explore. In addition, there is the challenge of building and maintaining the skills to use the wide variety of electronic media that allow access to large amounts of information.

Moreover, clinicians, nurses, therapists, healthcare managers, policy makers and consumers have wide-ranging information needs; that is, they need good quality information on the effectiveness, meaningfulness, feasibility and appropriateness of a large number of healthcare interventions, not just one or two. For many, this need conflicts with their busy clinical or professional workload. For consumers, the amount of information can be overwhelming, and a lack of expert knowledge can potentially lead to false belief in unreliable information, which in turn may raise health professional workload and patient safety issues.

Even in a single area, it is not unusual for the number of published studies to run into hundreds or even thousands (before they are sifted for inclusion in a review). Some of these studies, once read in full text, may give unclear, confusing or contradictory results; sometimes they may not be published in our own language, or there may be a lack of clarity about whether the findings can be generalised to our own country. Looked at individually, each article may offer little insight into the problem at hand; the hope is that, when taken together within a systematic review, a clearer (and more consistent) picture will emerge.

If the need for information is to be fulfilled, there must be an evidence translation stage. This is the act of transferring knowledge to individual health professionals, health facilities and health systems (and consumers) by means of publications, electronic media, education, training and decision support systems. Evidence transfer is seen to involve the careful development of strategies that identify target audiences, such as clinicians, managers, policy makers and consumers, and the design of methods to package and transfer information so that it is understood and used in decision-making.1

Failings in traditional reviews

Reviews have always been a part of the healthcare literature. Experts in their field have sought to collate existing knowledge and publish summaries on specific topics. Traditional reviews may, for instance, be called literature reviews, narrative reviews, critical reviews or commentaries within the literature. Although often very useful background reading, they differ from a systematic review in that they are not led via a peer-reviewed protocol, and so it is not often possible to replicate the findings. In addition, such attempts at synthesis have not always been as rigorous as might have been hoped. In the worst case, reviewers may not have begun with an open mind as to the likely recommendations, and they may then build a case in support of their personal beliefs, selectively citing appropriate studies along the way. Indeed, those involved in developing a review may well have started a review (or have been commissioned to write one) precisely because of their accumulated experience and professional opinions. Even if the reviewer does begin with an open mind, traditional reviews are rarely explicit about how studies are selected, assessed and integrated. Thus, the reader is generally unable to assess the


likelihood of prior beliefs or of selection or publication biases clouding the review process. Despite all this, such narrative reviews were and are widespread and influential.

The lack of rigour in the creation of traditional reviews went largely unremarked until the late 1980s, when several commentators exposed the inadequacies of the process and the consequent bias in recommendations.2,3 Not least of the problems was that small but important effects were being missed, different reviewers were reaching different conclusions from the same research base and, often, the findings reported had more to do with the specialty of the reviewer than with the underlying evidence.4

The inadequacy of traditional reviews and the need for a rigorous systematic approach were emphasised in 1992 with the publication of two landmark papers.5,6 In these papers, Elliot Antman, Joseph Lau and colleagues reported two devastating findings.
• First, if original studies of the effects of clot busters after heart attacks had been systematically reviewed, the benefits of therapy would have been apparent as early as the mid-1970s.
• Second, narrative reviews were woefully inadequate in summarising the current state of knowledge. These reviews either omitted mention of effective therapies or suggested that the treatments should be used only as part of an ongoing investigation when in fact the evidence (if it had been collated) was near incontrovertible.

These papers showed that there was much knowledge to be gained from collating existing research but that traditional approaches had largely failed to extract this knowledge. What was needed was the same rigour in secondary research (research where the objects of study are other research studies) as is expected from primary research (original study).

When systematic reviews are needed

Conventionally, systematic reviews are needed to establish the clinical and cost-effectiveness of an intervention or drug. Increasingly, however, they are required to establish whether an intervention or activity is feasible, whether it is appropriate (ethically or culturally) or how it relates to evidence of the experiences, values, thoughts or beliefs of clients and their relatives.1

Systematic reviews are also:
• Needed to propose a future research agenda7 when the way forward may be unclear or existing agendas have failed to address a clinical problem
• Increasingly required by authors who wish to secure substantial grant funding for primary healthcare research
• Increasingly part of student dissertations or postgraduate theses
• Central to the National Institute for Health and Clinical Excellence health technology assessment process for multiple technology appraisals and single technology appraisals.

However, systematic reviews are most needed whenever there is a substantive question, several primary studies (perhaps with disparate findings) and substantial uncertainty. One famous case is described by The Cochrane Library:8 a single research paper, published in 1998 and based on 12 children, cast doubt on the safety of the mumps, measles and rubella (MMR) vaccine by implying that the MMR vaccine might cause the development of problems such as Crohn's disease and autism. The paper by Wakefield et al9 has since been retracted by most of the original authors because of potential bias, but before that it had triggered a worldwide scare, which in turn resulted in reduced uptake of the vaccine.10 A definitive systematic review by Demicheli et al on MMR vaccines in children concluded that exposure to MMR was unlikely to be associated with Crohn's disease, autism or other conditions.11

Here, then, is an area where a systematic review helped clarify a vital issue to the public and to healthcare professionals; preparing such a review, however, is not a trivial exercise.


The process of systematic review

The need for rigour in the production of systematic reviews has led to the development of a formal scientific process for their conduct. Understanding the approach taken and the attempts to minimise bias can help in the appraisal of published systematic reviews, which should help to assess if their findings should be applied to practice. The overall process should, ideally, be directed by a peer-reviewed protocol.

Briefly, developing a systematic review requires the following steps.

1. Defining an appropriate healthcare question. This requires a clear statement of the objectives of the review, the intervention or phenomena of interest, the relevant patient groups and subpopulations (and sometimes the settings where the intervention is administered), the types of evidence or studies that will help answer the question, as well as appropriate outcomes. These details are rigorously used to select studies for inclusion in the review.

2. Searching the literature. The published and unpublished literature is carefully searched for the required studies relating to an intervention or activity (on the right patients, reporting the right outcomes and so on). For an unbiased assessment, this search must seek to cover all the literature (not just MEDLINE, where, for example, typically less than half of all trials will be found), including non-English sources. In reality, a designated number of databases are searched using a standardised or customised search filter. Furthermore, the grey literature (material that is not formally published, such as institutional or technical reports, working papers, conference proceedings, or other documents not normally subject to editorial control or peer review) is searched using specialised search engines, databases or websites. Expert opinion on where appropriate data may be located is sought and key authors are contacted for clarification. Selected journals are hand-searched when necessary and the references of full-text papers are also searched. Potential biases within this search are publication bias,12 selection bias and language bias.13
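To illustrate how a search filter combines terms, the sketch below builds a boolean search string from groups of synonyms: OR within a concept, AND between concepts. The function name and every term are invented for demonstration and are not taken from any published search filter.

```python
# Illustrative sketch only: the concept groups and terms below are
# invented for demonstration, not taken from a real search filter.

def build_search_string(concept_groups):
    """Combine synonym lists: OR within a concept, AND between concepts."""
    blocks = []
    for synonyms in concept_groups:
        blocks.append("(" + " OR ".join(f'"{term}"' for term in synonyms) + ")")
    return " AND ".join(blocks)

# Hypothetical concepts for a question about exercise and depression
query = build_search_string([
    ["exercise", "physical activity"],      # intervention
    ["depression", "depressive disorder"],  # condition
    ["randomised controlled trial"],        # study design
])
print(query)
```

A real strategy would also use database-specific field tags and controlled vocabulary (eg MeSH terms), which this sketch omits.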

3. Assessing the studies. Once all possible studies have been identified, they should be assessed in the following ways.
• Each study needs to be assessed for eligibility against inclusion criteria, and full-text papers are retrieved for those that meet the inclusion criteria.
• Following a full-text selection stage, the remaining studies are assessed for methodological quality using a critical appraisal framework. Poor quality studies are excluded but are usually discussed in the review report.
• Of the remaining studies, reported findings are extracted onto a data extraction form. Some studies will be excluded even at this late stage. A list of included studies is then created.
• Assessment should ideally be conducted by two independent reviewers.
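The eligibility step above can be pictured as a screening pass that checks each study against explicit inclusion criteria and records the reason for every exclusion. The study records, field names and criteria here are invented for illustration; in practice, screening is done by two independent reviewers against a protocol.

```python
# Illustrative sketch: study records and criteria are invented examples.

def screen(studies, criteria):
    """Split studies into included/excluded against explicit inclusion criteria.

    `criteria` maps a field name to a predicate; a study is included only
    if every predicate passes, mirroring protocol-driven eligibility checks.
    """
    included, excluded = [], []
    for study in studies:
        failures = [field for field, ok in criteria.items() if not ok(study[field])]
        (excluded if failures else included).append((study["id"], failures))
    return included, excluded

studies = [
    {"id": "S1", "design": "RCT", "population": "adults"},
    {"id": "S2", "design": "case report", "population": "adults"},
    {"id": "S3", "design": "RCT", "population": "children"},
]
criteria = {
    "design": lambda d: d == "RCT",         # only randomised trials
    "population": lambda p: p == "adults",  # only adult populations
}
included, excluded = screen(studies, criteria)
print(included)   # only S1 passes both criteria
print(excluded)   # S2 fails on design, S3 on population
```

Recording the failed criterion for each exclusion mirrors the audit trail a review report is expected to provide.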

4. Combining the results. The findings from the individual studies must then be aggregated to produce a bottom line on the clinical effectiveness, feasibility, appropriateness and meaningfulness of the intervention or activity. This aggregation of findings is called evidence synthesis. The type of evidence synthesis is chosen to fit the type(s) of data within the review. For example, if a systematic review inspects qualitative data, then a meta-synthesis is conducted.14 Alternatively, a technique known as meta-analysis (see What is meta-analysis?15 in this series) is used if homogeneous quantitative evidence is assessed for clinical effectiveness. Narrative summaries are used if quantitative data are not homogeneous.
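Where homogeneous quantitative evidence is pooled, the core arithmetic of a fixed-effect meta-analysis is inverse-variance weighting: studies with smaller standard errors contribute more to the pooled estimate. A minimal sketch, using invented example numbers:

```python
import math

# Illustrative sketch of fixed-effect, inverse-variance pooling;
# the effect sizes and standard errors are invented example numbers.

def pooled_effect(effects, std_errors):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, se_pooled

effects = [0.30, 0.10, 0.25]    # eg mean differences from three trials
std_errors = [0.10, 0.15, 0.12]
pooled, se = pooled_effect(effects, std_errors)
low, high = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled effect {pooled:.2f}, 95% CI {low:.2f} to {high:.2f}")
# pooled effect 0.24, 95% CI 0.11 to 0.38
```

Real meta-analysis software adds heterogeneity checks and random-effects models; this only shows the weighting idea.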

5. Placing the findings in context. The findings from this aggregation of an unbiased selection of studies then need to be discussed to put them into context. This will address issues such as the quality and heterogeneity of the included studies, the likely impact of bias and of chance, as well as the applicability of the findings. Thus, judgement and balance are not obviated by the rigour of systematic reviews; they are just reduced in impact and made more explicit.


A word of caution, however. Performing a rigorous systematic review is far from easy. It requires careful scientific consideration at inception, meticulous and laborious searching, as well as considerable attention to methodological detail and analysis before it truly deserves the badge 'systematic'. The quality of a systematic review can be assessed by using a standard checklist. Example checklists are available from the NHS Public Health Resource Unit via the Critical Appraisal Skills Programme (CASP)16 or from the Centre for Evidence-Based Medicine at the University of Oxford.17 It is useful to have experience of primary and secondary research, or to collaborate with those that do, prior to undertaking a systematic review, and to ensure that an academic and practice partnership directs the review.

The above has been an overview of the systematic review process. Clear guidance on the process of developing systematic reviews is available electronically,18,19 from key texts such as the one by Khan et al20 or via courses run at centres of excellence such as the NHS Centre for Reviews and Dissemination at the University of York or the Centre for Evidence-Based Medicine at the University of Oxford.

Some trends in systematic reviewing

Rapid evidence assessment reviews
Increasingly, health policy makers, clinicians and clients cannot wait the year or so required for a full systematic review to deliver its findings. Rapid evidence assessments (REAs) can provide quick summaries of what is already known about a topic or intervention. REAs use systematic review methods to search and evaluate the literature, but the comprehensiveness of the search and other review stages may be limited. The Government Social Research Unit has produced an REA toolkit which is recommended as a minimum standard for rapid evidence reviews.21 The toolkit states that an REA takes two to six months to complete and is a quick overview of existing research on a constrained topic and a synthesis of the evidence provided by these studies to answer the REA question. Examples of when an REA can be undertaken, according to the REA toolkit, include:
• When there is uncertainty about the effectiveness of a policy or service and there has been some previous research
• When a decision is required within months and policy makers/researchers want to make decisions based on the best available evidence within that time
• When a map of evidence in a topic area is required to determine whether there is any existing evidence and to direct future research needs.21

An example of an REA that allows examination of the methods is a report by Underwood et al (2007), who evaluated the effectiveness of interventions for people with common mental health problems on employment outcomes.22

User involvement
User involvement is well established as a prerequisite within primary research and is now increasingly expected within a systematic review. The Campbell Collaboration Users Group proposes a spectrum of user involvement in the systematic review process, ranging from determining the scope of the review and the outcomes of relevance, to determining the need for a review and involvement throughout all stages of production and dissemination.23 The definition of user involvement within the systematic review protocol is recommended; thus, what is expected from a user or user group, and at which stages of the review, should be clearly defined. For guidance on public involvement in research, access INVOLVE at www.invo.org.uk

Mixed methods
Increasingly, qualitative methods are used together with a randomised controlled trial to obtain a fuller picture of an intervention and the way it works.24 It is also possible to mix methods within a systematic review, as methods to systematically review qualitative evidence, such as that from grounded theory, phenomenology and other qualitative research designs, are now developed. This is particularly useful when different types of data, such as qualitative and quantitative data, are available to inform a review topic. For example, the issues of a mixed-method synthesis have been described by Harden and Thomas (2005) on the basis of their review of the barriers to, and facilitators of, fruit and vegetable intake among children aged four to ten years.25 The following issues arose from the merger of two simultaneous meta-syntheses of trial data (quantitative) and studies of experiences (qualitative).

Strengths of mixed methods
• They preserve the integrity of the findings of different types of studies by using the appropriate type of analysis that is specific to each type of finding.
• The use of categorical codes as a 'halfway house' to mediate between two forms of data was unproblematic.25

Limitation of mixed methods
• There is potential researcher bias when categorical subgroups are not created a priori but are created later on in the review.25

Finding existing reviews

High quality systematic reviews are published in many of the leading journals and electronic databases. In addition, electronic publication by the Cochrane Collaboration, the NHS Centre for Reviews and Dissemination and other organisations offers speedy access to regularly updated summaries (Box 1).

Drawbacks of systematic reviews

Systematic reviews appear at the top of the hierarchy of evidence that informs evidence-based practice (practice supported by research findings) when assessing clinical effectiveness (Box 2).26 This reflects the fact that, when well conducted, they should give us the best possible estimate of any true effect. As noted previously, such confidence can sometimes be unwarranted, however, and caution must be exercised before accepting the veracity of any systematic review. A number of problems may arise within reviews of clinical effectiveness.
• Like any piece of research, a systematic review may be done badly. Attention to the questions listed in the section 'Appraising a systematic review' can help separate a rigorous review from one of poor quality.
• Inappropriate aggregation of studies that differ in terms of the intervention used, the patients included or the types of data can lead to the 'drowning' of important effects. For example, the effects seen in some subgroups may be concealed by a lack of effect (or even reverse effects) in other subgroups.

The findings from systematic reviews are not always in harmony with the findings from large-scale, high quality single trials.27,28 Thus, findings from systematic reviews need to be weighed against perhaps conflicting evidence from other sources. Ideally, an updated review would deal with such anomalies.

Hierarchies of evidence for feasibility or appropriateness reviews are available,29 where most of the above applies.

Box 1. Useful websites for systematic reviews
• The Cochrane Library: www.cochrane.org
• The Joanna Briggs Institute: www.joannabriggs.edu.au/pubs/systematic_reviews.php
• The Campbell Collaboration: www.campbellcollaboration.org
• The Centre for Evidence-Based Medicine: www.cebm.net
• The NHS Centre for Reviews and Dissemination: www.york.ac.uk/inst/crd
• Bandolier: www.medicine.ox.ac.uk/bandolier
• PubMed Clinical Queries (Find Systematic Reviews): www.ncbi.nlm.nih.gov/entrez/query/static/clinical.shtml

Appraising a systematic review

Not all systematic reviews are rigorous and unbiased. The reader will want to interrogate any review that purports to be systematic to assess its limitations and to help decide if the recommendations should be applied to practice. Further guidance on appraising the quality of a systematic review can be found in several useful publications.16,30,31 Guidance focuses on the critical appraisal of reviews of clinical effectiveness. To reflect this, the following questions provide a framework.
• Is the topic well defined in terms of the intervention under scrutiny, the patients receiving the intervention (plus the settings in which it was received) and the outcomes that were assessed?

• Was the search for papers thorough? Was the search strategy described? Was manual searching used as well as electronic databases? Were non-English sources searched? Was the grey literature covered (for example, non-refereed journals, conference proceedings or unpublished company reports)? What conclusions were drawn about the possible impact of publication bias?
• Were the criteria for inclusion of studies clearly described and fairly applied? For example, were blinded or independent reviewers used?
• Was study quality assessed by blinded or independent reviewers? Were the findings related to study quality?
• Was missing information sought from the original study investigators? Was the missing information assessed for its possible impact on the findings?
• Do the included studies seem to indicate similar effects? If not, in the case of clinical effectiveness, was the heterogeneity of effect investigated, assessed and discussed?
• Were the overall findings assessed for their robustness in terms of the selective inclusion or exclusion of doubtful studies and the possibility of publication bias?
• Was the play of chance assessed? In particular, was the range of likely effect sizes presented and were null findings interpreted carefully? For example, a review that finds no evidence of effect may simply be an expression of our lack of knowledge rather than an assertion that the intervention is worthless.
• Are the recommendations based firmly on the quality of the evidence presented? In their enthusiasm, reviewers can sometimes go beyond the evidence in drawing conclusions and making their recommendations.

All studies have flaws. It is not the mere presence of flaws that vitiates the findings. Even flawed studies may carry important information. The reader must exercise judgement in assessing whether individual flaws undermine the findings to such an extent that the conclusions are no longer adequately supported.
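One common way the heterogeneity question above is investigated in effectiveness reviews is with Cochran's Q and the I-squared statistic, which estimates the proportion of between-study variability due to real heterogeneity rather than chance. A minimal sketch, using invented trial numbers:

```python
# Illustrative sketch of quantifying heterogeneity (Cochran's Q and I^2);
# the effect sizes and standard errors below are invented example numbers.

def i_squared(effects, std_errors):
    """Return (Q, I^2 as a percentage) for a set of study effect sizes."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

effects = [0.50, 0.05, 0.30]     # disparate findings across three trials
std_errors = [0.10, 0.12, 0.15]
q, i2 = i_squared(effects, std_errors)
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")
# Q = 8.31, I^2 = 76%
```

A high I-squared (conventionally above about 50-75%) suggests that pooling these trials into a single estimate may be inappropriate without further investigation.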


Box 2. Hierarchies of evidence for questions of therapy, prevention, aetiology or harm26

Level 1a: Systematic review (with homogeneity) of randomised controlled trials (RCTs)
Level 1b: Individual RCT (with narrow confidence interval)
Level 1c: All-or-none studies
Level 2a: Systematic review (with homogeneity) of cohort studies
Level 2b: Individual cohort study (including low quality RCT; eg


Published by Hayward Medical Communications, a division of Hayward Group Ltd.

Copyright 2009 Hayward Group Ltd. All rights reserved.


References
1. Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-based healthcare. Int J Evid Based Healthc 2005; 3: 207-215.
2. Mulrow CD. The medical review article: state of the science. Ann Intern Med 1987; 106: 485-488.
3. Teagarden JR. Meta-analysis: whither narrative review? Pharmacotherapy 1989; 9: 274-281.
4. Spector TD, Thompson SG. The potential and limitations of meta-analysis. J Epidemiol Community Health 1991; 45: 89-92.
5. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA 1992; 268: 240-248.
6. Lau J, Antman EM, Jimenez-Silva J et al. Cumulative meta-analysis of therapeutic trials for myocardial infarction. N Engl J Med 1992; 327: 248-254.
7. Torgerson C. Systematic Reviews. London: Continuum, 2003.
8. The Cochrane Library. The Cochrane Library publishes the most thorough survey of MMR vaccination data which strongly supports its use. www.cochrane.org/press/MMR_final.pdf (last accessed 19 November 2008)
9. Wakefield AJ, Murch SH, Anthony A et al. Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet 1998; 351: 637-641.
10. Murch SH, Anthony A, Casson DH et al. Retraction of an interpretation. Lancet 2004; 363: 750.
11. Demicheli V, Jefferson T, Rivetti A, Price D. Vaccines for measles, mumps and rubella in children. Cochrane Database Syst Rev 2005: CD004407.
12. Dubben HH, Beck-Bornholdt HP. Systematic review of publication bias in studies on publication bias. BMJ 2005; 331: 433-434.
13. Egger M, Zellweger-Zahner T, Schneider M et al. Language bias in randomised controlled trials published in English and German. Lancet 1997; 350: 326-329.
14. Sandelowski M, Barroso J. Toward a metasynthesis of qualitative findings on motherhood in HIV-positive women. Res Nurs Health 2003; 26: 153-170.
15. Crombie IK, Davies HTO. What is meta-analysis? London: Hayward Medical Communications, 2009.
16. Public Health Resource Unit. Critical Appraisal Skills Programme (CASP). Making sense of evidence: 10 questions to help you make sense of reviews. www.phru.nhs.uk/Doc_Links/S.Reviews%20Appraisal%20Tool.pdf (last accessed 19 November 2008)
17. Centre for Evidence-Based Medicine. Critical Appraisal. www.cebm.net/index.aspx?o=1157 (last accessed 23 January 2009)
18. Higgins J, Green S (eds). Cochrane Handbook for Systematic Reviews of Interventions, Version 5.0.1 [updated September 2008]. www.cochrane-handbook.org/ (last accessed 19 November 2008)
19. NHS Centre for Reviews and Dissemination. Undertaking systematic reviews of research on effectiveness: CRD's guidance for those carrying out or commissioning reviews. CRD Report 4. York: University of York, 2001.
20. Khan KS, Kunz R, Kleijnen J (eds). Systematic reviews to support evidence-based medicine: how to review and apply findings of healthcare research. London: Royal Society of Medicine Press, 2003.
21. Government Social Research. Rapid Evidence Assessment Toolkit. www.gsr.gov.uk/professional_guidance/rea_toolkit/index.asp (last accessed 23 January 2009)
22. Underwood L, Thomas J, Williams T, Thieba A. The effectiveness of interventions for people with common mental health problems on employment outcomes: a systematic rapid evidence assessment. http://eppi.ioe.ac.uk/cms/LinkClick.aspx?fileticket=Wgr%2bPGyPMD0%3d&tabid=2315&mid=4279&language=en-US (last accessed 23 January 2009)
23. Campbell Collaboration Users Group. User involvement in the systematic review process. Campbell Collaboration Policy Brief. http://camp.ostfold.net/artman2/uploads/1/Minutes_March_2008_Oslo.pdf (last accessed 19 November 2008)
24. Evans D, Pearson A. Systematic reviews: gatekeepers of nursing knowledge. J Clin Nurs 2001; 10: 593-599.
25. Harden A, Thomas J. Methodological issues in combining diverse study types in systematic reviews. International Journal of Social Research Methodology 2005; 8: 257-271.
26. Phillips B, Ball C, Sackett D et al; Centre for Evidence-Based Medicine. Levels of Evidence. www.cebm.net/index.aspx?o=1025 (last accessed 19 November 2008)
27. LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F. Discrepancies between meta-analyses and subsequent large randomized, controlled trials. N Engl J Med 1997; 337: 536-542.
28. Egger M, Smith GD. Misleading meta-analysis. BMJ 1995; 310: 752-754.
29. Evans D. Hierarchy of evidence: a framework for ranking evidence evaluating healthcare interventions. J Clin Nurs 2003; 12: 77-84.
30. Crombie IK. Pocket Guide to Critical Appraisal. London: BMJ Publishing Group, 1996.
31. Moher D, Cook DJ, Eastwood S et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999; 354: 1896-1900.

    What is...? series

First edition published 2001. Authors: Huw TO Davies and Iain K Crombie.

This publication, along with the others in the series, is available on the internet at www.whatisseries.co.uk

The data, opinions and statements appearing in the article(s) herein are those of the contributor(s) concerned. Accordingly, the sponsor and publisher, and their respective employees, officers and agents, accept no liability for the consequences of any such inaccurate or misleading data, opinion or statement.
