1_4 Quality of Care


1.4: Quality of Care: Measurement and Improvement

Quality measurement and improvement initiatives inform changes to the health care system at the practice, hospital, and national levels. This serves to optimize health care delivery and thus provide the best possible care for patients.

    Why Are There Concerns About Quality?

The rapid growth of health care spending in the United States has focused increased attention on the quality of care that results from this large commitment of resources. Unfortunately, when the quality of health care in the United States is measured, significant deficiencies are found. Moreover, there is a lack of correlation between higher expenditures and higher quality of care.1

Poor quality results from a variety of deficiencies in any one of the properties of high-quality health care, including: unsafe practices, use of ineffective therapies, application of the wrong therapy to the wrong patient, delayed delivery of care, use of resource-intensive care for marginal benefit, and differential health care delivery strictly based on age, gender, race, or ethnicity. Deficits in the quality of health care are also framed as deriving from three types of shortcomings, each of which may constitute a form of inefficiency.2

Overuse occurs when a service is provided that may not be necessary or may expose the patient to greater potential harm than benefit (i.e., when it is not warranted on medical grounds).

Underuse occurs when a service with a favorable benefit-risk ratio is not provided.

Misuse includes incorrect diagnoses as well as medical errors and other sources of avoidable complications.

One of the most compelling arguments implicating the efficiency of health care in the United States derives from the marked geographic variation in per capita health care spending.

    Introduction

Clinicians may perceive the topic of measuring and improving quality of care as one that is largely under the purview of administrators and health policy makers. Instead, clinicians should be highly motivated to understand how to measure and improve quality of care for a variety of reasons, including:

Quality of care is evidence-based medicine. Quality measurement and improvement initiatives help clinicians to stay current with the best available evidence for therapeutics and care delivery, thereby supporting the practice of evidence-based medicine.

Quality of care is central to lifelong learning, certification, and licensure. The maintenance of certifications, as well as state licensure activities, is increasingly centered on quality measurement and improvement. This may include the demonstration of practice improvement.

Quality of care is at the center of health care reform. Consumer groups, hospitals, health care systems, payers, states, the federal government, and other stakeholders are heavily focused on quality of care. There is a particular focus on unexplained variation in care delivery. This variation is viewed as a marker of variation in quality, as a major contributor to health care expenditures, and as a target for health care reform.

Quality of care is increasingly about accountability. Public reporting and performance-based reimbursement are increasingly based on quality measures for both processes of care and outcomes of care.

Quality of care drives health care system improvement. Quality measurement and improvement initiatives
Chapter 1: General Principles
1.4: Quality of Care: Measurement and Improvement

Larry Allen, MD, FACC
Consulting Fees/Honoraria: Ortho-McNeil Janssen Scientific Affairs, Robert Wood Johnson Foundation, Amgen; Research Grants: American Heart Association

John S. Rumsfeld, MD, PhD, FACC
Consulting Fees/Honoraria: United Healthcare

Learner Objectives

Upon completion of this module, the reader will be able to:

1. Define quality of care and describe the major domains of high-quality health care.
2. Explain the features of quality initiatives that are relevant to practicing clinicians, including delivery of evidence-based medicine, public reporting and reimbursement based on quality metrics, and maintenance of certification and licensure.
3. Compare and contrast the tools of quality, including clinical data standards, clinical practice guidelines, quality metrics, performance measures, and appropriate use criteria (AUC).
4. Describe the health care system features that are necessary to achieve high-quality care, including measurement, feedback, system changes, engaged clinicians, and administrative support.


Process refers to the way in which care is delivered. The ideal process is to do the right thing for the right patient at the right time. Processes refer to the actions performed in the delivery of patient care, including the timing and technical competency of their delivery. Process-of-care measures often focus on patient selection and administration of therapies (e.g., prescription of aspirin for patients with acute myocardial infarction).

Outcomes refer to the results of care. These are measures of the end results of health care delivery. From the patient, clinician, and societal perspectives, the primary outcomes concepts are reflected in just two questions. The first question is related to mortality versus survival: Did the care/therapy delivered help patients live longer? The second question is related to morbidity versus quality of life: Did the care/therapy improve patient health status and/or make patients feel better? For a variety of reasons, outcomes measures have focused largely on survival, which is objective and easy to obtain, or on surrogate measures (e.g., blood pressure). However, there is a growing recognition of the importance of patient-centered outcomes, including patient health status or quality-of-life measurements such as angina burden (e.g., Seattle Angina Questionnaire).

The Donabedian model proposes that each component has a direct influence on the next. In other words, the structural attributes of the system in which care occurs (i.e., resources and administration) dictate processes of care (i.e., delivery of therapeutics), which in turn affect the outcomes (i.e., goal achievement). Importantly, the patient is at the center, with the ultimate goal of improving outcomes that are important to patients and their families.

It is also important to note that patients have different demographic and clinical profiles (e.g., comorbidities, disease severity). Thus, clinicians and hospitals care for different case mixes of patients. As such, it is generally true that valid measures of patient outcomes, especially for comparison among hospitals or other groups, must be risk-adjusted (i.e., case-mix adjusted).
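Risk adjustment is often implemented as an observed-to-expected (O/E) comparison. A minimal sketch, with all numbers hypothetical; actual programs such as Hospital Compare use more elaborate hierarchical regression models:

```python
# Illustrative O/E risk standardization: a hospital's observed event
# count is compared with the count expected from its case mix, then
# scaled by the overall population rate. Numbers are hypothetical.

def risk_standardized_rate(observed, expected_risks, overall_rate):
    """(observed events / expected events) x overall population rate."""
    return (sum(observed) / sum(expected_risks)) * overall_rate

# A hospital with a sick case mix: 2 deaths observed, 2.0 expected.
rate = risk_standardized_rate(
    observed=[1, 0, 1, 0, 0],                   # 0/1 outcome per patient
    expected_risks=[0.5, 0.3, 0.6, 0.2, 0.4],   # model-predicted risks
    overall_rate=0.12,
)
# O/E = 2 / 2.0 = 1.0, so the standardized rate equals the overall rate
```

Despite a 40% crude mortality (2 of 5 patients), this hospital performs exactly as expected for its case mix, which is the point of the adjustment.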

What About Cost?

There is an increasing interest in assessing quality in relationship to resource use. Multiple studies have shown that higher costs of care and higher resource utilization do not translate into higher quality of care.2 Therefore, many current quality assessment and improvement efforts are focused on efficiency of care (i.e., cost per outcome). A similar concept is that of value, which is the measurement of patient health outcomes, including the patient experience with care, achieved per dollar spent.9 It is the ratio that is critical. Costly interventions are not necessarily of low value if they have significant benefit. Conversely, cheap interventions are not necessarily of high value if they have minimal or no benefit.
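Value, as defined here, is a ratio of outcomes achieved to dollars spent. A toy calculation (hypothetical units and numbers) shows why cost alone does not determine value:

```python
# Hypothetical sketch of value as outcome gained per dollar spent.
# Units here are quality-adjusted life-years (QALYs) per dollar, chosen
# for illustration only.

def value(outcome_gain, cost_dollars):
    """Health outcome gained per dollar spent."""
    return outcome_gain / cost_dollars

costly_effective = value(outcome_gain=2.0, cost_dollars=50_000)   # 4e-05
cheap_marginal   = value(outcome_gain=0.001, cost_dollars=100)    # 1e-05
# The expensive intervention delivers 4x the value per dollar here.
```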

A Systems Problem, A Systems Solution

For most clinicians, day-to-day interest in quality would seem to focus on individual decisions as they relate to the delivery of cardiovascular care to individual patients. However, quality improvement cannot rest upon individual clinicians being asked to "do more" or "do better."10 Instead, quality should be consid-

This spending variation shows no obvious correlation to measures of health care quality or patient outcomes. The current substantial growth in the performance of cardiovascular testing and procedures has been characterized by increasing regional differences, as documented among Medicare beneficiaries in the Dartmouth Atlas of Cardiovascular Health Care.3 Yet, those regional differences in use do not appear to translate to significant differences in the performance of well-accepted standards of care or the health of those communities.4 Furthermore, the Institute of Medicine (IOM) and others have issued several reports documenting the extent of medical errors and their consequences.5,6 Clinicians must recognize that their actions, both in terms of errors of omission (i.e., not doing things they should) and errors of commission (i.e., doing things they should not), are under increasing scrutiny.

What Is High Quality Health Care?

The goal of health care is to help people live longer and better lives. Therefore, the extent to which health care delivery accomplishes this overall goal represents the quality of that care. The IOM report, Crossing the Quality Chasm: A New Health System for the 21st Century, defines quality as "the degree to which health care systems, services, and supplies for individuals and populations increase the likelihood of desired health outcomes in a manner consistent with current professional knowledge."7 The IOM further defined six domains of the highest-quality health care; health care should be:7

Safe: Avoiding harm to patients from the care that is intended to help them

Effective: Providing services based on scientific knowledge to all who could benefit and refraining from providing services to those not likely to benefit (avoiding underuse and misuse, respectively)

Patient-Centered: Providing care that is respectful of and responsive to individual patient preferences, needs, and values, and ensuring that patient values guide all clinical decisions

Timely: Reducing waits and sometimes harmful delays for both those who receive care and those who give care

Efficient: Avoiding waste, including the waste of equipment, supplies, ideas, and energy

Equitable: Providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status

How Should We Assess Quality?

The Donabedian model is frequently used to conceptualize quality assessment. It focuses on three domains: structure, process, and outcomes.8

Structure refers to the resources available to provide care. This typically includes such domains as personnel, equipment, facilities, laboratory systems, training, certification, and protocols.


Evidence-based medicine involves two fundamental principles.11 First, a hierarchy of evidence exists from which to guide clinical decision making. While individual clinical observations can generate important hypotheses, unsystematic clinical observations are limited by sample size and deficiencies in the ability to make accurate causal inferences. Thus, only systematic approaches to data collection and analysis are generally considered as evidence to guide clinical decisions. These systematic approaches, listed in increasing order of strength of evidence for informing clinical decision making, include: physiological experiments, case series, cross-sectional studies, case-control studies, retrospective observational cohorts, prospective observational cohorts, and randomized clinical trials. Stronger study designs minimize bias and improve power, leading to improved evidence to support clinical decision making.

It is important to note that this evidence hierarchy is not absolute. For example, randomized clinical trials can suffer from studying only highly selected patients, and thus may have limited generalizability. Similarly, observational studies must be cautious of unmeasured factors that can confound the interpretation of attribution, yet may give a broader assessment of care and outcomes in routine clinical practice.

Quality should be considered largely on a system level. As such, quality improvement is achieved through a systems approach that provides a supportive environment for the delivery of health care. Continuous quality improvement is an organized, scientific process for evaluating, planning, improving, and controlling quality. The following sections describe the pieces of continuous quality improvement and how they fit together to promote optimal care delivery at both the national and local levels.

The Tools of Quality Assessment

The success of achieving ideal quality in health care delivery requires that a quality infrastructure be in place. This quality infrastructure consists of clinical evidence, standardized definitions, clinical guidelines, performance measures and other quality metrics, and AUC (Table 1). The goal of these tools is to promote the optimal use of evidence-based medicine in care delivery, thereby maximizing efficiency by promoting diagnostic and therapeutic strategies with the highest value to patients.

The Evidence

The determination of care quality is grounded in clinical evidence.

Table 1: The Toolkit of Quality Improvement

Evidence: Data on associations between actions and outcomes; derived from a hierarchy of scientific research (unsystematic clinical observation, physiological experiments, expert opinion, case series, cross-sectional studies, case-control studies, retrospective observational cohorts, prospective observational cohorts, randomized controlled trials).

Data Standards: Agreed-upon definitions, nomenclature, and data elements; facilitate accurate communication and fair comparison.

Clinical Practice Guidelines: Detailed summary of the body of evidence-based medicine for a given disease process or clinical content area; includes specific recommendations for standards of care, graded on level (I, IIa, IIb, III) and type of evidence (A, B, C).

Process Performance Measures: Discrete processes of care that imply that clinicians are in error if they do not care for patients according to these clinical standards; must also allow for practical identification of those patients for whom a specific action should be taken (a clear denominator), easy determination of whether or not the measure has been performed (a clear numerator), and opportunities for timely feedback.

Appropriate Use Criteria: Identify common, prototypical patient subgroups for which expert clinicians assess the benefits and risks of a test or procedure on patient outcomes (score 1-9); primary goal is to reduce overuse, thereby improving safety and efficiency.

Outcomes Measures: Measures of health that are important to patients and are thought to be affected by processes of care; generally require risk standardization to account for case mix.


A synthesis of all available evidence, such as a systematic review and/or an evidence-based clinical practice guideline, may enhance the assessment of the benefits and risks of a given therapy. A systematic review will provide guidance for care decisions above and beyond any single study.

The second fundamental principle is that evidence alone is never sufficient to make a clinical decision. Decision makers must always integrate and trade off the benefits, risks, inconveniences, and costs associated with alternative management strategies. This should be done within the context of patients' goals, values, and preferences.

Data Standards

Standardized sets of definitions, nomenclature, and data elements facilitate accurate communication and fair comparison. They help avoid the "Tower of Babel" syndrome: the inability to accurately compare clinical trials and other outcomes assessments due to differing definitions of clinical status and adverse outcomes. The American College of Cardiology (ACC), in association with the American Heart Association (AHA), has implemented Clinical Data Standards, the lexicon needed to achieve commonality and consistency in definitions in many areas of cardiovascular disease.12 Standardized definitions allow accurate comparisons between multiple relevant clinical trials as well as clinical outcomes collected through clinical registry programs.13

Clinical Practice Guidelines

The creation of clinical guidelines is intended to summarize the body of evidence-based medicine for a given disease process or clinical content area. Preferably, these guidelines are based upon multiple large, randomized controlled trials. When substantial randomized clinical trial data are lacking, smaller clinical trials, carefully performed observational analyses, or even expert consensus opinion is utilized as the weight of evidence for a particular clinical guideline. Over the past 25 years, the ACC and the AHA have published multiple cardiovascular clinical guidelines covering many relevant areas of cardiology, with continued updating as clinical advances dictate. These include

Figure 1: Levels of Evidence for Clinical Practice Guidelines. The figure is a grid relating the size of the treatment effect (Class I: benefit greatly exceeds risk, procedure/treatment should be performed; Class IIa: benefit clearly exceeds risk, it is reasonable to perform; Class IIb: benefit-to-risk ratio uncertain, may be considered; Class III: no benefit, or harm) to the estimate of certainty (precision) of that effect (Level A: multiple populations evaluated, data from multiple randomized trials or meta-analyses; Level B: limited populations, data from a single randomized trial or nonrandomized studies; Level C: very limited populations, only consensus opinion of experts, case studies, or standard of care).

Reproduced with permission from The Evidence-Based Medicine Working Group. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 2nd ed. Chicago: American Medical Association Press.


These qualities include: 1) face validity in routine practice; 2) practical identification of those patients for whom a specific action should be taken (a clear denominator); 3) easy determination of whether or not the measure has been performed (a clear numerator); 4) adherence to the measure results in meaningful improvements in clinically meaningful outcomes; and 5) opportunities for timely feedback to clinicians and institutions to promote continuous quality improvement.19

Process Performance Measures

Process performance measures are distilled from clinical guideline therapeutic recommendations, generally capturing those Class I or Class III, Level of Evidence A recommendations for which the evidence is particularly strong. Process performance measures describe discrete processes of care that are explicit diagnostic or therapeutic actions to be performed or not performed (e.g., the provision of aspirin for acute myocardial infarction). The implication is that clinicians are in error if they do not follow these care processes or do not document specific reasons for disregarding these recommendations.
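A process measure boils down to a numerator (eligible patients who received the action) over a denominator (patients eligible for it). A minimal sketch of the aspirin example; field names and records are illustrative, not drawn from any real registry specification:

```python
# Hypothetical sketch of process-measure adherence: aspirin for acute
# myocardial infarction. Documented contraindications remove patients
# from the denominator rather than counting as errors, mirroring the
# documented-exception rule described in the text.

def adherence_rate(patients):
    """Return (numerator, denominator) for the aspirin process measure."""
    eligible = [p for p in patients
                if p["diagnosis"] == "acute_mi" and not p["contraindicated"]]
    treated = [p for p in eligible if p["aspirin_prescribed"]]
    return len(treated), len(eligible)

records = [
    {"diagnosis": "acute_mi", "contraindicated": False, "aspirin_prescribed": True},
    {"diagnosis": "acute_mi", "contraindicated": True,  "aspirin_prescribed": False},
    {"diagnosis": "acute_mi", "contraindicated": False, "aspirin_prescribed": False},
    {"diagnosis": "heart_failure", "contraindicated": False, "aspirin_prescribed": False},
]
num, den = adherence_rate(records)  # num=1, den=2: 50% adherence
```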

Outcome Performance Measures

Outcome measures are increasingly being used as performance measures. Adding outcomes measures to process measures has important benefits. For example, process measures, even when reported together, capture a small fraction of the care delivered; in contrast, outcomes measures, such as mortality or health-related quality of life, should integrate the totality of care that a patient receives.20 The government website, Hospital Compare, reports 30-day risk-standardized mortality and rehospitalization rates for fee-for-service Medicare beneficiaries after hospitalization for heart failure, acute myocardial infarction, or pneumonia.21 These statistics are used for reimbursement purposes.

Critiques of outcomes measures include, for example, that the methods for risk standardization are not sufficiently fair to account for important differences in case mix. Also, outcomes measures do not tell clinicians and institutions specifically what they are doing correctly or incorrectly. Therefore, risk-standardized outcomes measures should be combined with detailed measures of structure and process performance, thereby providing clinicians and institutions with audit and feedback on their overall performance alongside data highlighting those areas in particular need of quality improvement activities.

Composite Measures

Composite measures have been constructed and deployed to address the proliferation of performance measures and the need to ensure that these measures comprehensively represent health care quality.22 Composite measures utilize data reduction in order to simplify presentation and interpretation. They also promote scope expansion to better integrate multiple metrics into a more comprehensive assessment of provider performance. However, these advantages come at a cost. Standard psychometric properties of composites can be more complex to determine, methods for scoring (e.g., all-or-none vs. any vs. weighting) can lead to different conclusions, and problems with missing data

acute myocardial infarction, unstable angina, chronic stable angina, coronary revascularization, heart failure, supraventricular arrhythmias, atrial fibrillation, and implantation of pacemakers and antiarrhythmia devices.14 These practice guidelines are intended to assist health care providers in clinical decision making by describing generally acceptable approaches for the diagnosis, management, or prevention of disease states.15-17

Figure 1 provides a framework for evaluating various procedures and treatments. The framework includes both levels of evidence and types of evidence.16

Levels of Evidence: Recommendations are given one of the following indication classifications based on the evaluation of evidence by a panel of guidelines experts.

Class I: the procedure or treatment should be performed or administered; the benefit-to-risk ratio is favorable.

Class IIa: it is reasonable to perform the procedure or treatment; the benefit-to-risk ratio is probably favorable.

Class IIb: the procedure or treatment may be considered; the benefit-to-risk ratio is unknown.

Class III: the procedure or treatment should not be performed; there is no benefit, or risk outweighs benefit.

Types of Evidence: The weight of evidence to support a given recommendation is listed as A, B, or C. The highest level of evidence, A, implies data derived from multiple randomized trials, while the lowest level of evidence, C, reflects the consensus opinion of experts, case studies, or standard of care.

Although we would like guidelines to be based on the highest level of evidence in the hierarchy, multiple factors (e.g., the difficulty of conducting large randomized trials) limit the extent to which the wide array of clinical decisions can be strongly recommended. Of the 16 ACC/AHA clinical practice guidelines that reported levels of evidence as of September 2008, only 11% of 2,711 recommendations were classified as Level of Evidence A, whereas 46% were Level C.17

Performance Measures

Performance, or care accountability, measures are those process, structure, efficiency, and outcome measures that have been developed using ACC/AHA methodology. This includes the process of public comment and peer review and the specific designation as a performance measure by the ACC/AHA Task Force on Performance Measures.18,19 This may occur in collaboration with other national practice organizations and federal agencies, such as the National Quality Forum (NQF), the Centers for Medicare and Medicaid Services (CMS), or the Joint Commission on Accreditation of Health Care Organizations.

Performance measures must have a number of qualities that allow them to be used for both continuous quality improvement as well as accountability and reimbursement, including: 1) face valid-


for various common clinical scenarios. Additionally, a complete evaluation of appropriateness might also include a comparison of the relative marginal costs and benefits of each imaging modality. Regrettably, there is currently insufficient evidence to make such evaluations across a broad spectrum of potential clinical indications for diagnostic and procedural decisions.

Quality Improvement

The tools of quality assessment fit into a comprehensive cycle of activities that work to define, measure, and ultimately promote quality health care (Figure 2).24 Discoveries from basic science are translated into clinical diagnostics and therapies. These are then tested in clinical trials to determine efficacy and safety. This evidence is then synthesized into clinical practice guidelines, which are made available for consumption.

can be amplified.

Quality Metrics

Quality metrics are those measures that have been developed to support self-assessment and quality improvement at the provider, hospital, and/or health care system level.18 These metrics are often a major focus of clinical registry and quality improvement programs.13 These metrics may not have been formally developed using the ACC/AHA performance measure methodology. However, they may be identified as "preliminary," "candidate," "test," "evolving," or "quality" measures, which indicates that they may be worthy of consideration for further development into performance measures. Quality metrics may not meet all specifications of formal performance measures used in public reporting and accountability, but can still represent valuable tools to aid clinicians and hospitals in improving quality of care and enhancing patient outcomes.
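The Composite Measures discussion notes that the scoring method (all-or-none vs. any vs. weighting) can lead to different conclusions. A sketch with one patient's hypothetical results under each method:

```python
# Hypothetical results for one patient on four process measures:
# 1 = measure met, 0 = not met. Weights are invented for illustration.
results = [1, 1, 1, 0]

# All-or-none scoring: credit only if every measure is met.
all_or_none = 1.0 if all(results) else 0.0   # 0.0: one miss forfeits credit

# "Any" scoring: credit if at least one measure is met.
any_met = 1.0 if any(results) else 0.0       # 1.0: any success earns credit

# Weighted scoring: here the missed measure carries most of the weight.
weights = [0.1, 0.1, 0.1, 0.7]
weighted = sum(w * r for w, r in zip(weights, results))  # about 0.3
```

Three defensible composite scores (0.0, 1.0, and roughly 0.3) emerge from identical care, which is exactly the sensitivity the text warns about.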

Appropriate Use Criteria

AUC are intended to be a supplement to clinical practice guidelines and performance measures, and differ from them in important ways. AUC identify common, prototypical patient subgroups for which expert clinicians, using available evidence from the medical literature and clinical practice, assess the benefits and risks of a test or procedure on patient outcomes. AUC are scored as follows: a score of 7-9 means appropriate, 4-6 means uncertain, and 1-3 means inappropriate.23 Ideally, AUC define what to do, when to do it, and how often to perform a certain modality or procedure, with consideration for local care environments and patient goals, preferences, and values. AUC should ideally be simple, reliable, valid, and transparent. AUC offer a framework from which to examine the rationale of diagnostic and therapeutic actions to support a more efficient use of medical resources. The primary goals of AUC are to identify overuse and, in so doing, improve the safety and cost-effectiveness of care.
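The AUC scoring bands can be captured in a few lines; the cut-points below are exactly those given in the text (7-9 appropriate, 4-6 uncertain, 1-3 inappropriate):

```python
# Maps an AUC panel score (1-9) to its appropriateness category,
# per the bands described in the text.

def auc_category(score: int) -> str:
    if not 1 <= score <= 9:
        raise ValueError("AUC scores run from 1 to 9")
    if score >= 7:
        return "appropriate"
    if score >= 4:
        return "uncertain"
    return "inappropriate"

auc_category(8)  # 'appropriate'
auc_category(5)  # 'uncertain'
auc_category(2)  # 'inappropriate'
```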

The ACC, in partnership with relevant specialty and subspecialty societies, has been developing an increasing portfolio of AUC in a variety of diagnostic modalities (e.g., cardiac computed tomography, cardiac magnetic resonance imaging, cardiac radionuclide imaging, transthoracic and transesophageal echocardiography, stress echocardiography) as well as procedural modalities (e.g., coronary revascularization).

Ideally, such AUC would arise from high-quality research evaluating the benefits and risks of performing imaging studies

Figure 2: The Cycle of Quality

A cycle of specific efforts is needed to create systematic approaches to translating knowledge across the continuum from discovery science to public health intervention. The cycle begins with the discovery of fundamental biological, physical, and social constructs. Once a discovery is made, it undergoes a development cycle, including extensive preclinical applied research, before it can be developed as a treatment with plausible human benefit. Evidence is then gathered in human experiments, and assessments are made about the intervention's value; these evaluations continue after the treatment is clinically available. What is learned through the cycle is often fed back to refine the science of discovery. At the clinician's level, measurement and education are central to completing the cycle.

Reproduced with permission from Califf RM, Harrington RA, Madre LK, Peterson ED, Roth D, Schulman KA. Curbing the cardiovascular disease epidemic: aligning industry, government, payers, and academics. Health Aff (Millwood) 2007;26:62-74.

[Figure 2 diagram: a cycle running from discovery science through early translational steps, clinical trials, clinical practice guidelines, performance measures, and outcomes, closed by measurement and education. Numbered enablers around the cycle: (1) FDA Critical Path, (2) NIH Roadmap, (3) data standards, (4) network information, (5) empirical ethics, (6) priorities and processes, (7) inclusiveness, (8) use for feedback on priorities, (9) conflict-of-interest management, (10) evaluation of speed and fluency, (11) pay-for-performance, (12) transparency to consumers.]


these guidelines are condensed into performance measures, which are used for benchmarking, public reporting, and pay for performance. Outcomes measures provide assessment of how well this process is achieving its ultimate goals. Any of this information can be fed back into the cycle to guide and refocus quality efforts at all steps.

National Quality Improvement Registry Programs

The foundation of any quality improvement effort is measurement. Without systematic assessment and evaluation, it is difficult to know the quality of various care decisions. Participation in national clinical registries and quality improvement programs, such as the National Cardiovascular Data Registry (NCDR), offers a method for accurately assessing clinical outcomes and provides feedback on how individual hospital and clinician practices compare with their peers. This is done by benchmarking performance against aggregate national or similar-hospital outcomes following adjustment for case mix (Figure 3).13,25 Participation in these feedback systems is known to be a critical element in quality improvement. The feedback of process and outcomes data to clinicians pinpoints opportunities to improve clinical performance and quality. Registry data can also be used, for example, to help state regulatory agencies oversee the quality of demonstration projects, such as percutaneous coronary intervention (PCI) without onsite surgical backup. They also offer opportunities for post-market device surveillance, particularly for low-frequency adverse events.
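Case-mix-adjusted benchmarking of this kind is commonly summarized as an observed-to-expected (O/E) event ratio, with each hospital then ranked against the distribution of its peers. A simplified sketch (the event counts, peer ratios, and helper names are hypothetical; registries such as the NCDR use validated multivariable risk models to produce the expected counts):

```python
from bisect import bisect_left


def oe_ratio(observed_events: int, expected_events: float) -> float:
    """Observed-to-expected (O/E) event ratio after risk adjustment;
    a ratio above 1.0 suggests worse-than-expected outcomes."""
    return observed_events / expected_events


def percentile_rank(value: float, peer_values: list) -> float:
    """Fraction of peer hospitals with a ratio below this hospital's
    (for mortality, a lower percentile is 'leading')."""
    ranked = sorted(peer_values)
    return bisect_left(ranked, value) / len(ranked)


# Hypothetical hospital: 18 observed deaths where the risk model expected 20.3.
ratio = oe_ratio(18, 20.3)
peers = [0.72, 0.85, 0.91, 1.02, 1.10, 1.24, 1.38]  # illustrative peer O/E ratios
print(round(ratio, 2), round(percentile_rank(ratio, peers), 2))
# 0.89 0.29
```

An O/E ratio below 1.0 with a low percentile places the hospital toward the "leading" end of the benchmark scale shown in Figure 3.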

National Quality Initiatives

National quality initiatives can also be effective. One illustrative example is the Door-to-Balloon Alliance. PCIs for acute myocardial infarction are grounded in the principle of rapid reperfusion. There is strong evidence that a shorter time from patient presentation (i.e., emergency room door) to coronary artery opening via angioplasty (i.e., balloon inflation in the catheterization laboratory) is associated with better patient outcomes, particularly when these door-to-balloon (D2B) times are less than 90 minutes.

However, despite D2B quality measures having been recommended for years, as of 2006 only 40% of hospitals were able to consistently perform primary PCI in less than 90 minutes. A team of cardiovascular outcomes researchers evaluated hospitals with best practices and identified the key processes of care that were associated with shorter D2B times. Six of these strategies became the core strategies of the D2B Alliance. These are: having emergency medicine physicians activate the catheterization

Figure 3: Example of a Quality Metrics Report From the NCDR CathPCI Executive Summary

Reproduced with permission from Rumsfeld JS, Dehmer GJ, Brindis RG. The National Cardiovascular Data Registry: Its Role in Benchmarking and Improving Quality. US Cardiology 2009;Touch Briefings:11-15.

[Figure 3 panels, each benchmarked on a lagging-to-leading (worse-to-better) scale against peer hospitals: Risk-Adjusted Mortality — my hospital: 1.02% (rank: 118 of 366; rank percentile: 68); your hospital's PCI mortality rate adjusted using the ACC-NCDR risk-adjustment model. Proportion of STEMI Patients with DBT ≤90 Minutes — my hospital: 65% (rank: 87 of 389; rank percentile: 78); the goal is a DBT of ≤90 minutes for all non-transferred patients with STEMI having primary PCI. Incidence of Vascular Complications — my hospital: 2.7% (rank: 286 of 401; rank percentile: 68); includes procedures with at least one vascular complication.]


Figure 4: The Central Role of Data and Benchmarking in Quality Improvement

EMR = electronic medical record.

Adapted with permission from Rumsfeld JS, Dehmer GJ, Brindis RG. The National Cardiovascular Data Registry: Its Role in Benchmarking and Improving Quality. US Cardiology 2009;6:11-5.

[Figure 4 diagram: data and benchmarking at the center, supported by clinical leaders and administrative support, driving system changes such as the EMR, standing orders, critical pathways, and integrated care.]

laboratory, having a single call to a central page operator activate the catheterization laboratory, having the emergency department activate the catheterization laboratory while the patient is en route to the hospital, expecting staff to arrive in the catheterization laboratory within 20 minutes after being paged, having an attending cardiologist always on site, and having staff in the emergency department and the catheterization laboratory use real-time data feedback.26

The ACC thereby supported the national D2B Alliance to promote participation by hospitals, physician champions, and strategic partners committed to addressing the D2B challenge. Participating hospitals committed to implementing as many of the six strategies as possible. The goal of the D2B Alliance was to achieve D2B times of less than 90 minutes.


programs (such as national clinical registry programs) is also important, often providing a solid infrastructure for quality measurement and improvement.

Local activities for quality improvement should be iterative and involve breaking down quality efforts into small pieces. Multiple small quality cycles should be occurring in various domains of local health care delivery, involving multidisciplinary members of the team. These quality initiatives should be supported by administration and aligned with external entities (i.e., regulatory agencies and national quality improvement initiatives). For example, a hospital with a high risk-adjusted mortality rate among its patients with acute myocardial infarction cannot set on a single course of action to fix this problem. Instead, it is necessary to have an integrated approach involving multiple smaller initiatives within the continuum of care. This could include community education, emergency medical services, the emergency department, the interventional catheterization laboratory, in-hospital care, transitional services, and ambulatory follow-up. Measurement within each level should target areas for improvement.

At the individual and local level, the Institute for Healthcare Improvement (IHI) promotes the Model for Improvement, developed by Associates in Process Improvement.28 The model organizes quality improvement into actionable parts.

1. Set Aims: These should be small goals that are targeted to a defined group of patients. They should be time-specific and measurable.

2. Establish Measures: Pick a quantitative measure that can determine if a specific change leads to an improvement in quality.

3. Select Changes: All improvement requires making changes, but not all changes result in improvement. Clinicians and organizations must identify the changes that they believe are most likely to result in improvement.

4. Test Changes: Once the first three fundamental questions have been answered, the Plan-Do-Study-Act (PDSA) cycle should be used to test and implement changes in real work settings. The PDSA cycle uses action-oriented learning to determine if the change is an improvement. This is done by planning it, trying it, observing the results, and acting on what is learned (Figure 5).
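One PDSA iteration pairs the chosen measure (step 2) with a candidate change (step 3) and compares before and after. A minimal sketch (the measure, the change, the sample D2B times, and the assumed 25-minute effect are all hypothetical):

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class PDSAResult:
    change: str
    baseline: float
    after: float

    @property
    def improved(self) -> bool:
        return self.after > self.baseline


def pdsa_cycle(change: str,
               measure: Callable[[], float],
               apply_change: Callable[[], None]) -> PDSAResult:
    """One Plan-Do-Study-Act iteration against a quantitative measure."""
    baseline = measure()   # Plan: record the baseline for the stated aim
    apply_change()         # Do: try the change on a small scale
    after = measure()      # Study: observe the result
    return PDSAResult(change, baseline, after)  # Act: adopt, adapt, or abandon


# Hypothetical aim: raise the proportion of door-to-balloon (D2B) times <90 minutes.
times = [112, 95, 88, 104, 79]  # illustrative D2B times in minutes


def proportion_under_90() -> float:
    return sum(t < 90 for t in times) / len(times)


def prehospital_activation() -> None:
    # Assumed effect: activating the cath lab en route shaves ~25 minutes.
    times[:] = [t - 25 for t in times]


result = pdsa_cycle("pre-hospital cath lab activation",
                    proportion_under_90, prehospital_activation)
print(result.baseline, result.after, result.improved)
# 0.4 1.0 True
```

The Act step is the judgment call the code cannot make: adopt the change, adapt it, or abandon it and plan the next cycle.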

Including the right people on a process improvement team is critical to a successful improvement effort. Teams vary in size and composition, but typically involve multidisciplinary representation.

Conclusion

For clinicians who want to deliver the best possible care, be graded and reimbursed appropriately, and maintain certification and licensure, understanding quality and being engaged in quality measurement and improvement must become central to clinical practice. Feedback of process and outcomes data is a critical element of quality improvement. If you do not measure it, you will not improve it.

Key Points

Health care quality is highly relevant to patients, clinicians, and society.

The highest quality health care is that which is effective, safe, timely, efficient, equitable, and patient centered.

The major domains of quality assessment are structure, process, and outcomes (Donabedian's triad).

Key tools for defining and measuring quality of care include: evidence, data standards, clinical practice guidelines, quality metrics, performance measures, and appropriateness criteria.

Quality improvement requires accurate data collection, risk adjustment and benchmarking to make performance measurement meaningful, persistent and iterative cycles of quality improvement, clinician champions, and a supportive organizational context.

National clinical registry programs such as the NCDR utilize data standards, standardized tools for data collection, risk adjustment, and benchmarking.

References

1. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med 2003;138:273-87.

2. Orszag PR. The Overuse, Underuse, and Misuse of Health Care: Testimony before the Committee on Finance, United States Senate, July 17, 2008. Congressional Budget Office, Washington, DC; 2008.

3. The Dartmouth Institute for Health Policy and Clinical Practice. The Dartmouth Atlas of Health Care. 2011. Available at: http://www.dartmouthatlas.org. Accessed 11/30/2011.

4. Sutherland JM, Fisher ES, Skinner JS. Getting past denial--the high cost of health care in the United States. N Engl J Med 2009;361:1227-30.

5. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.

6. Leape LL. Reporting of adverse events. N Engl J Med 2002;347:1633-8.

7. Institute of Medicine Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.

8. Donabedian A. Explorations in Quality Assessment and Monitoring, Volume 1. The Definition of Quality and Approaches to Its Assessment. Ann Arbor, MI: Health Administration Press; 1980.

9. Porter ME. What is value in health care? N Engl J Med 2010;363:2477-81.

10. Majumdar SR, McAlister FA, Furberg CD. From knowledge to practice in chronic cardiovascular disease: a long and winding road. J Am Coll Cardiol 2004;43:1738-42.

11. Guyatt G, Rennie D, Meade MO, Cook DJ. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 2nd ed. New York: McGraw-Hill Professional; 2008.

12. Cannon CP, Battler A, Brindis RG, et al. American College of Cardiology key data elements and definitions for measuring the clinical management and outcomes of patients with acute coronary syndromes. A report of the American College of Cardiology Task Force on Clinical Data Standards (Acute Coronary Syndromes Writing Committee). J Am Coll Cardiol 2001;38:2114-30.

13. Bufalino VJ, Masoudi FA, Stranne SK, et al. The American Heart Association's recommendations for expanding the applications of existing and future clinical registries: a policy statement from the American Heart Association. Circulation 2011;123:2167-79.

14. American College of Cardiology. CardioSource: Guidelines and Quality Standards. 2011. Available at: http://www.cardiosource.org/science-and-quality/practice-guidelines-and-quality-standards.aspx. Accessed 11/30/2011.

15. Gibbons RJ, Smith S, Antman E. American College of Cardiology/American Heart Association clinical practice guidelines: Part I: where do they come from? Circulation 2003;107:2979-86.

16. Gibbons RJ, Smith SC Jr, Antman E. American College of Cardiology/American Heart Association clinical practice guidelines: Part II: evolutionary changes in a continuous quality improvement project. Circulation 2003;107:3101-7.

17. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009;301:831-41.

18. Bonow RO, Masoudi FA, Rumsfeld JS, et al. ACC/AHA classification of care metrics: performance measures and quality metrics: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures. Circulation 2008;118:2662-6.

19. Spertus JA, Eagle KA, Krumholz HM, Mitchell KR, Normand SL. American College of Cardiology and American Heart Association methodology for the selection and creation of performance measures for quantifying the quality of cardiovascular care. Circulation 2005;111:1703-12.

20. Krumholz HM, Normand SL, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff (Millwood) 2007;26:75-85.

21. US Department of Health and Human Services. Hospital Compare. 2011. Available at: www.hospitalcompare.hhs.gov. Accessed 11/30/2011.

22. Peterson ED, DeLong ER, Masoudi FA, et al. ACCF/AHA 2010 Position Statement on Composite Measures for Healthcare Performance Assessment: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Performance Measures (Writing Committee to Develop a Position Statement on Composite Measures). J Am Coll Cardiol 2010;55:1755-66.

23. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC 2009 Appropriateness Criteria for Coronary Revascularization: a report by the American College of Cardiology Foundation Appropriateness Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, and the American Society of Nuclear Cardiology; endorsed by the American Society of Echocardiography, the Heart Failure Society of America, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2009;53:530-53.

24. Califf RM, Harrington RA, Madre LK, Peterson ED, Roth D, Schulman KA. Curbing the cardiovascular disease epidemic: aligning industry, government, payers, and academics. Health Aff (Millwood) 2007;26:62-74.

25. Brindis RG, Dehmer GJ, Rumsfeld JS. The National Cardiovascular Data Registry: Its Role in Benchmarking and Improving Quality. US Cardiology 2009:11-15.

26. Bradley EH, Herrin J, Wang Y, et al. Strategies for reducing the door-to-balloon time in acute myocardial infarction. N Engl J Med 2006;355:2308-20.

27. Bradley EH, Nallamothu BK, Herrin J, et al. National efforts to improve door-to-balloon time: results from the Door-to-Balloon Alliance. J Am Coll Cardiol 2009;54:2423-9.

28. Langley GJ, Nolan KM, Norman CL, Provost LP, Nolan TW. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. New York: Jossey-Bass; 1996.