CLINICAL QUALITY ARTICLE

Clinical Indicators – What, Why and How
YH Lee, KH Tan

ABSTRACT
Healthcare organisations are required to provide evidence of improving performance by utilising data. These data can be used as comparative or benchmarking information relating to clinical care across institutions. The aims of clinical indicator development, monitoring and reporting are to increase healthcare providers' awareness of and involvement in evaluating care outcomes or care processes, and to identify quality and/or process gaps in the care delivery system. Indicators serve as 'pointers' to direct healthcare providers' attention and resources to target areas for improvement in the process of delivering health care.

Keywords: Clinical indicators, performance measurement, quality improvement

WHAT IS AN INDICATOR?
The Australian Council on Healthcare Standards (ACHS) defines a clinical indicator as an objective measure of the management process or outcome of care, in quantitative terms.1 It provides a measurable dimension of the quality or appropriateness of patient care. Clinical indicators can be used as comparative or benchmarking information relating to clinical care. Possible problems and/or opportunities for improvement are thereby flagged within the organisation. Examples of clinical indicators include inpatient mortality, perioperative mortality, unscheduled readmissions, unscheduled returns to the Operating Theatre, unscheduled returns to the Emergency Department, reattendance at the Emergency Department for asthma, etc. Such data can help to highlight problem areas in clinical performance, inform or drive quality improvement activities, prompt reflection on clinical practice, channel resources appropriately and identify important issues for further research. Valid and reliable data concerning desired and undesired results play an important role in a comprehensive monitoring and evaluation system.

WHY IS THERE A NEED FOR CLINICAL INDICATORS?
Health care is becoming more complex, with an increasing number of approaches to the delivery of care. At the same time, care must also be delivered in a context of cost constraints, increasing patient expectations, and a greater focus on accountability.2,3 The Harvard Medical Practice Study, which reviewed over 30,000 hospital records in New York state, found injuries from care itself ("adverse events") to occur in 3.7% of hospital admissions; over half of these were preventable and 13.6% led to death.4 There is increasing pressure from the public, regulators and professionals to redesign healthcare processes and systems to become much safer in future.5,6 Against this background there has been an explosion in methods aiming to use routine data to compare performance between healthcare providers. The data thus have a wide range of potential uses and are of interest to a wide range of stakeholders, such as researchers, practitioners, managers, regulators, patients and carers.7

USE OF CLINICAL INDICATORS IN SINGAPORE

Maryland Quality Indicator Project (MQIP)
The MQIP indicators provide a clinical, outcome-based approach to measuring and evaluating organisational performance. The MQIP is a comparative analysis research project initiated in Maryland, USA in 1985 by the Maryland Hospital Association (MHA). Singapore's National Medical Audit Programme sets as one of its key objectives the monitoring and assessment of the clinical performance of hospital institutions through clinical outcome indicators, so as to facilitate continuous quality improvement and benchmarking.
YH Lee
Manager, Medical Affairs
KK Women's and Children's Hospital

KH Tan
Director, Clinical Quality
Head, Perinatal Audit & Epidemiology
Senior Consultant
Department of Maternal Fetal Medicine
KK Women's and Children's Hospital

Correspondence to:
Ms Lee Yean Hoon
Medical Affairs
KK Women's and Children's Hospital
100 Bukit Timah Road
Singapore 229899
Email: [email protected]
Tel: 63942316
Fax: 62937933

Since 1998, all hospitals have been required to submit data on clinical performance indicators as part of the comprehensive quality improvement activities



required under the Private Hospitals and Medical Clinics Act (PHMCA). These indicators were locally developed based on the definitions and specifications of the MQIP indicators. On 1 April 2000, the Ministry of Health (MOH) required all acute care hospitals to participate officially in the MQIP. The aim was to enable hospitals to benchmark their performance against comparable and reputable hospitals from among the participating hospitals in the US, Europe, Japan and Taiwan. Today, more than 1,000 acute care hospital institutions worldwide participate in the MQIP. The MQIP is intended to provide its participants with opportunities to compare their indicator rates with peer group rates over time, and thereby help them gain a greater understanding of their level of performance. It provides a global overview of the quality of care provided by hospitals. Table 1 lists the selected clinical indicators submitted by KKH to the MQIP.

Table 1: List of indicators submitted to MQIP

Measure  Description
3.1      Total inpatient mortality
5.1      Total perioperative mortality
5.2      Perioperative mortality for patients with ASA P1
5.3      Perioperative mortality for patients with ASA P2
5.4      Perioperative mortality for patients with ASA P3
5.5      Perioperative mortality for patients with ASA P4
5.6      Perioperative mortality for patients with ASA P5
6.3      Total C-sections
7.1      Total unscheduled acute care readmissions within 15 days for the same or a related condition
8.2      Unscheduled admissions following ambulatory digestive, respiratory, and urinary system diagnostic endoscopies
8.3      Unscheduled admissions following all other ambulatory operative procedures
9.1      Unscheduled returns to intensive care units
10.1     Unscheduled returns to the operating room
A1.1     Unscheduled returns to the emergency department within 24 hours
A1.2     Unscheduled returns to the emergency department within 48 hours
A1.3     Unscheduled returns to the emergency department within 72 hours
A1.1a    Unscheduled returns to the emergency department within 24 hours resulting in an inpatient admission
A1.2a    Unscheduled returns to the emergency department within 48 hours resulting in an inpatient admission
A1.3a    Unscheduled returns to the emergency department within 72 hours resulting in an inpatient admission
A5.2a    Cancellations by the facility of scheduled ambulatory diagnostic digestive system endoscopies on the day of the procedure
A5.3a    Cancellations by the facility of scheduled other ambulatory procedures on the day of the procedure
13.1     Documented falls in acute care
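As a concrete illustration of how such indicators are computed and compared with peer group rates, the sketch below calculates a rate from numerator and denominator counts and flags a hospital whose rate lies outside simple binomial control limits around the peer-group rate. All figures are invented, and the three-standard-error rule is a common benchmarking convention, not the MQIP's actual comparison method.

```python
import math

def indicator_rate(numerator, denominator):
    """Indicator rate, e.g. unscheduled readmissions / total discharges."""
    if denominator <= 0:
        raise ValueError("denominator must be positive")
    return numerator / denominator

def outside_control_limits(events, n, peer_rate, z=3.0):
    """True if the observed rate lies more than z standard errors from the
    peer-group rate, using a simple binomial approximation."""
    se = math.sqrt(peer_rate * (1 - peer_rate) / n)
    return abs(events / n - peer_rate) > z * se

# Hypothetical example: 120 unscheduled readmissions out of 4,000 discharges,
# against an assumed peer-group rate of 2.1%.
rate = indicator_rate(120, 4000)                    # 3.0%
flagged = outside_control_limits(120, 4000, 0.021)  # more than 3 SE above peers
print(f"{rate:.1%}", flagged)  # 3.0% True
```

A flagged rate is a pointer to investigate, not proof of poor care: case mix, coding practice and chance all contribute to variation between institutions.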

Specialty Specific Clinical Indicators (SSCI)
In 2001, specialty specific clinical indicators were introduced to monitor the outcomes of specific clinical procedures or treatments instituted by hospitals. The Chapters of the Academy of Medicine, Singapore played a key role in selecting appropriate clinical indicators for implementation in the local hospitals from the list of indicators developed by the Australian Council on Healthcare Standards (ACHS). The Academy also advises on the indicator definitions, inclusion/exclusion criteria and important confounding variables, and provides input and recommendations based on the results of the indicators. The SSCI was introduced in three phases from July 2001. In early 2004, the MOH Clinical Quality Branch conducted a review of the usefulness of the SSCI. The results of the review were pending at the time of writing; it is hoped that they will further enhance its usefulness. Table 2 lists the indicators submitted by KKH at the time of writing. We expect the SSCI to evolve into a very useful aspect of our specialty care.

Table 2: List of indicators submitted to SSCI

Indicator Speciality     Description

Paediatric Medicine      Re-attendance at A&E for asthma
Paediatric Neurosurgery  Ventricular shunt infection
Intensive Care (Adult)   Unscheduled returns to ICU
Ambulatory Care          • Cancellation of scheduled ambulatory digestive/urinary tract endoscopic procedures by facility
                         • Cancellation of scheduled other ambulatory procedures by facility
Paediatric Surgery       • Appendicectomy with normal histology
                         • Appendicectomy with normal histology but other intra-abdominal pathology

Patient Safety Pilot Programme
In late 2003, MOH engaged Dr Vahe A. Kazandjian, President of The Center for Performance Science, Inc. (CPS), an outcomes research centre based in Maryland (which also oversees the IQIP), to develop a pilot programme on patient safety indicators. KKH is one of the six hospitals selected to participate in the patient safety pilot programme. The pilot programme aims to locally develop and define measurable patient safety indicators (with numerator and denominator data) and to test the comparability of the patient safety indicators across institutions. The development stage includes the completion of a survey tool known as the "ISMP (Institute for Safe Medication Practices) Medication Safety Self-Assessment Tool", which enables comparison across institutions in areas such as organisation culture/readiness in terms of safety, processes in medication use, use of technology, communication, etc. The results from the survey provide a quick overview of the current state of the institution and identify the gaps to target for safer medication-use practices. Through deliberations and consensus building among the six participating institutions, twenty patient safety indicators were selected for feasibility testing. The indicators primarily focus on medication safety. Table 3 lists the selected patient safety indicators.


Table 3: List of patient safety indicators (Patient Safety Pilot Programme)

Measure  Description
1.1a     Orders written for and administered to wrong patients
1.5      Wrong route orders intercepted by Pharmacy that resulted in call back to physician
1.7      Incomplete orders that required call back to physician
1.10     Medications ordered for patient with a documented allergy to those medications
2.2      Wrong medications dispensed resulting in harm
4.1      Medications administered to wrong patient
4.2      Wrong medication administered that did not result in harm
4.3      Wrong medication administered that did result in harm
4.5      Duplicate doses administered
4.5a     Duplicate doses administered that resulted in harm
4.6      Medications administered via wrong route
4.6a     Warfarin and i.v. heparin administered via wrong route
4.6b     Antibiotics administered via wrong route
4.8      Wrong injection site used on wrong patient
4.9      Wrong dosage forms administered
4.10     Medications incorrectly prepared by non-pharmacy personnel in unit
4.11     Medication administration devices malfunctioned
4.11a    Medication administration devices malfunctioned during medication administration
4.11b    Medication administration devices malfunctioned when being serviced
4.12     Medication administration devices incorrectly adjusted
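Each indicator above pairs a numerator (an event count) with a denominator (a measure of exposure), so that rates can be compared across institutions. A minimal sketch of such a representation follows; the class, the figures, and the choice of "per 1,000" scaling are illustrative assumptions, not the pilot programme's actual specification.

```python
from dataclasses import dataclass

@dataclass
class SafetyIndicator:
    code: str         # indicator number, e.g. "4.1"
    description: str
    events: int       # numerator: reported events
    exposure: int     # denominator: e.g. doses administered

    def rate_per_1000(self):
        """Events per 1,000 units of exposure, for cross-institution comparison."""
        return 1000 * self.events / self.exposure

# Hypothetical figures for the same indicator at two institutions.
a = SafetyIndicator("4.1", "Medications administered to wrong patient", 4, 250_000)
b = SafetyIndicator("4.1", "Medications administered to wrong patient", 8, 400_000)
comparable = a.code == b.code  # rates are only comparable for identical definitions
```

The `comparable` check reflects the programme's emphasis on common definitions: without an agreed numerator, denominator and inclusion criteria, cross-institution rates are not meaningful.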

USE OF CLINICAL INDICATORS IN KKH

Clinical Indicator Audit System
Internally, a clinical indicator audit system is in place to facilitate the monitoring and reporting of significant medical events. The clinical indicator audit form (CIAF) was developed with input and support from the three Medical Divisions (Obstetrics & Gynaecology, Paediatric Medicine and Paediatric Surgery). The CIAF system provides an infrastructure to facilitate the reporting and review of significant clinical incidents. Recommendations and changes made are tracked over time. It helps to promote a more transparent learning environment among staff. The CIAF is also used to flag sentinel events, which are reportable to MOH. A review of the CIAF is currently underway, with the aims of further clarifying the scope and definition of the indicators to enhance the accuracy of reporting, and of restructuring the form to make it more user-friendly and less time-consuming to complete. Table 4 lists the indicators captured in the CIAF.

Table 4: List of KKH clinical indicators (CIAF System)

Indicator Speciality and Description

Obstetrics & Gynaecology
• Unplanned re-admission related to the previous hospitalisation within 15 days of inpatient discharge
• Unplanned removal, injury or repair of organ during surgery
• Unplanned return to operation theatre for complications during the same admission
• Unplanned admission within 48 hours following ambulatory procedure
• Any serious or unexpected complication from surgery, pre-operatively or during post-operative recovery
• Cardiopulmonary arrest
• Eclampsia
• Peri-operative deep vein thrombosis / pulmonary embolism
• Death

Anaesthesiology
• Trauma to organ, e.g. broken tooth, lip abrasion
• Awareness while under general anaesthesia
• Any procedure that caused transient or permanent neurological problems/deficits in patient
• Any problems arising from apparatus or equipment failure that have resulted, or may have resulted, in hypoxaemia in patients or physical injuries, especially neurological injuries

Radiology
• Any serious or unexpected complication from radiological procedure

Neonatology
• Birth trauma
• Apgar score <4 at 5 mins; HIE Sarnat II and above
• Term infant with >7 days length of stay in NICU
• Massive aspiration syndromes
• Missed congenital malformation
• Deaths excluding stillbirth

Paediatric Medicine
• Unplanned re-admission related to the previous hospitalisation within 15 days of inpatient discharge
• ICU admission exceeding 14 days
• Paediatrics admission exceeding 30 days
• Serious complication, including collapse, from any procedure/medication
• Deaths

Children's Emergency
• Deaths

Paediatric Surgery
• Unplanned re-admission related to the previous hospitalisation within 15 days of inpatient discharge
• Unplanned removal, injury or repair of organ during surgery
• Unplanned returns to operating theatre for complications during the current admission
• Unplanned admission within 48 hrs following ambulatory procedure
• ICU admissions exceeding 14 days
• Wound complications
• Sepsis related to instrumentation, catheters & devices
• Deaths

Sentinel Event Reporting
In 2002, MOH implemented Sentinel Event Review (SER) in both public and private hospitals, replacing the former Committee on Inquiry (COI).


The COI sought to determine whether deaths were "avoidable" and tended to focus on individual responsibility, whereas SER focuses primarily on organisational systems and processes rather than individual performance. The key objective of SER is to promote quality improvement, with the intention of creating a positive impact on patient care and reducing the probability of such an event recurring by making changes to the organisation's systems and processes. Hospitals are required to establish a Quality Assurance Committee (QAC) to review sentinel events using Root Cause Analysis methodology, so as to understand the causes that underlie the event. The report is strictly confidential and is protected under Section 11 (Quality assurance committees) of the PHMCA (refer to Table 5).8 SER also helps to increase general knowledge about sentinel events, their causes and strategies for prevention, and to heighten vigilance in risk assessment. In February 2004, MOH's Clinical Quality Branch (CQ) rolled out the revised "Guidelines for Review of Sentinel Events by Hospital Quality Assurance Committees, 2004". The definition of a sentinel event is detailed in Table 6.

Table 5: Extracted from the Private Hospitals and Medical Clinics Act (PHMCA), Section 11 (Quality assurance committees), Revised Edition 1999.

(1) The licensee of a private hospital or healthcare establishment shall establish one or more quality assurance committees to:-
    (a) monitor and evaluate the quality and appropriateness of the services provided and the practices and procedures carried out at the private hospital or healthcare establishment;
    (b) identify and resolve problems that may have arisen in connection with any service provided or any practice or procedure carried out at the private hospital or healthcare establishment;
    (c) make recommendations to improve the quality of the services provided and the practices and procedures carried out at the private hospital or healthcare establishment; and
    (d) monitor the implementation of the recommendations made under paragraph (c).

(3) A person who is or was a member of a quality assurance committee is neither competent nor compellable:-
    (a) to produce before any court, tribunal, board or person any document in his possession or under his control that was created by, at the request of or solely for the purpose of the quality assurance committee; or
    (b) to disclose to any court, tribunal, board or person any information that has come to his knowledge as a member of the quality assurance committee.

(5) A finding or recommendation by a quality assurance committee as to the need for changes or improvements in relation to any service provided or any practice or procedure carried out at a private hospital or a healthcare establishment is not admissible in any proceedings as evidence that the service, practice or procedure is or was inappropriate or inadequate.

(6) Anything done by a quality assurance committee, a member of a quality assurance committee or any person acting under the direction of a quality assurance committee in good faith for the purposes of the exercise of the quality assurance committee's functions, does not subject such a member or person personally to any action, liability, claim or demand.

(7) Without limiting subsection (6), a member of a quality assurance committee has qualified privilege in proceedings for defamation in respect of:-
    (a) any statement made orally or in writing in the exercise of the functions of a member; or
    (b) the contents of any report or other information published by the quality assurance committee.

Table 6: Guidelines for Review of Sentinel Events by Hospital Quality Assurance Committees, 2004. Definition of a Sentinel Event.

A sentinel event is defined as:
(a) an unexpected occurrence
    i) involving death, major permanent loss of function1, or major injury,
    AND
    ii) that is associated with the treatment, lack of treatment or delay in treatment of the patient's illness or underlying condition.

For reporting purposes, an occurrence as defined in (a) shall also be categorised2 as follows:
• Blood transfusion
• Childbirth/pregnancy
• Inpatient suicide
• Medication usage
• Surgical/procedural complications3:
    - Ward-based (e.g. chest tube insertion, pleural biopsy & haemodialysis)
    - Non ward-based (e.g. surgery done in operating theatre, angiography & CT-guided biopsy)
• Others (e.g. restraint/fall/assault/choking/usage of medical equipment, etc.)

(b) Specifically, any of the following events:
• Retained instruments/material after procedure
• Wrong type of procedure/surgery
• Wrong site of procedure/surgery
• Wrong patient procedure/surgery

1 Major permanent loss of function occurs when a patient has sensory, motor, physiologic or intellectual impairment that was not present at the time of admission.
2 An occurrence may involve more than one category.
3 Includes iatrogenic complications.

HOW BEST TO USE CLINICAL INDICATORS TO CREATE APPROPRIATE CHANGE?

To be useful, indicators should possess several attributes:9,10

• They should cover elements of practice that are acknowledged to be important, and not simply record what is easy to measure.

• Ideally they should be devised with the help of clinicians, and they should fairly and appropriately reflect clinical practice.

• They should be value free.


• The data on which indicators are based should be valid and reliable.

• The results should be presented in a user-friendly way.

• It should be made clear that indicators should be used for guidance: they "cannot on their own, provide 'definitive' evidence of success or failure and should be used to raise questions, not provide answers".

The development and use of potential indicators should be accompanied by a process of rigorous evaluation.11 Indicators could be incorporated into computerised record systems in general practice. This might reduce the time taken to record the indicators and address problems associated with reliability between reporting staff.
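Deriving indicators from computerised records means computing the numerator automatically from routine data. The sketch below counts unscheduled readmissions within 15 days of a prior discharge from a list of admission records; the record layout is an assumption for illustration, and the clinical criterion of "same or related condition" would still require case-by-case review.

```python
from datetime import date

# Minimal admission record: (patient_id, admission_date, discharge_date, scheduled)
records = [
    ("P1", date(2004, 1, 3), date(2004, 1, 7), False),
    ("P1", date(2004, 1, 15), date(2004, 1, 18), False),  # readmitted 8 days later
    ("P2", date(2004, 1, 5), date(2004, 1, 6), False),
    ("P2", date(2004, 2, 20), date(2004, 2, 22), False),  # 45 days later: not counted
]

def unscheduled_readmissions_within(records, days=15):
    """Count unscheduled admissions occurring within `days` of the patient's
    previous discharge, scanning records in patient/date order."""
    count = 0
    last_discharge = {}
    for pid, adm, dis, scheduled in sorted(records, key=lambda r: (r[0], r[1])):
        prev = last_discharge.get(pid)
        if prev is not None and not scheduled and (adm - prev).days <= days:
            count += 1
        last_discharge[pid] = dis
    return count

print(unscheduled_readmissions_within(records))  # 1
```

Automating the count in this way removes transcription effort and inter-rater variation, but the definition encoded in the function must match the agreed indicator specification exactly.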

FURTHER CONSIDERATIONS

We need to understand and be aware of how the ways in which data are collected may affect their interpretation. Indicator validity and reliability are two crucial aspects of a performance improvement programme. Indicator validity refers to the usefulness of the measures in performance assessment and improvement. Reliability is demonstrated through field-testing. Poor validity and/or reliability of the measures can undermine the conclusions drawn. Furthermore, when people are aware that they will be judged on the data, other incentives may come into play, leading to concerns about "gaming" with data.12

Changes in reporting practices over time may also undermine the validity of indicators derived from routine data sources. In a study of emergency admissions in one health authority from 1989/90 to 1997/8, an apparent increase in emergency activity was not matched by an increase in the number of admissions or in the number of patients each year. What appeared to be a rise in emergency admissions turned out to be mainly due to increased reporting of internal transfers after admission.13
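One defence against the reporting artefact described in that study is to collapse contiguous episodes (internal transfers) into a single stay before counting admissions. A sketch with invented data, under the simplifying assumption that a transfer is an episode beginning on the day the previous one ended:

```python
from datetime import date

# Episodes as (patient_id, start, end); a transfer starts the day an episode ends.
episodes = [
    ("P1", date(1997, 3, 1), date(1997, 3, 4)),
    ("P1", date(1997, 3, 4), date(1997, 3, 9)),  # internal transfer, same stay
    ("P2", date(1997, 3, 2), date(1997, 3, 3)),
]

def count_admissions(episodes):
    """Count stays, merging episodes that begin on the day a previous one ended."""
    stays = 0
    last_end = {}
    for pid, start, end in sorted(episodes, key=lambda e: (e[0], e[1])):
        if last_end.get(pid) != start:  # not a same-day transfer: a new admission
            stays += 1
        last_end[pid] = end
    return stays

print(len(episodes), count_admissions(episodes))  # 3 episodes, but only 2 admissions
```

Counting stays rather than episodes makes the indicator robust to changes in how transfers are recorded, which is exactly the kind of reporting drift the study identified.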

Process measures are relatively easy to interpret, and they provide a direct link to the remedial action required. They may be particularly useful in revealing quality problems that are not susceptible to outcome measurement – for example, "near misses", unwanted outcomes, or unnecessary resource use.14

ACKNOWLEDGEMENTS

The work carried out at KKH to develop, monitor and report the clinical indicators described in this paper would not have been possible without the support and collaboration of the clinicians, nurses, pharmacists, Dr Yoong Siew Lee (former Director of Medical Affairs) and the Medical Affairs staff, namely Ms Cheah Li Li, Ms Junne How, Ms Lok Sun Sun, Ms Kamala Krishnan and Ms Angela Bek.

REFERENCES
1. The Australian Council on Healthcare Standards (ACHS). Clinical Indicator Users' Manual 2002, 4.
2. Davies HT, Marshall MN. Public disclosure of performance data: does the public get what the public wants? Lancet 1999;353:1639-40.
3. Nuttley SM, Smith PC. League tables for performance improvement in health care. J Health Serv Res Policy 1998;3:50-7.
4. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370-6.
5. Berwick DM, Leape LL. Reducing errors in medicine – it's time to take this more seriously. Quality in Health Care 1999;8:145-6.
6. Thomas R, Lally J. Clinical indicators: do we know what we're doing? Quality in Health Care 1998;7:122.
7. Powell AE, Davies HTO, Thomson RG. Using routine comparative data to assess the quality of health care: understanding and avoiding common pitfalls. Qual Saf Health Care 2003;12:122-8.
8. The Private Hospitals and Medical Clinics Act (PHMCA). Quality Assurance Committees, Section 11, Revised Edition 1999.
9. Likierman A. Performance indicators: 20 early lessons from managerial use. Public Money and Management 1993;13:15-22.
10. Avery AJ. Appropriate prescribing in general practice: development of the indicators. Quality in Health Care 1998;7:123.
11. Cantrill JA, Sibbald B, Buetow S. Indicators of the appropriateness of long term prescribing in general practice in the United Kingdom: consensus development, face and content validity, feasibility and reliability. Quality in Health Care 1998;7:130-5.
12. Smith P. On the unintended consequences of publishing performance data in the public sector. Int J Public Admin 1995;18:277-310.
13. Morgan K, Prothero D, Frankel S. The rise in emergency admissions - crisis or artefact? Temporal analysis of health services data. BMJ 1999;319:158-9.
14. Crombie IK, Davies HT. Beyond health outcomes: the advantages of measuring process. J Eval Clin Pract 1998;4:31-8.