
How CAN we learn better from healthcare harm?

EXPLORING THE USE OF THE

HUMAN FACTORS ANALYSIS AND CLASSIFICATION SYSTEM

Dr Suzanne Shale Winston Churchill Memorial Fellow 2016 Independent Healthcare Ethics Consultant

For further information please see my website or contact me directly at

[email protected]


TABLE OF CONTENTS

ACKNOWLEDGEMENTS
INTRODUCTION AND EXECUTIVE SUMMARY
SECTION ONE WHAT DOES 'LEARNING' FROM SERIOUS INCIDENTS MEAN?
    A note on data and confidentiality
    Introduction to the Human Factors Analysis and Classification System
SECTION TWO WHERE ARE WE NOW? AN ANALYSIS OF INVESTIGATION PRACTICE
    HFACS Level One: Acts and Omissions
    HFACS Level Two: Preconditions
    HFACS Level Three: Supervisory
    HFACS Level Four: Organisational Influences
    Summary: a baker's dozen of potential benefits
SECTION THREE LEARNING FROM EXPERIENCES OF USING HFACS IN HEALTHCARE
    Systems and the contexts for change
        "We can't say that RCAs aren't working": cultures of corporate silence
        "Building a mummy of Band-Aids": the pressure of external demands
        "We can't talk freely about Patient Safety Work Products": investigating in the shadow of the law
    Experiences of implementing HFACS
        "You find a richness in everyday little things": investigators and HFACS processes
        "You don't get good data out of big hairy events": HFACS and the investigated
        "What makes it difficult for staff to change? That applies to you too!": HFACS for leaders
        "Even very clever people think the solution is greater vigilance": HFACS and fixes
        "Give the front line the nanocodes, they recognise all the problems themselves": HFACS reflections
    Summary
SECTION FOUR ANALYSING FAILURE OR LEARNING FROM FAILURE?
    A starting point: the hierarchy of intervention effectiveness
    Example One: interruptions
    Example Two: 'Oh Shnocks!'
    Example Three: going home safely
SECTION FIVE RECOMMENDATIONS
    One: use HFACS to improve the quality of investigation, and aggregate findings
    Two: use HFACS as a common assurance framework
    Three: invest in and support the investigator role
    Four: make more resources available for safety investigations or make better use of the current resource
    Five: enter cross-organisational arrangements for investigations
    Six: pay attention to the process of identifying, implementing and evaluating systemic solutions
BIBLIOGRAPHY
ABOUT SUZANNE SHALE


ACKNOWLEDGEMENTS

I am profoundly grateful to the individuals and organisations that hosted me during my Fellowship travels.

Scott Shappell. My first debt of gratitude is owed to Scott Shappell who invited me to join his introductory HFACS training for Christus Health in 2015. Without that detailed introduction to HFACS and the inception of a relationship with Christus Health this Fellowship would not have happened.

The Mayo Clinic, Rochester, Minnesota. I am particularly grateful to Dr Juliane Bingener-Casey who, as co-author of an article describing their use of HFACS, hosted my visit to the Mayo Clinic. My thanks too to Connie Feeder who organised a rich itinerary of meetings for me.

Christus Health, Irving, Texas. I am indebted to Linda Scott who facilitated my visit to Christus and who, with colleague Dan Reed, spent days of their time sharing their experiences of working with HFACS. I am obliged to Randy Prentice who gave permission for my visit, and who graciously allowed me to join the Christus Colloquium where patient safety matters were under discussion.

University of Texas MD Anderson Cancer Center, Houston, Texas. I am grateful to LaTasha Burns who gained permission for my visit to MD Anderson, shared her own experience, and facilitated a number of very informative meetings with colleagues. I am also appreciative that MD Anderson agreed to Linda Scott and Dan Reed from Christus Health joining the visit; this allowed for some fertile conversations between them as HFACS experts that incalculably enriched my learning.

Healthcare Human Factors, University Health Network, Toronto. I owe huge thanks to Svetlena Taneva Metzger who made extensive arrangements for my visit, arranging an excellent programme of conversations with a wide range of colleagues. Thank you too, to Director Joe Cafazzo who agreed to my visit to Healthcare Human Factors taking place.

OpenLab, University Health Network, Toronto. My thanks to OpenLab and especially Tai Huynh for the opportunity to join their 'Open Round' and for finding time to tell me more about the Patient Oriented Discharge Summary.

East Midlands Patient Safety Collaborative and NHS colleagues participating in HFACS pilots. Following the award of my Fellowship I have been privileged to be able to work with an ever-enlarging group of dedicated and thoughtful people to test HFACS in the NHS. I am grateful to Dr Cheryl Crocker, Regional Lead for the Patient Safety Collaborative, for having the insight and providing the funds to enable this work to go ahead; and to Professor Murray Anderson Wallace who has been, as ever, a thoughtful and stimulating partner in the projects. I am also indebted to the several organisations and many people who are participating in the trials. There are too many people to thank individually, but you know who you are!

The Winston Churchill Memorial Trust. I am indebted to the Trust for its support and immensely grateful to have been awarded the privilege of a Fellowship.


INTRODUCTION AND EXECUTIVE SUMMARY

This report is about the challenge of learning from adverse events in healthcare. The main aim of my Fellowship was to explore how the Human Factors Analysis and Classification System (HFACS) has been used for investigating serious incidents in healthcare in the USA, and to consider whether adopting it could improve learning from healthcare harm in the UK.

In the first section I consider what learning means, and describe HFACS. I suggest that recent discussion of the problem of learning from incidents has tended to focus on analysing incidents. Insufficient attention has been given to later stages in the learning process, that is, processes of envisioning solutions and realising change.

In the second section I look at the current state of investigations, and use the HFACS framework to analyse what is going wrong with them. I conclude that there are issues to address at every level of organisational systems: the way in which investigators themselves think about adverse incidents, the environment and processes that surround their work, how they function as middle level professionals in their organisations, and how their organisations overall direct the resources, processes and organisational relationships surrounding adverse incidents.

In the third section I review experiences of adopting HFACS in the US, and recent work testing HFACS in the NHS. I conclude that the potential of HFACS can for the most part be realised in practice, but that there are significant leadership and structural problems to be overcome if it is to be fully effective.

In the fourth section I move from looking at how to analyse harm to looking at how to generate solutions to identified problems. I discuss three examples of attempts to manage familiar types of healthcare harm, and consider the reasons why some simple and apparently compelling solutions have not worked. I argue that the failure to learn from healthcare harm is not just a failure to investigate well. It is also a failure to approach development of solutions using improvement science.

In the final section of my report I make recommendations. They are:

• One: use HFACS to improve the quality of investigation, and aggregate findings.

• Two: use HFACS as a common assurance framework.

• Three: invest in and support the investigator role.

• Four: make more resources available for safety investigations or make better use of the current resource.

• Five: enter cross-organisational arrangements for investigations.

• Six: pay attention to the process of identifying, implementing and evaluating systemic solutions.


SECTION ONE WHAT DOES ‘LEARNING’ FROM SERIOUS INCIDENTS MEAN?

To err is human, to cover up is unforgivable, and to fail to learn is inexcusable. (Sir Liam Donaldson, UK Chief Medical Officer at the launch of the World Alliance for Patient Safety, Washington DC 27 October 2004)

It is fifteen years since the publication of An Organisation with a Memory,[1] when the NHS made a commitment to learn from adverse incidents in healthcare. In a series of articles and reports published in recent years,[2] it has been argued that healthcare providers are not learning well enough. Patients and service users are dissatisfied with the way investigations are carried out, the rate of harm does not appear to be falling significantly, and familiar harms persist (surgical 'never events', pressure ulcers, falls and so on).

Discussions about learning from healthcare harm often start and end with the question of how well we analyse serious incidents. This is exemplified in a recent Care Quality Commission document entitled Learning, Candour and Accountability.[3] CQC treats 'learning' as meaning 'investigating in order to understand how the event occurred'. But this is a very narrow conception of learning.

How do we know learning has happened? It is when we experience change. The singer sings more harmoniously. Trains run more punctually. Fewer people experience IT failure. And so it goes in healthcare, too. We only know learning has happened when we see the difference it makes. A ward team cares more capably. Health systems become more patient-centred and efficient. Fewer patients experience poor or catastrophic outcomes.

So, learning is about much more than merely understanding. We certainly have to understand how harm occurs, in order to learn how to prevent it. But we have also to envision fitting solutions, and implement changes that work.

A note on data and confidentiality.

This report is based on three sources of data. The first source of data is my Fellowship travel, and the immensely valuable meetings that it allowed with HFACS users, clinical leaders, and human factors specialists. The second data source is two projects I have since undertaken with NHS healthcare providers to introduce them to HFACS. These projects continue as I write this report. They have entailed a retrospective review of Serious Incident Reports from four NHS mental health Trusts; revising existing HFACS nanocodes; supporting six NHS Trusts through an investigation using HFACS; and providing training and discussion forums for a total of seven NHS Trusts and their commissioners. Some detailed findings will be publicly available, and I would be pleased to share these with readers. My third data source is published research, cited throughout this report.

Some of my conversations, and the NHS project work, have entailed access to confidential information about healthcare harm. In other conversations people expressed discomfort sharing opinions that differed from their employers' views, might be read as impliedly critical of their organisations' leaders, or could be perceived as critical of colleagues struggling against the odds to do good investigations. I encouraged people to talk to me 'off the record' and for this reason I have not specifically identified sources of information in my report. It is therefore important to read this report as a synthesis of information gleaned from many sources; it is possible that no single person would agree with everything I have written.

[1] DEPARTMENT OF HEALTH 2001. Building a safer NHS for patients: implementing an organisation with a memory. www.doh.gov.uk/buildsafenhs
[2] Including notably MACRAE, C. & VINCENT, C. 2014. Learning from failure: the need for independent safety investigation in healthcare. Journal of the Royal Society of Medicine, 107, 439-443; HOUSE OF COMMONS PUBLIC ADMINISTRATION SELECT COMMITTEE 2015. Investigating clinical incidents in the NHS; DEPARTMENT OF HEALTH 2015. Learning not blaming.
[3] CARE QUALITY COMMISSION 2016. Learning, Candour and Accountability: A review of the way NHS trusts review and investigate the deaths of patients in England.

Introduction to the Human Factors Analysis and Classification System

The leading causes of healthcare harm are not technical failures such as machine breakdown, or iatrogenic harm such as medication side effects. Rather, they arise from human action somewhere in the care system.[4] This is not because healthcare professionals or managers or system leaders are careless. It is because the provision of healthcare rests upon complex interactions between people, between people and bodies of knowledge, between people and the physical environment, and between people and socio-technical systems. Healthcare organisations and caregivers strive to innovate, and to meet inexorably rising demand. All this creates endless opportunities for things to go wrong.

The science of human factors uses knowledge of how people behave in order to design safe, effective and efficient systems.[5] Its value is widely recognised, not least by NHS England, which has promoted a Concordat on Human Factors in Healthcare, and mandated systemic (including human factors) analysis in incident investigations.[6] The problem is that human factors specialists are thin on the ground in healthcare, and this is especially true when we look at who does investigations.

The Human Factors Analysis and Classification System is a safety management tool that helps to identify and record sources of error in organisational systems.[7] It can help organisations work systematically to analyse and attend to issues that lead to harm. It can be used to analyse past incident reports, structure ongoing incident investigations, and pro-actively audit safety systems. Originally (and still) used by the US Department of Defense, it is well known in aviation. Versions of HFACS have been used in mining, shipping, fire fighting and food production. In recent years, it has been adopted by a handful of healthcare providers in the US.[8]

HFACS was developed by behavioural scientists Scott Shappell and Douglas Wiegmann, drawing on James Reason's so-called 'Swiss Cheese' model of incident causation.[9] Reason's model has become well known in healthcare, and is frequently illustrated in the form below.

[Diagram: Reason's 'Swiss Cheese' model of incident causation, showing successive layers of defence with holes that can align to let harm through.]

This diagram identifies four levels of potential error: organisational systems, supervision, environmental and individual preconditions, and the act itself. Patients are safe most of the time because at each of these levels people do things, organise the environment, and design systems, to keep patients safe. Patients are protected by 'layers of defence'. Problems arise when holes open up in the layers of defence, or when systems interact in unpredicted ways.

[4] CARAYON, P. & WOOD, K. E. 2010. Patient safety: the role of human factors and systems engineering. Stud Health Technol Inform, 153, 23-46.
[5] Definition adopted by CHFG, the Clinical Human Factors Group: http://chfg.org
[6] Human Factors Concordat https://www.england.nhs.uk/wp-content/uploads/2013/.../nqb-hum-fact-concord.pdf and NHS England Serious Incident Framework 2015.
[7] SHAPPELL, S. A. & WIEGMANN, D. A. 2012. A human error approach to aviation accident analysis: The human factors analysis and classification system. Ashgate Publishing, Ltd. It is not without its critics; for example see OLSEN, N. S. & SHORROCK, S. T. 2010. Evaluation of the HFACS-ADF safety classification system: inter-coder consensus and intra-coder consistency. Accident Analysis & Prevention, 42, 437-444.
[8] ELBARDISSI, A. W., WIEGMANN, D. A., DEARANI, J. A., DALY, R. C. & SUNDT, T. M. 2007. Application of the human factors analysis and classification system methodology to the cardiovascular surgery operating room. The Annals of Thoracic Surgery, 83, 1412-1419; THIELS, C. A., LAL, T. M., NIENOW, J. M., PASUPATHY, K. S., BLOCKER, R. C., AHO, J. M., MORGENTHALER, T. I., CIMA, R. R., HALLBECK, S. & BINGENER, J. 2015. Surgical never events and contributing human factors. Surgery, 158, 515-521; DILLER, T., HELMRICH, G., DUNNING, S., COX, S., BUCHANAN, A. & SHAPPELL, S. 2014. The Human Factors Analysis Classification System (HFACS) applied to health care. Am J Med Qual, 29, 181-90.
[9] REASON, J. 1999. Managing the risks of organizational accidents. Aldershot: Ashgate.


This familiar diagram is a simplified, linear and static representation of incident causation. It is therefore somewhat misleading. In most incidents there will be more than one ‘hole’ in the levels of defence. There will be several overlapping organisational systems. And organisational systems are fluid, so the ‘holes’ are less like holes in slices of hard Swiss cheese, and more like bubbles in boiling liquid, constantly moving. But the important point is that in order to get any sense of how an adverse event arises, we have to understand how harm can arise at each level: organisational, supervisory, preconditions, and acts.

HFACS groups human factors into the four organisational levels seen in the diagram above, and then breaks these down into several major categories and associated subcategories. For each subcategory, a set of very specific 'nanocodes' describes errors characteristic of the context. For the purposes of this report I am using the version of HFACS that I have adapted for use in the UK. The overarching framework can be found below. The nanocodes are too numerous to include in this report, but I would be pleased to share the nanocodes developed for use in the NHS with readers.

When HFACS is prospectively adopted, the HFACS framework can guide incident investigations. The categories and subcategories direct investigators to attend to human factors during the investigative process, and the nanocodes prompt investigators to look for characteristic types of error. At the conclusion of an investigation, the relevant nanocodes can be entered into a database. Providing a stable structure for recording data, the HFACS framework allows incident findings to be aggregated for thematic analysis. This enables organisations to identify priority areas for improvement action, and over time to evaluate the effectiveness of interventions.
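By way of illustration only, the sketch below shows in Python the kind of aggregation such a database makes possible. The nanocode identifiers and incident references are invented for this example; they are not the codes developed for the NHS pilots.

    # A minimal sketch: each completed investigation yields the set of
    # nanocodes assigned to it, and counting codes across investigations
    # surfaces recurring themes that no single report would reveal.
    from collections import Counter

    investigations = {
        "SI-001": ["PE-LIGHTING", "AMS-FATIGUE", "OP-WORKLOAD"],
        "SI-002": ["AMS-FATIGUE", "COM-HANDOVER"],
        "SI-003": ["OP-WORKLOAD", "AMS-FATIGUE"],
    }

    theme_counts = Counter(
        code for codes in investigations.values() for code in codes
    )

    for code, count in theme_counts.most_common(3):
        print(f"{code}: assigned in {count} investigations")
    # Here the invented fatigue code appears in all three reports,
    # flagging it as a candidate priority for improvement action.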


Human Factors Analysis & Classification System, adapted from Shappell & Wiegmann by Dr Suzanne Shale (www.clearer-thinking.co.uk)

ORGANISATIONAL INFLUENCES
    Resources: HR; Monetary; Equipment
    Climate
    Processes: Intra-organisational; Inter-organisational

SUPERVISION
    Supervision: Supervisory activities; Operational planning; Correction of known problems; Supervisor non-concordance

PRECONDITIONS
    Environmental factors: Physical; Technological; Patient & situation
    Condition of the care provider: Adverse mental states; Adverse physiological states; Performance inhibitors
    Personnel factors: Communication; Personal readiness

ACTS/OMISSIONS
    Errors: Skill based; Decisional; Perceptual
    Non-concordance: Routine non-concordance; Exceptional non-concordance
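For readers who wish to work with the framework as data, the figure above transcribes directly into a nested structure. The sketch below records only what the figure shows (levels, categories, subcategories); the nanocode layer beneath each subcategory is omitted, as it is in this report.

    # The adapted framework as a nested mapping:
    # level -> category -> list of subcategories.
    HFACS_UK = {
        "Organisational influences": {
            "Resources": ["HR", "Monetary", "Equipment"],
            "Climate": ["Climate"],
            "Processes": ["Intra-organisational", "Inter-organisational"],
        },
        "Supervision": {
            "Supervision": [
                "Supervisory activities",
                "Operational planning",
                "Correction of known problems",
                "Supervisor non-concordance",
            ],
        },
        "Preconditions": {
            "Environmental factors": ["Physical", "Technological", "Patient & situation"],
            "Condition of the care provider": [
                "Adverse mental states",
                "Adverse physiological states",
                "Performance inhibitors",
            ],
            "Personnel factors": ["Communication", "Personal readiness"],
        },
        "Acts/Omissions": {
            "Errors": ["Skill based", "Decisional", "Perceptual"],
            "Non-concordance": ["Routine non-concordance", "Exceptional non-concordance"],
        },
    }

A structure like this would let software validate that every code assigned in an investigation sits somewhere in the hierarchy, and support the roll-ups discussed later in this report.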


SECTION TWO WHERE ARE WE NOW?

AN ANALYSIS OF INVESTIGATION PRACTICE

Ignorance of remote causes, disposeth men to attribute all events, to the causes immediate, and Instrumentall: For these are all the causes they perceive…Anxiety for the future time, disposeth men to enquire into the causes of things: because the knowledge of them, maketh men the better able to order the present to their best advantage. Thomas Hobbes, Leviathan, Chapter XI (1588-1679)

James Reason defined human error as the failure of planned actions to achieve their desired ends – without the intervention of some unforeseeable event.[10] This definition precisely characterises the process of serious incident investigation in healthcare: the planned actions (investigating serious incidents in order to reduce harm) have failed to achieve the desired end (familiar harms remain obstinately persistent). Using the term 'human error' is not to lay blame at any person's door. In the course of my work I meet committed, clever and conscientious healthcare professionals who do investigations because, as well as helping make amends to those affected, they really want to improve the care that patients receive. They are working hard to achieve an end that eludes them.

In this part of my report I use the framework of the Human Factors Analysis and Classification System to explain features of serious incident investigation in UK healthcare that get in the way of learning. This may require a little mental gymnastics. HFACS is generally used to help organisations understand human error in their primary task of providing effective care. Here I will be using HFACS to analyse human error in organisational approaches to investigation.

The 'Swiss cheese' model reminds us that acts and omissions by caregivers are just the final links in a causal chain. They are rarely the sole reason an adverse event occurred. However, in a typical serious incident report much of the focus will be on acts or failures to act by team members directly involved in caring for a patient. As I explore below, there are several reasons investigators focus on what happens at the 'sharp end'. First, investigators are subject to cognitive biases. It is generally easier to see a causal connection between what caregivers did (or did not do) and a harmful event, than it is to see the context within which caregivers' actions made sense to them at the time. Second, few investigators in healthcare are also human factors specialists. There is nothing in the root cause analysis (RCA) method itself that tells investigators where to look for risk. (I shall be referring to limitations in RCA but it should be noted that RCA processes might not be assiduously followed.[11]) Third, investigators work under conditions that restrict their ability to carry out a searching investigation involving actors at the 'blunt end'. And fourth, other players in the social and organisational systems that surround individual investigators expect them to concentrate on finding error at the sharp end.

I have set the HFACS category and sub-category definitions in text boxes, and these can be referenced against the framework set out at the end of Section One above.

HFACS Level One: Acts and Omissions

Decision errors Decision errors arise in conscious, goal-intended behaviour. Actions proceed from intention, but the plan is inadequate or inappropriate for the situation. Such errors may result from lack of information, knowledge, or experience.

[10] Ibid., p. 71.
[11] HOUSE OF COMMONS PUBLIC ADMINISTRATION SELECT COMMITTEE 2015. Investigating clinical incidents in the NHS.


Investigators, just like caregivers and other professionals, make decision errors from time to time. Cognitive biases that can lead to sub-optimal investigations include the fundamental attribution error, hindsight bias, anchoring bias, confirmation bias, and 'search satisficing'.

The fundamental attribution error (correspondence bias).[12] Observers are inclined to attribute people's behaviour to their character or disposition, rather than a response to the situation they were in. Faced with the possibility that an error has occurred, there is a tendency for investigators to attribute this to carelessness (or worse) on the part of the caregiver.

Hindsight bias. Our knowledge of the outcome of a chain of events (that a patient was harmed) colours our understanding of how it occurred. People "consistently exaggerate what could have been anticipated in foresight".[13]

Anchoring bias.[14] An investigator's theory of what happened is strongly influenced by early data they receive. NHS investigators generally start their investigation after receiving a preliminary account of events (an 'Interim Management Review' or '72 hour' report). This will tend to anchor them in the perspective of a manager close to the sharp end error.

Confirmation bias.[15] Investigators look for evidence to support a theory of what happened, not to disprove it.

Search satisficing.[16] Investigators work under pressure of time, so it seems reasonable to call off the search once a good enough explanation is found. For many, error at the sharp end will supply a good enough explanation.

I have selected these biases from the long list of known cognitive biases because they are recognisable to investigators and because they are biases that HFACS can help overcome. The HFACS framework requires investigators to consider each of its levels, categories and sub-categories. It therefore operates against the tendencies to revert to character-based explanations, to assume that caregivers could see what was going to happen, to restrict the inquiry to matters implied in the preliminary report, to ignore evidence from the full range of domains, and to call off the search when error has been spotted and before all aspects have been considered.

Skill-based errors (including ‘slips’) Such errors arise in domains of practised behaviour where expert actions may be performed with little conscious deliberation (e.g. driving). They include the manner or technique with which a task is performed. Examples include forgotten intentions, deficient technique, habit transference to new device.

Investigators are no less prone than those they investigate to make errors from poor technique, distraction, time pressure and so on. Even the best trained and most experienced investigator can overlook or forget to include relevant factors in the course of intense or extended investigative processes. The HFACS framework can be used as a ‘cognitive forcing strategy’ - a checklist - to ensure that all areas are covered.
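To illustrate the checklist idea in data terms: a simple completeness check can report which HFACS categories an investigation has not yet worked through. The format of the draft findings below is an assumption made for the example; recording an empty list would signal 'considered, nothing found'.

    # Category names follow the framework at the end of Section One.
    REQUIRED_CATEGORIES = [
        "Resources", "Climate", "Processes",            # Organisational influences
        "Supervision",                                   # Supervision
        "Environmental factors",
        "Condition of the care provider",
        "Personnel factors",                             # Preconditions
        "Errors", "Non-concordance",                     # Acts/Omissions
    ]

    def uncovered(findings):
        # Categories with no entry at all have not yet been considered;
        # an empty list counts as 'considered, nothing found'.
        return [c for c in REQUIRED_CATEGORIES if c not in findings]

    draft = {"Errors": ["deficient technique"], "Environmental factors": []}
    print(uncovered(draft))
    # ['Resources', 'Climate', 'Processes', 'Supervision',
    #  'Condition of the care provider', 'Personnel factors', 'Non-concordance']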

Routine non-concordance Non-concordance is distinct from error because the actor concerned is aware of expectations or procedural requirements but deviates from them. Routine non-concordance is tolerated by colleagues and management, may be believed to be justifiable, & may have become routinised as a way of dealing with work pressures.

[12] GILBERT, D. T. & MALONE, P. S. 1995. The correspondence bias. Psychological Bulletin, 117, 21.
[13] FISCHHOFF, B. & BEYTH, R. 1975. I knew it would happen. Organizational Behavior and Human Performance, 13, 1-16.
[14] CROSKERRY, P. 2002. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Academic Emergency Medicine, 9, 1184-1204.
[15] CROSKERRY op. cit.
[16] CROSKERRY op. cit.


A very high proportion of investigation reports note that caregivers did not comply with organisational policies, standard operating procedures, professional standards, or accepted custom and practice. The irony is that many investigations are themselves non-concordant with good RCA practice.[17] And a further irony is that the reasons for non-concordance are similar for both investigators and caregivers who err: inadequate training, shortage of time, and managerial complicity.

Many clinical professionals are expected to undertake incident investigations with only rudimentary training, and with scant knowledge of human factors or organisational systems. Investigators are working under pressure, and are unable to do a thorough investigation in the time available. Added to this, organisational managers with responsibility for investigations are also often working with limited knowledge of human factors, and have been allocated scant resources to get investigations done. Managers, too, are then inclined to accept a standard of investigation that has been tailored to the resource available.

Where investigators lack knowledge of human factors, HFACS can help identify risks that might otherwise remain opaque to them. HFACS cannot compensate for shortage of resource. But by showing what factors a good investigation should address, it may provoke questions about whether the resource available allows investigators to meet this standard.

HFACS Level Two: Preconditions

The philosopher William James has been quoted as saying that one of the roles of philosophy is to help us to see the familiar as if it were strange, and the strange as if it were familiar. This is also a critical role for investigators, who have to be able to see and name risks in organisational activities that are so much a part of professionals' everyday worlds that they are invisible. Several of the precondition categories discussed below (such as physical and technological environment) are apt to be left unreported by investigators because they are a taken-for-granted aspect of everyday clinical work (such as operating theatre or ward layout, or poor user interfaces in medical technologies). Others (such as the mental state of staff) go unreported because investigators are unaware of how these contribute to error.

Physical environment Includes the operational setting, e.g. an ambulance crew working at night on a busy road, and features of ambient environment such as clutter, operating theatre temperature, lighting, ward furniture, facilities design and layout.

Technological environment Includes the design of processes, equipment and controls, display / interface characteristics, checklist layouts and automation

Problems in design of healthcare environments are often taken for granted by caregivers and by those reviewing their work. Health professionals view it as their responsibility to facilitate safe care in unsafe environments. In usability testing, Healthcare Human Factors has found that caregivers frequently blame themselves when they encounter difficulties with device design. Where staff lead in accepting responsibility for things going wrong, investigators naturally tend to follow.

Looking somewhat differently at the technological issues confronting investigators, the technological environment (which includes work processes) in which they themselves work is challenging. In the NHS at least, few investigators appear to have access to the information systems or support that would enable them to extract data relevant to their investigations (for instance, activity data mapped to staff skill mix and use of agency staff). Finally, their own work processes may have been designed to meet system requirements for assurance, not the requirements of robust investigation: the time frame for investigations is set by NHS England, for example, and enforced by clinical commissioners. The HFACS framework brings physical and technological factors into focus, prompting investigators to give them due consideration.

[17] HOUSE OF COMMONS PUBLIC ADMINISTRATION SELECT COMMITTEE 2015. Investigating clinical incidents in the NHS.


Patient and situational factors Factors relevant to patient care that require highly skilled management / additional risk assessment. Note that recording patient factors does not excuse inadequate care.

Identifying patient and situational factors allows investigators to recognise that with some patients (such as prisoners requiring end of life care) and in some situations (such as provider resources being overwhelmed in a crisis) provision of safe care may be particularly challenging.

Patient factors provide a particular challenge for investigators in mental health, where a high proportion of serious incidents are self-harm by service users or violence towards others. Some researchers have challenged the use of systemic investigations in mental health settings, arguing that "the patient's agency often makes the largest and most critical contribution to the chain of events leading to the tragedy".[18] Whilst respecting the point that service user agency has a role to play, to focus overly on service users' actions is as erroneous as focusing overly on the acts and omissions of 'sharp-end' staff. It is easy to do because the actions of self-harming service users, like those of sharp-end caregivers, are those most clearly visible at the end of the causal chain. In a review of over one hundred Serious Incident reports for NHS Mental Health Trusts,[19] we noted that investigators tended to supply far more information about the person who had self-harmed than about how their services were staffed and managed.

Investigators in mental health are generally former or current mental health professionals, and they are accustomed to bringing to their work a strong clinical focus on the state of mind of service users. The benefit of using a framework such as HFACS is precisely that it helps investigators to step back, temporarily, from this clinical focus in order to look more broadly at human factors across organisational processes.

Adverse Mental States Transient psychological and/or mental conditions that negatively affect performance such as loss of situational awareness, mental fatigue and stress, excessive risk taking or complacency, and misplaced motivation such as wanting to prove capability.

Adverse Physiological States Transient medical and/or physiological conditions that preclude safe operations, such as illness, intoxication, or medication side effects.

Personal readiness Refers to personal readiness and mental preparation for work. It includes not maintaining licensing requirements, self-medicating, and not adhering to rest requirements (e.g. working double shifts for different providers)

In healthcare there is a strong cultural imperative for caregivers to 'soldier on' in the face of adverse mental or physical states; and the safety implications of caregivers' mental and physical well-being are generally under-appreciated by caregivers themselves, investigators and leaders. HFACS leads investigators to explore adverse physical and mental states, and readiness for work, in the course of their investigation. Because HFACS is highly structured it has the advantage of permitting investigators to depersonalise some of the difficult questions they may have to ask: 'This is nothing personal, our investigation framework means we will be asking everyone to describe how they felt at work that day'.

A tendency to overlook the effect of mental and physiological factors applies equally to investigators themselves. This is particularly true of mental states. Investigating serious incidents can invoke strong feelings. These include torn loyalties (between colleagues, patients, and organisations for example), a desire to avoid blaming colleagues, stress arising from the emotions invoked by incidents and investigations, anxiety about 'speaking truth to power', and burnout when nothing seems to change.

[18] VRKLEVSKI, L. P., MCKECHNIE, L. & O'CONNOR, N. 2015. The Causes of Their Death Appear (Unto Our Shame Perpetual): Why Root Cause Analysis Is Not the Best Model for Error Investigation in Mental Health Services. Journal of Patient Safety.
[19] Projects undertaken with Murray Anderson Wallace, with funding from the East Midlands Patient Safety Collaborative.


It may be that as investigators using HFACS start to ask how caregivers felt leading up to the event, they will also begin to recognise the relevance of these factors in their own work performance.

Performance inhibitors Includes mental aptitude and compatibility with the work, whether caregivers were qualified and experienced for the level of task, and permanent factors that may affect performance such as poor hearing, lack of physical strength, or chronic mental illness.

As Peerally and colleagues have noted, investigations in healthcare are rarely conducted by "the expert accident investigators who are proficient in systems thinking and human factors, cognitive interviewing, staff engagement and data analysis that are characteristic of other high risk industries".[20] There are some very experienced investigators in healthcare, but a larger number of people involved in investigations are expected to carry out the work with quite limited training in either investigative method or human factors. It would be over-optimistic to think HFACS could wholly compensate for this. But the army of expert investigators needed even to meet current demand does not exist. For pragmatic reasons, another solution (possibly temporary) is required. With human factors descriptors built into it, HFACS can scaffold investigator knowledge and performance so that immediate incremental improvement becomes possible.

Communication/Coordination/Planning Includes the range of communication, co-ordination and teamwork issues that impact performance, e.g. failure to question or be questioned, ineffective handover, forgotten communications, barriers due to interpersonal relationships.

Investigations often identify communication and co-ordination issues as a factor, albeit those descriptions are often broad ('poor communication') rather than specific ('standard nomenclature not used'). Analogous factors can also undermine the effectiveness of investigations. There is often limited communication with service users, patients, or families affected by harm, and they are not always asked for their perspective on events; different investigation teams in organisations rarely co-ordinate their investigations and may not communicate their findings to each other, so trends are missed; and investigators are not immune to the fear of speaking up to authority, so may be cautious about emphasising the contribution made to adverse outcomes by high status actors or high level organisational dysfunction.

Again, the structure and codes to be found in HFACS may serve to ameliorate some of these problems. The ability to code reports and hold these codes in a database makes it possible to track themes across sites and organisations. But there are also systemic problems around communication and co-ordination, and these take us up to the next level in the HFACS system, Level Three.
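As a sketch of how such tracking might work: if each nanocode carries its parent level and subcategory, idiosyncratic findings from many reports can be rolled up into comparable counts across sites. The nanocodes and the code-to-parent mapping below are hypothetical.

    from collections import Counter

    # Hypothetical nanocodes mapped to their parent (level, subcategory).
    NANOCODE_PARENT = {
        "PE-LIGHTING": ("Preconditions", "Physical"),
        "AMS-FATIGUE": ("Preconditions", "Adverse mental states"),
        "OP-WORKLOAD": ("Supervision", "Operational planning"),
        "COM-HANDOVER": ("Preconditions", "Communication"),
    }

    # Nanocodes assigned in three separate investigation reports.
    report_codes = [
        ["PE-LIGHTING", "AMS-FATIGUE", "OP-WORKLOAD"],
        ["AMS-FATIGUE", "COM-HANDOVER"],
        ["OP-WORKLOAD", "AMS-FATIGUE"],
    ]

    # Rolling idiosyncratic findings up to subcategory level makes
    # similarities between very different event stories visible.
    by_subcategory = Counter(
        NANOCODE_PARENT[code] for codes in report_codes for code in codes
    )
    for (level, subcategory), n in by_subcategory.most_common():
        print(f"{level} / {subcategory}: {n}")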

HFACS Level Three: Supervisory

Supervisory activities Includes factors relating to oversight of personnel, including providing appropriate orientation, training, safety information, and guidance. Senior clinicians leading teams (such as surgeons and ward managers) are viewed as supervisors.

Operational planning Management and assignment of work including risk management, scheduling, timetabling, team formation, excessive workload etc.

Supervisory non-concordance Existing rules, regulations, instructions or standard operating procedures are disregarded by management. Examples include violating or permitting violation of policies and procedures, not implementing Patient Safety Alerts, knowingly assigning tasks to unqualified personnel.

[20] PEERALLY, M. F., CARR, S., WARING, J. & DIXON-WOODS, M. 2016. The problem with root cause analysis. BMJ Quality & Safety.


HFACS practitioners have observed a striking similarity in retrospective reviews of investigations: there is generally scant analysis of the supervisory context. Investigators too infrequently look up the organisational system for blunt end factors that contribute to harm materialising at the sharp end.[21]

One of the impediments to doing so is investigators' own status and position within their organisations. Located at the mid-level, investigators can feel inhibited in putting questions to managers and clinical leaders above and around them. To do so potentially jeopardises interpersonal relationships and the organisational interdependencies that enable investigators to get their job done.

A second problem is the unwritten convention that investigators should be focusing on identifying who did what at the sharp end. 'Folk' theories of causation confuse 'proximity' with 'cause'. Investigators are expected to focus on the end of the chain of causation, where the connection between an effect and what caused it appears to be clear. It can be far more challenging to evidence the impact of blunt end decisions, because between the blunt end and the patient harm there is generally a visible act or omission of the sharp end professional. As things stand, the hazard for an investigator stepping outside of conventional accounts of causality is that this is as likely to invite questions about the competence of the investigator as about the competence of superiors.

Not correcting known problems Includes instances where deficiencies are known and have been reported to management but have continued uncorrected. Examples include not addressing inappropriate behaviour, failure to correct reported safety hazards, etc.

Most healthcare organisations are awash with unsolved problems that have repeatedly been escalated to management, in person or via reporting systems; and with action plans that have not yet been implemented. Failure to take corrective action is a feature of the 'organisational normal' that, along with short staffing and production pressure, is taken for granted both by supervisors and investigators. Few at supervisory level have the time or resources to attend to all of the problems brought to their attention. The interesting question is how they decide which to manage, and which to ignore.

Agreement to adopt HFACS as the organisation's investigative approach could potentially mitigate some of the difficulties I have identified, because asking questions about blunt end supervisory activity would be mandated by the framework. However, this brings into relief the importance of organisational leaders embracing HFACS as 'the way we do things' and consistently expecting investigators to include analysis of supervision. I discuss leadership of implementation further in the next section of my report.

HFACS Level Four: Organisational Influences

As we examine organisational influences, we begin to see some of the very real constraints under which investigators are attempting to promote organisational learning.

Intra-organisational (within organisation) processes The processes by which things get done in the organisation including corporate procedures and systems, safety and quality programs etc.

At organisational level, the most significant weakness in current investigative processes is that they constitute what the poet Tennyson memorably described as a "codeless myriad of precedent, that wilderness of single instances".[22] Organisations are attempting to learn about systemic risk by looking at isolated cases of harm, producing highly variable incident investigation reports that are resistant to meta-analysis, and implementing low level fixes for every event.

Most organisations rely upon a large group of investigators either to carry out investigations single-handedly or to constitute panels. There is considerable variation in how they work. In the UK, investigation methods and report templates are usually derived from the 'London Protocol',[23] promoted by the (now abolished) NHS National Patient Safety Agency. Investigators with differing kinds and levels of expertise interpret the London Protocol / National Patient Safety Agency headings differently so that, even within organisations, investigation outputs are idiosyncratic and very difficult to synthesise.

HFACS facilitates aggregation of findings from multiple investigations. The narrative part of a report can be set out according to the HFACS categories and sub-categories. Being more specific than the London Protocol and hierarchically organised, these headings alone make it easier to assimilate multiple report findings. The nanocodes serve to break the story down further, sorting idiosyncratic story elements into predetermined error descriptors. In this way, similarities and differences between very specific event stories come into sharper focus.

The development of recommendations for action is a second significant feature of organisational process. Most organisations require the investigator to recommend mitigating actions. The assumption here appears to be that solutions will be emergent from analysis. This may sometimes be the case, but is not necessarily so. Many investigators will have good analytical skills, but have not trained to be improvement experts or system innovators. Remember, too, that many investigators will only know about the single case they are investigating and will have been charged to make recommendations that would prevent that case recurring. Given the nature of the process they work within, and the expertise they bring to it, it is hardly surprising that investigators rarely identify effective solutions to what are often 'wicked' organisational and system problems.

Even if investigators were to come up with systemic solutions, they are once again impeded by organisational process. Organisations frequently require action plans to be agreed by middle-level managers responsible for implementing them. The consequence of this is that recommendations are necessarily localised, and pitched at the lower end of the 'hierarchy of intervention effectiveness' (see Section Four). Recommended actions focus on people and processes over which middle-level managers have some control: characteristically, reminders to staff to be more vigilant, calls for further training, and changes to written policies.

A further difficulty is that many stakeholders expect, and external organisations require, that recommendations and an action plan be generated for every incident. Some immediate local action may of course be necessary. But organisations are aware that action planning on the back of every incident investigation is resulting in a proliferation of 'fixes', has failed to solve systemic problems, and is inhibiting organisations from focussing their energy on selected improvement priorities to greater effect. Unfortunately, they feel compelled to do it anyway, which brings us to inter-organisational processes.

[21] DILLER, T., HELMRICH, G., DUNNING, S., COX, S., BUCHANAN, A. & SHAPPELL, S. 2014. The Human Factors Analysis Classification System (HFACS) applied to health care. Am J Med Qual, 29, 181-90.
[22] Alfred, Lord Tennyson, 'Aylmer's Field' (1863).
[23] TAYLOR-ADAMS, S. & VINCENT, C. 2004. Systems analysis of clinical incidents: the London protocol. Clinical Risk, 10, 211-220. The London Protocol is, like HFACS, derived from James Reason's work. However, it is less structured and not set out as an organisational hierarchy.

Inter-organisational processes (between organisations) Identifies systemic weaknesses arising from organisational boundaries between care providers.

Care organisations function within a complex system of public, political and health services administration. In England, multiple systems (legal, political, administrative, regulatory) and multiple organisations (Clinical Commissioning Groups, NHS England, NHS Improvement, Care Quality Commission, NHS Litigation Authority) impose accountability and demand assurance. One effect of this is that the investigative 'micro-system' within the organisation has to be tailored to meet a range of external requirements, even if this means doing things that are known to be ineffective. Organisations already know that the obligation to produce an action plan for every investigation results in a proliferation of 'fixes', but they feel obliged to produce them anyway. Different agencies (Commissioners and Coroners for example) have differing expectations of what should appear in an investigation report and each requires that investigation reports meet their own norms: for example, that a single root cause be identified. In short, investigators are the servants of too many masters.

Providers are also aware of limited learning between organisations, despite the UK having an ostensibly national health system that should facilitate this. Incident reports go 'up' to Clinical Commissioners for purposes of assurance, but findings rarely travel 'sideways' to other commissioners or providers. In any event, given the degree of variation between reports, aggregation of findings is even less feasible across multiple organisations than it is within them. One of the potential uses of HFACS could be to set an agreed framework for assurance purposes (e.g. by Clinical Commissioning Groups, so long as they continue to hold responsibility for oversight).[24] This would not entirely solve the problem of multiple expectations from different agencies, but would at least set clearer expectations in one quarter.

Human Resource management Issues in securing the human resources necessary to carry out the vision

Having sufficient caregivers with good enough skills is one of the most vexatious problems besetting care providers. But staffing problems are so endemic that they constitute the ‘organisational normal’, and as a result receive modest attention in incident analyses and recommendations. From the reports they receive, Board members and others could be forgiven for thinking that whilst staffing is a problem, error by staff at the sharp end is a bigger one. So long as the relationship between staffing and safety remains opaque within organisations, the problem remains one to be solved at the sharp end by staff going ‘above and beyond’ to compensate. Looked at from a different angle, we can also see issues related to securing the human resources necessary to conduct good investigations. Many, if not most, investigators take on investigations as one component in another job, either clinical or administrative. Investigators may have limited access to training, continuing professional development, and supervision from expert investigators. There are no occupational standards for investigators, no professional associations, and no form of accreditation or credentialing. My comments are not intended to impugn the commitment, knowledge or skills of healthcare investigators but rather to draw attention to failure to invest in people undertaking this difficult role.

Monetary resource allocation Excessive cost cutting or insufficient funding.

In its review of NHS investigations into service user deaths, the Care Quality Commission “worked with two mental health trusts, one acute trust and one community trust to estimate the costs of the activities they carry out for reviews and investigation”. Their data show stark variation between Trusts, with one spending 0.06% of its income on investigations, and another three times that at 0.18%. CQC make no comment on what level of expenditure is sufficient to secure an effective process, but the view of many observers is that investigation activity is seriously under-funded.

Equipment and facility resources: issues arising from the design, purchase and maintenance of equipment or facilities.

I have noted above that recommendations rarely encompass large-scale issues such as ward design or re-engineered ICT systems, because these are not ‘doable’ at the middle level.

Organisational climate: indices of safety culture, such as levels of incident reporting, and other aspects of organisational climate that impact on safety.

It is widely agreed that just treatment of caregivers whose patients incur harm - a just culture - is a critical component of overall safety culture. Most NHS organisations incorporate the Incident Decision Tree, developed by the National Patient Safety Agency, into their investigations to help discriminate between acceptable and unacceptable actions. But many caregivers attest to a pernicious blame culture in healthcare.

24 This use was suggested during discussion at the Mayo Clinic of the assurance process in Minnesota. An agency contracted by the Minnesota Department of Health reviews investigation reports, and appears to have adopted a framework akin to HFACS.


The Incident Decision Tree is only one small element in the NHS’s culture of accountability, but there is something of a paradox buried in its use. Because many incident investigators focus on the actions of caregivers proximate to the incident, the Incident Decision Tree is invariably used to categorise what proximate caregivers do as acceptable or unacceptable. It is not used to categorise what higher-level actors do as acceptable or unacceptable. To some extent, a tool intended to exculpate caregivers who are not at fault has ended up reinforcing a tendency to see caregivers as those ultimately at fault. Investigations using HFACS should result in the production of a ‘balanced scorecard’ encompassing the range of sharp end and blunt end factors that combine to cause a harmful event. This does not absolve anyone from responsibility, but rather locates responsibility in the right place. By doing this, could HFACS contribute to engineering a just culture?

Summary: a baker’s dozen of potential benefits

HFACS could help to improve investigations in the following ways:

1. Mitigating investigator cognitive bias
2. Functioning as a ‘cognitive forcing strategy’, ensuring that all potential issues are considered
3. Helping investigators who lack knowledge of human factors to identify risks
4. Helping clinical investigators stand back from their focus on the patient, to see the system
5. Pulling physical and technological factors into focus, so they receive due consideration
6. Prompting investigators to explore the impact of caregivers’ physical and mental states
7. Empowering investigators to include analysis of supervisory level risk
8. Enabling organisations to aggregate findings from multiple investigations
9. Helping to identify themes across departments, sites and provider organisations
10. Helping to identify priorities for improvement action, on the basis of thematic analysis
11. Promoting a ‘balanced scorecard’ of human error, contributing to a just culture
12. Serving as a human factors informed framework for quality assurance purposes
13. Prompting questions about whether the resources allocated to investigations are adequate

If those are the potential gains, what has been the experience of healthcare providers implementing HFACS in their organisations?


SECTION THREE LEARNING FROM EXPERIENCES OF USING HFACS IN HEALTHCARE

I have organised this section of my report around striking phrases I have heard during my conversations with people using HFACS, both in the US and in the UK. However, for reasons of confidentiality explained in my introduction, I have not ascribed comments to specific individuals or organisations.

Systems and the contexts for change

Healthcare organisations exist within larger social systems that shape what it is possible to do, and the way that things are done. Such systems include corporate structures, legal provisions surrounding clinical negligence claims, and professional and organisational regulation. The different systems underpinning the provision of healthcare in the US and UK affect the processes of investigating healthcare harm, and the scope for sharing the analysis that comes out of them.

“We can’t say that RCAs aren’t working”: cultures of corporate silence

Healthcare harm is embarrassing. Over several decades, analysts have explained how harmful events ‘incubate’ in an atmosphere variously described as organisational ignorance, organisational silence, organisational blindness, or cultural censorship.25 As Macrae has written, “organisations can be defined by what—and whom— they choose to ignore”.26 Some I met in the US voiced concern that their industry was ignoring the failure of Root Cause Analysis as a method. They viewed the competitive structure of the US healthcare industry as a disincentive to admitting that RCA was not working, because no organisation wanted to be first to state publicly that they were not learning from harm. The conviction that RCA should work (albeit that we do it badly) has a hold in England too. The quality of investigations has been criticised, but the principles of RCA are generally viewed as sound.27 In my experience, where UK organisations have identified shortcomings in their investigations, they have commissioned fresh RCA training for staff. Section One of my report implies that this response is doomed to failure.

“Building a mummy of Band-Aids”: the pressure of external demands

Clinical commissioning groups in England exert considerable influence over incident management. They require serious incident reports to be submitted for approval, and an action plan to accompany every investigated event. In doing so, they have helped create ‘the problem of many fixes’. The players are different in the US, but a similar problem has emerged there. The Joint Commission accreditation process28 exerts considerable influence over organisations’ systems and behaviours. The Commission promotes a voluntary reporting system, which entails submission of an organisational action plan for reported events. The expectation that organisations will produce action plans on an event-by-event basis has effects akin to those seen in England: as one observer described it to me, it results in ‘a mummy of Band-Aids’.

25 MACRAE, C. 2014. Early warnings, weak signals and learning from healthcare disasters. BMJ Quality & Safety, 23, 440-445. TURNER, B. A. & PIDGEON, N. F. 1997. Man-made disasters. See also HART, E. & HAZELGROVE, J. 2001. Understanding the organisational context for adverse events in the health services: the role of cultural censorship. Quality in Health Care, 10, 257-262.
26 MACRAE, C. 2014. Early warnings, weak signals and learning from healthcare disasters. BMJ Quality & Safety, 23, 440-445.
27 PEERALLY, M. F., CARR, S., WARING, J. & DIXON-WOODS, M. 2016. The problem with root cause analysis. BMJ Quality & Safety.
28 The Joint Commission accredits healthcare organisations. Most, but not all, providers are accredited. Joint Commission accreditation is a prerequisite for receiving public healthcare funds in the form of Medicare and Medicaid. These make up a substantial proportion of many organisations’ income, so JC expectations exert considerable influence.

“We can’t talk freely about Patient Safety Work Products”: investigating in the shadow of the law

Legal considerations shape every patient safety investigation. In England, incident investigation documents are discoverable in clinical negligence claims. Organisations may not make it easy for claimants to get sight of relevant material, and there are reprehensible cases of concealment.29 But my experience has been that in professional forums, professionals generally feel able to talk openly about anonymised cases, even while legal proceedings are pending.

In the US, federal laws of evidentiary privilege apply to patient safety investigations (there are equivalent, but varying, provincial laws in Canada). The protected documentation - called ‘Patient Safety Work Products’ - can include data, statements, and the investigation report itself, and is largely not discoverable by claimants in legal proceedings. The rationale for these laws is that they allow organisations to investigate and discuss patient safety matters without fear that improvement activity could be used against them. Some commentators argue that laws of evidentiary privilege have had perverse consequences, and have undermined some patient safety activities.30 My sense while in the US was that protecting investigation documents as Patient Safety Work Products was serving to inhibit discussion. Patient Safety Work Products, I heard, could only be discussed in confidence in restricted forums. Far from making it easier to discuss safety cases freely, this made it difficult to share information across the organisation - and even more so between organisations and around health systems. There is little to be gained from improving the quality of investigations if findings cannot be openly discussed.

Experiences of implementing HFACS

Introducing HFACS entails changing organisational routines and disrupting settled perspectives. Change is refreshing to some, challenging to others, and sometimes just tricky to manage.

“You find a richness in every day little things”: investigators and HFACS processes

Some very experienced investigators quickly recognise the weaknesses in their existing approach or systems, welcome a new approach, and are appreciative of the opportunity to learn about human factors. Others are less eager to change how they work, and I have heard varying explanations for this: scepticism that there is a problem with existing approaches, or that HFACS is the answer; a feeling that investigators’ existing skills and performance are implicitly devalued; some apprehension about mastering new methods and meeting new expectations; and general change weariness. As with any organisational transition, there is a need to be attentive to the values, views, and commitments of those at the sharp end.

The organisations that shared their experience with me had all taken different approaches to doing investigations, and indeed had different processes and approaches within constituent parts of the same organisation. One organisation commented that, on reflection, when they introduced HFACS they had needed to understand better how it could be insinuated into different systems and ways of working across a number of sites. Their insight proved very valuable, as it prompted me to look carefully at already existing processes in the NHS organisations I started working with. Doing so revealed remarkable diversity both in RCA/investigatory practice, and in organisational processes designed to serve the NHS’s Serious Incident Framework.

29 As evidenced in the reports of the Parliamentary and Health Service Ombudsman. A good review of candour in practice is BIRKS, Y. 2014. Duty of candour and the disclosure of adverse events to patients and families. Clinical Risk, 20, 19-23.
30 LAUTH, L. A. 2007. The Patient Safety and Quality Improvement Act of 2005: An Invitation for Sham Peer Review in the Health Care Setting. Ind. Health L. Rev., 4, 151.


HFACS requires investigators to look at a wider range of risk factors than is customary. What is the evidence that they do alter their practice, and what are the implications for process?

First, the indications from US users and from my NHS project are that HFACS does indeed prompt investigators to reframe their inquiry. NHS investigators have found, for example, that referring to HFACS to generate hypotheses about possible causes (a first stage in systemic investigation) generates fresh ways of thinking and different themes for analysis. As might be anticipated, the wider scope of information sought with HFACS can increase the time it takes to carry out some aspects of an investigation. I heard in the US that if the full range of risks is to be explored, interviews take longer than usual; this has also been our experience of trialling HFACS in the NHS. But it is difficult to compare the time taken for an HFACS-guided investigation with existing practice: it depends both on what existing practice is, and on how the HFACS investigation is approached.

Second, investigators need to understand what they are trying to find out when they look at less familiar areas. Investigators see the need to explore precondition factors, and report that the main problem encountered here is the reaction of interviewees (I discuss this below). However, investigators using HFACS have encountered difficulty knowing how to inquire into supervisory and organisational level issues. This is unsurprising, given how consistently retrospective reviews of Serious Incident Reports using HFACS reveal a paucity of analysis at these levels.31

Third, in our NHS project we have found that answering the questions raised by HFACS requires greater attention to corporate data (such as procurement, maintenance, reporting, activity, appraisal, and training data). These are data that NHS organisations routinely gather, but rarely collate for review (if indeed they review them at all). Analysing them undoubtedly provides a fresh perspective on what is happening at the sharp end, but doing so requires time and analytical skill.

Finally, I noted in Section Two that one of the significant advantages of HFACS is the facility to aggregate and analyse multiple reports using the HFACS nanocodes. There are several versions of the HFACS nanocodes in existence, emanating originally from work with US healthcare providers.32 I reviewed these versions, and have derived two sets from them for use in NHS organisations and the independent sector in the UK: one set for mental health and community health providers, and the other for use in acute general and teaching hospitals.

There were two reasons for reviewing the HFACS nanocodes. The nanocodes convert idiosyncratic story elements into predetermined error descriptors, making it easier to trace similarities and differences across events. To be effective, the nanocodes have to be sufficiently specific that they can ‘stand for’ the significant features in the underlying stories, whilst also being sufficiently general that the stories can be aggregated. The first purpose of my review was therefore to see how well the nanocodes expressed aspects of NHS care. Most served well, but some new ones were added to capture specific features: for example, the use of the Care Programme Approach in mental health. The second reason for review is that, as descriptors, the nanocodes themselves convey meaning. The original versions are couched in terms of failure, a value-laden approach that is both unnecessary and unhelpful. I have amended the codes for the NHS so that, for instance, ‘failure to’ now simply reads ‘did not’. This makes the codes more palatable to people whose actions are investigated, and presents a more neutral account to readers.

The coding process undoubtedly takes extra time (although this can be offset by time saved on synthesising reports for organisational review purposes). But miscoded data are meaningless. Unless there is a reasonably high degree of reliability, the analytic advantage is lost and the time invested in coding is time wasted.33 So, if multiple coders are used, time will be required for testing and maintaining inter-rater reliability. Organisations have responded to the coding challenge in different ways. One had experimented with coding being done by the original incident investigators, and had then switched to having coding done by a central team before coded reports were shared more widely. There is no obviously optimal approach; much depends on the nature of the organisation and the scale of incident investigations being undertaken.

31 This is true both of my retrospective reviews of NHS and independent sector SI reports, and of DILLER, T., HELMRICH, G., DUNNING, S., COX, S., BUCHANAN, A. & SHAPPELL, S. 2014. The Human Factors Analysis Classification System (HFACS) applied to health care. Am J Med Qual, 29, 181-90.
32 See this link for a composite set of nanocodes based on those in circulation: https://www.ecri.org/EmailResources/HRC/HFACS/Worksheet%201%20HFAC4HC%20Taxonomy_BD.pdf
33 OLSEN, N. S. & SHORROCK, S. T. 2010. Evaluation of the HFACS-ADF safety classification system: inter-coder consensus and intra-coder consistency. Accident Analysis & Prevention, 42, 437-444.
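To illustrate what coding and aggregation involve in practice, the sketch below shows, in Python, how coded findings might be tallied across investigations and how agreement between two coders can be checked with Cohen’s kappa. It is a minimal sketch of the general technique: the nanocode labels, incident identifiers and data structures are hypothetical illustrations, not the official HFACS taxonomy or any organisation’s actual codes.

from collections import Counter

# Hypothetical nanocodes attached to three investigations. Labels are
# invented for illustration; note the neutral 'did not' phrasing
# discussed above, rather than 'failure to'.
CODED_REPORTS = {
    "incident_001": ["AE101 did not confirm patient identity",
                     "PC204 fatigue from extended shift",
                     "OP301 inadequate staffing on ward"],
    "incident_002": ["AE101 did not confirm patient identity",
                     "PC210 interruption during medication round",
                     "OP301 inadequate staffing on ward"],
    "incident_003": ["PC210 interruption during medication round",
                     "SV402 did not provide appropriate supervision"],
}

def aggregate_nanocodes(coded_reports):
    """Count how often each nanocode appears across all investigations."""
    counts = Counter()
    for codes in coded_reports.values():
        counts.update(codes)
    return counts

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders assigning one code to each finding."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    # Thematic summary: the most frequent codes point to priority themes.
    for code, count in aggregate_nanocodes(CODED_REPORTS).most_common():
        print(f"{count} x {code}")

    # Two coders independently code the same ten findings.
    coder_a = ["AE101", "PC210", "OP301", "AE101", "SV402",
               "PC210", "OP301", "AE101", "PC204", "OP301"]
    coder_b = ["AE101", "PC210", "OP301", "PC204", "SV402",
               "PC210", "OP301", "AE101", "PC204", "SV402"]
    print(f"Cohen's kappa: {cohens_kappa(coder_a, coder_b):.2f}")  # 0.75

On a common rule of thumb, kappa values much below about 0.6 would suggest that coder training, or the code definitions themselves, need attention before aggregated counts are relied upon for thematic analysis.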

“You don't get good data out of big hairy events”: HFACS and the investigated

It is hard to be a healthcare professional who has unintentionally harmed a patient, and harder yet to endure the multiple inquisitions that follow.34 For those under investigation, HFACS offers both a degree of solace and a degree of concern. There is solace in HFACS refocusing attention (and blame) from the sharp end to wider contributory factors. By and large I heard it reported that familiarity with HFACS breeds sharp end acceptance. But the benefits to healthcare, and healthcare staff, of a more balanced approach take time to realise. The organisations I spoke with had not yet been using HFACS in a sustained way over a sufficiently long period to be able to demonstrate its acceptability to staff.

The converse of solace is, however, potential peril: HFACS investigators need to look more assiduously at the level of preconditions, and these can be of some concern to staff. Two of the HFACS preconditions - physical and technological factors - are matters of fate, design, or the responsibility of the organisation. But the preconditions also comprise mental, physiological, personal and communication/co-ordination factors that are precursors to actions customarily presented in RCAs as individual errors. Professionals are unaccustomed to being asked questions about these preconditions (except communication factors) and I heard that they are understandably wary of answering them, viewing them as potentially inculpatory. HFACS users told me that inquiring about preconditions was easier if explanations for the questions were offered (“we are asking you about job stress, because if there are aspects that are stressful we can look at how to change them”); and also once asking about preconditions came to be known as the investigators’ standard practice. The emerging practice among NHS users is to share interview questions for all four HFACS levels in advance of meetings, giving interviewees time to think and reflect.

The purpose behind investigating the preconditions is to seek to eliminate them, rather than to blame people at the sharp end when precondition risks materialise. But if the aim is to manage preconditions to error, caregivers have to be able to trust they will be treated fairly if they are candid with investigators. To achieve this, organisational leaders need to treat professionals justly, as well as institute changes in the investigative process. I discuss this further below.

What made the single biggest difference to the quality of data that investigators gathered? One provider argued it all depended on how catastrophic the event had been. The patient safety industry cherishes its redemption stories: the disastrous errors that galvanize progress towards better, safer care. But I heard that in fact the most fruitful investigations, generating the most useful data, were inquiries into near misses. When professionals prevent an adverse event from occurring, they see just as clearly where the active and latent risks lie. They know which barriers failed, and which preventative actions succeeded. But now they are able to talk about the event without shame and fear; rather, with a degree of pride. Are we in thrall to grand redemption narratives, when the stories we should be telling are parables of near miss and incremental improvement?

So far, the picture that has emerged from the experience of those implementing HFACS is that it does advance positive change in the culture surrounding investigations; but also that in order for HFACS to work effectively, the culture surrounding investigations has to change. It is up to organisational leaders to cut this Gordian knot, so it is to leadership I turn next.

34 SCOTT, S. D., HIRSCHINGER, L. E., COX, K. R., MCCOIG, M., BRANDT, J. & HALL, L. W. 2009. The natural history of recovery for the healthcare provider “second victim” after adverse patient events. Qual Saf Health Care, 18, 325-30. PANELLA, M., RINALDI, C., VANHAECHT, K., DONNARUMMA, C., TOZZI, Q. & DI STANISLAO, F. 2014. [Second victims of medical errors: a systematic review of the literature]. Ig Sanita Pubbl, 70, 9-28. WU, A. W. & STECKELBERG, R. C. 2012. Medical error, incident investigation and the second victim: doing better but feeling worse? BMJ Qual Saf, 21, 267-70.

“What makes it difficult for staff to change? That applies to you too!”: HFACS for leaders

Every year, healthcare leaders around the world receive hundreds of thousands of serious incident investigations informing them that the ‘root cause’ of patient harm was failure of caregivers at the sharp end. Investigating healthcare harm with the frequency and the tools that we do has had a perverse outcome: it has reinforced deceptive ‘folk’ theories of causation and placed responsibility squarely on the shoulders of those at the sharp end. Cause has been confused with proximity to the event, so that what causes harm to patients is what caregivers do, not the dysfunctional systems within which they work.

Folk theories of causation supply a reassuring narrative to organisational and system leaders. The decisions they make are sufficiently far removed from most events that causation can rarely be attributed to them. An HFACS-driven account of a serious incident is liable to afford a less consoling picture. Investigators who look carefully at supervisory level and organisational level issues are highly likely to find active and latent risks there. The lessons that emerge from using HFACS may be uncomfortable ones for leaders to learn.

During my conversations in the US I heard how HFACS generated some difficult messages for those in the upper echelons of organisations, messages that could be awkward to convey and to hear. The challenges associated with attributing causality to higher-level decision making were explained in different ways. One view was that leaders do not want to hear bad news (a factor in the cultures of corporate silence I referred to earlier) and tend to respond by neutralizing or undermining the messengers who deliver it. Similarly, it was suggested that leaders’ responses differ according to whether they are looking for grounds to make systemic changes they favour, or are resistant to change. The former may ‘cherry pick’ the evidence HFACS offers, whereas the latter simply deny any causal linkage between their decisions and index events. Some expressed misgivings, too, that actions with cost implications would be deferred until leaders had overwhelming evidence, from analysis of aggregated reports, that they were absolutely necessary.

Rather differently, it has also been pointed out that the problems which fall to be resolved at organisational level are vicious ones with no easy answer: shortages of healthcare staff; survival in a competitive marketplace (in the NHS, shortage of funds and rising demand); rising expectations of safety and risk management; complex care systems that require networks of co-operation. HFACS can help leaders see how their handling of these macro-level issues resolves eventually into micro-level actions or omissions by staff at the sharp end. This is useful, because it contributes to a better understanding of the systemic problems that need to be resolved if patients are to be kept safe. But HFACS does not do more than that: it is not an automatic solution-generating machine.

I was told that leaders committed to HFACS were inclined to take the long view. Using HFACS as a tool to help bring about cultural change, they accepted that organisational culture does not change annually. Cultural interventions needed to be measured and sustained over a long period. One organisation had deliberately set out to use HFACS with the aim of changing its culture. It was proud that since adopting HFACS the number of reported incidents had ‘mushroomed’: it welcomed this increase as evidence of growing awareness of the merits of reporting, and of increased trust in the organisation’s response.

HFACS potentially impacts everybody in an organisation, from top to bottom. Using it to best effect requires widespread understanding of what it is for, but this presents a challenge for implementation. Some organisations had provided extensive training and information about HFACS at executive level but felt they had not sufficiently engaged caregivers; others had concentrated on informing caregivers and middle managers but felt they had not completely engaged senior leaders. Whichever approach they had adopted, organisations confronted the same question: how could they influence leaders to adopt appropriate solutions to the problems HFACS had helped them to identify?


“Even very clever people think the solution is greater vigilance”: HFACS and fixes35

HFACS can certainly help to improve the quality of investigations, but that is not all that is needed for learning. Designing, implementing and evaluating a systemic response is a different challenge. In the final section of my report I shall review the problematic relationship between the quality of investigations and the quality of proposed solutions. At this stage it is sufficient to note that in much the same way that investigations fall prey to ‘folk theories’ of causation, recommended responses fall prey to ‘folk theories’ of organisational safety. Even armed with HFACS analyses, it can be difficult to persuade leaders that something other than greater caregiver vigilance, more training, or a new policy to shape behaviour is required. For leaders to embrace more systemic solutions they need either an understanding of systems safety, or a willingness to trust the advice of others who possess it.

“Give the front line the nanocodes, they recognise all the problems themselves”: HFACS reflections

One of the most interesting reflections from HFACS users was how to use the knowledge of error embedded in the HFACS nanocodes. Taken together, the nanocodes supply a comprehensive overview of virtually every risk likely to materialise. As one of my informants commented, if you gave caregivers the nanocodes they could identify all the organisational problems for themselves. They would know which risks were most prevalent, and most likely to end up doing patients harm. Shappell has argued that HFACS can indeed be used to undertake active risk assessment. So why wait until something goes wrong to investigate?
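As a purely illustrative sketch of that proactive idea - with invented codes and invented responses, not a tested instrument - the nanocodes could be issued to frontline staff as a checklist, with a simple tally showing which risks they judge to be live on their unit:

from collections import Counter

# Hypothetical nanocodes, reworded neutrally, used here as a prospective
# ward-level risk checklist. Codes and responses are invented illustrations.
CHECKLIST = [
    "PC210 interruption during medication round",
    "OP301 inadequate staffing on ward",
    "PC204 fatigue from extended shift",
    "SV402 supervision not available out of hours",
]

# Each response is the set of codes one staff member flags as a live risk.
responses = [
    {"PC210 interruption during medication round",
     "OP301 inadequate staffing on ward"},
    {"OP301 inadequate staffing on ward",
     "PC204 fatigue from extended shift"},
    {"OP301 inadequate staffing on ward"},
]

tally = Counter(code for response in responses for code in response)

# Rank each risk by the proportion of staff who flagged it, highest first.
for code in sorted(CHECKLIST, key=lambda c: tally[c], reverse=True):
    print(f"{tally[code] / len(responses):4.0%}  {code}")

Ranked this way, the same taxonomy that structures retrospective investigation doubles as a prospective risk register, which is one way of reading Shappell’s suggestion of active risk assessment.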

Summary

The experience of HFACS users confirmed that several of the potential advantages I summarised at the end of Section Two are realisable. The first six items on the summary list have to do with improving investigative practice.

1. Mitigating investigator cognitive bias
2. Functioning as a ‘cognitive forcing strategy’, ensuring that all potential issues are considered
3. Helping investigators who lack knowledge of human factors to identify risks
4. Helping clinical investigators stand back from their focus on the patient, to see the system
5. Pulling physical and technological factors into focus, so they receive due consideration
6. Prompting investigators to explore the impact of caregivers’ physical and mental states

The organisations I met in the US had not, so far as I am aware, formally evaluated these improvements. However, I consistently heard that HFACS investigations were providing better data than before. It is certainly proving true in the NHS, where investigators involved in the HFACS trials have found they quickly gain a far deeper understanding of how harm is produced - and prevented - within their organisational systems. Continuing down the list to:

7. Empowering investigators to include analysis of supervisory level risk

I have found that this remains challenging for investigators, as does ensuring that their messages about blunt end causes are heard.

35 Shappell has designed a decision matrix called HFIX to accompany HFACS, but the organisations I visited had made limited use of it.


On the other hand, the experience of users both in the US and here in the UK points towards HFACS proving its value on items 8 to 12:

8. Enabling organisations to aggregate findings from multiple investigations
9. Helping to identify themes across departments, sites and provider organisations
10. Helping to identify priorities for improvement action, on the basis of thematic analysis
11. Promoting a ‘balanced scorecard’ of human error, contributing to a just culture
12. Serving as a human factors informed framework for quality assurance purposes

That leaves us with the final issue of resourcing investigations:

13. Prompting questions about whether the resources allocated to investigations are adequate

This is, I think, an issue for the future. One of the US providers I met indicated that their organisation’s investigative resource had long been insufficient, that adoption of HFACS had made this apparent, but that the cost of implementing HFACS to full effect had been underestimated. Another US organisation had been an early adopter of HFACS, but had not made further resources available for investigations after the initial trial. That organisation had a strong improvement culture in which LEAN was dominant. Having made a significant investment in LEAN, it may have had limited appetite for embracing alternative ways of defining, and hence learning from, organisational failure.

There are morally loaded choices to be made about resourcing investigations. In the case of for-profit healthcare providers, the question is how far they are willing to reduce their margin in order to accomplish their patient safety mission. (This is also true, but to a lesser extent, of not-for-profit providers in the US, whose margin is returned to the community in the form of social benefit.) The question for the NHS, where budgets are fixed but demand is not, is whether allocating resources to direct patient care weighs heavier than allocating resources to improving safety. There is of course an economic argument that safe care is cheaper in the long run, but it is a very long run.


SECTION FOUR ANALYSING FAILURE OR LEARNING FROM FAILURE?

In Section One I emphasised that learning from harm means very much more than merely understanding why it happened. It means changing things so that next time (and the next, and the next) things turn out better. In the course of my Fellowship it became ever clearer that improving the quality of investigations is certainly necessary, but by no means sufficient, for learning from failure. Indeed, I suspect it may turn out to be very much easier to improve investigations than to improve the action that follows from them.

In this section I share insights from human factors and design specialists who have talked with me about their work exploring mitigations for sources of healthcare error. I owe a particular debt to interviewees at Healthcare Human Factors in Toronto, whose insights were infinitely richer than I have been able to capture here. The examples I have chosen appear at first to be caregiver errors, susceptible to quite simple solutions. Looked at more closely, however, they show how mis-steps occur in the context of complex work flow processes and disordered organisational systems, and are far from simple to solve.

A starting point: the hierarchy of intervention effectiveness

The hierarchy of intervention effectiveness is familiar to the improvement community, having originated as part of a toolbox to support better management of medication error.36 It has a fairly slender evidence base, however, and is better viewed as an expression of expert opinion than as a scientifically proven concept. The diagram below is taken from a paper by Cafazzo: he explains that the hierarchy places “interventions related to human behaviour toward the bottom of its scale in favour of technological interventions, which are viewed as more reliable. This should not suggest that human-based mitigation interventions (e.g., training, policy and checklists) are not without value”.37 That latter point is important. There is a tendency within the clinical human factors community to castigate providers who continue to be overly reliant on human-based mitigations. But healthcare is a primarily knowledge-based human practice. Even with orderly systems, it will still need highly trained, responsive, thoughtful actors. A more balanced discussion is needed about when human-based interventions are necessary, helpful or harmful.

36 INSTITUTE FOR SAFE MEDICATION PRACTICES 1999. Medication Error Prevention “Toolbox”.
37 CAFAZZO, J. A. & ST-CYR, O. 2012. From discovery to design: the evolution of human factors in healthcare. Healthcare Quarterly, 15.

[Diagram not reproduced: the hierarchy of intervention effectiveness, from Cafazzo & St-Cyr (2012).]

Cafazzo notes that when healthcare providers do recognise the limitations of ‘blame and train’, they nevertheless “continue to seek silver-bullet solutions such as checklists, bar-coding and crew resource management (CRM), adapted from aviation”. He is clear that “no single mitigation strategy will totally eliminate use errors that lead to adverse events. A tapestry of strategies that have some scientific basis for success will likely be more successful.”

Example One: interruptions

The hazard of interruptions is well known to patient safety specialists.38 They are endemic in patient care in every setting: the operating theatre, outpatient procedures, in-patient wards, and home care. In surgery, interruptions are commonly implicated in surgical never events: wrong site surgery (the surgeon is interrupted immediately prior to incision, then returns to the task with the wrong procedure or site in mind); retained swabs and instruments (an interruption distracts the scrub nurse and results in a miscount); and wrong site blocks (hence the “Stop Before You Block” process, which requires a verification moment immediately before proceeding). In home care and on in-patient wards, on the other hand, interruptions are strongly associated with medication errors.

Taking the example of a nurse medication error, it would be tempting to view it as an error on the part of the person who was interrupted. But working outwards from proximity to the error, we would see influences at every level of HFACS: task pressures on the nurse who did the interrupting; work processes, such as caring for several patients at once, that made interruptions inevitable; information technologies that interrupted work flow; perhaps training responsibilities that meant interruptions (e.g. questions) were more likely; maybe poor operational planning, so that too few staff were on the floor, or the ones who were there were unfamiliar with the environment; maybe long-term HR problems, so that junior nurses or bank staff new to the ward constantly sought guidance from senior staff. Put another way, as Westbrook et al39 point out, interruptions are not under the control of individuals: they are part of a complex socio-technical system.

A comprehensive approach would mean mitigating all of these risks. But faced with the apparent impossibility of eliminating all sources of interruption, what has been done is to try to protect professionals from interruption at the point at which it is safety critical: dispensing medication, mixing chemotherapy drugs, and so on. Unfortunately, the quality improvement landscape is littered with failed experiments of this type. Prakash et al40 from the University of Toronto note that mitigations have included prohibiting non-essential communication, using ‘Do Not Disturb’ vests, adopting checklists, and creating physical barriers to prevent interruptions. Their own work encompassed an observational study of the association between interruptions and error, a high-fidelity simulated study of interruptions, and participatory design in which nurses dispensing medications were asked to propose solutions. During subsequent testing, nurses found some of the suggested interventions both acceptable and effective. Others failed on both counts.

For my purposes, one of the most interesting features of this study was the participatory design process. In focus groups, nurses involved in the simulations brainstormed potential error mitigation strategies and rejected some common solutions (e.g. ‘Do Not Disturb’ vests) at the outset. Then the interventions were tested in an experimental setting, where some were found to work well and others failed. This approach to finding a solution differs starkly from that adopted in the wake of most investigations, where the investigator alone is expected to find an answer that works and there is little or no subsequent evaluation. The value of the Toronto study is that it demonstrates how and why complex hospital care systems thwart ostensibly worthwhile, common-sense solutions.

38 WESTBROOK, J. I., COIERA, E., DUNSMUIR, W. T. M., BROWN, B. M., KELK, N., PAOLONI, R. & TRAN, C. 2010. The impact of interruptions on clinical task completion. Quality and Safety in Health Care, 19, 284-289.
39 Ibid.

Example Two: ‘Oh Shnocks!’

Information technologies and automated processes have offered the alluring prospect of reducing human error by depriving humans of the opportunity to make mistakes. This is why they are placed higher in the hierarchy of intervention effectiveness. The promise of information technologies and automated processes has been a paperless system where patient notes and records cannot get lost, the right information is in the right place at the right time, and health workers can be stopped from writing the wrong thing, automatically prompted to do the right thing, and alerted to issues they may not have been aware of. While some of the anticipated gains have been realised, it is also true to say that one set of risks has been traded for another.41 The unfamiliar patient safety risks that health informatics and automation bring are only beginning to be understood.

On one side of the equation, it is clear that there are problems in the design of user interfaces and devices that are supposed to standardise care and make it safer. Healthcare Human Factors have compiled a very telling montage of human-device interactions, recorded during their usability tests on medical devices. Typically, clinical users feel stupid and blame themselves when they cannot use poorly designed technology. On the other side of the equation, problems emerge when technologies are on the whole well designed and work effectively, because clinical users may start to trust them too much. The phenomenon of automation bias is becoming increasingly familiar to healthcare providers, as clinical staff come to place greater trust in what machines are telling them than in their own clinical assessment.42

Again, when we look at how harm arises from increased automation and adoption of information technologies, we see a complex set of organisational and social systems that are not susceptible to an easy organisational fix. At the sharp end, clinical users are often expected to use new devices or IT systems, which may have intrinsic design flaws, with minimal training. At the blunter end, the people doing operational planning may not yet know about the errors the new technologies could precipitate, because these might only emerge in use.

40 PRAKASH, V., KOCZMARA, C., SAVAGE, P., TRIP, K., STEWART, J., MCCURDIE, T., CAFAZZO, J. A. & TRBOVICH, P. 2014. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting. BMJ Quality & Safety, 23, 884-892. 41 ASH, J. S., BERG, M. & COIERA, E. 2004. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. Journal of the American Medical Informatics Association, 11, 104-112. 42 For a compelling overview of the promise of and problems with health informatics see WACHTER, R. 2015. The digital doctor, New York, NY: McGraw-Hill Education.


Procurement processes may be weak, or it may be (as has been observed of patient electronic medical record systems) that none of the systems currently on the market are good enough. Moving higher up the chain of error causation, there is a strong commercial disincentive for device manufacturers and health informatics companies to publicise the problems that users experience with their products; so it is difficult for users to learn from each other’s failures. At the highest level, there may be weaknesses in regulation of devices and technologies. Recognising the safety risks inherent in poorly designed devices, the US Food and Drug Administration (FDA) has introduced usability tests that are now more stringent than those that apply elsewhere, including in the European Union.43 Ensuring that health informatics systems are resistant to user error will prove more difficult even than regulating devices, however. Electronic medical records and associated systems are heavily customised for the end user, whose own requirements may supply the element of hazard. So what can an investigator do when – as is frequently the case – poor usability contributes to patient harm? They cannot change regulatory requirements. They cannot change the design of the device. They can and do ask for health informatics system problems to be looked at: but they are often told either that the problem is user error, or that the system cannot be easily and cheaply fixed, or that fixing the system might cause other problems. So almost inevitably, even if an investigator trusts the hierarchy of intervention effectiveness, a recommendation for more training or greater vigilance emerges as the only practical solution.

Example Three: going home safely

My previous two examples show how complex systems can so easily frustrate simple solutions. I wanted to finish on a more optimistic note with an example of an intervention - developed through participatory design with patients - that appears to have real promise.44 ‘PODS’ is the Patient Oriented Discharge Summary, developed by OpenLab for University Health Network, Toronto. The aim of PODS is to help caregivers work with patients as partners in care, preparing them for discharge and providing critical information in an assimilable form.

It is well known that discharge from secondary care can be a critical time for patients. Adverse events following discharge can result in an unscheduled return to hospital, often via emergency services, and generally poor outcomes. The 2011 report of the Ontario Avoidable Hospitalization Expert Panel45 concluded that adverse events following discharge were attributable to four ‘root causes’: not understanding medical terms; not being fluent in English; not being able to memorize verbal instructions; and being too stressed at the time of illness to absorb information.

The PODS was developed to provide patients with “a tool that would ultimately complement the [hospital] Standardized Discharge Summary, but focused specifically on information most relevant and actionable for patients, presented in an easily understandable and usable form…In reviewing discharge summaries from different hospitals, it was evident that these were information dense documents laden with technical language meant mostly for the patient’s primary care provider. These documents were not well suited for use as tools to transmit critical information from hospital to patient at time of discharge”.46

PODS was developed over a period of several months, using a participatory design process that included service users from hard-to-reach groups. The process incorporated a literature review; interviews with providers; observation and focus groups with patients and their caregivers; a ‘cultural probe’ consisting of a diary, disposable camera and ‘postcards from home’ enabling patients to record their experiences after discharge; and usability testing of prototypes.

43 Hence organisations such as Healthcare Human Factors now undertake independent usability testing on devices that manufacturers wish to introduce to the US market.
44 A comprehensive evaluation is currently being undertaken by http://www.uhnOpenLab.ca
45 ONTARIO MINISTRY OF HEALTH AND LONG TERM CARE 2011. Report of the Avoidable Hospitalisation Advisory Panel: Enhancing the Continuum of Care.
46 The PODS Report. Reports and the PODS toolkit can be downloaded at http://pods-toolkit.uhnopenlab.ca/


The prototype PODS tested well and is now going through wider testing and roll-out. “This would have saved me so much anxiety and fear of doing something wrong when I was discharged. I didn’t want to bother my doctors and went on a hope and prayer. Even my home care people weren’t always sure of what to do,” recounted one patient.47

There is a note of caution to be sounded, however. Its developers argue that PODS is one element in the larger discharge process, not a silver bullet. To make discharge safer, they suggest, it is vital to treat the patient and their family and caregivers as ‘part of the team’, tell them what is going on in the discharge planning process, and be clear about what they can expect when they are discharged. As ever, it is likely to be the organisational systems surrounding discharge, and the way these shape activity at the sharp end, that will increase or reduce the utility of PODS. On the other hand, adoption of PODS may help to organise discharge activity in such a way that there is a real gain for patients heading home from hospital.

47 Ibid.


SECTION FIVE RECOMMENDATIONS

The experience of users in healthcare settings is that using HFACS to structure and record investigations undoubtedly provides a more complete picture of the sources of healthcare harm. However, it remains hard to question blunt end decisions, and to evidence the connection between blunt end decisions and sharp end errors. It can also be difficult to persuade leaders accustomed to ‘blame and train’ to adopt more systemic solutions. In this report I have identified a number of obstacles to learning from harm. These recommendations aim to overcome some of them.

One: use HFACS to improve the quality of investigation, and aggregate findings.

Common methods of investigation (whether RCA or a derivation of RCA) are flawed. I have argued in my report that HFACS has much to offer by way of improving investigations and that it is also valuable as a means of aggregating findings.

Two: use HFACS as a common assurance framework.

Assurance standards for incident investigations are inconsistent. In England, clinical commissioners communicate different requirements for reports, and reject them on different grounds. There is a lack of clarity about ‘what good looks like’ in SI reports. Commissioners have considerable influence. Requiring providers to report findings consistent with the HFACS framework could have a strong backwash effect on the quality of investigations themselves.

Three: invest in and support the investigator role.

The current level of training and support for investigators is incommensurate with expectations of them and the pivotal role they play in learning from harm. A number of suggestions have been made for how to ‘professionalise’ the investigator role, for example through accreditation. Consideration should be given to how to support the ‘community of practice’ of investigators, who currently work in professional isolation for the most part.

Four: make more resources available for safety investigations or make better use of the current resource.

Meticulous safety investigations require resources. Organisations face a choice. They can either do the same number of investigations as they do currently but with greater thoroughness, which will require more wherewithal; or, within current resources, they can do fewer investigations but do them in greater depth.

Five: enter cross-organisational arrangements for investigations.

The organisational level at which investigators are located constrains them from asking questions of colleagues at the same level, or above them, in their organisational hierarchy. Providers should consider entering into reciprocal arrangements with partner organisations; so that a proportion (at least) of investigations are done by outsiders, who will not be jeopardising their own position by asking difficult questions of blunt end decision makers.

Six: pay attention to the process of identifying, implementing and evaluating systemic solutions.

Learning from healthcare harm should be treated as a branch of improvement science. The current approach to solving the problems that investigations identify – case-by-case recommendations by investigators, negotiated action plans, an absence of testing and evaluation - is inadequate to the task. If there is anything we should have learned from attempting to learn from healthcare harm, it is that we need to understand better why improvement activity succeeds or fails.


BIBLIOGRAPHY

ASH, J. S., BERG, M. & COIERA, E. 2004. Some unintended consequences of information technology in health care: the nature of patient care information system-related errors. Journal of the American Medical Informatics Association, 11, 104-112.

BIRKS, Y. 2014. Duty of candour and the disclosure of adverse events to patients and families. Clinical Risk, 20, 19-23.

CAFAZZO, J. A. & ST-CYR, O. 2012. From discovery to design: the evolution of human factors in healthcare. Healthcare Quarterly, 15.

CARAYON, P. & WOOD, K. E. 2010. Patient safety - the role of human factors and systems engineering. Stud Health Technol Inform, 153, 23-46.

CARE QUALITY COMMISSION 2016. Learning, Candour and Accountability: A review of the way NHS trusts review and investigate the deaths of patients in England.

CROSKERRY, P. 2002. Achieving quality in clinical decision making: cognitive strategies and detection of bias. Academic Emergency Medicine, 9, 1184-1204.

DEPARTMENT OF HEALTH 2001. Building a safer NHS for patients: implementing an organisation with a memory.

DEPARTMENT OF HEALTH 2015. Learning not blaming.

DILLER, T., HELMRICH, G., DUNNING, S., COX, S., BUCHANAN, A. & SHAPPELL, S. 2014. The Human Factors Analysis Classification System (HFACS) applied to health care. Am J Med Qual, 29, 181-90.

ELBARDISSI, A. W., WIEGMANN, D. A., DEARANI, J. A., DALY, R. C. & SUNDT, T. M. 2007. Application of the human factors analysis and classification system methodology to the cardiovascular surgery operating room. The Annals of Thoracic Surgery, 83, 1412-1419.

FISCHHOFF, B. & BEYTH, R. 1975. I knew it would happen. Organizational Behavior and Human Performance, 13, 1-16.

GILBERT, D. T. & MALONE, P. S. 1995. The correspondence bias. Psychological Bulletin, 117, 21.

HART, E. & HAZELGROVE, J. 2001. Understanding the organisational context for adverse events in the health services: the role of cultural censorship. Quality in Health Care, 10, 257-262.

HOUSE OF COMMONS PUBLIC ADMINISTRATION SELECT COMMITTEE 2015. Investigating clinical incidents in the NHS.

LAUTH, L. A. 2007. The Patient Safety and Quality Improvement Act of 2005: An Invitation for Sham Peer Review in the Health Care Setting. Ind. Health L. Rev., 4, 151.

MACRAE, C. 2014. Early warnings, weak signals and learning from healthcare disasters. BMJ Quality & Safety, 23, 440-445.

MACRAE, C. & VINCENT, C. 2014. Learning from failure: the need for independent safety investigation in healthcare. Journal of the Royal Society of Medicine, 107, 439-443.

OLSEN, N. S. & SHORROCK, S. T. 2010. Evaluation of the HFACS-ADF safety classification system: inter-coder consensus and intra-coder consistency. Accident Analysis & Prevention, 42, 437-444.

PANELLA, M., RINALDI, C., VANHAECHT, K., DONNARUMMA, C., TOZZI, Q. & DI STANISLAO, F. 2014. [Second victims of medical errors: a systematic review of the literature]. Ig Sanita Pubbl, 70, 9-28.

PEERALLY, M. F., CARR, S., WARING, J. & DIXON-WOODS, M. 2016. The problem with root cause analysis. BMJ Quality & Safety.

PRAKASH, V., KOCZMARA, C., SAVAGE, P., TRIP, K., STEWART, J., MCCURDIE, T., CAFAZZO, J. A. & TRBOVICH, P. 2014. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting. BMJ Quality & Safety, 23, 884-892.

REASON, J. 1999. Managing the risks of organizational accidents, Aldershot, Ashgate.

SCOTT, S. D., HIRSCHINGER, L. E., COX, K. R., MCCOIG, M., BRANDT, J. & HALL, L. W. 2009. The natural history of recovery for the healthcare provider “second victim” after adverse patient events. Qual Saf Health Care, 18, 325-30.

SHAPPELL, S. A. & WIEGMANN, D. A. 2012. A human error approach to aviation accident analysis: The human factors analysis and classification system, Ashgate Publishing, Ltd.

TAYLOR-ADAMS, S. & VINCENT, C. 2004. Systems analysis of clinical incidents: the London protocol. Clinical Risk, 10, 211-220.

THIELS, C. A., LAL, T. M., NIENOW, J. M., PASUPATHY, K. S., BLOCKER, R. C., AHO, J. M., MORGENTHALER, T. I., CIMA, R. R., HALLBECK, S. & BINGENER, J. 2015. Surgical never events and contributing human factors. Surgery, 158, 515-521.

TURNER, B. A. & PIDGEON, N. F. 1997. Man-made disasters.

VRKLEVSKI, L. P., MCKECHNIE, L. & O’CONNOR, N. 2015. The Causes of Their Death Appear (Unto Our Shame Perpetual): Why Root Cause Analysis Is Not the Best Model for Error Investigation in Mental Health Services. Journal of Patient Safety.

WACHTER, R. 2015. The digital doctor, New York, NY: McGraw-Hill Education.

WESTBROOK, J. I., COIERA, E., DUNSMUIR, W. T. M., BROWN, B. M., KELK, N., PAOLONI, R. & TRAN, C. 2010. The impact of interruptions on clinical task completion. Quality and Safety in Health Care, 19, 284-289.

WU, A. W. & STECKELBERG, R. C. 2012. Medical error, incident investigation and the second victim: doing better but feeling worse? BMJ Qual Saf, 21, 267-70.


ABOUT SUZANNE SHALE

I am an independent consultant in healthcare ethics and patient safety. I develop guidance, conduct applied research, deliver a wide range of education and training provision and offer one-to-one support to healthcare professionals. I hold a PhD in medical ethics, and developed my consultancy after a successful career as an academic at the University of Oxford. I now work with public and private care organisations, regulators, medical schools, medical defence organisations, charities, patient groups, and overseas governments. I chair the UK’s leading patient safety charity, Action against Medical Accidents. I sit on the Department of Health’s Independent Reconfiguration Panel, was Ethics Advisor to NHS England’s Patient Safety Steering Group until its recent reorganisation, was a member of Health Education England’s ‘Learning to be Safer’ Expert Advisory Group and co-chair of the 2013 ‘Surgical Never Events Task Force’ for NHS England. My book Moral Leadership in Medicine: Building Ethical Healthcare Organizations was published by Cambridge University Press in 2012.
