
Igniting Innovation: Improving Health, Creating Wealth

Nursing Technology Evidence Review
Produced in collaboration with the Nursing Technology Team, NHS England

June 2015


Foreword

Health information technology will have a profound impact on how nurses, midwives and health visitors do their jobs. These changes will affect how and where they provide care, how they interact with their patients and colleagues, and how they measure their work and its effectiveness. In the health care systems of high-income countries, many aspects of nursing are increasingly supported by a wide array of technology such as electronic and personal health records (EHRs and PHRs), biometric and telemedicine devices, and consumer-focused wireless and wired systems. Technology is also integral to the blueprint for the future of the NHS; the NHS England Five Year Forward View is full of references to technological healthcare delivery:

"Raise our game on technology"

"Exploit the information revolution"

"Accelerate useful innovation"

(NHS England 2014, pp. 4, 31, 32).

This brief review surveys the academic literature regarding a group of nursing-specific technologies and applications. We have captured some of the messages arising from this evidence and noted many gaps and unanswered questions. We are right at the beginning of the learning curve for nursing technology in the NHS, and to climb it nurses must address currently unasked questions along with merely unanswered ones.

We hope this document will help nurses and midwives to achieve the greatest possible improvements in the care they deliver to patients by harnessing these innovative systems to the full.

Dr Lucy Sitton-Kent    Dr Philip Miller


Contents

Glossary
1. Executive Summary
2. Context
3. Methodology
   Inclusion criteria
   Exclusion criteria
   Review process
4. Capabilities
   4.1 Mobile access to digital care records
       Effectiveness and accuracy
       Financial impact
       Impact on patients and staff
   4.2 Digital images for nursing care
       Effectiveness and accuracy
       Combined with telehealth
       Imaging feet
       Forensic digital imaging
       Dietary intake
       Legal implications
   4.3 Remote face-to-face interaction
       Effectiveness and accuracy
       Financial impact
       Patient and staff acceptability
   4.4 Digital capture of clinical data at point-of-care and digitally enabled observations management
       Automated clinical data capture
       Effectiveness and accuracy of automated data capture
       Digitally enabled management of clinical data
       Effectiveness and accuracy of Early Warning Score (EWS) calculation and response triggering
       Prognosis/risk scoring, monitoring and decision support
       Effectiveness and accuracy of the generation of automated care summaries/handovers
   4.5 Safer care interventions
       Drug administration
       Blood products
       Effectiveness and accuracy of using patient identification
       Theoretical work
   4.6 Dashboards
       Effectiveness and accuracy
       Impact on patients and staff
   4.7 E-workforce deployment
       Effectiveness, accuracy and skill mix
       Financial impact
       Self-rostering: recruitment, retention, wellbeing, acceptability for staff
5. Limitations and learning
   Methodology
   Human factors
   Technology
   The health technologist
6. Conclusion
7. Contacts
8. References


Glossary

Automated Drug Cabinet (ADC): Automated drug cabinet that records stock levels, patient administration and reordering; controls access (e.g. by fingerprint); guides selection; and provides alerts and limited clinical decision support.

Adverse Drug Event (ADE): An event involving medication where harm, or a risk of harm, occurs.

Bar-Code (BC): Machine-readable pictorial representation of patient ID or product information.

Bar Code Medicine Administration (BCMA): Administration procedure reliant on the use of a BC scanner and patient- and product-specific bar-code labelling.

Bluetooth: Near-field communication technology.

Clinical Decision Support Systems (CDSS): HIT applications delivering, processing or presenting clinical information to assist disease management.

Computerised Physician/Provider Order Entry (CPOE): System for ordering drugs, investigations and other specific procedures.

Cross-matched to transfused ratio (C:T ratio): The number of units of cross-matched blood divided by the total number of units actually transfused (1 is ideal; >2.5 is considered wasteful).

Critical Care Outreach Team (CCOT); also "Rapid Response Team": Provision of (often nurse-led) critical care assessment and intervention beyond the confines of the intensive care department.

Early Warning Score (EWS): Simple algorithm combining conventional physiological observations into an index to predict deterioration and facilitate pre-emptive intervention.

Electronic or Digital Health Record (EHR/DHR); also Electronic Medical/Hospital/Health Record: Software which brings together key clinical and administrative data using a single interface (but may be available on a variety of devices).

Electronic Prescribing and Medication Administration (EPMA); cf. BCMA and CPOE: Electronic prescribing linked to the medicines administration record, with or without BC technology.

E-rostering (e-R): Software to manage and generate nursing and other work schedules.

Health Information Systems (HIS): Data storage systems in the health sector.

Health Information Technology (HIT): Any information and communication technology employed in the healthcare sector.

Informatics: Study of the theory and practice of the use of digital information.

Identification (ID)

Intensive Therapy/Care Unit (IT(C)U); also "Level 3 care": Mechanical ventilation.

Length of Stay (LOS)

Non-Invasive Blood Pressure (NIBP): Measured via a blood pressure cuff.

Natural Language Processing (NLP): Computer analysis or processing of unstructured human language, e.g. coded data derived from free-text nursing records.

Positive Patient Identification: Safety convention requiring a patient to identify themselves rather than giving simple passive confirmation.

Personal Digital Assistant (PDA): Term for a hand-held digital device, now largely superseded by smartphones.

Real-time: Practically instantaneous, or with trivial delay, e.g. an "automated data dashboard in real time".

Recognise and Rescue Team (RRT); also "Rapid Response Team": See CCOT.

"Recognise and rescue": Principle of early detection of signs of deterioration (or risk of deterioration) in a patient's clinical status; see EWS.

Summary Care Record (SCR): Automatically generated summary of a portion of the patient EHR (generally from the primary care record and used in secondary care).

Telecare: Systems and equipment designed to help disabled or vulnerable people remain safe and independent in their home: alarms and communication devices that alert a family member, nurse or warden and/or provide remote support to a person to manage everyday tasks.

Telehealth: The delivery of health services and information via telecommunications technologies, e.g. Skype, smartphones, or Florence (FLO), a personal self-monitoring and communication tool.

Telemedicine: Remote consultation, examination and diagnosis (and increasingly treatment) via communication technology such as video streaming.

Telemedicine Video Consultation (TVC): Remote consultation between a specialist and a patient using a video link.

Total Opportunity Error (TOE): The denominator of drug administration errors, i.e. all possible doses given, including "stat" and PRN doses (as opposed to pre-planned doses only); used to calculate the ratio of errors.

Universal Donor Blood (UDB): O negative blood, which can be transfused without rhesus and group typing of the recipient.

VitalPAC Early Warning Score (ViEWS): Proprietary monitoring system incorporating EWS principles.

Visual Analogue Scale: Conventionally a 100 mm line allowing a patient to mark self-reports of intensity, e.g. of pain or breathlessness.

"Wrong Blood in the Tube" (WBIT) errors: A sample taken from the wrong patient and sent for laboratory processing (cf. a sample from the correct patient that is mislabelled).
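To make the EWS entry above concrete, the following minimal sketch scores a set of observations against banded thresholds and sums the points into an index. The bands are invented for the example and are not those of any published chart such as NEWS.

```python
def ews_points(value, bands):
    """Return the points for the band containing `value`.

    `bands` is a list of (lower_inclusive, upper_exclusive, points) tuples.
    """
    for lower, upper, points in bands:
        if lower <= value < upper:
            return points
    return 3  # values outside every listed band score the maximum


# Illustrative bands only -- real charts define their own thresholds.
RESP_RATE = [(12, 21, 0), (9, 12, 1), (21, 25, 2)]
HEART_RATE = [(51, 91, 0), (41, 51, 1), (91, 111, 1), (111, 131, 2)]
SYSTOLIC_BP = [(111, 220, 0), (101, 111, 1), (91, 101, 2)]


def early_warning_score(resp_rate, heart_rate, systolic_bp):
    return (ews_points(resp_rate, RESP_RATE)
            + ews_points(heart_rate, HEART_RATE)
            + ews_points(systolic_bp, SYSTOLIC_BP))


print(early_warning_score(resp_rate=22, heart_rate=105, systolic_bp=98))  # 2 + 1 + 2 = 5
```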


1. Executive Summary

The NHS is embracing technology in the pursuit of efficient and effective care for patients, and nurses have a key role to play in transforming care through such technology. The first feature to emerge from this rapid review was the lack of high-quality, accessible evidence to inform future implementation of many nursing technologies. This absence of evidence does not mean these technologies do not work, but it underlines how new much of this technology is and how much there is to learn. This review can therefore present only an incomplete picture of the impacts of many current implementations. We conclude that it will remain challenging for many NHS organisations to make timely, informed decisions when choosing and implementing such technologies, particularly in relation to:

o the full financial and efficiency impacts
o the costs, training and workload implications of implementation and of ongoing maintenance and development
o the ultimate effects on patients and staff
o the likely need to restructure workflows and other systems

Second, we identified a great need for discussion and agreement about the methodology used to evaluate these health technologies. There is rarely a straightforward relationship between intended causes and effects, even when implementing the same technologies in different places. This is not a reason to ignore research into the implementation and evaluation of these technologies. Rather, we need to take a more comprehensive approach, incorporating all available numerical and qualitative data and using new theoretical models and frameworks to help guide the choice of approach.

This evidence review demonstrates that implementing and evaluating health technology is complex and that these technologies interact with multiple systems and cannot be viewed in isolation. Evaluation should use specific and sensitive measures to indicate which planned benefits were realised and which were not. Many of the studies reviewed used very general quality improvement measures over a relatively short follow-up period; these were not sensitive enough to measure the impact of the particular technology, to detect the effects of one technology being implemented alongside other concurrent changes, or to distinguish whether the technology was incompletely implemented, fully integrated or sustained. In addition, it is probable that there are studies that failed to achieve their over-ambitious outcomes and therefore remained unpublished.

The third central feature of the research reviewed here is a lack of description of, and attention to, patient and staff involvement. Nurses and patients should be involved in the design and evaluation of health technologies, yet we found few articles which focused on the experiences of users themselves. The implementation phases of HIT are notoriously difficult, and involvement of as many stakeholders as possible at the earliest opportunity can help to avoid or minimise problems. Evaluations should be longitudinal, record problems as well as successes, assess interactions with other technical and non-technical systems, be sensitive to the benefits required, and be published as widely and accessibly as possible to inform the many who will follow.

Despite these warnings, the review discovered many instances where new systems had transformed care delivery, improved efficiency and safety, and were well received by staff and service users. The literature included:

Clear evidence of benefit from:

o secure mobile access to digital care records
o digital images when combined with additional clinical information
o remote face-to-face interaction for both chronic illness and acute conditions such as stroke, particularly in rural areas
o increased accuracy and ease of automated digital capture of clinical data and continuous measurement
o increased sensitivity to some clinical events
o improved availability and timeliness of information
o reductions in blood and some drug administration errors
o efficiency gains in resource use such as drug stock control and staff rostering

Mixed positive and negative evidence for:

o digitally enabled observations management
o automated handovers
o digital dashboards
o e-roster deployment


2. Context

In October 2012 the Prime Minister announced the establishment of a Nursing Technology Fund (NTF) to support nurses, midwives and health visitors to make better use of digital technology in all care settings to deliver safer, more effective and more efficient care. The first round released £30m for 85 projects around three capabilities. The second round saw 62 organisations awarded funding totalling almost £35m.

The capabilities for the second round, and the remit of this review, are:

• Mobile access to digital care records across the community (4.1)

• Digital images for nursing care (4.2)

• Remote face-to-face interaction (4.3)

• Digital capture of clinical data at point-of-care and digitally-enabled observations management (4.4)

• Safer clinical interventions (4.5)

• Digital nursing dashboards (4.6)

• E-workforce deployment (4.7)


3. Methodology

This rapid review is intended to address the need for a synthesis of the academic evidence base in a selection of capability areas identified from the second round of NTF projects. To achieve this we focused on conducting a search of published evidence and then extended this to the grey literature to provide an overview of the contemporary position. Our objective was to review the available evidence regarding the seven capabilities in terms of an understanding of the implementation processes and the likely financial, patient and staff impacts.

Through this review, we aimed to synthesise literature published between 2008 and 2015 in a way that was as rigorous as possible and useful to practitioners. Technology evolves rapidly (Jamal et al., 2009), but key earlier material was included where pertinent. The pace of change in this area means that our conclusions are essentially provisional, but we feel justified in proposing suggestions for evaluations and research priorities.

Literature searches were performed for each capability in the following databases: the Cochrane Library, MEDLINE, CINAHL, Embase and Google Scholar. The reference sections of the selected articles were scanned for additional papers. A hand search of key health websites was performed, including The King's Fund and NICE. The evidence found was difficult to synthesise as a result of the lack of settled terminology and methods, with the choice of research question often reflecting the opportunistic and naturalistic nature of many studies. While there is a need to integrate future research and evaluations from the procurement or pre-implementation stage, it is worth bearing in mind that for many projects there simply is no baseline data, as they represent the first implementation of IT within the NHS. Such limitations are inherent but will become less important as digital working becomes more pervasive and mature.

Inclusion criteria

The inclusion criteria initially focused on published systematic reviews, in English, on the nursing technology used to deliver the seven capabilities in different settings. This was broadened due to a lack of evidence, and the review includes any studies where an impact had been discussed or demonstrated, including "grey" literature (Lilford et al., 2009) and narrative commentaries. The meaning of nursing technology was broadly defined to include different types of systems and tools for data capture, information processing, decision support, management reporting and analytics, telemedicine/telehealth applications and digital devices. We excluded capabilities used by patients where there was no monitoring by a doctor or nurse, and those designed for patient/provider education only. Technologies used by midwives and health visitors were included, but most of the literature found was concerned with nursing care in mental health, adult and paediatric contexts.

Exclusion criteria

Studies written in languages other than English were excluded. Work published prior to 2008 was excluded unless it was particularly relevant and not superseded, or was regularly cited in later work. Some studies were difficult to retrieve and were excluded if they could not be adequately reviewed within the timeframe. Full-text articles were retrieved wherever possible, but in a few cases abstracts were used if sufficient detail or significant results were reported.

Review Process

The review and synthesis involved six steps (a toy sketch of steps 3 and 4 follows the list):

1. Academic database and internet searches and review of taxonomies and key words
2. Review of summary records with reference to the review specification
3. An informal, iterative selection process and categorisation under the seven NTF capabilities
4. Removal of duplicates and of articles reporting the same studies
5. Evaluation of the methods used and the effects reported, in terms of effectiveness and accuracy and financial, staff and patient impact
6. Identification of contextual factors and methodological challenges, to try to give a sense of the reliability and applicability to the NTF capabilities specifically
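Purely as an illustration of steps 3 and 4 above, and not a description of the tooling actually used, the de-duplication and categorisation stages can be pictured as a simple keyword-matching tally. The record titles and keyword lists below are invented.

```python
# Toy sketch: remove duplicate records, then tally the remainder under capabilities
# using keyword matching on titles. All data here is illustrative.
records = [
    {"title": "Mobile access to community nursing records", "source": "MEDLINE"},
    {"title": "Mobile access to community nursing records", "source": "CINAHL"},   # duplicate
    {"title": "Digital wound photography in district nursing", "source": "Embase"},
    {"title": "Video consultation for COPD follow-up", "source": "Cochrane"},
]

capability_keywords = {
    "4.1 Mobile access to digital care records": ["mobile access", "digital pen"],
    "4.2 Digital images for nursing care": ["photograph", "digital image"],
    "4.3 Remote face to face interaction": ["video consultation", "teleconsultation"],
}

unique_records = {r["title"].lower(): r for r in records}.values()  # step 4: de-duplicate by title
tally = {capability: 0 for capability in capability_keywords}
for record in unique_records:                                       # step 3: categorise
    for capability, keywords in capability_keywords.items():
        if any(keyword in record["title"].lower() for keyword in keywords):
            tally[capability] += 1

for capability, count in tally.items():
    print(f"{capability}: {count}")
```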


4. Capabilities

The following sections present syntheses of the evidence found for each of the seven capabilities. The capabilities are not entirely discrete, and there is some overlap, with similar systems providing multiple functions.


Capability 4.1: Mobile access to digital care records — summary

Details of review and search terms: 839 records found; 364 abstracts assessed; 21 papers included. Search terms: mobile working, nurs*, digital records, community mobile access, digital pens, smartphone, electronic patient care records, e-health records, personal digital assistants, tele*.

Effectiveness and accuracy: The literature in this area did not focus on effectiveness in terms of the quality of records. One study reported an associated increase in the amount of information recorded in notes. Mobile access made nursing and midwifery notes more contemporaneous. Notes can include additional information and can therefore lead to improved care.

Financial impact: Little quantification of the investment required for set-up costs and maintenance. Much anecdotal evidence that deployment of the technology increased productivity by reducing referrals, admissions, staff travel time, "no access" visits and data duplication.

Impact for patients and staff: Patient benefits were largely indirect, e.g. nurses reported having an increasing amount of face-to-face time. Patients reported that they liked to see what the midwives write; such "open" records were confined to midwifery. Nurses were inventive with HIT, using it to provide additional information for patients, which was positively received. Nurses and midwives were generally positive about HIT, including the ability to prepare by accessing information prior to visits, reductions in duplication and increased time interacting with patients.

Learning for implementation: Implementation required good project management, leadership, support, skills, clinical engagement, good connectivity, IT support and secure systems. If supported, staff will find novel ways to use the technology. It is important to map out how to move from paper to electronic records without loss of function. The applications reviewed potentially release significant resources; the challenge is how to measure this and to recognise all implementation costs. There is considerable scope to free up nursing time for other activities valued by patients.


4.1 Mobile access to digital care records

The work reviewed under this capability relates to mobile access to an electronic health record (EHR) in patients' own homes and in care homes (the latter are recognised as patients' own homes and are not networked NHS sites). Mobile access refers to technologies such as digital pens, smartphones and laptops used by staff to view and edit records held centrally. Overall, the evidence in this category was of mixed quality, ranging from individual Trusts reporting successful projects on websites, through case studies, to several systematic reviews and the National Mobile Health Worker Project (2013). The presentation within this capability reflects this and is quite descriptive, as comparison between evidence sources cannot be made because of their different structures, the information supplied and the assumptions made. Where no systematic reviews are available there is an increased risk that equivocal or negative findings were not published, which can create bias. The evidence is grouped under three main themes: effectiveness and accuracy, financial impact, and the impact on staff and patients.

Electronic health records have been described as digital repositories of retrospective, concurrent and prospective (planned appointments etc.) patient data, accessible by multiple authorised users, stored and exchanged securely to support continuing, efficient and high-quality integrated healthcare, and consisting of a mixture of coded data and unstructured narrative text (Häyrinen et al., 2008). The Department of Health (1998) distinguishes between EHRs and electronic patient records (EPRs): EHRs being longitudinal records that follow patients' care for a lifetime, and EPRs describing records relating to periodic care given by one provider, although this difference is not always reflected in the literature (Black et al., 2011). The technologies noted in the literature were personal digital assistants, digital pen systems, tough books, iPads, smartphones and laptops. Their purpose includes the ability to access and update notes in real time from any location with an internet connection, and the option to obtain a second opinion from another health professional virtually (Dewsbury 2014). Although many benefits were reported, there was a lack of detail about the financial investment required.

Effectiveness and accuracy

The literature here is largely international. Paré (2011) carried out a study that investigated the effects of the introduction of mobile computing on the quality of home care nursing practice in Québec. The completeness of the nursing notes in nine centres was compared, covering 77 paper records (pre-implementation) and 73 electronic records (post-implementation). Overall, the introduction of the software was associated with an improvement in the completeness of the nursing notes. Patients felt that the use of mobile computing during home visits allowed nurses to manage their health condition better and, hence, provide superior care services. The nurses reported a very high level of satisfaction with the quality of clinical information collected.

Self-reported improvements in knowledge leading to better care from the use of EHRs are described in the following studies. Braun et al. (2013) conducted a systematic review of community health workers, mainly in developing countries, using a variety of mobile technology, not just mobile access to records. They found that in the 25 articles included, learning was the most significant outcome for staff (n=9, 32%). Among the reviewed studies, Lee et al. (2011) examined mobile phone use for maternity, sexual and child health services in Indonesia. The study showed that mobile phone use was positively associated with higher self-efficacy, and that this higher self-efficacy was positively associated with health knowledge of maternal health practices in the areas of family planning, prenatal care and child delivery. Another study, of traditional birth attendants, explained that use of mobile phones enhanced their ability to meet clients' health care needs independently through more timely access to relevant information (Chib et al., 2008). Braun et al. (2013) also concluded that the outcome most frequently measured among studies (71%, n=20) was improvement in the quality of care, most commonly measured in terms of compliance with accepted standards. For example, a simulated experimental study used mobile multimedia devices to support point-of-care clinical decisions by community health workers in Colombia and found significantly fewer errors and better compliance with care protocols over a range of clinical care situations (Florez-Arango et al., 2011).

Financial impact

A full, detailed cost-benefit analysis of the use of these technologies is not available in the current literature. The most comprehensive picture is presented in the National Mobile Health Worker Project (DoH 2013), but some detail is also reported on NHS and affiliated websites. Following implementation of a tough book, the National Mobile Health Worker Project found that in the first phase contacts increased at the majority of sites and more time was spent with patients. Journeys and total journey time rose, although to a lesser degree than activity, indicating improved efficiency. Clinicians across the 11 pilot sites estimated that the devices allowed them to avoid 507 referrals (£42 per referral), equating to a saving of nearly 9% across the pilot period, and that an estimated 49 admissions (£1,735 per admission) were avoided, saving approximately 21% across the same period. Results varied significantly across the sites and services, reflecting differing local processes and approaches, and there is insufficient detail to determine whether these are gross or net savings. In Phase 2 there were six sites; however, the number of completed data returns for the final benefit period ranged between 14% and 88%. This, combined again with the significant variations present across the sites, led the authors to conclude that there could not be any meaningful analysis of combined data across the project (DoH 2013). Some sites demonstrated significant increases in productivity, including large increases in contact activity for the specialist nurses in Hartlepool (142%) and the palliative services in Birmingham (83-93%). There were also significant increases in time spent with patients following deployment of the mobile devices, with Tower Hamlets demonstrating a 104% increase. The project also found that productivity could be maintained even in the presence of significant organisational change, staff absence or bad weather, with the devices allowing business continuity throughout.

In terms of efficiency, journeys were reduced: in Hartlepool the specialist nurses achieved an 11% reduction, and time staff spent travelling in Tower Hamlets showed a 33% drop. Data duplication was also reduced, freeing up clinical time; this was especially significant where EPRs were established and their use optimised. In addition, "no access" visits decreased significantly, as demonstrated in Avon (38-47%), Northampton (37%) and Tower Hamlets (50%). Each "no access" visit was calculated to cost £42 (University of Kent 2010). Although data was supplied about projected savings, there were no detailed financial costs included in the projects. There was no outline of how much the technology cost to implement and maintain, or of how much of the projected savings were realised.
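The avoided-activity figures quoted above translate into gross cash estimates by simple multiplication; the sketch below reproduces that arithmetic using the unit costs cited in the project documentation. As noted, these are projected gross savings and exclude set-up and maintenance costs.

```python
# Gross projected savings from avoided activity, using the unit costs quoted above.
avoided_referrals = 507
cost_per_referral = 42          # GBP per referral (DoH 2013)
avoided_admissions = 49
cost_per_admission = 1735       # GBP per admission (DoH 2013)

referral_saving = avoided_referrals * cost_per_referral      # 21,294 GBP
admission_saving = avoided_admissions * cost_per_admission   # 85,015 GBP

print(f"Referrals avoided:  £{referral_saving:,}")
print(f"Admissions avoided: £{admission_saving:,}")
print(f"Gross total:        £{referral_saving + admission_saving:,}")
```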

Successes in terms of cost reduction have also been communicated through award processes and published case studies. For example, midwives in Portsmouth have reported the success of digital pens. Here the midwife carries specifically printed forms, on which they write as they would with an ordinary pen. The digital pen takes snapshots of the information and converts them into an encrypted format, which is sent to the midwife's BlackBerry via Bluetooth. The files are then sent back to the hospital as soon as a signal is detected, reducing the amount of travel and data input time for midwives. It has also streamlined processes within the hospital, since the information is automatically fed into different systems. They estimated savings to be the equivalent of six full-time midwives, or £220,000 per year, based on a workforce of 130 (http://www.ehi.co.uk/news/acute-care/7607/warwick-to-use-ipads-for-patient-records).


Derbyshire Healthcare NHS Foundation Trust estimates that 85% of its data is now entered on the same day due to the use of digital pens by its 800 staff, allowing nurses to spend more of their day on direct patient care and producing an estimated saving of about £800,000 a year (Duffin 2013). Rotherham Community Health Services has introduced digital pens and reported a 35% increase in the productivity of its physiotherapists (http://www.ehi.co.uk/news/ehi/5867/rotherham-physios-flex-digital-pens). Kidd (2011) reported the benefits to nurse prescribers of being able to access records remotely, in a study of 11 rural and urban sites: mobile access led to increased efficiency, better patient contacts and improved teamwork. However, for all these reports there is a lack of detail about set-up and maintenance costs.
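Because none of these case studies report the investment side, the net position cannot be judged from the published figures. A simple break-even model of the sort sketched below would let an organisation test such claims locally; every cost figure in it is a placeholder assumption rather than a number drawn from the studies above.

```python
# Hypothetical break-even check for a mobile-working deployment.
# All cost figures are placeholders -- none come from the studies cited above.
devices = 130
device_cost = 400                # GBP per device, purchase price
annual_support = 150             # GBP per device per year (connectivity, licences, IT support)
annual_training = 10_000         # GBP per year across the service
claimed_annual_saving = 220_000  # GBP, e.g. the Portsmouth estimate quoted above

year_one_cost = devices * (device_cost + annual_support) + annual_training
recurring_cost = devices * annual_support + annual_training

print(f"Year 1 net benefit:    £{claimed_annual_saving - year_one_cost:,}")
print(f"Recurring net benefit: £{claimed_annual_saving - recurring_cost:,}")
```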

When the investment cost of equipment and maintenance is included, the financial impact is less favourable. In the USA, Rantz et al. (2010) explored the patient benefit of the electronic medical record (EMR). They investigated the unique and combined contributions of EMR at the bedside and on-site clinical consultation by gerontology expert nurses, and found that both costs and quality of care increased in nursing homes. Eighteen nursing homes in three US states participated in a four-group, 24-month comparison in which the EMR and nurse consultation were tested. Costs, staffing, staff retention and resident outcomes were measured. Total costs increased in both intervention groups that implemented the technology. The authors suggest that this was due to the cost of the technology, maintenance, support and ongoing staff training. Improvement trends were detected in resident outcomes of activities of daily living, range of motion and high-risk pressure sores for both intervention groups, but not in the comparison groups. In this study, therefore, the implementation was a quality initiative rather than a financial one.

Impact on Patients and Staff

Detail about patient perspectives is limited, although there is work around the quality of their care. For staff, the literature mainly focuses on what is required for implementation. Lau et al. (2010) carried out a meta-level synthesis of 50 papers (published 1994-2008) and found that the important factors influencing HIT success for staff were having in-house systems, users who were developers, integrated decision support and benchmark practices, and addressing such contextual issues as provider knowledge and perception, incentives, and legislation/policy. They found some evidence for improved quality of care, but this varied across studies. The areas where HIT did not lead to significant improvement included resource utilisation, healthcare cost and health outcomes. Significantly, they concluded that many of the studies were evidently not designed, or had insufficient power or duration, to properly assess health outcomes. They recommend caution when interpreting these findings, because there were wide variations in organisational contexts and in how the HIT was designed, implemented and perceived, echoing the conclusions of the Mobile Health Worker Project.

Black et al.'s (2011) systematic review of the impact of eHealth on quality and safety also found difficulty in establishing cause and effect. They thematically categorised eHealth technologies into three main areas: (1) storing, managing and transmitting data; (2) clinical decision support; and (3) facilitating care from a distance. They found that there was relatively little empirical evidence to substantiate many of the claims made in relation to these technologies, and that best practice guidelines on effective development and deployment strategies are lacking. What is interesting in regard to this capability is that although a number of reviews claim to assess the impact of EHRs, many actually investigated auxiliary systems such as clinical decision support systems (CDSS), computerised physician order entry (CPOE) and ePrescribing. As a result, most of the impacts assessed were more relevant to those other capabilities.

Zhang et al. (2010) reported a study of 91 Canadian nurses, finding that nurses' perception of usefulness is the main factor in the adoption of mobile technology. Doran et al. (2012), in the United States, highlighted usability, organisational culture, evidence-based practice and adaptation of the electronic clinical information system. MacLure et al. (2014), in a systematic review of medical and non-medical practitioners' views of the impact of eHealth on shared care, summarised the major themes from 12 papers as organisational, social, technical and external. For the organisational theme, the main influences were resource and time implications, the culture of the workplace and change management requirements. Social concerns focused on the impact on patient consultations, extra workload, the need for training for varying levels of IT literacy amongst staff, usability, patient privacy and the practitioners' role. The technical themes included the incompatibility of technological systems and the need for shared definitions and terminology. External factors highlighted the interplay between national policy and strategy and regional autonomy. Lastly, Koivunen et al. (2015), in a study in Finland of 117 nursing professionals, found that they perceived electronic communication to speed up the management of patient data, increase staff co-operation and competence, and make more effective use of working time.


Capability 4.2: Digital images for nursing care — summary

Details of review and search terms: 1858 records found; 104 abstracts assessed; 37 papers included. Search terms: digital images, digital photographs, nurs*, digital camera, smartphone, tele*.

Effectiveness and accuracy: The principal use of digital images is in the recording of wound healing. Little robust evidence for the effectiveness of digital imaging alone. Overall more accurate in combination with additional clinical information or software. Some evidence of effectiveness when combined with teleconsultations to reduce demand for appointments.

Financial impact: No inclusion of investment costs for the equipment required in the literature. No inclusion of the cost/benefit if wounds are monitored effectively and advice sought when needed.

Impact for patients and staff: No inclusion of patient feedback. Limited staff perspectives in the literature. One study mentioned staff anxiety. Another study reported increased staff confidence when an expert viewed images remotely.

Learning for implementation: There are legal implications for the use and quality of images. Images must be considered part of the patient record. Considerable scope for increasing skills in nurses working with specialists. Systems need to be integrated and complete, rather than requiring staff to print off pictures and put them into a paper patient record. Good image quality is very important.


4.2 Digital images for nursing care

A digital image is one that is recorded via a digital camera or smartphone and transmitted electronically. The most evidence emerged for still photography, in particular its use in the diagnosis and monitoring of the healing of chronic and acute wounds. Much of the literature found in this review was non-UK in origin. Most studies were too small to give reliable information about the sensitivity and accuracy of assessments or measurements obtained using digital imaging.

The digital image may often be printed and kept in paper notes, but it is sometimes stored electronically in the EHR; this review considers how these images are incorporated into telehealth applications to ensure correct and timely treatment. Several other applications emerged from the evidence review and have also been included, such as the digital imaging of the feet of patients who have diabetes. Although at present this is medically dominated, it is foreseeable that it could be a useful technology for nurses to employ routinely in the future. Other applications were found, such as digital images to record assault-related injuries and to monitor food intake; there is currently little evidence regarding these applications. Generally, papers found in this area involve small numbers of patients and focus on the effectiveness of the digital methods used. Again, cost-benefit analyses are largely absent, as is information on acceptability for patients or nurses.

Effectiveness and accuracy in wound assessment

Digital photography has virtually replaced conventional film photography for clinical imaging. It is commonly used to demonstrate whether new procedures, dressings and ways of working impact on wound healing (Kaliyadan et al., 2008; MacBride et al., 2008; Varkey et al., 2008; Dufrene 2009; Rennert et al., 2009; Bradshaw et al., 2011; Engel et al., 2011) and is recommended in NICE guidance on pressure ulcer management (NICE 2014).

Thompson et al. (2013) examined the validity and reliability of the revised Photographic Wound Assessment Tool (revPWAT) on digital images taken of various types of chronic, healing wounds. This Canadian multicentre trial involved 206 photographs taken of 68 individuals with 95 chronic wounds of various aetiologies (diabetic foot, venous/arterial ulcers, pressure ulcers). An initial wound assessment using the revPWAT was performed at the bedside and three digital photographs were taken. The revPWAT scores derived from the photographs were assessed by the same staff on different occasions and by different staff. They showed moderate to excellent intra-rater correlation coefficients as well as test-retest and inter-rater reliability. The authors concluded that there was excellent agreement between bedside assessments and assessments using photographs.
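Thompson et al. report correlation coefficients; purely to illustrate what chance-corrected agreement between bedside and photographic ratings means, the sketch below computes Cohen's kappa for two hypothetical sets of pressure ulcer stage ratings. The data are invented and kappa is not the statistic used in the study.

```python
from collections import Counter


def cohen_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters scoring the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)


# Hypothetical pressure ulcer stages assigned at the bedside vs from photographs
bedside = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]
photo = [2, 3, 2, 3, 3, 2, 3, 4, 2, 4]
print(round(cohen_kappa(bedside, photo), 2))  # 0.69: agreement well above chance
```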


Florczak et al. (2012) carried out a study in which nine US wound care nurses used a camera-enabled mobile device at the bedside as part of a wound assessment process. The greatest gains for single evaluation items were in reviewing wound progress and recognising changes in wound status. In this study, the wound nurses used the camera-enabled mobile device to take digital photographs at the bedside, to enter wound assessments and risk assessments, and to record treatment recommendations. While this approach supported client-centred care and effective communication, it also represented a competing pressure on nurses' time. Some nurses struggled with the introduction of a new assessment protocol incorporating digital photographs. Because their system was not fully electronic, they also had to print off and sign images in order to share them with others, thus increasing workload.

Baumgartner et al. (2009), in a study involving 48 patients, compared the validity of digital photographs for the remote assessment of pressure ulcers against the existing practice of bedside examination and diagnosis. The photographs were reviewed blindly by a dermatologist. The sensitivity of the blinded assessment was 97%, in terms of pressure ulcers diagnosed at the bedside also being diagnosed from the photographs, and areas judged to be unaffected at the bedside were also identified correctly from the photographs. The results were only slightly less positive (92% sensitivity) for darker skin.

Terris et al. (2010) found much less clear validation of digital imaging when they investigated the assessment of stage 3 and 4 pressure ulcers in individuals with spinal cord injury in Cleveland, comparing bedside with digital photograph assessments. Two wound-care nurses assessed 31 wounds among 15 participants during a six-month period. One nurse assessed all wounds in person, while the other used digital photographs. They achieved only moderate agreement about the wounds, which the authors felt may be attributed to variation in the assessors' subjective perception of qualitative wound characteristics.

Some authors have expressed caution about relying on digital images to diagnose and monitor wounds. Buckley et al. (2009) conducted a descriptive comparative community study of 43 adult patients with a total of 89 wounds. They found that wound, ostomy and continence nurses who provide remote nurse-to-nurse consultations using digital images were at risk of recommending under- or over-treatment of patients' wounds. In dermatology, Janardhanan et al. (2008) found that some skin conditions required a face-to-face consultation, and they recommended a mixture of face-to-face consultations and consultations via teledermatology. For burns management, Hop et al. (2014) found that burn size, but not depth, could be assessed reliably and validly by experts using photographs of the burn wound. Finally, Jesada et al. (2013) explored whether a digital photograph obtained by a nurse in the acute care setting could be used to determine the staging and wound characteristics of a pressure ulcer when viewed remotely, compared with bedside assessment. One hundred digital photographs of pressure ulcers were obtained from 69 patients on general and critical care medical-surgical nursing units. The study compared 13 wound characteristics and a total score on the Bates-Jensen Wound Assessment Tool (BWAT), as well as the staging of pressure ulcers. Their findings indicated that a digital photograph alone cannot reliably convey the characteristics of a pressure ulcer. Jesada et al. (2013) therefore advocated combining a digital photo with clinical information, systems designed to provide digital photography wound measurements, and the development of standardised education for examining digital photographs by wound experts, to increase the accuracy of assessment and documentation.

Other studies have used planimetry software (designed to measure surface area and surface area-to-perimeter ratio) to provide more accurate wound measurements from digital photographs (Rogers et al., 2010; Wendelken et al., 2011). Mayrovitz and Soontupe (2009), in a small study of six wounds and 20 senior nursing students, used computerised planimetry of digital images. They found area error rates of up to 7% for wounds with well-defined edges and 10% for wounds with less defined borders. Miller et al. (2012) investigated the inter-rater and intra-rater reliability of a wound imaging and measurement system called SilhouetteMobile in community nursing in Victoria, Australia. Seven community nurses, including wound management clinical nurse consultants and wound resource nurses, assessed the average wound surface area of 14 wound images captured using the system. This again was a very small study, but they found high inter-rater and intra-rater reliability, which was unaffected by image quality. However, reliability was poor when tracing small wounds.
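The planimetry principle these systems rely on is simple: segment the wound in the photograph, count the pixels, and convert to physical units using a calibration marker of known size placed in the frame. The sketch below is a minimal, illustrative version of that calculation and does not reflect how SilhouetteMobile or any other commercial product is implemented.

```python
def wound_area_cm2(mask, mm_per_pixel):
    """Planimetry from a segmented image: count wound pixels and scale by calibration.

    `mask` is a 2D list of 0/1 values (1 = pixel classified as wound), and
    `mm_per_pixel` comes from a ruler or marker of known size in the photograph.
    """
    wound_pixels = sum(sum(row) for row in mask)
    area_mm2 = wound_pixels * (mm_per_pixel ** 2)
    return area_mm2 / 100.0  # 100 mm^2 per cm^2


# Toy 6x6 mask containing a 3x4 "wound", with a calibration of 0.5 mm per pixel
mask = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(wound_area_cm2(mask, mm_per_pixel=0.5))  # 12 px * 0.25 mm^2 = 3 mm^2 = 0.03 cm^2
```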

Combined with telehealth

The following two small studies indicate how digital imaging could be incorporated into patient care to reduce hospital visits and inappropriate referrals. Chanussot-Deprez (2008) carried out a small study of three patients with hard-to-heal ulcers in Mexico. A digital camera was used to take images, which were emailed to a wound care expert in Mexico City. The photographs allowed the expert to diagnose and evaluate the wounds; two wounds healed and another decreased in size, although no information about timescales was given. The authors also stated that the approach was cost-effective, but no details of cost were included.

Vowden and Vowden (2013) evaluated the effectiveness of a telehealth system, using digital pen-and-paper technology and a modified smartphone to take digital pictures, to remotely monitor and support the effectiveness of wound management in 39 nursing home residents. It was a cluster randomised controlled pilot study, conducted in 16 selected nursing homes in Bradford, for residents with a wound of any aetiology or severity. Residents in the control homes continued to receive standard care directed by nursing home staff, while those in the evaluation homes received wound care supported by input from remote experts. Analysis of individual patient management pathways suggested that the combination of digital pictures and clinical information improved patient outcomes. Such applications may offer cost savings by improving dressing product selection, decreasing inappropriate onward referral and speeding healing. These studies are small and lack detail about cost-benefit, but they give a suggestion of possible benefits.

Imaging of feet

While there is much medical literature looking at thermal imaging of the skin on the foot to assess circulatory status (Nagase et al., 2011; Roback 2010; Roback et al., 2009), only one study was found that looked at how the use of digital images would assist in patient care. Foltynski et al. (2010) reported on a new way of managing patients with diabetes who needed foot care, with a nurse at the patient's home and a consultant in another location. Digital images of foot ulcers, together with blood glucose and blood pressure measurements, were sent to the consultant for an opinion. The visiting nurses reported increased confidence with the treatment of the foot ulcer and described the consultations as a learning situation. Patients also expressed satisfaction with the process.

Forensic digital imaging

Nurses may be involved in using digital imaging to document the effects of assault, as it is seen as valuable additional documentation of physical injuries after sexual assault. Ernst et al. (2012) reported on the need for high image quality and naturalness in order for the images to be reliable in use. The issue of image quality is also picked up in Melville's (2013) study, which explored the quality of photographs taken by professionals in child abuse cases. A total of 120 images depicting 60 cutaneous lesions were selected for the study. Paired images of single lesions varied in quality of focus, exposure or framing. Seventy medical and nursing professionals evaluated the images for quality (on a 1-9 scale). Accuracy was defined as the agreement between the subjects and the live examiners' written documentation. Adequacy was the proportion of subjects who did not indicate that the photograph was inadequate for interpretation. Mean accuracy among subjects was 64% and ranged from 35% to 84%. Accuracy was not predicted by subject profession, experience or self-rated computer skill. Image quality and adequacy were independently associated with increased accuracy. The authors concluded that reliance on photographs alone is not sufficiently accurate in the assessment of cutaneous trauma.


Dietary intake

Navarro et al. (2014) used digital imagery of food to estimate its correlation with real-time visual estimation of dietary intake, reporting the intra- and inter-rater agreement for digitally captured plate waste in hospitalised adults. A total of 106 lunch plates of hospitalised adults were photographed immediately prior to tray collection. Two raters used the modified Comstock method to assess food waste from each of the digital images. Additionally, a randomly selected subset of photographs was re-estimated 72 hours after the first estimation. The results showed that dietary intake estimated from digital images has a high degree of both inter- and intra-rater reliability in hospitalised patient populations. The findings were similar for regular diets but somewhat weaker for soft diets.

Legal implications

A theme to emerge from this literature review, relevant to all professionals, was the importance of recognising the legal implications of taking and storing digital images. A detailed discussion of the security of digital technology in the health sector is beyond the scope of this review.

There is clear guidance for medical practitioners on the recording of photographs (GMC 2011), guidance on record keeping for nurses (Nursing and Midwifery Council 2015) and further advice from the Royal College of Nursing (2012). Informed consent must be obtained for the photograph, with patients understanding why the photo is being taken and how it will be used and stored. NHS organisations should have their own policy on the taking and storing of photographs, drawing on Information Security Management: NHS Code of Practice (DoH 2007). This states that photographs, whether digital or printed, must be treated in the same way as paper or digital health records, and as such staff are required to abide by all relevant laws and national guidance, including the Data Protection Act 1998, with regard to the capture, storage and use of patient images (NHS England 2014; Lakdawala et al., 2013).

Harting et al. (2015) commented that the main choice is whether to provide a dedicated digital camera or to let staff use smartphones. Many organisations have a Trust digital camera with which photographs are taken. However, Harting et al. (2015) note that such cameras are usually not password protected and require the nurse to download the pictures to a computer. Smartphones are highly portable, can rapidly distribute a captured image, and their password protection and/or encryption abilities can better secure data, making unwanted access less likely. Whatever is used, and despite password protection, encryption and other technical safeguards, any storage, transport or dissemination method can be compromised, potentially exposing identifiable data to misuse. All mobile technology is subject to loss, damage or theft, especially in community settings, entailing other risks such as treatment delays and loss of record continuity.

King (2014) commented on the governance arrangements and training needs identified as necessary prior to rolling out a community nursing project. The study evaluated a remote system of wound management incorporating the use of digital and mobile technology, a wound template and the recording of remote images. Consent to photography, information governance measures such as adequate encryption, ensuring that the correct images were uploaded to the correct record, and the practical issues of uploading and attaching images to records and reports were all highlighted as important governance issues.


Capability 4.3: Remote face-to-face interaction — summary

Details of review and search terms: 1398 records found; 230 abstracts assessed; 28 papers included. Search terms: video consultation, video conference, teleconsultation, teleconference, nurs*, tele*, remote interaction.

Effectiveness and accuracy: An effective technology, particularly in the management of long-term conditions and in the acute treatment of stroke where specialist advice is otherwise unavailable.

Financial impact: Some evidence of investment costings for equipment, but cost/benefit analysis is rare. Overall results were mixed, depending on service redesign, e.g. not cost-effective if it involves both a doctor with the patient and a specialist. Some studies, but not all, reported reductions in readmission rates and length of stay. One systematic review reported telehealth to be less costly than the non-telehealth alternatives. Clear savings were demonstrated where medical cover was not feasible, e.g. in prisons, where transfer of patients carries additional costs, or through reductions in admissions from care homes or transfers from rural hospitals.

Impact for patients and staff: Mixed evidence of patient and staff satisfaction. Positive benefits for patients in ease of access and reduced expense. Possible negative impact of reduced personal contact. Technology always alters relationships and dynamics between patients and clinicians, though this is rarely reported or analysed in detail. Staff adopt different roles when implementing telehealth.

Learning for implementation: Needs careful initial facilitation and integration with other human and technical systems. Whole-system HIT implementations probably provide the greatest benefits, but there is little cost/benefit data. Particularly well tested and beneficial in rural areas. Requires HIT infrastructure and resources, e.g. bandwidth and network coverage. Considerable scope for nurses to develop further skills under remote supervision of specialists.


4.3 Remote face-to-face interaction

Teleconsultation technology allows patients and clinicians to communicate with other clinicians through a video conference link (Rice 2011). This section focuses on the consultation or conference between health care staff and patients as a clinical tool, rather than between staff as an educational or communication tool. Cruickshank (2010) says this improvement in access to expertise could improve equality, access and clinical outcomes for patients, but it requires integration with other systems: "Its effective usage requires access to video; Picture Archiving and Communications (PACS) image sharing capabilities; remote diagnostics (e.g. ECG); and the patient record" (Cruickshank 2010, p14). A range of evidence was found in this area, including several RCTs, but most of the research explores individual implementations of the technology. The literature in this capability is international in origin, and there is a significant focus on applications in rural communities, where it is difficult either for health care providers to get specialist services to patients or for patients to get to services. The researchers in this area acknowledge the huge challenges faced by health services in delivering access to tertiary or secondary care to sparse populations, and within the literature video consultation is seen as a central way to address this cause of inequality. Themes of effectiveness, financial impact and acceptability to patients and staff emerged from this literature. During such remote interactions, the nurse usually plays the role of being the person with the patient, connecting to the specialist who is in another location.

Effectiveness and accuracy

The literature on the effectiveness of video consultation versus face-to-face consultation predominantly focuses on medical diagnosis and treatment. Fatehi, in Queensland, Australia, reviewed 505 papers on the technical aspects of clinical videoconferencing but found that a high proportion of telemedicine papers lack sufficient technical detail, which limits their repeatability and generalisability (Fatehi et al., 2015a). Fatehi then went on to carry out a study which examined videoconferencing and the decision making of endocrinologists, concluding that it was as effective as in-person consultation in decision making (Fatehi et al., 2015b). There is also literature describing effective management of patients, particularly those with chronic conditions (Chen et al., 2013; Dhiliwal and Salins 2015). This is an area where nurses increasingly play a significant role and where the results for telehealth are generally favourable.

Clemensen's (2008) pilot study investigated whether video consultations in the home were a viable alternative to hospital visits for patients with diabetic foot ulcers. They also measured the patients', relatives', visiting nurses' and experts' levels of satisfaction and confidence with this new mode of assessment. Through field observations, semi-structured interviews, focus groups and qualitative analysis of telemedical consultations in a pilot test, they showed that it was possible for experts at the hospital to conduct clinical examinations and decision making at a distance, in close cooperation with the visiting nurse and the patient.

One chronic condition that has received significant interest is Chronic Obstructive Pulmonary

Disease (COPD). Sorknaes et al., (2011) and Saleh et al., (2014) evaluated the use of

teleconsultations with respiratory patients with COPD. Sorknaes et al., (2011) investigated the

effect of telemedicine video consultations (TVCs) on early readmissions of 100 non- randomly

allocated COPD patients in their homes after discharge from hospital. The patient received daily

TVC at home with a nurse based at the hospital for approximately one week and an additional

follow-up call. The patient could also call the nurse during the study follow-up period of 28 days. The

nurse and the patient were able to see and talk to each other and the nurse was able to measure

saturation and perform spirometry remotely. The results were transferred to the hospital by a secure

Internet connection. 12% of the TVC group compared to 22% of the control group were readmitted

because of exacerbation of COPD, indicating that TVC was a significant protective factor against early

readmission. There are significant limitations to this study. The subjects were not randomly

assigned to treatment groups, the duration of the study was short and the intervention was not

blinded. Analysis of the economic implications was limited to comparison to other, much older

studies.

In a later, randomised study of 266 patients with severe COPD discharged after acute exacerbation

from two hospitals, Sorknaes et al., (2013) found no significant difference in the total mean number of hospital readmissions after 26 weeks between the intervention and control groups. There were also no significant differences between the two groups in mortality, time to

readmission, mean

number of total hospital readmissions, mean number of readmissions with COPD, mean number of

total hospital readmission days or mean number of readmission days with COPD measured at 4, 8,

12 and 26 weeks. Saleh et al., (2014) evaluated the effects of telemedicine video-consultation (TVC) with a similar programme but with 12 months of follow-up, using a before and after design.

They compared the frequencies and durations of hospital re-admissions due to COPD

exacerbations within 6 and 12 month follow up after commencement of TVC to events at 6 and 12

months prior to TVC. The study showed the number of patients re-admitted and the number of re-

admissions due to COPD exacerbations were not reduced at 6 or 12 months post-TVC. The mean


length of re-admission at 12 months post-TVC was markedly reduced as compared to pre-TVC.

There was no cost-benefit analysis reported.

Remote teleconsultation has also been studied in the treatment of acute stroke. Audebert et al.,

(2009) carried out a follow-up study to assess the long-term impact of a stroke network with

telemedicine support. Over a 30 month period, 1938 stroke patients (and 1122 control patients) from

five community hospitals were followed-up. There were no significant reductions in composite

outcomes of reduced “death or institutional care” at 30 months but a significant reduction of “death

and dependency” at 30 months. Pedragosa et al., (2009) found an increase in the number of

patients evaluated by a specialised neurologist and Dharmasaroja et al., (2010) an increase in the

number and speed of thrombolytic treatments that occurred. Over an 11 month period,

Demaerschalk et al., (2010) studied 54 acute stroke patients and found no significant difference in

mortality, 90 day functional outcome, use of thrombolysis, rate of intracerebral haemorrhage after

treatment with thrombolysis or evaluation times, between telemedicine vs telephone consultations.

No consultations were aborted but technical problems occurred in 74% of telemedicine

consultations compared with none in telephone consultations. None of the technical issues

prevented correct treatment decisions but some did affect the time-to-treatment decision

(Demaerschalk et al., 2010).

Roots et al., (2011) included these studies in their systematic review investigating the use of

telemedicine in managing acute stroke and post-stroke rehabilitation. There were 20 observational

and two randomized, controlled acute studies involving 15872 patients and two observational and

four randomized controlled rehabilitation studies involving 112 patients that met the inclusion

criteria. From this review they concluded that telemedicine for stroke, in both acute and

rehabilitation settings, was demonstrated to be safe, effective, feasible and acceptable. It was also

shown to reduce geographical differences, increase diagnostic accuracy and increase uptake of

thrombolytic treatment. Further work is also in progress, with Salisbury et al., (2014) exploring for which other conditions and patient groups internet video and other remote consultation methods could be effective.

Financial impact
While there are systematic reviews of the evidence around telehealth, videoconferencing is usually

combined with other methods, particularly consultations by telephone and therefore it is difficult to

assess its impact independently. Wade et al., (2010) carried out a systematic review of economic


analyses of telehealth services using real time video communication. They were unable to perform

a meta-analysis of the effects of telehealth because of the heterogeneity of the outcome measures.

Instead they reviewed studies which report economic details with patient outcome data and a non-

telehealth comparator. 36 articles met the inclusion criteria. 61% of the studies found telehealth to

be less costly than the non-telehealth alternative, 31% found greater costs and 9% gave the same

or mixed results. The studies were classified according to where costs arose, with 23 reporting the

cost perspective of the health services, 12 the societal level, and one including direct patient

perspectives. In three studies of telehealth in rural areas, the health organisation paid more for

telehealth, but due to savings in patient travel, the societal perspective demonstrated cost savings.

In regard to health outcomes, 33% of studies found improved health outcomes, 58% found

outcomes were not significantly different and 6% found that telehealth was less effective.

In the review Wade et al., (2010) highlighted that the organisational model of care was the most

important factor in determining the value of the service compared to the clinical discipline, the type

of technology, or the date of the study. They concluded that the delivery of health services by real

time video communication was cost-effective for home care and access to on-call hospital

specialists. Nonetheless they showed mixed results for rural service delivery, and found no

evidence that telehealth was cost-effective for local delivery of services between hospitals and

primary health care centres. This final finding was due to GPs being with the patient during the consultation (thereby doubling the medical costs), underlining the importance of the details of implementation in delivering expected efficiency savings.

Detailed costings are available for TVC in a study by Barber et al., (2015) who highlighted that in

remote and rural areas, such as the Western Isles, specialist medical input (e.g. in stroke care) may not be

available or feasible. They reported a before and after study of a two year service intervention

where a weekly stroke multidisciplinary meeting in the Western Isles Stroke Unit was led by a

remote specialist stroke doctor via a videoconference link. They found that the median length of

stay fell by 10 days, primarily by empowering the team to plan discharges at an earlier stage. This

they calculated would lead to a potential recurring annual cost saving of €68,000 (one day in

hospital = €400). They reported that no other significant rehabilitation service changes were made

during the two-year project to account for these improvements. They acknowledge that the numbers

were small and that length of stay is a fairly crude outcome measure dependent on many other

factors. They costed the consultant coming to the hospital at €48,000, the cost of implementing the service at €7,000, the videoconferencing unit at €9,967 and the licence at €177. This extension of equity and


quality in access to specialist care is difficult to quantify in cost-benefit comparisons but should be set against the need to demonstrate absolute cost/benefit improvements. Finally, in trauma and general surgery contexts, Latifi et al., (2009) carried out a retrospective analysis of 59

teleconsultations between five rural hospitals and a level One trauma center in the US. They

included set up costs (which were covered by a grant) and details on ongoing costs. For six

patients, the remote consultations were considered potentially lifesaving and 29% of patients

remained in the rural hospitals (eight trauma and nine general surgery patients). Treating patients in

the rural hospitals reduced transfers, saving an average of $19,698 per air transport or $2,055 per

ground transport, in addition to potential safety benefits. The authors concluded that the telepresence of a trauma surgeon aids the initial evaluation, treatment and care of patients, improving outcomes and reducing the costs of trauma care.

Rice (2011) reported on the work at Airedale NHS Foundation Trust which had been developing a

range of services using high quality video links to provide consultant and other clinical professional

input to prisoners. Approximately 50% of all the patients remained in the remote location to receive

care. Previously all cases would have been transferred to the local secondary care for consultations,

indicating a likely cost saving to both the health care and prison services (not reported). Cruickshank (2010) also commented on the Airedale work in over 18 UK prisons and took a more financial focus, estimating the average cost per escort episode (including mental health transfers) at £350 and the average cost per bed watch episode at £3,731.

Pope et al., (2013) described the results of providing 24 hour access to immediate clinical support

using teleconsultation (the “telehub”) in both domestic and residential care settings in the Yorkshire

and Humber Region. This dedicated 24/7 service is staffed by experienced nurses who can receive

and make calls to patients over domestic broadband links, supported by a consultant general

physician. Consultations are viewed either on a domestic TV via set top box technology or on a

dedicated mobile video system. The consultations are documented using a shared EHR (TPP

SystmOne). COPD patients using the scheme had a 29.5% reduction in hospital admissions and a

36% reduction in length of stay for those admissions that did occur. Overall in the 14 nursing and

care homes supported using mobile teleconsultation equipment, there was a 50% reduction in

admissions compared with similar homes that had no access to teleconsultation and a 74% relative

reduction in Emergency Department attendances. They did not report details of the cost of

providing the hub nor additional training. In a later paper, Pope et al., (2014) explored whether this

change in practice could be sustained. Initial findings from 17 care homes supporting a population of


just over 1000 people, showed that there appeared to be a 53% reduction in ED attendance, 35%

reduction in admissions and a 59% reduction in total bed days for residents in the 12 months

following introduction of the service. Again the actual cost of these systems and the staff resource

requirements e.g. training, are absent as with most of the literature identified. The location of costs

and the relative impact of location of care on average costs must also be taken into consideration

when overall financial impacts are considered.

Patient and staff acceptability
How patients and staff feel about using the technology and how it changes their interactions,

receives some attention in the literature and serves to indicate the complexity of introducing HIT.

From a patient perspective the evidence is mainly positive with schemes reporting high satisfaction

(Clemensen et al., 2008, Saleh et al., 2014, Sorknaes et al., 2011, Pope et al., 2013, 2014, Barber

et al., 2015). Johansson and Lindberg (2014), via a questionnaire distributed in rural areas of northern Sweden, described the views of 26 rural residents on accessibility to specialist care and the use of video consultation as a tool to increase accessibility. They concluded that although patients recognised savings in travel time, cost and environmental impact as important, there was uncertainty about video consultation as the preferred solution. From a professional

perspective, Vuononvirta et al., (2009) carried out a study of the attitudes of multi-professional

teams to telehealth adoption in northern Finland health centres and revealed some complexity.

Observations and interviews were carried out with 30 professionals (physicians, nurses, psychiatric

nurses, physiotherapists). Overall attitudes were more positive than negative with diversity of

attitudes occurring in relation to time, situation, profession, health centre and telehealth application.

Ten different types of telehealth adopters were categorised: “enthusiastic user”, “positive user”,

“critical user”, “hesitant user”, “positive participant”, “hesitant participant”, “critical participant”,

“neutral participant”, “negative participant” and “positive non-participant”. This may be helpful for HIT

project staff and managers to consider when working with staff implementing such technologies.

There is some evidence that using this technology affects the nature of the clinician/patient

relationship e.g. Clemensen et al., (2008), Esterle and Mathieu-Fritz, (2013), Van Gurp et al.,

(2013). Wakefield et al., (2008) compared differences in nurse and patient communication profiles

between two telehealth modes: telephone and videophone. 28 subjects were enrolled in a

randomized controlled clinical trial evaluating a 90-day home-based intervention for heart failure.

They found that nurses on the telephone were more likely to use open-ended questions, responses

such as "mmm" and "yeah", friendly jokes, lifestyle information, approval comments and checks for


understanding. Compliments were given and a sense of partnership was more evident, but closed questions were more common, on the videophone. Nurses' perceptions of the interactions were not

different between the telephone and videophone, nor did their perceptions change significantly over

the course of the intervention. The study did not collect patients’ perspectives.

Miller (2011) reviewed 10 interaction analysis studies of videoconference systems, describing and

categorising the communication and concluded that whilst some studies found telemedicine to be

more patient-centered than conventional medicine, others found it less so. There is currently limited

understanding of positive and negative effects of telemedicine on provider-patient relations.


Summary of capability 4.4: Digital capture of clinical data at point-of-care and digitally-enabled observations management

Details of review and search terms: Records found 1920; abstracts assessed 145; 33 papers included. Search terms: "Continuous monitoring", "automatic data capture", "automated clinical prediction", "clinical decision support systems", automat*, alert*, alarm*, hand-held, nurs*, observ*, chart*, monit*

Effectiveness and accuracy:
• Automated data capture was more accurate than manual recording.
• Continuous measurement more comprehensive and less resource intensive.
• Increased availability of data for clinical audit and research but little evidence of impact of this.
• Greater sensitivity to some clinical events e.g. apnoea.
• Evidence supporting integration of nursing data into prognostic alerts was mixed - most deployments are at developmental stage.
• Some evidence that automation of EWS calculation coupled with decision support for nurses e.g. when to escalate, is effective in improving process outcomes and timeliness.
• No conclusive evidence that automated warnings reduce LOS or mortality but may reduce cardiac arrest calls via early detection of deterioration.
• Complex integration of clinical data for prognosis and early warning using predictive analytics in very early stages of development - may lead to improvements in clinical decision-making.
• Potential benefits from improved accuracy, safety and completeness of automated care summaries by integrating with EHR were described.
• Mixed results about whether it reduces handover preparation time or time to treatment delivery.
• Little information on staff interactions and communication.

Financial impact:
• Likely reductions in nursing time but not widely quantified.
• Reduction in nursing resource for observations may be substituted for other patient care - financially neutral but quality positive.
• Automated care summary generation studies showed reduction in record preparation and handover times.
• Little detail of equipment and implementation costs e.g. training.

Impact for patients and staff:
• Some studies reported positive feedback from staff for automated data capture and automated care summary generation - primarily ease and time saving.
• No direct involvement of patients reported - given most studies in ITU and theatres this is not unexpected.
• Automated measurement may be able to reduce "white coat" hypertension, other systematic errors and reduce overtreatment.
• Possible negative patient experience if automation leads to overuse as continuous monitoring may be uncomfortable or restrictive in lower acuity settings.
• Risk of "alarm fatigue".

Learning for implementation:
• Significant opportunities exist for transformation of "early warning" systems.
• More comprehensive clinical monitoring and production of data for patient trends, clinical audit and research.
• Considerable scope to free up nurse resource for other care activities.
• Analysis of time-series data is needed to reveal impacts on mortality, LOS and ITU escalation.
• "Big data" analytics rarely used or planned.
• Other factors are important confounders e.g. trends towards lower acuity of hospitalised patients.
• Cost-benefit should reflect reduced workload and increased quality care and improved patient experience.
• Process data external to the systems e.g. clinical response times should be integrated for robust evaluation of impact.
• Specificity of alarms and alerts is of concern to avoid desensitisation.


4.4 Digital capture of clinical data at point-of-care and digitally-enabled observations management
The physiological monitoring of patients is essential for the early detection and prevention of

deterioration in acute settings. NICE has published recommendations regarding the physiological

monitoring of patients in secondary care (NICE 2007 CG50) and mandates the use of “track and

trigger” or “recognise and rescue” models in all acute settings. Along the continuum from recognise

to rescue there are a number of points at which technology can enable staff to intervene more

rapidly and effectively. The following section presents two closely inter-related applications of digital

technology to acute healthcare: automated clinical data capture and storage, and management and

manipulation of such data to support clinical decision-making and handovers.

Clinical observations in acute care are predominantly recorded by nursing staff and, increasingly, support staff. The majority of interventions identified are limited to routine but vital and high-volume observations such as Non-invasive Blood Pressure (NIBP), respiratory rate, heart rate, oxygen

saturation and temperature. Literature regarding automatically captured parameters such as

ventilator settings or blood gas assays is also included if such data are then automatically stored as

part of a clinical record or integrated into a clinical decision support system. The automatic

integration of such observations with other parameters or the triggering of clinical responses

remotely, are dealt with in the section on management of digital clinical data as are the limited

number of sophisticated systems which integrate nursing data with other elements of the EHR such

as laboratory results to create alerts or hand-over documentation automatically. It is predicted that

such integration will become the norm in the coming decade and the limited impact of current

deployments reflects delays in radically changing working practices in healthcare needed to realise

the huge potential benefits (Wachter, 2015).

Automated Clinical Data Capture
Timely and appropriate medical intervention for individual patients depends on the accurate

measurement and recording of physical and psychological parameters and the subsequent

availability of such data to inform clinical decision making. In recent years the collection of

physiological data at the bedside has increasingly relied on electronic devices to measure (but

rarely record) routine observations. Paper charts and clinical notes continue to be the norm in much

of the NHS regardless of acuity or level of care. Manual recording methods in nursing result in a

number of pitfalls such as missed or missing observations, transcription, legibility and rounding

errors and finally, errors in manually calculating useful clinical indices such as Early Warning Scores (EWS) (NICE CG50 2007, National Patient Safety Agency 2007). The literature in this area

largely focuses on the benefits of automated capture in terms of efficiency and accuracy of data

collection and patient outcomes, and only secondarily on cost-effectiveness or staff and patient

acceptability.

Effectiveness and Accuracy of Automated Data Capture
There is some evidence that recording vital signs using HIT is faster and more accurate. A

simulation cross-over study of vital sign data entry carried out by Prytherch (2006) found that an

electronic system was 1.6 times as fast as manual recording with a reduction in errors. Bellomo

(2012) and Mohammed (2009) showed reductions in nursing time taken to record data when

electronic data capture was compared to manual charting (1.6 minutes and 13.9 seconds

respectively). Hug (2011) found that automated capture improved the completeness and accuracy of blood pressure monitoring in a study of NIBP in an intensive care environment. Automated

observations (filtered for extreme values indicating measurement error) improved predictions of

subsequent hypotensive episodes when compared to traditional episodic recording made by nursing

staff. Jones (2011) examined the use of the “Patientrack” system and found a 19% error rate on

paper was completely eliminated after the electronic system was introduced. In a small-scale cross-

over study of 6 nurses treating 5 patients in an intensive care setting, Wood and Finkelstein (2013)

found that a 30% error rate in traditional recording was eliminated after automation. Wood and

Finkelstein (2013) also reported evidence of efficiency gains as recording was instantaneous and

data were immediately available to multiple staff. Smith et al (2009) found the introduction of a bar-

code enabled, hand-held device used to upload vital signs automatically reduced errors of 10%

(paper) and 4.4% (manual entry into EHR) to less than 1% in an uncontrolled before and after study.

Taenzer et al (2014) carried out a comparison of automatic vs nurse recording of oxygen saturation

measurements. Nurses were found to overestimate SaO2 by 6% by comparison to the automated

record. This has important safety implications as it implies that desaturation events may be missed

with intermittent manual recording methods but could also reflect correct filtering out by nurses of

incorrect observations, e.g. during repositioning of misplaced probes. Signal et al. (2010)

compared automatic, continuous glucose monitoring on an intensive care unit to periodic manual

nurse blood sampling using point of care testing. This study predicted a small efficiency gain in

nursing time and improved glycaemic control but no full cost-benefit analysis was undertaken. Both

Prytherch (2006) and Wood and Finkelstein (2013) reported positive staff evaluations of the

automated processes they studied.


Automation could have secondary benefits. A common and important source of bias in

measurements of blood pressure arises from patient anxiety while measurements are taken, creating

errors known as “white-coat” (UK) or “office” (US) hypertension. This can lead to over-detection and

subsequent over-treatment of hypertension, mainly in primary care. In a group of studies (Myers

2012 and 2014) and Nelson et al (2009), a fully automated system was used to measure NIBP over

10 minutes: the patient was instructed to lie down and relax and was left alone during the process.

The automated measurements were found to correlate more closely with continuous ambulatory

measurements and markers of end-organ hypertensive damage such as renal function, when compared to traditional methods, potentially reducing over-treatment. Myers (2012) suggested that

the system may not yield significant benefits if staff fail to wait for the full assessment period or to

leave the patient alone for sufficient time, emphasising the importance of human reactions to new

technology.

Automated systems can be set to obtain ad hoc, periodic or continuous measurements, thus

overcoming the need to rely on staff to carry out periodic assessments which can be missed. Once

monitoring equipment (e.g. NIBP cuffs or arterial lines) is placed, the system takes measurements independently, reducing workloads. Multiple or continuous measurements of rapidly fluctuating

physiological parameters are likely to correlate better with long-term physical status than single

random measurements. This principle, along with reductions in patient anxiety, underpins the

increased accuracy of continuous or automated systems if implemented appropriately. Continuous

monitoring of vital signs for example, was shown in one study to decrease ITU length of stay (LOS)

(Bellomo 2012) and in a second, to reduce general ward and ITU LOS (Brown 2014). Evidence of

an effect on cardiac arrest rate was equivocal in both studies. There is a significant risk of bias in all

of the available evidence for effects of continuous monitoring on patient outcomes arising from the

uncontrolled and unblinded before and after study designs and demonstrating impacts on mortality

is likely to require very large studies.

There is limited evidence demonstrating benefits of employing automated regular or continuous

monitoring outside of hyper-acute settings (theatres and intensive care). Continuous or more

frequent intermittent monitoring could have a significant impact on patient comfort and convenience

which may in turn lead to poorer compliance. One study, Brown et al., (2014), tested a non-contact

system to monitor heart rate and respiratory rate in acute admission patients in a general ward

setting. There were small reductions in the median length of stay in the intervention group on both

the study wards and for ITU stay for escalated patients suggesting more timely transfers. The study


was controlled but small, unblinded, and pseudo-randomised and no cost-benefit data were

presented.

Vergales (2014) investigated an automated stand-alone respiratory monitoring alarm in a neonatal

ITU setting. The algorithm to detect apnoea episodes demonstrated greater sensitivity than manual

nurse observations detecting three quarters of clinically significant episodes compared to one

quarter detected by standard nursing observations.

Automated alarms have been shown to improve the proportion of (non-automated) EWS triggers

receiving an appropriate response (Jones 2011). Timeliness of subsequent treatment was difficult to

measure because of poor documentation of attendance by rescue teams or medical staff.

Digitally enabled management of Clinical Data
Once patient data are collected, decisions need to be made. There are increasingly sophisticated systems which allow for the integration and presentation of digital patient data once they are captured, to support clinical decision-making. The evidence can be delineated into three distinct applications:

• Early Warning Score calculation and response triggering;

• algorithmic combination of disparate electronic clinical records for screening, prognosis or risk scoring, monitoring and decision support;

• the use of automatically populated templates, or artificial intelligence and natural language processing systems, to create real-time case or handover summaries to improve the efficiency or efficacy of clinical communication.

It is again worth emphasising that the deployment of such systems and the dramatic changes in

working practices they require are only just beginning even in high-income settings such as the US.

Generally, current results are mixed, with methodological problems in most of the studies and a failure to fully utilise the resulting "big data" created by such systems.

Effectiveness and accuracy of Early Warning Score (EWS) calculation and response triggering
Early detection of deterioration in a patient's condition, the afferent arm of "recognise and rescue", is

increasingly employed with the aim of reducing inpatient mortality and morbidity (Jones 2011).

There are a limited number of platforms that have been deployed and evaluated to facilitate the

automatic calculation of EWS and the automatic triggering of a clinical response, e.g. a Critical Care outreach team call, in response to manually entered readings. Such systems typically combine bedside records of vital signs and electronic nursing observations such as urine output and calculate a score

by weighting each element in line with published or locally created algorithms e.g. ViEWS, NEWS.

The performance of digital systems in terms of sensitivity, specificity and any improvement in patient

outcomes is potentially heavily dependent on the performance of the underlying equations and

thresholds (Romero-Brufau et al., 2014) with most published scores having high “false alarm” rates.

Research evidence into the derivation and calibration of the scoring systems is excluded from this

review unless specifically reported in digital implementation studies. More importantly there is very

little evidence of changes to workflows and staff behaviour needed to maximise the impacts of, or guide the successful implementation of, such systems.

Sawyer et al., (2011) demonstrated that automated alerts could reliably give early warning of

deterioration as a result of sepsis but subsequent work showed that such alerts did not reduce

hospital LOS or other outcomes when delivered automatically to senior nursing staff (Bailey et al.,

2013). By contrast, a further randomised trial of electronic alerts delivered directly to "Rapid Response Team nurses" resulted in increased intensity of monitoring and demonstrated a reduction in hospital LOS of one day (Kollef et al., 2014). There was no effect on rates of

transfer to ITU, mortality or need for long-term care at discharge beyond chance.

The remote telemetry and consultation systems employed in some ITU settings (with a consultant advising on the care of large numbers of patients, e.g. Institute for Child Health, London) were not considered for this review.

Prognosis/risk scoring, monitoring and decision support
Systems using algorithms or analytics to combine vital signs, laboratory results and nursing observations are rapidly being developed. Tepas et al., (2013) studied an automated system to

integrate patient parameters, laboratory data and nursing observations and other risk factors

interactively to calculate a Rothman Index of severity of illness. This correlated strongly with

morbidity and mortality in the peri-operative period. The automated score gave accurate predictions

of length of stay and mortality potentially allowing early intensive intervention and reductions in use

of unplanned intensive care. There was no clinical intervention, cost-benefit was not estimated and

the system required sophisticated integration of many clinical variables suggesting that widespread

adoption will take time. Woodford et al., (2012) observed the use of continuous monitoring of routine

vital signs recorded during pre-hospital trauma transfers of 120 patients and found that such

observations had similar sensitivity to conventional prognostic scores dependent on complex


investigations, suggesting potential benefits from earlier, cheaper and more accurate triage of such

patients.

A system called Artemis was implemented in a US Neonatal ITU (Blount et al., 2010 and McGregor

et al., 2011) to integrate clinical variables in real time to assist with clinical decision-making and to

facilitate data-mining to develop new indicators of deterioration. Laboratory reports, nursing

observations and other elements of the EHR were combined to create prognostic calculations

updated iteratively for each patient. The three published papers relating to this system describe

detailed technical specifications, development and pilot implementation of the system in a NITU but

no clinical or outcome data are presented. Even where systems have such capability, it is often the case that early-adopting US hospitals are still playing catch-up in developing the clinical or

statistical expertise needed to benefit fully from such an unprecedented wealth of information

(Wachter, 2015).

Better coordination of existing fragmented care may be easier both to achieve and demonstrate. A

study of an electronic system to facilitate immunosuppression therapy post-liver transplant was

conducted in a tertiary center in the US (Park 2010). The system automatically integrated historical

laboratory and prescribing data previously stored in multiple places. This allowed a liver specialist

nurse to improve the coordination of clinical teams to make immunosuppressant prescribing safer

and more efficient. There were significant reductions in toxicity and rejection events and estimated

cost savings were demonstrated in the 12 months after the implementation of the system. Caution

needs to be applied to these results, particularly with regard to whether similar results could be reproduced elsewhere, as this is a single system and implementation followed a safety review triggered by poor practice and

errors. Costs were imputed rather than actual but the large effect size detected suggests that the

system warrants further evaluation and other similar applications are possible. It is also unclear from the article where this system lies on the continuum from HIT-mediated accessibility and usability of data, which is relatively simple but has high impact, to sophisticated automated integration of data to generate predictive indices and other decision-support applications.

An observational before and after study examined the effect of the introduction of an electronic

physiological surveillance system (VitalPAC) which automatically stores physiological data,

calculates EWS scores and provides bedside decision support (Schmidt et al 2014). This rigorous

study provides evidence that full implementation was followed by a reduction in mortality at two

separate sites (7.75% to 6.42% and 7.57% to 6.15%) over a relatively long follow-up period. No

adjustment for existing national trends of falling in-hospital mortality was made, so it is unclear how big an impact the system itself made. Jones (2013) describes the nurse-led


development of an automated EWS alerting system which displayed colour-coded warnings on

hand-held and desktop devices after manually entered vital signs data were added to the hospital

EHR. There was a reduction from 16 to 3 cardiac arrests after implementation of the system but

overall or case-mix adjusted hospital mortality was not reported.

Effectiveness and accuracy of the generation of automated care summaries/handovers
Care summaries, handover documentation and "ward rounds" all require the combination of recent

historical data and physiological observations to be efficient and effective. Such functions are

traditionally the preserve of nursing and junior medical staff and are time-consuming and subject to

errors and omissions (Randmaa et al., 2014). Systems have been developed for automatically

producing such summaries. Natural Language processing (NLP) systems are increasingly applied

to unstructured medical record data and have demonstrated very high levels of accuracy in

producing quantitative data sets for billing and research purposes (Murff, 2011, Fitzhenry 2013,

Nelson 2015). Such systems could help to generate shift care summaries with time savings and

accuracy increases as possible benefits.

Hunter (2012) developed a system with researchers from the University of Edinburgh to generate

automated shift to shift hand-over summaries for nursing staff working in neonatal intensive care.

Integration of quantitative and laboratory data, along with NLP of free-text nursing care records, was used by the modular system "BT-Nurse" to create a shift "time-line" and summary.

Staff rated these automated handover records favourably. They noted potentially great benefits in

accuracy, safety and completeness, particularly when compared to more junior staff giving conventional handovers; however, there was no formal evaluation of these effects included in the

pilot study. This system is also dependent on the presence of sophisticated electronic data capture,

EHR and some modifications to staff record-keeping practices, and so may not be transferable to other organisations.

Less sophisticated computer-generated summaries, which use an automatically populated template, have been shown to reduce hand-over duration by 44 minutes (Kochendorfer et al., 2010). Accuracy has been shown to improve where a handover tool is linked to the EHR, and a reduction in the preparation time for hand-over summaries by medical staff from up to 20 minutes per day to up to 15 minutes per day was reported in a Neonatal ITU setting (Palma et

al 2011). A similar system showed no difference in the more clinically valid measure of time to

receive first clinical intervention after handover in a surgical setting but a reduction in median length

of stay by one day of borderline statistical significance may represent a real effect, perhaps by


reducing hand-over errors or increasing continuity of care (Ryan et al., 2011). Wohlauer et al.,

(2012) found an 11 minute reduction in preparation time by junior doctors for ward-round and hand-

over summaries, using automatically populated templates, along with a marginal reduction in the

number of patients "missed" during the ward round. Overall the evidence from such studies is mixed

and there is little evidence from a socio-technical perspective on the impact on working practices,

other clinical systems or objective accuracy or usefulness of such information in daily practice.


Summary of capability 4.5: Safer clinical interventions

Details of review and search terms: Records found 677; abstracts assessed 182; 44 papers included. Selected search terms: BCMA, ADC, Adverse Drug Events, Medication Administration Errors, nurs*, barcode, RFID, QR code, blood transfusion, positive patient identification, total opportunity error

Effectiveness and accuracy:
• Evidence of effectiveness of HIT in reducing drug and blood administration errors is currently limited but of high quality, showing significant impacts on safety.
• Bar codes can reduce blood administration errors and some drug administration errors.
• Long term follow-up studies (including socio-technical and qualitative work) are needed to understand workflows, workarounds and unintended consequences.

Financial impact:
• Broad financial data about the cost of errors available but savings are assumed rather than measured and not set against investment required.
• Qualitative estimates of severity of patient impact from errors hampers reliable cost-benefit estimation.
• Some nurse resource saving if single checker policy is adopted with no loss of safety.
• One high quality study of a single "whole chain" system for blood transfusion in an acute NHS Trust estimated a net saving of £598k per year.
• Cost benefit highly context specific e.g. high saving in liver transplant aftercare (US) but less impact in some standard hospital settings.
• Limited cost/benefit reported for UK studies.

Impact for patients and staff:
• Faster and more appropriate delivery of blood products carries patient benefit and may improve resource use by clinicians.
• Reduction of sampling and labelling errors reduces the common harm of further venepuncture and the rarer harm of incompatibility errors.
• Blood laboratory staff reported reduced stress at times of peak demand because of automation of many processes.

Learning for implementation:
• Some studies with robust observational outcome data showed reductions in errors.
• When outcomes are restricted to events causing harm, safety benefits are often modest for drug administration.
• Uncontrolled before and after studies carry risk of bias given focus on errors during implementation and study phases may contribute to improvement.
• Incident and error reporting data is not robust enough to clearly demonstrate the safety impact of HIT.
• "End to end" HIT likely to provide the greatest safety benefits but there is little cost/benefit data.
• Socio-technological evidence needed to understand workarounds, to better understand how safety improvements are actually achieved and can be "locked in".
• Systems generating automated warnings may generate high volumes of false or un-informative alarms.


4.5 Safer care interventions
The use of health information technology (HIT) systems to reduce errors in patient and product identification has mainly involved the employment of Bar Code Medicines Administration (BCMA)

and blood administration systems. These systems have the potential to demonstrate significant

reductions in “wrong patient” and “wrong product” errors by automating the processes of checking

normally undertaken by clinical staff, primarily nurses. When integrated with other HIT systems such

as Computerised Physician Order Entry (CPOE) and Automated Dispensing Cabinets (ADC),

BCMA systems may deliver greater improvements in safety than stand-alone systems. Highly

integrated systems are referred to as "end to end" (mainly drug delivery) or "whole chain" (blood

delivery) and offer automation of the product management at every step. In addition, efficiency

improvements can be self-reinforcing for example where ADC systems could increase drug

availability because of automatic stock monitoring and therefore reduce omission errors or where

“whole-chain” HIT blood management systems could reduce delays in blood delivery which in turn

may reduce pre-emptive ordering by doctors, thereby cutting wastage. Currently there is only limited literature documenting research into BCMA and ADC standalone systems; CPOE systems, although supported by a larger literature, have a low UK installed base and are not included in the current review.

Patient or medicinal product identification and linking during the administration process can be

accomplished using bar-code systems, Radio Frequency Identification (RFID) chips or similar machine-readable formats. Scanners may be connected to a mobile computer or computer-enabled drug cabinet, or may take the form of a handheld device such as a PDA or smartphone with a suitable camera and software. Generally the interventions reviewed here feature proprietary technology but, increasingly, "off the shelf" products are used, particularly with regard to hand-held devices such as PDAs and

smartphones. These are attractive because of relatively low unit costs. Such platforms support more

or less customisable "apps" or run proprietary software. In general the reliability of such devices has drastically improved in recent years, but they may still exhibit problems during implementation which may or may not be resolvable on site or may necessitate "workarounds" to fit with existing workflows. Devices

may be subject to hardware or software limitations (interface usability, limited battery life) and all

such devices are subject to network problems (WLAN black-spots, dropped connections) which may

hamper effective implementation. These problems may feature in early-phase deployment but also

in the overall system life-cycle and where the exact details of supplier support and contractual

arrangements may become very important. There was little available evidence of such problems in

the literature, although this is likely to indicate a degree of publication bias, e.g. suppression of reports for

commercial or reputational reasons. Initial failure to integrate systems into established workflows,


poor design features and more rarely systemic failures can lead to “workarounds”, ad hoc staff

practices and behaviours designed to bypass an organisation’s systems and processes to

accomplish an activity (Wachter 2015). Ways to evaluate and reduce such practices have been

reported, for example where an override feature becomes overused (Pockras 2013). These effects

often dilute the predicted impacts on patient safety but are rarely reported. It is essential that the

NHS finds ways to learn lessons from early adoptions to speed up the innovation cycle and

maximize return on investment.

Nurses have a key role to play in the delivery of safe interventions in the areas of drug

administration and transfusion of blood products. Nurse administration of drugs and blood products

is the last step in complex chains of processes. Interception and reporting of errors at this point is

lower than at any other step, having been estimated at around 2% of actual errors (Young et al.,

2010, Agrawal 2009, Leung et al., 2014). This means that HIT systems designed to reduce errors at

administration, wherever the error originates, have significant potential to prevent harm and waste.

This review examined the literature which has emerged in relation to drug administration and blood

products through HIT patient identification and product delivery systems. This review focusses on

technology to assist nurses in drug administration as they are by far the main administrators of

drugs and transfusions in the NHS.

Drug administration
Leape et al., (1995), Kohn and Donaldson (2000) and Bates et al., (1998) describe evidence that

the drug prescription and administration stages of the medicines delivery process generate 39% and

38% of all drug errors respectively. In a review of 91 papers, Keers et al., (2013) found a lower rate

for administration errors of 19.6%, falling to 8.0% when timing errors were excluded. A higher incidence of errors for intravenous medication administration of 20.7% was reported, again after excluding timing errors. In terms of overall cost, the only data found were from the US, where it

has been estimated that between 380,000 and 450,000 Adverse Drug Events (ADEs, including both prescription and administration errors) occur per year, costing $3.5 billion (a mean of $8,433 per ADE) (Aspden 2006, cited in Van der Linden et al., 2013).

Error detection and near-miss events are variously identified and measured. There is an emerging

consensus that direct and, where possible, covert observation delivers the most robust metric

(Cottney 2014, Berdot et al., 2013, Keers et al., 2013). Direct or covert observation is justifiable

methodologically but is a costly research method and therefore underused. Drug safety could also


be addressed through Global Trigger Tools and other more efficient adverse event sampling

strategies but no such papers were identified during this review.

Drug administration errors are counted in a variety of ways and the choice of denominator is another

key to validity and generalisability. The most appropriate rate is “Total Opportunity [for] Error” (TOE)

which is defined as directly observed errors as a proportion of all prescribed and “as required” doses

(Keers 2013). Many studies exclude "wrong time" errors (e.g. administration outside +/- 30 or 60 minutes of the prescription-mandated time) given their lower likely harmful impact and the more intractable nature of the problem, which often arises from lack of nursing resources or other barriers.

Omission errors are also difficult to interpret given that documentation of the rationale is often poor, even though the rationale might often justify the nurse's decision, e.g. a physiological observation such as heart rate being out of safe range. A second important metric is Adverse Drug Events (ADE), although

these are variously defined, sometimes including prescription errors, adverse reactions or

reintroduction of drugs where previous allergy events occurred. ADEs have also been sub-

categorised by severity, whether actual or predicted. Overall it appears that ADEs, like all drug

errors, are under-reported as incident reporting is known to be a poor source of data and sub-

clinical events may pass unnoticed.

Error rates may be higher in paediatric settings owing to small volumes of medications requiring

greater precision, the need for and complexity of weight-dose calculations, the use of weight

estimation methods in emergency settings and the variety of alternative preparations and route

choices (Chua et al., 2010). For example, even bar-code reading errors have been noted on

Neonatal ITU, possibly because the small radius of the children's limbs results in poor fitting of

wrist-bands (Cochran et al., 2007). Morriss et al., (2009) reported an increase in errors of

administration after BCMA was introduced onto a neonatal ITU but a reduction in harmful events.

This may merely be a result of the fact that such systems dramatically increase the recording and

accuracy of administration times (and thus inflate the degree of error from this source) whilst

reducing actual patient harms, making some studies difficult to interpret and generalisable

cost/benefit estimations more problematic.

DeYoung et al., (2009) and Helmons et al. (2009) both examined bar-code medication system

effects in ITU and ward settings on medication errors. They reported before and after rates of 19.7%

and 10.7% falling to 8.7% and 9.9% which were significant and non-significant respectively.

Rodriguez-Gonzalez et al., (2012) in a "disguised" observational study after BCMA implementation

in two GI units, found no "wrong patient" errors at all. Of the TOE of 20.7%, most errors (13.9%) resulted from nurses' unawareness of elements of administration, e.g. administration before food or via the wrong route. Hardmeier (2014) conducted a direct observation, post-implementation study of

BCMA and eMAR introduced into paediatric wards and found an error rate of 5% of all observed

doses with compliance varying by process and unit. The comparison of drug to the eMAR twice

(instead of once) was achieved 68% of the time. Compliance in checking two forms of Patient ID

varied between 31% and 70% by unit. However, in both cases there was no available comparison with an existing baseline, as with many transitions from paper to computerised systems. Koppel et

al., (2008) undertook an observational mixed-methods study at 5 hospitals using BCMA systems

and developed a typology of clinical staff workarounds and identified 15 types of workaround

(including collecting BC wristbands at the drug trolley and carrying pre-scanned medications to the

bedside) arising from 31 separate causes including damaged / absent wrist bands, hardware

malfunctions, non-BC medications, emergencies and failed connectivity.

Coleman and colleagues conducted a well-designed evaluation using a combined Computerised

Physician Order Entry (CPOE) and Administration recording system of four different interventions to

reduce missed or overdue drug doses (Coleman et al., 2013). Data from the system were analysed

over time to demonstrate the independent effect of each implementation against existing trends over

a 4-year period. The overall effect of the interventions was to reduce antibiotic and non-antibiotic

omission rates from 10.3% to 4.4% and 16.4% to 8.2% respectively. The time-series analysis

indicated that the introduction of a prescription pause function for prescribers, a clinical dashboard

reporting missed doses and root cause analysis meetings were associated with significant “step-

change” reductions in omitted doses of antibiotics but real-time visual reporting to staff of delayed

doses, was not. The sophisticated analysis allowed the inclusion of existing safety trends to be

accounted for and to evaluate sequential interventions directly. Serrano et al., (2012) reported on

the effects of the introduction of BCMA systems onto an onco-haematology unit and found, during a

short follow-up period, that significant errors were detected by the system including attempts to re-

administer the same prescription and wrong patient identification. Holmstock et al. (2014) used a

time-series design to analyse dose omissions and drug administration efficiency during the

implementation of an Electronic Prescribing and Medication Administration system on one medical

and one surgical ward. Only a short report of this work could be found and, although the omission rates are low compared to those in other papers, no evidence is reported to demonstrate that this was directly related to the introduction of the system.

The impact of BCMAs on time spent on drug administration is not conclusive, either in terms of reducing the time taken or whether the systems themselves are bypassed or otherwise subverted to save time. Dibbed et al., (2011) reported an increase in time spent on patient care vs drug treatment

from introduction of a BCMA system. Hassink et al., (2012) found no difference and Franklin et al.,

(2007) an increase in time that nurses spent on medication administration tasks after

implementation. Poon et al., (2010) reported that 20% of drug scanning steps were missed and

Rack et al. (2012) reported nurse focus group evidence that a majority of nurses said they failed to

use the scanner for patient or drug identification at least once on the previous shift. It is known that

interruptions to medication tasks result in greater error rates (Raban and Westbrook 2014) but no

evidence was found to suggest medication-related HIT could have a direct effect on this problem.

Blood Products
The administration of blood products is a key element of care in many settings, primarily GI

medicine, trauma, haematology and to a lesser extent, oncology and obstetrics. Errors may occur in

numerous steps in the process from collection of products from pharmacy to final administration.

The review found no published evidence for interventions to reduce errors during the donation and

production phases of the blood supply chain both carried out by specialist nurses. Evidence is

restricted to studies of patient sampling and administration where available.

Pre-analytic errors (mislabeling or incorrect patient sample) are a leading cause of health laboratory

error but are almost certainly underreported as an unquantifiable number of errors remain

undetected (Snyder et al., 2012). Transfusion to the wrong recipient is referred to as a 'never event'

emphasising its gravity. Nurses play a key role in the administration of blood, in both the correct

collection and labelling of “group and save” and cross-matching samples from patients in hospital

and the subsequent administration of products to patients. “Wrong blood in tube” errors (WBIT) vary

between 1 in every 1,500 and 1 in every 3,000 samples (Cottrell 2013a, 2013b) and are not as rare as transfusion errors, allowing some statistical inferences to be made. HIT interventions have been deployed to

reduce error during both of these steps. Other important benefits of HIT are likely to arise from

increased efficiency and reduced wastage of a finite resource. The Cross-matched to Transfused

Ratio (CTR) provides a good proxy for safety and efficiency. This is the proportion of units

transfused that have received patient-donor specific antigen testing. Lower ratios indicate more

efficient use of blood with higher ratios associated with increased laboratory costs and waste. In

emergency situations restricted testing is employed (e.g. “type specific”) or “universal donor” blood

(UDB) is supplied. The risk of haemolytic transfusion reaction has been shown to be low in

emergency settings using type-specific blood although fully cross-matched blood is still preferable

(Goodell et al. 2010). However the devastating patient impact and high cost of such reactions mean they warrant preventative interventions with the aim of their complete elimination.


There is evidence from a single site study which suggests that HIT such as Bar Code matching of

recipient, sample and product can reduce WBIT errors by approximately half from 1 in every 12,322

to 1 in every 26,690 samples and wrong products transfused from 1 in 27,523 to 1 in 67,935

(Murphy 2012b). If a BC mismatch was detected, an automated alarm was triggered both at the bedside and remotely within the transfusion laboratory. Of the 343 such alarms triggered, 37 were

“true” mismatches and all of these potentially devastating events were avoided.
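The matching step itself is conceptually simple. The sketch below illustrates one possible form of a bedside three-way check of wristband, sample and product identifiers; the data structure, function names and alarm hook are illustrative assumptions and not the design of the system evaluated by Murphy.

    from dataclasses import dataclass

    @dataclass
    class ScannedItem:
        patient_id: str  # patient identifier decoded from the scanned barcode

    def raise_mismatch_alarm(ids):
        # Placeholder for the alarm mechanism (assumed): in practice this would alert
        # the nurse at the bedside and flag the event remotely in the transfusion laboratory.
        print(f"MISMATCH: conflicting patient identifiers {ids}")

    def bedside_transfusion_check(wristband, sample_label, compatibility_tag):
        """Return True only if wristband, sample and blood unit all identify the
        same patient; otherwise raise an alarm and block administration."""
        ids = {wristband.patient_id, sample_label.patient_id, compatibility_tag.patient_id}
        if len(ids) == 1:
            return True
        raise_mismatch_alarm(ids)
        return False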

Bar Code systems have also been shown to increase other markers of safety such as closer

adherence to standard procedures amongst nurses (Murphy 2012a & 2012b; Miller et al., 2013). It

was shown that, in common with other applications of safety HIT, benefits may arise from the

enforcement of existing rules or the prevention of short-cuts and multi-tasking rather than merely

from automation of checking steps. These important socio-technical mechanisms which can

contribute to improved safety are often not reported in full but could provide explanations as to why

some systems lead to workarounds compromising safety whilst others do not. This lack of detailed

examination of mechanisms hampers much of the HIT literature and reduces its usefulness to the

NHS as a whole.

Reducing the time to delivery of blood products increases the likelihood of full cross-matching being

possible without reducing time-to-vein in emergencies. This in turn reduces wasteful pre-emptive

ordering, overuse of universal donor blood and increases physician acceptability of automated

systems (Murphy 2012b). Elsewhere, Murphy (2012c) presents estimated net savings of £598k per

year with yearly costs of £357k for system, support and training set against yearly gross savings of

£955k as a result of improved productivity, greater than national trend reductions in blood use (12%

vs 9% national from SHOT data) and reduced wastage (estimates are reported by Murphy (2012c) in US dollars and converted here using the December 2012 mean exchange rate of 1.61 USD/GBP; source: http://www.x-rates.com, last accessed 17/06/2015).
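For transparency, the conversion and net-saving arithmetic behind the figures quoted above can be reproduced as a minimal worked example; the helper function is illustrative only.

    USD_PER_GBP = 1.61  # December 2012 mean exchange rate used in the text

    def usd_to_gbp(usd: float) -> float:
        """Convert a US dollar figure to pounds sterling at the stated rate."""
        return usd / USD_PER_GBP

    # Figures quoted above (already expressed in GBP after conversion)
    gross_saving_gbp = 955_000   # yearly gross savings from productivity, blood use and wastage
    yearly_costs_gbp = 357_000   # yearly system, support and training costs
    print(f"Net saving: £{gross_saving_gbp - yearly_costs_gbp:,}")    # £598,000
    print(f"$1,000,000 at this rate: £{usd_to_gbp(1_000_000):,.0f}")  # illustrative conversion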

Murphy (2012b) found a combination of patient barcodes, hand held devices and electronically

controlled fridges reduced the time to supply urgent blood from 18 minutes to 45 seconds. Miller et al.

(2013) conducted an uncontrolled before and after study of the implementation of 2D BC in a single

haematology/oncology clinic in Australia and noted positive verbal identification increased from 57%

to 94%, cross-referencing wrist band to patient ID increased from 36% to 94% and cross-

referencing of “compatibility tag” to wrist band increased from 48% to 99%. Nurses were able to

scan the BC at the bedside and print labels on demand with a portable printer, further increasing

efficiency. Labelling errors decreased from 103 per year to 8. There was no detailed staff, time or

workflow analysis presented.


Collectively, the relatively short follow-up phases for these studies may not adequately capture

longer-term shifts in acceptance by staff or the development of workarounds or indeed improved

efficiencies as familiarity increases and systems “bed in”. Murphy (2012b) did report that the BC

technology supported a reduction from 2 to 1 nurse required for bedside checking whilst actually

improving safety rates. As with other HIT efficiency gains, this can be considered a financial saving

or as a nursing quality improvement as time savings can be reinvested in improving patient

experience.

Sellen et al. (2015) conducted a systematic review but found only 13 papers (reporting 6 studies) in

this area. They included grey and qualitative literature as long as there was evidence of a "research

method". No studies were found that analysed serious transfusion events as these are so rare but

Cross-matched to Transfused Ratio (CTR) is used as a proxy measure of safety and efficiency.

Reductions in time to blood issue of between 96% and 98% were found with reductions in CTR of

between 23% and 65%. Other reported outcomes included blood wastage due to hardware failure, staff time

saving (of around 100 hours per week), increased traceability of units and lowered laboratory staff

stress during emergencies or high-demand periods. Cottrell (2013) conducted a systematic review

of blood transfusion technologies and found evidence for the effectiveness of various interventions

to reduce WBIT errors. All studies showed improvement but did not demonstrate longer-term

sustainability and it is not clear which intervention or combination of interventions is superior.

Effectiveness and accuracy of other safety systems

Evaluations of CPOE systems themselves are not included in this review although advantages may

include reductions in prescription-related errors in terms of legibility, timing (as a result of alerts and

instant updates), availability (compared to a single paper drug card) and completeness (automatic

inclusion of essential elements such as route or rate instructions) which could all increase safety.

We have reviewed articles where outcomes directly relevant to administration errors were included.

No papers were identified formally evaluating interpretation of prescriptions by nursing staff at the

point of administration, or enforcing pre-administration checks for allergy, pain, heart rate etc.,

although these would have been relevant and systems increasingly include such safety functions.

Fitzhenry (2011) studied a BCMA enabled warfarin dosing system which included automated dose

error alerting. Alerts were scored from 0-10 in terms of the likely clinical patient impact by staff.

Only 99 of 2404 alerts were deemed “clinically meaningful” using a subjective numerical rating

scale. Further evidence regarding CPOE emerges from Manias et al.'s (2012) review of evidence

of the impact on drug errors in ITU from a variety of safety interventions. This showed that CPOE

did not always reduce prescribing errors but did lead to reductions in harmful Adverse Drug Events


(ADEs). The same review found evidence that smart pumps had no effect on infusion errors and a

25% workaround rate was observed during the study period in one paper in an adult ITU setting.

Two further reviews of BC technology and drug administration errors reached different conclusions

with Young et al., (2010) suggesting that BCMA had mixed effects on error rates and Leung et al.,

(2014) finding up to 80% reductions in administration errors where systems were integrated with

CPOE and pharmacy processes. The complexity of such technologies and the hospital workflows

within which they are embedded combined with the expense of rigorous evaluations of individual

implementations and lack of baseline data, means there is insufficient published evidence to

substantiate projected benefits.

In a before and after study based on two Intensive Care units (Chapuis et al., 2010) the rate of all

administration errors fell further in a unit equipped with ADCs compared to a control unit with

traditional drug storage but there was no difference in errors leading to patient harm suggesting the

multifactorial nature of drug administration errors. Cottney and Innes (2015) showed that other

factors affecting safety, including interruptions, the number of “stat” (due immediately) or “pro re

nata” (as required, at the discretion of nursing or medical staff or request of the patient) doses, bed

occupancy and the burden of drug administration workload, all increase error rates significantly. None of these factors is directly amenable to a technological fix, as such systems do not reduce workload for nurses to any great extent.

Theoretical and qualitative work

Toussi et al., (2014) carried out a “failure mode and effects analysis” study of medication safety. A

panel examined medication processes and ascribed scores for likelihood and severity of different

errors. Potential errors were still possible despite automation but such systems were more

amenable to the inclusion of decision support and automated alert features, reducing the risk of some error types significantly. Finally, Lawler et al.'s (2011) systematic review of the socio-technical

aspects of HIT implementation showed that BCMA does improve medication administration

accuracy when implementation accommodates actual workflows and staff needs and priorities but

no cost/benefit data are presented. Lawler commented on communication difficulties between

teams and suggested that workarounds with unintended consequences tend to emerge because of poor

design or fit with existing care processes.

Song et al. (2015) used a survey technique to examine the views of nurses already using a BC

system and found that feedback and communication regarding errors were positively associated


with perceived usefulness and ease of use, whereas the age of staff was negatively associated

with intention to use the systems. Smeuler et al., (2014) conducted a qualitative study of 20 nurses

regarding their role in drug administration safety finding that level of nursing knowledge, experience

and workload were all significant factors impacting on nurses' ability to perform safe drug delivery.

All of these may reduce the efficacy of HIT interventions unless automated alerts and reminders

implemented via ADCs can be shown to mitigate these effects directly. So far this has not been

demonstrated in the academic literature.


Capability: 4.6 Digital nursing dashboards

Details of review and search terms: 744 records found; 93 abstracts screened; 45 papers included. Search terms: nursing dashboards; dashboards; clinical dashboards.

Effectiveness and accuracy: Few papers detail the importance of dashboards focusing on the quality of the measures, not the quantity. The impacts on clinical nursing are not clear in the literature, although there are improvements reported such as increased communication and information sharing. Lack of quantifiable data from EHRs. Unclear how accurate or 'real-time' dashboard data can be.

Financial impact: Limited financial information in the literature. One example of how much these systems cost and savings in terms of a GP urgent care dashboard preventing hospital admissions – possible causal link.

Impact for patients and staff: No mention of patient perspectives or utility. Staff feedback positive - dashboard had helped the nurses achieve outcomes of care. Some feedback that decision support tools were required as visual presentation of the data was insufficient to achieve changes.

Learning for implementation: Involvement of the staff in the design is important – need for evidence on increasing impact as staff may not necessarily use them. Dashboards need to be accessible - user guide is useful. Lack of detail of how data is analysed longitudinally (as opposed to day to day reactivity); snapshots of real-time data can be very misleading. Research is needed into the motivational effects as evidence for direct feedback on performance is limited - only multi-modal approaches to quality improvement have any evidence base.


4.6 Dashboards

“Dashboard” is a word increasingly used in a variety of ways within healthcare to describe frequently

updated graphical displays of various indicators such as bed occupancy. They can be used in the

other capabilities described in this review such as monitoring patient acuity, workforce deployment

and ratios or patient flow performance in Emergency Departments (Spektor et al., 2008, McHugh et

al., 2011, McCaughey et al., 2015). They are increasingly used to inform commissioning in specialist

services and to compare the performance of organisations and individuals (NHS England 2015,

Redwood et al., 2013, Maben et al., 2012). In this section we examine the literature on digital

nursing dashboards and their impact on performance metrics. The key characteristics of quality and

clinical dashboards, which separate them from other systems such as computerized decision

support systems (CDSS) and electronic medical records (EMR), are that they:

- provide summary data on performance measured against targets (often related to quality of care or productivity)

- use data visualization techniques (such as graphs) to provide intuitive feedback to leaders or individual clinicians (Dowding et al., 2015).

Dashboards visually display nursing metrics so staff can easily see and understand the standard of

care that is being provided in their unit. Many sources suffer from a lack of detail about the costs of

these dashboards, whether the measures are real-time or a visual display of daily/weekly/monthly

measures and whether they impact on clinical behavior. There is a predominance of examples of

areas developing and introducing a dashboard with little evidence for longer-term impacts.

Nursing metrics are ways of measuring the quality of nursing care. They include the use of

indicators to measure nurse-delivered interventions, outcomes and patient experiences (Griffiths et

al., 2008). The term “metrics” originated in business where it is used to describe and display

measurements of results or production. Nursing metrics are designed to quantify important nursing

care and procedures in a specific area (Foulkes 2011). The policy driver for these is evident in High

Quality Care for All: NHS Next Stage Review (Department of Health DH 2008a) and Framing the

Nursing and Midwifery Contribution: Driving up the Quality of Care (DH 2008b). There is a

significant amount of literature documenting the importance of metrics both in this country and

abroad (Aydin et al., 2008, Brown et al., 2008, Chandraharan 2010, Flannigan et al., 2010, Royal College of Nursing (RCN) 2011, Doran et al., 2011, Morrow et al., 2012, Felkey and Fox 2014) and


work looking at establishing them in certain areas nationally (NHS England 2015). But it is unclear

what effect they have (Dowding et al., 2015).

Nursing metrics are focused on outcomes for which nurses could realistically be held accountable

such as infection rates, number of falls, and number and grade of pressure ulcers (Foulkes 2011),

however metrics need to be visible and accessible if they are to assist in promoting quality

improvement. Dashboards are a visual representation of these metrics such that staff can easily see

the standard of care provided and the impact of specific quality improvement initiatives (Brown et

al., 2010). The way in which information is presented to individuals may also affect the decisions

they make (Workman 2008). The majority of clinical dashboards tend to use the same universally

recognized colour coding, based on the traffic light system: green where performance is within targets, amber where performance is falling slightly behind targets, and red where there has been a major deviation from the agreed targets (Flannigan et al., 2010). The idea is to avoid a

common failing: that nurses collect data but do not always ‘own it or understand it’ (Foley and Cox

2013). However the value of such dashboards is dependent upon the quality of the data entered

(Collins and Draycott 2015). The underuse of dashboards in the United Kingdom was noted by the

NHS Next Stage Review (DoH 2008). It advocated the development of dashboards in all clinical

areas and that every provider of NHS services should systematically measure, analyze, and

improve quality, developing local frameworks to use nationally relevant indicators.
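As a minimal sketch of the traffic-light mapping described above (the thresholds, the assumption that higher values are better, and the function itself are illustrative and are not taken from any cited system):

    def rag_status(value: float, target: float, tolerance: float = 0.05) -> str:
        """Map a metric against its target to a traffic-light (RAG) status.

        Assumes higher values are better: at or above target -> green; within a
        small tolerance band below target -> amber; otherwise -> red."""
        if value >= target:
            return "green"
        if value >= target * (1 - tolerance):
            return "amber"
        return "red"

    # e.g. 92% compliance against a 95% target with a 5% tolerance band -> "amber"
    print(rag_status(0.92, 0.95))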

The literature in this area is international in origin, with two themes emerging:

the effectiveness of the dashboards and the perspective of staff introducing and working with

dashboards. The literature is not always specific to nursing as it is often ward teams that are

included rather than the individual professions.

Effectiveness and accuracy

A key message which emerged from a Royal College of Nursing summit on clinical dashboards

(RCN 2011) was that the choice of indicators used to measure quality of care was vital and should

be based on what is important to the healthcare service user/patient and nursing team. Otherwise

there was a risk that dashboards could be used to measure what is convenient, rather than what is

necessary. There was also the risk of too much data being displayed.

Simms et al., (2013) in a study of consultant-led maternity units in the South West found that there

were 352 different quality indicators (QIs), covering 37 different indicator categories, with up to 39

different definitions for one particular QI. Frazier and Williams (2012), in a paper detailing the introduction of a dashboard system in a US health provider that used monthly data, stressed that


“de-cluttering” was important and that effective dashboards focused not on the quantity of measures but on the quality of the key measures being reported.

The literature contains many examples of dashboards being designed and implemented.

McLaughlin et al., (2014) reported that in 2007, the University of California Los Angeles (UCLA)

Department of Neurosurgery successfully created a quality dashboard to help manage process

measures and outcomes and ultimately to enhance clinical performance and patient care. Guha et

al., (2013) reported on the design of an acute gynecology dashboard focusing on activity, workforce

and clinical indicators, updated monthly. They documented that since the creation of the dashboard

it has gained popularity and is now used in a variety of activities including gynecology business

planning, sourcing new equipment, initiation of acute gynecology theatre lists and enabling

consistent feedback on clinical incidents and training of multidisciplinary staff members.

At an organisational level, Clark et al., (2013) described the Internal Medicine Program at The

Prince Charles Hospital Queensland, which developed a clinical dashboard that displays locally

relevant information alongside relevant hospital and state wide metrics to inform daily clinical

decision making. The data reported on the clinical dashboard are sourced in real time from the electronic patient journey board as well as other Queensland Health data sources, providing clinicians, including nurses, with access to a wealth of local unit data presented in a simple

graphical format. This can have the effect of aiding the identification and confirmation of patient flow

problems, assisting in identifying the root causes and enabling the evaluation of patient flow solutions.

These are useful studies about the design of dashboards, but for both studies there is a lack of

detail about the effectiveness and the costs associated with implementation.

Two major pushes for dashboard deployment in the UK were the Clinical Dashboard pilot

programme,

http://webarchive.nationalarchives.gov.uk/20130502102046/http://www.connectingforhealth.nhs.uk/s

ystemsandservices/clindash/overview, and the QIPP National Urgent Care Clinical Dashboard, described at

https://www.networks.nhs.uk/nhs-networks/qipp-urgent-care-gp-

dashboard/documents/4718_dashboards_4pp_a4_WEB_AW_FINAL.pdf. The pilot programme

involved the development and implementation of 24 dashboards, encompassing over 500 metrics,

utilising multiple clinical systems across 12 projects over 10 Strategic Health Authorities. The

QIPP dashboard, when fully implemented, will display information to help 2,000 GP practices pro-actively manage and co-ordinate their patients' healthcare, especially for the most vulnerable patients and those with long-term conditions, by providing real-time information from the local acute trust(s) on ED attendances, admissions and discharges combined with near real-time information


from out of hours (OOH) and the walk-in centre to each GP practice, enabling a whole system view

of urgent care. The learning from both programmes is difficult to access, with information limited to the websites cited above.

The QIPP programme built on the success of the NHS Bolton Urgent Care Clinical Dashboard (UCCD),

which was developed as part of the NHS Connecting for Health led Clinical Dashboards pilot

programme (Talbot 2012). The UCCD was designed by local GPs and combined information that

they already received from the local Acute Trust on A&E attendances, admissions and discharges,

with information from out-of-hours and the walk-in centre providers. This information was then

provided to 56 GP practices through the dashboard which gave a more complete picture of patient

care. The dashboard was used in different ways within practices to give GPs, nurses and active

case managers a clearer picture of urgent care activity, helping them pro-actively manage and co-

ordinate patient care. This, alongside a range of other local service improvement initiatives, led to

2009-2010 ED admissions falling by over 3% compared to a regional increase of 9%. Unscheduled

hospital admissions fell by over 4%, with one practice showing reductions of 17%. It was calculated

that over the year 2009-2010 the reductions equated to cost savings of over £600,000. The costs of

implementation were £6,000 or less for several sites that only needed to purchase software. For

practices which invested in additional staffing and/or software, the average total spend was £43,000

(Hedgecock and Lewis 2012).

Dashboards do not always present the clarity which is expected. Shaw et al., (2013) examined the

introduction of an electronic health record (EHR) query tool which extracted EHR data from a

pediatric intensive care unit for 295 patients to provide a real-time visual display showing

compliance with patient safety activities. The display included data on presence of consent for

treatment, restraint orders and deep venous thrombosis (DVT) prophylaxis. Baseline compliance

and duration of non-compliance with these activities were established prior to, and one month

following, the activation of the dashboard. Patients included in the analysis were equally divided pre

and post intervention and were similar in age, race, gender and mortality. The median time from

admission to attainment of caregiver consent was 327 minutes in the pre-intervention time period

and 219 minutes in the post-intervention time period, a difference that was clinically but not statistically significant. There

were no significant differences in compliance rates for DVT prophylaxis or restraint orders between

time periods.
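A sketch of the kind of query such a tool might run over extracted EHR data is given below; the record structure, field names and the four-hour consent target are illustrative assumptions, not details of the tool Shaw et al. describe.

    from datetime import datetime, timedelta

    # Hypothetical minimal admission records extracted from an EHR
    records = [
        {"patient": "A",
         "admitted": datetime(2015, 6, 1, 8, 0),
         "consent_documented": None,
         "dvt_prophylaxis_ordered": datetime(2015, 6, 1, 9, 30)},
    ]

    def compliance_flags(record, consent_target_minutes=240):
        """Flag safety activities that are missing or late for one admission."""
        flags = []
        if record["consent_documented"] is None:
            flags.append("consent not yet documented")
        elif (record["consent_documented"] - record["admitted"]
              > timedelta(minutes=consent_target_minutes)):
            flags.append("consent documented late")
        if record["dvt_prophylaxis_ordered"] is None:
            flags.append("no DVT prophylaxis order")
        return flags

    for r in records:
        print(r["patient"], compliance_flags(r))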

Dowding et al. (2015) performed a literature search from 1996–2012. 122 full text papers were

retrieved of which 11 reported an empirical evaluation of dashboards, reporting either their impact

on desired outcomes (patient/professional behavioral) or clinician perceptions of the utility of such


dashboards in clinical practice. Overall the results indicated considerable heterogeneity in

implementation setting, dashboard users and indicators used. In terms of impact there was

evidence that in contexts where dashboards were easily accessible to clinicians (such as in the form

of a screen saver) their use was associated with improved care processes and patient outcomes

(Starmer et al., 2008, Zaydfudim et al., 2009). Dowding et al., (2015) concluded that it remained

unclear what dashboard characteristics (e.g. type of graphical display, method of presentation) are

related to improved outcomes and how clinicians incorporated use of dashboards into their

everyday practice. Similar to much HIT research Dowding et al., (2015) reported that the majority of

studies had some element of potential bias, with 5 studies being of low quality meaning that any

significant results should be treated with caution. Keen et al., (2014) are due to report in 2017 on a

study examining ward use of dashboards at four NHS Trusts and whether they achieve the

openness, transparency and candor envisaged by Francis (2013) and supported by Keogh (2013).

In addition, work by Cookson et al. (2013), developed through consultation with a wide range of NHS stakeholders, is producing equity dashboards that present findings in an accessible and useful form to local and national decision makers, to identify emerging changes in equity performance and to understand how their actions are influencing inequalities; this work is due to deliver in 2016.

Impact on patients and staff

In terms of the acceptance by staff, Daley et al., (2013) explored the experiences and attitudes of

mental health professionals working in three acute elderly care areas, to a new clinical dashboard

system which used a variety of different graphics. Metrics were tracked from baseline to 6 months.

Staff completed a questionnaire 3 months after the initial implementation, the results of which

suggested there was improved access to information, increased communication and information-

sharing, increased staff awareness, and improved data quality associated with the introduction of the

dashboards. Jeffs et al., (2014) in a self-reported qualitative study undertaken in Toronto explored

the perceptions and experiences of 51 front-line nurses and five managers from six units, of the

implementation of a unit-level dashboard. They described the Care Utilising Evidence (CUE)

dashboard as a visual tool that displayed data on the impact of the nursing care provided to

patients. Staff used the tool to keep track of processes of care and patient outcomes and

experiences at a unit level. They were then able to use the performance data to identify quality care

improvements specific to their unit and it also acted as a reminder. Importantly the nurses reported

that before the CUE dashboard they were often unaware of the outcomes of care and that it gave

them encouragement for continuing to provide care.


Safdari et al., (2014) and Frazier and Williams (2012) described how involving staff within their ED

department/hospital can be helpful in establishing key performance indicators and designing the

dashboard. Russell et al., (2014) described the collaborative approach taken by NHS Lanarkshire,

which involved nursing staff, programme leads and the eHealth team in the development of a

general ward nursing dashboard as a means of ensuring safe, effective person-centered care. Staff

in 70 acute adult inpatient wards, in three general hospitals, had been assisted to understand and

use it. To ensure consistency of understanding, a user’s guide was developed both within the

dashboard (as an information button) and as a leaflet. The guide helped staff to refresh their

knowledge and develop a deeper understanding of the information available to them because “Data

alone cannot ensure care is of high quality” (Russell et al., 2014, p. 43).

In their small study Effken et al., (2011) explored the decision making of 20 nurse managers in three

Arizona hospitals. They described the nurse manager's environment in terms of the constraints it

imposes on the nurse manager's ability to achieve targeted outcomes through organisational goals

and priorities, functions, processes, as well as work objects and resources (e.g., people, equipment,

technology, and data). They found that nurse managers receive a great deal of data, much in

electronic format. Although dashboards were perceived as helpful because they integrated some

data elements, what was lacking was the decision support tools to help nurse managers with

planning or answering “what if” questions. No nursing example was found of such a decision

support tool, but Dolan et al., (2013) explored the feasibility of such a supportive tool in interactive

decision dashboards in the medical treatment of knee osteoarthritis. They created a computerized,

interactive clinical decision dashboard and performed a pilot test of its clinical feasibility and

acceptability with 25 doctors using a multi-method analysis. The mean time spent interacting with

the dashboard was 4.6 minutes and the interactive clinical decision dashboard scored well for

mechanical and cognitive ease of use, clarification of values, reduction in decisional uncertainty,

provision of decision-related information and decision-aiding effectiveness. In a later paper Dolan

and Veazie (2015) reported that such an interactive decision dashboard increased confidence in

medical decision making.


Capability: 4.7 E-Workforce Deployment

Details of review and search terms: Records found 580; abstracts assessed 28; papers included 12. Search terms: e-roster, nurs*, roster, rota, workforce planning, off-duty, autom*, digital.

Effectiveness and accuracy: Lack of baseline data on rota quality and efficiency to assess the impact of implementation. Limited evidence detailing implementation from a practitioner-manager perspective. Some evidence of improvement in rule adherence and reduction in changes after rota release. No evidence that e-rostering is used to increase self-rostering and therefore improve employee wellbeing.

Financial impact: Literature mainly anecdotal or narrative - no detailed cost-benefit data found. No studies have demonstrated financial savings net of implementation, training and licenses. Significant savings in senior nurse time likely but robust evidence is lacking. Financial impact of savings from improved analytics and management oversight and control has not been reported.

Impact for patients and staff: There is little evidence that e-roster systems as currently used facilitate greater control for staff. One study reported that the system was acceptable to nurses. No studies measuring relative “fairness” or other acceptability indicators for e-rostering versus existing systems or between units or sites. No studies reported the impact on senior nurses in terms of ease or time saving. No reports of systems that allow greater accessibility of rosters to staff.

Learning for implementation: There is a need for research to compare implementation and outcomes between sites now that sufficient systems have been deployed. The degree to which HIT is integrated and adapted to local practices may strongly influence the realisation of benefits such as safer nursing cover and appropriate skill-mix. Fidelity to planning during implementation via training and senior monitoring may significantly increase the benefits achieved, e.g. by using automation and rule-setting functionality. Benefits likely in reducing senior nurse time and improving skill-mix and safety - little published evidence. Analytics to manage costs are possible with integration with ESR and payroll systems – published evidence lacking. Attendance management systems are needed to ensure payroll reflects actual activity and reduce mis-payment. Monitoring compliance may be more important if e-rostering is extended to other, less directly supervised staff groups.


4.7 E-Workforce Deployment

Adequate skill-mix provision is a key determinant in the realisation of safety and care quality

benefits proposed for e-rostering (NHS Employers 2007). Within nursing this concern is increasingly

driven by the increasing introduction of nurse practitioner roles, often entailing a greater range of skill sets needed across different wards and departments and at different times. To enable the

management of complex skill-mixes via e-roster systems, considerable up-front data input is

required, as well as ongoing record “maintenance”, to ensure the accuracy of information regarding staff

capabilities, training and accreditation. The evidence discussed below is grouped under four broad

themes: effectiveness and accuracy of rotas, including management of skill-mix; financial impacts; self-rostering; and staff impact and acceptability.

Effectiveness, accuracy and skill-mix

The importance of accurate nursing workforce management can hardly be overstated, especially in

the light of recent safety reports (e.g. Francis 2013). E-rostering systems have been proposed as having the potential to deliver improvements in safety such as adequate skill-mix provision and

matching of staff supply to demand. In addition e-roster systems may impact on the degree to which

planned rosters become the actual worked hours and deployment, often termed rota “robustness”.

Many commercially available e-roster systems incorporate tools for managing skill-mix which could

support increased productivity by ensuring that required skills are available consistently. No

literature was identified measuring the impact of e-roster implementation on skill mix availability or

indeed the degree to which the functionality of deployed systems was utilised or integrated with

Electronic Staff Records. In addition, accurate information on patient flows, acuity and case-mix is

also required to target workforce planning using an e-roster system. There was no literature

regarding the adaptation of e-roster processes to incorporate information on variability in patient

demand, e.g. modelling patient flow and incorporating this into e-roster constraint or rule setting. There

was also no literature regarding the use of e-roster analytics in longer-term variations such as

seasonal planning or predicting the need for part-time or annualised-hours staff.

Where automatically generated rosters are adapted post hoc to reflect unexpected sickness, peaks

in workflow and other threats, this results in reductions in the efficiencies gained from advanced

planning. Very few studies measured the robustness of electronically generated rosters in real-world


settings either in terms of their ability to meet soft and hard constraints (e.g. European Working

Time Directive) or adherence to planned shift allocations or staff requests. Wong et al. (2014) examined the use of an electronic roster system and found that a two-stage process, comprising an initial roster meeting demands and requests followed by an iterative adjustment process, delivered

effective rosters when assessed in terms of completeness of cover, skill-mix, fairness, staff-

wellbeing and continuity of staffing across consecutive shifts. There was no comparison to previous

systems and robustness to sickness or workload variation was not assessed.
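The hard-constraint checking referred to above can be sketched as follows; the specific rules (a 48-hour weekly ceiling standing in for working-time limits, a minimum of two staff per shift) and the data layout are illustrative assumptions rather than the rules of any system discussed here.

    from collections import defaultdict

    SHIFT_HOURS = {"early": 7.5, "late": 7.5, "night": 10.0}

    def check_hard_constraints(roster, min_staff_per_shift=2, max_weekly_hours=48):
        """roster: list of (nurse, day, shift) tuples for one week.
        Returns a list of violated hard constraints (illustrative rules only)."""
        violations = []
        hours = defaultdict(float)
        cover = defaultdict(int)
        for nurse, day, shift in roster:
            hours[nurse] += SHIFT_HOURS[shift]
            cover[(day, shift)] += 1
        for nurse, h in hours.items():
            if h > max_weekly_hours:
                violations.append(f"{nurse} rostered for {h}h (limit {max_weekly_hours}h)")
        for (day, shift), n in cover.items():
            if n < min_staff_per_shift:
                violations.append(f"{day} {shift}: only {n} staff rostered")
        return violations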

Drake conducted a study (2014b, 2014c) of e-roster system implementations in NHS hospitals but

found no data describing rosters created prior to implementation to enable a before and after

comparison. Instead cross-sectional comparisons were made between sites after implementation of

various systems. The study found enormous variation in the degree to which automatic functions

were utilised by senior nurses in practice, from 6% to 80% of all shifts allocated. Rule violations and

post hoc changes were more likely where there was a greater reliance on manual roster creation

suggesting that automation performed well in meeting staffing needs where it was used. Increasing

self-rostering was not associated with an increased need for retrospective changes suggesting that

automation using e-rostering could realise the benefits of greater staff autonomy without loss of

roster robustness. Despite the adoption of electronic systems, problems such as unfilled shifts

persisted and contributed to safety concerns. Unfilled shift rates varied across hospitals with rates of

up to 15% of all planned shifts left vacant in some units. E-roster system analytics may make

identifying such problems easier and may permit more effective workforce planning but this has not

been tested in any of the literature identified.

The degree to which e-roster implementation is localised at hospital or even ward level, may

strongly influence the realisation of safe and adequate worked rosters. Although e-roster

implementation for nursing has reached 90% coverage, unscheduled inspections on wards reveal

staff short-falls compared to workforce plans (Hockley and Boyle 2014).

Financial impacts

E-roster systems are suggested as likely to reduce both high agency spend and payroll

errors (NHS Employers). Ronneberg (2010) reported a reduction in the requirement for agency staff

after the introduction of an e-roster system. The degree to which this is generalisable to other

settings is unclear. The time devoted to the management of rostering by senior nursing staff may

also be reduced by the implementation of electronic systems. Wong et al., (2014) reported that


roster generation processes using an electronic system took 10-15 minutes compared with manual

processes taking “2 hours” or more. This was merely to fulfil hard constraints (e.g. no empty shifts), implying that further time would have been required to finesse the roster before implementation, and

this time was not estimated. Other literature identified concurs that a conventional manual approach

may take hours of senior staff time to generate even preliminary versions of a roster and much more

for larger more complex settings (NHS Employers 2007). In addition the size of the ward or unit and

possibly the complexity of shift patterns and workload may also be related to the size of any

potential efficiency benefits derived from automation. Again context is a central concern in predicting

likely cost/benefit improvements. No work was identified in the public domain quantifying this aspect

of e-roster implementation. Hertfordshire Partnership NHS Foundation Trust reported savings of

£0.75m over a three year period after the introduction of an e-roster

(http://www.nhsemployers.org/case-studies-and-resources/2012/11/hertfordshire-partnership-nhs-

foundation-trust---e-rostering-to-reduce-agency-spend last accessed 23/04/2015). Little supporting

data are presented in this case-study. It is not clear where such savings emerged e.g. whether from

reduced agency spending or reductions in other errors arising from manual processes. It is not

reported whether savings are net of implementation costs e.g. software licenses and training, or

whether they include indirect costs such as ESR maintenance or residual manual work by nursing

managers in altering rosters post-hoc.

Direct links to payroll systems may also reduce overpayment compared to manual systems, for

example where shifts are cancelled at short notice. Further efficiencies may arise simply from

increased legibility and accuracy of e-rosters compared to paper but this was not reported in the

literature, in keeping with the overall lack of evidence about roster practices prior to automation. Improvements in the accessibility of rosters for nursing staff may increase efficiency and communication, helping to ensure that planned shifts are actually worked.
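A roster-to-payroll reconciliation of the kind implied by such direct links could, in principle, be as simple as the following sketch; the data layout is an assumption for illustration only.

    def payroll_discrepancies(planned_shifts, worked_shifts):
        """Compare planned roster entries with recorded attendance.

        planned_shifts / worked_shifts: sets of (staff_id, date, shift) tuples.
        Shifts planned but not worked risk overpayment if payroll is driven from
        the roster alone; shifts worked but not planned risk underpayment."""
        return {
            "planned_not_worked": planned_shifts - worked_shifts,
            "worked_not_planned": worked_shifts - planned_shifts,
        }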

The literature on e-roster systems does demonstrate that automation allows roster creation in a

matter of minutes compared to many hours for manual systems (Wong 2014, Glass and Knight

2010, NHS Employers 2007). Overall NHS Employers estimate that typical improvements from the

introduction of e-rostering reduce senior nurse time from three days to about three hours. No

literature was found quantifying such benefits and the degree to which automated rosters need

refinement has not been widely researched in live systems.


The National Audit Office (2010) identified large levels of payment errors arising from the failure to

update payroll systems in line with staff turnover, missed shifts or unearned supplemental payments

for unsocial hours during sickness periods. Hockley and Boyle (2014) extrapolate from conservative

local figures to estimate that nationally the NHS may lose up to £30m per year in overpayments due

to rostering errors often arising from failures to link separate roster and payroll systems. No

literature was identified which reported changes in the levels of payment errors as a result of

implementation of linked e-roster systems. The greatest threat to purported payroll savings from e-

roster deployment remains the reliance on manual reporting of shift attendance by supervisory staff.

E-roster deployment has not generally been accompanied by widespread automation of attendance

information e.g. via biometric “clock-in” systems (Hockley and Boyle 2014). The same report uses

data from Basildon and Thurrock University Hospitals NHS Trust, to illustrate the potential level of

savings to the NHS at over £40m (this paper reports funding by Kronos Ltd, supplier of e-roster

systems to the Trust concerned). It is possible that this estimate is at the upper limit of possible

savings and arises from a single Trust that has fully implemented automated attendance recording

systems rather than a stand-alone e-roster product.

Most of the evidence for the cost-benefit impacts of e-roster systems is anecdotal or assumed to

arise simply from the rich functionality available in many commercially available systems and the

scale of the existing waste. Crucially the limited degree to which previous practice was understood

or analysed, limits the reliability of these assumptions of savings to the NHS. No high-quality, time-

series auditing evidence adjusted for confounding factors has been identified within the literature but

this may be available locally to guide implementation and evaluation of financial impacts and

cost/benefits.

Self-Rostering: Recruitment, Retention, Wellbeing, Acceptability for staff

Many of the benefits of e-rostering are thought to arise from the degree to which such systems can

facilitate greater staff autonomy in managing their working patterns (NHS Employers). There is

strong evidence that increasing employee “work time control” or degree of self-rostering delivers

benefits to employees, as a result of improved work-life balance and reduced work-family conflict, and to

employers in the form of job-related outcomes such as improved performance, motivation and

reduced actual or intended turnover (Nijp et al 2012). The same review did not find consistent


evidence of any impacts on employee wellbeing or health, however, and there appears to be little

sign that these measurements are routinely monitored.

Rönnberg and Larsson (2010) used data derived from shift requests in the form of self-written roster

lines submitted by ward nurses, to generate rosters automatically which were then compared to

convention rosters written by ward managers. No cost-benefit data was presented but electronically

generated rosters met compliance constraints to a greater degree than the conventional process. It

was reported that there was less discrepancy between staff requests and what was allocated using

the automated process which is likely to have had a positive impact on staff acceptability.

Drake (2014b, 2014c) qualitatively compared hospital rostering policies (14 NHS hospitals) reporting

that the requirement for “fairness” in rostering processes was the commonest consideration to be

mentioned. Though assertions of increased fairness are widely made in support of e-roster system

implementation, Drake claims that fairness is poorly defined and may be no more nuanced than an

adherence to a few rules (2014a). Such impersonal implementation may not be accepted as “fair”

by nurses in practice. The original automated version of a roster may not survive manual revisions

which impact on objectively important attributes of the roster e.g. adequate skill-mix (2014b). But

perceived fairness may actually depend on manual revisions in the roster process rather than

impersonal automation to reflect individual personal circumstances etc. Unless systems enable staff

to make many personal requests or write their own rota (within limits) easily and quickly, it is unlikely

that even maximally automated e-roster systems will increase satisfaction levels markedly without

subsequent manual revisions. Such radical implementation of e-roster functions may be less time

consuming than manual revisions in a paper system or poorly implemented e-roster. Failure to take

advantage of such functionality could reduce the impacts on “empowerment” and improved well-

being (Drake 2014a).

Overall there is little evidence that e-roster systems are currently used in ways that facilitate greater

staff control, or that they lead to increased acceptability of rosters for the staff they cover, and therefore to

improvements in contingent human resource metrics thought to underpin likely benefits.

No other literature was identified measuring the extent to which self-rostering was facilitated by e-

roster implementation in practice. No studies were identified which measured acceptability of e-

roster implementation in terms of perceived work-life balance, well-being or in actual or proxy

measures of staff turnover.


The benefits of e-roster rollout across the NHS fall broadly into those thought to arise directly

(recording, reporting, accessibility and auditing functions) and those that are inferred as secondary

benefits (improved planning and skill-mix and reduced agency spend) but are more or less

contingent on effective and faithful implementation. As with other HIT applications, implementation

may be as important as the original choice of systems and demonstrating planned benefits in terms

of high-level outcomes and performance is difficult. Overall there is, as yet, no evidence that the purported benefits of e-rostering systems have been realised within the NHS, and demonstrating such benefits may require larger comparisons across organisations and/or monitoring of local

practices.


5. Limitations and learning

The quality of the evidence in the area of nursing HIT is variable and there is a lack of scientific

evaluation for the implementation of existing and developing health technologies in nursing. Four

themes emerge that need consideration: methodological issues, human factors, technological

implications and the health care context.

Methodology

A variety of methodological issues arise in the context of HIT and these problems are essentially

those of assessing complex health interventions in general: how to establish tight cause-effect

relations; how to assess the impact of elements of an intervention where confounding effects cannot

be controlled for; how research should be reported so as to defend it against a “purist” research

agenda which demands demonstrable, generally quantitative, impacts on high-level outcomes such

as mortality. Despite widespread deployment of HIT to improve safety, drive quality improvement

and control costs, high-quality evidence of efficacy remains sparse. In many cases poor

methodological approaches such as small sample sizes, short follow-up periods, failure to consider

confounding effects of existing trends and lack of attention to the socio-technical aspects of

implementation and embedding, all reduce the utility of what is published for commissioners and

senior managers. A final overarching problem is that HIT for nursing is in its infancy. There is little

comparable base-line data and no allowance for the cultural lag inherent in transforming workflows

and existing practices needed to maximize the potential benefits of digitisation.

All 21st century digital technology has an increasingly short life-cycle and this is nowhere more

important than in healthcare. As the projects described here mature, ongoing evaluation of

interventions throughout the whole life-cycle of a HIT deployment is called for. This is essential to

avoid missing the development of unintended consequences such as workarounds, redundancy,

misfits with other human and digital systems and asynchronous obsolescence. Local

implementation takes time to demonstrate benefit and requires considerable project management

which must include cultural and behavioural shifts. The evaluation phases governed by the

academic research cycle are typically too short to study maturing systems and so the need for

“more research” is better expressed as “robust ongoing evaluation” reflecting a lack of any definable

destination or endpoint. Academic peer-review is essential to drive methodological refinement but

valid techniques must be made more widely available to NHS organisations themselves so they can assess their own progress internally. Sharing of learning must increase dramatically with


consideration given to supporting staff in publishing, not necessarily academically, positive and

negative findings, of what is effectively “action research”. Most importantly of all, “failures” must be

recast as learning opportunities and must be shared free of the risk of reputational damage to

individuals and organisations from commercial and political interests.

There was a lack of available information on many important factors such as the costs of equipment,

software, training, support and maintenance of systems. This precludes the establishment of

realistic cost-benefit analyses. Similarly, a reduction of all economic evaluations to financial bottom

lines risks obscuring the sometimes dramatic impacts of releasing nursing time to increase

qualitative aspects of care such as communication and demonstrating compassion which are so

important to patients and families. Robust qualitative and observational data can often indicate

when a quality improvement has occurred even when a financial one has not. Few of the current

evaluations of the implementation phases of HIT projects reviewed here include a robust qualitative

analysis of staff and patient acceptability, behavioral intentions, acculturation or adaptive responses

to technology especially where systems appear to conflict with staff professional pressures and

patient need. Barriers to uptake and usability must be understood and overcome and research

should incorporate socio-technical theoretical models to ensure comprehensive assessment of

impacts on staff and patients as well as accounting for process improvements.

There remains a lack of consensus around the evaluation of highly complex service interventions

(Lilford et al., 2009). Catwell and Sheikh (2009) say that we need to have the means to demonstrate

the “true” benefits of HIT systems. Quantitative outcomes measured with quasi-experimental before

and after methods with suitable controls where possible are the only option in most cases of HIT

investment. Careful selection of controls or individual participants, stepped-wedge implementations

and the inclusion of confounding secular trends in modelling using Interrupted Time Series analysis

and other quasi-experimental techniques are all robust techniques with a good fit to situations where

“the plane must be rebuilt in flight” as the saying goes. These techniques are not all highly complex

and interpretation of longitudinal analysis is perfectly possible for senior clinical nurses with suitable

training and support. Time-series principles have long been applied in managerial contexts: Statistical Process Control charts to highlight unexpected performance deteriorations, for example, have been used by non-academics in industry for decades but remain underused in the NHS. The collection, manipulation

and analysis of the increasing amount of available data in a form useful for senior staff should be

viewed as a core competence for nursing and managerial staff responsible for deploying and

refining HIT.
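As a minimal sketch of the Statistical Process Control idea mentioned above, using simple Shewhart-style limits of the mean plus or minus three standard deviations (the metric, baseline figures and limit rule are illustrative assumptions; a real chart would choose the chart type to suit the data):

    import statistics

    def spc_limits(baseline):
        """Simple control limits (mean +/- 3 standard deviations) from a baseline
        series of a weekly nursing metric, e.g. falls per 1,000 occupied bed days."""
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline)
        return mean - 3 * sd, mean, mean + 3 * sd

    def flag_special_cause(baseline, new_values):
        """Return post-implementation values falling outside the baseline limits."""
        lower, _, upper = spc_limits(baseline)
        return [v for v in new_values if v < lower or v > upper]

    # Hypothetical weekly rates before an intervention, then three weeks afterwards
    print(flag_special_cause([6.1, 5.8, 6.4, 6.0, 5.9, 6.3], [6.2, 4.1, 5.9]))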


Many templates exist. Talmon et al., (2009) detail a list of principles for properly describing the

evaluation of HIT projects, to allow the reader to place the evaluation in the right context and judge the

validity and generalisability of findings. Lilford et al. (2009) discuss ways to address complexity

using pre-implementation and pilot testing incorporating principles from the Medical Research

Council (2008) framework for evaluation of complex interventions. They advocate “methodological

pluralism”, undertaking combined quantitative (“what happened?”) and qualitative work (“why did it

happen?”) when evaluating IT systems, with each informing the other. Greenhalgh et al., (2010)

suggest that HIT programmes should be studied at three levels: macro (national policy, wider social

norms and expectations); meso (organisational processes and routines); micro (particular

experiences of patients and professionals). Such research can result in effective tactics to calibrate

systems and maximise desired impacts and minimize disruption.

Lau et al. (2010) note that formal studies of HIT often lack sufficient power or duration to assess health outcomes in statistical terms. These limitations can be mitigated by using longitudinal approaches as suggested above. In many cases, the technology is only part of a complex set of interventions that includes changes in clinical workflow, provider behaviours and scope of practice, but these elements

are too often ignored (Wachter 2015). But it is here in the messy detail, that real impacts and

improvements will occur.

Human factors

In much of the HIT literature examined in this review the tensions between fidelity and local adaptation, the emergence of workarounds, underutilised functionality and failure to eradicate duplication and redundancy are evident but seldom addressed directly. Nurses in particular, given

the unstructured nature of their work and holistic responsibility for patient well-being rather than

narrow treatment, often develop “workarounds” to mitigate the impact of inflexible systems on their

clinical priorities and the priorities of their patients. This hierarchy should be reversed. Workarounds

occur overwhelmingly due to failures in systems and implementation of HIT in context-insensitive

ways (Marini 2009). Rack et al.'s (2012) focus group study in the US reported that over half of

nurses interviewed said that they had bypassed the BCMA system in their previous shift. Other

behaviours can include barcodes fixed to beds instead of patients or stored on a drug trolley or in a

remote preparation room (Hardmeier et al., 2014). Lawler et al. (2011) suggest that the inclusion of

nurses in the implementation or even procurement phase of HIT planning could help mitigate these

down-stream effects by ensuring that human resource impacts are addressed. There also needs to

be much more exploration of the impact of HIT for the patient. HIT designed to reduce


documentation and improve communication should result in more patient contact whilst ensuring

that interprofessional working is enhanced and not displaced. If a nurse can access a social history

on a PC in the office will she/he still spend as much time welcoming a patient to a ward? If a doctor

can issue nursing orders from an office, and therefore not speak to a nurse directly, will important soft information, such as relatives appearing upset and confused about a treatment plan, still be shared?

Carlisle (2012) has documented that the biggest barrier to successful HIT implementation is lack of

proper project management and leadership. These are vital to ensuring nurses embrace technology.

Too often she says implementation occurs in organisations that lack the skills to deploy technology

well. Where clinical engagement, responsive IT support and early process-mapping are neglected, significant problems are concealed. This can result in unnecessary resistance to HIT by

staff and is often misinterpreted. For example, nurses were found to be embarrassed because they

felt patients became aware of their lack of keyboard skills and proficiency in using new technology

(Carlisle 2012). Greenhalgh et al., (2008) advocates the widespread use of socio-technical change

models applicable to the implementation of HIT. This could mean that, rather than viewing an EHR as an end in itself, with success measured in terms of reductions in the costs of paper records or the speed of implementation, success could be measured as more frequent and timely access to records or improvements in legibility and reduced transcription errors. Instead of talking about “implementing

the EHR”, implementers should think of “integration” of the EHR as a part of the infrastructure for a

wider strategic change with a longer timeframe but a greater impact.

Lau et al., (2010) make three recommendations. First, when reproducing a successful project

attention must be paid to specific features and key factors that are critical to “making the system

workable” locally by early and continued involvement of frontline staff and assessments of the

impact on their day-to-day work. Second, contextual issues should be addressed, for example,

where information is required in one place and at one time (e.g. a clinic) but resides in multiple systems.

Third, return-on-investment can only be judged by ‘measuring the clinical impact […] of

implementation’. Human factors are currently neglected or undervalued in HIT evaluations but are

the key to delivering quality and patient experience improvements.

Technology

Although much everyday technology, such as smart phones, appears flawless, we rarely use a fraction

of the available functionality and then only for trivial individual tasks. This is not a good benchmark

by which to judge healthcare HIT. Usability and system resilience have profound effects on

73

acceptance because they have clinical and not just usability impacts. Poor connectivity can result in

prolonged log in times or clinical applications “timing out”, resulting in the loss of work (Kidd 2011).

This is not merely frustrating but often dangerous. As a result of the imperative to deliver care,

hardware and software failures result in staff developing workarounds to side-step a label printer

failure in an emergency room, fingerprint recognition errors on an ADC in a busy psychiatric unit or

network “black-spots” in community midwifes practice area.
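One common mitigation, sketched below in Python purely as an illustration (no specific NHS system or product is implied), is a save-then-send pattern: entries are persisted locally before any network submission, so a timed-out session degrades to a delayed upload rather than lost work.

    # Hypothetical save-then-send pattern for unreliable connectivity: entries are
    # written to a local queue first, and failed uploads stay queued for retry.
    import json, time, uuid
    from pathlib import Path

    QUEUE_DIR = Path("pending_observations")       # illustrative local store
    QUEUE_DIR.mkdir(exist_ok=True)

    def record_observation(data: dict) -> Path:
        """Persist the entry locally before any upload is attempted."""
        path = QUEUE_DIR / f"{uuid.uuid4()}.json"
        path.write_text(json.dumps({"captured_at": time.time(), **data}))
        return path

    def flush_queue(upload) -> int:
        """Try to send each queued entry; anything that fails stays queued."""
        sent = 0
        for path in sorted(QUEUE_DIR.glob("*.json")):
            try:
                upload(json.loads(path.read_text()))   # caller supplies the transport
                path.unlink()
                sent += 1
            except Exception:
                break   # network black spot: stop and retry the whole queue later
        return sent

    # Usage: capture immediately, synchronise whenever connectivity allows.
    record_observation({"patient": "PT-173", "resp_rate": 18, "spo2": 96})
    flush_queue(upload=lambda record: print("sent", record["patient"]))

The same principle applies to log-in sessions and form state: work captured at the point of care should survive the failure of everything downstream of the device.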

Maturation of systems tends towards the resolution of these problems (e.g. Franklin et al., 2007). This requires recognising (and acknowledging) where and how failures have an impact, identifying legitimate workarounds and developing safer alternatives (Barber 2010). Other considerations may include new opportunities for cross-infection, such as shared keyboards in outpatient clinics, access to a limited number of computers in pharmacy, restrictions on simultaneous record entry by multiple trauma teams, and the introduction of less efficient workflows. Some HIT implementations fail because the host organisation had an insufficient understanding of what it was actually doing before the HIT implementation and of what it really needed a system to do.

Wade et al. (2010) hold that telehealth using video has particular requirements, and hence associated costs, such as video screens, connectivity of sufficient bandwidth and reliability, physical space at each location, and health providers’ time to deliver services. Again, these must be considered at the planning stage and incorporated into any evaluation or research, as they can easily derail implementation. Lastly, a central problem in evaluating the effectiveness of commercial HIT systems is that their algorithms are not “open-label”, and therefore evidence from academic studies of generic approaches cannot be generalised to these commercial applications.

The Health Technologist

Health Information technology is not simply about replacing paper forms with their electronic equivalents, and when using evidence from HIT in situ it can be difficult to generalise between sites, settings and, especially, health systems. In particular, there is a preponderance of evidence from outside the UK; this should preclude simplistic extrapolation of reported benefits to the NHS. Cost-benefit analyses, and even the likelihood of successful implementation, are not clear where the context is sufficiently different, even when high-quality evaluations have been completed. For example, positive evaluations of BCMA systems may be partially reliant on “unit dose” dispensing, common in US tertiary hospitals, where pharmacy prepares and issues individualised drug supplies to facilitate accurate billing. Such individualised dispensing is rare in the UK, where drugs are supplied to wards and departments in generic multi-dose packaging. If systems are dependent on one or more such steps, the overall intervention may not deliver the expected safety or cost benefits. The evidence from drug administration error research suggests that too crude an understanding of the aetiology of drug errors will reduce understanding of what works, what does not and at what cost.

In the US, a head start over the rest of the world in the use of HIT, driven by recent financial incentives for digitisation (e.g. HITECH), has gradually led to the emergence of hybrid clinical-IT specialists: staff embedded within IT departments but with clinical backgrounds (Wachter 2015). The American Health Information Management Association (www.ahima.org/) offers accreditation as a “certified health technology specialist”, and in the UK the Council for Health Informatics Professions offers yearly registration, a code of conduct and accreditation of training and education (www.ukchip.org/).

6. Conclusion

Throughout the research for this review a central theme has surfaced frequently: existing evidence and evaluations of HIT, along with many implementations themselves, have failed to live up to expectations for similar reasons. Mass implementation of HIT requires not just new computers and monitors but new ways of working, new skills and new understandings.

Introducing nursing technologies is a complex process requiring appropriate, responsive and flexible technology, good infrastructure, effective project management and a clear evaluation system, which will rarely be purely academic in nature. This rapid review has described the existing academic and non-academic knowledge base regarding the likely effectiveness, financial impact and impacts on staff and patients of seven capabilities of current HIT. Where available, it has synthesised evidence for their impact on the nurses they are designed to support, the organisations which implement them and the patients whose outcomes they are designed to improve. The evidence varies in both scope and quality, and it is rarely comprehensive enough to provide assurance that implementation will be straightforward. This reflects the need for the NHS to learn rapidly from its own experimentation with these advances. Academic research alone will not provide the evidence the NHS requires to achieve its quality improvement goals. Robust multi-methodological evaluation is possible, and the results from large exemplar projects should receive wide attention. But small-scale, rapid-cycle or continuous monitoring of impacts should be central to any local implementation plan, and sufficient skills and external support should be available to clinicians and IT specialists within each organisation to enable them to undertake planning and evaluation where it is needed.


7. Contacts

Dr Lucy Sitton-Kent
Research Fellow for Innovation and Translation
East Midlands Academic Health Science Network and Nottingham University Business School
Level C, Institute of Mental Health, University of Nottingham, Innovation Park, Triumph Road, Nottingham, NG7 2TU
t: +44 (0) 115 7484262
e: [email protected]
w: http://www.emahsn.org.uk/

Dr Philip Miller
Knowledge and Evaluation Researcher
East Midlands Academic Health Science Network
Level C, Institute of Mental Health, University of Nottingham, Innovation Park, Triumph Road, Nottingham, NG7 2TU
t: +44 (0) 115 7432457
e: [email protected]
w: http://www.emahsn.org.uk/

Professor Rachel Munton
East Midlands Academic Health Science Network
Level C, Institute of Mental Health, University of Nottingham, Innovation Park, Triumph Road, Nottingham, NG7 2TU
t: +44 (0) 115 8231300
e: [email protected]
w: http://www.emahsn.org.uk/


8. References

Agrawal, A., (2009) Medication errors: prevention using information technology systems. British Journal of Clinical Pharmacology, 67(6), 681-686.

Audebert, H. J., Schultes, K., Tietz, V., Heuschmann, P. U., Bogdahn, U., Haberl, R. L., and Schenkel, J. (2009). Long-term effects of specialized stroke care with telemedicine support in community hospitals on behalf of the Telemedical Project for Integrative Stroke Care (TEMPiS). Stroke, 40(3), 902-908.

Aydin, C.E., Burnes Bolton, L., Donaldson, N., Storer Brown, D., Mukerji, A., (2008) Beyond Nursing

Quality Measurement: The Nation’s First Regional Nursing Virtual Dashboard,

http://www.ncbi.nlm.nih.gov/books/NBK43614/

Bailey, T.C., Chen, Y., Mao, Y., Lu, C., Hackmann, G., Micek, S.T., Heard, K.M., Faulkner, K.M., and Kollef, M.H., (2013) A trial of a real-time Alert for clinical deterioration in Patients hospitalized on general medical wards. Journal of Hospital Medicine, 8(5), 236-242. Barber, N, (2010) Electronic prescribing – safer, faster, better? Journal of Health Services Research & Policy. 15: 64-67 Barber, M., Frieslick, J., Maclean, A., Williams, J., and Reoch, A., (2015). The Western Isles Stroke Telerehabilitation (Specialist Medical Consultation) Service–implementation and evaluation. European Research in Telemedicine/La Recherche Européenne en Télémédecine, 4(1), 19-24. Bates, D. W., Leape, L. L., Cullen, D. J., Laird, N., Petersen, L. A., Teich, J. M., ... and Seger, D. L. (1998). Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. Jama, 280(15), 1311-1316. Baumgarten, M., Margolis, D. J., Selekof, J. L., Moye, N., Jones, P. S., and Shardell, M. (2009). Validity of pressure ulcer diagnosis using digital photography. Wound repair and regeneration, 17(2), 287-290. Bellomo, R., Bailey, A.M., Beale, R., Clancy., G, Danesh V, Hvarfner A, Jimenez E, Konrad D, Lecardo, M., Pattee, K., Ritchie, J., Sherman, K., Tangkau P, (2012) A controlled trial of electronic automated advisory vital signs monitoring in general hospital wards. Critical Care Medicine, 40(8), 2349-2361. Berdot, S., Gillaizeau, F., Caruba, T., Prognon, P., Durieux, P., and Sabatier, B. (2013) Drug administration errors in hospital inpatients: a systematic review. PLoS One, 8(6), e68856. Black, A.D., Car, J., Pagliari, C., Anandan, C., Cresswell, K., Bokun,T., McKinstry, B., Procter, R.,Majeed, A.,and Sheikh, A. (2011) The Impact of eHealth on the Quality and Safety of Health Care: A Systematic Overview. PLoS Med 8(1): e1000387. doi:10.1371/journal.pmed.1000387 Blount, M., Ebling, M.R., Eklund, J.M., James, A.G., McGregor, C., Percival, N., Smith, K.P. and Sow, D. (2010) Real-time analysis for intensive care: development and deployment of the artemis analytic system. IEEE Engineering in Medicine and Biology Magazine, 29(2), 110-8. Bradshaw, L. M., Gergar, M. E., and Holko, G. A. (2011). Collaboration in wound photography competency development: a unique approach. Advances in skin and wound care, 24(2), 85-92.


Braun, R., Catalani, C., Wimbush, J., Israelski, D. (2013). Community Health Workers and Mobile Technology: A Systematic Review of the Literature. PLoS ONE 8(6): e65772. doi:10.1371/journal.pone.0065772: Brown, D.S., Aydin, C.E., Donaldson N. (2008), Quartile dashboards: translating large data sets into performance improvement priorities. Journal for Healthcare Quality,30 (6):18-30.

Brown, D. S., Donaldson, N., Burnes Bolton, L., and Aydin, C. E. (2010). Nursing‐Sensitive Benchmarks for Hospitals to Gauge High‐Reliability Performance. Journal for Healthcare Quality, 32(6), 9-17. Brown, H., Terrence, J., Vasquez, P., Bates, D.W. and Zimlichman., E. (2014) Continuous monitoring in an inpatient medical-surgical unit: A controlled clinical trial. American Journal of Medicine, 127(3), 226-232. Buckley, K.M., Adelson, L.K., and Agazio, J.G. (2009), Reducing the risks of wound consultation: adding digital images to verbal reports, Journal of Wound, Ostomy, and Continence Nursing. 36(2):163-70. Carlisle, D. (2012). Remote access to closer care, Nursing Standard. 26(29): 20-22. Catwell, L., and Sheikh, A. (2009), Evaluating eHealth Interventions: The Need for Continuous Systemic Evaluation, 6 (8) PLoS Medicine:1- 6. Chandraharan, E. (2010). Clinical dashboards: do they actually work in practice? Three-year experience with the Maternity Dashboard. Clinical Risk, 16(5), 176-182. Chanussot‐Deprez, C., and Contreras‐Ruiz, J. (2008). Telemedicine in wound care. International Wound Journal, 5(5), 651-654. Chapuis, C., Roustit, M., Bal, G., Schwebel, C., Pansu, P., David-Tchouda, S., Foroni, L., Calop, J., Timsit, J.F., Allenet, B., Bosson, J.L. and Bedouch, P. (2010) Automated drug dispensing system reduces medication errors in an intensive care setting. Critical Care Medicine, 38(12), 2275-81. Chen, L., Chuang, L. M., Chang, C. H., Wang, C. S., Wang, I. C., Chung, Y., ... and Lai, F. (2013). Evaluating self-management behaviors of diabetic patients in a telehealthcare program: longitudinal study over 18 months. Journal of medical Internet research, 15(12): e266 Chib A, Lwin MO, Ang J, Lin H, Santoso F (2008) Midwives and mobiles: using ICTs to improve healthcare in Aceh Besar, Indonesia. Asian Journal of Communication,18:4, 348–364. Chua, S., Chua, H. and Omar, A. (2010) Drug administration errors in paediatric wards: a direct observation approach. European Journal of Pediatrics, 169(5), 603-611. Clark, K. W., Whiting, E, Rowland, J, Thompson, L E., Missenden, I, and Schellein, G (2013). Breaking the mould without breaking the system: the development and pilot of a clinical dashboard at The Prince Charles Hospital. Australian Health Review 37, 304–308. Clemensen, J., Larsen, S. B., Kirkevold, M., and Ejskjaer., N. (2008). Treatment of diabetic foot ulcers in the home: video consultations as an alternative to outpatient hospital care. International journal of telemedicine and applications, vol. 2008, Article ID 132890.


Cochran, G.L., Jones, K.J., Brockman, J., Skinner, A. and Hicks, R.W. (2007) Errors prevented by and associated with bar-code medication administration systems. Joint Commission journal on quality and patient safety / Joint Commission Resources, 33(5), 293-301, 245. Coleman, J.J., Hodson, J., Brooks, H.L. and Rosser, D. (2013) Missed medication doses in hospitalised patients: A descriptive account of quality improvement measures and time series analysis. International Journal for Quality in Health Care, 25(5), 564-572. Collins, K. J., and Draycott, T. (2015). Measuring quality of maternity care. Best Practice and Research Clinical Obstetrics and Gynaecology. Published Online: March 31, 2015, Publication stage: In Press Corrected Proof Cookson. R., Ferguson, B., Fleetcroft, R., Goddard, M., Goldblatt, P., Laudicella, M., Raine, R. (2013) HSand DR - 11/2004/39: Developing indicators of change in NHS equity performance, http://www.nets.nihr.ac.uk/projects/hsdr/11200439 Cottney, A., and Innes, J. (2015) Medication-administration errors in an urban mental health hospital: A direct observation study. International Journal of Mental Health Nursing, 24(1), 65-74. Cottney, A. (2014) Improving the safety and efficiency of nurse medication rounds through the introduction of an automated dispensing cabinet. BMJ Quality Improvement Reports, 3(1). u204237-w1843. Cottrell, S., and Davidson, V. (2013a) National audit of bedside transfusion practice. Nursing standard (Royal College of Nursing (Great Britain): 27(43), 41-48. Cottrell, S., Watson, D., Eyre, T.A., Brunskill, S.J., Dorée, C. and Murphy, M.F. (2013b) Interventions to Reduce Wrong Blood in Tube Errors in Transfusion: A Systematic Review. Transfusion Medicine Reviews, 27(4), 197-205. Cruickshank, (2010) “Healthcare without Walls: A Framework for Delivering Telehealth at Scale, from 2020 Health, November 2010 url: http://www.2020health.org/dms/2020health/downloads/reports/2020telehealth LOW.pdf Daley,K., Richardson,,J., James,I., Chambers, A., and Corbett, D. (2013) Clinical dashboard: use in older adult mental health wards, The Psychiatrist , 37, 85-88 Demaerschalk, B. M., Bobrow, B. J., Raman, R., Kiernan, T. E. J., Aguilar, M. I., Ingall, T. J., ... and Meyer, B. C. (2010). Stroke Team Remote Evaluation Using a Digital Observation Camera in Arizona The Initial Mayo Clinic Experience Trial. Stroke, 41(6), 1251-1258. Department of Health (1998) Information for Health. A National Strategy for the Modern NHS 1998-2005. DH Publications, London. Department of Health (2007) Information Security Management: NHS Code of practice, DH Publications, London Department of Health (2008a) High Quality Care for All: NHS Next Stage Review, DH Publications, London


Department of Health (2008b) Framing the Nursing and Midwifery Contribution: Driving up the Quality of Care, DH Publications, London Department of Health (2013), National Mobile Health Worker Project: Final Report, DH Publications, London Dewsbury, G. (2014),The use of patient management systems in the community, British Journal of Community Nursing 19:11, 548-550 DeYoung, J., Venderkooi, M. and Barletta, J. (2009) Effect of bar-code-assisted medication administration on medication error rates in an adult medical intensive care unit. . American Journal of Health-System Pharmacy, 66, 1110-1115. Dharmasaroja, P. A., Muengtaweepongsa, S., and Kommarkg, U. (2010). Implementation of telemedicine and stroke network in thrombolytic administration: comparison between walk-in and referred patients. Neurocritical care, 13(1), 62-66. Dhiliwal, S. R., and Salins, N. (2015). Smartphone applications in palliative homecare. Indian journal of palliative care, 21(1), 88-91 Dolan, J.G., and Veazie, P.J. (2015), Balance Sheets Versus Decision Dashboards to Support

Patient Treatment Choices: A Comparative Analysis. The Patient-Patient-Centered Outcomes Research, 1-7. Dolan, J.G., Veazie, P.J., Russ, A.J., (2013), BMC Medical Informatics and Decision Making 13 (51), Development and initial evaluation of a treatment decision dashboard. BMC medical informatics and decision making, 13(1), 51. Doran, D., Mildon, B., and Clarke, S. (2011) Toward A National Report Card in Nursing: A Knowledge Synthesis. Prepared for the Planning Committee for a Think Tank entitled “Toward a National Report Card for Nursing” Nursing Health Services Research Unit, University of Toronto.

Doran, D., Bloomberg, L. S., Reid-Haughian, C., and Cafazzo, J. (2012). A Formative and Summative Evaluation of an Electronic Health Record in Community Nursing. In NI 2012: Proceedings of the 11th International Congress on Nursing Informatics (Vol. 2012). American Medical Informatics Association. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3799176/ Dowding, D., Randell, R., Gardner, P., Fitzpatrick, G., Dykes, P., Favela, J.,and Currie, L. (2015). Dashboards for improving patient care: Review of the literature. International Journal of medical informatics, 84(2), pp 87-100. Drake, R. (2014a) Five dilemmas associated with e-rostering. Nursing Times, 110(20), 14-16. Drake, R.G. (2014b) The nurse rostering problem: from operational research to organisational reality? Journal of Advanced Nursing, 70(4), 800-810. Drake, R.G. (2014c), The ‘Robust’ roster: exploring the nurse rostering process. Journal of Advanced Nursing, 70(9), 2095-2106.


Duffin, C. (2013) Digital pen means nurses can spend more time with patients, Mental Health Practice Nov 17 (3): 9-9. Dufrene, C. (2009) Photography as an Adjunct in Pressure Ulcer Documentation, Critical Care Nursing Quarterly. Realities and Myths in Critical Care Practice. 32(2):77-80. Dwibedi, N., Sansgiry, S.S., Frost, C.P., Dasgupta, A., Jacob, S.M., Tipton, J.A. and Shippy, A.A. (2011) Effect of bar-code-assisted medication administration on nurses’ activities in an intensive care unit: A time–motion study. American Journal of Health-System Pharmacy, 68(11), 1026-1031. Effken, J. A., Brewer, B. B., Logue, M. D., Gephart, S. M., and Verran, J. A. (2011). Using Cognitive Analysis to fit decision support tools to nurse managers’ work flow, International Journal of Medical Informatics, 80, (10), 698 – 707. Engel, H., Huang, J. J., Tsao, C. K., Lin, C. Y., Chou, P. Y., Brey, E. M., and Cheng, M. H. (2011). Remote real‐time monitoring of free flaps via smartphone photography and 3G wireless internet: A prospective study evidencing diagnostic accuracy. Microsurgery, 31(8), 589-595. Ernst, E. J., Speck, P. M., & Fitzpatrick, J. J. (2012). The element of naturalness when evaluating image quality of digital photo documentation after sexual assault. Advanced Emergency Nursing Journal, 34(3), 250-258. Esterle, L., and Mathieu-Fritz, A. (2013). Teleconsultation in geriatrics: Impact on professional practice. International Journal of medical informatics, 82(8), 684-695. Fatehi, F., Armfield, N. R., Dimitrijevic, M., and Gray, L. C. (2015a). Technical aspects of clinical videoconferencing: a large scale review of the literature. Journal of telemedicine and telecare, 21(3), 160-166. Fatehi F., Gray, L.C., Russell, A.W., Paul, S. (2015b), Validity and reliability of video teleconsultation for the management of diabetes: A randomized controlled trial. Diabetes Technology & Therapeutics, (A120-A120). Felkey, B. G., and Fox, B. I. (2014). Health System Dashboard: It’s All Coming Together. Hospital pharmacy, 49(5), 485-486. FitzHenry, F., Murff, H.J., Matheny, M.E., Gentry, N., Fielstein, E.M., Brown, S.H., Reeves, R.M., Aronsky, D., Elkin, P.L., Messina, V.P. and Speroff, T. (2013) Exploring the Frontier of Electronic Health Record Surveillance: The Case of Post-Operative Complications. Medical care, 51(6), 509-516. Flannigan, C., Toland, U., and Hogan, M. (2010). Dashboards in neonatology. Clinical Audit, 2, 79-81. Florczak, B., Scheurich, A., Croghan, J., Sheridan Jr, P., Kurtz, D., McGill, W., and McClain, B. (2012). An observational study to assess an electronic point-of-care wound documentation and reporting system regarding user satisfaction and potential for improved care. Ostomy/wound management, 58(3), 46-51.


Florez-Arango JF, Iyengar MS, Dunn K, Zhang J (2011) Performance factors of mobile rich media job aids for community health workers. Journal of the American Medical Informatics Association,18 (2): 131–7. Foley, B. and Cox, A. (2013). Work organisation and innovation - Case study: Nottingham University Hospitals NHS Trust, UK. Dublin: European Foundation for the Improvement of Living and Working Conditions. http://digitalcommons.ilr.cornell.edu/intl/248/?utm_source=digitalcommons.ilr.cornell.edu%2Fintl%2F248andutm_medium=PDFandutm_campaign=PDFCoverPages Foltynski, P., Wojcicki, J.M., Ladyzynski, P., Migalska-Musial, K., Rosinski, G., Krzymien J., Waldemar Karnafel, W.(2010), Monitoring of Diabetic Foot Syndrome Treatment: Some New Perspectives, 35, (2):176–182, Foulkes, M. (2011) Nursing metrics: measuring quality in patient care. Nursing Standard, 25, 42, 40-45. Francis, R. (2013) Report of the Mid Staffordshire NHS Foundation Trust public inquiry. HC 898. London. Franklin, B.D., O'Grady, K., Donyai, P., Jacklin, A. and Barber, N. (2007) The impact of a closed-loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: A before-and-after study. Quality and Safety in Health Care, 16(4), 279-284. Frazier, J. A., and Williams, B. (2012). Successful implementation and evolution of unit-based nursing dashboards. Nurse Leader, 10(4), 44-46. General Medical Council (GMC 2011) New guidance on Making and using visual and audio recordings of patient, http://www.gmc-uk.org/guidance/9457.asp Glass, C.A. and Knight, R.A. (2010),The nurse rostering problem: A critical appraisal of the problem structure. European Journal of Operational Research, 202(2), 379-389. Goodell, P.P., Uhl, L., Mohammed, M. and Powers, A.A. (2010) Risk of Hemolytic Transfusion Reactions Following Emergency-Release RBC Transfusion. American Journal of Clinical Pathology, 134(2), 202-206. Greenhalgh, T. Stramer, K. Bratan, T, Byrne, E. Mohammad, Y, Russell, J, (2008),Introduction of shared electronic records: multi-site case study using diffusion of innovation theory.BMJ (Clinical research ed.) 337 (2008): a1786 Greenhalgh,T., Stramer, K., Bratan, T., Byrne, E., Russell, J., and Potts, H. (2010), Adoption and non-adoption of a shared electronic summary record in England: a mixed-method case study, BMJ (Clinical research ed.) 340 (2010), c3111. Griffiths P, Jones S, Maben J, Murrells T. (2008) State of the art metrics for nursing: A rapid appraisal. National Nursing Research Unit, King’s College London Guha, S. Hoo,W. P. and Bottomley, C.(2013) Introducing an acute gynaecology dashboard as a new clinical governance tool, Clinical Governance: An International Journal, 18:3 , 228-237


Hardmeier, A., Tsourounis, C., Moore, M., Abbott, W.E. and Guglielmo, B.J. (2014) Pediatric Medication Administration Errors and Workflow Following Implementation of a Bar Code Medication Administration System. Journal for Healthcare Quality, 36(4), 54-63. Harting, M. T; DeWees, J. M.; Vela, K. M; Khirallah, R. T. (2015), Medical photography: current technology, evolving issues and legal perspectives. International Journal of Clinical Practice. 69(4):401-409, Hassink, J.J.M., Jansen, M.M.P.M. and Helmons, P.J. (2012) Effects of bar code-assisted medication administration (BCMA) on frequency, type and severity of medication administration errors: a review of the literature. European Journal of Hospital Pharmacy: Science and Practice, 19(5), 489-494. Häyrinen, K, Saranto K, Nykanen P., (2008) Definition, structure, content, use and impacts of EHRs: a review of the research literature. International Journal of Medical Informatics; 77: 291-304. Health and Social Care Information Center HSCIC (2013) Urgent Care Clinical Dashboards ClearWaterWatcher, http://systems.hscic.gov.uk/qipp/library/cwwatcherfs.pdf Hedgecock, L and Lewis, J (2012) Dashboard data, The Commissioning Review http://www.thecommissioningreview.com/article/dashboard-data Helmons, P., Wargel, L. and Daniels, C. (2009) Effect of bar-code-assisted medication administration on medication errors and accuracy in multiple patient care areas. American Journal of Health-System Pharmacy, 66, 1202-1210. Hockley, T. and Boyle, S. (2014) NHS staffing: not just a number. London School of Economics, London. Holmstock, L., Verellen, E., Franklin, B.D. and McLeod, M. (2014) Developing a tool for evaluating the impact of an electronic prescribing and administration system on medication safety in an English NHS hospital trust. International Journal of Pharmacy Practice, 22, 14-15. Hop, M. J., Moues, C. M., Bogomolova, K., Nieuwenhuis, M. K., Oen, I. M. M. H., Middelkoop, E.and Baar, M. V. (2014). Photographic assessment of burn size and depth: reliability and validity. Journal of wound care, 23(3), 144-152. http://informahealthcare.com/doi/abs/10.1586/erd.10.35 Hug, C.W., Clifford, G.D. and Reisner, A.T. (2011) Clinician blood pressure documentation of stable intensive care patients: an intelligent archiving agent has a higher association with future hypotension. Critical Care Medicine, 39(5), 1006-14. Hunter, J., Freer, Y., Gatt, A., Reiter, E., Sripada, S. and Sykes, C. (2012) Automatic generation of natural language nursing shift summaries in neonatal intensive care: BT-Nurse. Artificial Intelligence in Medicine, 56(3), 157-72. Jamal, A., McKenzie, K., and Clark, M. (2009),The impact of health information technology on the quality of medical and health care: a systematic review. Health Information Management Journal, 38 (3), 26-37.


Janardhanan, L., Leow, Y. H., Chio, M. T., Kim, Y., and Soh, C. B. (2008). Experience with the implementation of a web-based teledermatology system in a nursing home in Singapore. Journal of telemedicine and telecare, 14(8), 404-409. Jeffs, L., Lo, J., Beswick, S., Chuun, A., Lai, Y., Campbell, H., and Ferris, E. (2014). Enablers and Barriers to Implementing Unit-Specific Nursing Performance Dashboards. Journal of nursing care quality, 29(3), 200-203. Jesada, E., Warren, J., Goodman, D., Iliuta, R., Thurkauf, G., McLaughlin, M., Johnson, J., and Strassner, L. (2013), 'Staging and Defining Characteristics of Pressure Ulcers Using Photographs by Staff Nurses in Acute Care Settings', Journal of Wound, Ostomy and Continence Nursing, 40(2), 150-156. Johansson, A. M., Lindberg, I., and Söderberg, S. (2014). Patients’ experiences with specialist care via video consultation in primary healthcare in rural areas. International journal of telemedicine and applications, 2014, 9. 143824. Jones, B.G. (2013) Developing a vital sign alert system. American Journal of Nursing, 113(8), 36-44. Jones, S., Mullally, M., Ingleby, S., Buist, M., Bailey, M. and Eddleston, J.M. (2011) Bedside electronic capture of clinical observations and automated clinical alerts to improve compliance with an Early Warning Score protocol. Critical care and resuscitation: Journal of the Australasian Academy of Critical Care Medicine, 13(2), 83-88. Kaliyadan, F., Manoj, J., Venkitakrishnan, S., and Dharmaratnam, A. D. (2008). Basic digital photography in dermatology. Indian Journal of Dermatology, Venereology, and Leprology, 74(5), 532. Keen, J., Long, A., McGinnis, E., Randell, R., Willis, S., Ginn. C., and Whittle, J. (2014) HSand DR - 13/07/68: Information Systems: Monitoring and Managing from Ward to Board http://www.nets.nihr.ac.uk/projects/hsdr/130768 Keers, R.N., Williams, S.D., Cooke, J. and Ashcroft, D.M. (2013) Prevalence and Nature of Medication Administration Errors in Health Care Settings: A Systematic Review of Direct Observational Evidence. Annals of Pharmacotherapy, 47(2), 237-256. Keogh, B., (2013), Review into the quality of treatment and care provided by 14 hospital Trusts in England: overview report. NHS England, London. Kidd R. (2011), Benefits of mobile working for community nurse prescribers. Nursing Standard, 25(42):56-60.


King, B. (2014) Influencing dressing choice and supporting wound management using remote ‘tele-wound care’ British Journal of Community Nursing. Suppl: S24-31. Kochendorfer, K.M., Morris, L.E., Kruse, R.L., Ge, B. and Mehr, D.R. (2010) Attending and resident physician perceptions of an EMR-generated rounding report for adult inpatient services. Family Medicine, 42(5), 343-349. Kohn L, C.J., Donaldson M (2000) To Err Is Human: Building a Safer Health System, National Academy Press, Washington D.C. Koivunen, M., Niemi, A., and Hupli, M. (2015). The use of electronic devices for communication with colleagues and other healthcare professionals–nursing professionals’ perspectives. Journal of Advanced Nursing. 71 (3) 620–631, Kollef, M.H., Chen, Y., Heard, K., LaRossa, G.N., Lu, C., Martin, N.R., Martin, N., Micek, S.T. and Bailey, T. (2014) A randomized trial of real-time automated clinical deterioration alerts sent to a rapid response team. Journal of Hospital Medicine (Online), 9(7), 424-9. Koppel, R., Wetterneck, T., Telles, J.L. and Karsh, B.T. (2008) Workarounds to Barcode Medication Administration Systems: Their Occurrences, Causes, and Threats to Patient Safety. Journal of the American Medical Informatics Association, 15(4), 408-423. Lakdawala, N; Bercovitch, L; Grant-Kels, J (2013), A picture is worth a thousand words: Ethical dilemmas presented by storing digital photographs in electronic health records, Journal of the American Academy of Dermatology. 69(3):473-475 Latifi, R (2009), Initial experiences and outcomes of telepresence in the management of trauma and emergency surgical patients. The American Journal of Surgery,198 (6):905 -10 Lau, F., Kuziemsky, C., Price, M., and Gardner, J. (2010). A review on systematic reviews of health information system studies. Journal of the American Medical Informatics Association, 17(6), 637-645. Lawler, E.K., Hedge, A., and Pavlovic-Veselinovic, S. (2011) Cognitive ergonomics, socio-technical systems, and the impact of healthcare information technologies. International Journal of Industrial Ergonomics, 41(4), 336-344. Leape, L.L., Bates, D.W., Cullen, D.J. and et al. (1995) Systems analysis of adverse drug events. JAMA, 274(1), 35-43. Lee S, Chib A, Kim JN (2011) Midwives’ cell phone use and health knowledge in rural communities. Journal of Health Commun 16(9): 1006–23. Leung, A.A., Denham, C.R., Gandhi, T.K., Bane, A., Churchill, W.W., Bates, D.W. and Poon, E.G. (2014) A Safe Practice Standard for Barcode Technology. Journal of Patient Safety, Publish Ahead of Print. Lilford RJ, Foster J, Pringle M (2009) Evaluating eHealth: How to Make Evaluation More Methodologically Robust. PLoS Med 6(11): e1000186.


Maben, J., Morrow, E., Ball, J., Robert, G., and Griffiths, P. High Quality Care, (2012), Metrics for Nursing. National Nursing Research Unit, King’s College London, MacBride, S; Wells, M; Hornsby, C; Sharp, L; Finnila, K; Downie, L (2008) A Case Study to Evaluate a New Soft Silicone Dressing, Mepilex Lite, for Patients With Radiation Skin Reactions, Cancer Nursing. 31(1):E8-E14, MacLure K, Stewart D, Strath A. (2014) A systematic review of medical and non-medical practitioners’ views of the impact of ehealth on shared care, European Journal of Hospital Pharmacy 21: 54–62. Manias, E., Williams, A. and Liew, D. (2012) Interventions to reduce medication errors in adult intensive care: a systematic review. British journal of clinical pharmacology, 74(3), 411-423. Marini, S, Hasman, A, (2009) Impact of BCMA on medication errors and patient safety: a summary. Studies in health technology and informatics, 146: 439-444 Mayrovitz, H. N., and Lisa B. SoontupeL. B (2009). "Wound areas by computerized planimetry of digital images: accuracy and reliability." Advances in skin and wound care 22.5: 222-229. McCaughey, D., Erwin, C. O., and DelliFraine, J. L. (2015). Improving Capacity Management in the Emergency Department: A Review of the Literature, 2000-2012. Journal of Healthcare Management, 60(1), 93-115. McGregor, C., Catley, C., James, A. and Padbury, J. (2011) Next generation neonatal health informatics with Artemis. Studies in health technology and informatics, 169, 115-119. McHugh, M., VanDyke, K., McClelland, M., and Moss, D. (2011). Improving patient flow and reducing emergency department crowding: A guide for hospitals. http://www.ahrq.gov/research/findings/final-reports/ptflow/ptflowguide.pdf McLaughlin N, Afsar-Manesh N, Ragland V, Buxey F, Martin NA. (2014) Tracking and sustaining improvement initiatives: leveraging quality dashboards to lead change in a neurosurgical department, Neurosurgery;74 (3):235-43; Medical Research Council (2008) Developing and evaluating complex interventions: new guidance. Available:http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1000186 Melville, J; Lukefahr, J L., Cornell, J; Kellogg, N, D. Lancaster, J L. (2013), The Effect of Image Quality on the Assessment of Child Abuse Photographs, Pediatric Emergency Care. 29(5):607-611 Miller, C., Karimi, L., Donohue, L., and Kapp, S. (2012). Interrater and intrarater reliability of silhouette wound imaging device. Advances in skin and wound care, 25(11), 513-518. Miller, E. A. (2011). The continuing need to investigate the nature and content of teleconsultation communication using interaction analysis techniques. Journal of Telemedicine and Telecare, 17(2), 55-64.


Miller, K., Akers, C., Magrin, G., Whitehead, S. and Davis, A.K. (2013) Piloting the use of 2D barcode and patient safety-software in an Australian tertiary hospital setting. Vox Sanguinis, 105(2), 159-166. Mohammed, M., Hayton, R., Clements, G., Smith, G. and Prytherch, D. (2009) Improving accuracy and efficiency of early warning scores in acute care. British Journal of Nursing, 18(1), 18-24. Morriss Jr, F.H., Abramowitz, P.W., Nelson, S.P., Milavetz, G., Michael, S.L., Gordon, S.N., Pendergast, J.F. and Cook, E.F. (2009) Effectiveness of a Barcode Medication Administration System in Reducing Preventable Adverse Drug Events in a Neonatal Intensive Care Unit: A Prospective Cohort Study. The Journal of Pediatrics, 154(3), 363-368.e1. Morrow, E., Robert, G., Maben, J. and Griffiths, P. (2012) Implementing largescale quality improvement – lessons from The Productive Ward: Releasing time to care TM. International Journal of Health Care Quality Assurance, 25(4) 237-253. Murff, H.J., FitzHenry, F., Matheny, M.E. and et al.,. (2011) Automated identification of postoperative complications within an electronic medical record using natural language processing. JAMA, 306(8), 848-855. Murphy, M.F. (2012a) End-to-end electronic transfusion management in hospital practice. Vox Sanguinis, 103, 38. Murphy (2012b) Transfusion errors: How they occur and novel systems to improve the safety of blood administration. Transfusion Alternatives in Transfusion Medicine, 12(2):1-9 Murphy, M, Fraser, E, Miles, D, Noel, S, Staves, J, Cripps, B, Kay, J, (2012c) How do we monitor hospital transfusion practice using an end-to-end electronic transfusion management system? Transfusion, 52(12): 2505-2512 Murphy, M, Stanworth, S, Yazer, M, (2011) Transfusion practice and safety: Current status and possibilities for improvement. Vox Sanguinis, 100(1): 46-59 Myers, M.G. and Godwin, M. (2012) Automated office blood pressure. Canadian Journal of Cardiology, 28(3), 341-6. Myers, M.G. (2014) Eliminating the human factor in office blood pressure measurement. Journal of Clinical Hypertension, 16(2), 83-6. Nagase, T., Sanada, H., Takehara, K., Oe, M., Iizaka, S., Ohashi, Y., and Nakagami, G. (2011). Variations of plantar thermographic patterns in normal controls and non-ulcer diabetic patients: novel classification using angiosome concept. Journal of plastic, reconstructive and aesthetic surgery, 64(7), 860-866. National Audit Office / Department of Health (2006) Good practice in managing the use of temporary nursing staff. London (online resources at http://www.nao.org.uk/wp-content/uploads/2006/07/05061176_Good_Practice.pdf last accessed 16/06/2015) National Patient Safety Agency (2007) Recognising and responding appropriately to early signs of deterioration in hospitalised patients. Report. Published at http://www.npsa.nhs.uk/corporate/news/deterioration-in-hospitalised-patients/?locale=en


Navarro, D A; Singer, P; Leibovitz, E; Krause, I; Boaz, M, (2014), Inter- and intra-rater reliability of digitally captured images of plate waste, Nutrition and Dietetics. 71(4):284-288. Nelson, M.R., Quinn, S., Bowers-Ingram, L., Nelson, J.M. and Winzenberg, T.M. (2009) Cluster-randomized controlled trial of oscillometric vs. manual sphygmomanometer for blood pressure management in primary care (CRAB). American Journal of Hypertension, 22(6), 598-603. Nelson, S, Lu, C, Leng, J, Cannon, G, He, T, Zeng, Q, Halwani, A, Sauer, B (2015) The use of natural language processing of infusion notes to identify outpatient infusions, Pharmacoepidemiology and Drug Safety. 24(1): 86-92

NHS Connecting for Health (2012), Pioneering Dashboards are leading innovation, http://webarchive.nationalarchives.gov.uk/20130502102046/http://www.connectingforhealth.nhs.uk/engagement/clinical/publications/cc17/dashboards NHS Employers (2007) Electronic rostering: helping to improve workforce productivity A guide to implementing electronic rostering in your workplace. London (online report avauilable at http://www.nhsemployers.org/~/media/Employers/Publications/E-Rostering%20guidance.ashx last accessed 16/06/2015) NHS England (2014) The NHS Five Year Forward View, NHS England, London NHS England 2015, Specialised services quality dashboards http://www.england.nhs.uk/commissioning/spec-services/npc-crg/spec-dashboards/ NICE (2014) NICE Pressure ulcers: prevention and management of pressure ulcers [CG179] Published date: April 2014 http://www.nice.org.uk/guidance/cg179 NICE (2007), Acutely ill patients in hospital: Recognition of and response to acute illness in adults in hospital [CG50] published date: July 2007 http://www.nice.org.uk/guidance/CG50 Nijp, H.H., Beckers, D.G., Geurts, S.A., Tucker, P. and Kompier, M.A. (2012) Systematic review on the association between employee worktime control and work―non-work balance, health and well-being, and job-related outcomes. Scandinavian journal of work, environment and health, 38(4), 299-313. Nursing and Midwifery Council NMC (2015), The Code, NMC, London Palma, J.P., Sharek, P.J. and Longhurst, C.A. (2011) Impact of electronic medical record integration of a handoff tool on sign-out in a newborn intensive care unit. Journal of Perinatology, 31(5), 311-317. Paré, G., Sicotte, C., Moreault, M. P., Poba-Nzaou, P., Nahas, G.,and Templier, M. (2011). Mobile computing and the quality of home care nursing practice. Journal of Telemedicine and Telecare, 17(6), 313-317. Park, E.S., Peccoud, M.R., Wicks, K.A., Halldorson, J.B., Carithers, R.L., Jr., Reyes, J.D. and Perkins, J.D. (2010) Use of an automated clinical management system improves outpatient immunosuppressive care following liver transplantation. Journal of the American Medical Informatics Association, 17(4), 396-402.


Pedragosa, À., Alvarez-Sabin, J., Molina, C. A., Sanclemente, C., Martín, M. C., Alonso, F., and Ribo, M. (2009). Impact of a telemedicine system on acute stroke care in a community hospital. Journal of Telemedicine and Telecare, 15(5), 260-263. Pockras, P (2013) Reconcling Pyxis overrides after the implementation of EPIC. Online Journal of Nursing Informatics, 17(3) (online at http://ojni.org/issues/?p=2868 last accessed 16/06/2015. Poon, E.G., Keohane, C.A., Yoon, C.S., Ditmore, M., Bane, A., Levtzion-Korach, O., Moniz, T., Rothschild, J.M., Kachalia, A.B., Hayes, J., Churchill, W.W., Lipsitz, S., Whittemore, A.D., Bates, D.W. and Gandhi, T.K. (2010) Effect of Bar-Code Technology on the Safety of Medication Administration. New England Journal of Medicine, 362(18), 1698-1707. Pope, R. M., Malin, R., Buchan, M., Pearson, C., and Wagner, A. (2014). Evidence of sustained benefit: Use of teleconsultation to support people living in residential care settings. International Journal of Integrated Care, 14(8). Inter Digital Health Suppl; URN:NBN:NL:UI:10-1-116540. Pope, R., Muchan, M., Malin, R., Binks, R., and Wagner, A. (2013). The results of 24 hr teleconsultation with people at home and in residential care settings. International Journal of Integrated Care, 13(7). T&T Conf Suppl; URN: URN:NBN:NL:UI:10-1-115664 Prytherch, D.R., Smith Gb Fau - Schmidt, P., Schmidt P Fau - Featherstone, P.I., Featherstone Pi Fau - Stewart, K., Stewart K Fau - Knight, D., Knight D Fau - Higgins, B. and Higgins, B. (2006) Calculating early warning scores--a classroom comparison of pen and paper and hand-held computer methods. Resuscitation, 70(2), 173-8. Raban, M.Z. and Westbrook, J.I. (2014) Are interventions to reduce interruptions and errors during medication administration effective?: a systematic review. BMJ Quality and Safety, 23(5), 414-421. Rack, L.L., Dudjak, L.A. and Wolf, G.A. (2012) Study of Nurse Workarounds in a Hospital Using Bar Code Medication Administration System. Journal of Nursing Care Quality, 27(3), 232-239. Randmaa, M. Mårtensson, G., Swenne, C. L., & Engström, M. (2014) SBAR improves communication and safety climate and decreases incident reports due to communication errors in an anaesthetic clinic: a prospective intervention study. BMJ Open 4.1: e004268. Rantz, M., Hicks,J.,Petroski,L Madsen, G .F., Alexander, R. W., Galambos, G., Conn, C. Scott-Cawiezell, V. Zwygart-Stauffacher,J. and Greenwald, M (2010), Cost, staffing and quality impact of bedside electronic medical record (EMR) in nursing homes. Journal of the American Medical Directors Association, 11, (7) 485-493. RCN (2011) Nursing dashboards – measuring quality. Royal College of Nursing: London RCN (2012) RCN guidance Nursing staff using personal mobile phones for work purposes, RCN, London Redwood S Ngwenya NB, Hodson J Ferner RE .Coleman JJ (2013), Effects of a computerized feedback intervention on safety performance by junior doctors: results from a randomized mixed method study, BMC Medical Informatics and Decision Making 13:63,


Rennert, R., Golinko, M., Kaplan, D., Flattau, A., and Brem, H. (2009). Standardization of wound photography using the wound electronic medical record. Advances in skin and wound care, 22(1), 32-38. Rice, P. (2011), Teleconsultation for Healthcare Services: A workbook for implementing new service models, Health Innovation and Education Cluster (HIEC) Yorkshire and the Humber, http://yhhiec.org.uk/wp-content/uploads/2011/10/11092020_tele_consultation_workbk.pdf. Roback, K., (2010) An overview of temperature monitoring devices for early detection of diabetic foot disorders, Expert Review of Medical Devices, 7, (5):711-718. Roback, K., Johansson, M. and Starkhammar, A. (2009). Feasibility of a thermographic method for early detection of foot disorders in diabetes. Diabetes technology and therapeutics, 11(10), 663-667. Rodriguez-Gonzalez, C.G., Herranz-Alonso, A., Martin-Barbero, M.L., Duran-Garcia, E., Durango-Limarquez, M.I., Hernandez-Sampelayo, P. and Sanjurjo-Saez, M. (2012) Prevalence of medication administration errors in two medical units with automated prescription and dispensing. Journal of the American Medical Informatics Association, 19(1), 72-8. Rogers, L.C., Bevilacqua, N.J., Armstrong, D.G and Andros, G., (2010), Digital planimetry results in more accurate wound measurements: a comparison to standard ruler measurements. Journal of Diabetes Science and Technology, 4(4), 799-802. Romero-Brufau, S., Huddleston, J.M., Naessens, J.M., Johnson, M.G., Hickman, J., Morlan, B.W., Jensen, J.B., Caples, S.M., Elmer, J.L., Schmidt, J.A., Morgenthaler, T.I. and Santrach, P.J. (2014) Widely used track and trigger scores: Are they ready for automation in practice? Resuscitation, 85(4), 549-552. Rönnberg, E. and Larsson, T. (2010) Automating the self-scheduling process of nurses in Swedish healthcare: a pilot study. Health Care Management Science, 13(1), 35-53. Roots A, Bhalia, A and Bims J.(2011) “Telemedicine for Stroke: A Systematic Review”, Journal of Neuroscience Nursing, 7:2, 481-489. Russell, M., Hogg, M., Leach, S., Penman, M., and Friel, S. (2014). Developing a general ward nursing dashboard. Nursing Standard, 29(15), 43-49. Ryan, S., O’Riordan, J.M., Tierney, S., Conlon, K.C. and Ridgway, P.F. (2011) Impact of a new electronic handover system in surgery. International Journal of Surgery, 9(3), 217-220. Safdari, R., Ghazisaeedi, M., Mirzaee, M., Farzi, J., Goodini, A. (2014), Development of balanced key performance indicators for emergency departments strategic dashboards following analytic hierarchical process, Health Care Management, 33, (4): 328-34. Salisbury, C., Campbell, J., McKinstry, B., Ziebland, S., Atherton, H., Gibson, A.(2014), The potential of alternatives to face to face consultation in general practice, and the impact on different patient groups publication data August 2017, HSand DR - 13/59/08, http://www.nets.nihr.ac.uk/projects/hsdr/135908


Saleh, S., Larsen, J. P., Bergsåker-Aspøy, J., and Grundt, H. (2014). Re-admissions to hospital and patient satisfaction among patients with chronic obstructive pulmonary disease after telemedicine video consultation - a retrospective pilot study. Multidisciplinary Respiratory Medicine, 9(1), 6. Sawyer, A.M., Deal, E.N., Labelle, A.J., Witt, C., Thiel, S.W., Heard, K., Reichley, R.M., Micek, S.T. and Kollef, M.H. (2011) Implementation of a real-time computerized sepsis alert in nonintensive care unit patients. Critical Care Medicine, 39(3), 469-73. Schmidt, P.E., Meredith, P., Prytherch, D.R., Watson, D., Watson, V., Killen, R.M., Greengross, P., Mohammed, M.A. and Smith, G.B. (2014) Impact of introducing an electronic physiological surveillance system on hospital mortality. BMJ Quality and Safety, 24(1), 10-20. Sellen, K.M., Jovanovic, A., Perrier, L. and Chignell, M. (2015) Systematic review of electronic remote blood issue. Vox Sanguinis, 108 (4) published online: 31 Mar 2015 Issue Shaw. A., Jacobs, B., Stockwell D., Futterman C. and Spaeder, M., (2013) Effects of a Unit-Wide dashboard on compliance with patient safety measures, Critical Care Medicine. 41(12 SUPPL. 1):A160-A161. Signal, M., Pretty, C.G., Chase, J.G., Le Compte, A. and Shaw, G.M. (2010) Continuous glucose monitors and the burden of tight glycemic control in critical care: can they cure the time cost? Journal of Diabetes Science and Technology, 4(3), 625-35. Simms RA , Ping H, Yelland A Beringer AJ, Fox R and Draycott TJ (2013) Development of maternity dashboards across a UK health region; current practice, continuing problems, European Journal of Obstetrics, Gynecology, and Reproductive Biology 170(1):119-24, Smeulers, M., Onderwater, A.T., Zwieten, M.C.B. and Vermeulen, H. (2014) Nurses' experiences and perspectives on medication safety practices: an explorative qualitative study. Journal of Nursing Management, 22(3), 276-285. Smith, L.B., Banner, L., Lozano, D., Olney, C.M. and Friedman, B. (2009) Connected care: reducing errors through automated vital signs data upload. CIN: Computers, Informatics, Nursing, 27(5), 318-23. Snyder, S.R., Favoretto, A.M., Derzon, J.H., Christenson, R.H., Kahn, S.E., Shaw, C.S., Baetz, R.A., Mass, D., Fantz, C.R., Raab, S.S., Tanasijevic, M.J. and Liebow, E.B. (2012) Effectiveness of barcoding for reducing patient specimen and laboratory testing identification errors: A Laboratory Medicine Best Practices systematic review and meta-analysis. Clinical Biochemistry, 45(13–14), 988-998. Song, L.P.R.N., Park, B.P. and Oh, K.M.I.P.R.N. (2015) Analysis of the Technology Acceptance Model in Examining Hospital Nurses' Behavioral Intentions Toward the Use of Bar Code Medication Administration. CIN: Computers, Informatics, Nursing, 33(4), 157-165. Sorknæs AD, Madsen H, Hallas J, Jest P and Hansen-Nord M. (2011) Nurse tele-consultations with discharged COPD patients reduce early readmissions – an interventional study. The Clinical Respiratory Journal, 5: 26–34.Denmark


Sorknaes, A. D., Bech, M., Madsen, H., Titlestad, I. L., Hounsgaard, L., Hansen-Nord, M.,and Østergaard, B. (2013). The effect of real-time teleconsultations between hospital-based nurses and patients with severe COPD discharged after an exacerbation. Journal of telemedicine and telecare, 19(8), 466-474. Spektor, M., Ramsay, C., Sommer, B., Likourezos, A., and Davidson, S. J. (2008). 144: Electronic Dashboard and a Multidisciplinary Hospital-Wide Team Decrease Patient Throughput Intervals and Reduce Number of Admitted Patients Held in the Emergency Department. Annals of Emergency Medicine, 52(4), S87. Starmer, J., and Giuse, D. (2008). A real-time ventilator management dashboard: toward hardwiring compliance with evidence-based guidelines. In AMIA Annual Symposium Proceedings (Vol. 2008, p. 702). American Medical Informatics Association. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2656080/ Taenzer, A.H., Pyke, J., Herrick, M.D., Dodds, T.M. and McGrath, S.P. (2014) A comparison of oxygen saturation data in inpatients with low oxygen saturation using automated continuous monitoring and intermittent manual data charting. Anesthesia and Analgesia, 118(2), 326-31. Talbot , A (2012),Supporting urgent care using clinical dashboards: transforming data into knowledge, http://www.healthcareconferencesuk.co.uk/news/clinical-dashboards-an-update-on-the-national-urgent-care-dashboard Talmon, J., Ammenwerth, E., Brender, J., de Keizer, N., Nykänen, P., and Rigby, M. (2009). STARE-HI—Statement on reporting of evaluation studies in Health Informatics. International Journal of Medical Informatics, 78(1), 1-9. Tepas, J.J., 3rd, Rimar, J.M., Hsiao, A.L. and Nussbaum, M.S. (2013) Automated analysis of electronic medical record data reflects the pathophysiology of operative complications. Surgery, 154(4), 918-24. Terris, D. D., Woo, C., Jarczok, M. N., and Ho, C. H. (2010). Comparison of in-person and digital photograph assessment of stage III and IV pressure ulcers among veterans with spinal cord injuries. Journal of Rehabilitation, Research and Development, 48(3), 215-224. Thompson, N; Gordey, L; Bowles, H; Parslow, N; Houghton, P (2013) Reliability and Validity of the Revised Photographic Wound Assessment Tool on Digital Images Taken of Various Types of Chronic Wounds. Advances in Skin and Wound Care. 26(8):360-373, Toussi, M., Berthelot, H. and Simon, G. (2014) Study of medication errors which may occur in the process of electronic prescription and dispensing using failure mode and effects analysis (FMEA). Pharmacoepidemiology and Drug Safety, 23, 236-237. University of Kent (2010)‘Unit Costs of Health and Social Care 2010’ published by the Personal Social Services Research Unit, University of Kent Van der Linden, C.M.J., Jansen, P.A.F., Grouls, R.J.E., van Marum, R.J., Verberne, M.A.J.W., Aussems, L.M.A., Egberts, T.C.G. and Korsten, E.H.M. (2013) Systems that prevent unwanted represcription of drugs withdrawn because of adverse drug events: A systematic review. Therapeutic Advances in Drug Safety, 4(2), 73-90.


Van Gurp, J., van Selm, M., van Leeuwen, E., and Hasselaar, J. (2013). Transmural palliative care by means of teleconsultation: a window of opportunities and new restrictions. BMC medical ethics, 14(1), 12. Varkey, P; Tan, N; Girotto, R; Tang, W; Liu, Y; Chen, H, (2008)A Picture Speaks a Thousand Words: The Use of Digital Photography and the Internet as a Cost-Effective Tool in Monitoring Free Flaps. Annals of Plastic Surgery. 60(1):45-48, Vergales, B.D., Paget-Brown, A.O., Lee, H., Guin, L.E., Smoot, T.J., Rusin, C.G., Clark, M.T., Delos, J.B., Fairchild, K.D., Lake, D.E., Moorman, R. and Kattwinkel, J. (2014) Accurate automated apnea analysis in preterm infants. American Journal of Perinatology, 31(2), 157-62. Vicente, M. S., Armengol, M. V., Rodríguez, M. A., Crespo, A. M., and Buil, L. O. (2012). Effect of bar-code technology on the safety of cytostatic drugs administration. European Journal of Hospital Pharmacy: Science and Practice, 19 (2), 142. Vowden, K and Vowden, P (2013), A pilot study on the potential of remote support to enhance wound care for nursing-home patients, Journal of Wound Care, 22, (9), 481-488 Vuononvirta, T., Timonen, M., Keinänen-Kiukaanniemi, S., Timonen, O., Ylitalo, K., Kanste, O., and Taanila, A. (2009). The attitudes of multiprofessional teams to telehealth adoption in northern Finland health centers. Journal of Telemedicine and Telecare, 15(6), 290-296. Wachter, R (2015) The digital doctor: Hope, hype and harm at the dawn of medicine’s computer age. MaGraw-Hill Education, New York. Wade, V. A., Karnon, J., Elshaug, A. G., and Hiller, J. E. (2010). A systematic review of economic analyses of telehealth services using real time video communication. BMC health services research, 10(1), 233. Wakefield, B. J., Bylund, C. L., Holman, J. E., Ray, A., Scherubel, M., Kienzle, M. G., and Rosenthal, G. E. (2008). Nurse and patient communication profiles in a home-based telehealth intervention for heart failure management. Patient education and counseling, 71(2), 285-292. Weiner, J.P (2012) Doctor patient communication in the e-healh era, Israel Journal of Health Policy Research, 1:33 Wendelken, M.E., Berg, W.T., Lichtenstein, P., Markowitz, L., Comfort, C., Alvarez, O.M., (2011), Wounds measured from digital photographs using photo digital planimetry software: validation and rater reliability.Wounds. 2011 Sep; 23(9):267-75. Wohlauer, M.V., Rove, K.O., Pshak, T.J., Raeburn, C.D., Moore, E.E., Chenoweth, C., Srivastava, A., Pell, J., Meacham, R.B. and Nehler, M.R. (2012) The Computerized Rounding Report: Implementation of a Model System to Support Transitions of Care1. Journal of Surgical Research, 172(1), 11-17. Wong, T.C., Xu, M. and Chin, K.S. (2014) A two-stage heuristic approach for nurse scheduling problem: A case study in an emergency department. Computers and Operations Research, 51, 99-110.


Wood, J. and Finkelstein, J. (2013) Comparison of automated and manual vital sign collection at hospital wards. Studies in Health Technology and Informatics, 190, 48-50. Woodford, M.R., Mackenzie, C.F., DuBose, J., Hu, P., Kufera, J., Hu, E.Z., Dutton, R.P. and Scalea, T.M. (2012) Continuously recorded oxygen saturation and heart rate during prehospital transport outperform initial measurement in prediction of mortality after trauma. The Journal of Trauma and Acute Care Surgery, 72(4), 1006-11. Workman, M. (2008). An experimental assessment of semantic apprehension of graphical linguistics. Computers in Human Behavior, 24(6), 2578-2596. Young, J., Slebodnik, M. and Sands, L. (2010) Bar Code Technology and Medication Administration Error. Journal of Patient Safety, 6(2), 115-120. Zaydfudim, V., Dossett, L. A., Starmer, J. M., Arbogast, P. G., Feurer, I. D., Ray, W. A., May, A,K and Pinson, C. W. (2009). Implementation of a real-time compliance dashboard to help reduce SICU ventilator-associated pneumonia with the ventilator bundle. Archives of Surgery, 144(7), 656-662. Zhang H; Cocosila M; Archer N. (2010),Factors of adoption of mobile information technology by homecare nurses: a technology acceptance model 2 approach. CIN: Computers, Informatics, Nursing. 28(1):49-56.