Measuring Organizational Readiness for the Implementation of Health Information Technology
I-631 Clinical Information Systems Final Group Project
As certain high profile Health Information Technology implementations now approach tens to one-hundred million dollars, the need to understand determinants of success is greater than ever. Institutions simply cannot afford to be unprepared. In this paper, we examine currently existing frameworks for measuring Organizational Readiness for Change, understanding that this is a key component of successful Health IT implementation.
Christopher Kiess, Ronelle Brumleve, Kevin Chang, 1/11/2009
Introduction
In the 1940s, scientists were just discovering the utility of the first widely used antibiotic, penicillin. A
practicing clinician in that era needed to know only a handful of medications, including their usage, doses,
side-effect profiles, and potential interactions. In present-day medical practice, primary care physicians
are responsible for hundreds of medications in the same capacity, with the number of new medications
and treatment algorithms seeming to rise exponentially. As the information
needs of medical practice, and therefore of health care delivery, continue to grow, our reliance on
dependable systems to store and organize this information has become a certainty. These systems,
which we refer to as health information technologies (HIT), are unfortunately costly to develop,
implement, and maintain. The primary costs for an organization using these systems come during the
implementation phase. With certain high-profile HIT implementations now costing upwards of one
hundred million dollars, 1 implementation efforts are more important than ever.
While not universally agreed upon, HIT is believed to provide benefits to healthcare, spurring a
number of agencies in the United States to call for the adoption of such systems to improve quality and
promote evidence-based practice. 2-5 Specific benefits of health information technology include
decreased errors, improved processes and workflow, increased adherence to evidence-based guidelines
and decreased cost. 2,6-8 Beyond the benefits of information technology itself, the implementation
process has been studied and reported on as an issue in its own right. The implementation of information
technology affects numerous hospital processes from finance to patient care and education. 9,10 The type
of information technology implemented in medical institutions varies and can include clinical decision
support (CDS) tools, computerized physician order entry (CPOE), medication reconciliation and
medication dispensing systems. 11-13 Much has been written concerning specific obstacles related to the
implementation of health information technology (HIT). These include staff resistance to
implementation, 14 communication problems between the physician and patient as a result of
technology 15,16 and workarounds as a result of design. 17
A recent study published in the New England Journal of Medicine cited physician resistance as
one of the top barriers to the implementation of electronic-records systems in hospitals. 18 Physician and
staff resistance are primary barriers to the implementation of technology 14 and facilitators of
unintended consequences in the form of workarounds. 17 Unintended consequences and failures in the
implementation of technology in hospitals can, in part, be attributed to the lack of a sociotechnical
approach and a failure to truly evaluate the given systems for their benefits, ensuring goals are aligned
across the organization. 19,20,14 Thus, a major question is how to reduce resistance prior
to implementation. To do this, one must first find a means of measuring the resistance – in other
words, a means of measuring readiness for change.
In our research, we have hypothesized that one critical determinant of successful
implementation lies in measuring organizational readiness for change, which would provide a
foundation for an intervention prior to implementation of HIT. Throughout this manuscript we explore
the concept of measuring organizational readiness for change further and present our findings in detail,
including a discussion of technology implementation frameworks, measuring organizational readiness for
change and organizational readiness for Health IT implementation in particular. We propose sample
questions based on our findings to include in such tools and conclude with a summary of our work thus
far with suggestions for next steps.
Background
This project was born as a result of the work of the Health Services Research (HSR) Department at the
Regenstrief Institute. HSR is a part of the Center for Health Services Research and Outcomes at the VA
Center in Indianapolis. Their primary areas of research interest involve patient safety, patient
communication, HIT solutions and process improvement. In 2008, the department was awarded a 1.9
million dollar contract with AHRQ to reduce the rate of MRSA in hospitals using Lean Systems
interventions and active surveillance in ICUs. The intervention – active surveillance – is not technical in
itself nor does it necessarily involve HIT. However, there are future implementations on the project that
will utilize HIT and Lori Losee along with the Project Manager began discussing certain problems with
the project. Specifically, there was a question concerning whether an organization or unit could be
measured for their readiness to change prior to an implementation. Problems and barriers encountered
with the hospitals suggest that the site selection process prior to the start of the project was
not without flaw. Thus we discussed the appropriate measures to use in selection and what
interventions could be made prior to the primary intervention. This discussion eventually moved to
technology and informatics positing a method for measuring an organization’s readiness for the
implementation of new technologies. Interspersed within this conversation were other topics such as
Weick and Sutcliffe’s Managing the Unexpected 21 and knowledge management within
organizations. Weick and Sutcliffe proposed a method of measuring an organization’s readiness to
manage the unexpected. In knowledge management, it is common to measure an organization’s
readiness to share knowledge through the use of a survey. Jay Liebowitz introduces such a tool in
his approach to implementing a knowledge management program. 22 We theorized a project could be
developed to validate a measurement tool for change readiness in an organization with a specific focus on
HIT implementation.
Implementation Science
In the past few decades, an emerging body of research has attempted to develop
frameworks and theories of implementation. Specifically, what measures can an organization
use to diffuse a change or to disseminate a new practice? This body of research has attempted to answer
those questions with extremely limited success. Two separate reviews of the implementation literature
have reported on the lack of solid research in implementation science and the paucity of practical
methods. 23,24 However, there are some generally accepted theories that have been developed or, in
some cases, borrowed from – most notably Rogers’ Diffusion of Innovations (DOI) Theory, 25 the PARIHS
framework 26-29 and quality improvement models. 30,31 We won’t review all the theories presented or those
we encountered. However, we will provide a foundation of those theories we believe have applicability
to our end goals.
Rogers’ DOI theory explores both the quality of an innovation and the innovator or end-user. The
theory notes five characteristics of an innovation that are particularly influential among potential adopters
within a setting: 1) perceived benefit of the change, 2) observability of the innovation, 3) compatibility of
the change with the current culture and personal belief systems, 4) level of simplicity of the innovation and
5) trialability. These characteristics are apt for a number of changes and implementation efforts.
However, HIT generally falls short on three of the qualities. Observability and trialability are nearly
impossible with HIT, since implementations generally require a physical change in process and
implementation of IT with “no going back” once implementation is complete (though pilot
implementations are often utilized). And simplicity of the innovation is an oxymoron when speaking of
IT. The two primary qualities of Rogers’ theory that apply would be perceived benefit of the change and
compatibility of the change with the culture. These two qualities or characteristics are discussed in more
detail below. It is also worth noting the other dimension of Rogers’ theory – the end-user. Rogers
identified five clusters of personality: 1) innovators (2.5%), 2) early adopters (13.5%), 3) early majority
(34%), 4) late majority (34%) and 5) laggards (16%). Rogers noted the first three categories of people are
essential for building momentum and sustainable change. He also saw these categories of users
distributed along a bell curve in normal distribution. But beyond Rogers’ sound theory, how can we
increase adoption? Moreover, can this theory be applied to HIT? It has been applied theoretically 32 but
we were unable to find any concrete studies.
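The percentages in Rogers’ adopter categories are not arbitrary: they correspond to standard-deviation cutoffs on a normally distributed time-to-adoption. The sketch below (Python; the standard-normal model of time-to-adoption is the only assumption) reproduces them:

```python
from statistics import NormalDist

# Rogers' adopter categories fall out of standard-deviation (sd) cutoffs on a
# normally distributed time-to-adoption, where earlier = more innovative.
nd = NormalDist()  # standard normal: mean 0, sd 1

categories = {
    "innovators":     nd.cdf(-2),               # ~2.5%: adopt more than 2 sd early
    "early adopters": nd.cdf(-1) - nd.cdf(-2),  # ~13.5%: between 1 and 2 sd early
    "early majority": nd.cdf(0) - nd.cdf(-1),   # ~34%: up to 1 sd earlier than average
    "late majority":  nd.cdf(1) - nd.cdf(0),    # ~34%: up to 1 sd later than average
    "laggards":       1 - nd.cdf(1),            # ~16%: more than 1 sd late
}

for name, share in categories.items():
    print(f"{name}: {share:.1%}")
```

Note that the first three categories – the ones Rogers considered essential for building momentum – together cover exactly half of the population (2.3% + 13.6% + 34.1%).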
PARIHS (Promoting Action on Research Implementation in Health Services) is another
framework used in implementing a change, but it has not been used specifically for HIT. The framework
was originally developed in 1998 and has undergone research and development over the course of
several years in the United Kingdom. Though the focus is on the implementation of evidence for bedside
care, the overall framework contains elements that would apply to the implementation of any change.
PARIHS examines the relationships among evidence, context, and facilitation and considers these
elements to have a dynamic relationship with one another. The general
principles focus on the evidence for the change, the context of the change and the facilitation of the
process for change. These three elements are key in implementing a change, but may not be
appropriate for HIT, as it is often difficult to determine the evidence for the change or to convince the
end-user that HIT is a better way.
Neither of the above two theoretical frameworks provides a systematic or practical set of
guidelines for implementation. But tools such as Lean and Six Sigma have been discussed in the
literature 24 as practical methods in implementing change. However, these tools fall short of the
assessment our team was seeking.
In general, the literature for implementation science fell short of providing any concrete
guidance for implementation, assessment or systems change. The studies are lacking in detail,
short on sound study design, and vague in terms of what and how to assess. However, there were two
primary points we derived from this literature review that persistently appeared in our subsequent
research – the perceived usefulness of a change and the compatibility of a change with the culture, from
Rogers’ theory. These are essentially value judgments and, as we will see, they occur in other work as
well.
Measuring Organizational Readiness to Change
There are a number of theorists and authors who have indicated a change must be preceded by a set of
circumstances that will allow or facilitate the change. 24,25,33-36 The qualities that precede the change have
not been agreed upon and, as we have noted, there have been no rigorous studies to validate any of the
qualities proposed in implementation models. Given this bleak picture of implementation science, it is
not surprising that there are few solid tools in existence to measure organizational readiness to change.
It does seem ironic that these tools even exist given the paucity of data in determining exactly what
qualities or factors within an organization indicate a readiness to change. However, a recent review of
the literature on assessment tools revealed 106 articles written concerning assessing an organization’s
readiness to change. 37 Though our particular goal was to either locate or develop a tool to assess the
readiness of an organization to accept HIT, we felt it necessary to seek out any assessment tools for
change, period. This would enable us to determine if these general change assessment tools might be
useful in the development of a specialized tool. The review we located is recent (2008), and of the 106
articles the authors located, only 72 contained empirical research. Of those articles, there was a lack of
consensus concerning what constituted a change or conceptually what change meant. This concern has
been echoed in other recent literature as well. 24,35 In all, the review only identified 43 instruments – 7 of
which had been assessed for validity and reliability. Our team reviewed these instruments and found
only two that came close to meeting our needs. However, one instrument contained 118 questions 38 –
far too many for our purposes. The second instrument contained 22 questions 33 – still too many, but
much more reasonable. An example of those points being measured can be seen in figure 1. Ultimately,
the authors concluded the existing tools for measurement fell short of the rigorous standards set in
scholarly development of measurement instruments.
Figure 1: Herscovitch Measurement
Given this review is from 2008 and our literature review revealed no instruments published since this
article, we concluded our search at that point.
Technology Implementation Frameworks – How does Measuring
Organizational Readiness to Change Fit?
A third avenue we pursued in attempting to locate tools for measuring HIT implementation readiness was
to seek out HIT implementation models. In doing so, we theorized we might well discover measurement
methods preceding the application of the frameworks. There are a large number of studies related to
models and frameworks. 36,39-47 We located the primary articles covering each major model and
attempted to evaluate them in terms of how they might fit into our research. Primarily we were
interested in whether they measured the users prior to implementing a system. Table 1 illustrates the
major models.
Table 1: HIT Implementation Models

Model: Information Success Model 39
Description: DeLone initially proposed user satisfaction was the key indicator of IT success, with 2 primary elements being responsible – system quality and information quality. DeLone later updated the model to include service quality as well.
Measurement: No

Model: Technology Acceptance Model (TAM) 44
Description: Focuses on perceived usefulness and perceived ease of use to predict user adoption. Those factors also interact with the system in a sociotechnical way.
Measurement: Yes, only to verify the model

Model: Information Technology Adoption Model (ITAM) 46
Description: An attempt at revising the TAM, including the theory that there must be a “fit” between the system and the user. This was a precursor to workflow modeling, but ignored the clinical environment as a factor.
Measurement: No

Model: Task-Technology Fit Model (TTF) 47
Description: Considered the complexity of the clinical environment, examining 3 elements – individual abilities, technology characteristics and task requirements.
Measurement: Yes – post-implementation

Model: Fit Between Individuals, Task and Technology (FITT) 42
Description: Focuses on the fit between 3 elements – task, individual and technology.
Measurement: Yes – post-implementation

Model: Contextual Implementation Model (CIM) 43
Description: Focuses on 3 dimensions in implementation – organizational context, clinical context and individual context. Offers little practical guidance or structure for assessing the 3 dimensions.
Measurement: No

Model: Kukafka et al. 36
Description: Framework based on 5 phases and two propositions stating that HIT is complex and involves a variety of factors, and that success depends on the involvement of the target groups using the technology.
Measurement: Involves an assessment phase but provides no tool
In a general sense, the HIT implementation models were of minimal use to our team, though of great
interest nonetheless. Most were theoretically or conceptually based, providing very little practical
guidance. And most assessment was done post-implementation. In the case of the TAM and ITAM, both
models evaluated HIT that was voluntarily implemented in organizations. Of course, the caveat to this is
that a good percentage of implementations in hospitals are not voluntary at all. We did, however, draw
some very important points from this aspect of the review. One point that proved useful was to evaluate
the models based on the proposed qualities that facilitated HIT implementation. Two qualities we saw
repeated that were similar to Rogers’ characteristics of an innovation were perceived ease of use and
perceived usefulness, which came from Davis’ TAM model and were also utilized in Dixon’s ITAM. Another
aspect that proved useful was the general evolution of the models toward understanding the
complexity of the clinical environment. These elements did give us some guidance in developing points
of measure for an instrument. However, none of these frameworks has been truly evaluated with
scientific rigor and some have not been validated at all. Moreover, none of them performed a pre-
assessment of readiness for HIT implementation – our primary goal. With that, we moved toward the
only three instruments we had located in our review.
Measuring Organizational Readiness for Health IT
As we moved through this process of seeking and reviewing, we began to ask whether
there was truly any difference between a change of any type and an HIT change. That is, would
the same motivations need to be present for a change in HIT as for a change in, say, the process for
testing a patient for any given disorder? We determined there might be specific – very specific –
differences, but if we were to find a tool with broad applicability, we would have to seek the less specific
measurements.
We located three assessment tools, 48-50 only one of which was a survey. The other two were
assessment frameworks for the adoption/implementation of HIT. We also located two landmark articles
evaluating the qualities of successful implementation 51,52 and a qualitative meta-analysis of HIT
implementation in the literature. 53
The earliest tool we found was not a survey instrument but rather an assessment framework
developed in 2003. 50 The framework evaluates 9 components of an organization’s readiness to
implement CPOE and does this through key interviews, observations and document review. The 9
components are listed in Table 2 and were selected from existing reviews and literature on HIT
implementations. There is no mention in the article as to whether this tool was ever validated, and we
are left with the assumption it was not. We were, however, interested in the qualities they identified as
indicating readiness for change.
Table 2: Readiness Components 50

External Environment (Market, Regulation): External and internal forces that are forcing the organization to implement CPOE
Organizational Leadership (Accountability, Vision, Planning): The organization’s commitment to CPOE as a top priority
Organization Structure & Function (Physician Model, Resources, Communication): The presence of organizational structures, including the effectiveness of those structures
Organizational Culture (Success w/ Improvement, CPOE Awareness, Innovations): The organization’s capacity to engage in and sustain large-scale change
Care Standardization (Commitment, Experience, Compliance): Ability to adopt or develop standard care processes and implement them across the organization
Order Management (Process Standardization, Management, Compliance): The present state of order management services, disciplines and processes
Access to Information: Clinician experience with computing in clinical practice
IT Composition (Clinical Involvement, IT Services and Support): The roles, skills, structure and methodologies of the IT department
IT Infrastructure: Physical structure and components of IT
Nancy Lorenzi also developed a similar framework for assessment, dubbed the Success Factor
Profile. 48 Lorenzi’s tool was administered via structured interviews and was meant to assess a
unit’s readiness for pilot projects. The tool measured 4 conceptual areas:
Table 3: Lorenzi Structured Interviews

Unit Vital Signs: Demographics of the unit, documentation, staff stability, retention, current technologies
Information Infrastructure: Current software, hardware and uniqueness of technologies
Peopleware: Staff experience with technology, staff’s previous response to change, history with technology change, potential technology champions (early adopters)
Innovation Prospects: Largest benefit and drawback anticipated from electronic transformation, hardware needs, resources needed for change, likely barriers and challenges, desire to be a pilot site
Once Lorenzi had acquired the information via the interviews, the units were scored on a 100-point
scale. This was used in the selection process. Again, there is no mention of validation, and the points
selected for the structured interview are not commented on in the methods section. That is, we have no
idea how the group arrived at the selected interview points or how they decided
these measures would accurately predict the likelihood of success. Again, we found the points they
selected to be of interest, especially whether these points trend with other studies.
Perhaps the most thorough study was conducted by Snyder and Fields. 49,54,55 They developed a
48-point scale titled the Organizational Information Technology Innovation Readiness Scale (OITIRS) (see
Appendix A). They developed the tool through 4 phases and assessed it in the final phase for validity and
reliability, with favorable results. The primary problem with the OITIRS is the sheer size and breadth of
the scale. In attempting to validate the tool and determine its reliability, it suffered a slightly lower score
because the sub-scales may not have been appropriate for all audiences. A future area of research the
authors noted in their conclusion would be to validate the sub-scales independently or in conjunction
with select other sub-scales from the instrument.
A 48-point scale is entirely too large for use in an average hospital. Our group contended
the average hospital would need a scale no larger than a quarter of that – 12 questions – and that
attempts at achieving an adequate response rate would be dependent on smaller surveys in community
and public hospitals. This in no way discounts the work of Snyder and Fields, but it is a point we took
away from the work and considered in developing our own instrument.
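To illustrate how lightweight such an instrument could be, the sketch below scores a hypothetical 12-item, 1-to-5 Likert survey by readiness dimension. The items, dimension labels and scoring here are our own illustrative assumptions, not a validated instrument:

```python
from statistics import mean

# Hypothetical mapping of 12 survey items to readiness dimensions; both the
# items and the dimension labels are illustrative assumptions, not validated.
QUESTION_DIMENSIONS = {
    "q01": "value_benefit",       "q02": "value_benefit",
    "q03": "previous_experience", "q04": "previous_experience",
    "q05": "support",             "q06": "support",
    "q07": "leadership",          "q08": "leadership",
    "q09": "culture",             "q10": "culture",
    "q11": "stability",           "q12": "stability",
}

def readiness_profile(responses):
    """Aggregate 1-5 Likert responses into a mean score per dimension."""
    by_dim = {}
    for item, score in responses.items():
        by_dim.setdefault(QUESTION_DIMENSIONS[item], []).append(score)
    return {dim: round(mean(scores), 2) for dim, scores in by_dim.items()}

# One unit's responses; a dimension scoring low (say, below 3.0) would flag
# a target for intervention prior to implementation.
unit = {f"q{i:02d}": s for i, s in enumerate(
    [5, 4, 2, 2, 4, 3, 5, 4, 3, 3, 4, 4], start=1)}
profile = readiness_profile(unit)
print(profile)
```

The point of the per-dimension profile is that a low score on one dimension – here, previous experience – is what would trigger a pre-implementation intervention, regardless of the overall mean.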
In developing an instrument, our first question was “what should be measured?” That is, what
are those qualities or traits we are seeking to either find or avoid in implementing HIT? It would seem
there are a number of qualities we would seek that could be settled on independently of
any studies or development of similar instruments. For example, the computer literacy of a unit or
organization seems particularly important, as do leadership support and resource support.
Snyder and Fields actually conducted a Delphi study to determine the qualities they would
measure for. We had neither that luxury nor the time. Thus we were relegated to using existing studies,
and there were three primary bodies of work we focused on. 48,49,54-56 Aside from Snyder’s study and
Lorenzi’s work, we also chose to focus on the work of Ash.
In evaluating each study, we chose to primarily concern ourselves with two things – the end-
users’ behavior and attitude prior to implementation, and organizational culture. In doing so, we were
able to eliminate certain measurements from the three studies we were focusing on, such as IT
infrastructure or workflow issues, which are considerations beyond the end-user and their readiness to
accept a given technology. It is important to note and underscore that our intention was to
measure readiness for an HIT implementation. We theorize issues such as workflow – while they are
important – constitute a separate genre in that they become issues post-implementation, and in many
cases there is little that can be done to address them once implementation has occurred. In short,
workflow is a sociotechnical issue that should be addressed prior to implementation but only affects
organizational readiness well after the implementation. We wanted to focus on previous experiences,
measuring the readiness to accept a new change based on that previous experience along with
organizational factors we felt influenced change in organizations.
Ash et al. developed 12 points of consideration in their work and broke them into 3 different
groupings – technology considerations, personal considerations and organizational considerations.
In terms of technology, Ash et al. considered 4 points:
Time saved – does the system actually save the end-user time?
Knowledge management structures in place to support CPOE and/or CDS
How well the system is integrated to provide functional support on a number of different levels
Cost of the systems for implementation, training, support and sustainability
Personal Considerations included:
Value of the system to those who use it
Leadership support and liaisons between leadership, IT and the end-user (this latter point could
also be termed as communication)
Support “at the elbow,” as Ash et al. termed it – support that interfaced with the end-user and
was there to provide contextual help
Organizational Considerations include:
Foundational underpinnings for the implementation to include support and motivation at an
organizational level
Collaborative project management where the lines of communication are open and the project
includes different voices from different users
Linguistics – whether the management of the project is able to communicate needs in the varying
languages used across disciplines
Continuous improvement efforts in place for the on-going support of the implementation
Motivation of the organization and the context in which the implementation is made
Rahimi et al. identified 11 factors they deemed important for HIT implementation. 53 Their study was a
qualitative meta-analysis of studies published between 2003 and 2007. They identified 17 studies for
inclusion and mined their 11 factors from their analysis. Those factors are summarized in Table 4.
We found the work of Ash et al. to have the greatest applicability to what we were seeking and to be
the most comprehensive, and we saw clear trends between this work and the work of Lorenzi and of
Snyder and Fields. Specific trends between the studies were:
Knowledge Management
Valence or value
Leadership
Support (this would include multidisciplinary support from IT, leadership and local management)
Foundational structure and underpinnings (this could also be termed as culture)
Motivation
Table 4: Comparison of Qualities for Successful Implementation (* denotes trend between studies)

Lorenzi: External Environment; *Organizational Leadership; *Organizational Culture; Care Standardization; Order Management Process; Access to Information; IT Structure; *Organization Structure & Function
Snyder & Fields: *Resources (Support); Computers & Technology; Personnel; Organization; *Administrative Support
Rahimi et al.: Education, Training; *Support; *Management Support; *End-user Information / Needs Assessment; Processes (*Integration, Work Routines, Workflow, Implementation Process); *Values & Goals; *Motivation & Rationale; *Knowledge Management; Trust; *Management Structures; Technical System Performance; Participation & User Involvement; System Effectiveness
Ash et al.: Time; *Knowledge Management; Integration; Costs; *Value; *Leadership/Liaisons; *Support; *Foundation; *Collaborative Project Management; *Linguistics; Continuous Improvement; *Motivation
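The trend-spotting in Table 4 amounts to a set intersection: a quality “trends” when, under some normalized label, it appears in two or more of the studies. The sketch below illustrates that logic; the simplified quality labels and their mapping onto each study’s original terms are our own assumptions for illustration:

```python
from collections import Counter

# Hand-normalized quality labels per study; this simplified mapping of each
# study's original terms onto shared labels is an illustrative assumption.
study_qualities = {
    "Lorenzi":         {"leadership", "culture", "structure", "knowledge_mgmt"},
    "Snyder & Fields": {"support", "resources", "structure"},
    "Rahimi et al.":   {"support", "value", "motivation", "knowledge_mgmt",
                        "structure"},
    "Ash et al.":      {"support", "value", "leadership", "motivation",
                        "knowledge_mgmt", "foundation"},
}

# Count how many studies mention each quality; a quality appearing in two or
# more studies is a cross-study trend (the '*' entries in Table 4).
counts = Counter(q for qualities in study_qualities.values() for q in qualities)
trends = sorted(q for q, n in counts.items() if n >= 2)
print(trends)
```

Normalizing the vocabulary first is the hard part in practice; the counting itself is trivial once the labels are shared.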
Our group evaluated these trends not only for consistency across studies, but also for their conceptual
properties. We were also concerned with factors based on our own experiences. Specifically, the
project manager wanted to see this work as an intersection between the studies above and Regenstrief
studies in HSR involving implementation and lessons learned. HSR at Regenstrief has recently finished
two major studies on the implementation of methods to reduce hospital-acquired infections, where
compiling lessons learned was a major deliverable to AHRQ in an effort to advise and direct future
funding and study opportunities. The project manager wanted to exploit those lessons learned and
apply them to this specific project. In the process, we developed 8 points of consideration for
implementation efforts of HIT.
1. Value & Benefit – Davis’ work 44 underscores the importance of perceived value and benefit. In
many ways, it complements Rogers’ characteristics of an innovation in that users must perceive
either value great enough to make adoption efforts worthwhile or a return on investment
large enough to warrant the effort expended. In any case, we found user value and perceived
benefit to be a trend in nearly every review we completed. They are interrelated, and we believe
it is apt that this measure falls at the top of the list. If there is no perceived benefit or
motivation, the implementation team must address this issue prior to any changes being made. That
is, there must be an effort to raise the perceived value and benefit.
2. Previous Experience – As noted above, there is a certain level of vagueness in describing
what an HIT implementation failure is. A point could be made that any implementation leaving a
negative perception of HIT, IT staff, leadership or automation efforts in medicine could be
considered a mode of failure, as it then becomes a previous experience. We have found
previous experiences to be important, but this is a vague measure in itself, as we must ask
“what is the previous experience referring to?” Could it be lack of support? Could it be lack of IT
knowledge or bad management? There could be any number of reasons that perception is
affected. However, we thought it valuable to gauge the general optimism or pessimism toward a new
implementation by asking about previous experiences with implementations.
3. Support – where is it, who provides it and when? Ash et al. indicate post-go-live support is of
greater value than pre-go-live training. This is probably because after the implementation takes
place there is context to the help being given, meaning practical application occurs – a superior
educational method. While we are not terribly concerned about what happens post-implementation
in terms of surveying, we are interested in how the users perceive their IT and leadership support
throughout a project. That is, are they optimistic or pessimistic that they will receive personnel
support during this implementation? This point, along with number 2 above, warrants an
intervention if the perceptions are negative. There must be efforts made to ensure the staff
feels as though this implementation is or will be handled differently and that support will be
heavier this time. In short, if perceptions are negative, the implementation leaders must come
to the table admitting previous failures and assuring the end-users that they want this
implementation to be different.
4. Project Stability – this includes organizational stability and vendor stability. How many other
distractions are occurring during the implementation? How well organized is it? An example is
our current MRSA study. The project manager on that study has had to halt any widespread
efforts on the project because of the H1N1 season, and has communicated this to the four
Indianapolis hospitals, indicating that the virus takes precedence over the study and that any
changes can be handled once the hospitals are in a normal operating mode again. This has
communicated two things to the staff: first, they now feel as though someone cares and
understands their problems and, second, they know they will be allowed to concentrate on
what is important. In a sense, this has created stability in our project at Regenstrief. It has also
created a delay, but a delay that averts a crisis, or a situation where lower-quality work is
produced, is a risk and compromise worth accepting.
5. Leadership – there has to be strong leadership with good relationships between the leadership
and the staff. Leadership has to be committed and willing to receive open and honest
feedback. In some sense this is a cultural issue, but we decided leadership was worth sorting
into its own category, and it is a trend we found in every study we evaluated.
6. Multidisciplinary Culture – implementation requires a number of different groups to work with
one another, and all of these groups have differing vocabularies. The question to ask here is
one that measures the ability, or potential, of the organization to work across groups. How
confident are the groups that, working with one another, they will achieve their goals? How
well are they able to communicate with one another? Note: this measure could be obtained
from both the IT staff and the healthcare staff.
7. Feedback – this is really about correcting the wrongs: allowing the users to feel empowered to
make changes in the system or, at the very least, to air their grievances. Does the culture allow
this? Does the leadership listen? Ash et al. termed this continuous improvement; we felt
feedback was a more specific and more measurable mechanism. Thus we would measure the
level of feedback currently in the organization. That is, if there is a current problem with a
system, how well does your staff, IT department or management listen to and understand the
problem? How soon do they address and correct it?
8. Trust – above all, this may be the most important factor aside from value and benefit.
Physicians and nurses need to trust not only the changes but the reasoning for implementing
them and those who are driving the change efforts. This differs from value in that it extends
beyond the reasoning for implementation and into human resource issues, including a general
feeling of trust for all parties involved in the implementation.
Once we had developed and refined these points, we began drafting questions designed to measure
the eight qualities, forming a preliminary version of a measurement instrument. The questions follow;
all assume a 5-point Likert scale, survey style.
Support
There were enough resources to complete the implementation
In previous implementations we have had an ongoing presence and support both during and
following the implementation
If there was a problem, there was always someone available to act and make decisions
Support was most engaged with front-line staff
People were always seeking feedback for problems in implementation
We have ready access to resources in the event of an unexpected problem
Value
In my opinion, Health IT has the potential to increase value of our organization by:
Improving workflow efficiency of clinical staff
Improving workflow efficiency of IT staff
Improving workflow efficiency of Administrative staff
Reducing errors
Reducing redundancy
Providing infrastructure for business processes
Providing infrastructure for more detailed financial analysis
Providing infrastructure for quality outcomes analysis
Previous experience
In previous HIT implementations:
I have confidence in health information technology
My previous experience in using technology in this organization has been positive
I have experienced a successful implementation of Health IT at some point
The technology functioned as advertised and as leadership explained it
We had appropriate change management during the course of implementation
I knew who to go to when issues came up
Multidisciplinary culture
I interact with a number of different disciplines each day (e.g. IT, Leadership,
Administrative staff)
I feel as though there are clear lines of communication between different disciplines in this
organization
The language of the IT staff and leadership is clear to me
Project stability
In previous HIT implementations:
I was not overwhelmed
I did not feel as though my staff was overwhelmed
We had appropriate supervision of the implementation process
I did not feel as though competing demands were more important than the implementation
Feedback
I feel my voice is heard on the unit
I feel as though I can honestly provide feedback on changes without fear of repercussions
I understand the impact changes have on the organization through feedback from managers
and leadership
In previous implementations, we have been a part of the process in providing input
Trust
I have a sense of trust for changes my organization makes
I feel the changes are driven by the desire to provide better care to our patients
I trust that I will have the resources and support I need to do the best job possible
Leadership
Our leaders are engaged with frontline staff in this organization
When there is a change in our hospital, our leaders are fully engaged
It is not uncommon for leadership to be present on our unit any given day
I can always approach leadership within our hospitals with concerns or problems at the
unit level
We are encouraged to provide feedback to our leaders regularly
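To make the scoring of this draft instrument concrete, the sketch below computes a subscale mean for each of the eight qualities and flags any quality falling below the scale midpoint as a candidate for the interventions discussed above. The item counts and responses here are hypothetical illustrations, not part of the instrument itself, and the midpoint cutoff is an assumption rather than a validated threshold.

```python
# Sketch of scoring the draft instrument: subscale means on a 5-point
# Likert scale (1 = strongly disagree ... 5 = strongly agree).
from statistics import mean

# One respondent's answers, grouped by the eight readiness qualities.
# (Hypothetical data -- real responses would come from the survey.)
responses = {
    "support":                  [4, 3, 4, 2, 3, 4],
    "value":                    [5, 4, 4, 5, 4, 3, 3, 4],
    "previous_experience":      [2, 3, 3, 2, 2, 3],
    "multidisciplinary_culture": [4, 3, 3],
    "project_stability":        [3, 3, 4, 2],
    "feedback":                 [4, 4, 3, 4],
    "trust":                    [4, 4, 5],
    "leadership":               [4, 3, 3, 4, 4],
}

# Compute each subscale mean; a mean below the midpoint (3.0) flags an
# area warranting intervention before implementation proceeds.
scores = {q: round(mean(items), 2) for q, items in responses.items()}
flagged = [q for q, s in scores.items() if s < 3.0]

print(scores)
print("Needs intervention:", flagged)
```

With these illustrative responses, only previous experience falls below the midpoint, which mirrors the scenario described in points 2 and 3 above: negative perceptions of past implementations would prompt leadership to intervene before go-live.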
In addition to surveying the users to determine readiness, we would recommend approaching the
implementation as Lorenzi et al. did, adding structured interviews, ethnographic observations and
interviews with key leadership. This would allow a more holistic picture of the organization to be
painted and would improve identification of key issues, though it would also increase the amount of
data collected and analyzed.
Limitations & Outcomes
There are a number of limitations to our work that should be noted. First, Snyder and Fields used a
Delphi panel to determine which qualities to measure. This would be ideal in our scenario, but we
would need to locate experts to collaborate on the measurements. Next, our draft is only preliminary
and has not been tested or validated. Any work we complete should build on previous work rather
than duplicate it; thus, the scale above would need to be reduced and would need to go through a
number of revisions. We see our work thus far as merely a starting point with two primary areas to
build upon: a rigorous analysis and selection of characteristics for measurement, and the
development of a measurement instrument for validation.
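One standard step in the validation work described here is checking each subscale's internal consistency. The sketch below computes Cronbach's alpha for a single hypothetical subscale; the response data and the commonly used 0.7 acceptability threshold are illustrative assumptions, not results from any actual administration of the instrument.

```python
# Sketch of an internal-consistency check for one subscale:
# Cronbach's alpha from a small matrix of Likert responses.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per item (columns)."""
    k = len(item_scores)
    # Per-respondent totals across the subscale's items.
    totals = [sum(resp) for resp in zip(*item_scores)]
    item_var = sum(pvariance(col) for col in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# Hypothetical "trust" subscale: 3 items, 5 respondents.
trust_items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 3, 4, 3],
]
alpha = cronbach_alpha(trust_items)
print(round(alpha, 2))
```

An alpha of roughly 0.7 or above is conventionally taken as acceptable reliability; subscales falling short would be revised or reduced, which is exactly the kind of iteration our preliminary draft still requires.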
In terms of usefulness to Regenstrief, this work and compilation serve as a summary of
what has been accomplished in the field. Specifically, the subject bibliography provides a road map for
accessing and building on measurements of organizational readiness. The work we have completed in
developing a preliminary measuring tool is most certainly a fundable opportunity, as funding is how
Snyder and Fields were able to complete their work. Beyond these two points, there are a number of
existing measurement tools that could be utilized for existing studies in HSR.
Discussion and Conclusions
HIT is critical to the success of medical practice today, with the power to significantly influence
virtually every part of the health care delivery process. Understanding the necessity of HIT is just the
first step. Institutions are now responsible for taking steps to ensure successful implementations of HIT,
as the costs of failure are extremely high. Our focus was a review of the existing Technology
Implementation Frameworks and an examination of the various tools available for measuring
organizational readiness for change and HIT implementation. Given the ostensible importance of
measuring an organization's readiness to change during an implementation, it is puzzling that there is
not more evidence showing that taking these measures makes a difference. Moreover, why are the
scant existing tools not widely used? That is, why are we not measuring our organizations for their
readiness to change? Is it a complete disregard for those who do the work by those who direct the
work? We found a number of barriers to the use of these tools:
Use of a measurement standard is absent in most organizations
Lack of construct validity and reliability for most tools
Lack of a true understanding of which change behaviors are appropriate to measure against
Existing tools are entirely too long; a 1-2 page survey seems most appropriate given the
constraints on the time of hospital workers.
A misalignment between the questions being asked and the people answering them – e.g.
certain tools asked questions about the technology infrastructure needs of a given workplace,
something practicing clinicians are unlikely to understand well enough to answer
accurately.
These limitations leave certain room for improvement. Our work has revealed that, despite attentive
efforts to create and apply somewhat elaborate technology implementation frameworks and tools for
measuring organizational readiness for change and HIT implementation, significant and tangible steps
can still be taken to improve upon these tools.
One direction for developing this work further would be to draw on motivational research in
the field of educational psychology, where a number of theorists have sought to determine what
motivates students to learn and which methods do so best. There is a growing body of adjacent work
in organizational psychology on what motivates humans in the workplace. Combining these fields of
research with known methods in motivation theory would establish a stronger foundation for what
should be sought in implementation efforts and what motivates people in the workplace. In short, it
would give us a more detailed picture of what it is we want to measure and whether that measure
indicates a greater probability of implementation success.
Whether the implementation is HIT or simply a change in practice, organizations are failing in
change management efforts today. We believe measuring for change and priming the environment for
change is more complex than previously thought and certainly not a subject for organizations to ignore.
Understanding how changes affect the workers in an organization and attempting to align goals for the
purpose of change management requires an intervention beyond simply the change. We believe one
solution is measuring the readiness for change, understanding where readiness falls short and
attempting to address the shortcomings before implementing costly changes. Further research is
necessary to establish this theory, but we are optimistic that it is worthy of pursuit in health services
research.
References:
1. Amatayakul M. 10 ways to keep implementation costs under control. J AHIMA. 2002;73(5):16A-16C.
2. Agrawal A, Wu WY. Reducing medication errors and improving systems reliability using an electronic medication reconciliation system. Jt Comm J Qual Patient Saf. 2009;35(2):106-14.
3. Institute of Medicine (U.S.). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C: National Academy Press; 2001.
4. Building a Better Delivery System: A New Engineering/Health Care Partnership. Washington, D.C: National Academies Press; 2005.
5. To Err Is Human: Building a Safer Health System. Washington, D.C: National Academy Press; 2000.
6. Chaudhry B, Wang J, Wu S, et al. Systematic review: impact of health information technology on quality, efficiency, and costs of medical care. Ann Intern Med. 2006;144(10):742-52.
7. Einbinder JS, Bates DW. Leveraging information technology to improve quality and safety. IMIA Yearbook of Medical Informatics 2007. Methods Inf Med. 2007;46(Suppl 1):22-29.
8. Shekelle PG, Morton SC, Keeler EB. Costs and benefits of health information technology. Evidence report/technology assessment. 2006;(132):1.
9. Institute of Medicine (U.S.). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, D.C: National Academy Press; 2001.
10. Doebbeling BN, Chou AF, Tierney WM. Priorities and strategies for the implementation of integrated informatics and communications technology to improve evidence-based practice. J Gen Intern Med. 2006;21 Suppl 2:S50-7.
11. Bails D, Clayton K, Roy K, Cantor MN. Implementing online medication reconciliation at a large academic medical center. Jt Comm J Qual Patient Saf. 2008;34(9):499-508.
12. Hatcher M, Heetebry I. Information technology in the future of health care. J Med Syst. 2004;28(6):673-688.
13. Motulsky A, Winslade N, Tamblyn R, Sicotte C. The impact of electronic prescribing on the professionalization of community pharmacists: a qualitative study of pharmacists' perception. J Pharm Pharm Sci. 2008;11(1):131-146.
14. Ward R, Stevens C, Brentnall P, Briddon J. The attitudes of health care staff to information technology: a comprehensive review of the research literature. Health Info Libr J. 2008;25(2):81-97.
15. Makoul G, Curry RH, Tang PC. The use of electronic medical records: communication patterns in outpatient encounters. J Am Med Inform Assoc. 2001;8(6):610-615.
16. Teutsch C. Patient-doctor communication. Med. Clin. North Am. 2003;87(5):1115-1145.
17. Halbesleben JRB, Wakefield DS, Wakefield BJ. Work-arounds in health care settings: Literature review and research agenda. Health Care Management Review. 2008;33(1):2.
18. Jha AK, DesRoches CM, Campbell EG, et al. Use of Electronic Health Records in U.S. Hospitals. N Engl J Med. 2009;360(16):1628-1638.
19. Coiera E. Guide to Health Informatics. 2nd ed. A Hodder Arnold Publication; 2003.
20. Coiera E. Putting the technical back into socio-technical systems research. Int J Med Inform. 2007;76 Suppl 1:S98-103.
21. Weick KE, Sutcliffe KM. Managing the Unexpected: Assuring High Performance in an Age of Complexity. 1st ed. Jossey-Bass; 2001.
22. Liebowitz J. What They Didn't Tell You About Knowledge Management. The Scarecrow Press, Inc.; 2006.
23. Grimshaw J, Eccles M, Thomas R, et al. Toward evidence-based quality improvement. Evidence (and its limitations) of the effectiveness of guideline dissemination and implementation strategies 1966-1998. J Gen Intern Med. 2006;21 Suppl 2:S14-20.
24. Weinert CR, Mann HJ. The science of implementation: changing the practice of critical care. Curr Opin Crit Care. 2008;14(4):460-465.
25. Rogers EM. Diffusion of Innovations. 5th ed. Free Press; 2003.
26. Brown D, McCormack B. Developing postoperative pain management: utilising the promoting action on research implementation in health services (PARIHS) framework. Worldviews Evid Based Nurs. 2005;2(3):131-141.
27. Kitson AL, Rycroft-Malone J, Harvey G, et al. Evaluating the successful implementation of evidence into practice using the PARiHS framework: theoretical and practical challenges. Implement Sci. 2008;3:1.
28. Rycroft-Malone J. The PARIHS framework--a framework for guiding the implementation of evidence-based practice. J Nurs Care Qual. 2004;19(4):297-304.
29. Helfrich CD, Li Y, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): Development of an instrument based on the Promoting Action on Research in Health Services (PARIHS) framework. Implement Sci. 2009;4:38.
30. Chou AF, Yano EM, McCoy KD, Willis DR, Doebbeling BN. Structural and process factors affecting the implementation of antimicrobial resistance prevention and control strategies in U.S. hospitals. Health Care Manage Rev. 2008;33(4):308-322.
31. Woodward-Hagg H, Workman-Germann J, Flanagan M, et al. Implementation of Systems Redesign: Approaches to Spread and Sustain Adoption. Advances in Patient Safety: New Directions and Alternative Approaches. Healthcare Research and Quality (AHRQ). 2008;2:1-15.
32. Ash JS, Lyman J, Carpenter J, Fournier L. A diffusion of innovations model of physician order entry. Proc AMIA Symp. 2001:22-26.
33. Herscovitch L, Meyer JP. Commitment to organizational change: extension of a three-component model. J Appl Psychol. 2002;87(3):474-487.
34. Poon EG, Blumenthal D, Jaggi T, et al. Overcoming the barriers to implementing computerized physician order entry systems in US hospitals: perspectives from senior management. AMIA Annu Symp Proc. 2003:975.
35. Weiner B. A theory of organizational readiness for change. Implementation Science. 2009;4(1):67.
36. Kukafka R, Johnson SB, Linfante A, Allegrante JP. Grounding a new information technology implementation framework in behavioral science: a systematic analysis of the literature on IT use. Journal of Biomedical Informatics. 2003;36(3):218-227.
37. Weiner BJ, Amick H, Lee SD. Conceptualization and measurement of organizational readiness for change: a review of the literature in health services research and other fields. Med Care Res Rev. 2008;65(4):379-436.
38. Lehman WEK, Greener JM, Simpson DD. Assessing organizational readiness for change. J Subst Abuse Treat. 2002;22(4):197-209.
39. Delone WH, McLean ER. The DeLone and McLean Model of Information Systems Success: A Ten-Year Update. J. Manage. Inf. Syst. 2003;19(4):9-30.
40. Aarts J, Peel V. Using a descriptive model of change when implementing large scale clinical information systems to identify priorities for further research. International Journal of Medical Informatics. 1999;56(1-3):43-50.
41. Aarts J, Peel V, Wright G. Organizational issues in health informatics: a model approach. Int J Med Inform. 1998;52(1-3):235-42.
42. Ammenwerth E, Iller C, Mahler C. IT-adoption and the interaction of task, technology and individuals: a fit framework and a case study. BMC Med Inform Decis Mak. 2006;6:3.
43. Callen JL, Braithwaite J, Westbrook JI. Contextual Implementation Model: A Framework for Assisting Clinical Information System Implementations. J Am Med Inform Assoc. 2008;15(2):255-262.
44. Davis FD. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly. 1989;13(3):319-340.
45. Davis FD. User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. Int. J. Man-Mach. Stud. 1993;38(3):475-487.
46. Dixon DR. The behavioral side of information technology. International Journal of Medical Informatics. 1999;56(1-3):117-123.
47. Goodhue DL, Thompson RL. Task-technology fit and individual performance. MIS Quarterly. 1995;19(2):213-236.
48. Lorenzi NM, Smith JB, Conner SR, Campion TR. The Success Factor Profile for clinical computer innovation. Stud Health Technol Inform. 2004;107(Pt 2):1077-1080.
49. Snyder RA, Fields WL. Measuring hospital readiness for information technology (IT) innovation: A multisite study of the Organizational Information Technology Innovation Readiness Scale. J Nurs Meas. 2006;14(1):45-55.
50. Stablein D, Welebob E, Johnson E, et al. Understanding Hospital Readiness for Computerized Physician Order Entry. Joint Commission Journal on Quality and Patient Safety. 2003;29:336-344.
51. Ash JS, Bates DW. Factors and forces affecting EHR system adoption: report of a 2004 ACMI discussion. J Am Med Inform Assoc. 2005;12(1):8-12.
52. Ash JS, Fournier L, Stavri PZ, Dykstra R. Principles for a Successful Computerized Physician Order Entry Implementation. AMIA Annu Symp Proc. 2003;2003:36–40.
53. Rahimi B, Vimarlund V, Timpka T. Health information system implementation: a qualitative meta-analysis. J Med Syst. 2009;33(5):359-368.
54. Snyder-Halpern R. Indicators of organizational readiness for clinical information technology/systems innovation: a Delphi study. International Journal of Medical Informatics. 2001;63(3):179-204.
55. Snyder-Halpern R. Development and pilot testing of an Organizational Information Technology/Systems Innovation Readiness Scale (OITIRS). Proc AMIA Symp. 2002:702-706.
56. Ash JS, Fournier L, Stavri PZ, Dykstra R. Principles for a successful computerized physician order entry implementation. AMIA Annu Symp Proc. 2003:36-40.