
Source: misrc.umn.edu/wise/2014_Papers/1.pdf

Beyond Adoption:

Does Meaningful Use of EHR Improve Quality of Care?

Yu-Kai Lin, Mingfeng Lin, Hsinchun Chen

University of Arizona

[email protected], [email protected], [email protected]

Abstract

Electronic health record (EHR) systems hold great promise in transforming healthcare. The existing empirical

literature has typically focused on adoption and found mixed evidence on whether EHR improves care.

The federal initiative for meaningful use (MU) of EHR aims to maximize the potential of quality

improvement, yet there is little empirical study on the impact of the initiative and, more broadly, the

relation between MU and quality of care. Leveraging features of the Medicare EHR Incentive Program

for exogenous variations, we examine the impact of MU on healthcare quality. We found evidence that

MU significantly improves quality of care. More importantly, this effect is greater in historically

disadvantaged hospitals such as small, non-teaching, or rural hospitals. These findings contribute not only

to the literature on Health IT but also to the broader literature on IT adoption and the business impacts of IT.

Keywords: Meaningful use, MU, electronic health records, EHR, quality of care

This Version: November 24th, 2014


1. Introduction

Researchers, observers, patients and other stakeholders have long deplored the woeful conditions of the

U.S. healthcare system (Bentley et al. 2008; Bodenheimer 2005; IOM 2001). While the nation’s

healthcare expenditure accounts for almost 18% of the gross domestic product (GDP), about 34% of

national health expenditures (or $910 billion) are considered wasteful (Berwick and Hackbarth 2012).

Meanwhile, preventable medical errors cause nearly 100,000 deaths and cost $17.1 billion annually (IOM

2001; Bos et al. 2011). To address these issues, researchers and policy-makers have been advocating the

adoption and use of Health Information Technology (HIT), with the hope that it will help transform and

modernize healthcare (Agarwal et al. 2010).

An important element of the HIT initiatives is the implementation of Electronic Health Records

(EHR). A full-fledged EHR system contains not only patient data, but also several interconnected

applications that facilitate daily clinical practice, including patient record management, clinical decision

support, order entry, safety alerts, and health information exchange, among others. EHR with a clinical

decision support system (CDSS) can implement screening, diagnostic and treatment recommendations

from clinical guidelines so as to enable evidence-based medicine (Eddy 2005). Similarly, the functionality

of computerized physician order entry (CPOE) in EHR can detect and reduce safety issues regarding

overdosing, medication allergies, and adverse drug interactions (Ransbotham and Overby 2010).

However, until recently most U.S. hospitals and office-based practices had been slow in adopting

EHR systems. Jha and colleagues (2009) reported that in a national survey, less than 10 percent of

hospitals had an EHR system in 2009. Similarly, DesRoches et al. (2008) found a 17 percent adoption

rate of EHR in office-based practices in early 2008. More importantly, studies have found mixed evidence

on whether the adoption of EHR improves quality of care (Black et al. 2011; Himmelstein et al. 2010),

which further casts doubt on the benefits of adopting EHR.

One potential explanation for the mixed effect from EHR adoption is that hospitals may not be

actually taking advantage of EHR, even if the system has been installed (Devaraj and Kohli 2003).


Through the Health Information Technology for Economic and Clinical Health (HITECH) Act, the

federal government has been taking steps to promote meaningful use (MU) of EHR to maximize the

potential of quality improvement (Blumenthal 2010). The HITECH Act committed $29 billion

over 10 years to incentivize hospitals and clinical professionals to achieve the MU objectives of EHR

(Blumenthal 2011). Under this law, the Centers for Medicare & Medicaid Services (CMS) has been the

executive agency for the incentive programs since 2011. Through these programs, eligible hospitals and

professionals can receive incentive payments from Medicare, Medicaid or both if they successfully

demonstrate MU. In addition, there will be financial penalties for hospitals and professionals that fail to

meet the MU objectives by 2015; that is, they will not receive full Medicare reimbursement

from CMS. The programs designate multiple stages of MU, where each stage has an incrementally broader

scope of MU objectives and measures.1

With the implementation of these incentive programs, recent surveys show a significant growth of

EHR adoption and MU (Adler-Milstein et al. 2014). However, the ultimate goal of this national campaign

is to improve the quality of care for patients (Blumenthal and Tavenner 2010; Classen and Bates 2011).

Yet so far there have been no empirical studies on the quality effect of MU. This study seeks to

fill this gap in the literature by examining the relation between the MU of EHR technology and

changes in hospital care quality.

One major challenge in identifying the effect of EHR or MU on quality of care is endogeneity.

The decision to adopt an EHR system is often correlated with a hospital's characteristics, some of which

are not observable to researchers. For example, small, non-teaching, and rural hospitals have been slower to adopt

EHR (DesRoches et al. 2012), and they are also likely to perform worse than their counterparts.

Conversely, many better-performing healthcare institutions are also pioneers in EHR adoption. Examples

include Mayo Clinic, which introduced EHR as early as the 1970s; and Kaiser Permanente, which had

invested about $4 billion on its EHR system before the EHR Incentive Programs (Snyder 2013). Such

1 http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Stage_2.html


endogeneity in observational data could lead to erroneous inference on the effect of adoption or MU on

quality of care.

We address this empirical challenge by exploiting some unique features of the Medicare EHR

Incentive Program. Specifically, under the guidelines of this program, hospitals that demonstrate and

maintain MU can receive annual incentive payments for up to four years, starting from the year they

attest to meeting the MU criteria.2 We strategically identify treatment and control hospitals to study whether

meaningful use of EHR technology improves quality of care. Through our identification strategy and

multiple empirical specifications and robustness tests, we find supporting evidence that MU significantly

improves quality.

2. Related Literature

In this section, we review the existing literature that directly informs our analyses. We start with a review

and synthesis of existing studies that focus on the effects of Electronic Health Records, and highlight the

gap that our study is seeking to fill. Since our focus is the Meaningful Use of EHR, in the second

subsection we discuss a related, albeit smaller, literature on MU. We also briefly review some

representative work from the broader literature on IT adoption and value that informs our study.

2.1 Literature on the Effects of EHR

Given the potential of EHR to change the routines of healthcare delivery, reduce costs, and minimize

errors, there has been a large and growing literature on this topic. Most directly related to our study are

the empirical ones. In this section, we systematically review published empirical studies on the effect of

EHR, and compare them with our study. We focus on studies published in and after 2010 to avoid

significant overlapping with prior review papers (Black et al. 2011; Chaudhry et al. 2006). For each study,

2 The attestation procedure involves filling out a formal form on the CMS EHR Incentive Programs Registration

and Attestation website. Hospitals need to report the vendor and product model of their EHR system and enter their measures for each of the required MU criteria. For more information about the registration and attestation procedure, see http://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/downloads/HospAttestationUserGuide.pdf


we summarize the main data sources, data period, data units, main dependent and independent variables,

identification strategy, type of analysis, and main findings. The result of our literature search and analysis

is shown in Table 1. To facilitate comparison, we list this study in the last row of the table. As can be

seen, our study is one of the first to study the effect of MU on healthcare quality by using a more

objective set of measurements for MU, a unique and recent dataset, and empirical identification methods

that leverage features of the Medicare EHR Incentive Program. We discuss the findings from our

literature search and categorization in the remainder of this subsection.

Main data sources; dependent and independent variables. It is apparent in Table 1 that

Healthcare Information and Management Systems Society’s Analytics Database (HADB) is a

predominant data source for research on HIT or EHR. HADB contains information about the adoption of

hundreds of HIT applications, including EHR, CPOE, CDSS, etc., in over four thousand U.S. hospitals.

HADB-based studies typically define and identify a set of HIT applications that are pertinent to the

research goals. Most of the main independent variables in Table 1 are derived from HADB. Many studies

further distinguish stages or capabilities of HIT or EHR implementation based on the adoption records in

HADB. For instance, in studying the effect of EHR adoption on quality of care, Jones et al. (2010)

determine EHR capability using four HIT applications, i.e., clinical data repository, electronic patient

record, CDSS, and CPOE. A hospital is said to adopt “advanced EHR” if the hospital adopts all four

applications, “basic EHR” if at least one, and “no EHR” if none. Dranove et al. (2012) also distinguish

basic and advanced EHR systems, but using a different set of applications as the criteria. Other than the

distinction between basic and advanced EHR, a number of studies try to mimic the HITECH MU criteria

by mapping them to similar applications in HADB (Appari et al. 2013; Hah and Bharadwaj 2012). We

will address the issues of such mapping in Section 3.1 when we discuss our construction of the MU

variable in this study. In addition to the fields provided by HADB, it is often necessary to include other

data sources so as to identify hospital characteristics and performance. This includes American Hospital

Association (AHA) Annual Survey Database, CMS Case Reports (CMS-CR), and CMS Hospital

Compare (CMS-HC) database.
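As an illustration, the Jones et al. (2010) classification rule described above can be sketched in a few lines; the application names below are descriptive labels for the four HIT applications, not actual HADB field names.

```python
# Sketch of the EHR-capability classification in Jones et al. (2010).
# The application names are illustrative labels, not HADB field names.
REQUIRED_APPS = {
    "clinical_data_repository",
    "electronic_patient_record",
    "cdss",   # clinical decision support system
    "cpoe",   # computerized physician order entry
}

def classify_ehr(adopted_apps):
    """Return the EHR capability level implied by a hospital's adopted applications."""
    adopted = REQUIRED_APPS & set(adopted_apps)
    if adopted == REQUIRED_APPS:
        return "advanced EHR"  # all four applications adopted
    if adopted:
        return "basic EHR"     # at least one of the four
    return "no EHR"            # none of the four
```

For example, a hospital that has adopted only CDSS and CPOE would be classified as having a basic EHR under this rule.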


For dependent variables, the two most popular ones among past studies are hospital operational

cost and process quality. The former is supplied by the CMS-CR data, while the latter is available from

the CMS-HC database.

While process quality of care and MU have appeared in a number of prior studies, these variables

were typically constructed using data from CMS-HC and HADB, respectively. We, instead, use two

new data sources to construct the main dependent and independent variables. For our main dependent

variable, process quality of care, we obtain healthcare performance measures from the Joint Commission

(JC). For our primary independent variable, MU, we use data from the Medicare EHR Incentive Program.

Following prior studies, we also use a number of other data sources to construct control variables. Section

3 provides an in-depth discussion of the datasets and variables in our study.

Data period. It is noteworthy from Table 1 that all the studies, even the most recent ones, are

based on data from 2010 or earlier. According to Jha et al. (2009), this was a time when the rate and degree

of EHR use were both low in U.S. hospitals. Specifically, during this period a comprehensive EHR system was

used in only 1.5% of U.S. hospitals, and just an additional 7.6% had a basic system. Since the U.S.

healthcare system has undergone dramatic policy changes since 2009, there is a significant practical and

scientific need for new data and new empirical analyses. To understand the impact of the HITECH Act

and the latest progress of MU among U.S. hospitals, we use data from around 2012 for our analyses.

Identification strategy and analysis. As discussed earlier, an important empirical challenge in

assessing the impact of EHR is endogeneity; we therefore review how prior literature addresses this

concern. Column 7 of Table 1 summarizes research designs used in each paper to address endogeneity

concerns. We can see that these studies employ various econometric strategies such as fixed effects

(Appari et al. 2013; Dranove et al. 2012; Miller and Tucker 2011; Furukawa et al. 2010; McCullough et

al. 2010), difference-in-differences (McCullough et al. 2013; Jones et al. 2010), instrumental variables

(Furukawa et al. 2010; Miller and Tucker 2011), and propensity adjustments (Dey et al. 2013; Appari et

al. 2012; Jones et al. 2010). In addition, the majority of studies in Table 1 employed panel data analysis,


because cross-sectional datasets “will not capture the impact of IT adoption if early adopters differ from

other hospitals along other quality-enhancing dimensions.” (Miller and Tucker 2011, p. 292)

Our analyses are based on a panel dataset. We exploit features of the Medicare EHR Incentive

Programs as an exogenous variation, and adopt a number of empirical strategies (difference-in-differences

and first-difference) to verify and ensure that our findings are robust. We also use propensity score

matching (Rosenbaum and Rubin 1983) to alleviate potential bias from treatment selection, i.e., whether

or not a hospital demonstrates MU in 2012. Full details of our identification strategy are discussed in

Section 4.
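As a minimal sketch of the difference-in-differences logic (not our actual estimation, which is detailed in Section 4), consider a two-group, two-period comparison; the quality numbers below are invented for illustration only.

```python
def did_estimate(means):
    """Two-group, two-period difference-in-differences estimator.

    `means` maps (group, period) -> mean quality score, with group in
    {"treated", "control"} (treated = attested MU in 2012) and period
    in {"pre", "post"}.
    """
    treated_change = means[("treated", "post")] - means[("treated", "pre")]
    control_change = means[("control", "post")] - means[("control", "pre")]
    return treated_change - control_change

# Illustrative numbers only, not estimates from this study:
quality = {
    ("treated", "pre"): 92.0, ("treated", "post"): 95.5,
    ("control", "pre"): 91.5, ("control", "post"): 93.0,
}
# did_estimate(quality) -> (95.5 - 92.0) - (93.0 - 91.5) = 2.0
```

The estimator differences out time-invariant hospital characteristics and common time shocks, which is why it is favored over cross-sectional comparisons in this literature.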

Main findings. The last column of Table 1 makes clear how inconclusive the findings on the effect

of EHR adoption have been in prior studies: 6 positive, 4 negative, and 4 mixed results. The 4 mixed results are

either because EHR had an effect on only a subset of measures or because the effects are significant only

under certain conditions. For instance, McCullough et al. (2010) find that the use of EHR and CPOE

significantly improved the use of vaccination and appropriate antibiotics in pneumonia patients, but for the

same population EHR had no effect on increasing smoking-cessation advice or on taking blood cultures

before antibiotics. Similarly, Furukawa (2011) finds that advanced EHR systems significantly improved

emergency department throughput, but basic EHR systems did not.

Our results from various estimators consistently show that attaining MU significantly improves

quality of care. Moreover, we also find that the magnitude of this effect varies by several hospital

characteristics, such as hospital size, hospital ownership, geographical region, and urban status of the

hospital location. We find that hospitals traditionally deemed to have weaker quality, e.g., small, non-

teaching, or rural hospitals, attained larger quality improvements than their counterparts.

2.2 Literature on Meaningful Use

Compared to adoption, meaningful use of EHR is a much more challenging goal for hospitals and

healthcare providers (Classen and Bates 2011). Some prior studies in Table 1 have examined “MU,” but

use less formal measurements to identify MU. For example, Appari et al. (2013) examine how MU


impacts process quality by mapping the HITECH MU criteria to HADB. The authors define five levels of

EHR capabilities in which the top two levels satisfy the functionality requirements in the MU criteria. The

authors note that (Appari et al. 2013, p. 358):

“While complete satisfaction of 2011 MU objectives requires fulfilling clinical and

administrative activities using EHR systems, here we measure only whether a hospital

system has the functional capabilities to meet the objectives as we have no data on

whether they actually accomplished the activities.”

Several other studies also use MU functionalities to define MU (e.g., McCullough et al. 2013;

Hah and Bharadwaj 2012). A recent systematic review by Jones et al. (2014) focuses on the effects of MU

on three outcomes: quality, safety and efficiency. The review includes a total of 236 studies published

from January 2010 to August 2013. The review also uses MU functionalities as a taxonomy to

characterize the literature. Jones et al. (2014) conclude that most of the studies focused on evaluating

CDSS and CPOE, and rarely addressed the other MU functionalities. By contrast, as we will discuss in

Section 3.1, our paper is one of the first to use a systematic and government-mandated public health

program to identify MU.

Finally and more broadly, our study also draws on and contributes to the long-standing literature

on the consequences of technology adoption and IT business value, of which healthcare IT is but one

example (Davis et al. 1989; Ajzen 1991; Attewell 1992; Brynjolfsson and Hitt 1996; Wejnert 2002;

Venkatesh et al. 2003; Tambe and Hitt 2012). Whereas a dominant variable of interest in this literature is

the adoption of technologies, we focus on the meaningful use of a technology and investigate how it

affects an important outcome.

3. Data

We integrate data from multiple sources. Consistent with prior research (Appari et al. 2013), we use the

Medicare provider number as a common identifier to link all hospital-level information. Table 2


summarizes the variables and their data sources, which we discuss in turn. Also consistent with prior

studies in related literature (see Section 2; e.g., Appari et al. 2013; McCullough et al. 2013; Furukawa et

al. 2010; Jones et al. 2012), we investigate non-federal acute care hospitals in the 50 states and the District of

Columbia.
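As a sketch of this linkage step, the merge can be expressed as an inner join keyed on the Medicare provider number; the provider numbers and field names below are hypothetical.

```python
def link_by_provider_number(*tables):
    """Inner-join hospital-level tables keyed by Medicare provider number.

    Each table maps provider_number -> dict of fields; only hospitals
    present in every source are retained. Field names are illustrative.
    """
    common = set(tables[0])
    for t in tables[1:]:
        common &= set(t)
    merged = {}
    for pid in common:
        row = {}
        for t in tables:
            row.update(t[pid])  # combine fields from each data source
        merged[pid] = row
    return merged

# Hypothetical records from two sources:
mu = {"100001": {"mu_2012": 1}, "100002": {"mu_2012": 0}}
jc = {"100001": {"quality": 95.2}, "100003": {"quality": 90.1}}
# Only provider 100001 appears in both sources and survives the join.
```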

3.1 Meaningful Use

An important difference between our study and those reviewed in Table 1 is how we construct our main

independent variable, MU, and its data period. We use data directly from the Medicare EHR Incentive

Program. This dataset is new and unique, and to the best of our knowledge, has not been used in any prior

empirical studies. The latest data, released in May 2014, covers the MU attestation records of U.S.

hospitals as of early 2014. This dataset reflects the most recent development of EHR adoption and

meaningful use in the United States.

The CMS EHR Incentive Programs website provides data about the programs and the recipients

of the incentive.3 The recipient data reveals in which year a hospital demonstrated that it had met the MU

criteria. We look only at the records from the Medicare EHR Incentive Program but not from the

Medicaid program for two reasons. The first reason is data availability. As of the time we conducted this

study, hospital-level information from the Medicaid EHR Incentive Program had not been released. This

is presumably because the Medicaid program is run locally by each state agency, which creates

difficulties in aggregating detailed information from multiple sources. In contrast, the Medicare Incentive

Program is run solely by CMS so that information is centralized and more accessible. The second reason

that we use only the Medicare data is its representativeness. The latest statistics from CMS show that

96.7 percent of hospitals that successfully attested MU before January 2014 received incentive payments

from Medicare, and 94.1 percent of these hospitals also received payments from Medicaid. Since most

hospitals register and obtain incentive payments from both programs, the hospitals in the Medicare

3 http://www.cms.gov/EHRIncentivePrograms/


program should mostly overlap with those in the Medicaid program. We therefore focus on the data from

the Medicare program to study the quality effect of MU.4

Using data directly from the Medicare EHR Incentive Program allows us to mitigate three

important shortcomings in prior works that rely on HADB or AHA Healthcare IT Database. First, instead

of mapping the MU criteria to the records of HIT applications in HADB, our approach is much more

direct and objective. There is no information loss or measurement error from indirectly representing

MU using secondary data sources. Second, existing HIT survey databases rely on self-reported data. To

our knowledge there is no data auditing process to verify the correctness of the data. In contrast, there are

pre- and post-payment audits in the Medicare EHR Incentive Program to ensure the accuracy of the

attainment of MU objectives, and hence the integrity of data. Third and finally, the MU criteria comprise

not only what EHR functionalities a hospital possesses but also how they are used. The simple presence

of HIT in a hospital does not directly imply meaningful use of the technology (McCullough et al. 2013).

As such, instead of merely requiring that a hospital have an EHR functionality to record a patient's problem list, the

guidelines of the Medicare EHR Incentive Program require the following criterion, among others, to be met

before the hospital can be considered to have achieved MU:

“More than 80% of all unique patients admitted to the eligible hospital or critical access

hospital have at least one entry or an indication that no problems are known for the

patient recorded as structured data.” (Emphasis added)5
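To make the flavor of such a criterion concrete, the check could be sketched as follows; the record layout is hypothetical, and actual attestation relies on reports from certified EHR systems rather than a script like this.

```python
def meets_problem_list_criterion(patients, threshold=0.80):
    """Check the problem-list criterion quoted above: more than `threshold`
    of unique admitted patients have at least one problem-list entry, or an
    indication of no known problems, recorded as structured data.

    `patients` maps patient_id -> count of such structured entries
    (a hypothetical record layout).
    """
    if not patients:
        return False
    documented = sum(1 for n in patients.values() if n >= 1)
    # "More than 80%" is a strict inequality, so exactly 80% fails.
    return documented / len(patients) > threshold
```

Note that the criterion constrains actual documentation behavior, not merely the presence of a problem-list feature in the EHR.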

Demonstration of actual use of EHR and HIT capabilities is critical in understanding and

explaining the impact of HIT or EHR (Devaraj and Kohli 2003; Kane and Alavi 2008). However, proof or

demonstration of use is typically not recorded in existing HIT survey databases, and hence imposes a

4 Although the Medicare patient population is older than the regular patient population, this has no bearing on our

study. This is because we are looking at Medicare-certified providers rather than Medicare patients. Given that Medicare is the largest payer in the U.S., almost all hospitals, especially the acute care hospitals we study, accept Medicare patients.

5 http://cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/MU_Stage1_ReqOverview.pdf


critical research limitation in HIT evaluation. Since system usage is an integral part in determining

whether MU is met in the Medicare EHR Incentive Program, our MU variable naturally captures this

missing dimension. These three important shortcomings were often neglected in prior studies, and can

potentially explain the mixed findings on the effect of EHR systems in the literature (Agarwal et al. 2010;

Kohli and Devaraj 2003). The newly released data from the Medicare EHR incentive program allows us

to circumvent these issues, and provide new empirical evidence on the effect of MU on quality of care.

3.2 Quality of Care

There are several sources which provide hospital quality data, including CMS-HC, JC, National

Ambulatory Medical Care Survey (NAMCS), and National Hospital Ambulatory Medical Care Survey

(NHAMCS). The first two put more emphasis on inpatient settings, whereas the last two emphasize outpatient settings (Ma

and Stafford 2005).

et al. 2010) when deriving quality measures for hospitals. In other words, quality of care is only

considered high if a hospital follows the processes and interventions that will lead to improved outcomes,

as suggested by clinical evidence. It is noteworthy that the relationships among different quality metrics

(e.g. process quality, patient satisfaction, 30-day readmission rate, and in-hospital mortality) are weak or

inconsistent (Shwartz et al. 2011; Jha et al. 2007). For instance, in a prospective cohort study with a

nationally representative sample (N=51,946; panel data), Fenton et al. (2012) find that higher patient

satisfaction was surprisingly associated with greater total expenditures and higher mortality rate (both

significant at the 0.05 level). Although process quality does not necessarily reduce 30-day mortality or

readmission, it has been a primary quality metric used in prior studies (see Table 1) because it is

actionable, targets long-term benefits, and requires less risk-adjustment (Rubin et al. 2001). By the same

token, Chatterjee and Joynt (2014) argue that:

“Although process measures remain minimally correlated with outcomes and may

represent clinical concepts that are somewhat inaccessible to patients, they do have


independent value as a marker of a hospital’s ability to provide widely accepted,

guideline-based clinical care.”

We obtain hospital quality measures from the Joint Commission, formerly known as the Joint

Commission on Accreditation of Healthcare Organizations. The Joint Commission is a not-for-profit

organization that aims to promote care quality and safety. It is critical for a hospital to be accredited by

the Joint Commission in order to obtain a service license and to qualify as a Medicare certified provider

(Brennan 1998). The Joint Commission has long been developing metrics for quality measurement and

improvement.6 There are currently 10 core measure sets, categorized by conditions such as heart attack,

heart failure, pneumonia, surgical care infection prevention, among others. In each core measure set, there

are a number of measures specific to the corresponding medical condition. Examples of the quality

measures are as follows:

• Percentage of acute myocardial infarction patients with beta-blocker prescribed at discharge

• Percentage of heart failure patients with discharge instructions

• Percentage of adult patients given smoking cessation advice/counseling

These metrics are largely aligned with the process quality measures in the CMS-HC dataset that

are more commonly used in prior research (see Table 1). We find that the Joint Commission quality

measures are more comprehensive than the process measures in CMS-HC, since many quality measures

are tracked by the former but not by the latter. In addition to their comprehensiveness, quality measures

from the Joint Commission are updated quarterly, whereas those from CMS-HC are updated only

annually. We therefore use the quality measures from the Joint Commission.7

Since the JC quality metric has multiple core measure sets, and each core measure set can contain

multiple specific measures, we derive a composite quality score to represent the overall process quality of

6 http://www.jointcommission.org/performance_measurement.aspx

7 In fact, our identification strategy would not have been possible without quarterly quality data. As of the time

of writing, the latest process-of-care quality metric and patient experience metric in the CMS-HC dataset are available for April 2012 to March 2013. Similarly, the outcome-of-care quality metrics in CMS-HC, i.e., 30-day readmission/mortality rates, are available from July 2010 to June 2012. As such, none of these quality metrics is recent enough to permit a clean empirical identification. See Section 4.2 for the identification strategy in this study.


a hospital. The interpretation of the composite score is intuitive, useful, and quantitative: to what degree

(in percentage) a hospital follows guideline recommendations. Consistent with prior studies (Appari

et al. 2013; Chen et al. 2010), the composite quality score is derived as an average of all specific measures

weighted by the number of eligible patients in each measurement. In constructing the quality score, we

exclude measures with fewer than five eligible patients in a hospital in order to ensure data reliability.8
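Concretely, the patient-weighted composite just described can be computed as in the following sketch, where the input layout is illustrative:

```python
def composite_quality_score(measures, min_eligible=5):
    """Patient-weighted composite of process-quality measures.

    `measures` is a list of (pct_compliant, n_eligible) pairs, one per
    specific JC measure. Measures with fewer than `min_eligible` eligible
    patients are excluded for reliability. Returns a percentage, or None
    if no measure qualifies. (The data layout is illustrative.)
    """
    kept = [(p, n) for p, n in measures if n >= min_eligible]
    total = sum(n for _, n in kept)
    if total == 0:
        return None
    # Average of measure percentages, weighted by eligible-patient counts.
    return sum(p * n for p, n in kept) / total

# Example: two reliable measures and one excluded small-sample measure.
# composite_quality_score([(90.0, 100), (80.0, 50), (50.0, 3)])
#   -> (90*100 + 80*50) / 150 = 86.67 (approximately)
```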

3.3 Control Variables

Prior studies on the effect of EHR typically include a set of control variables to capture the heterogeneity

among hospitals (Angst et al. 2010; Appari et al. 2013; Devaraj and Kohli 2003). The HADB contains

information about the year each hospital was formed, which allows us to calculate hospital age as of 2012. For

hospitals whose age information is missing, we manually performed search engine queries and

successfully identified about 50 of them by looking up the “About Us” or similar pages on hospital

websites.

A second important control is hospital size. The size of hospitals has been shown to be positively

correlated with EHR adoption as well as quality of care (DesRoches et al. 2012). We operationalize

hospital size using the total number of beds in the CMS-CR data. We also use the CMS-CR data to

capture various hospital throughput measures: annual (Medicare) discharges and inpatient days (Miller and

Tucker 2011). To allow more intuitive interpretation of the effects of these throughput measures, we

rescale the original values to the unit of a thousand before entering them in our models.

To differentiate the general health condition of the served patient population, we use the transfer-

adjusted case mix index (TACMI) from the CMS Inpatient Prospective Payment System (IPPS). The

CMS-HC dataset provides information regarding the ownership of a hospital, which can be broadly categorized as government, non-profit, or proprietary.

We control for the teaching status of hospitals. Specifically, teaching status is a dichotomous

variable determined by whether the hospital is a member of the Association of American Medical

8 Our results are qualitatively similar without imposing this cut off threshold.


Colleges. We further identify whether or not a hospital is located in a rural area by mapping its zip code

to the Rural-Urban Commuting Area (RUCA) version 2.0 taxonomy (Angst et al. 2012; Hall et al. 2006).

Finally, we control for the region of a hospital by matching its zip code to one of the four census regions:

Midwest, Northeast, South, or West.

4. Empirical Strategy

As discussed earlier, a key challenge to identifying the quality impact of EHR adoption or MU is

endogeneity. This section describes how we address endogeneity concerns to approximate a randomized

experiment from observational data, in order to learn about causal relationships (Angrist and Krueger

1999). We begin with a brief overview of the Medicare EHR Incentive Program, followed by our

identification strategy motivated by some unique features of this program.

4.1 Medicare EHR Incentive Program for the Eligible Hospitals

Under the auspices of HITECH legislation, the goal of the CMS EHR Incentive Programs is to promote

meaningful use of EHR through financial incentives. To receive the incentive payment, a hospital must

achieve 14 core objectives as well as 5 out of 10 menu objectives (Table 3), each accompanied by a

very specific measure (Blumenthal and Tavenner 2010). Since the inception of the programs in 2011,

thousands of hospitals have achieved the MU objectives. Full details of the programs are available through the program website (Footnote 3), but here we highlight the incentive features that help identify the

effect of meaningful use on quality of care.

Hospitals must demonstrate MU by 2015 at the latest, or they will be financially penalized. If

they demonstrate MU before 2015, they will receive annual incentive payments from the Medicare EHR

Incentive Program. The amount of these annual payments is determined by multiplying the following

three factors: initial amount, Medicare share, and transfer factor. The initial amount is at least $2 million,

and higher if the hospital discharges more than a specified number of patients. The Medicare share is, roughly, the

fraction of Medicare inpatient-bed-days in total inpatient-bed-days in the hospital’s fiscal year. Most


importantly, the transfer factor varies by payment year and the time at which the hospital demonstrates MU

(Table 4). A hospital can receive these annual incentive payments for up to four years, with the amount decreasing each year according to the transfer factor. If a hospital demonstrates MU in 2013 or earlier, it can receive four years of payments, with the transfer factor declining each year from 1 to 0.75, 0.5, and 0.25. If a hospital demonstrates MU in 2014, it can receive only three years of payments, with the transfer factor declining each year from 0.75 to 0.5 and 0.25. Starting in 2015, hospitals that are not meaningfully using

EHR technology will be penalized by a mandated Medicare payment adjustment, in which they will not

receive the full amount of Medicare reimbursements. The degree of payment adjustment will double in

2016 and triple in 2017. The transfer factor and the cumulative penalties therefore incentivize hospitals that have not yet met the MU criteria to adopt and meaningfully use EHR sooner rather than later.9
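The payment schedule described above can be sketched as follows (our illustration; the 40% Medicare share in the example is a hypothetical input, while the $2 million base and the transfer-factor schedules come from the text):

```python
def transfer_factors(first_mu_year):
    """Transfer-factor schedule of the Medicare EHR Incentive Program.

    Demonstrating MU in 2013 or earlier yields four payment years
    (factors 1, 0.75, 0.5, 0.25); demonstrating in 2014 yields three
    (0.75, 0.5, 0.25), per the schedule described in the text.
    """
    if first_mu_year <= 2013:
        return [1.0, 0.75, 0.5, 0.25]
    elif first_mu_year == 2014:
        return [0.75, 0.5, 0.25]
    return []  # 2015 onward: payment adjustments (penalties) instead

def annual_payments(initial_amount, medicare_share, first_mu_year):
    # Annual payment = initial amount x Medicare share x transfer factor.
    return [initial_amount * medicare_share * f
            for f in transfer_factors(first_mu_year)]

# A hospital with the $2M base amount and a 40% Medicare share attesting in 2012:
annual_payments(2_000_000, 0.40, 2012)
# -> [800000.0, 600000.0, 400000.0, 200000.0]
```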

4.2 Identification Strategy

Although the financial incentive is the same whether an eligible hospital enters the program and attests MU anytime between 2011 and 2013 (i.e., four years of payments), we assume that hospitals would proceed to attest

achieving MU and obtain the incentive payments once they have met the criteria. This assumption is

consistent with a basic premise in the accounting and finance literature: income or earnings in the present

is generally preferable to earnings in the future (Feltham and Ohlson 1999; Ohlson 1995), especially

given the low cost of the attestation procedure. This is also a realistic assumption given the financial

burden to hospitals of acquiring and implementing an EHR system and the financial subsidies available

for achievement of MU. Therefore, for the hospitals that began to attest MU in 2012, we assume that they

did not meet the MU criteria in 2011 or earlier, but did achieve MU in 2012 and later.

9 Payments from the Medicare EHR Incentive Program represent a nontrivial amount of incoming cash flow for

the hospitals. From our data, we see that in the first three years of the program, the median annual payment to hospitals is $1.4 million (with the highest being $7.2 million). To put this number in context: one source (http://www.beckershospitalreview.com/finance/13-statistics-on-hospital-profit-and-revenue-in-2011.html) estimates that the average profit per hospital in 2011 is around $10.7 million. Alternatively, data provided by HADB show that in 2011, the median difference between revenue and operating cost is slightly below $1 million. In either case, the payment incentive from the Medicare EHR Incentive Program is substantial.


For the purposes of identification and estimation, we consider MU as a dichotomous status

regarding whether or not a hospital reaches the MU regulation criteria. To identify the quality effect of

MU, we obtain longitudinal MU attainment records from the Medicare EHR Incentive Program, which

provides data from 2011 to early 2014 (Section 3.1). We construct a panel dataset from this and other data

sources, and employ a difference-in-differences (DID) identification strategy to tease out the quality

effect of attaining MU. We define our treatment group as the hospitals that attained MU in 2012, but not

before. A key and novel component in our identification strategy is that we consider two control groups:

hospitals that attained MU in 2011 at the onset of the EHR Incentive Program (henceforth denoted by

AlwaysMU) and hospitals that had not yet achieved MU by the end of 2012 (henceforth denoted by

NeverMU). The AlwaysMU control group comprises hospitals that had reached MU prior to the implementation of the incentive program; the program therefore had little or no impact on their MU status. Using these hospitals as a comparison group allows us to estimate the effect of MU on quality of care for hospitals that sped up their progress toward MU status due to the incentive program.10 On

the other hand, the NeverMU control group includes hospitals that have not yet reached the MU status as

of the end of 2012. Since these hospitals are likely to be in the process of speeding up their progress

toward MU, using them as an alternative control group provides a more conservative (less optimistic)

estimate of the effect on quality of care. In other words, although hospitals' decisions on expending resources to reach MU status may be endogenous, these two distinct but complementary control groups allow us to obtain robust upper and lower bounds for the unbiased MU effect: for the

10 One may argue that hospitals in the AlwaysMU group may also have responded to the legislation and sped up their MU status. While plausible, this is unlikely to be a first-order issue due to the limited amount of time between the law's passage and the period we study, and the length of time it takes hospitals to implement EHR and attain MU. HITECH was passed by Congress in 2009, but the detailed mandates in the Incentive Program were not announced until August 2010. If a hospital did not have EHR at that time, it would have taken about two years to implement it (Miller and Tucker 2011). If the hospital already had EHR, it would have taken about another three years (median) to move from adoption to MU. These numbers were obtained by following the approach in Appari et al. (2013): for each hospital in the treatment group, we use HADB (2006-2011) to identify the time difference between implementation of all MU functionalities and attestation of MU. While this calculation is only an approximation, these numbers suggest that hospitals in AlwaysMU can be reasonably expected to have reached MU prior to the announcement of the incentive program. Further support of this argument can be seen in Figure 1 later in the paper: only about 18% of the hospitals demonstrated MU in 2011.


unobservable or omitted variables that may confound our MU estimate, their mean effect is likely to be

monotonic with the timing of EHR adoption and MU. Using two control groups therefore provides the

upper and lower bounds of the estimate.11

More specifically, for each hospital in the treatment and control groups we construct a two-period

panel with a pre-treatment period quality score taken from the fourth quarter of 2011 and a post-treatment

period quality score taken from the first quarter of 2013. With this panel data set-up, the average

treatment effect of attaining MU is the difference between the pre-post, within-subjects differences of the

treatment and control groups. We estimate the following model:

Qualityit = β0 + β1 TreatmentGroupi + β2 PostPeriodt + β3 (TreatmentGroupi × PostPeriodt) + ci + δ′Xit + uit    (1)

Subscripts i (= 1…N) and t (= 1 or 2) index individual hospitals and time periods, respectively.

Qualityit represents the quality score of hospital i at time t. TreatmentGroupi and PostPeriodt are

indicators for the treatment group and the post-period respectively. TreatmentGroupi is 1 if hospital i is in

the treatment group; 0 otherwise. PostPeriodt is 1 if time t=2, i.e., the post-treatment period; and 0 if time

t=1, i.e., the pre-treatment period. Parameter ci absorbs hospital-level, time-invariant unobserved effects.

Xit is a vector of control variables that we introduced in Section 3.3. Finally, uit are the idiosyncratic errors, which vary across i and t.
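To make the mechanics of model (1) concrete, the following sketch (ours; the data are hypothetical) computes the DID estimate by hand: in the saturated two-group, two-period case without covariates, the OLS coefficient on TreatmentGroupi × PostPeriodt equals the difference between the two groups' pre-to-post changes in mean quality:

```python
def did_estimate(rows):
    """Difference-in-differences estimate of beta_3 in model (1).

    `rows` are dicts with keys `treated` (0/1), `post` (0/1), and
    `quality`. Without covariates, the interaction coefficient equals
    the treated group's mean pre-to-post change minus the control
    group's mean pre-to-post change.
    """
    def mean(treated, post):
        vals = [r["quality"] for r in rows
                if r["treated"] == treated and r["post"] == post]
        return sum(vals) / len(vals)
    return (mean(1, 1) - mean(1, 0)) - (mean(0, 1) - mean(0, 0))

rows = [
    {"treated": 1, "post": 0, "quality": 95.0},
    {"treated": 1, "post": 1, "quality": 95.9},  # treated gain: +0.9
    {"treated": 0, "post": 0, "quality": 96.0},
    {"treated": 0, "post": 1, "quality": 96.5},  # control gain: +0.5
]
did_estimate(rows)  # 0.9 - 0.5 = 0.4, up to floating-point rounding
```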

Model (1) can be estimated by either the fixed effects estimator or by the random effects

estimator (Wooldridge 2002). While we conduct and report both, we note that in a two-period panel, a

simple yet effective way to estimate fixed effects models is through a first-differencing transformation:

ΔQualityit = α0 + α1 Δ(TreatmentGroupi × PostPeriodt) + φ′ΔXit + Δuit    (2)

where ΔQualityit = Qualityi2 – Qualityi1, ΔXit = Xi2 – Xi1, and Δuit = ui2 – ui1. Since the hospital-level fixed effects, i.e., ci, are assumed to be time invariant, they cancel out after first differencing. The first-difference

11 We also considered using a traditional instrument variable approach where the instruments for MU status is

the MU saturation rate (percentage of hospitals that had reached MU status prior to a focal hospital, within a 25 mile radius of the hospital), since hospitals are more likely to reach MU status due to competitive forces. We obtained qualitatively similar results.


(FD) model yields estimates identical to those of the fixed effects model, but is easier to implement. Therefore, we will proceed with our analyses using the random-effects DID model and the FD model. With the two empirical models, the main interest of our analyses is the estimate of the coefficient on TreatmentGroupi × PostPeriodt. A

positive and significant estimate will support the hypothesis that meaningful use of EHR improves

hospitals’ quality of care.
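The cancellation of ci under first differencing can be illustrated with a small sketch (ours, with hypothetical data): in a two-period panel, Δ(TreatmentGroupi × PostPeriodt) reduces to the treatment indicator, so the FD estimate without covariates is simply the mean quality change of treated hospitals minus that of controls:

```python
def fd_estimate(panel):
    """First-difference estimate of alpha_1 in model (2).

    `panel` maps hospital id -> (treated, quality_pre, quality_post).
    Differencing removes the time-invariant c_i, leaving only the
    within-hospital quality change.
    """
    diffs = {1: [], 0: []}
    for treated, q1, q2 in panel.values():
        diffs[treated].append(q2 - q1)  # c_i cancels in this difference
    return (sum(diffs[1]) / len(diffs[1])) - (sum(diffs[0]) / len(diffs[0]))

panel = {
    "A": (1, 95.0, 95.9),
    "B": (1, 97.0, 97.5),
    "C": (0, 96.0, 96.3),
    "D": (0, 94.0, 94.5),
}
fd_estimate(panel)  # treated mean gain 0.7 minus control mean gain 0.4 = 0.3
```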

4.3 Validating the DID Identification Assumption

A critical assumption in the DID identification strategy is the parallel historical trends of the dependent

variable between treatment and control groups (Bertrand et al. 2004). That is, absent the treatment (in our case, the implementation of the incentive program), the treatment and control groups should demonstrate

similar trends over time in terms of the outcome variable. This assumption is not trivial since the three

hospital groups may present distinct characteristics. Since the historical JC hospital quality data are not

publicly available, we use the quality measures from the CMS-HC dataset as a proxy. As mentioned in

Section 3.2, the quality measures in JC and CMS-HC are largely aligned, but the former are updated quarterly, whereas those in the latter are updated annually. Figure 1(a) shows the annual quality trends of

the treatment and control groups from 2010 to 2012. While there are no signs that the DID assumption is

violated for the NeverMU control group, the historical quality trends between the treatment group and the AlwaysMU control group are not perfectly parallel (albeit only slightly so).

We address this issue by using propensity score matching (Rosenbaum and Rubin 1983; Xue et

al. 2011; Brynjolfsson et al. 2011; Mithas and Krishnan 2009). Propensity score matching matches

observations in treatment and control groups according to their propensity to receive the treatment as a function of their observable traits. In our context, for each hospital in the treatment group, propensity score matching identifies the most comparable hospital in the control group. This helps exclude hospitals that are vastly different from the treatment group in their unobservable qualities. For the matching process, in addition to a broad set of hospital characteristics, we also include quality changes from

2010 to 2011 and from 2011 to 2012 when calculating propensity scores. Figure 1(b) shows that after


matching, the three hospital groups present parallel historical quality trends. We then conduct DID between the treatment group and the matched subset of control group hospitals. We report results both without and with the matching procedure.
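The matching in this paper uses a genetic search algorithm; as a simplified illustration of the general idea only, the sketch below (ours, with hypothetical scores) performs greedy one-nearest-neighbor matching on precomputed propensity scores:

```python
def nearest_neighbor_match(treated_scores, control_scores):
    """Match each treated unit to the closest control unit (with replacement).

    Both arguments map unit id -> estimated propensity score in (0, 1).
    Returns {treated_id: control_id}. This is a simplified stand-in for
    the genetic matching actually used in the paper.
    """
    matches = {}
    for t_id, t_score in treated_scores.items():
        # Pick the control hospital whose score is closest to this one's.
        matches[t_id] = min(control_scores,
                            key=lambda c_id: abs(control_scores[c_id] - t_score))
    return matches

treated = {"H1": 0.62, "H2": 0.35}
controls = {"C1": 0.60, "C2": 0.40, "C3": 0.10}
nearest_neighbor_match(treated, controls)  # -> {"H1": "C1", "H2": "C2"}
```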

5. Results

The data from the Medicare EHR Incentive Program show a significant uptake of EHR and MU among

acute care hospitals from 2011 to 2013 (Figure 2). In 2011, the average rate of MU attainment was 18%

across states. The percentage increased to 54% in 2012 and 86% in 2013. As of November 2014, 4,283

hospitals have achieved the Stage 1 MU. These statistics alleviate the concern that only “good” hospitals

participate in the program and, at the same time, suggest that the incentives are so substantial that EHR adoption and MU in US acute care hospitals accelerated from 18% to 86% in just two years.

Table 5 shows some key summary statistics of our dataset. There are 2,344 hospitals in our

dataset, of which 914 belong to the treatment group, 483 to the AlwaysMU control group, and 947 to the

NeverMU control group.12 Table 5 uncovers some other interesting patterns across these groups. When

compared to hospitals in the AlwaysMU control group, hospitals in the treatment group had significantly

lower case mix, and were more likely to be located in rural areas. When compared to those in the NeverMU

control group, treatment hospitals had significantly higher throughputs, measured by both the number of

inpatient days as well as the number of discharges. These differences could simultaneously correlate with

the quality of care and treatment assignment (i.e., acquiring MU in 2012). Therefore, as a robustness

check we employ propensity score matching to identify a subset of hospitals within the respective control

group that are as similar as empirically possible to those in the treatment group. Additional robustness

checks include quantile analyses and censored regression analyses. To reveal greater policy and

managerial insights, we a) construct a continuous MU variable and investigate the relation between the

12 Based on the CMS-HC dataset, there were 4,860 hospitals in the US by the end of 2011. Among them, 3,459

were acute care hospitals. We excluded federal, tribal, and physician-owned hospitals (n=231) and hospitals that were located outside the 50 states and DC, e.g., Guam, Virgin Islands, etc. (n=55). Finally, hospitals with missing values in any variables of our models were deleted listwise.


degree of MU and the degree of quality improvement, and b) conduct a stratification analysis to reveal the

potentially heterogeneous effect of MU among different types of hospitals.

5.1 Main Results

Figure 3 presents the mean quality changes among the three hospital groups from the pre-treatment period

to the post-treatment period. We can see that compared to either of the two control groups, the treatment

group exhibits significantly greater quality improvement from the pre-treatment period to the post-

treatment period. To further examine the effect, Table 6 summarizes our estimations across eight different

model setups, based on the choice of estimator, the choice of control group, and whether or not to include

control variables. These results consistently show that meaningful use of EHR has a significant and

positive effect on quality of care. We note that the random-effects DID estimator and the FD estimator

yield highly consistent estimates on the quality effect of MU. The quality effect of MU ranges roughly

between 0.32 and 0.47 across different models. The incremental gain is consistent with findings in Appari

et al. (2013).

To better understand the size of this effect, we illustrate it in the context of an important indicator

of care quality: hospital readmission. Readmission is an important problem in healthcare because it

signifies poor quality of care and generates very high costs (Jencks et al. 2009; Bardhan et al. 2011). The

CMS-HC dataset shows an average 30-day hospital-wide readmission rate of 16% (approximately one in

six) in acute care hospitals at the end of 2011. Our data show that a 0.4-point quality improvement roughly translates to a 0.14 percentage point reduction in the readmission rate.13 With over 20 million annual inpatient discharges in U.S. hospitals and the estimated cost of $7,400 per readmission (Friedman and Basu 2004), the 0.14%

13 Prior studies show that the relation between process quality and readmission rate is insignificant (Shwartz et

al. 2011). One potential explanation is that hospitals with high process quality are more likely to attract complex patient cases, which then incur higher readmission rates. We use TACMI to address this issue, which is an index used to describe the complexity of a hospital’s overall patient base. Our derivation is as follows. We categorize hospitals by whether their TACMI values are above or below the median TACMI value. The mean quality scores for the high and low TACMI groups are 98.08% and 96.94%, respectively, in the pre-treatment period. In the same period, the high TACMI group had a mean 30-day hospital-wide readmission rate of 15.79% and low TACMI hospitals 16.20%. Based on the above quality-readmission relationship, a 0.4% quality improvement from MU can be translated to a 0.14% reduction in the readmission rate [-0.14 ≈ 0.4 × (15.79−16.20) / (98.08−96.94)].


reduction in readmission rate represents up to 28,000 fewer readmission cases and up to $207.2 million

cost savings per year. While the magnitude of this effect may not seem striking when compared to the

overall healthcare expenditure, it is still not trivial. More importantly, it indicates that the effect of MU on

healthcare quality improvement is indeed in the right direction, even for the first stage of MU over the

short period of time for which we have data.
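The back-of-envelope translation in the text and footnote 13 can be reproduced as follows (our sketch; the inputs are the figures quoted in the paper, and the paper rounds the implied rate to 0.14%, giving 28,000 cases and $207.2 million):

```python
quality_gain = 0.4                 # MU effect on the quality score (points)
high = (98.08, 15.79)              # (mean quality %, readmission %) high-TACMI group
low = (96.94, 16.20)               # (mean quality %, readmission %) low-TACMI group

# Implied readmission change per quality point, scaled by the MU effect.
readmit_drop = quality_gain * (high[1] - low[1]) / (high[0] - low[0])
# readmit_drop is about -0.144 percentage points (rounded to 0.14 in the text).

discharges = 20_000_000            # annual US inpatient discharges (approx.)
cost_per_readmission = 7_400       # Friedman and Basu (2004)
fewer_readmissions = discharges * abs(readmit_drop) / 100
savings = fewer_readmissions * cost_per_readmission
# fewer_readmissions is on the order of 28,000; savings on the order of $210M.
```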

5.2 Propensity Score Matching

We conduct propensity score matching to address two issues: the significant differences in observable hospital characteristics among hospital groups, and the imperfectly parallel historical quality trends of the three hospital groups. We use the Matching package in R, which optimizes covariate

balance through a genetic search algorithm (Sekhon 2011). Each treatment hospital is matched with three

hospitals in the AlwaysMU (and subsequently NeverMU) control group. We then apply the empirical models to the matched data to examine the robustness of the prior findings, so as to derive a more

conservative estimate of the impact of MU against these two control groups. Table 7 shows the results

from the matched samples. Across different models, the estimates of TreatmentGroupi × PostPeriodt

remain positive and significant, suggesting the robustness of the MU effect in our main results.

5.3 Other Robustness Checks

Quantile Analysis

Prior research has shown that the ceiling effect of healthcare quality can be an important issue in

health IT research because it affects our interpretation of the effect of health IT (Jones et al. 2010). As

pointed out by Jones et al. (2010), one unit improvement in quality score from 95% to 96% is

considerably more difficult than a one-unit improvement below that level, say from 70% to 71%. Quantile regression estimates the treatment effect at different quantiles, in particular the median, so as to overcome the impact of the ceiling effect (Koenker and Bassett 1978). It can hence be considered a robustness check for our earlier main results. Table 8 presents the results from the quantile analysis using the


DID specification. When using AlwaysMU as the control group, the effect is significant at the median.

When using NeverMU as the control group, the effect is significant at the median and the lower quartile.

Table 9 further presents the results using the FD specification. Note that due to the first differencing

transformation, the lower quantiles show a greater MU effect than the upper quantiles. Across all these

specifications, however, we see that MU has a positive and statistically significant impact on quality of

care at the median.
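As an illustration of the FD quantile specification (ours, not the paper's code; data are hypothetical): with a single binary regressor, the quantile-regression slope at quantile τ equals the τ-quantile of the outcome among treated units minus that among controls, so the effect can be computed directly from empirical quantiles:

```python
def quantile(values, tau):
    """Empirical tau-quantile with linear interpolation between order statistics."""
    xs = sorted(values)
    pos = tau * (len(xs) - 1)
    lo = int(pos)
    frac = pos - lo
    return xs[lo] if frac == 0 else xs[lo] * (1 - frac) + xs[lo + 1] * frac

def quantile_fd_effect(treated_diffs, control_diffs, tau=0.5):
    # Quantile-regression slope on a binary treatment indicator:
    # tau-quantile of treated outcomes minus tau-quantile of controls.
    return quantile(treated_diffs, tau) - quantile(control_diffs, tau)

treated = [0.1, 0.4, 0.9, 1.2]   # hypothetical quality changes, treated
control = [-0.2, 0.1, 0.3, 0.6]  # hypothetical quality changes, control
quantile_fd_effect(treated, control, tau=0.5)  # median effect: 0.65 - 0.2 = 0.45
```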

Censored Regression Analysis

Another empirical issue is data censoring; that is, our quality metric is strictly bounded between 0 and

100. We found that a number of hospitals attain the maximum quality score. Specifically, in the pre-

treatment period, 37 treatment hospitals, 22 AlwaysMU hospitals, and 35 NeverMU hospitals have the

maximum quality score (100). In the post-treatment period, the numbers are 60, 47, and 48, respectively.

These top-censored observations may lead to bias in our OLS estimations. To address this, we use a Tobit

model to estimate the effect of MU under the DID specification (Tobin 1958). From Table 10 we find that the effect of MU remains positive and significant, with coefficients similar to those in the main results. This suggests that the censored observations do not have a strong impact on our prior estimations.
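A minimal sketch of the upper-censored Tobit log-likelihood (our illustrative implementation, not the estimation code used in the paper; maximizing it numerically over the parameters yields the Tobit estimates):

```python
import math

def tobit_loglik(beta0, beta1, sigma, xs, ys, upper=100.0):
    """Log-likelihood of the latent model y* = beta0 + beta1*x + e,
    e ~ N(0, sigma^2), where we observe y = min(y*, upper)."""
    def norm_pdf(z):
        return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    def norm_cdf(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    ll = 0.0
    for x, y in zip(xs, ys):
        mu = beta0 + beta1 * x
        if y >= upper:
            # Censored observation: we only know the latent score >= upper.
            ll += math.log(1.0 - norm_cdf((upper - mu) / sigma))
        else:
            # Uncensored observation: the usual normal density term.
            ll += math.log(norm_pdf((y - mu) / sigma) / sigma)
    return ll

# Toy data: x is the DID interaction term; one score is top-coded at 100.
xs = [0, 1, 0, 1]
ys = [96.0, 100.0, 95.0, 99.0]
tobit_loglik(95.5, 3.0, 1.0, xs, ys)  # finite log-likelihood to maximize
```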

5.4 Continuous MU Variable

All the analyses we presented so far consider MU as a dichotomous status: a hospital either attained MU or did not. Although this is true from the perspective of the Medicare EHR Incentive Program, and

indeed provided a useful metric and specific goal, it is intuitive to ask if a greater degree of MU could

lead to a greater degree of quality improvement, provided that the hospital has reached the minimum

requirements specified by the MU regulation. To answer this question, we look into the treatment group

and the AlwaysMU control group. These hospitals had achieved MU, and the data from the Medicare

EHR Incentive Program contains information about these hospitals’ performance on the core and the

menu MU objectives (see Section 4.1). We examine different ways to construct the continuous MU

variable. We first consider two scenarios: one with only the core measures and the other with both the


core and the menu measures. For each scenario, we then calculate the average as well as the product of

the measures. Table 11 shows the results from our analysis of the continuous MU variable. Our

estimation indicates that a higher degree of MU indeed provides a greater improvement in quality of care.

This finding is consistent with the early work by Devaraj and Kohli (2003) and reaffirms the importance

of measuring and factoring “actual use” in studying the business impacts of IT.
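The four variants of the continuous MU variable described above can be sketched as follows (our hypothetical construction; the measure-level rates in the example are illustrative):

```python
def continuous_mu(core, menu=None, method="average"):
    """Summarize measure-level MU performance rates (0-1) into one score.

    Covers the four variants in the text: average or product of the
    rates, over core measures only or core plus menu measures.
    """
    rates = list(core) + (list(menu) if menu else [])
    if method == "average":
        return sum(rates) / len(rates)
    prod = 1.0
    for r in rates:
        prod *= r
    return prod

core_rates = [0.95, 0.88, 0.99]   # hypothetical core-objective rates
menu_rates = [0.80, 0.75]         # hypothetical menu-objective rates
continuous_mu(core_rates)                          # core average ~ 0.94
continuous_mu(core_rates, menu_rates, "product")   # joint product ~ 0.50
```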

5.5 MU Effects and Hospital Characteristics: A Stratification Analysis

Our main results in Table 6 showed that MU has a positive and significant effect on quality of care.

However, this effect may be heterogeneous across different hospitals. To draw proper policy implications

from our analyses, we investigate how hospital and environmental characteristics influence the effect of

MU through a stratification analysis. We consider a number of characteristics, including hospital size,

ownership, teaching status, region, and urban status.

We conduct a stratification analysis to tease out the quality effect of MU for various subsamples

(strata) of hospitals. In each stratum, we estimate the FD model using only treatment and control hospitals

in that stratum. As an example, in the small size stratum we estimate the model using data only from

hospitals with fewer than 100 beds in both the treatment and control groups. As another example, in the

government ownership stratum we focus only on hospitals owned by government agencies. We choose

the FD model instead of the DID model because in some strata there is no variation in certain time-invariant control variables. For instance, in the AlwaysMU control group, the small size stratum has no

teaching hospitals, rendering the DID estimation impossible. The FD estimation, on the other hand, does not have this problem since time-invariant control variables drop out after differencing.
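The stratification procedure can be sketched as follows (our illustrative code; field names and data are hypothetical): the FD effect is simply re-estimated separately within each stratum:

```python
def fd_by_stratum(hospitals, stratum_key):
    """Per-stratum FD effect: mean quality change of treated hospitals
    minus that of controls, computed within each stratum.

    `hospitals`: dicts with keys `treated` (0/1), `dq` (post-minus-pre
    quality change), and the stratification field named by `stratum_key`.
    """
    groups = {}
    for h in hospitals:
        groups.setdefault(h[stratum_key], {1: [], 0: []})[h["treated"]].append(h["dq"])
    # Skip strata lacking either treated or control hospitals.
    return {s: sum(g[1]) / len(g[1]) - sum(g[0]) / len(g[0])
            for s, g in groups.items() if g[1] and g[0]}

hospitals = [
    {"treated": 1, "dq": 1.0, "size": "small"},
    {"treated": 0, "dq": 0.1, "size": "small"},
    {"treated": 1, "dq": 0.3, "size": "large"},
    {"treated": 0, "dq": 0.1, "size": "large"},
]
fd_by_stratum(hospitals, "size")  # small: 1.0-0.1 = 0.9; large: 0.3-0.1 = 0.2
```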

Table 12 and Figure 4 show the results from our stratification analysis. The results are interesting

and have several important policy implications. We find that hospitals traditionally deemed to have lower quality, such as small, non-teaching, and rural hospitals, can in fact attain greater quality improvement from meaningful use of EHR than other hospitals. Specifically, the effect of MU in small hospitals is over four times larger than the MU effect in large hospitals (0.98 vs. 0.23, model 2 in Figure 4). Similarly, the


effect in rural hospitals is over eight times larger than the MU effect in urban hospitals (1.16 vs. 0.13, model 2

in Figure 4). It is noteworthy that these disadvantaged hospitals were also the ones that had been shown to

be slower in adopting EHR (Jha et al. 2009; DesRoches et al. 2012). These results suggest that the

Medicare EHR Incentive Program not only accelerated the overall adoption and MU of EHR technology

in general, but more importantly, it significantly enhanced quality for disadvantaged hospitals that are in greater need of better care. In other words, MU of EHR can potentially be an effective approach to mitigating healthcare disparity.

6. Conclusions

The goal of this study is to investigate the relationship between hospitals’ meaningful use (MU) of

electronic health records (EHR) and quality of care. Through multiple empirical specifications and

numerous robustness checks, we find that meaningful use of EHR significantly improved quality of care.

More importantly, disadvantaged (small, non-teaching, or rural) hospitals tend to attain a greater degree

of quality improvement from MU.

The results from this study are important for three reasons. First, while there have been multiple

studies on the beneficial effects on quality of care resulting from implementation of EHR or MU, to the

best of our knowledge, this study is the first to employ a formal, objective measurement of MU.

Second, we are one of the first to leverage the Medicare EHR Incentive Program for exogenous variations

in identifying the clinical impact of MU. Our findings provide strong empirical evidence of the positive quality impact of meaningfully using EHR technology. Third, from a policy evaluation perspective, the

findings justify and support the effectiveness of the Medicare EHR Incentive Program and the goal of the

HITECH Act. As the federal initiative begins to move toward Stage 2 MU,14 this study gives an early

assessment of the clinical benefit and policy implications of the MU initiative.

14 Stage 2 MU requires a greater degree of system usage, consolidates a number of Stage 1 MU measures, and

introduces a few new measures. A comprehensive comparison of Stage 1 and Stage 2 MU objectives is available at http://cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/Stage1vsStage2CompTablesforHospitals.pdf


Some limitations of our study point to several directions for future research. First, the Medicare EHR Incentive Program intends to implement multiple stages of MU; in this study we considered only Stage 1 MU, which comprises basic but essential objectives for meaningful use of EHR. Second, existing quality measures have known limitations (Kern et al. 2009; Jones et al. 2010). While we found an effect of MU on the existing quality measures, richer and more accurate measures would have been ideal.

Despite these limitations, replicating our empirical framework with higher stages of MU or with better measures of healthcare quality is straightforward. Most importantly, our study represents an important first step in understanding the effect not just of adoption, but of meaningful use of EHR technology on quality of care. Finally, and more broadly, by moving beyond adoption and focusing on the meaningful use of IT, our study also contributes to the long and growing literature in information systems on the adoption and value of information technologies (Brynjolfsson and Hitt 1996; Banker and Kauffman 2004) by examining a different but socially important outcome metric: the quality of healthcare services.


Tables and Figures

Table 1: Summary of prior studies on the effects of EHR

| Study | Main Data Sources | Data Period | Data Units | Main Dependent Variables | Main Independent Variables | Identification Strategy | Analysis | Main Findings |
|---|---|---|---|---|---|---|---|---|
| Agha (2014) | AHA, HADB, MC | 1998-2005 | 3,880 hospitals | Hospital saving and quality | Use of HIT (EMR or CDS) | FE | PDA | Negative. HIT has no effect on medical expenditures and patient outcomes. |
| Appari et al. (2013) | CMS-HC, HADB | 2006-2010 | 3,921 hospitals | Process quality | EMR capability | FE | PDA | Positive. Increased EHR capability yielded increased process quality. |
| Dey et al. (2013) | CMS-CR, HADB | NA | 1,011 hospitals | Operational performance | EHR capability | PA | CSA | Positive. EHR capability was positively associated with operational performance. |
| McCullough et al. (2013) | AHA, HADB, MC | 2002-2007 | 2,953 hospitals | Patient outcome (mortality) | Use of EHR and CPOE | DID | PDA | Negative. There was no relationship between HIT and mortality. |
| Appari et al. (2012) | CMS-IPPS, CMS-HC, HADB | 2009† | 2,603 hospitals | Medication administration quality | Use of CPOE and eMAR | PA | CSA | Positive. Use of eMAR and CPOE improved adherence to medication guidelines. |
| Dranove et al. (2012) | AHA, CMS-CR, HADB | 1996-2009 | 4,231 hospitals | Hospital operating costs | EHR adoption | FE | PDA | Mixed. EHR adoption was initially associated with increased cost, which decreased after 3 years if complementary conditions were met. |
| Hah and Bharadwaj (2012) | AHA, HADB | 2008-2010 | 2,557 hospitals | Hospital operation and financial performance | HIT use and HIT capital | None | PDA | Positive. HIT use and HIT capital positively related to operation and financial performance. |
| Furukawa (2011) | NHAMCS | 2006 | 364 EDs | ED throughput | EMR capability | IV | CSA | Mixed. Advanced EHR improved ED efficiency, but basic EHR did not. |
| Miller and Tucker (2011) | CDC-VSCP, HADB | 1995-2006 | 3,764 hospitals | Neonatal mortality | EHR adoption | IV, FE | PDA | Positive. EHR reduced neonatal mortality. |
| Romano and Stafford (2011) | NAMCS, NHAMCS | 2005-2007 | 243,478 patient visits | Ambulatory quality | Use of EHR and CDS | None | CSA | Negative. EHR and CDS were not associated with ambulatory care quality. |
| Furukawa et al. (2010) | COSHPD, HADB | 1998-2007 | 326 hospitals in California | Nurse staffing and nurse-sensitive patient outcomes | EHR implementation | FE | PDA | Negative. EHR systems did not decrease hospital costs, length of stay, and nurse staffing levels. |
| Himmelstein et al. (2010) | CMS-CR, DHA, HADB | 2003-2007 | Approx. 4,000 hospitals | Hospital costs and quality | Degree of computerization | None | CSA | Negative. Computerization had no effect on hospital costs and quality. |
| Jones et al. (2010) | AHA, CMS-HC, HADB | 2004, 2007† | 2,086 hospitals | Process quality | EHR capability | DID, FE, PA | PDA | Mixed. Adopting basic EHR significantly increased care quality for heart failure, but adopting advanced EHR significantly decreased care quality for acute myocardial infarction and heart failure. |
| McCullough et al. (2010) | AHA, CMS-HC, HADB | 2004-2007 | 3,401 hospitals | Process quality | HIT adoption (EHR & CPOE) | FE | PDA | Mixed. HIT adoption improved 2 of 6 process quality measures. |
| This paper | CMS-CR, CMS-EHRIP, CMS-HC, HADB, JC | 2011Q4, 2013Q1 | 2,747 hospitals | Process quality | Meaningful use of EHR | DID, FD, PA | PDA | Positive. Meaningful use of EHR significantly improves quality of care, and the effect is larger among hospitals that are small, non-teaching, or located in rural areas. |

† Technology use/adoption is determined a year before.

Note. AHA=American Hospital Association Annual Survey; CDC-VSCP=Centers for Disease Control (CDC) and Prevention's Vital Statistics Cooperative Program; CDS=Clinical decision support; CMS-IPPS=CMS Inpatient Prospective Payment System; CMS-CR=CMS Cost Reports; CMS-EHRIP=CMS EHR Incentive Programs; CMS-HC=CMS Hospital Compare database; COSHPD=California Office of Statewide Health Planning and Development Annual Financial Disclosure Reports and Patient Discharge Databases; CPOE=Computerized physician order entry; DID=Difference-in-differences; DHA=Dartmouth Health Atlas; ED=Emergency department; EHR=Electronic health records; FD=First-difference; FE=Fixed-effects; HADB=Healthcare Information and Management Systems Society's Analytics Database; IV=Instrument variables; JC=the Joint Commission; MC=Medicare claims; NAMCS=National Ambulatory Medical Care Survey; NHAMCS=National Hospital Ambulatory Medical Care Survey; PA=Propensity adjustments; WSDH=Washington State Department of Health hospital database


Table 2: Data Descriptions

| Variable | Type | Description | Source |
|---|---|---|---|
| Meaningful Use (MU) | Binary, time-invariant | Indicates whether a hospital reaches MU at a specific point in time | CMS-EHRIP |
| Quality of Care | Numeric, time-varying | The composite quality score for the process of care, ranging from 0 (lowest quality) to 100 (highest quality) | JC |
| Age | Numeric, time-invariant | Age of hospital as of 2012 (2012 − the year formed) | HADB |
| Size | Numeric, time-varying | Total number of hospital beds | CMS-CR |
| Annual Discharges | Numeric, time-varying | Total number of inpatient discharges in a year | CMS-CR |
| Annual Inpatient Days | Numeric, time-varying | Total number of inpatient days in a year | CMS-CR |
| Annual Medicare Discharges | Numeric, time-varying | Total number of Medicare inpatient discharges in a year | CMS-CR |
| Annual Medicare Inpatient Days | Numeric, time-varying | Total number of Medicare inpatient days in a year | CMS-CR |
| Transfer adjusted case mix index (TACMI) | Numeric, time-varying | A value used to characterize the overall severity of the hospital's patient base | CMS-IPPS |
| Teaching status | Binary, time-invariant | Whether the hospital is a member of COTH | COTH |
| Ownership | Categorical, time-invariant | Whether the hospital is owned by a government, non-profit, or proprietary agency | CMS-HC |
| Rural area | Binary, time-invariant | Whether the hospital is located in a rural area | RUCA 2.0, CMS-HC |
| Region | Categorical, time-invariant | Whether the hospital is located in the Midwest, Northeast, South, or West | Census regions, CMS-HC |

Note. COTH=Council of Teaching Hospitals; RUCA=Rural Urban Commuting Area. Other abbreviations follow Table 1.


Table 3: Stage 1 MU Core and Menu Objectives

Core Objectives
1. CPOE for Medication Orders
2. Drug Interaction Checks
3. Maintain Problem List
4. Active Medication List
5. Medication Allergy List
6. Record Demographics
7. Record Vital Signs
8. Record Smoking Status
9. Clinical Quality Measures
10. Clinical Decision Support Rule
11. Electronic Copy of Health Information
12. Discharge Instructions
13. Electronic Exchange of Clinical Information
14. Protect Electronic Health Information

Menu Objectives
1. Drug Formulary Checks
2. Advanced Directives
3. Clinical Lab Test Results
4. Patient Lists
5. Patient-specific Education Resources
6. Medication Reconciliation
7. Transition of Care Summary
8. Immunization Registries Data Submission
9. Reportable Lab Results
10. Syndromic Surveillance Data Submission

Table 4: Transfer factors for eligible hospitals

| Payment Year | Demonstrate MU in 2011 | 2012 | 2013 | 2014 | 2015 |
|---|---|---|---|---|---|
| 2011 | 1 | | | | |
| 2012 | 0.75 | 1 | | | |
| 2013 | 0.5 | 0.75 | 1 | | |
| 2014 | 0.25 | 0.5 | 0.75 | 0.75 | |
| 2015 | | 0.25 | 0.5 | 0.5 | 0.5 |
| 2016 | | | 0.25 | 0.25 | 0.25 |
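Table 4 is effectively a lookup: the factor applied to a hospital's Medicare incentive payment depends on the payment year and on the first year the hospital demonstrated MU, phasing out to zero afterward. A minimal sketch of that lookup (the function and dictionary names are ours; the values are transcribed from Table 4):

```python
# Transition factors from Table 4, keyed by the first year a hospital
# demonstrated MU, then by payment year. Names are illustrative only.
TRANSITION_FACTORS = {
    2011: {2011: 1.0, 2012: 0.75, 2013: 0.5, 2014: 0.25},
    2012: {2012: 1.0, 2013: 0.75, 2014: 0.5, 2015: 0.25},
    2013: {2013: 1.0, 2014: 0.75, 2015: 0.5, 2016: 0.25},
    2014: {2014: 0.75, 2015: 0.5, 2016: 0.25},
    2015: {2015: 0.5, 2016: 0.25},
}

def transition_factor(first_mu_year, payment_year):
    """Return the Table 4 factor, or 0.0 once payments have phased out."""
    return TRANSITION_FACTORS.get(first_mu_year, {}).get(payment_year, 0.0)
```

Note how hospitals that waited until 2014 or 2015 never receive the full factor of 1, which is part of the program's incentive to attest early.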


Table 5. Summary Statistics

| | Treatment Group | Control Group AlwaysMU | P-Value | Control Group NeverMU | P-Value |
|---|---|---|---|---|---|
| # of hospitals | 914 | 483 | | 947 | |
| Mean Age | 41.63 (37.39) | 35.39 (33.44) | 0.001 | 38.21 (37.53) | 0.049 |
| Mean Size | 229.2 (201.9) | 248.3 (194.9) | 0.085 | 212.1 (182) | 0.056 |
| Mean Total Inpatient Discharges | 11.52 (10.87) | 12.62 (10.66) | 0.067 | 10.38 (9.71) | 0.017 |
| Mean Total Inpatient Days | 53.71 (58.49) | 59.15 (55.84) | 0.089 | 48.41 (51.4) | 0.038 |
| Mean Medicare Inpatient Discharges | 3.75 (3.182) | 3.912 (3.344) | 0.383 | 3.399 (3.072) | 0.016 |
| Mean Medicare Inpatient Days | 19.28 (18.55) | 20.15 (18.7) | 0.408 | 17.59 (17.56) | 0.043 |
| Mean TACMI | 1.459 (0.261) | 1.521 (0.255) | < 0.001 | 1.472 (0.269) | 0.301 |
| Percent of Teaching Hospitals | 10.5% | 11.6% | 0.533 | 9.2% | 0.34 |
| Percent of Rural Hospitals | 30.2% | 20.7% | < 0.001 | 29.4% | 0.692 |
| Percent of Government Hospitals | 15.2% | 12.4% | 0.157 | 17.6% | 0.158 |
| Percent of Nonprofit Hospitals | 67.2% | 60.7% | 0.015 | 63.8% | 0.123 |
| Percent of Proprietary Hospitals | 17.6% | 26.9% | < 0.001 | 18.6% | 0.587 |
| Percent of Hospitals in the Midwest | 24% | 23% | 0.682 | 19.6% | 0.024 |
| Percent of Hospitals in the Northeast | 19.7% | 14.1% | 0.009 | 15.6% | 0.021 |
| Percent of Hospitals in the South | 39.9% | 45.3% | 0.051 | 43.4% | 0.13 |
| Percent of Hospitals in the West | 16.4% | 17.6% | 0.573 | 21.3% | 0.007 |
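The p-values in Table 5 come from two-sample comparisons of the treatment group against each control group. The exact test is not stated in this excerpt, so as an assumed illustration, here is Welch's t statistic (robust to unequal group variances) in pure Python on invented data:

```python
# Illustrative sketch: Welch's t statistic for comparing a hospital
# characteristic across two groups. Data below is invented, not the paper's.
def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # Sample variance with Bessel's correction
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(a, b):
    # t = (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)
    return (mean(a) - mean(b)) / ((var(a) / len(a) + var(b) / len(b)) ** 0.5)

a = [40.0, 42.0, 44.0, 38.0]   # toy hospital ages, treatment group
b = [34.0, 36.0, 35.0, 37.0]   # toy hospital ages, control group
print(round(welch_t(a, b), 2))
```

In practice one would convert the statistic to a p-value using Welch–Satterthwaite degrees of freedom; a library routine such as `scipy.stats.ttest_ind(a, b, equal_var=False)` does both steps.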


Table 6. Panel Data Models for the Quality Effect of MU

| | DID, AlwaysMU | DID, AlwaysMU (controls) | DID, NeverMU | DID, NeverMU (controls) | FD, AlwaysMU | FD, AlwaysMU (controls) | FD, NeverMU | FD, NeverMU (controls) |
|---|---|---|---|---|---|---|---|---|
| TreatmentGroup | -.629*** (.154) | -.337** (.145) | -.040 (.133) | -.083 (.129) | | | | |
| PostPeriod | .389*** (.133) | .322*** (.136) | .262*** (.078) | .180** (.078) | | | | |
| TreatmentGroup × PostPeriod | .319** (.155) | .320** (.156) | .446*** (.111) | .479*** (.112) | .319** (.155) | .328** (.156) | .446*** (.111) | .438*** (.112) |
| Age | | -.003* (.001) | | -.003** (.001) | | | | |
| Size | | -.000 (.001) | | -.000 (.001) | | .000 (.001) | | .000 (.001) |
| TotalDischarges | | .074** (.020) | | .041 (.022) | | -.016 (.040) | | -.006 (.017) |
| TotalInpatientDays | | -.014* (.005) | | -.009 (.005) | | .010 (.014) | | -.016 (.014) |
| MedicareDischarges | | .224* (.077) | | .397*** (.078) | | .097 (.156) | | .153 (.151) |
| MedicareInpatientDays | | -.039 (.015) | | -.058*** (.014) | | -.009 (.039) | | .007 (.030) |
| TACMI | | 2.447*** (.428) | | 2.067*** (.335) | | -.535 (.820) | | -1.164* (.715) |
| Rural | | -.777*** (.162) | | -.394*** (.154) | | | | |
| Teach | | -.253 (.142) | | -.121 (.139) | | | | |
| Nonprofit | | .576*** (.200) | | .314** (.153) | | | | |
| Proprietary | | 1.220*** (.226) | | .722*** (.210) | | | | |
| Midwest | | .765*** (.220) | | .613*** (.210) | | | | |
| Northeast | | .591*** (.255) | | .741*** (.219) | | | | |
| South | | .197 (.227) | | .357** (.200) | | | | |
| Constant | 98.001*** (.121) | 93.378*** (.805) | 97.412*** (.094) | 93.615*** (.600) | .389*** (.133) | .407*** (.137) | .262*** (.078) | .313*** (.087) |
| Number of observations | 2794 | 2794 | 3722 | 3722 | 1397 | 1397 | 1861 | 1861 |

Note. Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
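With one pre period and one post period and no covariates, the coefficient on TreatmentGroup × PostPeriod in Table 6 reduces to a difference of group mean changes. A toy sketch of that identity (data invented for illustration, not taken from the study):

```python
# DID point estimate in a 2x2 design: (treated post - treated pre)
# minus (control post - control pre). Toy quality scores on the
# 0-100 composite scale described in Table 2.
def mean(xs):
    return sum(xs) / len(xs)

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

treat_pre  = [96.0, 97.0, 95.5]
treat_post = [96.8, 97.5, 96.2]
ctrl_pre   = [96.5, 97.2, 96.0]
ctrl_post  = [96.9, 97.4, 96.3]

print(round(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post), 3))
```

The regression versions in Table 6 recover this same quantity while additionally adjusting for hospital covariates and producing robust standard errors.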


Table 7. Results from the Matched Dataset

| | DID, AlwaysMU | DID, AlwaysMU (controls) | DID, NeverMU | DID, NeverMU (controls) | FD, AlwaysMU | FD, AlwaysMU (controls) | FD, NeverMU | FD, NeverMU (controls) |
|---|---|---|---|---|---|---|---|---|
| TreatmentGroup | -.332*** (.073) | -.362*** (.067) | .006 (.072) | -.001 (.067) | | | | |
| PostPeriod | .434*** (.056) | .332*** (.058) | .344*** (.041) | .265*** (.042) | | | | |
| TreatmentGroup × PostPeriod | .281*** (.071) | .305*** (.072) | .351*** (.059) | .379*** (.059) | .281*** (.071) | .285*** (.072) | .351*** (.059) | .343*** (.059) |
| Age | | -.005*** (.001) | | -.003*** (.001) | | | | |
| Size | | -.000 (.000) | | -.001 (.000) | | .000 (.001) | | .001 (.001) |
| TotalDischarges | | .074*** (.010) | | .034*** (.011) | | -.006 (.020) | | -.009 (.008) |
| TotalInpatientDays | | -.015*** (.002) | | -.006* (.003) | | .002 (.007) | | -.028*** (.008) |
| MedicareDischarges | | .181*** (.036) | | .471*** (.043) | | .023 (.071) | | .209 (.083) |
| MedicareInpatientDays | | -.031** (.007) | | -.072*** (.008) | | .007 (.019) | | .017 (.018) |
| TACMI | | 2.844*** (.215) | | 2.194*** (.197) | | .017 (.405) | | -1.673*** (.379) |
| Rural | | -.730*** (.075) | | -.498*** (.081) | | | | |
| Teach | | -.399*** (.071) | | -.239** (.078) | | | | |
| Nonprofit | | .755*** (.095) | | .229*** (.083) | | | | |
| Proprietary | | 1.192*** (.122) | | .821*** (.107) | | | | |
| Midwest | | .822*** (.104) | | .656*** (.119) | | | | |
| Northeast | | .669*** (.122) | | .826*** (.125) | | | | |
| South | | .216** (.109) | | .379*** (.117) | | | | |
| Constant | 97.710*** (.051) | 92.796*** (.379) | 97.378*** (.051) | 93.417*** (.349) | .434*** (.056) | .438*** (.060) | .344*** (.041) | .410*** (.046) |
| Number of observations | 11664 | 11664 | 13356 | 13356 | 5832 | 5832 | 6678 | 6678 |

Note. Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)


Table 8. Results from the Quantile Analysis with the DID Specification

| | AlwaysMU, Q0.25 | AlwaysMU, Q0.50 | AlwaysMU, Q0.75 | NeverMU, Q0.25 | NeverMU, Q0.50 | NeverMU, Q0.75 |
|---|---|---|---|---|---|---|
| TreatmentGroup | -.597*** (.135) | -.468*** (.091) | -.222*** (.077) | -.183* (.135) | -.114 (.095) | .003 (.062) |
| PostPeriod | .614*** (.131) | .424*** (.064) | .302*** (.053) | .297*** (.124) | .319*** (.085) | .234*** (.054) |
| TreatmentGroup × PostPeriod | .075 (.162) | .137* (.108) | .039 (.093) | .431*** (.181) | .241** (.123) | .078 (.081) |
| Age | -.004*** (.001) | -.002*** (.001) | -.001** (.001) | -.002*** (.001) | -.002*** (.001) | -.002*** (.001) |
| Size | -.001 (.001) | -.000 (.001) | -.001** (.000) | -.001 (.001) | -.001 (.001) | -.000 (.000) |
| TotalDischarges | .063*** (.021) | .026*** (.014) | .018* (.012) | .030* (.028) | .027*** (.014) | .010 (.012) |
| TotalInpatientDays | -.009** (.006) | -.004* (.004) | -.002 (.003) | -.004 (.006) | -.005*** (.003) | -.002 (.003) |
| MedicareDischarges | .124* (.081) | .057** (.050) | -.054* (.044) | .355*** (.091) | .108*** (.047) | .009 (.060) |
| MedicareInpatientDays | -.022 (.015) | -.013** (.010) | .007 (.008) | -.059*** (.017) | -.013** (.009) | -.003 (.012) |
| TACMI | 1.624*** (.242) | .850*** (.161) | .287*** (.128) | 1.870*** (.176) | .949*** (.140) | .437*** (.112) |
| Rural | -1.136*** (.162) | -.476*** (.118) | -.150* (.085) | -.649*** (.138) | -.302*** (.086) | -.054 (.057) |
| Teach | -.148 (.150) | -.146*** (.094) | -.152** (.076) | -.037 (.148) | -.018 (.101) | -.147** (.106) |
| Nonprofit | .822*** (.207) | .486*** (.149) | .139* (.079) | .628*** (.167) | .265*** (.115) | .156** (.059) |
| Proprietary | 1.334*** (.227) | .949*** (.157) | .513*** (.091) | 1.037*** (.188) | .634*** (.141) | .384*** (.075) |
| Midwest | .486*** (.141) | .305*** (.090) | .174** (.084) | .606*** (.129) | .310*** (.087) | .215*** (.072) |
| Northeast | .378*** (.164) | .206*** (.085) | .060 (.095) | .832*** (.144) | .362*** (.081) | .290*** (.080) |
| South | .105 (.151) | .156** (.074) | .126* (.086) | .295** (.140) | .217*** (.084) | .282*** (.070) |
| Constant | 94.204*** (.475) | 96.734*** (.309) | 98.659*** (.246) | 93.134*** (.355) | 96.294*** (.262) | 98.051*** (.187) |
| Number of observations | 2794 | 2794 | 2794 | 3722 | 3722 | 3722 |

Note. Bootstrapped standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
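The quantile regressions in Tables 8 and 9 follow Koenker and Bassett (1978), whose objective is the check (pinball) loss: the tau-th sample quantile is the value minimizing the summed check loss over the data. A minimal stdlib sketch of that loss with a brute-force minimizer (invented quality scores, not the paper's data):

```python
# rho_tau(u): penalize under-prediction with weight tau and
# over-prediction with weight (1 - tau).
def check_loss(residual, tau):
    return tau * residual if residual >= 0 else (tau - 1) * residual

def quantile_by_loss(ys, tau):
    # Brute force over observed values: some observed value always
    # minimizes the summed check loss.
    return min(ys, key=lambda q: sum(check_loss(y - q, tau) for y in ys))

ys = [93.0, 95.0, 96.0, 97.0, 99.0]   # toy composite quality scores
print(quantile_by_loss(ys, 0.5))       # the median
```

Quantile regression generalizes this by minimizing the same loss over residuals from a linear predictor, which is how Tables 8 and 9 estimate effects at the 0.25, 0.50, and 0.75 quantiles of quality rather than at the mean.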


Table 9. Results from the Quantile Analysis with the FD Specification

| | AlwaysMU, Q0.25 | AlwaysMU, Q0.50 | AlwaysMU, Q0.75 | NeverMU, Q0.25 | NeverMU, Q0.50 | NeverMU, Q0.75 |
|---|---|---|---|---|---|---|
| TreatmentGroup | -.597*** (.135) | -.468*** (.091) | -.222*** (.077) | -.183* (.135) | -.114 (.095) | .003 (.062) |
| PostPeriod | .614*** (.131) | .424*** (.064) | .302*** (.053) | .297*** (.124) | .319*** (.085) | .234*** (.054) |
| TreatmentGroup × PostPeriod | .228** (.105) | .098* (.067) | .004 (.044) | .612*** (.093) | .383*** (.053) | .228*** (.043) |
| Age | -.006*** (.001) | -.004*** (.001) | -.003*** (.000) | -.003*** (.001) | -.004*** (.001) | -.002*** (.001) |
| Size | -.000 (.001) | .000 (.001) | .000 (.000) | -.000 (.001) | -.000 (.000) | .000 (.000) |
| TotalDischarges | .114*** (.019) | .053*** (.014) | -.003 (.012) | .078*** (.029) | .030*** (.014) | -.006** (.015) |
| TotalInpatientDays | -.024*** (.005) | -.012*** (.003) | -.001 (.003) | -.017*** (.006) | -.008*** (.003) | -.002 (.004) |
| MedicareDischarges | .130* (.083) | .024 (.056) | .018 (.045) | .252*** (.090) | .109*** (.054) | .059** (.059) |
| MedicareInpatientDays | -.012 (.015) | -.001 (.011) | -.002 (.009) | -.027*** (.016) | -.009 (.010) | -.008 (.011) |
| TACMI | 2.544*** (.317) | 1.240*** (.187) | .373*** (.128) | 2.096*** (.190) | 1.087*** (.119) | .399*** (.117) |
| Constant | 93.466*** (.513) | 96.699*** (.290) | 98.927*** (.192) | 93.521*** (.310) | 96.626*** (.183) | 98.599*** (.177) |
| Number of observations | 2794 | 2794 | 2794 | 3722 | 3722 | 3722 |

Note. Bootstrapped standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)


Table 10. Results from the Censored Regression Analysis with the DID Specification

| | AlwaysMU | AlwaysMU (controls) | NeverMU | NeverMU (controls) |
|---|---|---|---|---|
| TreatmentGroup | -.342*** (.075) | -.376*** (.069) | .010 (.070) | .002 (.066) |
| PostPeriod | .503*** (.075) | .384*** (.070) | .370*** (.070) | .273*** (.066) |
| TreatmentGroup × PostPeriod | .246** (.107) | .276*** (.098) | .356*** (.099) | .384*** (.094) |
| Age | | -.005*** (.001) | | -.004*** (.001) |
| Size | | -.000 (.001) | | -.001** (.000) |
| TotalDischarges | | .074*** (.017) | | .053*** (.013) |
| TotalInpatientDays | | -.015*** (.004) | | -.007** (.003) |
| MedicareDischarges | | .163*** (.058) | | .441*** (.055) |
| MedicareInpatientDays | | -.029*** (.011) | | -.071*** (.011) |
| TACMI | | 3.014*** (.142) | | 2.691*** (.127) |
| Rural | | -.686*** (.067) | | -.384*** (.063) |
| Teach | | -.397*** (.103) | | -.309*** (.099) |
| Nonprofit | | .780*** (.072) | | .229*** (.069) |
| Proprietary | | 1.356*** (.093) | | .930*** (.085) |
| Midwest | | .829*** (.084) | | .704*** (.079) |
| Northeast | | .655*** (.092) | | .920*** (.086) |
| South | | .245*** (.080) | | .470*** (.073) |
| Constant | 97.772*** (.053) | 92.619*** (.239) | 97.426*** (.049) | 92.728*** (.212) |
| Number of observations | 2794 | 2794 | 3722 | 3722 |

Note. Standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
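Table 10 re-estimates the DID models with a censored regression because the composite quality score is bounded above at 100 (Table 2), so hospitals at the cap cannot register further improvement. The snippet below only illustrates the censoring mechanism itself, not the tobit-style estimator; names and values are ours:

```python
# Upper censoring at the maximum composite quality score: latent
# values above the cap are observed as exactly 100.
CAP = 100.0

def censor(latent_scores, cap=CAP):
    return [min(s, cap) for s in latent_scores]

latent = [97.2, 99.5, 101.3, 100.8]   # hypothetical latent quality
print(censor(latent))
```

Because OLS on the observed (capped) scores would understate gains near the ceiling, a censored regression models the latent score directly, which is the rationale for this robustness check.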


Table 11. Analysis of Continuous MU Variable using the FD Specification

| | AlwaysMU, Avg CMs | AlwaysMU, Prod CMs | NeverMU, Avg CMs | NeverMU, Prod CMs | AlwaysMU, Avg CMs & MMs | AlwaysMU, Prod CMs & MMs | NeverMU, Avg CMs & MMs | NeverMU, Prod CMs & MMs |
|---|---|---|---|---|---|---|---|---|
| ContinuousMU | .381** (.170) | .694*** (.257) | .374** (.168) | .600** (.284) | .504*** (.122) | .783*** (.193) | .505*** (.122) | .867*** (.220) |
| Size | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) | .000 (.001) |
| TotalDischarges | -.017 (.040) | -.017 (.040) | -.017 (.040) | -.015 (.040) | -.006 (.017) | -.006 (.017) | -.006 (.017) | -.006 (.017) |
| TotalInpatientDays | .010 (.014) | .009 (.014) | .010 (.014) | .009 (.014) | -.016 (.014) | -.017 (.014) | -.016 (.014) | -.016 (.014) |
| MedicareDischarges | .108 (.157) | .126 (.155) | .106 (.157) | .098 (.155) | .161 (.151) | .169 (.151) | .159 (.151) | .148 (.151) |
| MedicareInpatientDays | -.010 (.039) | -.010 (.039) | -.010 (.039) | -.006 (.039) | .006 (.030) | .006 (.030) | .006 (.030) | .008 (.030) |
| TACMI | -.542 (.819) | -.489 (.822) | -.538 (.819) | -.460 (.823) | -1.153* (.714) | -1.125* (.718) | -1.150* (.714) | -1.114* (.719) |
| Constant | .391*** (.137) | .339*** (.130) | .396*** (.136) | .421*** (.122) | .298*** (.087) | .299*** (.083) | .299*** (.087) | .324*** (.083) |
| Number of observations | 1397 | 1397 | 1397 | 1397 | 1861 | 1861 | 1861 | 1861 |

Note 1. CMs = core MU measures; MMs = menu MU measures
Note 2. Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
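The continuous MU variable in Table 11 aggregates a hospital's compliance rates on core measures (CMs) and menu measures (MMs) either by averaging ("Avg") or by multiplying ("Prod"), so that one weak measure drags the product down. The exact formulas are not spelled out in this excerpt, so the sketch below is an assumed reconstruction with invented rates:

```python
# Assumed construction of the two continuous MU aggregates in Table 11.
# Rates are illustrative compliance fractions in [0, 1], not paper data.
from math import prod

def avg_mu(rates):
    # "Avg" construction: mean compliance across measures
    return sum(rates) / len(rates)

def prod_mu(rates):
    # "Prod" construction: product of compliance rates; sensitive to
    # any single low-performing measure
    return prod(rates)

core_rates = [0.9, 0.8, 1.0]
print(round(avg_mu(core_rates), 3), round(prod_mu(core_rates), 3))
```

Under this reading, the larger "Prod" coefficients in Table 11 are consistent with the product aggregate compressing the variable's scale, since a unit change in a product of fractions represents a larger underlying shift in compliance.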

Table 12. Stratification Analysis on the Quality Effect of MU

| Dimension | Category | # of Tr/Co Hospitals (AlwaysMU) | MU estimate | # of Tr/Co Hospitals (NeverMU) | MU estimate |
|---|---|---|---|---|---|
| Size | Small (less than 100 beds) | 255 / 97 | 1.22 (.64)** | 255 / 268 | .98 (.32)*** |
| | Medium (100 to 300 beds) | 426 / 243 | .14 (.15) | 426 / 478 | .24 (.13)* |
| | Large (more than 300 beds) | 233 / 143 | -.05 (.12) | 233 / 201 | .23 (.12)* |
| Ownership | Government | 139 / 60 | .06 (.55) | 139 / 167 | .96 (.33)*** |
| | Nonprofit | 614 / 293 | .38 (.17)** | 614 / 604 | .39 (.12)*** |
| | Proprietary | 161 / 130 | .22 (.38) | 161 / 176 | .08 (.32) |
| Teaching status | Teach | 96 / 56 | .03 (.17) | 96 / 87 | .23 (.21) |
| | Non-teach | 818 / 427 | .37 (.18)** | 818 / 860 | .46 (.12)*** |
| Region | Midwest | 219 / 111 | .43 (.17)** | 219 / 186 | .12 (.23) |
| | Northeast | 180 / 68 | .32 (.45) | 180 / 148 | .54 (.21)*** |
| | South | 365 / 219 | .36 (.31) | 365 / 411 | .62 (.20)*** |
| | West | 150 / 85 | .17 (.17) | 150 / 202 | .25 (.22) |
| Urban status | Urban | 638 / 383 | .06 (.10) | 638 / 669 | .13 (.10) |
| | Rural | 276 / 100 | 1.13 (.61)** | 276 / 278 | 1.16 (.29)*** |

Note. Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)


Figure 1. Historical quality trends of the treatment and control groups

Figure 2. Proportions of acute care hospitals attaining MU in each state from 2011 to 2013


Figure 3. Schematic Illustration of Quality Improvement among the Treatment and Control Groups


Figure 4. Quality Effects of Meaningful Use in Subgroups (95% Confidence Interval)


References

Adler-Milstein, J., DesRoches, C. M., Furukawa, M. F., Worzala, C., Charles, D., Kralovec, P., Stalley, S., and Jha, A. K. 2014. More Than Half of US Hospitals Have At Least A Basic EHR, But Stage 2 Criteria Remain Challenging For Most. Health Affairs 33(9) 1664–1671.

Agarwal, R., Gao, G. (Gordon), DesRoches, C., and Jha, A. K. 2010. The Digital Transformation of Healthcare: Current Status and the Road Ahead. Information Systems Research 21(4) 796–809.

Agha, L. 2014. The Effects of Health Information Technology on the Costs and Quality of Medical Care. Journal of Health Economics 34 19–30.

Ajzen, I. 1991. The Theory of Planned Behavior. Organizational Behavior and Human Decision Processes 50(2) 179–211.

Angrist, J. D., and Krueger, A. B. 1999. Empirical Strategies in Labor Economics. Handbook of Labor Economics, Orley C. Ashenfelter and David Card (eds.), Vol. 3, Part A, 1277–1366.

Angst, C. M., Agarwal, R., Sambamurthy, V., and Kelley, K. 2010. Social Contagion and Information Technology Diffusion: The Adoption of Electronic Medical Records in U.S. Hospitals. Management Science 56(8) 1219–1241.

Angst, C. M., Devaraj, S., and D'Arcy, J. 2012. Dual Role of IT-Assisted Communication in Patient Care: A Validated Structure-Process-Outcome Framework. Journal of Management Information Systems 29(2) 257–292.

Appari, A., Carian, E. K., Johnson, M. E., and Anthony, D. L. 2012. Medication Administration Quality and Health Information Technology: A National Study of US Hospitals. Journal of the American Medical Informatics Association 19(3) 360–367.

Appari, A., Eric Johnson, M., and Anthony, D. L. 2013. Meaningful Use of Electronic Health Record Systems and Process Quality of Care: Evidence from a Panel Data Analysis of U.S. Acute-Care Hospitals. Health Services Research 48(2pt1) 354–375.

Attewell, P. 1992. Technology Diffusion and Organizational Learning: The Case of Business Computing. Organization Science 3(1) 1–19.

Banker, R. D., and Kauffman, R. J. 2004. The Evolution of Research on Information Systems: A Fiftieth-Year Survey of the Literature in Management Science. Management Science 50(3) 281–298.

Bardhan, I., Zheng, E., Oh, J., and Kirksey, K. 2011. A Profiling Model for Readmission of Patients with Congestive Heart Failure: A Multi-Hospital Study. ICIS 2011 Proceedings.

Bentley, T. G. k., Effros, R. M., Palar, K., and Keeler, E. B. 2008. Waste in the U.S. Health Care System: A Conceptual Framework. Milbank Quarterly 86(4) 629–659.

Bertrand, M., Duflo, E., and Mullainathan, S. 2004. How Much Should We Trust Differences-In-Differences Estimates? The Quarterly Journal of Economics 119(1) 249–275.

Page 41: Beyond Adoption: Does Meaningful Use of EHR Improve ...misrc.umn.edu/wise/2014_Papers/1.pdf · further distinguish stages or capabilities of HIT or EHR implementation based on the

41

Berwick, D. M., and Hackbarth, A. D. 2012. Eliminating Waste in US Health Care. JAMA: The Journal

of the American Medical Association 307(14) 1513–1516.

Black, A. D., Car, J., Pagliari, C., Anandan, C., Cresswell, K., Bokun, T., McKinstry, B., Procter, R., Majeed, A., and Sheikh, A. 2011. The Impact of eHealth on the Quality and Safety of Health Care: A Systematic Overview. PLoS Medicine 8(1) e1000387.

Blumenthal, D. 2010. Launching HITECH. New England Journal of Medicine 362(5) 382–385.

Blumenthal, D. 2011. Wiring the Health System — Origins and Provisions of a New Federal Program. New England Journal of Medicine 365(24) 2323–2329.

Blumenthal, D., and Tavenner, M. 2010. The “Meaningful Use” Regulation for Electronic Health Records. New England Journal of Medicine 363(6) 501–504.

Bodenheimer, T. 2005. High and Rising Health Care Costs. Part 1: Seeking an Explanation. Annals of Internal Medicine 142(10) 847–854.

Bos, J. V. D., Rustagi, K., Gray, T., Halford, M., Ziemkiewicz, E., and Shreve, J. 2011. The $17.1 Billion Problem: the Annual Cost of Measurable Medical Errors. Health Affairs 30(4) 596–603.

Brennan, T. A. 1998. The Role of Regulation in Quality Improvement. Milbank Quarterly 76(4) 709–731.

Brynjolfsson, E., and Hitt, L. 1996. Paradox Lost? Firm-Level Evidence on the Returns to Information Systems Spending. Management Science 42(4) 541–558.

Brynjolfsson, E., Hu, Y. (Jeffrey), and Simester, D. 2011. Goodbye Pareto Principle, Hello Long Tail: The Effect of Search Costs on the Concentration of Product Sales. Management Science 57(8) 1373–1386.

Chassin, M. R., Loeb, J. M., Schmaltz, S. P., and Wachter, R. M. 2010. Accountability Measures — Using Measurement to Promote Quality Improvement. New England Journal of Medicine 363(7) 683–688.

Chatterjee, P., and Joynt, K. E. 2014. Do Cardiology Quality Measures Actually Improve Patient Outcomes? Journal of the American Heart Association 3(1) e000404.

Chaudhry, B., Wang, J., Wu, S., Maglione, M., Mojica, W., Roth, E., Morton, S. C., and Shekelle, P. G. 2006. Systematic Review: Impact of Health Information Technology on Quality, Efficiency, and Costs of Medical Care. Annals of Internal Medicine 144(10) 742 –752.

Chen, L. M., Jha, A. K., Guterman, S., Ridgway, A. B., Orav, E. J., and Epstein, A. M. 2010. Hospital Cost of Care, Quality of Care, and Readmission Rates: Penny-Wise and Pound-Foolish? Archives of Internal Medicine 170(4) 340–346.

Classen, D. C., and Bates, D. W. 2011. Finding the Meaning in Meaningful Use. New England Journal of Medicine 365(9) 855–858.

Davis, F. D., Bagozzi, R. P., and Warshaw, P. R. 1989. User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science 35(8) 982–1003.

Page 42: Beyond Adoption: Does Meaningful Use of EHR Improve ...misrc.umn.edu/wise/2014_Papers/1.pdf · further distinguish stages or capabilities of HIT or EHR implementation based on the

42

DesRoches, C. M., Campbell, E. G., Rao, S. R., Donelan, K., Ferris, T. G., Jha, A., Kaushal, R., Levy, D. E., Rosenbaum, S., Shields, A. E., and Blumenthal, D. 2008. Electronic Health Records in Ambulatory Care — A National Survey of Physicians. New England Journal of Medicine 359(1) 50–60.

DesRoches, C. M., Worzala, C., Joshi, M. S., Kralovec, P. D., and Jha, A. K. 2012. Small, Nonteaching, And Rural Hospitals Continue To Be Slow In Adopting Electronic Health Record Systems. Health Affairs 31(5) 1092–1099.

Devaraj, S., and Kohli, R. 2003. Performance Impacts of Information Technology: Is Actual Usage the Missing Link? Management Science 49(3) 273–289.

Dey, A., Sinha, K. K., and Thirumalai, S. 2013. IT Capability for Health Care Delivery: Is More Better? Journal of Service Research 16(3) 326–340.

Dranove, D., Forman, C., Goldfarb, A., and Greenstein, S. 2012. The Trillion Dollar Conundrum: Complementarities and Health Information Technology. NBER Working Paper 18281.

Feltham, G. A., and Ohlson, J. A. 1999. Residual Earnings Valuation With Risk and Stochastic Interest Rates. The Accounting Review 74(2) 165–183.

Fenton, J. J., Jerant, A. F., Bertakis, K. D., and Franks, P. 2012. The Cost of Satisfaction: A National Study of Patient Satisfaction, Health Care Utilization, Expenditures, and Mortality. Archives of Internal Medicine 172(5) 405–411.

Friedman, B., and Basu, J. 2004. The Rate and Cost of Hospital Readmissions for Preventable Conditions. Medical Care Research and Review 61(2) 225–240.

Furukawa, M. F. 2011. Electronic Medical Records and the Efficiency of Hospital Emergency Departments. Medical Care Research and Review 68(1) 75–95.

Furukawa, M. F., Raghu, T. S., and Shao, B. B. M. 2010. Electronic Medical Records, Nurse Staffing, and Nurse-Sensitive Patient Outcomes: Evidence from California Hospitals, 1998–2007. Health Services Research 45(4) 941–962.

Hah, H., and Bharadwaj, A. 2012. A Multi-level Analysis of the Impact of Health Information Technology on Hospital Performance. Proceedings of the Thirty Third International Conference on Information Systems (ICIS).

Hall, S. A., Kaufman, J. S., and Ricketts, T. C. 2006. Defining Urban and Rural Areas in U.S. Epidemiologic Studies. Journal of Urban Health 83(2) 162–175.

Himmelstein, D. U., Wright, A., and Woolhandler, S. 2010. Hospital Computing and the Costs and Quality of Care: A National Study. The American Journal of Medicine 123(1) 40–46.

IOM. 2001. Crossing the Quality Chasm: A New Health System for the 21st Century, The National Academies Press, Washington, D.C.


Jencks, S. F., Williams, M. V., and Coleman, E. A. 2009. Rehospitalizations among Patients in the Medicare Fee-for-Service Program. New England Journal of Medicine 360(14) 1418–1428.

Jha, A. K., DesRoches, C. M., Campbell, E. G., Donelan, K., Rao, S. R., Ferris, T. G., Shields, A., Rosenbaum, S., and Blumenthal, D. 2009. Use of Electronic Health Records in U.S. Hospitals. New England Journal of Medicine 360(16) 1628–1638.

Jha, A. K., Orav, E. J., Li, Z., and Epstein, A. M. 2007. The Inverse Relationship between Mortality Rates and Performance in the Hospital Quality Alliance Measures. Health Affairs 26(4) 1104–1110.

Jones, S. S., Adams, J. L., Schneider, E. C., Ringel, J. S., and McGlynn, E. A. 2010. Electronic Health Record Adoption and Quality Improvement in US Hospitals. American Journal of Managed Care 16(12 Suppl HIT) SP64–71.

Jones, S. S., Heaton, P. S., Rudin, R. S., and Schneider, E. C. 2012. Unraveling the IT Productivity Paradox — Lessons for Health Care. New England Journal of Medicine 366(24) 2243–2245.

Jones, S. S., Rudin, R. S., Perry, T., and Shekelle, P. G. 2014. Health Information Technology: An Updated Systematic Review with a Focus on Meaningful Use. Annals of Internal Medicine 160(1) 48–54.

Kane, G. C., and Alavi, M. 2008. Casting the Net: A Multimodal Network Perspective on User-System Interactions. Information Systems Research 19(3) 253–272.

Kern, L. M., Dhopeshwarkar, R., Barrón, Y., Wilcox, A., Pincus, H., and Kaushal, R. 2009. Measuring the Effects of Health Information Technology on Quality of Care: A Novel Set of Proposed Metrics for Electronic Quality Reporting. Joint Commission Journal on Quality and Patient Safety 35(7) 359–369.

Koenker, R., and Bassett, G. 1978. Regression Quantiles. Econometrica 46(1) 33–50.

Kohli, R., and Devaraj, S. 2003. Measuring Information Technology Payoff: A Meta-Analysis of Structural Variables in Firm-Level Empirical Research. Information Systems Research 14(2) 127–145.

Ma, J., and Stafford, R. S. 2005. Quality of US Outpatient Care: Temporal Changes and Racial/Ethnic Disparities. Archives of Internal Medicine 165(12) 1354–1361.

McCullough, J. S., Casey, M., Moscovice, I., and Prasad, S. 2010. Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality. Health Affairs 29(4) 647–654.

McCullough, J. S., Parente, S., and Town, R. 2013. Health Information Technology and Patient Outcomes: The Role of Organizational and Informational Complementarities. Technical Report (NBER Working Paper 18684).

Miller, A. R., and Tucker, C. E. 2011. Can Health Care Information Technology Save Babies? Journal of Political Economy 119(2) 289–324.


Mithas, S., and Krishnan, M. S. 2009. From Association to Causation Via a Potential Outcomes Approach. Information Systems Research 20(2) 295–313.

Ohlson, J. A. 1995. Earnings, Book Values, and Dividends in Equity Valuation. Contemporary Accounting Research 11(2) 661–687.

Ransbotham, S., and Overby, E. 2010. Does Information Technology Increase or Decrease Hospitals' Risk? An Empirical Examination of Computerized Physician Order Entry and Malpractice Claims. ICIS 2010 Proceedings.

Romano, M. J., and Stafford, R. S. 2011. Electronic Health Records and Clinical Decision Support Systems: Impact on National Ambulatory Care Quality. Archives of Internal Medicine 171(10) 897–903.

Rosenbaum, P. R., and Rubin, D. B. 1983. The Central Role of the Propensity Score in Observational Studies for Causal Effects. Biometrika 70(1) 41–55.

Rubin, H. R., Pronovost, P., and Diette, G. B. 2001. The Advantages and Disadvantages of Process-Based Measures of Health Care Quality. International Journal for Quality in Health Care 13(6) 469–474.

Sekhon, J. S. 2011. Multivariate and Propensity Score Matching Software with Automated Balance Optimization: The Matching Package for R. Journal of Statistical Software 42(7).

Shwartz, M., Cohen, A. B., Restuccia, J. D., Ren, Z. J., Labonte, A., Theokary, C., Kang, R., and Horwitt, J. 2011. How Well Can We Identify the High-Performing Hospital? Medical Care Research and Review 68(3) 290–310.

Snyder, B. 2013. How Kaiser Bet $4 Billion on Electronic Health Records -- and Won. InfoWorld.

Tambe, P., and Hitt, L. M. 2012. The Productivity of Information Technology Investments: New Evidence from IT Labor Data. Information Systems Research 23(3-part-1) 599–617.

Tobin, J. 1958. Estimation of Relationships for Limited Dependent Variables. Econometrica 26(1) 24–36.

Venkatesh, V., Morris, M. G., Davis, G. B., and Davis, F. D. 2003. User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly 27(3) 425–478.

Wejnert, B. 2002. Integrating Models of Diffusion of Innovations: A Conceptual Framework. Annual Review of Sociology 28(1) 297–326.

Wooldridge, J. M. 2002. Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge, MA.

Xue, M., Hitt, L. M., and Chen, P. 2011. Determinants and Outcomes of Internet Banking Adoption. Management Science 57(2) 291–307.