The Pennsylvania State University
The Graduate School
College of Education
ON THE TECHNICAL AND ALLOCATIVE EFFICIENCY OF
RESEARCH-INTENSIVE HIGHER EDUCATION INSTITUTIONS
A Thesis in
Higher Education
by
Carlo S. Salerno
Submitted in Partial Fulfillment of the Requirements
for the Degree of
Doctor of Philosophy
August 2002
We approve the thesis of Carlo S. Salerno.
Roger L. Geiger
Professor of Education
Thesis Advisor
Chair of Committee

J. Fredericks Volkwein
Professor of Education

Michael J. Dooris
Affiliate Assistant Professor of Education

Irwin Q. Feller
Professor of Economics

Dorothy H. Evensen
Associate Professor of Education
In Charge of Graduate Programs in Higher Education
ABSTRACT
Much has been written since the early 1990s on the topic of containing costs and
increasing productivity at higher education institutions in the United States. A review of
the literature however reveals remarkably little empirical evidence to verify underlying
claims of productive and cost inefficiency. Data envelopment analysis (DEA) is used to
assess, in terms of academic labor inputs, the extent to which research-intensive
universities are technically and allocatively efficient in the joint production of education
and research. In doing so this study contributes to the underdeveloped body of research
on US higher education efficiency by providing an economic understanding of how
concepts like institutional quality, competition, form of control, and the presence of
medical schools are associated with productive and cost efficient behavior.
TABLE OF CONTENTS
LIST OF FIGURES............................................................................................................ vi
LIST OF TABLES ............................................................................................................vii
ACKNOWLEDGMENTS................................................................................................viii
Chapter 1 ............................................................................................................................. 1
Historical Context ............................................................................................................ 1
The Need for and Importance of the Study ...................................................................... 6
Prior Cost and Production Studies .............................................................................. 6
Nonprofit literature .................................................................................................... 14
Labor vs. Capital ............................................................................................................ 17
Research question ........................................................................................................... 21
Chapter 2 ........................................................................................................................... 23
Returns to Scale .............................................................................................................. 24
Technical and Allocative Efficiency .............................................................................. 25
Academic Labor Inputs .................................................................................................. 30
Higher Education Outputs .............................................................................................. 34
A Framework for Analysis ............................................................................................. 37
Data Envelopment Analysis as a Method for Assessing Efficiency .............................. 45
Summary ........................................................................................................................ 52
Chapter 3 ........................................................................................................................... 54
Overview ........................................................................................................................ 54
Outputs ........................................................................................................................... 56
Inputs .............................................................................................................................. 62
Technique Applied in the Study ..................................................................................... 65
Analytical Techniques .................................................................................................... 72
Chapter 4 ........................................................................................................................... 79
Institutions analyzed in the technical efficiency model ................................................. 79
Technical Efficiency DEA Results ................................................................................ 86
Institutional Characteristics for the Allocative Efficiency Model ................................. 99
Allocative Efficiency DEA Results .............................................................................. 102
Summary ...................................................................................................................... 106
Chapter 5 ......................................................................................................................... 109
Initial Caveats ............................................................................................................... 109
DEA as an approach to measuring efficiency .............................................................. 111
Discussion .................................................................................................................... 113
Additional Considerations ............................................................................................ 124
Conclusion .................................................................................................................... 125
References ....................................................................................................................... 127
Appendix A - Technology, production and cost concepts in economics ....................... 140
Appendix B - Input/Output measures for tier 1.............................................................. 148
Appendix C - Input/Output measures for tier 2.............................................................. 150
Appendix D - Technical efficiency output for tier 1 ...................................................... 152
Appendix E - Technical efficiency output for tier 2....................................................... 154
Appendix F - Input/Output weights for tier 1 VRS model (000) ................................... 157
Appendix G - Input/Output weights for tier 2 VRS model (000)................................... 160
Appendix H - Benchmark values for tier 1 VRS model................................................. 165
Appendix I - Benchmark values for tier 2 VRS model .................................................. 168
LIST OF FIGURES
Figure 2.1 - Graphical Depiction of Technical Efficiency ............................................... 26
Figure 2.2 - Graphical Depiction of Allocative Efficiency .............................................. 28
Figure 2.3 - Technical efficiency and university's academic labor................................... 39
Figure 2.4 - Comparison of DEA and regression ............................................................. 49
Figure 2.5 - Effects of determining relative and not absolute efficiency in DEA............ 51
Figure 3.1 - Decomposing Efficiency............................................................................... 66
Figure 3.2 - Graphic Depiction of Slack Variables .......................................................... 77
Figure A.1 - Production Plans and Production Possibility Sets ..................................... 141
Figure A.2 - Monotonicity or Free-Disposability........................................................... 142
Figure A.3 - Convexity ................................................................................................... 144
Figure A.4 - Isoquant Map ............................................................................................. 145
LIST OF TABLES
Table 3.1 - Input/Output Vectors and Associated Variables ............................................ 55
Table 4.1 - Tier 1 Institutions by Carnegie Classification (CC)....................................... 82
Table 4.2 - Tier 2 Institutions by Carnegie Classification (CC)....................................... 82
Table 4.3 - Summary Statistics of Input/Output Measures for TE Model ....................... 86
Table 4.4 - Mean Technical Efficiency Scores................................................................. 87
Table 4.5 - Decomposition of Scale Efficiency Results................................................... 89
Table 4.6 - Distribution of Technical Efficiency Scores for VRS Model ........................ 90
Table 4.7 - Tier 1 Results Sorted by Institutional Control ............................................... 92
Table 4.8 - Tier 2 Results Sorted by Institutional Control ............................................... 93
Table 4.9 - Tier 1 Results Sorted by Presence of Medical Facilities................................ 94
Table 4.10 - Tier 2 Results Sorted by Presence of Medical Facilities.............................. 95
Table 4.11 - Mean Input/Output Weights for Technically Efficient Institutions (000) ... 96
Table 4.12 - Marginal Rates of Transformation ............................................................... 97
Table 4.13 - Marginal Productivity of Inputs to Outputs by Tier..................................... 98
Table 4.14 - Summary Statistics of Input/Output Measures for AE Model ................... 101
Table 4.15 - Disaggregated Overall Efficiency Scores for Public Institutions .............. 103
Table 4.16 - Rates of Transformation and Marginal Productivity for Public Sample.... 105
Table 5.1 - Benchmarks for Select Technically Inefficient Institutions......................... 118
ACKNOWLEDGMENTS
This effort would not have succeeded were it not for the unconditional support of
my mother and father. I owe a debt of gratitude to my thesis advisor, not only for being
an outstanding mentor, but also for instilling in me the passion to pursue academic
research. I also thank my friends and colleagues who offered invaluable feedback
whenever asked. Most importantly, I could not have done this without the endless supply
of support and patience from my wife. Finally, I want to thank my son for teaching me
how important an education really is.
Chapter 1
“Discovery consists of seeing what everybody has seen, and thinking what nobody has thought.”
- Albert Szent-Györgyi
Historical Context
For almost 40 years, between the mid-1950s and the early-1990s, higher
education enjoyed the prosperity of being a growth industry. From 1955 to 1974 new
institutions opened at a rate of one every two weeks1 and existing institutions pursued
more major capital additions than in the prior 200 years combined (Rush, 1992). Even
when the capital boom finally subsided in the late 1970s and absolute decline hit several
years later, higher education experienced a "second wind" of growth extending through
the 1980s. Growing enrollments by non-traditional students and a relatively strong
economy fostered the development of new academic programs and a host of new non-
academic services (Clotfelter, 1996).
1 Many of the institutions opening in the latter half of this period were community colleges.
In order to pay for this growth, institutions required ever-increasing amounts of
revenue. For public institutions, this was realized mainly through annual increases in
state appropriations and at private universities through tuition increases and generous
philanthropy. Yet in the early 1990s the balloon of sustainable growth finally popped. A
wave of budget reductions and financial belt-tightening forced many institutions around
the country into a state of retrenchment: freezing capital projects, deferring equipment
purchases and capital upgrades, and freezing pay raises (Phillips, Morell, & Chronister,
1996; Layzell, 1992).
With the recession of the early 1990s as a backdrop, growing public concern over
rising college costs finally reached a watershed. During the 1980s, tuition increases at
private institutions annually outpaced the rate of inflation yet persisted, largely because
of parallel increases in household incomes. By 1993, with the economy still
sluggish, tuition at private universities was still increasing while endowment levels
reached record levels, generating public outrage over these institutions' pricing policies
(Ehrenberg, 2000). At public institutions, leaders were facing lower appropriation levels
stemming not only from smaller statewide budgets, but also increasing competition from
other social programs such as healthcare and corrections. Operating under the mantra
"do more with less," states began to seriously explore cost escalation and productive
efficiency issues in all public programs, and particularly within higher education. The
heavy reliance by research universities on graduate teaching assistants to teach lower-
level courses, together with legislators' views that faculty care only about graduate
education and research, brought about funding cuts and a wave of faculty productivity
studies in an effort to curb costs while maintaining quality (Layzell, 1992).
With the federal government continuing to commit more and more funding to
higher education, it was only a matter of time before prudence would demand that
members of Congress, too, face the growing concern over rising college costs. In 1980, estimated
federal support for higher education totaled approximately $55-billion.2 By 1999 that
figure had swelled to just over $88-billion even though federal on-budget3 support for
postsecondary education actually fell during this time from $22.2-billion to $18.2-billion.
What stimulated the growth were policy shifts on two different fronts: a significant
commitment by the federal government for tuition support and, to a lesser extent,
institutional support vis-à-vis research. During this time indirect support in the form of
student aid rose in real terms by $23.1 billion while appropriations for academic research
grew by $7.6 billion.
In the face of increasing federal support for higher education, public concerns that
colleges and universities cared little about containing costs and increasing productivity
continued to grow. Institutions were raising tuition while students and their families were
shouldering greater portions of financing a collegiate education. At the same time
universities embarked on billion dollar capital campaigns and lobbied state governments
for greater annual appropriations in the name of maintaining existing levels of quality.
When the National Commission on the Cost of Higher Education (NCCHE)4 released the
tentative results of their study into the nature of college costs, much to the dismay of
lawmakers and the general public, the panel concluded that for the most part American
higher education was actually a bargain. As one panel member noted, "there is a lot
2 Figures represent the total of on-budget, off-budget, and estimated federal tax expenditures in constant 1999 dollars. Data are from the U.S. Department of Education's National Center for Education Statistics.
3 On-budget funds, as defined by NCES, are funds specifically earmarked for federal programs.
4 The NCCHE panel was created in 1997 by Republican lawmakers to study rising college costs.
wrong with higher education… but the one thing colleges can't be accused of is gouging
the public" (Anderson cited in Burd, 1997).
For the most part, public concerns with higher education costs and productivity
seem to have been sparked by three converging forces: demand-side cost escalation, a
depressed economy, and a shift in the burden of financing a college education toward
students and their families. When the public, and eventually legislators, began seriously
looking at why it cost so much to send their children to college their initial perceptions
were already tempered by largely negative images of university education. Books like
Charles Sykes' Profscam (1988) inflamed public opinion with anecdotal stories of
professors only teaching late-morning classes and frequently missing office hours. In
doing so they could devote more time to the activities they truly enjoyed: collaborating
with their more intellectual graduate students in their departments and focusing on
relatively obscure research like "Evolution of the Potholder: From Technology to Popular
Art" (p. 102). At the same time, the picture put forth of undergraduate education was of
students crammed into 500-person lecture halls being taught by non-native speakers of
English. As tuition continued to rise, especially at private institutions, while at the same
time public institutions continued to seek greater levels of state appropriations, it seemed
obvious to infer that universities were productively inefficient and cared little about
containing costs.
The often complex nature of advancing and transferring knowledge made it
difficult for those in the academy to dispel negative perceptions even with studies
showing faculty members at public research universities working on average over 55
hours per week (Jordan, 1994). In response, the past decade has borne witness to an
outpouring of literature within the academy on cost-containment and increasing
productivity in higher education. Guided largely by cost escalation theories with names
like "cost disease," the "revenue theory of cost" (Bowen, 1980), and the "administrative
lattice and academic ratchet" (Massy and Zemsky, 1994), an underlying tone persists that
universities care little, if at all, about issues of efficiency in the absence of direct oversight
or externally imposed incentives.
Yet any approach seeking answers to questions of institutional productivity and
costs must eventually address the technical aspects with which universities convert inputs
into outputs as well as how the market prices of those inputs influence resource allocation
decisions. As the response to the NCCHE report suggests, to this day the cost structure
of American higher education remains something of an enigma to both policymakers and
the public. Nor do the images of university behavior put forth by Charles Sykes
adequately capture the technical efficiency with which universities utilize their academic
labor inputs, like faculty and graduate students, to jointly produce undergraduate
education and research.
The following sections in this chapter review prior cost, production, and
efficiency studies of higher education institutions and identify a gap in the literature
between perceptions of cost and productive inefficiency and a dearth of empirical and
theoretical work to justify such claims. To that end this study poses the research question
"in terms of academic labor, to what extent are research-intensive universities technically
and allocatively efficient in the joint production of education and research?"
The Need for and Importance of the Study
A review of the literature on higher education production and costs yields the
following problem:
The "conventional wisdom" (Brinkman, 1990) is that universities do not seek to
minimize costs and are inefficient from a production standpoint. Is this view
correct? Do existing empirical studies of higher education costs or production
and economic theories of nonprofit organizations justify this view?
Prior Cost and Production Studies
Cost studies of colleges and universities first began to appear en masse in the late
1960s. While contributing to the understanding of higher education costs, the usefulness
of the majority of these studies5 proved limited, however, by the assumption that
institutions produced only a single output (Cohn, Rhine, and Santos, 1989). With advances occurring
during the 1980s and early-1990s in econometric techniques and industrial organization
theories of multi-product firms, a number of higher education studies employing multi-
product cost functions have emerged (see Toutkoushian, 1999; King, 1997; Dundar and
Lewis, 1995; Glass, McKillop, and Hyndman, 1995; de Groot, McMahon, and Volkwein,
5 See Dundar and Lewis (1995) and Cohn et al. (1989) for extensive literature reviews of single output cost studies of higher education institutions.
1991; Cohn, et al., 1989), contributing significantly to the literature on the cost structure
of higher education institutions (Dundar and Lewis, 1995).
Even though these empirical analyses capture important aspects of university cost
behavior, like marginal and average costs or economies of scale and scope (Hoenack,
1990), their results offer little in the way of empirically verifying economic theories tying
universities� cost behavior to the technical aspects of their production process. In the
strictest sense, the economic concept of the cost function implicitly assumes that firms
seek to minimize their costs of production for a given output level. In essence the
function traces out what is called the expansion path of cost-minimizing choices for a
firm in terms of output levels and input prices.6 As such the statistical coefficients
estimated in these studies reflect the average behavior of an assumed to be efficient group
of institutions (Brinkman, 1990; Cohn, 1979; Carlson, 1975).
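In symbols, this cost-minimization assumption is the standard textbook definition of the cost function (a generic formulation for illustration, not one estimated in the studies cited):

```latex
C(y, w) \;=\; \min_{x \ge 0} \;\bigl\{\, w \cdot x \;:\; f(x) \ge y \,\bigr\}
```

where w is the vector of input prices, y the required output level, and f the production function; the expansion path is the locus of cost-minimizing input bundles x*(y, w) traced out as y varies. Estimating C statistically therefore presumes every observed institution sits on this minimum.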
An alternative economic tool frequently employed to examine production in
higher education institutions is the production function. This mathematical construct
depicts the maximum amount of output a firm can produce using different combinations
of inputs (Nicholson, 1995). A review of the higher education production literature
reveals widespread support for the view that the production process at universities7 is
largely unknown. In one of the most thorough surveys of the topic David Hopkins (1990)
reviews over 30 studies and summarizes the general consensus by asserting,
It would be well to observe that no researcher to date has successfully
characterized the [higher education] production function… The reasons for this are
many, but all boil down to the fact that the technologies of instruction, research,
6 This concept is explicated in more detail in chapter 2.
7 In fact it is well established in the literature that this applies to nearly all aspects of education.
and public service are poorly understood, and the tools for estimating the requisite
functional forms and coefficients are woefully inadequate to the task. To be more
specific, not only are we lacking appropriate measures of quality, but the very
nature of the interactions between, for example, teaching and research is difficult
to express in mathematical terms (p. 12).
In a similar survey, Gilmore and To (1992) conclude that existing conceptualizations of
academic productivity "provide useful commentary and theoretical discussion, [but] do
not offer any empirical evaluations" (p. 38). More pragmatically, they argue that the
single-output framework commonly utilized often fails to take into account the need to
allocate inputs that are used to produce several outputs (p. 38).
It is well established in the economic literature that the production function and
the cost function are actually two different ways of examining the same phenomenon. As
a result, production functions also subsume the notion that the firm, or firms, under study
are operating optimally. First put forth by Shephard (1953), the "duality" relationship, as
it is called, is the cornerstone of optimization theory in production economics.
In sum, the production function has limited usefulness as an approach to
understanding institutional efficiency. Foremost, the fact that the higher education
production process is still best characterized as a "black box" (Massy, 1996, p. 19)
implies that prior research has yet to successfully identify the relevant technological
relationships that any efficiency study would actually seek to test. More importantly
though, from an empirical standpoint duality implies that production functions, like cost
functions, already assume that institutions are operating efficiently.
A third type of analysis focuses specifically on identifying efficient behavior of
institutions. In the literature this is referred to as "frontier analysis." As its name
suggests, the goal of this approach is to determine a set of best-practice, or relatively
efficient, institutions that form a frontier against which other, inefficient institutions can
be benchmarked. The types of approaches applied can be grouped into two general
categories: a) statistically-oriented analyses and b) those based on non-parametric linear
programming techniques.
Neither type of analysis is used widely in the study of education in general and
even less in higher education. The statistically based approach is commonly referred to
as stochastic frontier analysis (SFA). In SFA, a parametric model is specified and
estimated similar to that of a production function. What differentiates it from the more
traditional estimation techniques is that the error term is divided into two components,
one assumed to be a normally distributed stochastic error term and the other a positive,
half-normally distributed term associated with technical inefficiency. The SFA technique
is an adaptation of an earlier type of analysis called deterministic frontier analysis, which
assumed the entire error term captured inefficiency. The extent to which SFA permeates
the higher education literature is evident in Worthington's (2001) empirical survey of
frontier analyses in education. Of the handful of SFA analyses he identifies, none
focused on higher education institutions. Independently, only studies by Vink (1997) of
higher education in three European countries and Izadi, Johnes, Oskrochi, and
Crouchley's (2002) study of higher education institutions in the UK were found to use
this method.
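The composed-error idea at the heart of SFA can be illustrated with a short simulation. This is a sketch of the error structure only, not of any model estimated in the studies above; the scale parameters are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
sigma_v, sigma_u = 0.2, 0.4  # illustrative noise and inefficiency scales

# Symmetric, normally distributed statistical noise
v = rng.normal(0.0, sigma_v, n)

# One-sided, half-normally distributed technical inefficiency (u >= 0)
u = np.abs(rng.normal(0.0, sigma_u, n))

# Composed error of a stochastic production frontier: eps = v - u,
# so observed output sits at or below the frontier apart from noise
eps = v - u

# For a half-normal term, E[u] = sigma_u * sqrt(2/pi), hence E[eps] < 0
expected_mean = -sigma_u * np.sqrt(2.0 / np.pi)
print(eps.mean(), expected_mean)
```

The negative mean of the composed error is what distinguishes a frontier from an average-practice regression: ordinary least squares would absorb E[u] into the intercept, while SFA recovers it as inefficiency.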
The linear programming approach is alternatively and more popularly referred to
as data envelopment analysis (DEA). First put forth by Farrell (1957) and more fully
developed by Charnes, Cooper, and Rhodes (1978), DEA is a linear-programming
technique that creates a convex hull, or envelope, around observable data. After
determining a frontier of relatively efficient firms, a variety of inefficiency measures can
be developed for each institution in the sample.8 The two primary measures are technical
and allocative efficiency. The former loosely refers to how well institutions minimize the
use of their physical inputs and the latter how well they minimize costs.
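The input-oriented, constant-returns (CCR) envelopment problem that yields these Farrell-type technical efficiency scores can be sketched with an off-the-shelf LP solver. The function name `dea_ccr_input` and the two-institution data are illustrative assumptions, not drawn from this study:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR technical efficiency for each DMU.
    X: (n, m) inputs; Y: (n, s) outputs. Returns scores in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]  # minimize theta
        # Inputs:  sum_j lambda_j * x_ij <= theta * x_io
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        # Outputs: sum_j lambda_j * y_rj >= y_ro  (as <= after negation)
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(0, None)] * (n + 1),
                      method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Toy data: DMU A uses 1 input unit, DMU B uses 2, for the same output
X = np.array([[1.0], [2.0]])
Y = np.array([[1.0], [1.0]])
print(dea_ccr_input(X, Y))  # A is on the frontier; B could contract inputs by half
```

A score of 1 marks a frontier institution; a score of, say, 0.5 means the same outputs could in principle be produced with half the observed inputs, relative to the best observed practice.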
A dearth of literature in this area given that DEA is specifically designed to assess
efficiency attests to a need for further research. Surveys of the DEA literature (Avkiran,
2001; Worthington, 2001) suggest there are only about 15 studies applying DEA to
higher education. Approximately half of these use the department as the unit-of-analysis
(e.g., Siegel, Waldman, and Link, 1999; Chen, 1997; Beasley, 1995; Johnes and Johnes,
1995; Ahn, Arnold, Charnes, Cooper, 1989). Of those examining institutional efficiency,
nearly all focus on higher education systems outside the United States, including Canada
(McMillan and Datta, 1998), the Netherlands (Jongbloed, Goudriaan and van Ingen,
1998; Jongbloed and Koelman, 1996; Jongbloed, Koelman, Goudriaan, de Groot, Haring
and van Ingen, 1994; Jongbloed and Vink, 1994), Australia (Avkiran, 2001; Coelli, 1997)
and the United Kingdom (Athanassopoulos and Shale, 1997). Less than a handful of
studies were found that applied DEA to higher education institutions in the United States.
The earliest of these was an analysis done by Daryl Carlson (1975) using data
from the early-1970s. While his work technically predates DEA, he utilized linear
programming and produced Farrell-type measures of efficiency. He presented technical
8 A more detailed discussion of DEA is presented in chapters 2 and 3.
and allocative efficiency scores for two broad classes of institutions: highly selective
private liberal arts colleges and public comprehensive universities. Addressing the
question of technical efficiency, he found "quite dramatic" (p. 51) differences between
average and frontier institutions in terms of institutions' costs per student, largely
suggesting many of the institutions in his sample were not technically efficient. He also
found that institutions considered efficient under simplified input and output measures
became inefficient when a more complex set of variables was introduced.
A second study by Ahn, Charnes, and Cooper (1988) focused specifically on
technical efficiency for 161 public and private doctoral-granting institutions, while
controlling for the presence of medical schools using 1984-85 data. When results were
compared by institutional control, they found public institutions without medical schools
to be technically more efficient than their private counterparts (mean = 70% and 64%
respectively). When comparing institutions with medical schools, publics were shown to
be, on average, 84% technically efficient and for privates 77%. However, when testing
whether the mean scores differ for the two types of institutions, they were only able to
verify differences at the α = .10 significance level.
Calculating allocative, or price, efficiency requires knowledge of an
institution's relative input prices. As such, Ahn et al. eschewed the question of
allocative efficiency due to lack of data. Carlson, on the other hand, did compute price
efficiency measures but did so using nominal input prices. Additionally, in comparing
the allocative and technical efficiency scores computed for each institution he offered
contradictory evidence, in that some universities were shown to be allocatively
efficient but not technically efficient.9
A third study by Goudriaan and de Groot (1991) examined the cost efficiency of
147 public and private universities (49 privates and 98 publics) as part of a larger study
on the effect of state regulation on cost efficiency. Using only faculty salaries as an
input, they calculated a mean cost efficiency score of 77% for all institutions in their
sample. When public and private universities were examined separately, publics were
shown to be slightly more cost-efficient than privates (79% versus 74%).
Carlson's study was the most comprehensive in scope, yet was done nearly 30
years ago, which limits the relevance of the results today. In terms of the sample, neither
comprehensive universities nor private liberal arts colleges have significant research
orientations; hence it is difficult to consider efficiency in the light of the joint production
process. Finally, the study does not account for economies of scale. Very small
institutions, in terms of inputs and outputs, are jointly compared to relatively large
institutions.
The main limitation to the Ahn, Charnes, and Cooper study is that it employs
cost-based input measures and a mixture of physical and cost-based output measures. As
a result the efficiency scores depict the extent to which institutions minimize
instructional, physical, and overhead expenditures in the production of education and
federal grants and contracts, which was their proxy measure for research. The ability to
determine technical efficiency is limited in that it is difficult to affix an economic
interpretation to the input tradeoffs in the production of outputs. Additionally, as Byrnes
9 This is discussed further in chapter 2. In short though, allocative efficiency essentially subsumes the definition of technical efficiency by including the additional constraint that the rate of technical substitution between any two inputs be equal to the rate at which those inputs can be traded in the market.
and Valdmanis (1994) note, "studies that focus only on technical inefficiencies in
production may lead to erroneous conclusions as to the nature of cost-minimizing
inefficiency" (p. 130).
A second limitation stems from the assumption that institutions in their sample are
operating under constant returns to scale. In chapter 3 it is shown that scale efficiency is
actually a subcomponent of technical efficiency. Yet the way their samples are
constructed precludes any analysis of scale efficiency. Medical schools are separated out
because they use significantly more inputs and produce greater levels of output, which is exactly the type
of relationship scale efficiency measures are designed to test. Though their findings
suggest that mean efficiency scores are higher at universities with medical schools, it is
not possible to determine the extent to which the difference is attributable to scale size or
because they have medical schools.
Goudriaan and de Groot's analysis focuses only on cost efficiency. While the
narrow scope of the study produces a useful measure of cost efficiency it does not
distinguish between inefficiencies that may result from using an inappropriate physical
input-mix. For example, comparing two institutions each having a $50-million salary
outlay, it may be the case that one uses the outlay to fund 1,000 faculty members while
the other funds 1,500 faculty members.
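The point can be made concrete with simple arithmetic; the figures below are the hypothetical ones from the example, not data from this study:

```python
# Hypothetical example: two institutions with identical salary outlays but
# different physical input mixes (faculty headcounts), so a purely cost-based
# input measure cannot distinguish between them.
outlay_a = outlay_b = 50_000_000  # annual faculty salary outlay, dollars

faculty_a = 1_000
faculty_b = 1_500

# Identical outlays: a cost-based measure treats the two as the same producer.
assert outlay_a == outlay_b

# Yet the implied average input prices differ substantially, so the same
# dollar figure conceals two very different physical input mixes.
avg_salary_a = outlay_a / faculty_a  # $50,000 per faculty member
avg_salary_b = outlay_b / faculty_b  # roughly $33,333 per faculty member
assert avg_salary_a > avg_salary_b
```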
To sum, in all three studies the samples of institutions are either ambiguous,
inappropriate for examining the joint production process, or are subsorted for analysis
based on suspect reasons. Individually the results offer interesting evidence on the cost
efficiency of various types of higher education institutions. Taken together though, none
take a rigorous approach that considers input tradeoffs or fully accounts for economic
efficiency by disaggregating it into technical and allocative components.
Nonprofit literature
Serious study of nonprofit organizations is a relatively recent addition to the
economic literature (Gassler, 1986). Unlike the long-standing neoclassical "theory of the
firm," no such nonprofit parallel exists. Instead, the theories economists put forth about
nonprofits primarily attempt to explain why these firms tend to rise and survive alongside
similar for-profit firms. In addition, empirical analyses often focus heavily on the
healthcare industry where the co-existence of for-profit and non-profit hospitals offers the
opportunity to empirically control for one group's status (Ben-Ner and Gui, 1993; Rose-
Ackerman, 1986). As such, the literature offers very few instances where economists'
perspectives on nonprofit cost and production efficiency are applied to the higher
education industry.
The working hypothesis economists studying non-profits employ is that non-profit
firms, for behavioral and market-force reasons, are likely to be less technically
and cost-efficient than for-profit firms. This perspective rests on three views, each
of which is discussed below. The most widely accepted position is based on the
observation that market pressures typically encountered by for-profit firms do not seem to
exist in the non-profit sector (Winston, 1999; Rothschild and White, 1993). In the
parlance of economists this is considered a "property rights" problem. As Estelle James
and Susan Rose-Ackerman (1986) explain, in for-profit firms, the manager as the residual
claimant has an incentive to minimize "shirking" because they receive the profits they
generate.10 Moreover, even in the absence of this incentive the fact that for-profit firms
potentially risk "takeovers" in the capital market still assures that managers will seek to
minimize shirking (p. 37).
Yet these market characteristics do not emerge in the nonprofit sector. There are
no capital markets and the residual claimant is the organization itself. It follows then that
nonprofit firms' managers have little incentive to minimize waste and thus should
generate relatively higher costs than for-profits. However, in James and Rose-
Ackerman's review11 of the empirical economic literature testing this hypothesis they
observe that the available evidence is "far from conclusive" to support this claim (p. 38).
The second rationale is largely a product of Estelle James' work (1990, 1986,
1978) on non-profit firms' ability to engage in cross-subsidization. She argues that non-
profits possess preferences about the goods or services they produce and that some of
these goods provide utility, or value to the firm, while others do not. As a result, non-
profits are driven to cross-subsidize, or "carry out a set of profitable activities that do not
yield utility per se to derive revenues they can then spend on utility-maximizing activities
that do not cover their own costs" (1978, p. 87). In one of the few examples from the
literature focusing on higher education institutions, she proposes that research
universities earn "profits" from producing undergraduate education that they use in turn
to subsidize costly, loss-making services like graduate education and research, which the
universities derive value from in the form of prestige.
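James's argument amounts to a budget identity under the nonprofit break-even constraint; it can be sketched with hypothetical figures (none drawn from actual institutions):

```python
# Toy sketch of James's cross-subsidization argument (hypothetical figures).
# Undergraduate education earns a surplus; research and graduate education,
# the activities the university derives utility (prestige) from, run deficits;
# the institution breaks even overall.
undergrad_revenue, undergrad_cost = 120.0, 80.0  # the "profitable" activity
research_revenue, research_cost = 40.0, 70.0     # utility-yielding, loss-making
grad_revenue, grad_cost = 20.0, 30.0             # utility-yielding, loss-making

surplus = undergrad_revenue - undergrad_cost                               # 40
deficit = (research_cost - research_revenue) + (grad_cost - grad_revenue)  # 40

# Long-run nonprofit constraint: revenues equal costs overall.
total_revenue = undergrad_revenue + research_revenue + grad_revenue
total_cost = undergrad_cost + research_cost + grad_cost
assert surplus == deficit
assert total_revenue == total_cost
```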
10 If the manager of a firm knows that they will receive the profits from the firm, they have an incentive to maximize revenue and minimize costs in order to reap the greatest amount of profit they can.
11 Ben-Ner and Gui (1993) refer to James and Rose-Ackerman's The Nonprofit Enterprise in Market Economics as one of the most thorough volumes surveying the non-profit literature.
Rothschild and White (1993) however challenge the conclusiveness of this claim.
The rationale they put forth is based on the modern definition of cross-subsidization
advanced by Faulhaber (1975). This definition states that, given some firm serving multiple
customers, cross-subsidization occurs only if it is possible for another firm to serve a
subset of these customers and make a profit (p. 15). As evidence against James'
reasoning, the authors point to the fact that research universities serve multiple
constituents like undergraduates, graduates, and consumers of research and still survive
in competition with institutions like liberal arts colleges that only serve undergraduates
(p. 14-15).
Finally some research suggests that nonprofits may possess technological
preferences or a desire to use some input combinations over others, resulting in relatively
higher cost curves. If inputs enter into the arguments of the objective function and the
nonprofit manager prefers to use labor rather than capital, the solution to the constrained
utility-maximization problem suggests an inefficiently large amount of labor will be
employed. As an example, James and Rose-Ackerman offer the
preference professors may have to teach in small classes, though they temper this claim
with the assertion that, "the empirical evidence is far from clear" (1986, p. 39).
In general, economic models of nonprofit behavior all possess a relatively
common structure. As profit-maximization is clearly not a tenable goal, researchers often
consider some measure of prestige as an alternative objective (Clotfelter, 1996; James,
1990). The typical model is structured under the assumptions that a non-profit firm seeks
to maximize some utility function12 subject to the constraint that, in the long run,
12 In a broader sense, firms do not always seek to maximize utility. As such economists frequently refer to this in more general terms as an objective function.
revenues equal costs. Where the models differ from each other is in which arguments
researchers perceive should enter into the firm's utility function. Various models
suggest utility is obtained from the outputs firms produce (Lakdawalla and Philipson,
1998; Gassler, 1986; Newhouse, 1970) while others argue that firms derive utility from
their inputs (Gassler, 1986; Garvin, 1980). Though the objective functions may vary in
this respect, the equilibrium conditions derived in all of these instances suggest that,
given their goals, nonprofits are "compelled" to seek to minimize costs (Goudriaan and
de Groot, 1991).
Labor vs. Capital
There is little doubt that the production process in higher education institutions is
a labor-intensive activity. Cost and production studies frequently use instructional costs
as a proxy for academic labor because they are largely composed of faculty salaries
(Goudriaan and de Groot, 1991; Cohn et al., 1989). In addition, researchers modeling
higher education production tend to put considerable weight (e.g., King, 1997 and
Dundar and Lewis, 1994) on academic labor as an input. In Dundar and Lewis' (1994)
study, one of the more recent and most comprehensive estimations of the higher
education production function, they claim "quality students and quality faculty ... are the
major cog driving educational production" (p. 209). Others like Leslie, Oaxaca, and
Rhoades (2001) even go so far as to claim human capital is a university's "only real
asset" (p. 261).
Yet physical capital is an important asset as well, particularly in the production of
research. One study even suggests that capital expenditures may represent up to 40% of
institutions' total costs (Winston, 2000). Especially in research laboratories and the
physical or biological sciences, laboratory space and state-of-the-art equipment make up
a significant portion of some departments' budgets.
Unfortunately, obtaining meaningful capital measures for higher education
institutions is extremely difficult, and for this reason they are not included in this study. To the
economist, costs of capital represent rental rates, or the cost universities would incur if
they were to rent the resources in the market. They reflect what Winston (2000)
describes as three unambiguous components: the current replacement value of the capital
stock, the real economic depreciation incurred, and the opportunity cost of tying up those
resources in their current form (p. 38). For the most part it is very difficult to separate
these components from the accountancy framework they are recorded in. An example
posed by Winston best illustrates the nature of the problem.
Accounting procedures represent the value of a school's capital stock by its
historic, or book, value and not its current value. A university that spent $10,000 to
build a general-purpose classroom building in the early 1900s, then, would still have that
building valued on its books today at $10,000 rather than at what it could be rented for in
the capital market today. At the same time, depreciation schedules exist as a tax vehicle
to capture the economic wear that occurs on capital over its usable life. The basis for this
depreciation, however, is the book value and not the replacement value of the capital
stock. Finally, not all higher education institutions maintain their books using the same
accounting techniques. This lack of uniformity makes it difficult, even if one has access
to individual institutions' balance sheets, to obtain consistent measures.
Conceptually, universities possess a great deal of flexibility in annually assigning
teaching loads, granting faculty release time for research, and determining levels of other
labor inputs such as adjunct faculty or graduate teaching assistants. The significant
majority of a university's capital however is fixed, mostly in land and a wide array of
classroom, research, and administration buildings. Moreover, in terms of capital and
labor tradeoffs, higher education is not an industry that substitutes faculty members with
classroom buildings or computers. While advances in technology spur new approaches
to educating students that rely less heavily on academic labor, such as distance education,
the preponderance of students at research and doctoral institutions are still taught on
campus. In this regard, capital may be perceived more as a complement to academic
labor rather than as a substitute.
It is this complementary nature that leads to the main problem with not including
some capital measure in the analysis. Specifically, it fails to account for differences
across institutions, at any given point in time, in the amount of physical resources any
particular institution has on hand. As an example, consider two institutions employing
the same number of faculty and producing the same number of publications. Assume that
one of the institutions has double the available capital of the other. Ceteris paribus, the
institution using less capital is the more efficient producer. Yet, in lieu of some indicator
of available capital, the two institutions would have been regarded as equally efficient.
The problem is further complicated by the fact that the direction of
the bias from omitting capital is not generally known in advance. The concept of
diminishing marginal productivity of labor suggests that, given a fixed amount of capital,
at some point the marginal productivity of each additional worker will fall as each worker
reduces the amount of usable capital per person. If an institution has a large (small)
capital stock relative to the number of workers available then the marginal productivity of
an added worker will tend to rise (fall). This however is something that must be
empirically estimated to determine where a particular institution lies on the marginal
productivity curve.
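The mechanics can be illustrated with a toy computation. The Cobb-Douglas technology and the parameter values below are assumptions made purely for illustration, not forms estimated in this study:

```python
# Diminishing marginal productivity of labor with capital held fixed,
# illustrated with an assumed Cobb-Douglas technology Y = K**a * L**b.
K = 100.0  # fixed capital stock

def output(L, K=K, a=0.5, b=0.5):
    return (K ** a) * (L ** b)

# Marginal product of each successive worker: as workers are added against
# a fixed capital stock, usable capital per person falls, and with it the
# marginal product of the next worker.
mp = [output(L + 1) - output(L) for L in range(1, 5)]

# Each successive marginal product is smaller than the one before it.
assert all(earlier > later for earlier, later in zip(mp, mp[1:]))
```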
Where not considering some capital measure is likely to have the greatest impact
is in the project capital used in the production of research. In lieu of a reliable measure one can
arguably impose an assumption based on the observation that most research performed at
higher education institutions is externally sponsored through federal and state support,
industry, and philanthropy.13 In most cases faculty members securing external funding
are required to include a budget outlining the financial resources necessary to obtain
capital for the research they intend to pursue. It is plausible to assume that the amount of
project capital employed should be commensurate with the research being performed.
In the end, the decision comes down to choosing one of two equally unattractive
options: a) accepting the limitations that go with omitting a potentially relevant variable
from the analysis or b) using a suspect measure and drawing potentially erroneous
conclusions based on it. This study takes the first approach on the belief that using an
unreliable measure not only compromises the validity of the other results (as the first
option does) but, by introducing a variable whose potential biases are unknown, also
unnecessarily complicates the study design and analysis.
Research question
In spite of popular perceptions that colleges and universities do not pursue
efficient production or cost practices, a review of the literature reveals remarkably little
empirical evidence to either support or refute assertions of allocative and technical
inefficiency at research universities. Two of the most commonly used estimation
techniques in the study of higher education institutions, production and cost functions,
implicitly assume efficient behavior. Of the few studies found that specifically assess
efficient production in US higher education institutions, the variety of approaches and
differing choices of input and output measures suggest little consensus over how to
appropriately and comprehensively capture technical or cost efficiency, much less both.
Coupled with the dearth of empirical studies is the absence of any theory to
guide efforts at measuring efficiency. In terms of the economic literature, theories of
nonprofits tend to focus on why these institutions form rather than on behavioral aspects.
Hypotheses assert that a lack of market mechanisms and preferences over using particular
input combinations prohibit efficient behavior. However, when comparing non-profits'
cost functions with those of similar for-profit firms, surveys of the literature conclude that the
empirical evidence is inconsistent as to whether non-profits are any less allocatively or
technically efficient. At the same time, while economic models of nonprofit behavior
vary in their formulation, the solution to the constrained optimization problem suggests
non-profits do operate on their respective expansion paths.
13 Data collected from the National Science Foundation's WEBCASPAR database shows that between 1990 and 1997, institutionally sponsored research and development expenditures only accounted for approximately 18% of all R&D expenditures at colleges and universities.
It is this gap between perceptions of productive and cost inefficiency and
inconclusive theoretical and empirical research to verify such claims that motivates the
research question posed in this study:
In terms of academic labor inputs, to what extent are research-intensive
universities technically and allocatively efficient in the joint production of
education and research?
Chapter 2 is divided into four main sections. The first outlines the economics
behind the four types of efficiency considered in this study: technical, scale, allocative,
and overall efficiency. The second section addresses academic labor inputs and issues
surrounding how they are allocated to produce education and research. The third section
develops a framework for analysis to understand why universities may choose to engage
in efficient behavior as well as where inefficiencies may arise. In the last section, data
envelopment analysis as an empirical approach to testing institutional efficiency is
discussed. Chapter 3 describes the methodological aspects of the study including variable
constructions and how efficiency measures are calculated. In chapter 4 the results from
the DEA analyses are presented followed by a discussion of the results and their
implications in chapter 5.
Chapter 2
“If I have seen further than others, it is by standing on the shoulders of giants.”
– Sir Isaac Newton
To produce something may be defined as taking some form of inputs and, through
a process, transforming them into some product: an output. The process used in this
transformation is what is referred to as the technology employed. For example, given
some wood, graphite, rubber, and tin it is possible to convert these inputs, using some
technology, into the output commonly known as a pencil. In the case of universities,
using inputs like faculty, computers, and books it is possible through different
technologies to convert these inputs into the output "new knowledge."
Several economic relationships, concepts, and definitions related to productive
and cost efficiency recur frequently in this chapter and throughout the remainder of this
paper. Though necessary to understanding the efficiency concepts described in the
following sections, this background material consists of introductory-level microeconomics
described using set theory, calculus, and algebra. For those interested, or who require such a background,
these topics are treated in Appendix A. In the first section of this chapter the goal is to
characterize the various forms of productive and cost inefficiency evaluated in this study.
Returns to Scale
One of the first production economics queries, posed by Adam Smith back in the
18th century, was how levels of output change when inputs are increased (Nicholson,
1995). In a conceptual experiment Smith hypothesized that in some cases doubling all
inputs would result in a doubling of the output. However for other cases, doubling all the
inputs may allow the firm to engage in certain practices like specialization and division of
labor, allowing it to more than double production. Conversely, he surmised that if a firm
were very large to begin with, doubling the inputs may make it difficult for managers to
effectively oversee operations and hence output may increase by less than double.
These scenarios are conceptual representations of what economists refer to as
returns to scale. Formally, given some production function Y = f(K,L), a firm is said to
be operating at constant returns to scale if scaling up all of its inputs by some factor s
implies f(sK,sL) = s(f(K,L)) = sY. If scaling up inputs results in a less than proportional
increase in output, f(sK,sL) < s(f(K,L)) = sY, then the firm is said to be operating at
decreasing returns to scale. Finally, if scaling up inputs entails a greater than proportional
increase in output, f(sK,sL) > s(f(K,L)) = sY, then it is said to be operating at increasing
returns to scale.
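These three cases can be verified numerically. The sketch below assumes a Cobb-Douglas technology f(K, L) = K^a * L^b purely for illustration; nothing in the study commits to this functional form:

```python
# Returns to scale for an assumed Cobb-Douglas technology f(K, L) = K**a * L**b:
# constant when a + b = 1, decreasing when a + b < 1, increasing when a + b > 1.
def f(K, L, a, b):
    return K ** a * L ** b

K, L, s = 4.0, 9.0, 2.0  # arbitrary input bundle, scaled up by factor s

constant = f(s * K, s * L, 0.5, 0.5)    # a + b = 1: output exactly doubles
decreasing = f(s * K, s * L, 0.3, 0.5)  # a + b < 1: output less than doubles
increasing = f(s * K, s * L, 0.6, 0.6)  # a + b > 1: output more than doubles

assert abs(constant - s * f(K, L, 0.5, 0.5)) < 1e-9
assert decreasing < s * f(K, L, 0.3, 0.5)
assert increasing > s * f(K, L, 0.6, 0.6)
```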
Economic theory suggests that, in the long run, competitive firms will continue
adjusting their scale size to the point that they operate at constant returns. While there
are several reasons a firm can be operating at different returns to scale, Varian (1992)
suggests that deviations from constant returns to scale are largely an efficiency
phenomenon. Increasing returns to scale implies that, by raising output, subdividing
labor, and engaging in specialization, fewer inputs need be employed per unit of output:
hence inputs will be more productive. A similar, albeit converse, argument holds for
decreasing returns to scale. A firm hiring extra workers in a strong economy must
continue to pay their wages when the economy turns sour. Some of these employees may
be sitting idle or only working for several hours each day though they are classified as
full-time. As Varian has shown, "it can always be assumed that decreasing returns to
scale is due to the presence of some fixed input" (p. 16).
Technical and Allocative Efficiency
Economists frequently distinguish among three interrelated kinds of efficiency
in production when it comes to firms. The first is referred to as technical efficiency.
Formally, a firm "has allocated a fixed set of resources efficiently if it has them fully
employed and if the RTS [rate of technical substitution] between the inputs is the same
for every output the firm produces" (Nicholson, 1995, p. 549). For the single-output case,
it can be said that a technically efficient firm is one that minimizes input usage in the
production of that output: it operates on the isoquant for a given output level. Figure 2.1
shows graphically how technical efficiency is considered in a two-output case by using
what is called an Edgeworth Box.
An Edgeworth Box is constructed from isoquant maps for two different outputs
utilizing the same inputs in the production process. Overlaying the two maps on top
of each other and then "pivoting" one of the maps (around the points c and d in Figure
2.1) creates the "box." The horizontal axis represents the total amount of Input 1
available to the firm for producing outputs A and B. The vertical axis has the same
interpretation but for Input 2. The isoquants are labeled corresponding to their respective
outputs and the index numbers reflect successively higher levels of output produced.
Using this framework it is possible to illustrate the difference between efficiently
producing single versus multiple outputs.
Figure 2.1 – Graphical Depiction of Technical Efficiency

[Edgeworth Box diagram: Input 1 on the horizontal axis and Input 2 on the vertical axis; isoquants A1–A3 for Output A and B1–B3 for Output B are drawn from the opposite corners c and d, with points F, P1, P2, and P3 marking particular input allocations.]
Consider a firm operating at F, lying at the intersection of isoquants A2 and B1.
Recall that the RTS between Inputs 1 and 2 is equal to the slope of the isoquant at that
point. As it is depicted in Figure 2.1, the firm is producing on the isoquant for both
outputs (A2 and B1), which, in the case of a single output, would seem to imply that the
firm is technically efficient. From the definition of technical efficiency however, this is
incorrect and can be shown in the following way.
Assuming that output A is fixed at A2 and using the definition of the isoquant (see
Appendix A), the firm can use more of Input 1 and less of Input 2 and produce at point
P2. Since P2 is still on the A2 isoquant the firm has only reallocated its input mix and is
producing the same level of output A as before. However, in moving to point P2 it is now
producing B2 units of output B, which is greater than what they were producing when
they were at point F because they have moved to a higher isoquant. It is also clear from
the diagram that the RTS between the inputs for producing Output A and for Output B
are now equal. Thus the point P2 represents a technically efficient production point for
this firm; no further reallocation of inputs can produce more of one good without
decreasing the amount that can be produced of the other good.
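The equal-RTS condition can be checked numerically. The sketch below assumes Cobb-Douglas technologies for both outputs and an arbitrary endowment of the two inputs; these are illustrative assumptions only, since the true isoquant shapes depend on the actual production process:

```python
# Technical efficiency in the Edgeworth Box: inputs are allocated efficiently
# only when the RTS between the inputs is equal across both outputs.
# Cobb-Douglas technologies f = x1**alpha * x2**(1 - alpha) are assumed.
E1, E2 = 10.0, 10.0  # total endowments of Input 1 and Input 2

def rts(x1, x2, alpha):
    # RTS = MP1 / MP2 for f = x1**alpha * x2**(1 - alpha)
    return (alpha / (1 - alpha)) * (x2 / x1)

alpha_A = alpha_B = 0.5

# An allocation like point F: the RTSs for the two outputs differ, so
# reallocating inputs could raise one output without lowering the other.
x1, x2 = 2.0, 6.0                       # Output A's share of each input
rts_A = rts(x1, x2, alpha_A)            # 3.0
rts_B = rts(E1 - x1, E2 - x2, alpha_B)  # 0.5
assert rts_A != rts_B

# An allocation like point P2: equal RTSs, the two-output condition for
# technical efficiency.
x1, x2 = 4.0, 4.0
assert rts(x1, x2, alpha_A) == rts(E1 - x1, E2 - x2, alpha_B) == 1.0
```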
This last statement is almost identical to the popular definition of Pareto
efficiency. Pareto developed his efficiency concept for an exchange economy without
production. As Nicholson shows however, technical efficiency is only a necessary and
not a sufficient condition for Pareto efficiency. In some industries firms may be efficient
producers but they may produce the "wrong" goods (p. 548).
To define the second efficiency measure, allocative or price efficiency, involves
the additional requirement that the RTS between any two inputs be equal to the rate at
which they can be traded in the market, which is the ratio of their respective prices.
Figure 2.2 is a graphical representation of allocative efficiency for a single-output, two-
input firm. The graph is constructed by removing Output A from the Edgeworth Box and
now also includes a price line whose slope reflects the ratio of the inputs' prices in
the market.
Figure 2.2 – Graphical Depiction of Allocative Efficiency

[Diagram: Input 1 on the horizontal axis and Input 2 on the vertical axis; isoquants B1–B3 for three output levels, a price line reflecting the ratio of input prices, and points A and J marking particular input bundles.]
Considering point J, the firm is operating on the B2 isoquant and hence it is
technically efficient. However, because it is operating to the right of the price vector it is
paying a higher price for its inputs than what they can be purchased for. Thus, even
though the firm is technically efficient, in terms of price efficiency it is operating at an
inefficient point. Given the relative input prices and the rate at which these inputs can be
substituted in the market the firm can reduce its use of Input 1, increase its use of Input 2,
and move to point A which still produces the same level of output (by remaining on the
B2 isoquant) for less cost. If one considers the ray from the origin to point J, the
difference between point J and the price line represents the amount of allocative
inefficiency within the firm.
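A worked example of the cost-minimizing condition (RTS equal to the input-price ratio) helps here. The Cobb-Douglas isoquant and the input prices below are hypothetical, chosen only so the arithmetic is transparent:

```python
# Allocative (price) efficiency: among input bundles on the same isoquant,
# the cost-minimizing one sets the RTS equal to the input-price ratio.
# Cobb-Douglas technology assumed, with alpha = 0.5 so RTS = x2 / x1.
w1, w2 = 1.0, 4.0  # hypothetical input prices; price ratio w1/w2 = 0.25

def out(x1, x2):
    return x1 ** 0.5 * x2 ** 0.5

def cost(x1, x2):
    return w1 * x1 + w2 * x2

# Point J: on the isoquant for output 4, but RTS = 8/2 = 4, not 0.25.
j = (2.0, 8.0)
# Point A: same output, with RTS = 2/8 = 0.25 = w1/w2.
a_pt = (8.0, 2.0)

assert abs(out(*j) - out(*a_pt)) < 1e-9  # same isoquant: output 4 at both
assert cost(*a_pt) < cost(*j)            # 16 < 34: same output, lower cost
```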
If the price line were imposed on the Edgeworth Box in Figure 2.1, allocative
efficiency would require that the line be drawn through point P2 and tangent to the A2 and
B2 isoquants. At this point it is important to observe that the shapes of the isoquants in
Figures 2.1 and 2.2 are drawn arbitrarily. Only with adequate knowledge of the true
functional form of the production process is it possible to derive the actual shape of the
isoquants.
The third measure of efficiency, called economic or overall efficiency, is the
product of the technical and allocative efficiencies. As point A in Figure 2.2 is on both
the isoquant and isocost lines, the firm is technically and allocatively efficient. Hence a
firm operating at point A would be considered overall efficient. If one were to consider a
locus of these points for increasing levels of output they would represent a path of cost-
minimizing input combinations for the firm. The parametric equation fitted through these
points is referred to as the cost function.
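The multiplicative decomposition can be expressed in a few lines. The cost figures below are hypothetical, chosen only to show how Farrell-type radial measures combine; they are not results from this study:

```python
# Farrell-style decomposition of overall (economic) efficiency:
# overall = technical x allocative. Costs are hypothetical.
observed_cost = 100.0  # cost of the firm's actual input bundle
technical_cost = 90.0  # cost after moving radially onto the isoquant
minimum_cost = 72.0    # cost at the allocatively efficient point (point A)

technical_eff = technical_cost / observed_cost  # 0.90
overall_eff = minimum_cost / observed_cost      # 0.72
allocative_eff = overall_eff / technical_eff    # 0.80

# The two component measures multiply back to overall efficiency.
assert abs(overall_eff - technical_eff * allocative_eff) < 1e-12
```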
From the discussion it is clear that production and cost functions subsume the
concepts of technical and allocative efficiency. Production functions assume that firms
are technically efficient and then trace out the maximum amount of output that can be
produced using a minimum amount of inputs. Cost functions assume the firms are both
technically and allocatively efficient and then trace out the relationship between
levels of output and the minimum cost of producing them.
Empirically assessing technical efficiency first requires identifying and
characterizing a firm's inputs and outputs; assessing allocative efficiency imposes the
additional need for relative input prices. Furthermore, in most cases it is also necessary
to pose some a priori assumption about the nature of the underlying technology: the
parametric shape of the isoquants. The next section addresses academic labor inputs and
how they are allocated to produce the academic outputs of higher education institutions.
Academic Labor Inputs
Faculty members are a university's most visible academic labor input, possessing
the unique characteristic of being necessary, at some level, to the production of both
education and research. A fundamental resource allocation problem facing universities
then is how to divide faculty members� time toward the production of each output.
There is a widespread belief, however, that this allocation all too often favors
production of research at the expense of teaching. In the early-1990s this concern
prompted many states to develop and implement faculty workload studies (State Council
of Higher Education for Virginia, 1991; Arizona Joint Legislative Budget Committee,
1993). One study by Hines and Higman (1996) showed that by 1995, 23 states required
faculty members to report how they spent their work hours. In some instances these
studies turned into full-fledged policies. A faculty workload mandate established by the
Ohio legislature in 1993, for example, explicitly required faculty members to increase the
amount of time spent toward teaching by 10% more than they had in 1990 (Ohio Board
of Regents, 1994).
The extent to which this perception is reality remains unresolved. A study by
Presley and Engelbridge (1998) showed that public institutions in Maryland consistently
met state minimum teaching requirements. In a review of faculty workload studies,
Jordan (1994) found that research institutions reported teaching hours to be the greatest
proportion of time spent amongst all activities. Reporting results from a national study,
he showed that faculty at public research universities reported devoting 43% of their time to teaching and
only 29% for research. For doctoral institutions the teaching figure was slightly higher
and the research figure slightly lower (47% and 22% respectively). When all institutions
were considered (publics and privates, research, doctoral, and comprehensive) time
reported to be devoted to teaching rose to 56% while the total time for research declined
to 16% (p. 17). In another study by Bray, Braxton, and Smart (1996) they found that
faculty members exhibiting high research productivity were more likely than other, less
research productive faculty members, to make themselves accessible to the undergraduate
students they taught.
There is also evidence that faculty members' involvement in education,
particularly at research and doctoral institutions, is less likely to involve teaching lower-
level undergraduate courses. Statistics released in December 2000 by the Coalition on
the Academic Workforce (CAW) show, at least for the social sciences and humanities,
that full-time tenure-track faculty members now teach less than half of all introductory
undergraduate courses (Townsend, 2000). Baldwin and Chronister (2001) help paint a
clearer picture by documenting the hiring practices of full-time tenured and non-tenured
faculty (FTNTT) at 59 research universities in 1996. They found that 73% of FTNTT
faculty members at research universities were hired specifically to teach lower-division
courses while the remaining 26% were hired to teach a combination of lower- and upper-
division courses (p. 32). When the authors evaluated doctoral-granting institutions they
found them to be more likely than research universities to hire FTNTT strictly to teach at
the lower division (86% compared to 73%).
The perception that research takes precedence over teaching persists, but the
available research tends to contradict it. What can be said is that faculty make significant
contributions to both teaching and research; their teaching efforts, however, are directed
primarily toward upper-level undergraduate and graduate courses rather than lower-level
undergraduate students.
In contrast to what the labor measures used in most empirical studies of higher
education production and costs would suggest, faculty are not the only academic labor
input universities have at their disposal. Graduate students, for example, play a critical
role as well.
terms of education, they are frequently called upon to teach introductory courses,
especially in service-oriented departments like economics, physics, and English
(Diamond and Gray, 1987). At some universities, where introductory class sizes can
frequently swell beyond 300 students, a professor may teach the full class only one
day per week while a cadre of graduate students spend the remainder of the week guiding
smaller recitations of up to 50 or 60 students. The extent to which teaching assistants are
perceived by universities to be a valuable input to the production of education is also
evident in their sheer numbers. Benjamin (1998) reports that, in the early 1990s, there
were approximately 441,000 full-time faculty teaching at 4-year institutions and just over
200,000 teaching assistants.
The unionization movement among teaching assistants across the country
provides further evidence of their role in the production of education. Two examples
have received considerable attention in the past several years. The first was a report
issued by the Graduate Employees and Students Organization, a labor movement that for
years sought to establish itself as a union at Yale. In 1999 it received substantial press
coverage in asserting that tenured and tenure-track faculty at the prestigious university
only spent 30% of all classroom hours teaching undergraduates (Wilson, 1999). An even
higher profile case involved a bid by graduate teaching assistants to unionize at New
York University. In April 2000, the National Labor Relations Board ruled teaching
assistants were in fact employees and thus free to unionize, "marking a first for federal
labor law, which governs private-sector bargaining" (Leatherman, 2000).
Though their contribution to producing education is significant in its own right,
graduate students are equally important to the production of research. Being
researchers-in-training, they frequently
aid faculty in collecting data, performing experiments, or conducting analyses. Research
has shown that graduate education and research are actually natural complements to one
another in the higher education production process (Nerlove, 1972).
Although graduate students and FTNTT faculty receive much attention from
scholars and the general public, both are largely ignored in empirical analyses of higher
education production and costs. Of the cost and production studies reviewed in chapter 1,
only Toutkoushian (1999) explicitly mentions that the faculty input measure he used
represented only full-time faculty. Similarly, only King (1997) disaggregated faculty into
tenured and non-tenured categories and included graduate students as a production input.
Higher Education Outputs
While higher education institutions produce a variety of outputs, this study is
concerned with what Estelle James (1978) refers to as the "academic products" (p. 78) of
universities: the advancement and transference of knowledge. Translating these tasks
into two identifiable outputs yields the standard notions of research and education.
While straightforward to conceptualize, both resist detailed characterization. Outputs
possess both tangible and intangible aspects (Hopkins and Massy, 1981, cited in Hopkins,
1990) that are often difficult to capture empirically. Cost and production studies of
higher education institutions readily acknowledge that the coefficients estimated are
distorted by the difficulty in effectively accounting for input and output quality (see King,
1997; Dundar and Lewis, 1995; Nelson and Hevert, 1992; de Groot, et al., 1991; and
Cohn, et al., 1989). Unfortunately, lack of consensus on the part of researchers over how
to adequately account for quality and the substantial costs, in both time and resources, of
obtaining meaningful data has left this issue largely unresolved. This has led many
research efforts to follow Nelson and Hevert's lead of "bowing to tradition" (p. 474) and
using traditional measures while simply recognizing that the limitation exists.
The main issue addressed here is whether "education" and "research" as outputs
are sufficiently detailed to understand and assess institutional efficiency. From the
discussion in the last section on academic labor inputs, there is little reason to believe
that, across institutions, research would need to be subdivided for this study into
categories like basic or applied, or federally-sponsored versus industry-sponsored. On
the other hand, the prior discussion does suggest that institutions consciously allocate
education-related inputs in a targeted manner. It is reasonable to conclude, then, that
classifying the transference of knowledge as "education" is not descriptive enough in the
current context. Burton Clark (1995) offers a useful framework for disaggregating
education that is consistent with why research-intensive universities might choose to
allocate their academic labor inputs in the way that they do.
Clark considers higher education along two arrays: the content provided at
different levels of a college education and the manner in which the education process
takes place. In terms of the former, he frames content by the extent to which experts in a
particular field validate that discipline's knowledge. At one end of the spectrum is what
is called "codified" or "book" knowledge: information almost universally accepted by the
established researchers in a field as being true. The other end of the spectrum is marked
by more controversial research and fringe ideas; such knowledge lacks disciplinary
consensus and may be accepted as valid by only a few researchers
in the field. In terms of the latter, the education process, at one end knowledge is
something that is committed to memory such as historic events, economic definitions,
and mathematical algorithms. At the other end, information is meant to be used for
drawing conclusions, linking together ideas, and determining for oneself the extent to
which ideas may be valid.
Much of what students learn in the formative years of a university education can
be thought of as "codified" or book knowledge. The general education classes and
introductory courses in different disciplines expose students to a discipline's established
paradigms: those concepts and relationships possessing a high degree of
consensus among researchers in a particular field. At this stage, the process of education
generally involves committing those fundamentals to memory and requires little in the
way of abstract thinking. In the latter years of an undergraduate education, students
generally pursue intermediate or advanced courses within a major field of study. In these
classes, while emphasis still leans toward obtaining codified knowledge, attention shifts
to developing detailed understanding of some distinct sub-field within a discipline.
The third stage represents a bridge of sorts between the philosophies underlying
undergraduate and graduate education. Here the focus of the educational content shifts
away from codified knowledge and more towards relatively advanced topics and
techniques that may or may not be well established within a particular discipline. In
terms of process, students are more likely to apply critical, more abstract, reasoning skills
in evaluating the validity of proposed arguments. Depending on the topic and the level of
student ability, some advanced undergraduate courses may take this approach but this is
more likely to be a phenomenon associated with master's programs and the beginning of
doctoral study. At the last stage, usually defined by advanced doctoral study, the
emphasis is almost solely on reading "cutting edge" research, critiquing what may often
be controversial topics at the fringe of existing paradigms, and developing one's own
ideas through a dissertation.
A Framework for Analysis
Considering the economics of efficiency, Clark's taxonomy of education, and the
observations about how universities allocate academic labor inputs, there is reason to
believe that universities behave in a manner consistent with optimizing, and hence
efficient, behavior.14 The following outlines a highly simplified model of university
behavior that rationalizes the discussion to this point.
Suppose, as economists suggest (James, 1990), that the research university wishes
to maximize prestige, which it derives directly from the outputs undergraduate
education (U), graduate education (G), and research (R). This assumption is based
on prior economic studies of university behavior (James, 1990; Garvin, 1980) that also
posit prestige is derived from the outputs themselves. Assume these outputs are produced
using only three different kinds of academic labor. The primary input, faculty (F), has as
its tasks doing research (FR), teaching graduate students (FG), and teaching undergraduate students
(FU). The other two labor inputs are teaching assistants (TA) and research assistants
(RA), the latter being a positive function of FR. In addition, consider a short-run analysis
where input and output quality are fixed, the university knows the productivity of each
input, and all inputs exhibit diminishing marginal productivity. As universities' overall
enrollment levels do not shift dramatically from year to year, it can be assumed that these
14 See �nonprofit literature� section of chapter 1.
levels are fixed in the short run. Hence the objective is to maximize R given set
enrollments of U and G and a budget constraint. The problem facing the university is to
choose levels of FR, TA, and RA maximizing R.
If it were the case that a university had no TA or RA inputs, then FU and FG would
both be determined in advance. As the university knows the productivity of FR, the
amount of R produced would be implicitly defined. Ideally though, a university would
choose to allocate F to those tasks where its marginal productivity is highest. This
implies faculty members that are more productive at research (relative to teaching) would
be allocated to research and those more productive at teaching would be allocated to
teaching. As the goal is to maximize production of R, technical efficiency would require
the university to adjust the number of faculty allocated to each task (either positively or
negatively) until the marginal productivities of faculty in research and in teaching were
equal (MPFR = MPFT).
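This allocation problem can be stated compactly. The notation follows the text; the budget level $B$ and the input prices (the $w$ terms) are hypothetical additions introduced only to close the model:

```latex
\begin{aligned}
\max_{F_R,\;TA,\;RA} \quad & R(F_R,\,RA)\\
\text{subject to} \quad & \bar{U} = U(F_U,\,TA), \qquad \bar{G} = G(F_G),\\
& F_U + F_G + F_R = F,\\
& w_F\,F + w_{TA}\,TA + w_{RA}\,RA \le B,
\end{aligned}
```

where $\bar{U}$ and $\bar{G}$ denote the fixed enrollment levels. The condition just stated, $MP_{F_R} = MP_{F_T}$, is the first-order requirement of this problem: no reallocation of faculty between research and teaching can raise $R$.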
Using the Edgeworth Box introduced earlier in this chapter, it is possible to
demonstrate how a university may be driven to incorporate TA and RA inputs in order to
maximize production of research and to be technically efficient in the joint production of
both outputs. Consider the graphical illustration in Figure 2.3. The outputs are now
education and research. The horizontal sides of the box represent total faculty labor
available for producing both outputs and the vertical sides represent total graduate student
labor. RA labor is measured from the corner marked �research� and TA is measured
from the corner marked �education.� Presented this way, increases in TA (RA) require
decreasing RA (TA) by an equivalent amount. The isoquants marked by E represent
successively greater levels of education output and those marked by R represent levels of
research produced.
Assume enrollments are fixed at E1 and that the university only employs faculty
members to produce E and R (where E = U + G). In this case the university would be
operating at point C on the E1 isoquant, where TA labor is zero.15 Given the amount of
faculty employed to produce E it is possible to draw a vertical line and find the isoquant
where faculty produce research without RAs: point B on the R1 isoquant. Thus, if a
research university had teaching and research assistants available but did not use them,
the university could produce E1 units of education and R1 units of research.
Figure 2.3 – Technical efficiency and university's academic labor
[Figure: an Edgeworth box whose horizontal sides measure faculty labor and whose vertical sides measure graduate student labor, with education isoquants E1-E3, research isoquants R1-R3, and the points B, C, J, and P3 discussed in the text.]

15 The point is drawn slightly off the box edge for illustrative purposes.
Now consider what would happen if the university provided graduate students
with research assistantships. This would allow the university to increase its production of
R, moving from point B to point C in Figure 2.3. Since faculty are still employed in
the same fixed proportion, introducing research assistants allows production of research
to increase from R1 to R2 while still producing E1 units of education.
The ability to provide research assistantships at this level may be limited as RAs
are a positive function of FR. Operating at C implies the amount of research performed at
R2 brings in enough financial resources to fund all eligible graduate students with
research assistantships. For the sake of argument, assume C is a feasible operating
point.
As the discussion earlier points out, C is still not a technically efficient operating
point. The university can reduce the number of research assistants and replace them with
teaching assistants. These additional teaching assistants still permit E1 units of education
to be produced, by moving along the isoquant to point P3, but now R increases again to
R3.
The shift from C to P3 occurs for two reasons. First, at point C the marginal
productivity of teaching assistants is greater than that of research assistants. Again using
the idea that inputs should be allocated where they are most productive, it would be more
efficient if the university reduced the number of research assistants and increased
teaching assistants until the marginal productivity of each were equal. Second, at point C
the marginal productivity of faculty to teaching is lower than that to research. Because
their marginal productivity to teaching is relatively lower, this implies faculty would be
more productive doing research. Reallocating faculty away from teaching and toward
research allows the university to put faculty where they can contribute the most; it can do
so because it has an alternate input in the form of teaching
assistants. The result is the same level of education being provided while increasing
research. Thus utilizing graduate teaching and research assistants provides universities
the flexibility to efficiently allocate inputs, in the face of scarce resources, to maximize
production of outputs and hence prestige.
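The C-to-P3 logic above can be reproduced with a toy calculation. The functional forms below are invented solely for illustration (diminishing returns to each labor type, with a graduate assistant assumed less productive than a faculty member); they are not the production technology analyzed later in this study:

```python
import math

# Hypothetical production functions: output rises with both labor types at a
# diminishing rate, and graduate labor carries half the weight of faculty.
def education(F_U, TA):
    return 2.0 * math.sqrt(F_U) + math.sqrt(TA)

def research(F_R, RA):
    return 2.0 * math.sqrt(F_R) + math.sqrt(RA)

F_TOTAL, GRAD_TOTAL = 10.0, 4.0   # fixed faculty and graduate labor supplies

# Allocation 1: no teaching assistants, all graduate labor used as RAs.
E_target = education(6.0, 0.0)                    # education output to hold fixed
R_before = research(F_TOTAL - 6.0, GRAD_TOTAL)    # research output at this mix

# Allocation 2: shift one faculty member into research and backfill with just
# enough TA labor to keep education output unchanged.
F_U = 5.0
TA = (E_target - 2.0 * math.sqrt(F_U)) ** 2       # solves education(F_U, TA) = E_target
R_after = research(F_TOTAL - F_U, GRAD_TOTAL - TA)

# R_after exceeds R_before: same education, more research.
```

Under these assumed forms the reallocation raises research output while education output is exactly preserved, which is the substitution the Edgeworth box depicts.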
It is important to recognize that the extent to which inputs can be traded off
depends entirely on the shape of the isoquants. From the discussion in chapter 1 of the
higher education production function, researchers do not know the exact shape of
isoquants for higher education institutions. The fashion in which they are depicted in
Figure 2.3 is done for illustrative purposes. In reality, being drawn this way suggests an
unreasonably high degree of elasticity between graduate students and faculty. Moving
from point C to P3 in Figure 2.3 implies the reduction in faculty is greater than the
increase in teaching assistants, suggesting that teaching assistants are more productive
than faculty over that range of the isoquant. It is more likely that the tradeoff between
faculty and graduate students is relatively inelastic: graduate assistantships also serve as
"training," hence one would not expect students to be more productive than faculty. In
this case, the education and research isoquants in Figure 2.3 would resemble only their
more vertical portions.
In addition, the extent to which any university can increase research output will
depend on the degree of substitutability between the inputs. If the tradeoff is relatively
inelastic, allocating less faculty and more teaching assistants to the production of
education would not result in the dramatic change from R2 to R3 that is depicted in the
figure. At the extreme, if the isoquants were nearly vertical (almost perfectly inelastic),
adding a significant number of teaching assistants may allow the university to
increase research by only a very small amount. Thus, the extent to which a university can
increase research output depends not only on the quantity but also on the quality of the
alternative labor it has at its disposal.
The move from C to P3 also reveals an important efficiency consideration: even if
the university separately produces education and research efficiently, it can still be
inefficient at jointly producing them. Operating on both the education and research
isoquants does not imply technical efficiency. In fact, depending on the elasticity of the
isoquants, universities can be very inefficient at joint production.
Yet inefficiencies can emerge for reasons other than inefficient allocations.
Universities may not be producing either of the outputs efficiently. Assuming enrollment
levels are fixed, overemploying FU, FG, or TA may result in the university not operating
on the E1 isoquant. In this case, the university might be operating at point J in Figure 2.3.
An immediate result that can be drawn is that research output will be lower even if it is
produced efficiently. However, the converse need not be true. The university could be
producing R2 units of research inefficiently, using the input mix at point P3 for example, and still be
producing education efficiently. Finally, it may even be the case that the university
inefficiently employs all of its inputs.
An important question then is whether certain characteristics or practices of
universities play a role in determining the extent to which these inefficiencies occur.
Research by Winston (1999) and Hoxby (1997) offers one possible explanation. Both
claim that universities use their available wealth to purchase higher quality inputs to the
production process. The link between wealth and quality is drawn based on the argument
that competition for these high quality inputs drives up their price, hence only those
institutions that can afford it will be able to purchase them. In turn, Winston claims that
"very wealthy schools with high quality students use that peer input as a substitute for
other inputs ... [t]his technique would produce an inferior undergraduate product were it
not offset by an ample number of excellent fellow students" (p. 26).
If competition drives up the price institutions must pay for high quality inputs to
the point that only institutions with considerable wealth can afford them, then a
significant incentive exists for universities to employ these scarce resources in the most
productive manner. Moreover, the claim by Winston that universities obtain high-quality
inputs because they can be used as substitutes implies they are purchased specifically for
the purpose of providing the flexibility to allocate inputs efficiently. What this suggests
is that institutions using higher quality inputs should then be more likely to exhibit
technically efficient behavior.
While this simple model does not account for other potentially relevant factors
like capital, it does capture a great deal. Faculty members are obligated to teach and do
research. Since they are the only input capable of performing, or overseeing, the research
obligation (assuming research assistants are researchers-in-training), they represent the
critical component of all types of academic research, whether basic or
applied, and whether industry or government funded. At the same time, higher education from
Clark's perspective suggests that faculty members need not be critical to all aspects of the
production of education. Given the three categories of education discussed previously
(lower-level undergraduate, upper-level undergraduate, and graduate), the one category
that faculty are absolutely critical to is graduate education. There are no other options for
educating graduate students besides using faculty members. The courses are too
specialized and the knowledge too advanced to be taught by anyone else but the
established experts.
This is where the nature of undergraduate education affords universities more
flexibility in determining their input mix. Through the provision of graduate education,
research universities nurture, in graduate students, a viable alternate input for teaching
lower-level undergraduate courses. After three or four years of advanced study,
these students are arguably as well-versed in their field's established paradigms as junior
faculty, and thus able to teach courses where the goal is learning fundamental
concepts and committing facts to memory. This is consistent with the evidence presented
earlier in the chapter that universities tend to employ teaching assistants to conduct
recitations within large lecture introductory courses or to teach certain lower-level
undergraduate classes. Utilizing teaching assistants as viable academic labor inputs for
educating lower-level undergraduate students permits universities the flexibility to devote
more faculty to the production of outputs where their marginal productivity is highest
and, commensurately, where they are essential: research and graduate education, and to a
lesser extent upper-level undergraduate education.
Provided that the model outlined here is a fair representation of university
behavior, the decision to use graduate teaching and research assistants is made for
optimizing reasons: maximizing research output in order to maximize prestige. Choosing
to allocate academic labor in this fashion, to maximize output (prestige) given scarce
resources, implies universities also attempt to engage in efficient production behavior.
To this point, the discussion has addressed the economics behind the efficiency
measures to be evaluated, the relevant inputs and outputs, and a framework for
understanding how the relationships between these measures may influence institutional
efficiency. The last section of this chapter considers how data envelopment analysis can
be used as a tool for empirically evaluating these efficiency concepts in the context of the
framework provided.
Data Envelopment Analysis as a Method for Assessing Efficiency
This study employs data envelopment analysis (DEA) to develop relative
measures of technical and allocative inefficiency for research-intensive higher education
institutions. The mathematical underpinnings of DEA can be traced back to Farrell
(1957) and Debreu (1951), who were the first to consider how linear programming could
be used to empirically capture economic aspects of the production phenomenon.
Specifically, Farrell examined how to incorporate into empirical analyses the ability to
account for multiple inputs in the production of a single output. In doing so, he put to
form a linear programming approach that could empirically estimate relative measures of
technical and allocative inefficiencies for a sample of firms.
One of the most common measures of productivity is to simply take the ratio of a
firm�s outputs to its inputs (Cooper, Seiford, and Tone, 2000). The difficulty with this
type of measure, though, is that in moving from the single-input/single-output case to
multiple inputs and outputs it becomes necessary to "weight" each measure and arrive at
some composite, or virtual, input and output measure. When these weights are not
known, researchers must make a priori assumptions about them in order to perform an
empirical analysis. This is what occurs in economic research when a set of data is tested
using a parametric model. Researchers choose a particular functional form for the
production function (e.g., Cobb-Douglas or Leontief) and implicitly restrict the values
the weights can take based on what economic theory suggests should be employed.
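In symbols, the composite measure just described for a decision-making unit $o$ that uses inputs $x_{io}$ to produce outputs $y_{ro}$ is the weighted ratio

```latex
E_o \;=\; \frac{\sum_{r=1}^{s} u_r\, y_{ro}}{\sum_{i=1}^{m} v_i\, x_{io}},
```

where the $u_r$ and $v_i$ are the output and input weights. A parametric study fixes the aggregation in advance: estimating a Cobb-Douglas form $y = A\prod_i x_i^{\beta_i}$, for example, ties the implicit weights to the estimated $\beta_i$ for every unit in the sample, whereas DEA chooses the $u_r$ and $v_i$ separately for each unit.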
In the early 1970s Charnes, Cooper, and Rhodes were working on a project
funded by the then US Office of Education to evaluate special programs for
disadvantaged children in US public schools. Dissatisfied with the results of their
econometric approaches, Rhodes brought Farrell's paper to Cooper's attention. Building
on Farrell's work they coined the phrase "data envelopment analysis" and developed
what was to become the first DEA model, the CCR-ratio form (Cooper, Seiford, and
Tone, 2000).
The mathematical complexity of their methodology is well documented in the
literature and beyond the scope of the discussion at this point.16 Essentially, though,
Charnes et al. considered how to use linear programming to maximize the ratio of the
sum of weighted outputs to the sum of weighted inputs. The approach they employed
specifically "solved" for the set of output and input weights that maximized each firm's
weighted ratio. Doing so involved setting up a linear programming problem for each
decision-making unit17 (DMU) that maximized this output/input ratio relative to all other
DMUs in the sample. As such, DEA does not impose weights on the model but lets the
data itself determine an "optimal" set of weights: one that maximizes each DMU's efficiency
score.
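The linear programming problem just described can be sketched in a few lines of code. The following is an illustrative implementation of the input-oriented CCR multiplier model, not the exact specification used later in this study, and the two-DMU data set is invented:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score for DMU `o`.

    X is an (m, n) array of inputs and Y an (s, n) array of outputs for n
    DMUs.  The linearized multiplier problem is:
        max  u . y_o
        s.t. v . x_o = 1,
             u . y_j - v . x_j <= 0   for every DMU j,
             u, v >= 0,
    with decision vector z = [u (s values), v (m values)].
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([-Y[:, o], np.zeros(m)])             # maximize u . y_o
    A_ub = np.hstack([Y.T, -X.T])                           # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, o]])[None, :]  # v . x_o = 1
    b_eq = np.array([1.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m), method="highs")
    return -res.fun

# Two hypothetical DMUs, one input, one output:
# A turns 2 units of input into 2 units of output; B turns 4 into 2.
X = np.array([[2.0, 4.0]])
Y = np.array([[2.0, 2.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(2)]
```

Solving one such program per DMU is exactly why DEA yields a score for every unit rather than a single fitted equation; here DMU A is fully efficient while B, using twice the input for the same output, scores half as well.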
There are several justifications for considering DEA as an appropriate method in
the current context. Foremost, as a linear programming application, DEA is
nonparametric. Compared to a typical parametric approach (e.g., production or cost
function estimations using ordinary least squares or maximum likelihood estimation) the
fact that DEA solves for the weights rather than imposing them in advance means that it
places the fewest restrictions on the underlying technology (Charnes, Cooper,
Lewin, and Seiford, 1994), which is useful when the production process is not known,
such as in education. In addition, the fact that it can handle multiple inputs and multiple
outputs is appealing for an industry like higher education that uses a variety of inputs to
produce education, research, and an array of other services.
In terms of the study at hand, DEA offers an additional advantage vis-à-vis
parametric representation. As Charnes et al. describe:
In parametric analysis, the single optimized regression equation is assumed to
apply to each DMU [decision making unit]. DEA, in contrast, optimizes the
performance measure of each DMU. This results in a revealed understanding
about each DMU instead of the depiction of a mythical "average" DMU (pp. 4-5).
16 In chapter 3, the linear programming aspects of DEA are explicated in further detail.
17 Charnes, Cooper, and Rhodes were also the first to refer to the units-of-analysis in DEA by this term.
This idea is made clearer by the graphical interpretation of the single-input,
single-output case in Figure 2.4. In standard parametric procedures like
ordinary least squares (OLS), the objective is to fit a line minimizing the sum of the
squared distances from each point to the regression line, denoted R1 in the figure. In
doing so, the parameters generated only reflect the extent to which changes in the
dependent variable are explainable by movement in the independent variables for the
average unit in the sample. In contrast, DEA creates an efficient frontier (F1) by solving
a separate linear programming problem for each unit. As such, the technique allows for
measuring each unit's performance relative to the frontier.18
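The distinction is easy to reproduce numerically in the single-input, single-output case. The sketch below uses invented data: the OLS line through the origin summarizes the average unit, while the constant-returns frontier is simply the steepest observed output/input ray in the sample:

```python
import numpy as np

# Invented single-input/single-output data for five decision-making units.
x = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.0, 3.0, 3.0, 4.0, 4.5])

# OLS through the origin: one "average" slope describing every unit at once.
ols_slope = (x @ y) / (x @ x)

# Constant-returns frontier: the best observed output/input ratio envelops the
# data; each unit's efficiency is its own ratio relative to that envelope.
ratios = y / x
frontier_slope = ratios.max()
efficiency = ratios / frontier_slope

# The OLS line passes through the middle of the data (residuals of both
# signs), while every unit lies on or below the frontier (scores <= 1).
residuals = y - ols_slope * x
```

Every score here is relative to the best performer in the sample, which anticipates the limitation taken up below.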
As was discussed in Chapter 1, stochastic frontier analysis is also a practical
alternative for assessing efficiency. Unlike DEA, though, because it is parametric, some
functional form must first be specified. This is a highly restrictive assumption to make
for an industry like higher education where the production process, and hence the proper
functional form to use, is largely unknown.
This leads to the main benefit of using DEA. It subsumes the three economic
characteristics that Varian suggests regular technologies exhibit (see Appendix A). Since
the frontier is generated from the data itself, the trivial condition that the output can be
produced using some input mix is satisfied. Second, monotonicity or free-disposability is
satisfied by construction.19 Finally, the monotonicity condition also ensures that the
linear program constructs horizontal and vertical portions of the frontier, which implies
that the convexity condition holds.
18 It is important to note that the observations on the F1 envelope in Figure 2.4 may not necessarily correspond to the tangent points in Appendix A due to the fact that DEA only purports to measure relative, and not absolute, efficiency.
Figure 2.4 – Comparison of DEA and regression
Reprinted from Charnes, Cooper, Lewin, and Seiford (1994).
The technique is not without its fair share of limitations. Foremost, because DEA
constructs a frontier from the data itself, the measures derived in any given
analysis reflect relative and not absolute efficiency. This implies that the results
19 Nearly all DEA models assume free-disposability of inputs and outputs. Only in certain models where the researcher specifically seeks to test for the presence of congestion is free-disposability not assumed.
obtained are valid only insofar as they reflect how efficient DMUs are relative to others in
that particular sample. Consider the scenarios outlined in Figure 2.5.
Figure 2.5 is an input-space map for a population of firms (the whole of which is
represented by stars, circles, and triangles) producing a single output using two inputs. In
the first case, the researcher performs a DEA analysis on the entire population with the
constructed frontier graphically represented by the line marked "A." Now assume the
researcher instead draws a random sample of firms (or selects a particular sample for
another reason), resulting in the construction of the frontier represented in Figure 2.5 by
line "B." It is clear from the figure that when the entire population was considered, firm
R was not found to be efficient but is regarded as such in the sample. At the same time,
firm Q is shown to be efficient in both instances. Had the researcher chosen, randomly or
otherwise, another sample of firms, the results would again likely differ, as shown by the
frontier marked "C," which evaluates only those firms depicted as stars.
This example demonstrates two important ways in which DEA results
change with the selection of decision-making units to be evaluated. First, the shape of the
constructed isoquant may change depending on characteristics of the firms in the sample.
Second, as was discussed above, firms deemed efficient relative to one group may in fact
be inefficient when compared to another. The primary implication of computing
efficiency in this way is that it is not possible to develop measures of absolute efficiency.
Even if the entire population of some group of firms was analyzed, one cannot say that
the constructed frontier represents the absolute minimum input usage possible in the
production of the outputs specified.
Figure 2.5 – Effects of determining relative and not absolute efficiency in DEA
This leads to a second significant limitation of DEA. Because it is a
relatively restriction-free, nonparametric approach, it is not possible to state with any
statistical certainty that the efficiency scores or weights generated by the analysis are
reliable estimates of some true value. In
addition, research by Mettas, Vargas and Whybark (2001) for example has shown that
DEA efficiency scores can be highly sensitive to data errors. As such, the results are
generally valid only to the extent that they apply to the decision-making units in that
particular analysis.
Taken individually or together, these limitations present formidable barriers to successfully
employing DEA for the purpose of this study. This does not mean, though, that DEA is an
inappropriate tool for testing the research question posed here, but rather that special
consideration must be given to designing the study in such a way that it mitigates these
concerns. In particular, it means analyzing, to the extent possible, the population of
interest to increase the likelihood that the constructed frontier approaches that formed by
the population. In addition, it suggests that greater consideration be given to analyzing
mean values rather than the observations of individual DMUs. This approach has been
taken in other DEA studies (Ahn et al., 1988; Grosskopf, Margaritis, and Valdmanis,
2001) as it permits limited use of nonparametric statistical techniques to draw statistical
conclusions about the distribution of scores and for differences between larger groups.
Overall, there is strong evidence to support the notion that DEA is a sound
approach for assessing efficiency (Cherchye and Pout, 2000). In Bowlin's (1998)
overview of DEA, for example, he summarizes the results of several studies where DEA
and translog models were estimated simultaneously using hypothetical data sets and
simulations with known inefficiencies. He concludes that in each case DEA
outperformed the other techniques in identifying both sources and levels of inefficiency
as well as estimating returns to scale (p. 11-12).
Summary
This chapter characterizes a variety of efficiency measures, including technical
and allocative efficiency and shows the extent to which they are inter-related. It also
characterizes the allocation of academic labor inputs used in the joint production of
education and research. Finally, it is shown that data envelopment analysis, in spite of its
inherent limitations, possesses several desirable properties that make it suitable not only
for studying efficiency, but particularly higher education efficiency.
Chapter 3
“There are no such things as applied sciences, only applications of science.”
- Louis Pasteur
Overview
The population of interest is those institutions defined by the 1994 Carnegie
Classification system as Research Universities or Doctoral Universities.20 The reasons
for choosing this population are two-fold. While faculty members at nearly all
institutions engage in some form of sponsored research, the expectation to produce
research at these institutions is much greater and hence represents a logical choice when
examining the joint production phenomenon. In addition, these institutions arguably
provide the widest array of academic offerings and thus are most likely to attract a more
diverse array of students than other institutional types.
Empirical studies frequently distinguish between institutions with medical schools
and academic hospitals and those without, arguing the former have unique cost and
20 These categories are further subdivided into a I and II format. Research Universities are defined as those institutions with a full range of baccalaureate programs, a commitment to graduate education, and placing a high priority on research (minimum of $15.5 million of research income for Research University II status and $40 million to be a Research University I). Doctoral Universities are generally defined as offering a
production characteristics (Ahn, et al., 1988). In order to minimize the confounding
effect of medical-related cost and production issues, this study made considerable effort to
control or "factor out" medical aspects of university behavior from the input and output
measures. While it is not possible to completely remove these types of factors,
methodologically the effort increases the homogeneity of the institutions in the analysis
and thus provides for a more concise overall study design.
Empirically testing the different notions of efficiency outlined in chapter 2
requires three vectors of data: inputs, relative input prices, and outputs. The following
sections describe the measures used in this study, the source data, and how they are
operationalized for analysis. There are a total of 10 variables. Table 3.1 lists the
individual components for each of the three vectors.
Table 3.1 – Input/Output Vectors and Associated Variables

Outputs:       1. Undergraduate Education
                  a. Lower-level
                  b. Upper-level
               2. Graduate Education
               3. Research

Inputs:        1. Faculty
               2. Graduate Teaching Assistants
               3. Graduate Research Assistants

Input Prices:  1. Faculty Wages
               2. Graduate Assistantship Wages
                  a. Teaching Assistants
                  b. Research Assistants
full range of baccalaureate programs and are committed to graduate education by offering doctoral study in at least several disciplines.
Outputs
Three education output measures were developed: lower- and upper-level
undergraduate education and graduate education. While nearly all empirical cost and
production studies of higher education institutions distinguish between graduate and
undergraduate education, it is not common practice to disaggregate undergraduates into
lower- and upper-level groups. There are, however, three closely related reasons
justifying its use here.
First, as was shown in chapter 2, the technologies used to educate lower-level and
upper-level undergraduates differ considerably, particularly at the large research and
doctoral-granting universities. Introductory sections for many disciplines may have
hundreds of students and be taught by a single faculty member and several graduate
teaching assistants. The marginal cost, in terms of increased time and effort, of adding
one more student in a class of even 100 students is practically zero. This structure is
consistent with Burton Clark's taxonomy from chapter 2. The material taught in these
courses is generally codified knowledge that students must commit to memory. This
places a considerable amount of responsibility on the student in the education process and
thus makes it possible to conduct much larger course sections.
In contrast, upper-level courses tend to be much smaller, reflecting Clark's idea
that knowledge taught at this level tends to be more specialized. Economics students at
this level, for example, can take courses in labor economics, international economics,
public finance, or even game theory. Specialization also implies smaller course sections
that require more instructors to educate the same number of students. Class assignments
are more likely to involve essays and lengthy papers linking together different ideas,
which arguably require considerably more time to grade.
The second reason, recalling the discussion of higher education inputs from
chapter 2, is that this separation is reflected in the allocation of academic labor.
Introductory and other lower-level courses are more likely to be taught by graduate
teaching assistants whereas upper-level undergraduate courses tend to be taught by
tenured and tenure-track faculty members. Third, given that graduate teaching
assistantships (stipends) pay considerably less than tenure-oriented faculty positions and
that labor inputs are shown to be allocated in such a pointed fashion, it is logical to reflect
these factors in an analysis where the focus is on input tradeoffs and cost concerns.
There is considerable disagreement across economic studies of higher education
institutions as to what is the "best" way to quantify the output "education." Consider two
institutions both educating the same number of students, where one provides what might
be called an "excellent" education while the other provides only a "standard" education.
To simply compare the two based on the physical number of "educations" they provide
would seriously mask any physical and financial differences in production that this study
seeks to determine.
What this example highlights is the need to account for, and the difficulty with
measuring, aspects of education quality. The closest one comes to obtaining some
institutional measure of education quality is in the college rankings provided by US News
and other popular publications. Unfortunately, these rankings have been criticized by
some of the most highly regarded universities in these lists for their "false precision"
(Casper, 1996). In the higher education literature, most efforts at providing such a
measure attempt to do so by considering graduates' salaries (Brewer and Ehrenberg,
1996), and no studies were found that sought to provide institutional measures of quality.
Unfortunately, computing some institutional quality measure based on graduates'
salaries would be an extremely complicated and cumbersome task. Accounting for
differences across disciplines would require some weighting scheme be developed and
properly applied to create an institutional measure. Collecting data for a single institution
would be a daunting task, much less for a sample of 100 or 200.
As such, though researchers suggest that an ideal measure of an institution's
education output attach some institutional "quality weight" to the physical number of
students it educates (Nelson and Hevert, 1992), the opportunity costs of obtaining the
necessary data are simply too great for it to be feasible. This is evident in nearly all of
the studies reviewed for this paper: education output is almost exclusively proxied by
physical counts like full time equivalent (FTE) enrollments or degrees granted while
recognizing the quality limitation exists.
When comparing these two commonly used measures, degrees granted has
several drawbacks that render it highly suspect as a measure of education output in an
efficiency study like the one presented here. Specifically, it strongly neglects the fact that
students who do not complete degrees may still receive one, two, or even three years'
worth of education. To use the number of degrees granted in 1993 then would not
account for graduate students enrolled but not yet finished with their studies, nor would it
include the thousands of first- and second-year students frequently taught by teaching
assistants and lecturers. In this respect, it puts a downward bias on the number of
students receiving a year's worth of education at any given time.
For this study, quality concerns are accounted for by subsorting the sample and
are discussed in detail in the first section of chapter 4. To measure the quantity of
education produced, undergraduate and graduate education are measured by FTE
enrollments. The source data for enrollments comes from the 1993 IPEDS' Fall
Enrollment Survey. Enrollment figures include all students except those who are: a)
studying abroad, b) exclusively auditing classes, c) taking courses at branch campuses in
a foreign country, and d) enrolled exclusively in courses not applicable toward obtaining
a formal degree. Three part-time students are equated to one full-time student.
Undergraduate counts are taken directly from the survey's total numbers of "upper-" and
"lower-division undergraduate students" categories. The graduate enrollment figure
comes from the "all graduate enrollment" category. In addition to first-time graduate
students, this figure includes students classified as "other than first-time graduate
students" and "non-degree-seeking students," but not "professional students."21
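For illustration, the FTE convention just described (three part-time students equated to one full-time student) can be sketched as follows; the enrollment figures and function name are invented, not taken from the study's data:

```python
# Illustrative sketch only: converting IPEDS headcounts to FTE enrollments,
# equating three part-time students to one full-time student.

def fte(full_time: int, part_time: int) -> float:
    """Full-time-equivalent enrollment: 3 part-time = 1 full-time."""
    return full_time + part_time / 3.0

# Example with invented figures for one institution's lower-division students
lower_fte = fte(full_time=9_000, part_time=1_500)  # -> 9500.0
```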
The same quantity/quality conundrum is also evident in researchers' attempts to
develop measures of research output. Empirical studies almost exclusively use either
publication counts (usually number of journal articles) or research expenditures (usually
based on the amount of sponsored research funding). Those advocating the former
usually point to the fact that expenditures are actually an input and not an output of the
production process. They also suggest that publication counts are preferable because
expenditure measures tend to neglect quality aspects and "define away any variation in
21 The IPEDS survey defines professional students as those in programs leading to degrees in chiropractic, dentistry, law, medicine, optometry, osteopathy, pharmacy, podiatry, theology, and veterinary medicine.
productivity" (de Groot et al., 1991, p. 425). Those in favor of the research expenditures
approach argue that not all research output is in the form of journal articles. Book
reviews, plays, musical scores, and patents issued are all viable outputs for certain
disciplines. At the same time, while not all research is based on sponsored funds, Cohn
et al. (1989) suggest that "the ability of an IHE to generate such funds is closely
correlated with its research output, at least insofar as it is perceived by sponsors" (p. 285).
The ideal output measure, as suggested by Cohn et al., would involve a weighted
measure of all the different research outputs an institution produces. Unfortunately,
specifying the weights a priori requires value judgments as to the respective worth of any
given output. Johnes and Johnes (1995), for example, showed in their study of
economics departments in the United Kingdom that efficiency scores are highly sensitive
to the weight assignments given to different publications like journal articles, books, and
book reviews.
This study explicitly distinguishes between technical and cost (allocative)
efficiency. In choosing this route it is desirable, to the extent possible, to distinguish cost
measures from physical measures. To make the technical efficiency analysis as cost-
neutral as possible, it is preferable, notwithstanding the limitations, to use some form of
publication counts here.
An institution's research output is proxied here by the number of journal articles.
While this may be narrow in scope, in terms of developing an institution-wide measure,
this metric is more likely to capture research output across more disciplines per
institution than any other. The source data comes from the Institute for Scientific
Information's (ISI) Citation Indexes. Three databases are available which cover all
academic programs: the Science Citation Index, the Social Science Citation Index, and
the Arts and Humanities Citation Index. Institutional publication counts were computed
by summing the number of non-medical-related journal articles22 published in English in
each index. Data was collected for 1993 and 1994 under the assumption that research
publications in 1994 would reflect the result of actual research conducted in 1993. These
two figures were then averaged together.
Three major limitations arise in choosing to model research output in this fashion.
First, as was discussed above, journal articles are not the only form of research output
faculty produce. This measure then is likely to underestimate research efforts at
institutions with large faculties in the arts (e.g. drama, art, and music) producing plays
and musical scores as well as in the humanities (e.g. history) where a large number of
books are written. Second, the publication counts themselves do not distinguish between
the number of articles published at an institution in what are frequently referred to as a
discipline's "A journals," those regarded within different disciplines as the highest
quality publications (e.g. Administrative Science Quarterly or the American Economic
Review). At the hypothetical extreme, an examination of two institutions having similar
publication counts could reveal one institution's publications to all be in "A" journals
whereas the other would have none. Not being able to account for this masks a
significant indicator of research quality. Finally, from an accounting perspective,
publication counts mask the research efforts in multi-year projects. Particularly in the
physical and biological sciences, research projects may take several years to complete
22 In order to eliminate publications from medical faculty, the following approach was used to query publication counts in the ISI web-based search engine. Using Ohio State University as an example, the relevant search command was "Ohio State Univ same Columbus not (dept med or hosp or med sch or sch med or med ctr or coll med)."
and the publications that result may not accurately reflect the year in which the work was
completed. It may also be that a publication is not submitted or accepted until one or two
years after the actual research has been completed.
Inputs
This study examines the extent to which research-intensive higher education
institutions are efficient at allocating their academic labor inputs in the joint production
of education and research. It does so in an effort to capture the real economic
substitutions institutions make between the different kinds of labor at their disposal in the
production process. Under the best of circumstances the relevant input measures then
would be comprehensive in scope and capture quality differences in the different inputs.
Comprehensive implies including all groups of individuals who, in some capacity, are
directly responsible for advancing or transferring knowledge: teachers and scientists. The
latter would ideally assign some institutional "quality weight" to the different groups of
labor inputs for each institution.
Again, quality concerns are treated in chapter 4. However, an immediate concern
arises in developing a comprehensive set of labor inputs in terms of technical staff. Two
IPEDS surveys are available that provide institutional staff headcounts: the Salaries,
Tenure, and Fringe-Benefits of Full-time Instructional Faculty (STFB) and the Fall Staff
Survey. While the Fall Staff Survey provides institutional data for a category called
"technical and paraprofessional," two problems emerge in using this figure. First, the
accompanying salary information is reported as interval data. While a scheme could be
devised to develop some weighted mean value for each institution, the highest salary
range is $30,000 and above, which severely limits the ability to capture the more
expensive end of the spectrum. Second, the category includes a wide array of positions
that are either a) medically-related or b) non-academic-related. Some of these positions
listed in the general instructions include medical and dental technicians, radio operators,
drafters, photographers, and dieticians.
This is problematic because this survey represents the only available source for
technical staff headcounts. It is not possible to isolate only the academic-related
positions from the "technical and paraprofessional" category nor is it possible to
construct a satisfactory price measure. For these reasons, a measure for technical staff is
not included in this analysis. What this means in terms of interpreting the findings from
this study is that the efficiency scores will have an upward bias and that the bias will be
greater the higher the proportion of technical staff is relative to an institution's total
number of academic staff.
Data for the faculty input came from IPEDS' 1993 STFB survey. The survey
reports those faculty: a) employed full-time, b) having more than 50% of their time
classified as instruction (includes those with release time for research), c) on sabbatical,
and d) listed as department chairs. The headcount does not include preclinical and
clinical medicine faculty. The survey disaggregates faculty into 6 academic ranks
subsorted by numbers of faculty a) with tenure, b) on the tenure track, and c) not on the
tenure track. An institution's faculty measure was constructed by summing the totals
for each of the 6 categories.
A primary drawback to using this measure is that it fails to account for a
significant number of researchers who do not teach. In this respect, the measure used
will tend to make institutions using a greater proportion of strict researchers seem more
productive than they actually are. Unfortunately, like the issues with technical staff, the
highest salary interval is $55,000 and above. This makes it difficult to adequately
distinguish between institutions on the high end of the scale. In addition, the figure does
not discriminate between full-time researchers and those having joint teaching and
research assignments.
Data for the graduate research assistant and teaching assistant measures was
collected from two sources: the 1993 IPEDS Fall Enrollment Survey and the National
Science Foundation's (NSF) Survey of Graduate Students and Postdoctorates in Science
and Engineering. The NSF survey provides institutional data on the numbers of students
supported by teaching and research assistantships for a large number of engineering,
physical, and social science disciplines.
To construct the input measures, data was collected from the NSF survey for the
years 1992 through 1994 on the number of students per institution with teaching and
research assistantships and the total number of students in the "all mechanisms of
support" category.23 This data was averaged and used to estimate the percentage of
teaching assistants and research assistants for each institution by calculating the ratios of
teaching and research assistants to the number reported in the "all mechanisms" category.
As part-time graduate students generally do not receive teaching or research
assistantships, these estimates were instead multiplied by the total number of full-time
graduate students reported in the IPEDS survey to obtain the actual measures.
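The estimation procedure just described can be illustrated with a short sketch. All counts below are invented and the function names are hypothetical; the actual study worked from the NSF and IPEDS survey files directly:

```python
# Hypothetical sketch of the assistantship estimation described above:
# NSF shares of TAs/RAs (averaged over 1992-1994) are applied to IPEDS
# full-time graduate enrollment. All figures are invented.

def mean(xs):
    return sum(xs) / len(xs)

def estimate_assistants(nsf_ta, nsf_ra, nsf_all, ipeds_ft_grad):
    """nsf_ta / nsf_ra / nsf_all: yearly NSF counts for 1992-1994."""
    ta_share = mean(nsf_ta) / mean(nsf_all)   # share of grads with TA support
    ra_share = mean(nsf_ra) / mean(nsf_all)   # share of grads with RA support
    # Shares are applied to full-time graduate enrollment only, since
    # part-time students generally do not hold assistantships.
    return ta_share * ipeds_ft_grad, ra_share * ipeds_ft_grad

tas, ras = estimate_assistants(
    nsf_ta=[400, 420, 410], nsf_ra=[600, 630, 600],
    nsf_all=[2000, 2100, 2050], ipeds_ft_grad=5000)
```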
Where an institution may have had multiple campuses, data was collected only for
the main campus. For some measures, however, data was available only on a system-
wide basis. In these cases main-campus data was estimated from the system-wide data
based on the percentage of main-campus headcounts relative to the system-wide figures.
The limited input-price data for graduate teaching and research assistants made it
necessary to develop a second set of institutions for the joint analysis of technical and
allocative efficiency. Because of this, the construction of the input price measures is
presented in chapter 4 when the sample construction for the allocative efficiency analysis
is discussed.
Technique Applied in the Study
The specific DEA technique used here is based on the same approach taken by
Byrnes and Valdmanis (1994) in their DEA analysis of non-profit hospitals. Four
efficiency scores were calculated: overall efficiency, technical and allocative efficiency,
and scale efficiency.24 These different measures are all captured graphically in the input-
space map in Figure 3.1.
23 In the survey instruction forms, this category includes all forms of support, including individual support.
24 As Byrnes and Valdmanis state, scale efficiency is in fact a product of decomposing technical efficiency into scale, pure-technical, and congestion components. As the latter two are not relevant to this study and the sum of their parts has no real economic interpretation, they are not considered here.

To compute the overall measures the data are first enveloped by a technology that
assumes universities are operating at constant returns to scale (CRS), labeled Lconst in
Figure 3.1. With relative input prices (wages) denoted by the isocost line WW′, the
cost-minimizing input combination occurs at point E, where the CRS-constructed isoquant is
tangent to the isocost line. Considering a university operating at point R, the overall
efficiency measure (OER) can be represented graphically by the ratio OT/OR. Note that
when overall efficiency is constructed this way the efficiency score can range between
zero and one with the latter implying the university is fully overall efficient. It is also
important to point out that the efficiency measures here as well as the others presented in
this section are all calculated from the origin. Evaluating the efficiency of some
university operating at F in Figure 3.1 would thus involve tracing the line between the
origin and F. In the DEA literature these types of efficiency measures are referred to as
radial measures of efficiency.
Figure 3.1 – Decomposing Efficiency
Adapted from Byrnes and Valdmanis (1994)
To compute this measure the relative prices of the m inputs for university R are
defined as PR = (p1R, p2R, …, pmR). Letting Y be the vector of outputs and X the vector of
inputs, the estimated minimum cost for the university, MCR(Y, X, P), is found by solving
the following linear programming problem:

MCR(Y, X, P) = minimize PR · XR        (1)
                XR, λj

s.t.   Yλ ≥ YR
       Xλ ≤ XR
       λj ≥ 0

The solution to (1) is the estimated minimum cost for university R. It is found by solving
for the vector of weights (λ) that minimizes the total costs of university R subject to the
constraints that university R produces no less output and uses no more input than the other
institutions being analyzed. Letting actual costs be PR · XR, the overall efficiency
measure is defined by the following ratio:

OER(Y, X, P) = MCR(Y, X, P)/(PR · XR)        (2)
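For illustration, problem (1) and ratio (2) can be set up with any standard linear-programming solver. The sketch below uses scipy.optimize.linprog with invented data (four institutions, two inputs, one output); it is a demonstration of the formulation, not the software used in this study:

```python
# Illustrative cost-minimization DEA (equations (1)-(2)); all data invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[8.0, 4.0, 6.0, 5.0],    # input 1 usage by institution
              [2.0, 6.0, 3.0, 5.0]])   # input 2 usage by institution
Y = np.array([[4.0, 4.0, 5.0, 4.0]])   # a single output
P = np.array([1.0, 1.0])               # university R's relative input prices
r = 0                                  # evaluate institution 0 as "R"
m, n = X.shape

# Decision vector: [x_1..x_m, lambda_1..lambda_n]
c = np.concatenate([P, np.zeros(n)])            # minimize P . x
A_ub = np.vstack([
    np.hstack([np.zeros((Y.shape[0], m)), -Y]), # Y lambda >= Y_R
    np.hstack([-np.eye(m), X]),                 # X lambda <= x
])
b_ub = np.concatenate([-Y[:, r], np.zeros(m)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (m + n))

min_cost = res.fun                       # MC_R(Y, X, P) in (1)
overall_eff = min_cost / (P @ X[:, r])   # OE_R(Y, X, P) in (2)
```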
The first subcomponent of the overall measure, technical efficiency TER(Y, X),
depicts how far university R is operating off the constructed isoquant. In Figure 3.1 this
is represented by the ratio OS/OR. To compute this ratio involves solving the following
linear program:
TER(Y, X) = minimize θ        (3)
             λj, θ

s.t.   Y · λ ≥ YR
       X · λ ≤ XR · θ
       λj, θ ≥ 0
It is clear from Figure 3.1 that the value of θ, the technical efficiency score for
university R, again ranges between zero and one where the latter now implies the firm is
(relatively) technically efficient. Conceptually, what equation (3) does is try to radially
contract (move along the dashed line in Figure 3.1 toward the origin) university R�s
inputs to the minimum level that is still technically feasible. Importantly, feasible is
defined by those institutions using similar input proportions (those points closest to the
dashed line) and who are already minimizing inputs (those making up the constructed
frontier) relative to all other institutions in that analysis. In the example here, since
universities E and F satisfy both of these conditions, the most university R can radially
reduce its inputs to some convex combination of E and F�s input mixes; point S in Figure
3.1. Equation (3) is more commonly referred to as the Charnes, Cooper, and Rhodes
(CCR) constant returns to scale ratio model and was the product of their seminal 1978
paper.
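The CCR program in (3) can likewise be sketched with a generic LP solver. The data below are invented for demonstration, and the code is only an illustration of the input-oriented CRS formulation, not the study's implementation:

```python
# Illustrative input-oriented CCR model (equation (3)); data invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0, 8.0],   # inputs (rows) by institution (columns)
              [6.0, 2.0, 8.0]])
Y = np.array([[1.0, 1.0, 1.0]])  # one unit of output each
m, n = X.shape

def ccr_te(r):
    """Technical efficiency (theta) of institution r under CRS."""
    # Decision vector: [theta, lambda_1..lambda_n]
    c = np.concatenate([[1.0], np.zeros(n)])        # minimize theta
    A_ub = np.vstack([
        np.hstack([np.zeros((Y.shape[0], 1)), -Y]), # Y lambda >= Y_r
        np.hstack([-X[:, [r]], X]),                 # X lambda <= theta X_r
    ])
    b_ub = np.concatenate([-Y[:, r], np.zeros(m)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n))
    return res.fun

scores = [ccr_te(r) for r in range(n)]  # frontier institutions score 1.0
```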
To calculate the price-dependent subcomponent, allocative efficiency (AER),
requires taking the ratio of the solutions to (2) and (3). Formally,
AER(Y, X, P) = OER(Y, X, P)/TER(Y, X) (4)
Thus, the allocative efficiency measure "captures inefficiency due to the fact that
[university R], even if on the isoquant with its factor proportions, did not pick the 'right'
input combination given relative input prices" (Byrnes and Valdmanis, 1994, p. 133). In
Figure 3.1, allocative efficiency is depicted by the ratio OT/OS and again can take on a
value between zero and one.
The remaining measure, scale efficiency, is a product of relaxing the CRS
restriction. It describes the extent to which university R is not operating on a scale
consistent with long-run equilibrium, or CRS. In Figure 3.1, this is depicted by OS/OU,
the difference between the isoquant measuring CRS and the isoquant that relaxes this
assumption, labeled Lvar. To calculate this component first requires solving the following
linear programming problem:
WR(Y, X) = minimize θ        (5)
            λj, θ

s.t.   Y · λ ≥ YR
       X · λ ≤ XR · θ
       Σ λj = 1, j = 1, …, n
       λj, θ ≥ 0
Comparing this formulation with (3), the CCR model, it is clear that they are
similar in every respect with the exception that (5) imposes a convexity constraint by
forcing the lambdas (input and output weights) to sum to one. The result is a tighter
convex hull around the data. First put forth by Banker, Charnes, and Cooper (1984) the
BCC ratio form, as it has come to be known, is considered one of the most widely used
DEA models. An immediate result is that for each DMU, the efficiency score obtained
from (5) will always be at least as high as that from (3). As Coelli, Rao, and Battese
(1997) suggest, "the convexity constraint essentially ensures that an inefficient firm is
only 'benchmarked' against firms of a similar size…in a CRS DEA, a firm may be
benchmarked against firms which are substantially larger (smaller) than it" (p. 150). By
taking the ratio of the technical efficiency (TER) score to the solution in (5), one obtains a
scale-efficiency measure, SR(Y, X):
SR(Y, X) = TER(Y, X)/WR(Y, X) (6)
The usefulness of the scale efficiency score is limited however by the fact that it
only indicates if scale inefficiency exists and does not describe whether the institution is
operating at increasing or decreasing returns to scale. Coelli et al. (1997) have shown that
the nature of scale inefficiency can be determined by running an additional linear
program under the condition of non-increasing returns to scale (NIRS). Formally,
WR(Y, X) = minimize θ        (7)
            λj, θ

s.t.   Y · λ ≥ YR
       X · λ ≤ XR · θ
       Σ λj ≤ 1, j = 1, …, n
       λj, θ ≥ 0
Equation (7) is nearly identical to the VRS model in (5) with the exception here
that the lambdas now must sum to less than or equal to one. By allowing for decreasing
returns to scale in the NIRS model it is possible to compare the computed scores to those
obtained from the variable returns to scale model to determine the nature of the scale
inefficiency. If the VRS and NIRS scores are equal, this implies that the institution
is operating at decreasing returns. If the scores are not equal then the institution is
operating at increasing returns.
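The decomposition and classification just described can be summarized in a short sketch. The three efficiency scores below are invented; in practice they would come from solving (3), (5), and (7) for the same institution:

```python
# Sketch of the scale-efficiency decomposition described above: given
# CRS (CCR), VRS (BCC), and NIRS scores for one institution, compute
# scale efficiency as in (6) and classify returns to scale.

def scale_efficiency(te_crs, te_vrs, te_nirs, tol=1e-6):
    s = te_crs / te_vrs                  # equation (6): S_R = TE_R / W_R
    if abs(te_crs - te_vrs) < tol:
        rts = "constant"                 # no scale inefficiency
    elif abs(te_nirs - te_vrs) < tol:
        rts = "decreasing"               # NIRS score equals VRS score
    else:
        rts = "increasing"               # NIRS score differs from VRS score
    return s, rts

# Invented scores: here NIRS equals CRS rather than VRS, so returns
# to scale are increasing.
s, rts = scale_efficiency(te_crs=0.70, te_vrs=0.85, te_nirs=0.70)
```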
Analytical Techniques
Three distinct limitations arise in analyzing DEA results. First, as it is applied
here,25 DEA does not permit one to perform traditional hypothesis testing. Specifically, it
is not possible to determine whether a DMU's input and output weights or its efficiency
scores are statistically significant. The typical approach to reporting results in DEA
studies is to simply present the efficiency scores and to perform a qualitative analysis of
sorts by examining the individual efficiency scores to try and explain anomalous or
outlying observations. In some cases (Ahn et al., 1988; Grosskopf, Margaritis, and
Valdmanis, 2001), results are reported as mean values. This makes it possible to use
nonparametric statistical tests in order to determine whether differences exist between the
mean scores or variances of groups of institutions.
This study takes the mean-oriented approach for three reasons. First, while DEA
results permit researchers to evaluate the efficiency of individual institutions, the goal
here is not to identify specific areas for improvement but to gain a better understanding of
how efficient research-intensive institutions are as a group. Second, even though every
attempt has been made to ensure the data accurately reflects the levels of academic labor
an institution uses, random disturbances and the potential for inaccurate reporting limit
the usefulness of relying heavily on any one institution�s scores. Finally, the ability to
employ some inferential statistics is appealing in considering the extent to which the
results may be generalizable.
25 While stochastic DEA models do exist, they are relatively recent additions to the DEA literature.
One particular statistical test that can be employed, proposed by Banker (1993),
determines whether the difference in efficiency ratings for two particular samples
is statistically significant. Assuming that the inefficiency ratings follow an exponential
distribution26 the relevant test statistic (Ftest) is:
Ftest = [Σ (1/hj-1)/nj]/[Σ (1/hj-1)/nj] Ni = (n1, n2, �,N)
j ε N1 j ε N2
where:
Ni = The total number of DMUs in sample �i.�
hj = The DEA efficiency rating for the jth institution in the ith sample.
The computed test statistic follows an F-distribution with 2*N1 and 2*N2 degrees
of freedom.
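As an illustration, a minimal Python sketch of this test (with invented efficiency scores; this is not the computation performed in the study itself) is:

```python
# Sketch of Banker's (1993) F-test for comparing DEA efficiency between two
# samples, assuming the inefficiencies (1/h_j - 1) follow an exponential
# distribution. The efficiency scores below are hypothetical.

def banker_f(scores_1, scores_2):
    """Return the test statistic and its degrees of freedom (2*N1, 2*N2)."""
    n1, n2 = len(scores_1), len(scores_2)
    mean_ineff_1 = sum(1.0 / h - 1.0 for h in scores_1) / n1
    mean_ineff_2 = sum(1.0 / h - 1.0 for h in scores_2) / n2
    return mean_ineff_1 / mean_ineff_2, (2 * n1, 2 * n2)

tier2 = [0.72, 0.80, 0.65, 0.90, 0.77]  # hypothetical DEA scores in (0, 1]
tier1 = [0.95, 0.88, 1.00, 0.92]
f_stat, dof = banker_f(tier2, tier1)
print(round(f_stat, 3), dof)  # prints: 4.601 (10, 8)
```

The computed value is then compared against the F critical value at the chosen α level.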
In a marked deviation from prior studies, the mean input and output weights
generated by the DEA analyses are also reported here. As was discussed in chapter 2, the
input and output weights are the “unknown” that the DEA program seeks to solve.
Evaluated individually, they represent the extent to which that input or output influences
a particular institution�s efficiency score. In economic terms they are implicit, or
shadow, prices. When the ratios of different weights are calculated they have additional
economic interpretations. By taking the ratio of two input weights, one obtains a measure
26 Banker also considers the case of a half-normal distribution.
of the marginal rate of technical substitution (RTS) between the inputs. In the case of
two output weights, the ratio is the marginal rate of transformation (RPT) between the
outputs (Rosen, Schaffnit, and Paradi, 1998).
While these are useful empirical estimations of underlying economic concepts,
they do have their limitations. For example, the calculated marginal rate of technical
substitution between faculty and research assistants should describe the rate at which the
input mix can be altered to maintain the same level of output. The problem is that the
“output” here comprises the joint products of research and education and, as the model is
specified here, research assistants (technically) do not contribute to the production of education.
In order to account for this, the following approach was used. In chapter 2 it was
shown that the RTS between two inputs is equal to the negative ratio of the marginal
productivities of those inputs. Rosen et al. (1998) have shown that taking the ratio of an
output weight to an input weight yields a measure of the marginal productivity of the
input vis-à-vis that output.27 From this, marginal productivity measures for research
assistants to publications, teaching assistants to lower-undergraduate education, faculty to
publications, and faculty to education were constructed. These were used to calculate
RTS values that discriminate between labor used in the education process and that used in
the production of research.
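A sketch of this weight-ratio construction, using the Tier 1 mean weights later reported in Table 4.11 (the variable names are invented for illustration, and these are sample means, not any single institution's weights):

```python
# Sketch of the construction after Rosen et al. (1998): the ratio of an
# output weight to an input weight approximates that input's marginal
# productivity (MP) with respect to that output, and the negative ratio of
# two such MPs gives the rate of technical substitution (RTS).

w_pubs, w_lower_ug = 0.2538, 0.0447              # output weights (Table 4.11)
w_faculty, w_ra, w_ta = 0.7472, 0.4970, 2.1863   # input weights (Table 4.11)

mp_ra_pubs = w_pubs / w_ra         # research assistants -> publications
mp_ta_lug = w_lower_ug / w_ta      # teaching assistants -> lower-level UG
mp_fac_pubs = w_pubs / w_faculty   # faculty -> publications

# RTS between faculty and research assistants in producing publications:
rts_fac_ra = -mp_fac_pubs / mp_ra_pubs
print(round(rts_fac_ra, 4))  # prints -0.6651
```

Pairing each input's marginal productivity with the output it plausibly contributes to is what lets the resulting RTS values discriminate between education labor and research labor.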
A second limitation involves the sensitivity of DEA results to different input and
output measures. Prior research has shown that institutions may be considered efficient
under one set of input and output variables and, at the extreme, be considered extremely
inefficient when another set of variables are introduced (Johnes and Johnes, 1995). What
is more, because there are no econometric tests to evaluate the impact of including or
excluding variables, researchers often must rely on the variation in efficiency scores
stemming from including or excluding variables in order to determine which variables to
include in an analysis (McMillan and Datta, 1998). As a result, most DEA studies report
the results of several analyses, using different variable sets, to test the robustness of the
efficiency scores.
This issue has far-reaching implications in this study. Efficiency is framed here
specifically as a labor phenomenon and it is difficult to postulate an alternative set of
input measures. The only plausible candidate might be instructional expenditures, but this
would not be appropriate here for two reasons. First, it is a cost-based measure and thus
would not make it possible to discriminate between technical and allocative
inefficiencies. Second, the particular focus on academic labor would be compromised by
the fact that instructional expenditures include more than faculty salaries.
While DEA is considered an atheoretical technique, it is only so in that,
mathematically, it imposes no functional form on the data. As researchers discussing the
mechanics of DEA point out (Cooper et al., 2000; Bowlin, 1998), great effort should be
made to incorporate not only a comprehensive set of input and output measures but to
take considerable care in choosing outputs “that can be reasoned to be manifestations of
inputs” (Avkiran, 2001). It is evident in reviewing prior DEA studies of higher education
institutions that there is little consensus over which input and output measures should be
included. Measures are frequently used not because they capture the underlying
production process of colleges and universities but because the data is readily available
27 As Rosen et al. (1998) note, this interpretation is not strictly correct for individual observations making up the constructed frontier because the derivative is not defined at the joint of two straight lines. However, the weights reported are the mean values across all institutions.
(e.g., Ahn et al., 1988), or because they have been largely applied in other DEA studies
(e.g., McMillan and Datta, 1998).
This study focuses specifically on academic labor. The inputs used here were
selected because they represent realistic tradeoffs in the production of both education and
research. In terms of outputs, while they are in some measure consistent with prior
studies, their use is justified by Clark's taxonomy of education and the specific way in
which academic labor seems to be allocated. Moreover, the decision to disaggregate
physical input quantities from their respective costs excludes the possibility of using the
only alternative, cost-based measures, for most of the variables in the analysis. As such,
only one model is specified based on the limited alternative measures and a degree of
confidence that the relevant relationships are properly specified.
The third difficulty involves what are known in the DEA literature as “slack”
variables. The best way to grasp the notion of slack in this context, and how it affects
efficiency scores, is graphically. An example is presented in Figure 3.2.
In this example, the institution operating at point E forms part of the constructed
frontier, implying in this example that it is technically efficient. Point T, on the other
hand, is not on the frontier and hence is considered technically inefficient. From
institution T's linear programming problem (the CCR or BCC ratio models), it should be
able to radially contract its use of both inputs and operate, efficiently, at point
T′. Yet, as Figure 3.2 shows, while operating at T′ minimizes the use of input X1, the
institution operating at point E also uses this amount of X1 and still uses less of input X2.
The vertical distance between points E and T′ represents the amount of slack, or excess
use, of input X2 that firm T still uses because it has been projected back to the free-
disposal region of the efficient frontier.
Figure 3.2 – Graphic Depiction of Slack Variables
[Figure: input space with X2 on the vertical axis and X1 on the horizontal axis, showing origin O, frontier point E, inefficient point T, and its radial projection T′.]

While slack variables tend not to be reported in empirical applications, several
authors like Coelli et al. (1997) and Cooper, Seiford, and Tone (2000) devote
considerable space to the topic in theoretical treatments. The fact that institutions
operating on the free-disposal region of the frontier are not technically efficient in the
strictest sense has motivated the development of a more restrictive definition of technical
efficiency called Pareto-Koopmans efficiency. First put to form by Farrell (1957) and
later completed28 by Charnes, Cooper, and Rhodes (1978), Pareto-Koopmans efficiency
requires that θ (the measure of technical efficiency) equal one and that all slacks must
equal zero.
Whether or not slack values need to be reported in DEA studies is still open to
debate. Even Coelli et al. (1997) suggest that the importance of slacks may be
overrated. As they point out, their existence is an artifact of a piecewise-constructed
frontier. With an infinite number of observations or an alternative frontier construction
that creates a smooth isoquant, the slack issue would disappear (p. 176). Given this
rationale and that none of the DEA analyses reviewed for this study reported the slack
values, they are not addressed here either.
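A toy numerical version of Figure 3.2 may make the slack idea concrete (the data are invented and not from this study; to keep the arithmetic solvable without the full envelopment LP, T's efficient peer is assumed to be the single unit E):

```python
# Toy illustration of radial contraction and slack. Efficient peer E and
# inefficient unit T each produce one unit of output; because T's reference
# point on the frontier is assumed to be E alone, theta can be found without
# solving a linear program.

x_E = (2.0, 2.0)   # peer E's use of inputs X1, X2
x_T = (3.0, 9.0)   # unit T's use of inputs X1, X2

# Smallest theta such that theta * x_T still weakly dominates x_E:
theta = max(e / t for e, t in zip(x_E, x_T))          # = 2/3 ~ 0.667

# Radially projected point T' and the remaining (non-radial) slack:
t_prime = tuple(theta * t for t in x_T)               # ~ (2.0, 6.0)
slack = tuple(tp - e for tp, e in zip(t_prime, x_E))  # ~ (0.0, 4.0)
print(theta, t_prime, slack)
```

Here T is radially efficient at θ = 2/3, yet its projection T′ still uses 4 more units of X2 than E; that excess is exactly the slack Figure 3.2 depicts, and why T′ is not Pareto-Koopmans efficient.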
Chapter 4 reports the results of the DEA analyses in the following manner. The
first section addresses the sample of institutions evaluated in the technical efficiency
model and how they are grouped for analysis. In section two, the results are reported and
statistical tests are performed to determine whether any differences exist between the
mean values in the two sample groups as well as whether characteristic variables
influence the efficiency scores. The third section addresses the institutions evaluated in
the allocative efficiency model and describes the construction of the relative price
measures. In the last section the results from the allocative efficiency analysis are
reported.
28 Cooper, Seiford, and Tone (2000) point out that while Farrell came up with the idea for Pareto-Koopmans efficiency, he was not able to find a tractable way to fully test for it.
Chapter 4
“Science is the great antidote to the poison of enthusiasm and superstition”
- Adam Smith

Institutions analyzed in the technical efficiency model
Researchers suggest that groups of DMUs in a DEA analysis be “relatively
homogeneous” (Bowlin, 1998, p. 19). Universities, unfortunately, do not fit this
condition. While they all employ faculty members and graduate assistants, teach
students, and produce research, there can be significant quality differences between
institutions in terms of both inputs and outputs. So much so that the institutions here
arguably do not operate in a perfectly competitive market but instead in what
Chamberlin (1962) called a monopolistically competitive market, where firms all
produce and sell slightly differentiated versions of what is essentially the same product.
It would be misleading, for example, to jointly evaluate the technical efficiency of
Harvard and Idaho State University with respect to research output. The former arguably
possesses a disproportionate share of the world’s leading researchers. An argument could
be made that these faculty members may produce relatively fewer publications than other
faculty. However, if the quality of the research they perform is much higher, then the
publications they produce are likely to be worth more than an average publication in
terms of value-added. In this case, comparing the two institutions on the basis of
publication counts alone would not reflect one institution�s use of notably higher-quality
inputs or production of higher-quality outputs. To successfully compare institutions
producing similar, but different, products would first require finding some basis and then
determining what an appropriate equivalent would be for each product by which they
could be properly compared.
In the previous chapter, the physical input and output measures were defined and
operationalized for use in the DEA analyses. This section completes that task by
considering how to incorporate some aspect of quality into these measures. As was
stated in the last chapter, it would be ideal to come up with some quality basis with which
relative measures between institutions could be derived. Unfortunately, a lack of prior
research, and of consensus over how to operationalize such a concept, makes this infeasible
here.
The pressing question is, given that it is not possible to develop reliable
institutional measures of quality, what would a “second best” option look like? A
plausible answer emerges when one considers price-discriminating behavior in
monopolies. The most attractive option for a monopolist would be to know what each
consumer of its good is willing to pay. With this information it could theoretically
charge each individual exactly that price and thereby extract the most profit. This is
referred to as perfect, or first-degree, price discrimination, and the main reason it is
hypothetical is that such detailed information cannot be obtained, much like the problem
here. A more practical form of price discrimination involves dividing the buyers into
two or more categories: high and low, or even high, middle, and low. The “second best”
alternative facing the monopolist is to partition all of the buyers into general groups and
subsequently treat each as an individual entity for pricing purposes.
This idea forms the rationale for the approach taken here. It would be desirable to
know the quality of each input and output for each institution in this study. However this
information is not available. As such, the second best approach would be to subdivide
the institutions into general groupings and consider each group individually.
To do this, the sample is subdivided into two groups using the following
approach. From the 241 institutions in the population categorized as research or doctoral
granting, data was available for 183, or 75%, of the institutions. All institutions were
ranked in descending order according to their Mean Scholarly Quality of Program
Faculty (MSQPF) rating from the National Research Council's (NRC) (1993)
Assessment of Research-Doctorate Programs in the United States.29 These ratings were
summed and percentage shares of the total value were calculated for each institution.
From the cumulative distribution function (CDF) of these scores, those institutions making up
the first 50% of the CDF were grouped as “First Tier” institutions and the remainder as
“Second Tier” institutions. The general summary statistics for these two groupings are
presented in Tables 4.1 and 4.2.
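A hypothetical sketch of this partitioning procedure (the ratings and institution names below are invented; the study itself uses the NRC data):

```python
# Sketch of the tiering rule: rank institutions by MSQPF rating, accumulate
# their shares of the summed ratings, and assign those covering the first
# 50% of the cumulative distribution to Tier 1.

def split_tiers(ratings, cutoff=0.50):
    """ratings: dict mapping institution -> MSQPF rating."""
    total = sum(ratings.values())
    tier1, cum = [], 0.0
    for name, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
        if cum >= cutoff * total:   # the 50% mark has been reached
            break
        tier1.append(name)
        cum += r
    tier2 = [n for n in ratings if n not in tier1]
    return tier1, tier2

ratings = {"U1": 4.8, "U2": 4.5, "U3": 3.9, "U4": 3.0, "U5": 2.2, "U6": 1.6}
t1, t2 = split_tiers(ratings)
print(t1, t2)  # prints: ['U1', 'U2', 'U3'] ['U4', 'U5', 'U6']
```

Because the highest-rated institutions carry disproportionate shares, fewer than half of the institutions end up in Tier 1, mirroring the 68-of-183 split reported below.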
29 The MSQPF rating is based on a survey instrument sent out by the NRC in which faculty members are presented with 50 randomly selected similar departments at other institutions and asked to rate, on a scale of 1 to 5, with the latter being the highest: a) the scholarly quality of their faculty and b) their effectiveness in educating research scholars/scientists. Additionally, raters were asked to identify no more than 5 programs in the highest category.
Table 4.1 – Tier 1 Institutions by Carnegie Classification (CC)

             Research I   Research II   Doctoral I   Doctoral II   Total
  Public         38            4            1             0          43
  Private        20            4            0             1          25
  CC Total       58            8            1             1          68
  Medical        43            0            0             1          44
Table 4.2 – Tier 2 Institutions by Carnegie Classification (CC)

             Research I   Research II   Doctoral I   Doctoral II   Total
  Public         15           20           22            21          78
  Private         6            6           14            11          37
  CC Total       21           26           36            32         115
  Medical        16           11            3             4          34
As Tables 4.1 and 4.2 show, the top 37% of the sample institutions (68 of 183)
garner half of the total MSQPF ratings. Approximately 85% of the Tier 1 institutions are
Research I's, and publics outnumber privates by almost two to one. The Tier 2
institutions, on the other hand, are more evenly distributed across Carnegie classifications
though there are slightly more doctoral-granting institutions. Like Tier 1, the public
universities outnumber privates by just over two to one.
While this measure only crudely captures notions of quality, it nonetheless has
appeal. As the variable name suggests, the MSQPF value is probably as good as any
available measure of overall faculty quality30 as well as research output quality. As the
ratings form part of the criteria for measuring the quality of graduate education programs,
it is also likely to reflect the quality of an institution�s teaching and research assistants.
In terms of the education output, the correlation coefficient between the MSQPF values
and the minimum SAT score for the top 25% of an institution's entering class is .603,
which is consistent with researchers' observations that SAT scores are a comparable
measure of education quality (Geiger, 2000; Clotfelter, 1991; Webster, 1986). As a final
check, a cursory glance at the universities in the two tiers presents what Winston (1999)
describes as the “usual suspects” one would expect to find in each group.
As with all education related studies, adequately controlling for the various
quality aspects of inputs and outputs is problematic. While the two tiers in the technical
efficiency model were divided explicitly to account for quality differences between the
groups' outputs, this is clearly a crude measure, and a strong case can be made that little
quality variation will exist between the lowest-scoring institutions from Tier 1 and the
highest-scoring institutions from Tier 2. Every institution will have particular
departments or programs considered nationally renowned or highly respectable, and some
will have many more than others. These differences are largely hidden when the institution is the
unit-of-analysis. Some universities draw students from a national pool of applicants
30 While this measure may reflect familiarity with a discipline's knowledge base, it does not capture the extent to which faculty members might be considered quality teachers.
whereas others may draw their students almost exclusively from their own state. For the
latter, the available pool of applicants may influence the extent to which they can draw
high-quality students, as inputs or as outputs (Hoxby, 1997). In terms of institutional
resources, some universities are better endowed than others, which is also likely to
influence their ability to attract high-quality faculty/researchers.
Ideally, it would be preferable to develop quality-control weights for each input
and output measure or to sufficiently divide the institutions into very tight, homogeneous
groupings. The former is complicated by the fact that there is little consensus over how
to empirically capture notions of quality for any input or output measure. The latter
would force the sample sizes down to a point where it would not be possible to perform
meaningful statistical analyses. Under the circumstances, the approach used here
recognizes the limited ability to control for quality but tries to strike a balance between
the two ideal approaches.
It was pointed out in the last chapter that the input and output measures were
constructed specifically to control for the presence of medical schools, academic
hospitals, and medical research centers. The faculty input measure does not include
clinical or pre-clinical faculty members, nor do the input or output measures of graduate
students include those in medical fields. In addition, every attempt was made to remove
those journal articles whose authors come from any medical-related part of the university.
It is likely though that medical schools influence university efficiency in other
ways. Geiger and Feller (1995) for example identify a strong link between medical
schools and the more academic biological sciences. Here they are considered as a
characteristic, or dummy, variable whose presence alone may have an impact on
technical efficiency. In this study there are a total of 78 universities possessing various
medical facilities. Fifty-six percent of these institutions are in Tier 1. As Table 4.1
shows, with a single exception, all of the medical institutions in Tier 1 are Research I
institutions. From Table 4.2 it is evident that they are distributed only slightly more
evenly across the Carnegie classification in Tier 2. Almost half are at Research I schools
(47%) and 80% are Research I or II institutions.
Table 4.3 depicts summary statistics on the input and output variables, by tier, for
the technical efficiency model. More detailed figures are presented in Appendices B and
C. Looking at Table 4.3, on average, the institutions in Tier 1 use significantly more of
each input relative to institutions in Tier 2. Comparing the ratios of mean input values
between the groups, Tier 1 institutions, on average, use about 1.7 times more faculty
members, more than double the number of teaching assistants, and nearly four times the
number of research assistants.
This difference in input usage between the tiers is largely explainable by the
amount of education and research output each group produces. Tier 1 institutions on
average educate about 1.4 times as many lower- and upper-level undergraduate students
and double the number of graduate students. The largest difference though is in research
output. Tier 1 institutions produce, on average, almost 4.5 times as many publications.
The figures in Table 4.3 also suggest significantly greater input- and output-level
variability for Tier 1 institutions. A comparison of the standard deviations between tiers
shows about double the variation in input and output usage for the institutions in Tier 1.
Table 4.3 – Summary Statistics of Input/Output Measures for TE Model

                              Tier 1                            Tier 2
                    Mean     Max     Min     SD      Mean     Max    Min     SD
  Inputs:
  Faculty           1,068   2,261    171     513      618   1,439    137    291
  Research Assts.   1,222   3,417     18     765      327   1,266      0    287
  Teaching Assts.     765   2,679    448     492      336   1,128      2    246
  Outputs:
  Publications      1,185   3,043    175     609      265   1,047     11    215
  Lower-Level UG    6,578  15,623    448   3,926    4,635  12,976    645  2,656
  Upper-Level UG    7,208  17,851    452   4,749    4,921  14,190    617  2,839
  Graduate Stud.    4,461  10,641    900   2,485    2,127   6,045    368  1,239
Technical Efficiency DEA Results
Two DEA analyses were performed, per tier, to obtain the technical efficiency
scores: the first under the assumption of constant returns to scale (CRS) and the second
under variable returns to scale (VRS). Table 4.4 depicts the mean technical efficiency
scores for both tiers. Standard deviations are listed in parentheses. All values in the
CRS, VRS, and Scale Efficiency columns can vary between zero and one, with one
implying 100% technical efficiency (or, in the case of the last column, 100% scale efficiency).
The scale measure is computed by taking the ratio of the CRS and VRS scores. Detailed
outputs, by institution, for the two tiers are presented in Appendices D and E.
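The scale measure is the elementwise ratio of the two sets of scores; a minimal sketch with hypothetical values:

```python
# Sketch of the scale-efficiency computation described above: for each
# institution, scale efficiency = CRS score / VRS score. A score of 1.0
# means the institution is operating at an efficient scale. All values
# below are hypothetical.

crs = {"A": 0.879, "B": 0.772, "C": 0.95}
vrs = {"A": 0.926, "B": 0.857, "C": 0.95}

scale = {k: crs[k] / vrs[k] for k in crs}
print({k: round(v, 3) for k, v in scale.items()})
# prints: {'A': 0.949, 'B': 0.901, 'C': 1.0}
```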
Table 4.4 – Mean Technical Efficiency Scores

                  CRS        VRS     Scale Efficiency    N
  Tier 1         87.9%      92.6%        95.0%           68
  S.D.          (11.4%)     (9.8%)       (6.5%)
  Tier 2         77.2%      85.7%        90.6%          115
  S.D.          (13.7%)    (12.8%)      (11.0%)
  Difference     10.7%       6.9%         4.4%
The results from Table 4.4 indicate that in all three efficiency measures
institutions in the first tier, on average, are more efficient. The Banker Test (see chapter
3) was applied to determine whether the mean scores for the two tiers were statistically
different from each other for both the CRS and VRS models. Both computed F-statistics
were significant beyond the α = .01 level (CRS: F(230,136) = 3.575; VRS: F(230,136) = 3.497),
indicating that the mean values between the two tiers are statistically different. This
result must be interpreted with care, however, as Zhang and Bartels (1998) have shown
that smaller sample sizes have a natural tendency to exhibit higher mean efficiency
scores.31 Because the Banker test can only be applied to the CCR- and BCC-ratio
models, a Mann-Whitney U test was employed to determine whether the mean scale
efficiency scores for the two tiers differ. Also called a rank-sum test, it is non-parametric
and considered a “powerful alternative” (Aczel, 1989, p. 759) when the assumption of a
normal distribution is not met. The large-sample test statistic (z) of 3.2142 is significant
at the α = .05 level indicating that the scale efficiency scores are statistically different
from each other.
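A pure-Python sketch of the large-sample rank-sum statistic (the data are hypothetical; midranks handle ties, and no tie correction is applied):

```python
# Sketch of the large-sample Mann-Whitney (rank-sum) test used above to
# compare scale-efficiency scores between tiers. z is the normal
# approximation of the U statistic.
import math

def mann_whitney_z(a, b):
    pooled = sorted(a + b)
    def rank(v):  # midrank, which handles tied values
        lo = pooled.index(v) + 1
        hi = lo + pooled.count(v) - 1
        return (lo + hi) / 2.0
    n1, n2 = len(a), len(b)
    r1 = sum(rank(v) for v in a)            # rank sum of sample a
    u1 = r1 - n1 * (n1 + 1) / 2.0           # U statistic
    mu = n1 * n2 / 2.0                      # mean of U under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u1 - mu) / sigma

tier1 = [0.97, 0.95, 0.99, 0.93, 0.96]  # hypothetical scale scores
tier2 = [0.90, 0.88, 0.94, 0.85, 0.91]
print(round(mann_whitney_z(tier1, tier2), 3))  # prints: 2.402
```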
Compared to IHEs of similar sizes (VRS), the institutions in Tier 1 (Tier 2), on
average, are approximately 93% (86%) efficient at choosing an input mix that minimizes
inputs for a given level of output. When all institutions are modeled under the
assumption of constant returns (CRS), the results in Table 4.4 indicate Tier 1 (Tier 2)
institutions on average are operating at 88% (77%) technical efficiency. Turning to the
third column, scale efficiency, the mean scores show that Tier 1 (Tier 2) institutions are
operating, on average, at 95% (91%) scale efficiency, or put alternatively, on a scale
consistent with long-term equilibrium.
The scale efficiency scores in Table 4.4 are aggregate measures; what they do not
show is whether scale inefficiencies are due to the institutions operating under increasing
or decreasing returns to scale. As was shown in chapter 3, computing an additional DEA
analysis for each tier under the assumption of non-increasing returns to scale (NIRS)
makes it possible to determine the nature of scale inefficiencies. This is done by
comparing the VRS technical efficiency scores with the NIRS scores: if the VRS and
NIRS scores are equal the institution exhibits decreasing returns (DRS) and if the two are
not equal then it exhibits increasing returns (IRS). Table 4.5 depicts the number of
31 I would like to thank Tim Coelli for bringing this article to my attention.
institutions, by tier, exhibiting DRS, IRS, or constant returns to scale. The values in
parentheses represent the relative shares of each category by tier. Because scale
inefficiency is determined by the extent to which institutions are not operating at constant
returns to scale, the CRS category shows the number of institutions that were computed
as being scale efficient.
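The classification rule just described can be sketched as follows (scores are hypothetical):

```python
# Sketch of the returns-to-scale decomposition described above: an
# institution is scale efficient (CRS) when its CRS and VRS scores match;
# otherwise it exhibits decreasing returns (DRS) when NIRS == VRS and
# increasing returns (IRS) when the two differ.

def classify(crs, vrs, nirs, tol=1e-6):
    if abs(crs - vrs) < tol:
        return "CRS"
    return "DRS" if abs(nirs - vrs) < tol else "IRS"

print(classify(0.90, 0.90, 0.90))  # prints: CRS (scale efficient)
print(classify(0.80, 0.95, 0.95))  # prints: DRS (NIRS equals VRS)
print(classify(0.80, 0.95, 0.80))  # prints: IRS (NIRS differs from VRS)
```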
Table 4.5 – Decomposition of Scale Efficiency Results

              CRS          IRS          DRS
  Tier 1    19 (28%)     30 (44%)     19 (28%)
  Tier 2    17 (15%)     60 (52%)     38 (33%)
The results in Table 4.5 indicate that more than 1 out of every 4 institutions in
Tier 1 and approximately 1 out of every 6 institutions in Tier 2 are relatively scale
efficient. In comparing the numbers of institutions exhibiting IRS versus DRS, the results
show that institutions are more likely to be operating at increasing returns. A
contingency analysis was performed to test the null hypothesis that the proportions of
institutions in the different scale categories were equal for the two tiers. The computed χ2
statistic of 4.6822 with two degrees of freedom is not significant even at the α =.10 level,
implying that the distributions of scale inefficiencies are not statistically different from
each other.
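The contingency analysis can be sketched as follows; using the observed counts from Table 4.5, the recomputed statistic matches the reported 4.6822:

```python
# Sketch of the chi-square contingency test used throughout this section:
# observed counts are compared with the counts expected if the proportions
# were equal across tiers.

def chi_square(table):
    """table: list of rows of observed counts."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    grand = sum(row_tot)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand  # expected count
            chi2 += (obs - exp) ** 2 / exp
    return chi2

observed = [[19, 30, 19],   # Tier 1: CRS, IRS, DRS (Table 4.5)
            [17, 60, 38]]   # Tier 2
print(round(chi_square(observed), 3))  # prints: 4.682
```

The statistic is then compared against the χ2 critical value with (rows − 1) × (columns − 1) degrees of freedom.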
In Table 4.6 the institutions in both tiers are sorted into 10% efficiency ranges to
evaluate the distribution of efficiency scores in the VRS models.32 The column “#
Efficient” depicts those institutions with efficiency scores of 100% (those institutions
making up the constructed frontier). The values in parentheses represent the relative
share of institutions in that category for that tier.
Table 4.6 – Distribution of Technical Efficiency Scores for VRS Model

            # Efficient   90 < X < 100   80 < X < 90   70 < X < 80   X < 70
  Tier 1     33 (49%)      12 (18%)       13 (19%)       8 (12%)     3 (4%)
  Tier 2     34 (30%)      16 (14%)       24 (21%)      28 (24%)    13 (11%)
A contingency analysis was performed to determine whether the proportions of
institutions falling into each category were equal between tiers. The resulting χ2 statistic
of 10.4132, with four degrees of freedom, is statistically significant at the α = .01 level,
suggesting that the proportions of institutions falling into the different efficiency ranges
for Tier 1 are statistically different from those in Tier 2.
The results in Table 4.6 show that approximately half of the institutions in Tier 1
are classified as technically efficient in the VRS model. Summing the cumulative
percentages of the categories from left to right reveals that two out of every three
32 For the remainder of this section, only the results from the VRS analyses are reported. The only reason that the CRS analyses were performed was to calculate the scale efficiency measures.
institutions are operating above 90% efficiency (49% + 18%) and more than 8 out of 10
are at least 80% technically efficient. For Tier 2, the results show approximately one of
every three institutions are classified as technically efficient. Again summing the
cumulative percentages shows that 44% are at least 90% efficient, and two out of every
three are at least 80% efficient. A comparison between the tiers shows that Tier 1
institutions are more likely than Tier 2 institutions to be classified as technically efficient
or above 90% efficient. Conversely, Tier 2 institutions are more likely to be classified in
the other, lower efficiency, categories.
The next step in the analysis considers whether technical or scale efficiency
scores are confounded by form of institutional control (public or private). In Table 4.7,
Tier 1 institutions are sorted by public or private and by whether or not an institution is
considered technically efficient.
Considering first whether type of institutional control confounds the VRS efficiency
scores: the percentages of public and private institutions falling into the two efficiency
categories are roughly inverted, suggesting that it may. From a contingency analysis,
however, the computed χ2-statistic (2.0825) with two degrees of freedom is not
significant at the α = .05 level, and hence it is not possible to reject the null hypothesis
that the proportions of publics and privates classified as efficient or not efficient are equal.
Analyzing the scale efficiency scores in the same manner presents a similar result.
The computed χ2-statistic (.4851) is not statistically significant even at the α = .10 level,
indicating that the proportions of publics and privates classified as fully scale
efficient are not statistically different from each other. Thus, for Tier 1 institutions,
neither VRS nor scale efficiency scores seem to be affected by form of institutional
control.
Table 4.7 – Tier 1 Results Sorted by Institutional Control

                      VRS                    Scale
                Public     Private     Public     Private
  Efficient    18 (42%)   15 (60%)    12 (28%)    9 (36%)
  Inefficient  25 (58%)   10 (40%)    31 (72%)   16 (64%)
In Table 4.8, the same figures are reported for Tier 2 institutions. Because of the
larger sample size and wider distribution of scores, efficiency scores are disaggregated
into three categories rather than two. For the VRS scores, the computed χ2-statistic
(1.4607) with four degrees of freedom is not significant at the α = .10 level. As such, it is
not possible to reject the null hypothesis that the proportions of publics and privates
classified as efficient or not efficient are equal.
In terms of any differences in the distribution of scale efficiency scores, the
computed χ2-statistic (3.0188) is also not significant, indicating that the proportions of publics
and privates classified as fully scale efficient are not statistically different from
each other. Like Tier 1, neither VRS nor scale efficiency scores seem to be affected by
form of institutional control for the institutions in Tier 2.
Table 4.8 – Tier 2 Results Sorted by Institutional Control

                              VRS                    Scale
                        Public     Private     Public     Private
  Efficient            22 (28%)   12 (32%)    13 (17%)    4 (11%)
  Ineffic. > 80        30 (38%)   10 (27%)    57 (73%)   25 (68%)
  Ineffic. 80 > x > 60 26 (33%)   15 (41%)     8 (10%)    8 (21%)
The next step in the analysis considers whether technical or scale efficiency
scores are affected by the presence or absence of medical facilities. In Table 4.9, Tier 1
institutions are sorted into those with and without hospitals or medical centers and by
whether or not the institution forms part of the constructed frontier.
For the VRS model the figures suggest that institutions without hospitals or
medical centers are more likely to be categorized as technically efficient. From a
contingency analysis, the computed χ2-statistic (4.8848) with one degree of freedom is
statistically significant at the α =.05 level. Hence it is necessary to reject the null
hypothesis that the proportions of medical and non-medical institutions are equally likely
to be technically efficient.
For scale efficiency scores, the computed χ2-statistic (1.1688) is not statistically
significant even at the α =.10 level indicating that institutions with medical facilities are
just as likely as those without to be classified as scale efficient.
Table 4.9 – Tier 1 Results Sorted by Presence of Medical Facilities

                      VRS                    Scale
                Medical     None       Medical     None
  Efficient    17 (39%)   16 (67%)    11 (25%)    9 (38%)
  Inefficient  27 (61%)    8 (33%)    33 (75%)   15 (62%)
In sum, the distribution of VRS technical efficiency scores for Tier 1 institutions
is shown to be affected by the presence of medical facilities. Universities without
hospitals or medical centers are almost twice as likely to be regarded as technically
efficient. At the same time, the results indicate that the presence or absence of medical
facilities does not affect scale efficiency results.
In Table 4.10 similar results are reported for Tier 2 institutions. Because of the
larger sample size it was again possible to disaggregate efficiency scores into three
categories rather than two. For VRS scores, the computed χ2-statistic (1.0116) with two
degrees of freedom is not significant even at the α =.10 level. It is therefore not possible
to reject the null hypothesis that medical and non-medical institutions are distributed
across the efficiency categories in the same proportions.
In terms of scale efficiency scores, the computed χ2-statistic (6.3054) is
statistically significant at the α =.05 level indicating that the proportions do differ
between the two types of institutions. Institutions with hospitals or medical centers are
less likely to be considered fully scale efficient (3% versus 20%). Moreover, these
institutions are overrepresented in the lowest scale efficiency category (20% versus 11%).
Table 4.10 – Tier 2 Results Sorted by Presence of Medical Facilities
VRS Scale
Medical None Medical None
Efficient 8 (24%) 26 (32%) 1 (3%) 16 (20%)
Ineffic. > 80 12 (35%) 28 (35%) 26 (77%) 56 (69%)
Ineffic. 80 > x > 60 14 (41%) 27 (33%) 7 (20%) 9 (11%)
The last sets of results reported for the technical efficiency analysis are the mean
input and output weights for the technically efficient institutions. Table 4.11 shows the
mean weights for those institutions in the two tiers categorized as technically efficient.33
Standard deviations are shown in parentheses. Individual institutions' input and output
weights are listed in Appendices G and H.
33 From chapter 2, the marginal rate of substitution and marginal rate of transformation are products of examining changes on the isoquant or the production possibilities curve.
Table 4.11 – Mean Input/Output Weights for Technically Efficient Institutions (000)
Inputs Outputs
Faculty RA TA LUG HUG GRAD PUBS
Tier 1   0.7472    0.4970    2.1863    0.0447    0.0187    0.0711    0.2538
        (0.9920)  (0.9357)  (6.6411)  (0.0879)  (0.0388)  (0.0994)  (0.5046)
Tier 2   1.5608    0.7791    2.7568    0.0962    0.0632    0.1264    0.3834
        (1.7151)  (1.6168)  (6.8235)  (0.1913)  (0.1196)  (0.2095)  (0.7536)
The Mann-Whitney U test used to evaluate the mean differences in scale
efficiency is also used here to test whether the individual mean input and output weights
for the two tiers are statistically different from each other. The computed (z)-statistic for
each set of input and output measures was not statistically significant even at the α =.10
level. As such, it is not possible to reject the null hypothesis that the mean values of the
weights are the same across the two tiers.
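The mechanics of that comparison can be sketched as follows. The weight vectors below are illustrative stand-ins (the actual per-institution weights are listed in Appendices G and H), so only the procedure, not the numbers, carries over:

```python
from scipy.stats import mannwhitneyu

# Hypothetical faculty input weights for technically efficient institutions
# in each tier -- illustrative values, not the study's data
tier1_w = [0.21, 0.35, 0.48, 0.72, 0.90, 1.10, 1.55]
tier2_w = [0.30, 0.55, 0.80, 1.20, 1.60, 2.10, 2.90]

u_stat, p_value = mannwhitneyu(tier1_w, tier2_w, alternative='two-sided')
# A p-value above the chosen alpha (.10 in the text) fails to reject
# the null hypothesis of identical distributions
print(u_stat, round(p_value, 3))
```

The Mann-Whitney U test is appropriate here precisely because, as discussed below, DEA weights cannot be assumed to follow any particular distribution.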
Table 4.12 depicts the mean marginal rates of transformation between the outputs
for the two tiers. Looking at the PUBS row, the results suggest that on average,
technically efficient Tier 1 (Tier 2) institutions could forego producing one publication
and instead produce approximately 5.67 (3.98) lower-level undergraduates (LUGs), 13.56
(6.06) higher-level undergraduates (HUGs) and 3.56 (3.03) graduate students. When
graduate students are considered, on average the universities in Tier 1 (Tier 2) could
forego producing one graduate education and instead produce 1.58 (1.31) LUGs, 3.8 (2)
HUGs and .28 (.32) publications.
Table 4.12 – Marginal Rates of Transformation
Tier 1
        LUG       HUG       GRAD      PUBS
LUG       –       2.3907    0.6291    0.1762
HUG     0.4183      –       0.2632    0.0737
GRAD    1.5896    3.8001      –       0.2801
PUBS    5.6740   13.5648    3.5696      –
Tier 2
        LUG       HUG       GRAD      PUBS
LUG       –       1.5205    0.7610    0.2508
HUG     0.6577      –       0.5005    0.1649
GRAD    1.3141    1.9981      –       0.3296
PUBS    3.9871    6.0625    3.0342      –
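In DEA, the marginal rate of transformation between two outputs is the ratio of their shadow prices (the output weights), so Table 4.12 can be approximately reconstructed from the mean Tier 1 weights in Table 4.11. The match is inexact because Table 4.12 averages institution-level ratios rather than taking ratios of the averages:

```python
# Mean Tier 1 output weights from Table 4.11 (thousands)
weights = {'LUG': 0.0447, 'HUG': 0.0187, 'GRAD': 0.0711, 'PUBS': 0.2538}

def mrt(forgone, gained):
    """Approximate units of `gained` obtainable by forgoing one unit of `forgone`."""
    return weights[forgone] / weights[gained]

print(round(mrt('PUBS', 'LUG'), 2))   # vs 5.6740 in Table 4.12
print(round(mrt('PUBS', 'HUG'), 2))   # vs 13.5648
print(round(mrt('GRAD', 'LUG'), 2))   # vs 1.5896
```

Each reconstructed rate lands within a few hundredths of the tabled figure, confirming the weight-ratio interpretation.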
Table 4.13 shows the calculated marginal productivities of inputs to select outputs
for both tiers. On average, the figures indicate that for institutions in Tier 1 (Tier 2)
adding another research assistant should permit 1.96 (2.03) additional publications to be
produced. Employing an additional teaching assistant should allow universities, on
average, to educate 48.88 (28.67) lower-level undergraduates. Summing the numbers for
the three education outputs shows the marginal productivity of faculty to education to be
67.14 (53.26) additional students. For research publications, the marginal productivity is
2.94 (4.07) publications.
Table 4.13 – Marginal Productivity of Inputs to Outputs by Tier
Tier 1
        Faculty     RA       TA
LUG      16.70              48.88
HUG      39.93
GRAD     10.51
PUBS      2.94     1.96
Tier 2
        Faculty     RA       TA
LUG      16.23              28.67
HUG      24.68
GRAD     12.35
PUBS      4.07     2.03
The calculated RTS between research assistants and faculty in the production of
research publications is -1.50 (2.94/1.96) and the RTS between teaching assistants and
faculty is -1.37 (67.14/48.88). For Tier 2, the RTS between research assistants and
faculty is -2.00 and the RTS between teaching assistants and faculty is -1.86. These
figures suggest that, on average, an institution reducing its faculty by one would have to
increase its number of teaching and research assistants by approximately 1.4 to 1.5 for
Tier 1 institutions and approximately 1.9 to 2 for Tier 2 institutions.
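The arithmetic behind these figures is a ratio of marginal productivities from Table 4.13, with faculty productivity in education summed across the three student outputs:

```python
# Tier 1 marginal productivities from Table 4.13
mp_faculty = {'LUG': 16.70, 'HUG': 39.93, 'GRAD': 10.51, 'PUBS': 2.94}
mp_ra_pubs = 1.96      # research assistants -> publications
mp_ta_lug = 48.88      # teaching assistants -> lower-level undergraduates

# Faculty productivity in "education" overall (additive separability assumed)
mp_faculty_educ = mp_faculty['LUG'] + mp_faculty['HUG'] + mp_faculty['GRAD']

rts_ra = -(mp_faculty['PUBS'] / mp_ra_pubs)   # faculty vs. research assistants
rts_ta = -(mp_faculty_educ / mp_ta_lug)       # faculty vs. teaching assistants
print(round(mp_faculty_educ, 2), round(rts_ra, 2), round(rts_ta, 2))
# 67.14 -1.5 -1.37
```

The same computation on the Tier 2 column of Table 4.13 yields the -2.00 and -1.86 values cited above.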
Institutional Characteristics for the Allocative Efficiency Model
For the allocative efficiency model it was necessary to redefine the sample.
Testing for allocative efficiency requires a third vector of data for analysis: relative input
prices. This information was obtained from the University of Nebraska at Lincoln's
National Survey of Graduate Assistant Stipends, Graduate Fellowships, and Postdoctoral
Fellowships (NSGAS).
For the year of interest 103 institutions participated in the survey. Considering
only those institutions also employed in this analysis reduced the applicable number to
58. The NRC rankings for this sample of institutions varied from a high of 4.06
(University of Illinois at Urbana-Champaign) to a low of .61 (Middle Tennessee State
University). For the same reasons that the institutions were sorted into quality tiers in the
earlier analysis, the wide variation in quality did not permit using all 58 universities. As
the majority of institutions either were originally from Tier 2 or from the lowest portion
of Tier 1, in the interest of group homogeneity only those institutions with NRC values
below 3.10 were included. This further reduced the number of institutions to 44. Finally,
because only 9 institutions were privates and it would not be possible with such a small
number to draw statistical comparisons, only the 35 publics were used in the analysis.
The distribution of these institutions across Carnegie classifications is as follows: (15)
Research I, (9) Research II, (6) Doctoral I, and (5) Doctoral II.
The salary information for faculty members comes from IPEDS' STFB survey
(see chapter 3) and is disaggregated by faculty rank. The input-price measure was
calculated by first weighting the average faculty salary for each rank by the number of
faculty members in that rank. These 6 values were then averaged to obtain each
institution's nominal faculty input-price. Each institution's average faculty salary was
then weighted by a cost-of-living index from the Internet site www.Homefair.com to
obtain the relative price measure.
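The two weighting steps can be sketched as follows. All figures below (rank salaries, counts, and the cost-of-living index) are invented for illustration, the six ranks stand in for the IPEDS rank categories, and deflating by the index is one reading of "weighted by a cost-of-living index":

```python
# Illustrative construction of the relative faculty input-price.
# Salaries, counts, and the cost-of-living index are hypothetical.
ranks = {                    # rank: (average salary, number of faculty)
    'professor':  (78000, 300),
    'associate':  (58000, 250),
    'assistant':  (48000, 200),
    'instructor': (38000, 60),
    'lecturer':   (36000, 40),
    'no rank':    (40000, 10),
}

# Count-weighted average salary across the six ranks (nominal input-price)
total_faculty = sum(n for _, n in ranks.values())
nominal_price = sum(s * n for s, n in ranks.values()) / total_faculty

# Deflate by the location's cost-of-living index (1.00 = national average)
col_index = 0.95
relative_price = nominal_price / col_index
print(round(nominal_price), round(relative_price))
```

Deflating in this way prevents institutions in expensive locations from appearing to pay an input-price premium that merely reflects local living costs.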
To compute the input-prices for institutions� teaching and research assistants,
average stipend levels (for both research and teaching assistants) were calculated based
on the number of departments reporting figures in the NSGAS survey. For some
institutions only 5 or 6 departments may have reported these values while for others as
many as 50 or 60 departments may have participated. These average stipend levels were
then weighted by the same cost-of-living index used for the faculty input-price to obtain
relative input-price measures. The physical input and output measures are the same as
those used in the technical efficiency model. The summary statistics for the inputs, input-
prices and outputs are presented in Table 4.14.
Comparing Table 4.14 to Table 4.4 (Tier 2 summary statistics), the institutions here are
more representative of the high end of the Tier 2 input distributions. On average they use
approximately 25% more faculty, 50% more research assistants, and about 40% more
teaching assistants relative to the whole of Tier 2. For each measure the standard
deviation is slightly higher for this sample. In terms of outputs, on average, the
institutions here produce about double the number of publications and educate about
2,100 more lower-level undergraduates, 2,000 more upper-level undergraduates, and 600
more graduate students. While the variability in publications here is greater than that in
Tier 2 overall, for all three education outputs the variability in this sample is less.
Table 4.14 – Summary Statistics of Input/Output Measures for AE Model

35 Public Institutions

                               Mean     Max     Min      SD
Inputs:
  Faculty                       854    1658     206     302
  Research Assistants           603    1693       7     427
  Teaching Assistants           520    1102       5     263
Input-Prices:
  Faculty                     51792   69300   39893    6696
  Research Assistants          8002   13159    4812    1701
  Teaching Assistants          7460   12013    5191    1426
Outputs:
  Publications                  516    1201      36     324
  Lower-Level Undergraduates   6725   12639     658    2273
  Upper-Level Undergraduates   6860   12443    2249    2297
  Graduate Students            2765    5314     738    1296
Allocative Efficiency DEA Results
In the previous section several non-parametric statistical tests were applied to
determine whether the mean efficiency scores tended to differ between the two quality
tiers or whether institutional characteristics potentially distorted the results. In the
allocative efficiency model this was not a viable approach. The efficiency scores
reported below are only for a single sample of institutions, hence no comparison group
exists. In the same respect, testing whether efficiency scores differ by form of control or
by medical facilities also is not applicable since the sample only focuses on public
institutions and only 9 institutions here have medical facilities.
Table 4.15 reports the overall efficiency score for each institution and the
decomposition of that score into its technical and allocative components. Because the
overall efficiency measure is computed under the assumption of constant returns,34 it was
necessary that the technical efficiency scores be computed in the same fashion.
Based on the results in Table 4.15, the institutions in this sample, on average, are
operating at approximately 77% of total efficiency (76.9%). The lowest reported score is
for the University of Georgia (54.5%) and four institutions in the sample received 100%
efficiency scores: Middle Tennessee State University, the University of South Florida,
the University of Texas at Dallas and the University of Toledo.
34 The software program DEAP 1.1 does allow the researcher to compute a variable returns to scale cost-efficiency model. Unfortunately, it does not generate the input or output weights when generating results.
Table 4.15 – Disaggregated Overall Efficiency Scores for Public Institutions

                                          Overall     Technical   Allocative
                                          Efficiency  Efficiency  Efficiency
Colorado State University                   0.8244      0.9051      0.9109
Florida State University                    0.8884      0.9631      0.9224
Idaho State University                      0.6683      0.9396      0.7112
Iowa State University                       0.7060      0.8108      0.8707
Kansas State University                     0.7870      0.8556      0.9199
Middle Tennessee State University           1.0000      1.0000      1.0000
Montana State University - Bozeman          0.8544      0.8946      0.9551
North Carolina State U. at Raleigh          0.6762      1.0000      0.6762
Northern Illinois University                0.7929      0.8419      0.9418
Oregon State University                     0.7328      0.8581      0.8540
SUNY at Binghamton                          0.8566      0.9526      0.8993
SUNY at Buffalo                             0.7303      0.9173      0.7961
Temple University                           0.7218      1.0000      0.7218
University of Alabama                       0.7564      0.7780      0.9723
University of Arkansas                      0.5536      0.6117      0.9049
University of Cincinnati                    0.7666      0.9901      0.7743
University of Delaware                      0.6724      0.8357      0.8046
University of Georgia                       0.5452      0.6136      0.8885
University of Hawaii at Manoa               0.6560      0.9136      0.7180
University of Iowa                          0.6884      0.8211      0.8385
University of Louisville                    0.7963      0.9845      0.8088
University of Missouri, Columbia            0.7560      0.8654      0.8736
University of Nebraska at Lincoln           0.7064      0.7672      0.9206
University of New Hampshire                 0.7648      0.7718      0.9910
University of N. Carolina at Greensboro     0.6704      1.0000      0.6704
University of North Dakota                  0.8522      0.9012      0.9456
University of Oregon                        0.7749      0.8120      0.9543
University of S. Carolina at Columbia       0.7087      0.9603      0.7380
University of South Florida                 1.0000      1.0000      1.0000
University of Tennessee at Knoxville        0.6011      0.7585      0.7924
University of Texas at Dallas               1.0000      1.0000      1.0000
University of Toledo                        1.0000      1.0000      1.0000
University of Wisconsin-Milwaukee           0.7957      1.0000      0.7957
Utah State University                       0.8849      1.0000      0.8849
Washington State University                 0.7342      0.7457      0.9845

Average                                     0.7692      0.8877      0.8697
S.D.                                        0.1174      0.1091      0.1005
While it is possible to determine scale inefficiencies in allocative, or cost-efficiency, models, it is not possible here under the constant returns assumption.
When total efficiency is disaggregated into its technical and allocative
components the results indicate that, on average, approximately half of the overall
inefficiency was due to institutions being technically inefficient and half due to
allocative inefficiency. The lowest score in the technical efficiency column is for the
University of Arkansas (61.2%) whereas 9 of the 35 institutions, or 26% of the entire
sample, had technical efficiency scores of 100%.
Thus roughly one out of every four institutions in the sample (relatively) minimized the
use of its academic labor inputs; five of these nine, however, did not pick the "right"
input-minimizing mix given the relative input prices. By construction, the four totally
efficient institutions are classified as allocatively efficient. The lowest score in the
allocative efficiency column is 67.04% for the University of North Carolina at
Greensboro.
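This decomposition follows the standard Farrell identity: overall (cost) efficiency is the product of technical and allocative efficiency, so any one component can be backed out of the other two. A quick check against three rows of Table 4.15:

```python
# Overall and technical efficiency scores from Table 4.15
scores = {
    'Colorado State University': (0.8244, 0.9051),
    'University of Arkansas':    (0.5536, 0.6117),
    'Temple University':         (0.7218, 1.0000),
}

for name, (overall, technical) in scores.items():
    allocative = overall / technical      # Farrell: OE = TE x AE
    # each quotient matches the tabled allocative score up to rounding
    print(f'{name}: AE = {allocative:.4f}')
```

Temple University illustrates the distinction cleanly: a 100% technical score with a 72.18% overall score means all of its inefficiency is allocative, i.e., a cost-inefficient mix of otherwise fully utilized inputs.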
Table 4.16 shows the marginal rates of transformation between outputs and the
relevant marginal productivities for the technically efficient institutions in the sample.
Considering the top table first, the computed values suggest that, on average, by foregoing
production of one publication, it would be possible to produce 21 lower-level
undergraduates, 40 higher-level undergraduates, and 1.61 graduate students. In terms of
graduate students, foregoing one unit of graduate education would permit an additional
13 lower-level undergraduates, 25 higher-level undergraduates and .62 publications.
In the bottom table, the figures show that by adding one research assistant, institutions
should, on average, be able to produce 2.28 additional publications. Adding an
additional teaching assistant would allow institutions, on average, to educate 113
additional lower-level undergraduates. Summing the numbers for the three education
outputs shows the marginal productivity of faculty to be 217 additional students. For
research publications, the marginal productivity is 3.44 publications.
Table 4.16 – Rates of Transformation and Marginal Productivity for Public Sample
Marginal Rates of Transformation
        LUG        HUG        GRAD      PUBS
LUG       –        1.9129     0.0760    0.0474
HUG     0.5228       –        0.0397    0.0248
GRAD   13.1534    25.1619       –       0.6230
PUBS   21.1127    40.3875     1.6051      –
Marginal Productivities
        Faculty     RA       TA
LUG       72.66             113.28
HUG      138.99
GRAD       5.52
PUBS       3.44     2.28
Calculating the RTS between research assistants and faculty in the same way that
was done in the technical efficiency section indicates an RTS of -1.5 (3.44/2.28). The
calculated RTS between teaching assistants and faculty is -1.92 (217.17/113.28). These
figures suggest that, on average, if an institution were to reduce the number of faculty by
one they would have to increase their number of teaching and research assistants by
approximately 1.9 and 1.5 respectively.
Summary
This study poses the question, “in terms of academic labor inputs, to what extent
are research-intensive universities technically and allocatively efficient in the joint
production of education and research?” The results of the variable returns to scale
technical efficiency analyses indicate that, on average, Tier 1 (Tier 2) institutions are
approximately 93% (86%) efficient at allocating their academic labor compared to the
“best practice” institutions in the respective tier groups.
As institutions are grouped to account for variations in output and input quality,
the statistically different mean efficiency scores between the groups suggest that
“high-quality” institutions, on average, are likely to be operating closer to best practice
universities than other institutions. When the distributions of efficiency scores for the
two tiers are compared to each other, the difference between tiers is shown to be
statistically different. Tier 1 institutions are more likely to be classified as technically
efficient or clustered near the constructed efficient frontier. Two out of every three
institutions in the higher quality tier are at least 90% technically efficient compared to
just less than half of Tier 2 institutions, and more than 4 of every 5 Tier 1 institutions are
at least 80% efficient compared to 2 of every 3 institutions in Tier 2.
Form of institutional control (public or private) is shown not to affect technical
efficiency scores. For both groups, public institutions are just as likely to construct the
efficient frontier as privates. However, when the presence of medical facilities is
considered, technical efficiency scores are shown to differ for Tier 1. Institutions without
medical facilities are almost twice as likely to be considered technically efficient as
those with.
In terms of scale efficiencies, institutions in Tier 1 (Tier 2) are shown to be, on
average, 95% (91%) efficient at operating on a scale consistent with long-term
equilibrium. The difference between the mean scale efficiency scores for the two tiers is
also shown to be statistically significant. Compared to the most scale efficient
institutions in each quality group, high quality institutions are more likely to operate on a
scale consistent with long-run equilibrium than other institutions. Considering the nature
of scale inefficiencies, there are no statistically significant differences between tiers and
in both groups more institutions are likely to be operating at increasing returns to scale
than decreasing.
When form of institutional control is accounted for, the results suggest that
publics are just as likely to be regarded as scale efficient as privates in both tiers.
However, when the presence of medical facilities is considered, scale efficiency scores
are shown to differ for Tier 2. Only one medical institution in Tier 2 was classified as
scale efficient (3% of all medical institutions) versus 16 without (20% of all nonmedical
institutions). Moreover, institutions with medical facilities are shown to be more likely
than those without to be the least scale efficient.
Considering efficiency as the joint product of technical and allocative components
for a sample of public institutions, the results show public institutions to be 77% efficient
overall. When this score is decomposed into technical and allocative efficiencies,
approximately half of the overall inefficiency is attributable to each component under the
assumption of constant returns to scale. The institutions in the sample are shown to be,
on average, 89% efficient in choosing an input-minimizing mix of academic labor, and
87% efficient at choosing an input bundle that is cost minimizing. When individual
institutions' scores are examined, 9 institutions were shown to be using a technically
efficient input-mix but of these, only four picked a bundle that also minimized academic
labor costs.
Chapter 5
“The difficulty lies, not in the new ideas, but in escaping the old ones, which ramify,
for those brought up as most of us have been, into every corner of our minds.”
- John Maynard Keynes
Initial Caveats
It is beneficial at this point to reiterate the implications of assessing relative and
not absolute efficiency before proceeding with a discussion of the results in chapter 4.
What is presented in this study are not measures of absolute efficiency. While it would
be desirable to obtain such a measure, it is clear from the discussion in chapter 2 of
DEA's merits as a tool for measuring efficiency that it is not effective for achieving such
a goal.
As such, it is not possible to directly compare the results obtained in this study to
past efforts because the institutions evaluated in each are different. This implies that it is
also not possible to directly compare the efficiency scores for the two tiers in the
technical efficiency part of this study either. The result showing Tier 1 to have a higher
mean technical efficiency score than Tier 2 does not imply Tier 1 institutions are more
efficient than Tier 2 institutions. It means that Tier 1 institutions are more likely to be
clustered nearer to the Tier 1 (relative) frontier than Tier 2 institutions are to the Tier 2
(relative) frontier.
It is also important to recognize that because it was not possible to include all
institutions from the population in the analysis, even the relative efficiency scores are
valid only if the omitted institutions would not have formed part of the constructed
frontier had they been included. An illustrative example of this can be found in the
results presented in chapter 4. In the allocative efficiency analysis, the University of
South Florida received a CRS technical efficiency rating of 100%. Yet when the same
scores were calculated for Tier 2, a larger group that included those institutions in the
allocative efficiency analysis, the university only received an 89% rating. Hence, by not
considering institutions that would have constructed part of the relatively efficient
frontier, the results from the allocative efficiency DEA analysis lead to the erroneous
conclusion that the University of South Florida is 100% technically efficient.35
What this means is that even those institutions in the largest sample that are
regarded as technically efficient may in fact all be inefficient. It could be the case that
there simply are no higher education institutions that are absolutely efficient. Moreover,
this observation can be extended to say that even if suitable data were available to
analyze the entire population, it is not possible to determine whether the most efficient
institutions are operating at absolute minimum cost or with absolute minimum input
usage. It may be that one or several institutions are absolutely efficient; unfortunately it
is not possible to derive this and thus would be incorrect to assume so.
35 In the extreme case, if one of the omitted institutions was more efficient than every other institution at using all of its inputs, the constructed frontier would degenerate to a free disposal frontier resembling a Leontief technology. In this case all of the institutions previously regarded as technically efficient would now be considered technically inefficient.
DEA as an approach to measuring efficiency
The main benefit that comes with using DEA here is that the inefficiencies it
identifies are directly observable. Provided that the inputs selected are employed in the
production of the outputs specified, DEA provides a straightforward, relatively
restriction-free way to determine how efficiently those inputs are used. While the
measures produced are relative to the set of DMUs being evaluated, being able to account
for this “known inefficiency” has several attractive uses, which are discussed below.
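For concreteness, the input-oriented, constant-returns DEA score behind these measures can be written as a small linear program: shrink a DMU's inputs radially by a factor θ while requiring a nonnegative combination of peer DMUs to still produce at least its outputs. A minimal sketch with SciPy, run on toy data rather than the study's sample:

```python
import numpy as np
from scipy.optimize import linprog

def dea_crs_input(X, Y, o):
    """Input-oriented CRS technical efficiency of DMU o.
    X is (inputs x DMUs), Y is (outputs x DMUs); columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta over (theta, lambda)
    A_in = np.c_[-X[:, [o]], X]                 # X @ lam <= theta * x_o
    A_out = np.c_[np.zeros((s, 1)), -Y]         # Y @ lam >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Toy sample: one input (e.g. faculty), one output (e.g. publications), three DMUs
X = np.array([[2.0, 4.0, 5.0]])
Y = np.array([[2.0, 2.0, 5.0]])
print([round(float(dea_crs_input(X, Y, j)), 4) for j in range(3)])   # [1.0, 0.5, 1.0]
```

The second DMU produces half the output per unit of input that its peers do, so it receives a score of 0.5: it could, in principle, produce its current outputs with half its inputs.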
First, it mitigates the problems associated with making incorrect assumptions in
an analysis. In chapter 2, stochastic frontier analysis (SFA) was discussed as an
alternative method for assessing technical efficiency. Unlike DEA, SFA is a parametric
approach where the traditional residual term is disaggregated into two parts: a normally
distributed residual and an additional one-sided term representing technical inefficiency.
The fundamental assumption behind SFA is that this latter term is non-negative and
generally follows a half-normal distribution.
For the technical efficiency scores computed in SFA to be reliable, the
assumption about the distribution of the efficiency residual must hold. As the results
from Table 4.6 show, in this study the efficiency scores for Tier 2 did not follow a half-
normal distribution. Had the same analysis conducted here been done instead using SFA,
the assumption about the distribution of the efficiency scores most likely would have led
to different, and arguably incorrect, results. As the DEA analysis derives the inefficiency
directly from the data, it eliminates the need to complicate the analysis with an additional,
and potentially incorrect, assumption.
Second, the ability to remove “known inefficiency” can provide more reliable
estimates of frequently derived production concepts. For example, the traditional
approach to obtaining estimates of the marginal rate of technical substitution (RTS)
between inputs requires evaluating the derivative of an empirically estimated production
function at a particular value. However, the usefulness of this estimate is suspect for
cases like higher education because the RTS at any point will be influenced to some
extent by which functional form the researcher specifies a priori. Moreover, even if
higher education institutions are output maximizers and engage in efficient production
practices, empirical analyses of production will not reveal “textbook” efficiency.
External influences acting on the university, such as imposed state regulations, or
universities being in a state of adjustment, are likely to surface in an empirical analysis
as inefficiency.
As such, the computed RTS values will be biased by inefficiencies in the data.
These restrictions are relaxed in a DEA specification. Because no functional form
is imposed in advance, the isoquant constructed from the data allows researchers to factor
out known inefficiencies. The implication of this is that the RTS estimates from a DEA
analysis are less likely to be distorted by including inefficient institutions in the sample.
This particular approach was applied in this study. The RTS values in chapter 4 were not
computed based on all institutions in each tier, but only from those institutions
comprising part of the constructed frontier.
Third, as the results from this study demonstrate, while DEA may not be able to
determine absolute efficiency it is very useful for identifying patterns of inefficiency. For
example, it is possible to perform a DEA analysis and then, through contingency analysis,
determine whether efficiency scores are likely to differ depending on qualitative
differences like form of control or the presence of medical facilities. Alternatively, by
sorting the institutions into groups based on some criterion, it is also possible to
determine whether the mean values and distribution of the efficiency scores are
statistically different from each other. This latter approach proved useful to
demonstrating how efficiency scores differ when institutions are sorted by overall quality.
Finally, while DEA as a technique may be regarded as atheoretical, this does not
imply that the study in which it is framed is as well. The researcher determines the
relevant inputs
and outputs as well as how they should be incorporated into an analysis. This study
demonstrates how a plausible approach can be developed to explain why inefficiencies
emerge and then empirically test, using a relatively restriction-free technique like DEA,
whether the data are consistent with such a rationale.
Discussion
Overall, the results from this study are consistent with the model of university
behavior outlined in the framework for analysis section of chapter 2. There it was argued
that competition in the input markets for high quality students should act as an incentive
to universities employing those inputs to engage in more efficient production behavior.
With regards to technical efficiency, Tier 1 institutions are shown to be, on average, just
under 93% efficient and Tier 2 institutions approximately 86%. The statistically
significant difference between the mean technical efficiency scores supports that claim
and finds further support in the distributions of scores, which were also shown to be
statistically different between tiers. Whereas 67% of Tier 1 efficiency scores fell
between 90% and 100%, over half (56%) of the efficiency scores for Tier 2 were below
90%. Less variability and a higher mean efficiency score for Tier 1 both support the
notion that competition for high quality inputs acts as an incentive to use those inputs
more efficiently in the face of scarce resources.
Additional evidence can be found by looking only at those institutions the DEA
analysis regarded to be technically efficient. If there is a greater degree of substitutability
between high quality graduate students and faculty members, it would be evident in lower
marginal rates of technical substitution (RTS) between inputs relative to those computed
for Tier 2. An RTS value of -1 between inputs indicates they are perfect substitutes at
that point on the isoquant. The further the RTS deviates from -1 (in either direction),
the more of one input that would have to be added if the amount of the other input were
reduced by one unit.
If one assumes, at the margin, that the production of education is additively
separable,36 then summing the marginal productivities of faculty in the production of all
three education outputs provides some measure of the marginal productivity of faculty to
the production of the more general output education. These summary RTS measures
were calculated in chapter 4. For Tier 1, the mean RTS between faculty and research
assistants was -1.50 and between faculty and teaching assistants it was -1.37. For Tier 2,
the values were -2.00 and -1.86 respectively. In both cases, the mean RTS measures for
36 Additive separability implies that the marginal productivity in producing one of the outputs does not change when increasing or decreasing the amount produced of another output. A case can be made that adding a single student to an undergraduate course is not likely to influence the marginal productivity of faculty to the production of other education outputs.
Tier 1 are closer to -1 than those for Tier 2, indicating both teaching and research assistants
at institutions using higher quality inputs are more likely to be regarded as substitutes for
faculty labor relative to other institutions. This result, however, has to be considered in
light of the fact that DEA does not permit statistical testing of the results; hence it is not
possible to say with any statistical certainty that the differences between the RTS values
are different from zero. Moreover, this relationship is dependent on whether the additive
separability assumption holds. Nonetheless, the general finding regarding quality
differences and efficiency is important because it offers empirical support linking the two
ideas. This relationship has not been addressed or estimated in prior studies of higher
education efficiency and thus represents an important result.
A final observation that can be drawn from the technical efficiency results
concerns the extent to which institutions are efficient in the joint production of education
and research. In the discussion from chapter 2 on the economics of technical efficiency,
it was shown that technical efficiency in a two-output case requires the RTS values
between inputs be the same in the production of all outputs. There is partial evidence to
support this based on the calculated RTS values for the institutions found to be relatively
efficient in each tier. For both tiers the calculated RTS values are reasonably close to
each other (-1.50 and -1.37 for Tier 1; -2.00 and -1.86 for Tier 2). However, in the
absence of statistical tests to determine whether the difference is significant, it is not
possible to draw further conclusions.
The findings from the scale efficiency analyses provide additional evidence to
support other researchers' findings with regard to how competitive forces shape higher
education institutions. Geiger and Feller (1995) for example show that the historically
pre-eminent research universities were not able to maintain their share of total academic
research expenditures during the 1980s. One reason they suggest is that the
characteristics making them prestigious37 limited their ability to expand during a decade
marked by increased overall funding for research. These institutions had already
maximized much of their productive capacity. As evidence, they present figures showing
that institutions experiencing the greatest gains in research share were those which
expanded their number of full professors by more than the average while those
experiencing the greatest loss expanded by less.
This pattern is also evident in the results presented here. Institutions that have
already maximized their productive capacity should, at the least, not be operating at
increasing returns to scale. Of the 14 institutions in Tier 1 that were also among the
greatest losers of research share in the Geiger and Feller study, only two are shown to be
operating at increasing returns. In contrast, institutions that have room to grow should
either be operating at increasing returns or be at constant returns, given that there is a
three-year gap between the last year in their data and that used here. Only two of the
institutions among those exhibiting the greatest gains in research share are shown to be
operating at decreasing returns. 38
The finding that form of control does not affect relative technical or scale
efficiency scores contradicts the results presented by Ahn, Charnes, and Cooper (1988),
who found public doctoral-granting institutions were more likely than privates to be
technically efficient. Other findings from this study, however, provide evidence to help
37 The authors define the most prestigious institutions as being "characterized by low turnover, a relatively large number of senior faculty, and a high percentage of full professors" (p. 22).
38 Assuming institutions with scale efficiency scores above 99% are also operating at constant returns to scale. If not, the numbers change to 3 and 4 institutions respectively.
explain the lack of any statistically significant relationship and, in doing so, shed light on
an important consideration in empirical studies of costs and production in higher
education. Specifically, the DEA analysis reveals how similar peer institutions' input
mixes are, which is consistent with some researchers' suggestions (Winston, 1999; Frank
and Cook, 1995) that higher education institutions tend to emulate those institutions they
perceive to be their direct competitors.
When an institution is found to be technically inefficient, radial measures of
efficiency computed by DEA software programs consider how much of a proportional
contraction in all of the inputs must occur in order to make that institution efficient. In
the course of doing this, these programs all provide information on the inefficient
institution's "reference" DMUs: the group of institutions making up the part of the
constructed frontier onto which the inefficient institution is projected. In the DEA
literature the reference group for an inefficient institution is regarded as those DMUs the
inefficient one should try to "emulate" in order to be efficient. The output also provides
weights for the reference units, indicating how closely the inefficient unit's input mix,
when projected onto the frontier, would resemble each institution in the reference group. Table 5.1 lists a
select group of technically inefficient universities and their respective reference groups.
The parenthetical values represent the reference institution weights. Using Columbia as
an example, the results show that if it were to radially reduce all of its academic labor
inputs to the point that it would lie on the constructed frontier, its input mix would look
like an amalgamation of Harvard, the University of California at Berkeley, Yale, and
USC, though it would most resemble Harvard.
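The radial measure and reference-weight output described above can be illustrated with a small linear program. The sketch below is not the software used in this study; it assumes a constant-returns (CCR), input-oriented DEA formulation solved with SciPy, and the function name and data are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, j0):
    """Input-oriented, constant-returns (CCR) DEA efficiency for DMU j0.

    X: inputs, shape (m, n) -- m inputs for each of n DMUs.
    Y: outputs, shape (s, n) -- s outputs for each of n DMUs.
    Returns (theta, lam): the radial efficiency score and the
    reference weights lambda (positive entries identify benchmarks).
    """
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints:  sum_j lambda_j * x_ij <= theta * x_i,j0
    A_inputs = np.hstack([-X[:, [j0]], X])
    # Output constraints: sum_j lambda_j * y_rj >= y_r,j0  (negated for <=)
    A_outputs = np.hstack([np.zeros((s, 1)), -Y])
    res = linprog(c,
                  A_ub=np.vstack([A_inputs, A_outputs]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0], res.x[1:]

# Hypothetical data: one input, one output, two DMUs.
X = np.array([[2.0, 4.0]])   # DMU 1 uses twice the input of DMU 0
Y = np.array([[2.0, 2.0]])   # for the same output
theta, lam = dea_input_oriented(X, Y, 1)
# theta = 0.5: DMU 1 must contract its input by half to reach the
# frontier, and its entire benchmark weight falls on DMU 0.
```

For a frontier DMU the program returns theta = 1 and the unit serves as its own benchmark; for an inefficient DMU the positive lambda entries identify its reference group, analogous to the weights reported in Table 5.1 (a variable-returns analysis would additionally impose the constraint that the lambdas sum to one).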
Table 5.1 – Benchmarks for Select Technically Inefficient Institutions

Northwestern: Arizona State (.029), U. Chicago (.032), Stanford (.317), NYU (.243), Rensselaer (.378)
Columbia: Harvard (.711), UC Berkeley (.037), Yale (.123), USC (.129), NYU (.005)
UNC at Chapel Hill: UCSB39 (.158), Colorado (.225), Arizona State (.168), Stanford (.191), USC (.110), Emory (.144)
Iowa State: Indiana (.055), Texas A&M (.064), UCSB (.367), Colorado (.478), NYU (.036)
UVA: UCSB (.327), U. Penn (.349), NYU (.195), Rensselaer (.128)
U. Illinois at UC: Wisconsin (.554), UT Austin (.153), Purdue (.142), Ohio State (.151)
What is most striking about Table 5.1 is the extent to which the inefficient
institutions' reference groups resemble the institutions these universities
actually consider their peers or competitors. Northwestern's and Columbia's reference
groups are both primarily composed of other elite privates. In contrast, the reference
group for the University of Illinois is composed of other very large public institutions
which, with the exception of the University of Texas at Austin, are fellow "Big Ten"
39 University of California at Santa Barbara
institutions40 from the Midwest. Another interesting result is the group of benchmark
institutions for the University of Virginia. Popularly known as a "private" public
institution, UVA has a radially contracted input mix that, the results suggest, would
more closely resemble the upper echelon of private institutions.
These findings are important for two reasons. First, they provide empirical
evidence to support the idea that universities behave like those they see as their direct
competitors. Columbia's reference group, for example, consists strictly of other elite
private universities, and it is well established that these institutions all compete for
similar students and faculty. Second, if institutions behave similarly to their competitors, then
separating universities into publics and privates as is done in most cost and production
studies of higher education institutions masks an important underlying relationship. As
efficiency, production, and costs are all inter-related concepts, results like those presented
in Table 5.1 raise questions over the traditional practice of dividing institutions in
empirical analyses by form of control. Had universities like UVA or the University of
North Carolina at Chapel Hill only been compared to other publics, it would not have
been possible to consider that, in practice, they behave to some extent like privates.
This is not to say that privates should never be compared only to other
privates, or publics only to other publics. For research questions focusing specifically on state policies or on
tuition practices, for example, there are numerous reasons why form of control should be
taken into account. What is being argued here is that studies of higher education
efficiency, production, and costs in general need to consider the possibility that publics
and privates do not employ different technologies in the joint production of
40 The "Big Ten" is actually the name of the athletic conference in which these institutions participate. However, these institutions also loosely operate what is called the "Academic Big Ten" and frequently benchmark
education and research and that disaggregating institutions in this way overlooks key
behavioral considerations.
The results regarding the presence or absence of medical facilities are difficult to
interpret. Every effort was made to factor medical aspects out of the data and earlier it
was argued that differences emerging would be related to the link these departments and
schools have with traditional departments in the physical and biological sciences. The
statistically significant results showing that Tier 1 universities with medical schools were
less likely to construct part of the relatively efficient frontier do support this idea. The
findings are also consistent with the idea that medical research may exert a positive
influence on the research productivity of these traditional departments. As such, when
joint efforts between biological or physical sciences and university medical schools are
factored out of the analysis the finding that these institutions are likely to be less efficient
suggests Tier 1 research productivity in the "medically-related" disciplines is strongly
tied to the collaborative relationships they have with medical research.
Using the same logic, the absence of such a finding for Tier 2 would suggest
significantly less collaboration between medical schools and the traditional departments
described above. While the influence of medical schools on Tier 2 does not emerge in
the technical efficiency scores, it does in scale efficiency. The finding that institutions
with medical schools in Tier 2 were less likely to be regarded as scale efficient would
seem to indicate that the presence of medical schools is likely to increase productivity,
not because of collaborative efforts but because they increase the overall amount of
research produced. When factored out of the analysis, the finding that these institutions
many aspects of their performance to other Big Ten schools.
with medical schools are less likely to be scale efficient suggests a significant portion of
research is being produced in only one part of the university.
It is important to pause here and consider how the data limitations addressed in
chapter 3 may affect the results and conclusions presented so far. From the summary
statistics provided for the two tiers in chapter 4, Tier 1 institutions were shown to use
significantly more labor relative to Tier 2. Had technical staff been included, this would
have likely lowered some institutions' technical and scale efficiency scores.41 While not
possible to state with certainty, it is likely that this would have had a greater effect on the
institutions in Tier 1. In general, the physical and biological science departments at these
institutions are larger and hence would be more apt to rely on using larger numbers of
technical staff. For similar reasons, the fact that the faculty headcounts do not include
full-time researchers or those having more than 50% of their time devoted to research is
likely to have the same effect. If significant, this could lower the mean technical and
scale efficiency scores for Tier 1 to the point that there would be no statistically
significant difference between the tiers or, at the extreme, reverse the results. Finally, the
decision to sort institutions into two groups based on their mean MSQPF scores raises
concerns that the groups themselves may not be properly specified, thus distorting the
results. A strong case can be made that the small difference between the lower-scoring
institutions from Tier 1 and the higher-scoring institutions in Tier 2 would warrant
examining these institutions as one group.
In terms of overall and allocative efficiency, the results show a sample of Tier 2
public institutions operating at 77% and 87%, respectively. The usefulness of these
41 Note, though, that if all institutions used relatively the same proportion of technical staff, the scores would remain unchanged.
figures is limited, though, as data limitations did not permit comparisons between groups
and the number of institutions analyzed was small, which increases the likelihood that the
frontier would be altered significantly by including more institutions (see University of
South Florida example earlier in the chapter). However, presenting the results does
highlight how useful this type of analysis can be for testing assertions regarding non-
profit behavior. Disaggregating inputs into their physical and cost components allows
researchers to consider whether institutions may be efficient at allocating physical inputs
even if cost efficiency is not sought or achieved. Higher education is an industry whose
institutions' goals differ considerably from those of the cost-conscious, profit-maximizing firms
guiding economic notions of efficiency. If universities are going to be productively
efficient, it will more likely emerge in the allocation of their physical, and not cost,
inputs. To not consider this in an empirical analysis is a serious omission, yet the
literature review conducted for this study revealed no instances where efficiency was
considered or tested in such a manner. In this respect, the results from the analysis show
how disaggregating inputs into physical and cost components makes it possible to
evaluate whether physical input allocations stemming from non-cost-related objectives
may conflict with cost-minimizing objectives.
An example of this can be found in the discussion of the non-profit literature from
chapter 1. One of the three assertions put forth about non-profit behavior was that non-
profits might possess preferences over using particular input combinations that would
result in higher cost curves. To empirically test an assertion such as this first requires
separating physical input usage from costs and evaluating whether technical efficiency is
being achieved but not cost efficiency. Focusing exclusively on one type of
efficiency (e.g. technical) does not permit one to test this type of hypothesis, nor does
using cost-based input measures.
In this study the findings indicate only about half of the overall inefficiency is
actually cost-related. These results are far from conclusive but suggest, at least for these
institutions, that cost containment may not be a primary goal. Moreover, if it were the
case that institutions were driven to efficiently minimize physical input usage instead,
then disaggregating overall efficiency should reveal cost inefficiency to be the main
source of overall inefficiency, which it does not.
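The arithmetic behind the "about half" observation follows from the standard Farrell decomposition, under which overall (cost) efficiency is the product of technical and allocative efficiency. The implied technical score below is a back-of-envelope inference from the figures reported earlier, not a number taken from the study's tables:

```python
# Farrell decomposition: overall (cost) efficiency = technical x allocative.
overall = 0.77        # overall efficiency reported for the Tier 2 sample
allocative = 0.87     # allocative efficiency reported for the same sample

technical = overall / allocative          # implied technical efficiency, ~0.885
technical_gap = 1 - technical             # inefficiency in physical input use, ~0.115
cost_gap = (1 - overall) - technical_gap  # remaining, cost-related share, ~0.115

# The two gaps are nearly equal: roughly half of the overall
# inefficiency is cost-related, as noted in the text.
print(round(technical, 3), round(technical_gap, 3), round(cost_gap, 3))
```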
There are several compelling reasons to cast doubt on the validity of these
results or on whether they can be generalized beyond the group being evaluated. The
number of institutions analyzed is comparatively small (35), and most are
doctoral-granting universities operating in what can be called regional markets. Had the data been
available, it would have been preferable to analyze cost behavior from institutions in Tier
1 instead. Those institutions are not only more likely to operate in competitive input and
output markets, but because they also tend to attract resources from a larger geographic
pool, regional differences would not factor as heavily into the analysis. From the
technical efficiency analysis between Tiers 1 and 2, it was shown how the distribution of
efficiency scores in the latter were more likely to be widely dispersed. Moreover,
because they are all public institutions and come from a variety of different states, no
accounting is made for the effects of different state regulations such as the performance-
based funding mechanisms in Tennessee and South Carolina. James (1990), for example,
notes that one of the main distinctions between public and private institutions is the
decrease in institutional autonomy publics face as a tradeoff to the stability of annual
state appropriations. Because state legislators have what James calls "monopsonistic
power [to] specify certain inputs and outputs that must be met" (p. 89), it could be that the
efficiency scores here reflect inefficient physical or cost-based allocations stemming from
state regulations.
While the actual results are severely limited in their descriptive ability, the
analysis itself demonstrates another instance where DEA has value as an analytical tool.
Specifically, the designs of prior higher education efficiency studies were not capable of
jointly estimating overall, technical, and allocative efficiency. In this respect, what was
done here represents a significant departure from past approaches and reveals how DEA
may be used for testing hypotheses regarding non-profit behavior.
Additional Considerations
Two important limitations in the analysis affect all aspects of this study. They
are presented together in this section rather than being raised repeatedly throughout. The first
has to do with the exclusion of a capital measure. It cannot be stressed enough that, as a
result, the findings from this study present only a partial view of higher education
efficiency. A truly comprehensive study of technical and allocative efficiency in higher
education institutions would require measures of capital be included. Developing a
comprehensive understanding requires not only considering the influence these other
variables will have on efficiency but also how the cross-effects, or interactions, between
the measures considered here and those omitted are likely to influence the results. It must be
recognized that what is presented here is a partial picture and that the conclusions derived
as a result may change considerably by including one or both of these omitted factors.
The analysis masks another important consideration recognized in other studies
(Ahn, et al, 1988): treating efficiency as a dynamic process. The static model employed
here only provides a "snapshot" of institutional efficiency without accounting for
dynamic shocks affecting individual or even whole groups of institutions (e.g. recently
enacted policy initiatives from different state governments). It does not consider whether
adjustments are being made or if there exists some stable efficiency "level" toward which
institutions tend to converge. It could be that many of the institutions shown to be
technically or allocatively inefficient are simply in a state of adjustment. While modeling
efficiency over time may reveal stable, possibly efficient operating points not evident in
static analyses, it is necessary to first establish a basis from which this can occur. One
goal of this study is to do just that.
Conclusion
This study provides a much needed empirical perspective on productive and cost
efficiency in higher education institutions. At the same time, what has been done here
only scratches the surface of a topic that has historically been neglected yet is critically
important to gaining a stronger theoretical understanding of production and cost behavior
in higher education institutions in general. While much research still needs to be done,
this study represents a solid step in the right direction.
The reason for this can be found in the development of an economic rationale for
why universities may strive to be efficient and how inefficiencies may arise in the
process. It is not enough to incorporate traditional measures of higher education inputs
and outputs into a DEA analysis and simply interpret the results. Understanding whether
efficiency is even a rational expectation of university production, much less what factors
may give rise to it, requires forethought about the underlying production process. By
taking such an approach in this study and outlining a plausible framework for analysis, it
was possible to do more than just identify relevant input and output measures and to
generate relative efficiency scores. Being able to corroborate the results with more
general observations on higher education production provides evidence that there is more
to gain by attempting to understand why inefficiency occurs rather than just measuring it.
In chapter 1, a gap was identified between perceptions of university efficiency
and empirical evidence to support such claims. What has been done here "closes" that
gap to some extent by providing empirical results that reveal how concepts like quality,
competition, form of control, and the presence of medical schools are associated with or
influence efficient behavior in research universities. To that end, though absolute
measures of higher education efficiency remain elusive, this study does succeed in
shedding new light on important relationships that arise and should be taken into account
when evaluating higher education efficiency.
References
Aczel, A.D. (1989). Complete business statistics. Boston, MA: R. D. Irwin, Inc.
Ahn, T., Arnold, V., Charnes, A., & Cooper, W.W. (1989). DEA and ratio
efficiency analyses for public institutions of higher learning in Texas. Research in
Governmental and Nonprofit Accounting. Vol. 5, 165-185.
Ahn, T., Charnes, A., & Cooper, W.W. (1988). Some statistical and DEA
evaluations of relative efficiencies of public and private institutions of higher learning.
Socio-economic Planning Sciences 22(6), 259-269.
Arizona Joint Legislative Budget Committee, Higher Education Research Section.
Arizona Faculty Workload Study: Findings and Policy Issues. Phoenix: Joint Legislative
Budget Committee, 1993.
Athanassopoulos, A.D., & Shale, E. (August, 1997). Assessing the Comparative
Efficiency of Higher Education Institutions in the UK by Means of Data Envelopment
Analysis. Education Economics 5(2), 117-134.
Avkiran, N.K. (2001). Investigating technical and scale efficiencies of Australian
universities through data envelopment analysis. Socio-Economic Planning Sciences 35.
57-80
Baldwin, R.G., & Chronister, J.L. (2001). Teaching without tenure. Baltimore,
MD: The Johns Hopkins University Press.
Banker, R.D. (1993). Maximum likelihood, consistency and data envelopment
analysis: A statistical foundation. Management Science 39(10). 1265-1273.
Banker, R.D., Charnes, A., Cooper, W.W., Swarts, J., & Thomas, D.A. (1989).
An introduction to data envelopment analysis with some of its models and their uses.
Research in Governmental and Nonprofit Accounting, 5, 125-163.
Banker, R.D., Charnes, A., & Cooper, W.W. (1984). Models for estimating
technical and scale efficiencies in data envelopment analysis. Management Science 30(9).
Beasley, J.E. (Apr., 1995). Determining teaching and research efficiencies. The
Journal of the Operational Research Society 46(4). 441-452.
Benjamin, E. (February 1998). Declining faculty availability to students is the
problem � but tenure is not the explanation. American Behavioral Scientist, 41(5). 716-
735.
Ben-Ner, A., & Gui, B. (1993). The nonprofit sector in the mixed economy.
Ann Arbor, MI: The University of Michigan Press.
Bifulco, R., & Bretschneider, S. (2001). Estimating school efficiency: A
comparison of methods using simulated data. Economics of Education Review, 20, 417-
429.
Bowen, H.R. (1980). The Costs of Higher Education. San Francisco:
Jossey-Bass.
Bowlin, W.F. (1998). Measuring performance: An introduction to data
envelopment analysis (DEA). The Journal of Cost Analysis (fall), 3-27.
Brewer, D.J. & Ehrenberg, R.G. (1996). Does it pay to attend an elite private
college: Cross cohort evidence on the effects of college quality on earnings. National
Bureau of Economic Research Working Paper Series. Number 5613.
Brinkman, P.T. (2000). The economics of higher education: Focus on cost. In
M.F. Middaugh (Ed.), Analyzing costs in higher education: What institutional
researchers need to know. New Directions for Institutional Research, 106, (Summer).
Brinkman, P.T. (1990). Higher education cost functions. In S.A.
Hoenack & E.I. Collins (Eds.), The economics of American universities. Buffalo, NY:
State University of New York Press.
Burd, S. (1997a). The finger pointing begins over the work of a federal panel on
college costs. The Chronicle of Higher Education, Dec 19.
Burd, S. (1997). Republican pressure leads to shift in study of higher education
costs. The Chronicle of Higher Education, Dec 12.
Byrd, L., Jr. (1994). Practical considerations and suggestions for measuring
faculty workload. New Directions for Institutional Research, 83, (Fall).
Byrnes, P., & Valdmanis, V. (1994). Analyzing technical and allocative
efficiency of hospitals. In A. Charnes, W.W. Cooper, A.Y. Lewin, & L.M. Seiford (Eds.),
Data envelopment analysis: Theory, methodology and applications. Norwell, MA:
Kluwer Academic Publishers Group.
Carnegie Foundation. (1994). A classification of institutions of higher education.
Technical report. Princeton, NJ: The Carnegie Foundation for the Advancement of
Teaching.
Carlson, D. (1975). Examining efficient joint production processes. In R. A.
Wallhaus (Ed.), Measuring and increasing academic productivity. New Directions for
Institutional Research, 8, (Winter).
Casper, G. (1996). Private letter from Gerhard Casper, president of Stanford
University, to James Fallows, editor of U.S. News & World Report. [On-line].
Available: www.stanford.edu/dept/pres-provost/president/speeches/961206gcfallow.html
Chamberlin, E.H. (1962). The theory of monopolistic competition (8th ed.).
Cambridge, MA: Harvard University Press.
Charnes, A., Cooper, W.W., Lewin, A.Y., & Seiford, L.M. (Eds.). (1994). Data
envelopment analysis: Theory, methodology and applications. Norwell, MA: Kluwer
Academic Publishers Group.
Charnes, A., Cooper, W.W., & Rhodes, E.L. (1978). Measuring the efficiency of
decision making units. European Journal of Operational Research, 2(6). 429-444.
Chen, T. (1997). A measurement of the resource utilization efficiency of
university libraries. International Journal of Production Economics 53(1), 71-80.
Cherchye, L., & Post, T. (2000). Methodological advances in DEA: A survey and
an application for the Dutch electricity sector. Discussion paper number 111 from
Erasmus University of Rotterdam, Erasmus Research Institute of Management.
Clark, B.R. (1995). Places of inquiry: Research and advanced education in
modern universities. Berkeley: University of California Press.
Clotfelter, C.T. (1996). Buying the best: Cost escalation in elite higher
education. Princeton: Princeton University Press.
Clotfelter, C.T. (1991). Explaining the demand. In Clotfelter, C.T., Ehrenberg,
R.G., Getz, M., & Siegfried, J.J. (Eds.). Economic challenges in higher education.
Chicago: University of Chicago Press.
Coelli, T., Prasada Rao, D.S., & Battese, G.E. (1998). An Introduction to
Efficiency and Productivity Analysis. Boston: Kluwer Academic Publishers.
Cohn, E. (1979). The Economics of Education. Cambridge: Ballinger.
Cohn, E., Rhine, S.L.W., & Santos, M.C. (1989). Institutions of higher education
as multi-product firms: Economies of scale and scope. The Review of Economics and
Statistics 71 (2), 284-290.
Cooper, W.W., Seiford, L.M., & Tone, K. (2000). Data envelopment analysis: A
comprehensive text with models, applications, references and DEA-solver software.
Boston, MA: Kluwer Academic Publishers.
Diamond, R.M., Gray, P. (1987). National study of teaching assistants. Center
for Instructional Development, Syracuse University.
Debreu, G. (1951). The coefficient of resource utilization. Econometrica, 19(3),
273-292.
de Groot, H., McMahon, W., & Fredericks Volkwein, J. (1991). The cost
structure of American research universities. The Review of Economics and Statistics 73
(3), 424-431.
Dolan, R.C. & Schmidt, R.M. (1994). Modeling institutional production of
higher education. Economics of Education Review 13 (3), 197-213.
Dundar, H., & Lewis, R.D. (1995). Departmental productivity in American
universities: Economics of scale and scope. Economics of Education Review 14 (2),
119-144.
Ehrenberg, R. G. (2000). Tuition rising: Why college costs so much.
Cambridge, MA: Harvard University Press.
Farrell, M.J. (1957). The measurement of productive efficiency. Journal of the
Royal Statistical Society, Series A, 120(3). 253-290.
Faulhaber, G.R. (1975). Cross-subsidization: Pricing in public enterprise.
American Economic Review (65). 966-977.
Garvin, D.A. (1980). The economics of university behavior. New York:
Academic Press.
Gassler, R.S. (1986). The economics of nonprofit enterprise: A study in applied
economic theory. New York: University Press of America, Inc.
Geiger, R.L. (2000, November). Super students: Universities in the marketplace
for high-ability students. Paper presented at the annual meeting of the Association for the
Study of Higher Education, Sacramento, CA.
Gilmore, J.L., & To, D. (1992). Evaluating academic productivity and quality.
In C.S. Hollins (Ed.), Containing costs and improving productivity in higher education.
New Directions for Institutional Research, 75, (Fall).
Glass, J.C., McKillop, D.G., & Hyndman, N. (1995). Efficiency in the provision
of university teaching and research: An empirical analysis of UK universities. Journal of
Applied Econometrics 10 (1), 61-72.
Goldberger, M.L., Maher, B.A., & Flattau, P.E. (Eds.). (1995). Research-
doctorate programs in the United States: Continuity and change. Washington: National
Academic Press.
Goudriaan, R., & de Groot, H. (1991). Regulation and the performance of
American universities. Paper presented at the CIRIEC conference for Public versus
Private Enterprises: In search of the real issues. Liege, April 4-5.
Grosskopf, S., Margaritis, D., & Valdmanis, V. (2001). The effects of teaching on
hospital productivity. Socio-Economic planning Sciences, 35, 189-204.
Hines, E. R. & Higham, J. R. (1996). Faculty workload and state policy. Paper
presented at the annual meeting of the Association for the Study of Higher Education,
Memphis, TN. November, 1996.
Hoenack, S.A. (1990). An economist's perspective on costs within higher
education institutions. In S.A. Hoenack & E.I. Collins (Eds.), The economics of
American universities. Buffalo, NY: State University of New York Press.
Hopkins, D.S.P. (1990). The higher education production function: Theoretical
foundations and empirical findings. In S.A. Hoenack & E.I. Collins (Eds.), The
economics of American universities. Buffalo, NY: State University of New York Press.
Hoxby, C.M. (1997, December). How the changing market structure of U.S.
higher education explains college tuition. National Bureau of Economic Research,
Working Paper 6323.
James, E. (1990). Decision processes and priorities in higher education. In S.A.
Hoenack & E.I. Collins (Eds.), The economics of American universities. Buffalo, NY:
State University of New York Press.
James, E. (1986). How nonprofits grow: A model. In S. Rose-Ackerman (Ed.),
The economics of nonprofit institutions. New York: Oxford University Press.
James, E. (1978). Product mix and cost disaggregation: A reinterpretation of the
economics of higher education. The Journal of Human Resources 13 (2), 157-186.
James, E., & Rose-Ackerman, S. (Eds.). (1986). The nonprofit enterprise in
market economics. New York: Harwood Academic Publishers.
Johnes, J. & Johnes, G. (1995). Research funding and performance in U.K.
university departments of economics: A frontier analysis. Economics of Education
Review 14(3), 301-314.
Jongbloed, B., R. Goudriaan & D.C. van Ingen (1998), Kostendeterminanten en
Doelmatigheid van het Nederlandse Hoger Onderwijs, Beleidsgerichte Studies Hoger
Onderwijs en Wetenschappelijk Onderzoek 57, Ministerie van Onderwijs, Cultuur en
Wetenschappen. Den Haag: SDU.
Jongbloed, B.W.A. & Koelman, J.B.J. (1996), Universiteiten en Hogescholen
vergeleken, Beleidsgerichte Studies Hoger Onderwijs en Wetenschappelijk Onderzoek
38, Ministerie van Onderwijs, Cultuur en Wetenschappen. Den Haag: SDU.
Jongbloed, B.W.A., Koelman, J.B.J., Goudriaan, R., de Groot, H., Haring,
H.M.M. & van Ingen, D.C. (1994), Kosten en Doelmatigheid van het Hoger Onderwijs
in Nederland, Duitsland en Groot-Brittannië, Beleidsgerichte Studies Hoger Onderwijs
en Wetenschappelijk Onderzoek 35, Ministerie van Onderwijs, Cultuur en
Wetenschappen. Den Haag: SDU.
Jongbloed, B. & Vink, M. (1994), Assessing efficiency in British, Dutch and
German universities, in: Goedegebuure, L. & van Vught, F. (eds.), Comparative policy
studies in higher education. Utrecht: Lemma.
Jordan, S.M. (1994). What have we learned about faculty workload: The best
evidence. In J.F. Wegin (Ed.), Analyzing faculty workload. New Directions for
Institutional Research, 83, (Fall).
King, W.D. (1997). Input and output substitution in higher education. Economics
Letters, 57, 107-111.
Koshal, R.K., & Koshal, M. (1995). Quality and economies of scale in higher
education. Applied Economics 27, 773-778.
Lakdawalla, D., & Philipson, T. (1998). Nonprofit production and competition.
NBER Working Paper number W6377.
Layzell, D.T. (1992). Tight budgets demand studies of faculty productivity. The
Chronicle of Higher Education, Feb 19.
Leatherman, C. (2000). NLRB ruling may demolish the barriers to T.A. unions
at private universities. The Chronicle of Higher Education, April 14.
Leslie, L.L., Oaxaca, R.L., & Rhoades, G. (2001). Technology transfer and
academic capitalism. In A.H. Teich et al (eds.) AAAS Science and Technology Policy
Yearbook: 2001. Washington DC: American Association for the Advancement of
Science. p 261-77.
Massy, W.F. (1996). Resource allocation in higher education. Ann Arbor, MI:
The University of Michigan Press.
McMillan, M.L. & Datta, D. (1998). The relative efficiency of Canadian
universities. Canadian Public Policy, 24(4), 485-511.
Metters, R.D., Vargas, V.A., & Whybark, D.C. (2001). An investigation of the
sensitivity of DEA to data errors. Computers & Industrial Engineering, 41, 163-171.
Mooney, C.J. (1992). AAUP criticizes colleges' reliance on part-time teachers.
The Chronicle of Higher Education, Nov 18.
Nelson, R., & Hevert, K.T. (1992). Effect of class size on economies of scale
and marginal costs in higher education. Applied Economics 24, 473-482.
Nerlove, M. (1972). On tuition and the costs of higher education: Prolegomena
to a conceptual framework. The Journal of Political Economy 80, (3.2), 178-218.
Newhouse, J.P. (1970). Toward a theory of nonprofit institutions: An economic
model of a hospital. American Economic Review 60 (1), 64-75.
Nicholson, W. (1995). Microeconomic theory: Basic principles and extensions.
New York: The Dryden Press.
Ohio Board of Regents (1994). Report of the Regents’ advisory committee on
faculty workload standards and guidelines. February 18, 1994. Available On-line at:
http://.bor.ohio.gov/plandocs/workload.html.
Phillips, E.C., Morell, C., & Chronister, J.L. (1996). Responses to reduced state
funding. In D.W. Breneman & A.L. Taylor (Eds.), Strategies for promoting excellence in
a time of scarce resources. New Directions for Higher Education, 94, (Summer).
Presley, J. B. & Engelbride, E. (1998). Accounting for faculty productivity in the
research university. Review of Higher Education, 22(1), 17-37.
Rose-Ackerman, S. (1986). The economics of nonprofit institutions. New York:
Oxford University Press.
Rosen, D., Schaffnit, C., & Paradi, J.C. (1998). Marginal rates and two-
dimensional level curves in DEA. Journal of Productivity Analysis, 9, 205-232.
Rothschild, M., & White, L.J. (1995). The analytics of the pricing of higher
education and other services in which the customers are inputs. Journal of Political
Economy, 103(3). 573-586.
Rothschild, M., & White, L.J. (1993). The university in the marketplace: Some
insights and some puzzles. In C.T. Clotfelter & M. Rothschild (Eds.). Studies of supply
and demand in higher education. Chicago: University of Chicago Press.
Shephard, R.W. (1953). Cost and Production Functions. Princeton: Princeton
University Press.
Siegel, D., Waldman, D., & Link, A. (1999). Assessing the impact of
organizational practices on the productivity of university technology transfer offices: An
exploratory study. NBER Working Paper number W7526.
State Council of Higher Education for Virginia (1991). Results of the Virginia
faculty survey. Richmond: State Council of Higher Education for Virginia.
Sykes, C.J. (1988). Profscam: Professors and the Demise of Higher Education.
New York: St. Martin's Griffin.
Toutkoushian, R.K. (1999). The value of cost functions for policymaking and
institutional research. Research in Higher Education 40 (1), 1-15.
Townsend, R.B. (2000). Summary of data from surveys by the coalition on the
academic workforce. [On-line]. Available: www.theaha.org/caw/cawreport.htm
Tuckman, H.P., & Pickerill, K.L. (1988). Part-time faculty and part-time academic
careers. In D.W. Breneman and T.I.K. Youn (Eds.), Academic Labor Markets and
Careers. New York: Falmer Press.
University of Nebraska at Lincoln (1993-4). National survey of graduate stipends,
graduate fellowships, and postdoctoral fellowships. University of Nebraska-Lincoln,
Office of Graduate Studies.
U.S. Department of Education. (2000). National Center for Education Statistics.
Federal Support for Education: Fiscal Years 1980 to 1999, NCES 2000-019, by Charlene
M. Hoffman. Washington, D.C.
Varian, H.R. (1992). Microeconomic Analysis. New York: W.W. Norton.
Vink, M.J.C. (1997). Efficiency in higher education: A comparative analysis on
sectoral and institutional level. Twente, The Netherlands: Universiteit Twente.
Webster, D.S. (1986). Academic quality rankings of American colleges and
universities. Springfield, IL: Charles C. Thomas Publisher.
Wilson, R. (2001). Percentage of part-timers on college faculties holds steady
after years of big gains. The Chronicle of Higher Education, Today's News, April 23,
2001.
Wilson, R. (1999). Yale relies on TA's and adjuncts for teaching, report says.
The Chronicle of Higher Education, April 9.
Winston, G.C. (2000). A guide to measuring college costs. In M.F. Middaugh
(Ed.), Analyzing Costs in Higher Education: What Institutional Researchers Need to
Know. New Directions for Institutional Research, 106, (Summer).
Winston, G.C. (1999). Subsidies, hierarchy and peers: The awkward economics
of higher education. Journal of Economic Perspectives, 13, 13-36.
Worthington, A.C. (2001). An empirical survey of frontier efficiency
measurement techniques in education. Education Economics, 9(3), 245-268.
Zemsky, R. (1994). Faculty discretionary time: Departments and the academic
ratchet. Journal of Higher Education, 65 (January/February), 1-22.
Zhang, Y., & Bartels, R. (1998). The effect of sample size on the mean efficiency
in DEA with an application to electricity distribution in Australia, Sweden and New
Zealand. Journal of Productivity Analysis, 9, 187-204.
Appendix A – Technology, production and cost concepts in economics42
In order to understand the relationship between production functions, cost
functions, and efficiency it is useful to build on a simple, single-output example. All of
the concepts discussed below readily translate to the multi-output case. In the interest of
mathematical simplicity, and without loss of generality, the single-output case is presented
here because it is conceptually more instructive.
Suppose a university uses two inputs, faculty members (F) and graduate teaching
assistants (TA), and produces the output "one semester of introductory economics
education to a class of two hundred students" (E). Further, assume that there are only
two ways in which the university can use these inputs to produce the output: (a) using one
F and two TA, or (b) using two F and one TA. Both of these represent what are
called feasible production plans. Now assume the university can also double its input
usage, to 2F and 2TA, to produce two introductory economics courses. This means that
there are now a total of four input/output production plans (E, F, TA): (1,1,2), (1,2,1),
(2,2,4), and (2,4,2). If these four plans represent the only ways to produce the output E
then, taken together, they represent the university's production possibilities set: the set of
all feasible input and output combinations that are technologically possible.43
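The four feasible plans above can be written down directly. The following Python sketch (the names `PLANS` and `is_feasible` are mine, purely illustrative) encodes the production possibilities set as a set of (E, F, TA) tuples:

```python
# The four feasible production plans from the example, written as
# (E, F, TA) tuples: output level, faculty, teaching assistants.
PLANS = {(1, 1, 2), (1, 2, 1), (2, 2, 4), (2, 4, 2)}

def is_feasible(plan):
    """A plan is feasible only if it belongs to the production possibilities set."""
    return plan in PLANS
```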
In order to graph these two different levels of output in the same input space, one
can define the input requirement set as all of the input combinations that can produce at
42 This section is derived to a great extent from Varian (1992).
43 Mathematically, if a firm uses h inputs and produces j outputs then a production plan can be described as a vector in R^(h+j) and the production possibilities set, Y, as the subset of R^(h+j) consisting of all of the feasible production plans.
least some given level of output.44 For this example, the input requirement set is shown
graphically in Figure A.1. The points depicted as squares show the production plans
(1,1,2) and (1,2,1) and the triangles show (2,2,4) and (2,4,2). The plans represented by
triangles also belong to the input requirement set of E=1 because they produce at least as
much output (in this case it is actually more).
Figure A.1 – Production Plans and Production Possibility Sets
Given this space of feasible input/output combinations there are three reasonable
assumptions, which Varian suggests "regular" technologies exhibit, that can be imposed
on the production plans. The first is the trivial condition that there be at least some
way to produce the output. The second is a concept called monotonicity. This means that
if the firm can use X1 units of input one and X2 units of input two to produce Y units of
44 Formally, Varian (1992) defines the input requirement set as: V(y) = {x ∈ R^n+ : (y, -x) ∈ Y}, where x is a vector of inputs and y is a scalar output. The negative sign on x implies that it is a "negative output."
output, then they should be able to use slightly more of one or both inputs and still
produce Y units of output. Monotonicity is also referred to as the free-disposability-of-
inputs condition. Considering the graphical example, the input requirement set V(E) now
includes all points (E,F,TA) in the "northeast" quadrant of each existing production plan.
This is shown in Figure A.2.
Figure A.2 – Monotonicity or Free-Disposability
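Free disposability can also be stated operationally: an input bundle can produce at least y units if it weakly dominates some feasible plan producing at least y. A minimal Python sketch of this membership test, using the illustrative plans from the example (the function name is mine):

```python
# (E, F, TA) tuples from the running example.
PLANS = [(1, 1, 2), (1, 2, 1), (2, 2, 4), (2, 4, 2)]

def in_input_requirement_set(y, f, ta):
    """(f, ta) can produce at least y units if some feasible plan with
    output >= y uses no more of either input (free disposability)."""
    return any(e >= y and f >= pf and ta >= pta for (e, pf, pta) in PLANS)
```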
The third property of regular technologies states that, given two different
production plans producing the same amount of output, it should be possible to use some
"mix" of these two plans and still produce the same level of output. Drawing again on a
university example, suppose E is redefined so that an introductory economics course now
consists of 500 students and that there are two feasible production plans (E,F,TA): (1,1,4)
and (1,3,1). It seems reasonable that some mix of these input levels could also be
employed, like two faculty members and two teaching assistants. This is referred to as
the convexity assumption. In this example, the plan (1,2,2) is the only discrete mix
available. If the factors were scaled up by a thousand though, there may be hundreds of
discrete combinations available. If there were an infinite number, it is possible to
represent these as a straight line joining the two plans.45 This is shown in Figure A.3.
Note that for each of these linear combination plans, the free-disposability assumption
also applies.
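Under convexity, any weighted average of two plans in V(y) is itself in V(y). A small Python sketch of the input mix tx + (1-t)x' (the function name is mine, and the example bundles are the (F,TA) inputs of the two plans just discussed):

```python
def convex_mix(x, xprime, t):
    """Convex combination t*x + (1-t)*x' of two input bundles that produce
    the same output; under convexity it also produces that output (0 <= t <= 1)."""
    return tuple(t * a + (1 - t) * b for a, b in zip(x, xprime))
```

For instance, the midpoint (t = 0.5) of the input bundles (1, 4) and (3, 1) is (2.0, 2.5).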
The shaded area in Figure A.3 represents the input requirement set for producing
the output level E = 1: E(1). Notice again though that the two production plans
represented by triangles also are included in E(1). Hence there are feasible production
plans in E(1) that produce greater levels of output. In order to define those plans unique
to any E(y), one can define the isoquant as those input combinations in E(y) capable of
producing exactly Y units of output. Strictly speaking, in Figure A.3 the isoquant for
E(1) consists of the two production plans denoted as squares. However, under the free-
disposability and convexity assumptions all input combinations bounding the shaded area
also produce exactly E = 1. Thus the isoquant for E(1) can be also defined as the closed
convex hull46 of the input requirement set.
45 Mathematically, if x and x' are in V(y), then tx + (1-t)x' is also in V(y) for 1 ≥ t ≥ 0.
46 The term "closed" implies that the set E(y) also contains those points making up the boundary itself.
Figure A.3 - Convexity
As there are only a finite number of production plans presented here, the isoquant
is "kinked." By fitting a curve through the points it is possible to "smooth" the isoquant.
The equation that generates this curve is what is called the parametric representation of
the technology. Figure A.4 shows how this might be done for the isoquants in Figure
A.3. This graph is commonly referred to in economic texts as the "isoquant map" or the
"level curves" of the technology. From the definition of an isoquant it can be shown that
it not only represents the production plans producing exactly Y units of output but also
the plans that use the minimum level of inputs necessary.
Figure A.4 – Isoquant Map
This last statement can also be expressed in the following way: Y is the maximum
amount of output that can be produced using the input combinations defined by a
particular isoquant. For example, considering Figure A.4, the two plans represented by
the squares (E = 1) could each produce a lesser amount (perhaps the class is only half
full) but it is not possible to produce more than one unit. If it were, and say it could
produce two units, then the E(2) isoquant would not be the minimum amount of input
necessary to produce two units, E(1) would be.
The parametric function describing this relationship, the maximum amount of
output that can be produced using different combinations of inputs, is called the
production function.47 As an example, one of the most widely used production functions
is the Cobb-Douglas model: y = f(K,L) = K^α L^β. Here K and L represent a firm's capital
47 Only in the case of a single output is this referred to as a production function. In the case where a firm produces multiple outputs, this formulation is referred to as the "transformation function."
and labor inputs respectively. Using this function, it is possible to determine the
maximum amount of output a firm can produce using different amounts of capital and
labor. If instead output is fixed at some level Y, the Cobb-Douglas model depicts the
shape of the isoquant for that output level (e.g., E = 2 in Figure A.4).
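As an illustration, the Cobb-Douglas form can be evaluated directly, either for output given inputs or, with output fixed, for the input level that traces out the isoquant. The following Python sketch uses α = β = 0.5, an assumed parameterization for the example; the function names are mine:

```python
def cobb_douglas(k, l, alpha=0.5, beta=0.5):
    """Maximum output y = K^alpha * L^beta for the assumed parameters."""
    return k ** alpha * l ** beta

def isoquant_l(y, k, alpha=0.5, beta=0.5):
    """Labor needed to produce exactly y given capital k, i.e. the level
    curve (isoquant) of the Cobb-Douglas function solved for L."""
    return (y / k ** alpha) ** (1.0 / beta)
```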
Using the parametric representation, it is possible to consider what happens if a
firm reduces its use of one input and increases its use of the other in order to maintain
the same level of output. The rate at which this trade-off can be made is the marginal
rate of technical substitution (RTS) between the inputs. In the two-input case, it is
computed by evaluating the derivative of the production function when output is fixed
(i.e., the slope of the isoquant).48 In keeping with the university example, again consider
Figure A.4 and suppose that the university is producing output level E(y) using the input
mix F = 40 and TA = 20 (the left triangle in Figure A.4, for example). In addition,
assume that the isoquant is defined by the following Cobb-Douglas formulation: E =
g(F,TA) = F^0.5 TA^0.5. The RTS between faculty and teaching assistants evaluated at the
point (F,TA) = (40,20) is -2, implying that the university could add one teaching assistant
and use two fewer faculty (38,21) and still produce Y units of output. Given the way the
Cobb-Douglas model is formulated here, if the university were instead using 20 faculty
and 40 teaching assistants, the new RTS can be obtained by inverting the old RTS, and
would now equal -0.5. In this case, if the number of teaching assistants were reduced by
two, the university would only need to add one faculty member to still be able to produce
Y units of education.
48 Where there are more than two inputs, the RTS can be found by taking the total differential of the function and only allowing the inputs of interest to change.
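The numbers in this example are easy to verify. A short Python check of the RTS for the assumed form E = F^0.5 TA^0.5 (the function name is mine):

```python
def rts_faculty_per_ta(f, ta):
    """Slope dF/dTA along an isoquant of E = F**0.5 * TA**0.5,
    i.e. -(dE/dTA)/(dE/dF), which reduces to -F/TA for this form."""
    dE_dF = 0.5 * f ** -0.5 * ta ** 0.5   # marginal product of faculty
    dE_dTA = 0.5 * f ** 0.5 * ta ** -0.5  # marginal product of TAs
    return -dE_dTA / dE_dF
```

At (F, TA) = (40, 20) this returns -2, and at (20, 40) it returns -0.5, matching the inversion described in the text.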
From the production function it is also possible to determine the extent to which
output increases when only one of the inputs is increased. For any input, for example
capital, one can define the marginal productivity of capital as the additional amount of
output that can be produced when an additional unit of capital is employed, holding all
other inputs constant. Given some production function y = f(K,L) this involves taking
the partial derivative of the production function with respect to the input under
evaluation. For example, suppose a university uses 50 faculty members and 30 teaching
assistants to teach 60 classes. If the university adds one more faculty member and can
now teach 61 classes then the marginal productivity of the 51st faculty member is one
class. Finally, it can also be shown mathematically that the RTS between two inputs is
equal to the ratio of their marginal productivities (Nicholson, 1995).
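This relationship can be checked numerically: approximate each marginal productivity by a finite difference and compare their ratio with the RTS from the earlier example. The Python sketch below is illustrative; the 0.5 exponents are the assumed example parameters and the function names are mine:

```python
def marginal_product(f, x, i, h=1e-6):
    """Numerical partial derivative (marginal productivity) of production
    function f at input bundle x with respect to input i."""
    bumped = list(x)
    bumped[i] += h
    return (f(bumped) - f(x)) / h

def cobb_douglas(x):
    # E = F^0.5 * TA^0.5, the assumed form from the example
    return x[0] ** 0.5 * x[1] ** 0.5

# Marginal products at (F, TA) = (40, 20); their ratio equals the
# (absolute value of the) RTS of -2 computed analytically in the text.
mp_f = marginal_product(cobb_douglas, [40.0, 20.0], 0)
mp_ta = marginal_product(cobb_douglas, [40.0, 20.0], 1)
```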
Appendix B – Input/Output measures for tier 1
[Histograms omitted; the descriptive statistics reported with each histogram (Sample 1 68, Observations 68) are tabulated below.]

Series    Mean      Median    Maximum   Minimum   Std. Dev.  Skewness  Kurtosis  Jarque-Bera  Probability
PUBS      1184.824  1100.000  3043.000  175.0000  609.2050   0.646856  2.921053  4.759777     0.092561
GRADS     4461.191  4297.500  10641.00  900.0000  2485.002   0.577473  2.527125  4.412949     0.110088
LOWUGS    6577.485  6665.000  15623.00  448.0000  3926.295   0.502989  2.319793  4.178246     0.123796
HIGHUGS   7207.824  6131.000  17851.00  452.0000  4749.138   0.531376  2.178681  5.111349     0.077640
FACULTY   1068.162  991.0000  2261.000  257.0000  513.0878   0.491567  2.342427  3.963709     0.137813
TA        764.5882  733.5000  2679.000  18.00000  491.8715   1.005821  5.043457  23.29684     0.000009
RA        1222.485  1028.000  3417.000  171.0000  764.6000   0.958025  3.286253  10.63403     0.004907
Appendix C – Input/Output measures for tier 2
[Histograms omitted; the descriptive statistics reported with each histogram (Sample 1 115, Observations 115) are tabulated below.]

Series    Mean      Median    Maximum   Minimum   Std. Dev.  Skewness  Kurtosis  Jarque-Bera  Probability
PUBS      265.2000  211.0000  1047.000  11.00000  214.5732   1.361879  4.370215  44.54501     0.000000
GRADS     2126.948  1722.000  6045.000  368.0000  1238.549   0.855335  3.093145  14.06386     0.000883
LOWUGS    4634.765  4174.000  12976.00  645.0000  2655.578   0.705780  3.197651  9.734593     0.007694
HIGHUGS   4920.991  4644.000  14190.00  617.0000  2838.490   0.651869  3.072203  8.169527     0.016827
TA        335.9304  259.0000  1128.000  2.000000  245.7444   1.005579  3.322613  19.87983     0.000048
RA        327.2609  228.0000  1266.000  0.000000  287.3509   1.301349  4.013950  37.38520     0.000000
FACULTY   617.5913  572.0000  1439.000  137.0000  291.0792   0.585397  2.730110  6.917257     0.031473
Appendix D – Technical efficiency output for tier 1
DMU  VRS  CRS  Scale Efficiency  Returns to Scale
1 University of California-Berkeley 1.000 1.000 1.000
2 University of Illinois at Urbana-Champ. 0.922 0.853 0.925 DRS
3 University of Michigan at Ann Arbor 1.000 0.825 0.825 DRS
4 University of California-San Diego 1.000 1.000 1.000
5 University of California-Los Angeles 1.000 1.000 1.000
6 University of Wisconsin-Madison 1.000 0.987 0.987 DRS
7 University of Texas at Austin 1.000 0.922 0.922 DRS
8 University of Minnesota - Twin Cities 0.873 0.823 0.943 DRS
9 University of Washington - Seattle 0.772 0.717 0.929 DRS
10 Purdue University 1.000 1.000 1.000
11 University of North Carolina - Chapel Hill 0.756 0.748 0.989 IRS
12 University of California-Santa Barbara 1.000 1.000 1.000
13 Pennsylvania State University 1.000 0.894 0.894 DRS
14 University of California-Davis 0.855 0.842 0.985 DRS
15 University of Virginia 0.886 0.875 0.987 IRS
16 Ohio State University 1.000 0.896 0.896 DRS
17 Indiana University at Bloomington 1.000 1.000 1.000
18 Georgia Institute of Technology 1.000 0.947 0.947 IRS
19 University of Maryland at College Park 0.928 0.921 0.992 DRS
20 University of Arizona 0.948 0.929 0.981 DRS
21 SUNY at Stony Brook 0.864 0.833 0.965 IRS
22 Texas A&M University 1.000 1.000 1.000
23 University of Florida 0.930 0.637 0.685 DRS
24 University of Massachusetts at Amherst 0.778 0.766 0.984 IRS
25 University of Colorado at Boulder 1.000 1.000 1.000
26 North Carolina State Univ. at Raleigh 0.812 0.806 0.992 IRS
27 Iowa State University 0.837 0.832 0.994 IRS
28 Michigan State University 1.000 0.873 0.873 DRS
29 University of Iowa 0.902 0.894 0.992 IRS
30 University of Pittsburgh 0.892 0.892 1.000 DRS
31 University of Utah 0.978 0.942 0.964 IRS
32 Oregon State University 0.943 0.876 0.928 IRS
33 Virginia Polytechnic and State Univ 0.739 0.735 0.994 IRS
34 University of Georgia 0.658 0.653 0.992 IRS
35 Colorado State University 0.977 0.969 0.992 IRS
36 University of Illinois at Chicago 0.791 0.788 0.996 IRS
37 University of Alabama at Birmingham 0.955 0.886 0.928 DRS
38 Arizona State University 1.000 1.000 1.000
39 Massachusetts Institute of Technology 1.000 0.907 0.907 DRS
40 Harvard University 1.000 1.000 1.000
41 California Institute of Technology 1.000 1.000 1.000
42 Stanford University 1.000 1.000 1.000
43 University of Chicago 1.000 1.000 1.000
44 Princeton University 0.945 0.920 0.973 DRS
45 Yale University 1.000 1.000 1.000
46 Columbia University 0.588 0.559 0.951 DRS
47 University of Pennsylvania 1.000 1.000 1.000
48 Carnegie Mellon University 0.853 0.751 0.880 IRS
49 Duke University 0.860 0.797 0.927 IRS
50 Northwestern Univ 0.883 0.857 0.971 IRS
51 New York University 1.000 1.000 1.000
52 University of Rochester 1.000 0.906 0.906 IRS
53 Brown University 0.799 0.714 0.893 IRS
54 University of Southern California 1.000 1.000 1.000
55 Washington University 0.876 0.798 0.911 IRS
56 Emory University 1.000 1.000 1.000
57 Vanderbilt University 0.743 0.678 0.912 IRS
58 Case Western Reserve University 0.946 0.812 0.858 IRS
59 University of Delaware 1.000 1.000 1.000
60 University of California-Santa Cruz 1.000 1.000 1.000
61 University of Oregon 0.984 0.971 0.987 IRS
62 University of California-Riverside 0.897 0.794 0.885 IRS
63 Brandeis University 1.000 0.750 0.750 IRS
64 Rice University 1.000 0.887 0.887 IRS
65 Rensselaer Polytechnic Institute 1.000 0.879 0.879 IRS
66 University of Notre Dame 0.731 0.660 0.903 IRS
67 College of William and Mary 0.837 0.663 0.792 IRS
68 Dartmouth College 1.000 0.914 0.914 IRS
Appendix E – Technical efficiency output for tier 2
DMU  VRS  CRS  Scale Efficiency  Returns to Scale
1 SUNY at Buffalo 0.865 0.756 0.874 DRS
2 University of Kansas 1.000 0.820 0.820 DRS
3 Florida State University 1.000 0.825 0.825 DRS
4 University of Connecticut 0.803 0.672 0.837 DRS
5 University of Cincinnati 1.000 0.797 0.797 DRS
6 Temple University 1.000 0.696 0.696 DRS
7 University of Kentucky 1.000 0.641 0.641 DRS
8 University of Hawaii at Manoa 0.612 0.601 0.981 DRS
9 Louisiana State Univ (A&M) 1.000 0.563 0.563 DRS
10 University of Nebraska at Lincoln 0.938 0.704 0.751 DRS
11 University of Missouri, Columbia 0.904 0.781 0.863 DRS
12 University of Tennessee at Knoxville 1.000 0.620 0.620 DRS
13 Virginia Commonwealth University 0.763 0.756 0.992 DRS
14 Utah State University 1.000 1.000 1.000
15 West Virginia University 0.841 0.791 0.941 DRS
16 Georgetown University 0.835 0.810 0.969 DRS
17 Boston University 1.000 0.812 0.812 DRS
18 University of Miami 0.580 0.566 0.977 IRS
19 Tufts University 0.669 0.614 0.917 IRS
20 Tulane University 0.702 0.696 0.991 IRS
21 Howard University 0.470 0.461 0.980 IRS
22 University of Houston 1.000 1.000 1.000
23 University of South Carolina at Columbia 0.910 0.753 0.828 DRS
24 SUNY at Albany 0.830 0.798 0.961 DRS
25 University of Rhode Island 0.700 0.679 0.970 IRS
26 University of Vermont 0.694 0.671 0.967 IRS
27 Washington State University 0.765 0.641 0.838 DRS
28 University of South Florida 1.000 0.890 0.890 DRS
29 Kansas State University 0.808 0.804 0.995 DRS
30 University of Oklahoma 0.846 0.815 0.963 DRS
31 University of Wisconsin-Milwaukee 0.799 0.787 0.985 DRS
32 University of Wyoming 0.624 0.598 0.958 IRS
33 Clemson University 0.668 0.652 0.977 DRS
34 Texas Tech University 1.000 0.975 0.975 DRS
35 University of Louisville 0.768 0.755 0.984 IRS
36 Southern Illinois University-Carbondale 0.891 0.829 0.931 DRS
37 Auburn University 0.827 0.716 0.866 DRS
38 University of Arkansas 0.609 0.602 0.989 IRS
39 University of Mississippi 0.791 0.744 0.941 IRS
40 University of Idaho 0.689 0.641 0.931 IRS
41 Mississippi State University 0.729 0.707 0.970 IRS
42 Syracuse University 0.737 0.657 0.891 DRS
43 Lehigh University 0.695 0.628 0.903 IRS
44 St Louis University 0.787 0.690 0.876 IRS
45 George Washington University 1.000 1.000 1.000
46 Northeastern University 0.738 0.724 0.981 DRS
47 Brigham Young University 1.000 0.879 0.879 DRS
48 Northern Arizona University 0.950 0.933 0.983 IRS
49 SUNY at Binghamton 0.820 0.795 0.969 IRS
50 University of North Carolina at Greensboro 0.752 0.729 0.969 IRS
51 University of North TX 1.000 0.971 0.971 DRS
52 Georgia State University 0.844 0.827 0.980 DRS
53 University of Missouri, Rolla 0.912 0.726 0.796 IRS
54 Bowling Green State Univ 0.767 0.762 0.994 DRS
55 University of Texas at Arlington 0.887 0.850 0.958 DRS
56 University of Alabama 0.734 0.729 0.993 DRS
57 Old Dominion University 0.763 0.745 0.976 IRS
58 University of Texas at Dallas 1.000 1.000 1.000
59 University of Missouri, Kansas City 0.680 0.617 0.907 IRS
60 University of Toledo 1.000 1.000 1.000
61 Northern Illinois University 0.744 0.717 0.964 DRS
62 Ball State University 0.740 0.699 0.944 DRS
63 Western Michigan University 1.000 1.000 1.000
64 University of Southern Mississippi 0.765 0.738 0.966 IRS
65 University of Northern Colorado 0.918 0.900 0.980 IRS
66 Illinois State University 0.911 0.896 0.983 DRS
67 University of South Dakota 0.831 0.714 0.859 IRS
68 Texas Woman's University 1.000 1.000 1.000
69 Indiana University of PA 0.816 0.810 0.993 IRS
70 Drexel University 0.957 0.922 0.964 IRS
71 Boston College 0.731 0.723 0.990 IRS
72 Southern Methodist University 0.619 0.558 0.901 IRS
73 University of Denver 0.742 0.709 0.956 IRS
74 Loyola University of Chicago 0.782 0.730 0.933 IRS
75 St John's University (Jamaica, NY) 0.976 0.975 0.999 IRS
76 Adelphi University 1.000 1.000 1.000
77 Andrews University 1.000 0.535 0.535 IRS
78 American University 0.860 0.833 0.969 IRS
79 Catholic University of America 0.622 0.518 0.833 IRS
80 Hofstra University 1.000 1.000 1.000
81 Fordham University 0.941 0.919 0.976 IRS
82 Florida Institute of Technology 0.881 0.717 0.814 IRS
83 Clark Atlanta University 1.000 0.825 0.825 IRS
84 George Mason University 1.000 1.000 1.000
85 University of Nevada-Reno 0.882 0.825 0.935 IRS
86 Colorado School of Mines 0.968 0.641 0.662 IRS
87 University of Maine 0.771 0.727 0.943 IRS
88 University of New Hampshire 0.723 0.696 0.962 IRS
89 University of Massachusetts Lowell 0.730 0.697 0.954 IRS
90 Michigan Technological University 0.811 0.669 0.825 IRS
91 University of Missouri, St Louis 1.000 1.000 1.000
92 Montana State University - Bozeman 0.851 0.808 0.949 IRS
93 University of Central Florida 1.000 1.000 1.000
94 University of Alabama in Huntsville 0.768 0.653 0.851 IRS
95 New Jersey Institute Technology 0.927 0.818 0.882 IRS
96 University of Southwestern Louisiana 1.000 1.000 1.000
97 University of Montana 0.882 0.765 0.868 IRS
98 North Dakota State University 1.000 1.000 1.000
99 Idaho State University 0.873 0.768 0.880 IRS
100 University of North Dakota 0.923 0.843 0.913 IRS
101 Florida Atlantic University 0.794 0.740 0.932 IRS
102 Portland State University 1.000 1.000 1.000
103 Wichita State University 0.827 0.778 0.940 IRS
104 Middle Tennessee State University 1.000 1.000 1.000
105 Wake Forest University 0.726 0.505 0.696 IRS
106 Clark University 1.000 0.586 0.586 IRS
107 Clarkson University 1.000 0.701 0.701 IRS
108 University of Tulsa 0.830 0.531 0.640 IRS
109 Baylor University 0.728 0.666 0.914 IRS
110 Worcester Polytechnic Institute 0.903 0.602 0.666 IRS
111 Texas Christian University 0.943 0.744 0.789 IRS
112 Stevens Institute of Technology 1.000 0.813 0.813 IRS
113 Duquesne University 0.850 0.777 0.914 IRS
114 Seton Hall University 1.000 1.000 1.000
115 University of Detroit Mercy 0.996 0.734 0.737 IRS
Appendix F – Input/Output weights for tier 1 VRS model (000)
DMU Faculty RA TA LUG HUG GRAD PUBS University of California-Berkeley 0.005 0.059 0.840 0.000 0.000 0.000 0.329 Univ. of Illinois at Urbana-Champ. 0.535 0.000 0.000 0.038 0.000 0.040 0.089 University of Michigan 0.000 0.429 0.000 0.000 0.000 0.065 0.170 University of California-San Diego 1.353 0.000 0.000 0.000 0.066 0.000 0.278 Univ. of California-Los Angeles 0.210 0.339 0.000 0.000 0.017 0.065 0.095 University of Wisconsin-Madison 0.000 0.000 0.771 0.000 0.003 0.058 0.181 University of Texas at Austin 0.031 0.299 0.002 0.011 0.014 0.058 0.000 University of Minnesota 0.000 0.333 0.596 0.028 0.009 0.076 0.115 University of Washington - Seattle 0.000 0.112 0.626 0.026 0.010 0.067 0.086 Purdue University, Main Campus 0.617 0.071 0.000 0.067 0.000 0.000 0.000 Univ. of N. Carolina at Chapel Hill 0.632 0.080 0.154 0.027 0.028 0.107 0.096 Univ. of California-Santa Barbara 1.283 0.159 0.000 0.064 0.038 0.000 0.169 Pennsylvania State Univ. 0.008 0.009 0.950 0.000 0.040 0.000 0.169 University of California-Davis 0.445 0.392 0.000 0.000 0.025 0.000 0.413 University of Virginia 1.024 0.000 0.000 0.078 0.000 0.101 0.011 Ohio State University 0.003 0.557 0.000 0.036 0.006 0.001 0.179 Indiana University at Bloomington 0.000 1.195 0.000 0.065 0.000 0.026 0.000 Georgia Institute of Technology 1.373 0.000 0.344 0.088 0.026 0.161 0.000 Univ. of Maryland at College Park 0.630 0.137 0.000 0.028 0.020 0.068 0.087 University of Arizona 0.563 0.166 0.000 0.024 0.022 0.054 0.102 SUNY at Stony Brook 1.172 0.069 0.000 0.032 0.046 0.148 0.064 Texas A&M University 0.001 0.050 0.779 0.015 0.044 0.000 0.000 University of Florida 0.000 0.000 1.156 0.017 0.025 0.056 0.096 Univ. of Massachusetts - Amherst 0.691 0.058 0.173 0.081 0.000 0.094 0.000
158
DMU Faculty RA TA LUG HUG GRAD PUBS University of Colorado at Boulder 0.000 1.775 0.581 0.021 0.036 0.000 0.318 N. Carolina State Univ. at Raleigh 0.692 0.000 0.404 0.116 0.000 0.000 0.000 Iowa State University 0.730 0.000 0.205 0.076 0.002 0.089 0.000 Michigan State University 0.000 0.000 0.931 0.066 0.000 0.000 0.000 University of Iowa 0.940 0.000 0.000 0.072 0.000 0.089 0.000 University of Pittsburgh 0.627 0.012 0.153 0.055 0.000 0.070 0.058 University of Utah 0.928 0.000 0.354 0.042 0.046 0.091 0.000 Oregon State University 1.339 0.000 0.312 0.043 0.065 0.162 0.047 Virginia Polytechnic 0.553 0.000 0.157 0.066 0.000 0.076 0.000 University of Georgia 0.508 0.000 0.143 0.066 0.001 0.078 0.000 Colorado State University 0.895 0.021 0.256 0.055 0.022 0.106 0.000 University of Illinois at Chicago 0.867 0.018 0.000 0.037 0.033 0.110 0.000 Univ. of Alabama at Birmingham 0.000 0.000 10.753 0.000 0.230 0.000 0.000 Arizona State University Main 0.000 0.596 0.321 0.000 0.037 0.062 0.000 MIT 0.000 0.000 2.062 0.000 0.000 0.000 0.410 Harvard University 0.000 0.000 1.395 0.000 0.000 0.099 0.000 California Institute of Technology 3.891 0.000 0.000 0.000 0.000 0.000 0.837 Stanford University 0.971 0.000 0.381 0.021 0.000 0.101 0.172 University of Chicago 0.975 0.308 0.000 0.000 0.000 0.124 0.202 Princeton University 0.382 1.262 0.097 0.000 0.000 0.000 0.763 Yale University 0.000 0.000 35.714 0.000 0.000 0.163 0.304 Columbia University 0.000 0.198 0.449 0.000 0.000 0.064 0.289 University of Pennsylvania 1.007 0.000 0.000 0.052 0.007 0.084 0.061 Carnegie Mellon University 1.647 0.000 0.434 0.174 0.000 0.221 0.140 Duke University 1.364 0.130 0.000 0.005 0.068 0.201 0.129 Northwestern Univ 1.014 0.094 0.000 0.000 0.048 0.142 0.081 New York University 0.000 1.229 0.000 0.024 0.000 0.095 0.000 University of Rochester 1.730 0.000 0.406 0.144 0.000 0.208 0.052 Brown University 1.565 0.381 0.154 0.154 0.000 0.269 0.305
University of Southern California 0.000 1.073 0.000 0.010 0.000 0.062 0.277
Washington University 1.414 0.014 0.341 0.113 0.000 0.228 0.000
Emory University 0.000 4.585 9.393 0.000 0.000 0.433 0.000
Vanderbilt University 1.262 0.253 0.000 0.110 0.000 0.216 0.173
Case Western Reserve University 1.703 0.000 0.650 0.041 0.000 0.358 0.000
University of Delaware 0.904 0.000 0.434 0.119 0.000 0.000 0.000
University of California-Santa Cruz 1.388 1.827 0.000 0.000 0.208 0.000 0.000
University of Oregon 1.364 0.197 0.000 0.113 0.000 0.134 0.000
University of California-Riverside 1.948 0.296 0.000 0.156 0.000 0.264 0.236
Brandeis University 1.999 1.841 0.395 0.045 0.055 0.172 2.117
Rice University 1.787 0.000 13.655 0.000 0.000 0.000 2.058
Rensselaer Polytechnic Institute 2.710 0.000 0.000 0.160 0.021 0.308 0.000
University of Notre Dame 1.318 0.073 0.251 0.146 0.000 0.225 0.140
College of William and Mary 1.833 0.000 0.737 0.096 0.167 0.276 0.000
Dartmouth College 2.409 0.000 2.792 0.469 0.000 0.000 0.000
Appendix G – Input/Output weights for tier 2 VRS model (000)
DMU Faculty RA TA LUG HUG GRAD PUBS
SUNY at Buffalo 0.963 0.000 0.078 0.025 0.031 0.114 0.175
University of Kansas, Main Campus 0.928 0.202 0.000 0.045 0.027 0.076 0.108
Florida State University 0.920 0.028 0.015 0.012 0.039 0.058 0.320
University of Connecticut 0.891 0.000 0.072 0.018 0.034 0.117 0.183
University of Cincinnati 0.720 0.002 0.262 0.056 0.000 0.067 0.000
Temple University 0.000 3.012 0.000 0.044 0.007 0.137 0.000
University of Kentucky 0.296 0.832 0.000 0.053 0.000 0.000 0.676
University of Hawaii at Manoa 0.862 0.000 0.123 0.015 0.054 0.145 0.000
Louisiana State Univ & Agric & Mechanical 0.004 0.035 1.224 0.012 0.000 0.056 0.762
University of Nebraska at Lincoln 0.887 0.000 0.000 0.047 0.019 0.000 0.589
University of Missouri, Columbia 1.112 0.000 0.000 0.056 0.019 0.000 0.609
University of Tennessee at Knoxville 0.000 0.272 0.932 0.030 0.023 0.071 0.338
Virginia Commonwealth University 1.194 0.000 0.165 0.029 0.050 0.144 0.152
Utah State University 1.396 0.000 1.810 0.119 0.000 0.112 0.439
West Virginia University 0.913 0.000 0.323 0.041 0.028 0.129 0.000
Georgetown University 0.657 5.253 1.153 0.000 0.000 0.291 1.077
Boston University 0.031 1.338 0.023 0.021 0.000 0.139 0.000
University of Miami 1.115 0.000 0.370 0.067 0.063 0.186 0.049
Tufts University 1.704 0.000 1.125 0.145 0.063 0.257 0.440
Tulane University 1.955 0.170 0.146 0.116 0.000 0.297 0.000
Howard University 0.974 0.000 0.876 0.171 0.000 0.157 0.721
University of Houston 0.000 6.452 0.000 0.000 0.048 0.000 0.715
University of South Carolina at Columbia 0.820 0.000 0.183 0.000 0.029 0.121 0.293
SUNY at Albany 1.397 0.361 0.000 0.043 0.047 0.169 0.000
University of Rhode Island 1.215 0.000 0.712 0.080 0.056 0.153 0.219
University of Vermont 1.429 0.000 1.090 0.142 0.044 0.109 0.596
Washington State University 1.036 0.000 0.000 0.058 0.020 0.000 0.631
University of South Florida 0.000 0.001 1.959 0.000 0.038 0.119 0.000
Kansas State University 0.740 0.000 0.977 0.083 0.000 0.000 0.602
University of Oklahoma, Norman Campus 1.179 0.000 0.163 0.027 0.046 0.133 0.140
University of Wisconsin-Milwaukee 0.610 3.254 0.000 0.069 0.009 0.180 0.000
University of Wyoming 1.267 0.000 0.742 0.096 0.067 0.183 0.263
Clemson University 1.170 0.000 0.087 0.044 0.050 0.163 0.000
Texas Tech University 1.206 0.000 0.219 0.046 0.035 0.084 0.046
University of Louisville 1.178 0.064 0.623 0.067 0.050 0.148 0.100
Southern Illinois University-Carbondale 0.528 2.814 0.000 0.000 0.093 0.000 0.106
Auburn University 0.719 0.000 0.479 0.066 0.000 0.000 0.655
University of Arkansas 0.935 0.000 0.752 0.095 0.040 0.076 0.436
University of Mississippi 1.757 0.000 0.956 0.104 0.078 0.218 0.034
University of Idaho 1.458 0.000 0.895 0.113 0.085 0.109 0.416
Mississippi State University 1.260 0.000 0.738 0.080 0.056 0.152 0.218
Syracuse University 1.022 0.217 0.000 0.042 0.028 0.131 0.215
Lehigh University 1.712 0.000 1.539 0.214 0.000 0.197 0.906
St Louis University 2.092 0.000 2.881 0.227 0.000 0.231 1.937
George Washington University 0.000 0.000 5.780 0.000 0.000 0.188 0.000
Northeastern University 1.243 0.000 0.108 0.034 0.053 0.160 0.000
Brigham Young University 0.002 4.411 0.003 0.042 0.032 0.000 0.000
Northern Arizona University 1.375 0.000 0.747 0.064 0.047 0.135 0.000
SUNY at Binghamton 1.893 0.775 0.000 0.064 0.116 0.120 0.022
University of North Carolina at Greensboro 1.608 0.000 0.942 0.102 0.072 0.194 0.279
University of North TX 1.331 0.000 0.000 0.025 0.036 0.125 0.000
Georgia State University 1.203 0.000 0.172 0.016 0.054 0.147 0.000
University of Missouri, Rolla 2.972 0.000 1.825 0.203 0.153 0.196 0.749
Bowling Green State Univ 1.449 0.000 0.000 0.042 0.068 0.123 0.000
University of Texas at Arlington 1.549 0.000 0.115 0.000 0.068 0.180 0.000
University of Alabama 1.242 0.000 0.092 0.043 0.048 0.158 0.000
Old Dominion University 1.316 0.000 0.648 0.072 0.056 0.167 0.064
University of Texas at Dallas 4.828 0.002 0.000 0.000 0.000 0.000 0.955
University of Missouri, Kansas City 1.664 1.045 0.973 0.000 0.145 0.376 0.338
University of Toledo 1.665 0.399 0.000 0.088 0.000 0.000 0.000
Northern Illinois University 0.957 0.083 0.086 0.037 0.035 0.122 0.148
Ball State University 0.468 0.000 1.218 0.083 0.000 0.092 0.000
Western Michigan University 1.237 0.611 0.000 0.036 0.057 0.060 0.000
University of Southern Mississippi 1.802 0.000 0.000 0.000 0.145 0.000 0.000
University of Northern Colorado 1.910 0.000 1.037 0.097 0.071 0.205 0.000
Illinois State University 1.211 0.614 0.000 0.043 0.075 0.000 0.000
University of South Dakota 2.045 2.315 0.605 0.244 0.000 0.186 0.000
Texas Woman's University 2.133 1.249 1.195 0.000 0.121 0.315 0.000
Indiana University of PA 1.289 0.000 1.381 0.150 0.000 0.051 0.000
Drexel University 3.497 0.000 0.000 0.125 0.170 0.090 0.000
Boston College 1.487 0.020 0.376 0.065 0.065 0.200 0.000
Southern Methodist University 1.464 0.000 1.268 0.186 0.000 0.237 0.715
University of Denver 2.468 0.000 0.250 0.141 0.000 0.353 0.000
Loyola University of Chicago 1.483 0.000 1.740 0.000 0.103 0.254 0.542
St John's University (Jamaica, NY) 1.393 0.000 1.140 0.080 0.049 0.116 0.000
Adelphi University 3.715 0.000 0.032 0.000 0.000 0.495 0.019
Andrews University 3.739 0.000 14.632 0.966 0.088 0.447 0.646
American University 1.961 0.278 0.414 0.074 0.074 0.236 0.000
Catholic University of America 1.268 2.368 1.319 0.000 0.000 0.538 1.312
Hofstra University 0.000 12.378 0.000 0.000 0.000 0.760 0.000
Fordham University 1.727 0.000 0.882 0.075 0.062 0.184 0.000
Florida Institute of Technology 2.301 2.934 2.158 0.000 0.065 0.723 0.000
Clark Atlanta University 3.158 0.000 2.728 0.374 0.000 0.190 0.000
George Mason University 0.000 0.000 10.526 0.000 0.054 0.208 0.000
University of Nevada-Reno 1.967 0.000 1.152 0.108 0.075 0.204 0.294
Colorado School of Mines 4.064 0.000 2.831 0.831 0.003 0.000 0.454
University of Maine 1.642 0.000 0.962 0.102 0.071 0.193 0.278
University of New Hampshire 1.209 0.000 0.766 0.086 0.064 0.073 0.338
University of Massachusetts Lowell 1.642 3.147 0.213 0.245 0.000 0.000 1.415
Michigan Technological University 2.408 0.000 1.057 0.172 0.182 0.000 0.000
University of Missouri, St Louis 3.289 0.000 0.000 0.089 0.168 0.042 0.000
Montana State University - Bozeman 1.685 0.000 0.966 0.103 0.091 0.000 0.510
University of Central Florida 1.274 0.000 1.247 0.000 0.097 0.000 0.000
University of Alabama in Huntsville 2.622 0.000 1.609 0.218 0.164 0.210 0.804
New Jersey Institute Technology 2.036 3.803 2.117 0.000 0.000 0.605 1.474
University of Southwestern Louisiana 1.558 0.018 1.044 0.101 0.049 0.000 0.000
University of Montana 2.014 0.277 0.718 0.114 0.133 0.000 0.000
North Dakota State University 0.000 0.000 34.483 0.049 0.000 0.000 3.692
Idaho State University 1.810 3.623 0.202 0.240 0.000 0.000 0.000
University of North Dakota 1.821 1.735 0.000 0.071 0.144 0.000 0.000
Florida Atlantic University 1.464 0.000 1.534 0.000 0.104 0.262 0.000
Portland State University 2.268 0.000 0.220 0.036 0.075 0.211 0.000
Wichita State University 1.800 0.000 0.979 0.101 0.076 0.213 0.033
Middle Tennessee State University 0.000 0.000 0.000 0.140 0.000 0.039 0.000
Wake Forest University 2.557 2.666 0.004 0.187 0.362 0.000 0.842
Clark University 4.413 4.377 0.047 0.312 0.597 0.000 1.325
Clarkson University 4.529 0.000 2.508 0.329 0.307 0.000 1.480
University of Tulsa 2.838 3.931 0.201 0.348 0.217 0.000 1.803
Baylor University 1.396 1.288 0.000 0.069 0.135 0.004 0.000
Worcester Polytechnic Institute 3.494 3.642 0.005 0.229 0.442 0.000 1.030
Texas Christian University 2.633 0.869 1.262 0.163 0.189 0.000 0.851
Stevens Institute of Technology 5.897 0.132 1.849 0.000 0.000 1.086 0.000
Duquesne University 2.215 1.348 1.057 0.193 0.000 0.363 0.000
Seton Hall University 2.865 0.000 0.017 0.219 0.000 0.000 8.692
University of Detroit Mercy 3.005 3.952 2.805 0.092 0.000 0.941 0.000
Appendix H – Benchmark values for tier 1 VRS model
Note: a single-value entry indicates the institution was regarded as efficient; the value reports how many times it appears in another institution's reference group.
DMU Benchmarks
1 University of California-Berkeley 8
2 U. of Illinois at Urbana-Champ. 6 (0.554) 7 (0.153) 10 (0.142) 16 (0.151)
3 University of Michigan 1
4 U. of California-San Diego 2
5 Univ. of California-Los Angeles 0
6 University of Wisconsin-Madison 3
7 University of Texas at Austin 1
8 University of Minnesota 1 (0.204) 3 (0.116) 25 (0.387) 38 (0.131) 40 (0.119) 54 (0.044)
9 U. of Washington - Seattle 1 (0.335) 6 (0.180) 22 (0.089) 25 (0.186) 38 (0.136) 40 (0.074)
10 Purdue University, Main Campus 3
11 U. of N. Carolina at Chapel Hill 12 (0.158) 25 (0.225) 38 (0.168) 42 (0.191) 51 (0.005) 54 (0.110) 56 (0.144)
12 Univ. of California-Santa Barbara 20
13 Pennsylvania State Univ. 0
14 University of California-Davis 1 (0.345) 12 (0.491) 25 (0.096) 41 (0.068)
15 University of Virginia 12 (0.327) 47 (0.349) 51 (0.195) 65 (0.128)
16 Ohio State University 1
17 Indiana U. at Bloomington 7
18 Georgia Institute of Technology 0
19 Univ. of Maryland at College Park 1 (0.309) 12 (0.224) 17 (0.103) 38 (0.226) 51 (0.120) 54 (0.019)
20 University of Arizona 1 (0.288) 12 (0.090) 17 (0.165) 22 (0.119) 25 (0.153) 38 (0.185)
21 SUNY at Stony Brook 4 (0.063) 12 (0.123) 38 (0.114) 47 (0.070) 51 (0.115) 65 (0.516)
22 Texas A&M University 7
23 University of Florida 1 (0.025) 6 (0.192) 22 (0.296) 25 (0.404) 38 (0.083)
24 Univ. of Massachusetts - Amherst 12 (0.527) 17 (0.141) 25 (0.163) 51 (0.115) 56 (0.054)
25 University of Colorado at Boulder 16
26 N. Carolina State Univ. at Raleigh 25 (0.802) 59 (0.118) 68 (0.080)
27 Iowa State University 12 (0.367) 17 (0.055) 22 (0.064) 25 (0.478) 51 (0.036)
28 Michigan State University 0
29 University of Iowa 10 (0.118) 12 (0.566) 51 (0.316)
30 University of Pittsburgh 10 (0.062) 12 (0.218) 22 (0.250) 25 (0.109) 47 (0.042) 51 (0.317)
31 University of Utah 12 (0.396) 25 (0.035) 38 (0.278) 56 (0.132) 60 (0.160)
32 Oregon State University 4 (0.175) 38 (0.109) 42 (0.039) 60 (0.485) 63 (0.000) 65 (0.192)
33 Virginia Polytechnic 12 (0.310) 17 (0.374) 25 (0.128) 51 (0.187)
34 University of Georgia 12 (0.324) 17 (0.059) 22 (0.201) 25 (0.270) 51 (0.146)
35 Colorado State University 12 (0.519) 22 (0.010) 25 (0.200) 38 (0.063) 51 (0.152) 56 (0.055)
36 University of Illinois at Chicago 12 (0.559) 38 (0.030) 47 (0.005) 51 (0.356) 65 (0.050)
37 Univ. of Alabama at Birmingham 25 (0.239) 56 (0.761)
38 Arizona State University Main 14
39 MIT 0
40 Harvard University 3
41 California Institute of Technology 9
42 Stanford University 6
43 University of Chicago 2
44 Princeton University 1 (0.007) 25 (0.442) 41 (0.513) 54 (0.039)
45 Yale University 2
46 Columbia University 1 (0.037) 40 (0.711) 45 (0.123) 54 (0.129)
47 University of Pennsylvania 4
48 Carnegie Mellon University 12 (0.054) 41 (0.190) 52 (0.593) 56 (0.037) 63 (0.125)
49 Duke University 38 (0.071) 41 (0.026) 42 (0.067) 43 (0.112) 51 (0.053) 65 (0.672)
50 Northwestern Univ 38 (0.029) 42 (0.317) 43 (0.032) 51 (0.243) 65 (0.378)
51 New York University 20
52 University of Rochester 2
53 Brown University 12 (0.101) 41 (0.154) 51 (0.005) 56 (0.124) 60 (0.346) 63 (0.270)
54 University of Southern California 5
55 Washington University 42 (0.085) 51 (0.067) 52 (0.486) 56 (0.138) 63 (0.224)
56 Emory University 10
57 Vanderbilt University 41 (0.092) 51 (0.154) 60 (0.094) 63 (0.118) 65 (0.542)
58 Case Western Reserve University 41 (0.244) 42 (0.066) 45 (0.379) 63 (0.311)
59 University of Delaware 1
60 U. of California-Santa Cruz 7
61 University of Oregon 12 (0.195) 17 (0.083) 51 (0.131) 60 (0.591)
62 University of California-Riverside 12 (0.044) 41 (0.181) 51 (0.015) 60 (0.553) 65 (0.207)
63 Brandeis University 8
64 Rice University 0
65 Rensselaer Polytechnic Institute 9
66 University of Notre Dame 12 (0.354) 41 (0.008) 51 (0.028) 56 (0.002) 63 (0.595) 65 (0.013)
67 College of William and Mary 38 (0.003) 56 (0.147) 60 (0.252) 63 (0.266) 68 (0.331)
68 Dartmouth College 2
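The reference-group weights reported in parentheses above are the peer weights (the lambdas) from the input-oriented variable-returns-to-scale (BCC) envelopment program. As a minimal illustration of how such scores and reference weights are obtained, the sketch below solves that linear program with SciPy's `linprog`; the data are small hypothetical inputs and outputs, not the study's actual figures.

```python
import numpy as np
from scipy.optimize import linprog

def dea_vrs(X, Y, o):
    """Input-oriented VRS (BCC) DEA for DMU o.

    X: (n, m) array of inputs; Y: (n, s) array of outputs.
    Returns (theta, lam): the efficiency score and the peer weights
    lambda, i.e. the kind of values reported in parentheses in the
    benchmark appendices.
    """
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lam_1, ..., lam_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    # input constraints:  sum_j lam_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o][:, None], X.T])
    # output constraints: sum_j lam_j * y_rj >= y_ro  ->  -Y.T lam <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(
        c,
        A_ub=np.vstack([A_in, A_out]),
        b_ub=np.r_[np.zeros(m), -Y[o]],
        A_eq=np.r_[0.0, np.ones(n)][None, :],  # VRS convexity: sum lam = 1
        b_eq=[1.0],
        bounds=[(None, None)] + [(0.0, None)] * n,
        method="highs",
    )
    return res.x[0], res.x[1:]

# hypothetical data: 4 DMUs, 2 inputs, 1 (equal) output
X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 8.0], [6.0, 6.0]])
Y = np.ones((4, 1))
theta, lam = dea_vrs(X, Y, 2)  # DMU 2 is dominated by DMU 0
```

For an efficient unit the program returns theta = 1 with the unit as its own reference; for an inefficient unit the positive lambda entries identify its benchmark peers, which is how the reference groups in these appendices are read.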
Appendix I – Benchmark values for tier 2 VRS model
Note: a single-value entry indicates the institution was regarded as efficient; the value reports how many times it appears in another institution's reference group.
DMU Benchmarks
1 SUNY at Buffalo 17 (0.319) 22 (0.265) 34 (0.206) 45 (0.112) 51 (0.062) 58 (0.037)
2 University of Kansas 0
3 Florida State University 0
4 University of Connecticut 17 (0.476) 22 (0.156) 28 (0.042) 45 (0.013) 51 (0.155) 58 (0.158)
5 University of Cincinnati 5
6 Temple University 0
7 University of Kentucky 1
8 University of Hawaii at Manoa 28 (0.072) 45 (0.415) 51 (0.047) 63 (0.184) 93 (0.282)
9 Louisiana State University 0
10 University of Nebraska at Lincoln 5 (0.115) 7 (0.393) 22 (0.463) 47 (0.029)
11 University of Missouri, Columbia 5 (0.177) 22 (0.618) 58 (0.160) 60 (0.045)
12 U. Tennessee at Knoxville 0
13 Virginia Commonwealth U. 22 (0.135) 34 (0.009) 45 (0.211) 51 (0.024) 58 (0.261) 63 (0.359)
14 Utah State University 1
15 West Virginia University 5 (0.073) 17 (0.054) 34 (0.537) 45 (0.250) 63 (0.086)
16 Georgetown University 22 (0.204) 45 (0.048) 58 (0.102) 76 (0.430) 80 (0.216)
17 Boston University 5
18 University of Miami 45 (0.255) 58 (0.058) 60 (0.137) 63 (0.144) 93 (0.075) 112 (0.330)
19 Tufts University 45 (0.043) 58 (0.095) 60 (0.099) 80 (0.045) 84 (0.127) 112 (0.590)
20 Tulane University 45 (0.138) 58 (0.528) 60 (0.167) 76 (0.055) 112 (0.111)
21 Howard University 58 (0.038) 60 (0.103) 80 (0.735) 98 (0.095) 112 (0.029)
22 University of Houston 16
23 U. of S. Carolina at Columbia 17 (0.558) 22 (0.121) 28 (0.141) 45 (0.164) 58 (0.017)
24 SUNY at Albany 45 (0.145) 51 (0.378) 58 (0.333) 60 (0.119) 63 (0.025)
25 University of Rhode Island 58 (0.131) 60 (0.175) 80 (0.189) 84 (0.246) 93 (0.132) 112 (0.128)
26 University of Vermont 58 (0.113) 60 (0.115) 80 (0.037) 98 (0.575) 107 (0.116) 112 (0.044)
27 Washington State University 5 (0.015) 22 (0.680) 58 (0.179) 60 (0.126)
28 University of South Florida 4
29 Kansas State University 22 (0.620) 58 (0.138) 60 (0.241) 98 (0.000)
30 University of Oklahoma, Norman 22 (0.108) 34 (0.378) 45 (0.209) 51 (0.110) 58 (0.141) 63 (0.055)
31 U. of Wisconsin-Milwaukee 22 (0.405) 45 (0.039) 60 (0.254) 63 (0.053) 76 (0.250)
32 University of Wyoming 58 (0.176) 60 (0.136) 80 (0.130) 84 (0.027) 93 (0.244) 112 (0.285)
33 Clemson University 45 (0.039) 51 (0.217) 58 (0.312) 60 (0.191) 63 (0.241)
34 Texas Tech University 5
35 University of Louisville 45 (0.110) 60 (0.259) 63 (0.033) 80 (0.005) 84 (0.165) 93 (0.241) 112 (0.188)
36 S. Illinois University-Carbondale 22 (0.137) 47 (0.277) 93 (0.338) 104 (0.248)
37 Auburn University 5 (0.250) 22 (0.652) 58 (0.012) 60 (0.086)
38 University of Arkansas 14 (0.416) 58 (0.148) 60 (0.176) 80 (0.134) 96 (0.085) 98 (0.041)
39 University of Mississippi 60 (0.159) 63 (0.048) 80 (0.016) 84 (0.021) 93 (0.217) 112 (0.538)
40 University of Idaho 58 (0.154) 60 (0.125) 80 (0.450) 93 (0.105) 107 (0.051) 112 (0.114)
41 Mississippi State University 58 (0.026) 60 (0.069) 80 (0.235) 84 (0.072) 93 (0.474) 112 (0.124)
42 Syracuse University 17 (0.196) 22 (0.120) 45 (0.221) 51 (0.103) 58 (0.216) 60 (0.144)
43 Lehigh University 58 (0.211) 60 (0.032) 80 (0.378) 98 (0.057) 112 (0.322)
44 St Louis University 58 (0.017) 77 (0.057) 80 (0.389) 98 (0.189) 112 (0.347)
45 George Washington University 35
46 Northeastern University 45 (0.089) 51 (0.182) 58 (0.279) 63 (0.380) 102 (0.070)
47 Brigham Young University 4
48 Northern Arizona University 60 (0.075) 63 (0.439) 80 (0.074) 84 (0.244) 112 (0.169)
49 SUNY at Binghamton 58 (0.027) 60 (0.207) 76 (0.186) 91 (0.250) 93 (0.188) 112 (0.143)
50 U. of N. Carolina at Greensboro 58 (0.009) 60 (0.039) 80 (0.628) 84 (0.113) 93 (0.089) 112 (0.122)
51 University of North TX 14
52 Georgia State University 28 (0.105) 45 (0.534) 51 (0.000) 63 (0.078) 93 (0.282)
53 University of Missouri, Rolla 58 (0.032) 60 (0.032) 80 (0.097) 93 (0.080) 107 (0.588) 112 (0.170)
54 Bowling Green State Univ 51 (0.092) 58 (0.146) 60 (0.604) 93 (0.159)
55 University of Texas at Arlington 45 (0.065) 51 (0.295) 58 (0.267) 93 (0.373)
56 University of Alabama 45 (0.019) 51 (0.064) 58 (0.188) 60 (0.414) 63 (0.315)
57 Old Dominion University 45 (0.208) 60 (0.078) 63 (0.092) 84 (0.094) 93 (0.243) 112 (0.285)
58 University of Texas at Dallas 52
59 U. of Missouri, Kansas City 45 (0.024) 58 (0.065) 68 (0.294) 80 (0.151) 93 (0.065) 112 (0.400)
60 University of Toledo 55
61 Northern Illinois University 22 (0.051) 34 (0.132) 45 (0.024) 51 (0.353) 58 (0.073) 60 (0.024) 63 (0.344)
62 Ball State University 22 (0.095) 47 (0.125) 60 (0.575) 104 (0.206)
63 Western Michigan University 23
64 University of Southern Mississippi 91 (0.584) 93 (0.416)
65 University of Northern Colorado 60 (0.138) 63 (0.048) 80 (0.680) 84 (0.027) 112 (0.107)
66 Illinois State University 47 (0.056) 60 (0.234) 63 (0.202) 93 (0.508)
67 University of South Dakota 60 (0.197) 76 (0.031) 77 (0.364) 80 (0.195) 112 (0.213)
68 Texas Woman's University 1
69 Indiana University of PA 80 (0.218) 96 (0.534) 98 (0.018) 104 (0.230)
70 Drexel University 58 (0.381) 60 (0.160) 91 (0.258) 112 (0.201)
71 Boston College 45 (0.019) 58 (0.126) 60 (0.244) 63 (0.066) 102 (0.400) 112 (0.145)
72 Southern Methodist University 45 (0.035) 58 (0.113) 60 (0.108) 80 (0.325) 112 (0.419)
73 University of Denver 45 (0.229) 58 (0.260) 60 (0.025) 112 (0.486)
74 Loyola University of Chicago 45 (0.266) 58 (0.016) 80 (0.542) 93 (0.037) 112 (0.139)
75 St John's University (Jamaica, NY) 60 (0.155) 63 (0.066) 80 (0.227) 84 (0.350) 104 (0.202)
76 Adelphi University 18
77 Andrews University 8
78 American University 45 (0.363) 58 (0.030) 60 (0.040) 63 (0.057) 76 (0.093) 112 (0.416)
79 Catholic University of America 45 (0.075) 58 (0.045) 76 (0.157) 80 (0.136) 112 (0.586)
80 Hofstra University 40
81 Fordham University 45 (0.581) 63 (0.027) 84 (0.027) 93 (0.024) 112 (0.340)
82 Florida Institute of Technology 45 (0.082) 76 (0.034) 77 (0.102) 80 (0.055) 112 (0.726)
83 Clark Atlanta University 1
84 George Mason University 15
85 University of Nevada-Reno 58 (0.124) 60 (0.110) 80 (0.262) 84 (0.094) 93 (0.089) 112 (0.321)
86 Colorado School of Mines 60 (0.029) 77 (0.128) 83 (0.084) 107 (0.131) 112 (0.628)
87 University of Maine 58 (0.075) 60 (0.237) 80 (0.125) 84 (0.060) 93 (0.088) 112 (0.415)
88 University of New Hampshire 58 (0.117) 60 (0.373) 80 (0.079) 93 (0.167) 96 (0.029) 107 (0.236)
89 University of Massachusetts Lowell 58 (0.031) 60 (0.152) 76 (0.514) 77 (0.033) 80 (0.270)
90 Michigan Technological University 60 (0.050) 93 (0.092) 96 (0.146) 107 (0.713)
91 University of Missouri, St Louis 3
92 Montana State U. - Bozeman 58 (0.106) 60 (0.037) 93 (0.118) 96 (0.450) 107 (0.288)
93 University of Central Florida 35
94 U. of Alabama in Huntsville 58 (0.075) 60 (0.031) 80 (0.133) 93 (0.049) 107 (0.058) 112 (0.654)
95 New Jersey Institute Technology 45 (0.023) 58 (0.054) 76 (0.066) 80 (0.444) 112 (0.413)
96 U. of Southwestern Louisiana 11
97 University of Montana 60 (0.247) 93 (0.194) 96 (0.006) 106 (0.544) 107 (0.009)
98 North Dakota State University 8
99 Idaho State University 60 (0.159) 76 (0.066) 77 (0.128) 80 (0.647)
100 University of North Dakota 76 (0.108) 93 (0.045) 96 (0.660) 106 (0.187)
101 Florida Atlantic University 45 (0.040) 80 (0.257) 93 (0.417) 112 (0.287)
102 Portland State University 2
103 Wichita State University 60 (0.157) 63 (0.016) 80 (0.347) 84 (0.084) 93 (0.071) 112 (0.325)
104 Middle Tennessee State University 4
105 Wake Forest University 58 (0.008) 60 (0.000) 76 (0.201) 80 (0.125) 93 (0.003) 96 (0.051) 106 (0.612)
106 Clark University 7
107 Clarkson University 9
108 University of Tulsa 58 (0.049) 60 (0.045) 76 (0.172) 77 (0.631) 80 (0.063) 106 (0.039)
109 Baylor University 60 (0.027) 76 (0.034) 93 (0.114) 96 (0.576) 106 (0.249)
110 Worcester Polytechnic Institute 58 (0.034) 60 (0.000) 76 (0.012) 80 (0.093) 93 (0.004) 96 (0.007) 106 (0.849)
111 Texas Christian University 58 (0.014) 80 (0.428) 93 (0.010) 96 (0.014) 98 (0.126) 106 (0.408)
112 Stevens Institute of Technology 39
113 Duquesne University 45 (0.009) 60 (0.083) 76 (0.279) 80 (0.261) 112 (0.369)
114 Seton Hall University 0
115 University of Detroit Mercy 45 (0.028) 76 (0.012) 77 (0.429) 80 (0.210) 112 (0.322)
Curriculum Vitae
Carlo S. Salerno
Education
The Pennsylvania State University, University Park, PA
Ph.D. in Higher Education, August 2002

Eastern Michigan University, Ypsilanti, MI
B.B.A. in Economics, April 1997

Work Experience
9/01 - Present: Research Associate, Center for Higher Education Policy Studies, Universiteit Twente, Netherlands

Research Experience
Research Assistant: US Dept. of Education funded study titled "Enhancing faculty contributions to learning productivity," Center for the Study of Higher Education, The Pennsylvania State University

Assessment Coordinator: NSF funded study, "The Biogeochemical Research Initiative for Education," Department of Geosciences, The Pennsylvania State University

Teaching Experience
Teaching Assistant: HI ED 562 Organizational Theory and Higher Education
Teaching Assistant: ECON 351 Money and Banking
Conference Presentations
Salerno, C.S. (2000, November). An empirical investigation of the structural components of institutional prestige. Paper presented at the annual meeting of the Association for the Study of Higher Education (ASHE), Sacramento, CA.

Wortman, T.I., & Salerno, C.S. (2000, November). International students: Toward a hierarchy of needs. ASHE International Forum, Sacramento, CA.

Salerno, C.S. (2000, April). Assessing the Biogeochemical Research Initiative for Education at the Pennsylvania State University. NSF Integrative Graduate Education and Research Training (IGERT) Assessment Conference, Arlington, VA.

Beach, A., Salerno, C.S., & Colbeck, C.L. (1999, April). How and why teachers change: A case study of research university faculty. ASHE, Montreal, Canada.

Colbeck, C.L., Salerno, C.S., & Moran, L.M. (1998, November). Mandated time: Direct and indirect influences of a state faculty workload mandate. ASHE, Miami, FL.