0361-0365/99/0400-0233$16.00/0 © 1999 Human Sciences Press, Inc.
Research in Higher Education, Vol. 40, No. 2, 1999
Research and Practice
LINKING PERFORMANCE TO FUNDING OUTCOMES AT THE STATE LEVEL FOR PUBLIC INSTITUTIONS OF HIGHER EDUCATION: Past, Present, and Future
Daniel T. Layzell
This article discusses the use of performance indicators and performance-based funding by states for their systems of higher education, drawing on the existing literature on the topic. The different types of mechanisms currently used by states for measuring institutional performance are described, as are recent state experiences with performance indicators, including their pitfalls and limitations. Additionally, a summary of the current status of performance-based funding applications is discussed, including some of the reported difficulties in implementing such funding models. Finally, the author explores the future implications of performance indicators and performance-based funding mechanisms for public higher education.
State-level policymakers (e.g., legislators, governors) have been monitoring
the performance of publicly funded institutions of higher education since the
late 1970s via a variety of accountability and other assessment mechanisms.
During the past few years, budgetary constraints paired with an ongoing inter-
est in accountability and programmatic outcomes by policymakers have also
brought about a renewed interest in the uses and implications of performance-
based budgeting, that is, allocating resources to institutions based on the ex-
tent to which they achieve previously established goals, objectives, and out-
comes (Gaither et al., 1994).
This procedure is by no means a new concept in public budgeting, either in
general or for higher education specifically. The federal government attempted
to implement various forms of performance-based budgeting in the late 1960s
and early 1970s (e.g., PPBS, zero-based budgeting), and the state of Tennessee
Daniel T. Layzell, Principal, MGT of America, Inc., 2425 Torreya Drive, Tallahassee, FL 32303;
dan@mgtamer.com.
has had an ongoing performance-based funding program for higher education in
place since 1978. However, a primary difference between performance-based
funding then and now is the comprehensive nature of some of the initiatives
currently under consideration by state policymakers and the seriousness with
which these initiatives are being considered. Indeed, whereas the state-level ac-
countability and assessment systems that were adopted in the 1970s and 1980s
were generally "voluntary" for institutions of higher education (see Aper et al.,
1990), they are increasingly becoming mandatory.
Because the essence of performance-based funding is the system of perfor-
mance indicators on which the allocation is based, this article begins with a
discussion of performance indicators before moving into a discussion of perfor-
mance-based funding systems. Thus, drawing on the existing literature on the
topic, the purpose of this article is to:
1. Discuss the different types of mechanisms currently in place for measuring
institutional performance.
2. Briefly describe recent state experiences with performance indicators for in-
stitutions of higher education.
3. Describe the pitfalls and limitations of performance indicator usage in higher
education.
4. Discuss the current status of performance-based funding applications at the
state level for public higher education, including the reported difficulties
experienced in their implementation.
5. Discuss the future implications of such mechanisms for public higher educa-
tion.
Early on in the writing of this article it occurred to me that perhaps a better
subtitle would be "The Good, the Bad, and the Ugly." The reason is that, as in
other areas of public policy, some systems of performance indicators are better
conceived and operationalized than others. Likewise, performance-based fund-
ing mechanisms have their positive and negative aspects. The point of these
observations is to alert the reader that the overriding goal of this article is not to
advocate a "best set" of performance indicators (there is no such animal) or the
establishment of performance-based funding systems, per se. Rather, it is this
author's goal to discuss the "state of the art" in these areas as well as to address
some of the strengths, weaknesses, success stories, and failures experienced by
states to date.
STATE-LEVEL APPROACHES TO PERFORMANCE INDICATOR SYSTEMS
As noted by Sizer, Spee, and Bormans (1992), there are five primary uses for
performance indicators:
1. Monitoring: Promoting the ongoing assessment of a program, institution, or
system.
2. Evaluation: Enabling the measurement of the attainment of goals and objec-
tives.
3. Dialogue: Providing a concrete basis for communicating with others about
abstract policy concepts and goals.
4. Rationalization: Promoting a rational and coherent policymaking process.
5. Resource allocation: Providing a rational basis for the allocation of resources.
In short, then, the establishment of performance indicators is ultimately based
on the desire for accountability. At the state level, accountability is operation-
alized through the setting of goals and objectives for higher education and the
periodic measurement of progress toward those goals and objectives using ac-
cepted indicators. While the setting of statewide goals and objectives for higher
education is an activity unique to every state, Ewell and Jones (1994) note four
approaches commonly used in measuring progress toward accountability goals
and objectives:
1. Inputs, processes, outcomes: a "production" process model aimed at measur-
ing the value added to departing students, perhaps through pre- and post-
assessments
2. Resource efficiency and effectiveness: an approach designed to measure the
efficient usage of key resources such as faculty, space, and equipment using
ratio analyses or similar techniques
3. State need and return on investment: an approach built on the assumption
that higher education is a strategic investment for states; it is designed to
measure the fit between higher education and state needs (e.g., workforce
preparation)
4. "Customer" need and return on investment: an approach built on the notion
of "consumerism" that is designed to measure the impact of higher educa-
tion in meeting individual needs (e.g., retention and graduation rates, em-
ployability of graduates)
Table 1 shows some of the actual performance indicators that have been used
by some states in each of these models. The four approaches are not indepen-
dent and the authors note that most states employing performance indicators
borrow from one or more of the other areas. Ewell and Jones further note
that, "The point for policy makers is less to choose among them [the four
approaches] as much as it is to ensure that those responsible for developing any
planned statewide indicator system recognize the need to be guided by an explicit
policy framework" (p. 13). In short, the policy goals and objectives should drive
the selection of performance indicators, and not the other way around.

TABLE 1. Examples of State-Level Performance Indicators by Type of Approach

Inputs, Processes, and Outcomes: Average ACT and SAT scores of entering
freshmen classes; first-year retention rates; six-year graduation rates; time to
degree; credits to degree; GRE scores; pass rates on licensure exams (e.g.,
CPA, nursing).

Resource Efficiency and Effectiveness: Student-faculty ratios; average faculty
contact hours; cost per credit hour by discipline and student level; instructional
space utilization by time of day (i.e., percent of classroom/lab space in use).

State Need and Return on Investment: Economic impact studies (i.e., the
multiplier effect of state investment in higher education); degrees granted per
100,000 working-age population by level; percent of state high school graduates
enrolling in state institutions of higher education.

Customer Need and Return on Investment: Overall level of employer
satisfaction with graduates; percent of graduates placed in a job related to their
field or in graduate/professional school one year after graduation; average
starting salaries of graduates by field; overall level of student/alumni
satisfaction with educational experience; pass rates on licensure exams.
RECENT STATE EXPERIENCES WITH PERFORMANCE INDICATORS
According to a recent survey by the State Higher Education Executive Offi-
cers (SHEEO), three-fourths of the states (38) currently report or use perfor-
mance indicators for higher education in some way (SHEEO, 1997). The pri-
mary uses of these performance indicators are for accountability reporting or
"consumer information." The survey also found that the most frequent recip-
ients of performance indicator reports were state legislatures and governors,
which is not surprising given that the reporting of performance measures is
frequently mandated through state statute.
Gaither (1997) cites several lessons learned from past state experiences with
performance indicators:
1. The number of performance indicators should be kept to a minimum (<20).
2. Performance indicators should not be developed in a top-down manner.
3. Both faculty and the state legislature need to be involved in the development
of the indicators for lasting success (i.e., gain "buy-in").
4. One indicator model cannot be applied to all types of institutions effectively
without diminishing diverse missions (i.e., one size does not fit all).
5. Policymakers tend to prefer quantitative as opposed to qualitative measure-
ment.
6. Indicators should have financial incentives for institutions.
7. Performance results must be communicated in a timely and understandable
fashion for policymakers and the public at large.
The relative importance of each of these "lessons" varies; however, it should be
noted that each one was probably learned painfully by at least one state during
its performance indicator implementation.
LIMITATIONS AND PITFALLS OF CURRENT PERFORMANCE INDICATOR SYSTEMS FOR HIGHER EDUCATION
The use of performance indicators has a number of limitations and pitfalls
that need to at least be considered, if not controlled by those seeking to develop
them. The January/February 1997 issue of Assessment Update was devoted
to the topic of performance indicators for higher education and included case
studies from five state systems of higher education: New York (SUNY), Wis-
consin (University of Wisconsin System), Texas (Texas A&M System), Colo-
rado, and Missouri. While the underlying situation in each state was somewhat
different, there were at least six common caveats noted throughout the case
studies, which are described in the following sections. These "lessons" are aug-
mented with other research findings where applicable.
Data limitations. A significant practical issue that impacts the development
and implementation of performance indicators is the availability of data. The
recent SHEEO survey found that data availability actually determined the per-
formance indicators used in three-fourths of the 38 states currently using such
measures. However, four-fifths of the states indicated that they were also en-
gaging in new data collection to support their system of performance indicators.
This practice has both positive and negative attributes. The ability to work
with existing data collection systems reduces the start-up time and cost in-
volved for implementing a performance indicator system. It also improves the
"comfort level" of those involved and thus the credibility of the process. On the
other hand, sticking only to those indicators for which data are currently avail-
able may not result in the most useful or appropriate set of performance indica-
tors. The significant proportion of states engaging in new data collection sug-
gests a recognition of this limitation.
More is not necessarily better: having too many indicators. A common pitfall
that many policymakers (and institutions) fall prey to is the desire to utilize
numerous performance indicators, thinking that this somehow provides a more com-
plete picture of institutional performance (Neal, 1995). This situation is typ-
ically a symptom of having no explicit framework for guiding the development
of performance indicators (see subsection below). It is also often the result of a
policy development process where there are many different opinions as to what
is important to measure.
The end result of having too many indicators is twofold. First, the more
indicators on which an institution is measured, the less important any one of
those indicators becomes and vice versa. This is due to the fact that the process
of minimizing the number of indicators is a prioritization process that ensures
that the resulting performance indicators are viewed as important. Second, as
more indicators and goals are added, the institution runs into the real danger of
conflicting goals and results. For example, if policymakers decide that institu-
tions should both show a high level of access and admission for freshmen (i.e.,
an open-door policy) as well as high graduation rates, they will likely find that
one is not necessarily compatible with the other.
Some critics of this "minimized" approach might offer the alternative view
that institutions of higher education are highly complex organizations and re-
quire an equally complex set of indicators to measure performance. While this
author would be the first to attest to the complexity of colleges and universities,
I would also argue that the equally important role served by performance indi-
cators as vehicles for public (read "political") accountability is ill-served by the
"laundry list" approach. The clarity of message provided by accountability/
performance reporting is greatly enhanced by having a small number of well-
defined and well-conceived indicators.
No policy framework for performance indicators. A critical pitfall that should
be avoided is the temptation to rush into developing performance indicators
without an explicit policy framework to guide that development. The first two
questions to ask are: (1) What does the state view as the most significant goals
to be achieved by higher education? and (2) How should they be measured?
Every policymaker has his or her own "favorite" indicators that may or may not
fit into these broader strategic goals. Absent these strategic goals, performance
indicators simply become a laundry list of odds and ends that result in bother-
some data collection and reporting activities with no end.
A related problem that can sometimes occur is where a forward-thinking
institution or system may have developed a cohesive set of indicators that relate
directly to a policy framework (e.g., a strategic plan), but it may not satisfy an
important external constituency such as a powerful legislator or the governor in
whole or in part. The decision for the institution at that point is to either modify
the plan to satisfy the external constituency (and perhaps alienate some internal
constituencies) or ignore these concerns and risk political (and financial) reper-
cussions. Or, the external constituency may unilaterally mandate its own set of
indicators, which could be significantly less desirable than the current situation.
Although these tensions are never completely avoidable, they can be great-
ly reduced by involving these external constituencies early on in the process. It
is this author's experience that for the most part, the aforementioned scenario is
the result of:
(a) poor communication between the institution/system and the external con-
stituency in question;
(b) the perception of institutional/system arrogance on the part of the external
constituency; or
(c) both of the above.
A strategy of early consultation with key legislators, the governor's office, and
other state policymakers will often help to mitigate, if not avoid, such prob-
lems.
To be or not to be: quantitative vs. qualitative indicators. In the development
phase of every system of performance indicators, there is a constant (and un-
avoidable) tension between the desire to keep things numeric and "measurable"
and the desire to address the less tangible but equally important aspects of the
institution. Performance indicator systems that focus solely on quantitative
measures are more comfortable and familiar for policymakers but may provide
a one-dimensional view of the organization. On the other hand, the develop-
ment of valid qualitative indicators of organizational performance requires a
rigor and discipline that is difficult to achieve. A well-balanced system of per-
formance indicators will include both types of measures.
Confusing "inputs," "processes," and "outcomes." One common pitfall of
performance indicator systems is the blurring of organizational inputs, pro-
cesses, and outcomes in the development of measures. For example, one popu-
lar "performance measure" used by many states is faculty instructional work-
load. Although this is an important measure of institutional resource utilization
and says something about an institution's internal budget process, it does not
say anything about the instructional outcome. On the other hand, institutional
outcomes do not occur in a vacuum; they are directly related to inputs and
processes. In short, a well-rounded set of performance indicators should explic-
itly incorporate aspects of organizational inputs, processes, and outcomes but
should be up front about the differences among the three.
Lack of broad "buy-in" up front. As alluded to earlier, the quickest way to
doom any new policy initiative, the establishment of performance indicators
included, is to develop them without sufficient input from the organization's
key stakeholders, both internally and externally (Boatright, 1995). While a
more consultative process requires more time, discipline, and patience, it is also
likely to be more successful and sustainable in the long run.
LINKING PERFORMANCE TO FUNDING: THE NEXT LOGICAL STEP
Performance-based funding is the logical extension of a system of perfor-
mance indicators and it directly ties together accountability, performance, and
funding levels. To understand the unique nature of such mechanisms, we first
need to consider the two predominant modes of resource allocation for public
institutions of higher education currently in use: incremental budgeting and for-
mula budgeting.1
Incremental budgeting is the traditional and dominant form of govern-
mental budgeting. This mode starts with an institution's prior year base budget
and allocates increases (or decreases) to that base according to a set of estab-
lished decision rules or budget guidelines. Examples of decision rules or guide-
lines include inflationary increases for supplies and utilities or cost-of-living
adjustments for employee salaries. Under this practice, policymakers typically
focus on "cost to continue" items (e.g., increases/decreases due to inflationary
increases or workload changes) and in some instances new program initiatives.
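To make the decision-rule mechanics concrete, the sketch below applies cost-of-living and inflationary adjustments to a prior-year base. The function name, budget shares, and dollar figures are hypothetical, invented for illustration rather than drawn from any state's actual guidelines.

```python
# Hypothetical sketch of incremental budgeting: adjust a prior-year base
# using simple decision rules. All shares and rates are invented examples.

def incremental_budget(base, cola, supply_inflation, salary_share, supply_share):
    """Apply per-category decision rules to a prior-year base budget."""
    salaries = base * salary_share      # e.g., employee salary pool
    supplies = base * supply_share      # e.g., supplies and utilities
    other = base - salaries - supplies  # categories held flat
    return salaries * (1 + cola) + supplies * (1 + supply_inflation) + other

# A $100M base: 3% COLA on a 70% salary share, 4% inflation on a 10%
# supplies share, remainder unchanged.
new_base = incremental_budget(100_000_000, 0.03, 0.04, 0.70, 0.10)
print(round(new_base))  # 102500000
```

The following year's request would then start from `new_base`, which is what makes the approach incremental.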
Formula budgeting, in contrast, refers to a mathematical basis for requesting
and/or allocating dollars to institutions of higher education using a set of cost
and staffing factors (e.g., cost per credit, student/faculty ratios) in relationship
to specified inputs (e.g., student credit hours, enrollment levels). Funding for-
mulas for public higher education have been used by states for more than a
half-century. They were originally envisioned as a means to distribute public
funds for higher education in a rational and equitable manner, and have evolved
over time into relatively complicated mechanisms with multiple purposes and
outcomes.
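A funding formula of this kind can be sketched as a matrix of cost factors applied to projected credit hours. The disciplines, levels, and dollar factors below are hypothetical placeholders, not any state's actual rates.

```python
# Hypothetical formula-budget sketch: cost-per-credit-hour factors, by
# discipline and student level, applied to projected student credit hours.

COST_PER_CREDIT_HOUR = {  # invented factors, $ per credit hour
    ("liberal_arts", "lower"): 110,
    ("liberal_arts", "upper"): 150,
    ("engineering", "lower"): 190,
    ("engineering", "upper"): 260,
}

def formula_request(projected_credit_hours):
    """Sum cost factor x projected hours over each discipline/level cell."""
    return sum(COST_PER_CREDIT_HOUR[cell] * hours
               for cell, hours in projected_credit_hours.items())

request = formula_request({
    ("liberal_arts", "lower"): 200_000,  # projected credit hours
    ("engineering", "upper"): 30_000,
})
print(request)  # 29800000
```

Because the request is driven entirely by projected inputs (credit hours, enrollment), the formula is needs-based rather than merit-based, which is the contrast the article draws with performance-based budgeting.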
Performance-based budgeting is different from the two primary modes of
allocating resources to public colleges and universities currently used in that
"resources flow only after the recipient of the funds can demonstrate that a
specified outcome has, in fact, been produced" (Jones, 1997). In short, incre-
mental and formula budgeting methods have a needs-based approach to re-
source allocation, whereas performance-based budgeting/funding has a more
merit-based approach to funding.
According to Carter (1994), a performance-based budget has the following
four characteristics:
1. It presents the major purpose for which funds are allocated and sets measur-
able objectives.
2. It reports on past performance and uses common cost classifications that
allow programs to be compared rather than focusing on line-item compari-
sons.
3. It offers management flexibility to reallocate money as needed and to pro-
vide rewards for achievement or penalties for failure.
4. It incorporates findings from periodic program evaluations that are support-
ed by credible information that can be independently audited.
Jones (1997) further notes that for a performance-based funding initiative to
be implemented, the following four elements must be in place:
1. The objectives to be attained, either outcomes or demonstration of good
practice.
2. The "metrics of success," the specific measures or definitions on which perfor-
mance is calculated.
3. The basis of reward, the benchmarks of success.
4. The method for resource allocation.
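These four elements can be thought of as a small specification record. The sketch below is one hypothetical way to encode them; the field values are illustrative and not taken from any actual program.

```python
# Hypothetical encoding of the four elements as a specification record.

from dataclasses import dataclass
from typing import Callable

@dataclass
class PerformanceFundingSpec:
    objective: str                       # outcome or good practice to attain
    metric: str                          # how performance is calculated
    benchmark: float                     # the basis of reward
    allocate: Callable[[float], float]   # method for resource allocation

# Illustrative instance: a $1M pool prorated by graduation-rate attainment.
spec = PerformanceFundingSpec(
    objective="six-year graduation",
    metric="share of entering freshmen graduating within six years",
    benchmark=0.60,
    allocate=lambda rate: 1_000_000 * min(rate / 0.60, 1.0),
)
print(round(spec.allocate(0.45)))  # 750000
```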
Performance-based funding initiatives for higher education have had some
success, most notably in Tennessee (Folger, 1988). Tennessee's incentive fund-
ing program and the additional funding received by institutions both directly
and indirectly as the result of this program have been held up as the prime
example of these benefits. Ashworth (1994) cautions, however, that fully imple-
menting performance-based funding for higher education has two fundamental
problems. First, "Uniform agreements on the values that would have to be
cranked into a formula do not exist, and data are not available within reason or
within tolerable costs to feed such a formula system" (1994, p. 11). Second, it
is conceivable that if all funding were distributed on a performance basis, there
could be significant redistribution of funds from year to year. This situation
would adversely affect the institution's ability to plan and execute, ultimately
defeating the purpose of performance budgeting.
CURRENT STATUS OF PERFORMANCE-BASED FUNDING EFFORTS FOR HIGHER EDUCATION
Despite these potential limitations, the concept of performance-based funding
for higher education is alive and well in several states. The recent SHEEO
(1997) study found that 22 states were using performance indicators as the basis
for allocating resources to their institutions of higher education, either directly
or indirectly.
One way to assess the impact of a policy initiative is to view it from a
longitudinal perspective. Table 2 presents a comparison of the status of
performance-based funding for higher education at the state level in 1994 and
1997, based on the results from two different studies. The 1994 data are from a
50-state survey conducted by Layzell and Caruthers (1995), and the 1997 data
are from the recent SHEEO survey. Although there were some differences
between the two surveys, they were minor in a methodological sense, given
that the SHEEO survey was itself based on the survey form developed by
Layzell and Caruthers.

TABLE 2. Status of State Performance-Based Funding Programs for Higher
Education: 1994 and 1997

Performance-based funding (P-BF) in place, 1994 (n = 9): Arizona, Arkansas,
Colorado, Connecticut, Florida, Minnesota, Missouri, Nebraska, Tennessee.

P-BF in place, 1997, direct link (n = 7): Arkansas, Colorado, Florida,
Kentucky, Missouri, Ohio, Tennessee.

P-BF in place, 1997, indirect link (n = 15): Arizona, Connecticut, Delaware,
Hawaii, Idaho, Illinois, Iowa, Kansas, Mississippi, Montana, N. Carolina,
Oregon, Rhode Island, Texas, Washington.

Plan to implement P-BF in future, 1994 (n = 10): Idaho, Kentucky,
Mississippi, New Mexico, N. Dakota, Ohio, Oregon, Pennsylvania, S. Carolina,
S. Dakota.

Plan to implement P-BF in future, 1997 (n = 10): Indiana, Louisiana, Maine,
Maryland, Minnesota, S. Carolina, S. Dakota, Utah, Virginia, Wyoming.

No current plans to implement P-BF, 1994 (n = 22): Alabama, Alaska,
Delaware, Georgia, Illinois, Indiana, Kansas, Louisiana, Maine, Maryland,
Massachusetts, Michigan, Nevada, New Jersey, New York, N. Carolina, Texas,
Utah, Vermont, Washington, Wisconsin, Wyoming.

No current plans to implement P-BF, 1997 (n = 6): California, New Jersey,
New Mexico, Oklahoma, W. Virginia, Wisconsin.

Note: There were 9 nonrespondents in the 1994 survey and 11 nonrespondents
in the 1997 survey.
Sources: Layzell and Caruthers, 1995; SHEEO, 1997.
Several points illustrated in this table bear discussing. First, the number of
states reporting the use of performance-based funding grew significantly be-
tween 1994 and 1997, from 9 to 22. Of the 22 states reporting the use of
performance indicators for allocating resources in 1997, seven reported a direct
linkage between performance measures and resource allocation, and 15
indicated an indirect linkage. There is a subtle
but significant difference between having a direct versus an indirect linkage in
performance-based funding. Having a direct linkage means that the attainment
or lack of attainment of an objective as measured by any performance indicator
has a direct impact on the resources provided to the institution. For example, if
an institution's funding were partially based on the attainment of a 60% gradua-
tion rate for all entering freshmen within six years, and the institution had only
a 45% graduation rate, this situation would result in a negative impact on the
resources provided to the institution.
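A minimal sketch of the direct-linkage arithmetic in this example, assuming a hypothetical "at-risk" share of the base budget prorated by benchmark attainment; the share and the proration rule are invented for illustration, and actual state rules vary.

```python
# Hypothetical direct-linkage sketch: a share of the base budget is "at
# risk" and prorated by attainment of a graduation-rate benchmark.

def direct_linkage_funding(base, at_risk_share, actual_rate, benchmark=0.60):
    """Prorate the at-risk share by benchmark attainment (capped at 100%)."""
    attainment = min(actual_rate / benchmark, 1.0)
    at_risk = base * at_risk_share
    return (base - at_risk) + at_risk * attainment

# A 45% rate against the 60% benchmark attains 75%, so a quarter of the
# at-risk pool (2% of a $50M base) is forgone.
funding = direct_linkage_funding(50_000_000, 0.02, 0.45)
print(round(funding))  # 49750000
```

Under an indirect linkage, by contrast, the same shortfall would be one input among several rather than an automatic deduction.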
On the other hand, having an indirect linkage in place provides for a much
more subjective atmosphere of interpretation in applying performance indica-
tors in the resource allocation process. That is, while performance indicators
play a role in allocating resources to institutions under such a model, there are
other factors considered as well. Thus, in our previous example of the six-year
graduation rate, the institution would not necessarily be assured of a reduction
in funding or other negative consequences.
As indicated in Table 2, the same number of states indicated an intent to
implement performance-based funding in the near future (within the next two
years) in both 1994 and 1997. Interestingly, five of the states that had indicated
an intent to implement such an initiative in 1994 actually had one in place by
the time of the 1997 survey. Also notable is the fact that five states that had
indicated no intent to implement performance-based funding for higher educa-
tion in 1994 indicated in the 1997 survey that they had plans to implement it
within two years. More notable is the fact that six states that had indicated no
intent to implement performance-based funding in 1994 reported the current,
though indirect, use of performance indicators in the resource allocation process
in the 1997 survey. These changes are indicative of the incremental change in attitudes
seen regarding performance-based funding for higher education.
The state of South Carolina presents an interesting case study regarding the
future of performance-based funding for higher education. In 1996, the South
Carolina legislature, prompted by a group of private business leaders in the
state, enacted the most significant performance-based funding program to date.
This program has been implemented by the state Commission on Higher Edu-
cation and is based on institutional performance across 37 specific performance
indicators that will be phased in through the year 2000. At that point 100% of
state funding for public higher education will be allocated based on institutional
performance on these indicators. The significance of this particular program
becomes quite clear when considering the fact that other state performance-
based funding initiatives allocate from less than 1% to 4% of state funding for
higher education (Burke and Serban, 1997).
REPORTED DIFFICULTIES RELATED TO PERFORMANCE-BASED FUNDING IMPLEMENTATION
A recent study by Burke and Serban (1997) of performance-based funding
for public higher education explored, among other things, the difficulties expe-
rienced by states related to the implementation of performance-based funding
initiatives. The study was based on a survey of various individuals involved in
the implementation process including governors, legislators, state-level higher
education officials, institutional governing boards, institutional administrators,
and faculty leaders.
The survey found that the three top factors consistently mentioned as major
difficulties in implementing performance-based funding by these individuals are
(1) the selection of performance indicators; (2) the selection of "success" crite-
ria (benchmarks of success); and (3) the small amount of funding allocated for
the initiative. Clearly, the first two are key factors to the success of any perfor-
mance-based funding initiative. The inability of a state to successfully over-
come these roadblocks would effectively scuttle the initiative. The third factor
reflects the need to provide a meaningful incentive for institutions to take per-
formance funding seriously. If the dollar value is too low, neither institutions
nor policymakers will likely find performance-based funding worth the effort.
PROSPECTS FOR THE FUTURE
The rapid growth in the number of states employing performance-based fund-
ing in recent years for their institutions of higher education and the large num-
ber of states planning to adopt it in the near future suggest that this practice will
at least be in place for the foreseeable future. The recent SHEEO survey results
suggest that states may choose mechanisms that more indirectly (as opposed to
directly) link performance and funding levels as a way to "test out" perfor-
mance indicators and also address technical implementation issues without
damaging institutional resources. Clearly, the development and utilization of
performance indicators will continue to be strongly tied to data availability and
the related technical capabilities of states. Also, the significant number of states
presently employing an "indirect linkage" between institutional performance
and funding levels suggests a desire to maintain an element of subjectivity and
stability in the state resource allocation process for higher education.
This author would offer four suggestions for those considering the develop-
ment of a performance-based funding initiative for higher education institu-
tions, which are drawn from the literature as well as actual experience:
1. Keep it simple. This advice ranges from using a minimum number of perfor-
mance indicators to the development of the actual resource allocation mecha-
nism. Unnecessary complexity only serves to hinder implementation and
communication to key individuals involved in the process.
2. Communicate and clarify often. Making sure that everyone involved under-
stands the goals and objectives of the development process and that each
step is clearly described will greatly facilitate the implementation of perfor-
mance-based funding.
3. Provide room for error and experimentation. Given that the development of
performance indicators is likely to result in unforeseen difficulties, the pro-
cess of developing a performance-based funding program should also pro-
vide room for error and experimentation at the beginning.
4. Learn from others' experiences, but develop your own program. The process
of learning from others' experiences, good and bad, with the development
and implementation of performance-based funding is an extremely useful
process in the development of one's own program. However, every state
should also ensure that its program reflects its own particular needs and
concerns.
The ultimate question regarding performance-based funding is, of course,
whether it will actually serve to improve institutional performance in the long
run. Given the start-up time involved in implementing such programs, it will
likely be at least five years before there is any clear evidence regarding
the success or failure of these initiatives. However, it should be noted that a
large proportion of the individuals surveyed in the Burke and Serban (1997)
study felt that one of the advantages of performance funding was the "potential
to improve higher education." This finding suggests that, at the very least, a
positive political environment for the development of performance-based fund-
ing models should be in place in the foreseeable future for those institutions,
systems, and states seriously considering this funding model.
NOTE
1. See Layzell and Caruthers (1995) for a more detailed description of incremental budgeting and
formula budgeting.
REFERENCES
Aper, J., Cuver, S. M., and Hinkle, D. E. (1990). Coming to terms with the accountability versus improvement debate of assessment. Higher Education 20(4): 471-483.
Assessment Update (1997). Special issue on performance indicators in higher education. January/February, 9(1).
Boatright, K. J. (1995). University of Wisconsin System accountability. In G. H. Gaither (ed.), Assessing Performance in an Age of Accountability. New Directions for Higher Education, vol. 91 (pp. 51-64). San Francisco: Jossey-Bass.
Burke, J. C., and Serban, A. M. (1997). Performance Funding and Budgeting for Public Higher Education: Current Status and Future Prospects. Albany, NY: Rockefeller Institute of Government.
Ewell, P. T., and Jones, D. (1994). Pointing the way: Indicators as policy tools in higher education. In Sandra Rupert (ed.), Charting Higher Education Accountability: A Sourcebook on State-Level Performance Indicators. Denver, CO: Education Commission of the States.
Folger, J. (1988). Designing state incentive programs that work. Paper presented at the National Center for Postsecondary Education Governance and Finance Conference on State Fiscal Incentives, Denver, CO, November.
Gaither, G. H. (1997). Performance indicator systems as instruments for accountability and assessment. Assessment Update 9(1): 1-2, 14-15.
Gaither, G. H., Nedwek, B. P., and Neal, J. E. (1994). Measuring Up: The Promises and Pitfalls of Performance Indicators in Higher Education. ASHE-ERIC Higher Education Reports, No. 5. Washington, DC: ASHE-ERIC Clearinghouse.
Jones, D. (1997). Perspectives on performance funding: Concepts, practices, and principles. Unpublished draft.
Layzell, D. T., and Caruthers, J. K. (1995). Performance funding at the state level: Trends and prospects. Paper presented at the 1995 Association for the Study of Higher Education Annual Meeting, Orlando, FL, November.
Neal, J. E. (1995). Overview of policy and practice: Differences and similarities in developing higher education accountability. In G. H. Gaither (ed.), Assessing Performance in an Age of Accountability. New Directions for Higher Education, vol. 91 (pp. 5-10). San Francisco: Jossey-Bass.
Sizer, J., Spee, A., and Bormans, R. (1992). The role of performance indicators in higher education. Higher Education 24(2): 133-156.
State Higher Education Executive Officers (1997). State survey on performance measures. Preliminary results.