
Essays, opinions, and professional judgments are welcome in this section of EP.

Forum articles speak to and about the philosophical and ethical dilemmas of our profession. Authors are invited to express their views to encourage constructive dialogue centered on issues. To keep the dialogue substantive, other articles motivated by previous Forum presentations should have independent titles and themes. Items labeled “Comments on . . .” and “Rejoinder to . . .” will not be published in Forum; such responses, and those not of article length or nature, are welcome and encouraged in the Letters section of EP. Standard citations and reference lists should acknowledge and identify earlier contributions and viewpoints. Manuscripts should not exceed 10 double-spaced typewritten pages in length.

Policy Evaluation for Policy Communities: Confronting the Utilization Problem

JERRY MITCHELL

A worry for evaluators of public programs and policies is the under-utilization of their work. One study after another has found that evaluations are sporadically used to improve policy outcomes, and in sundry instances, never even read (Goldstein et al., 1978; Nathan, 1988). The underlying problem, according to many scholars, is that evaluations have simply failed to be relevant to the major policy debates of our time (deLeon, 1988; Fischer, 1987).

Assuming relevance is the key issue, an obvious way to increase utilization is to link evaluation more closely to user needs and policy circumstances (Nathan, 1988). In fact, the literature has begun to make these linkages by examining how and why key policy actors (legislators, administrators, lobbyists, and others) use evaluation research (Chelimsky, 1987). Even though this burgeoning literature has been instructive, little consideration has been given to which methodological approaches are suitable for particular policy contexts.

The purpose of this article is to examine the relationship between alternative evaluation approaches and the concerns of policy communities. The assumption is that there are certain criteria and methods that are more relevant to particular types of policy discussions.

Jerry Mitchell • Department of Public Administration, Baruch College/CUNY, 17 Lexington Avenue, Box 336, New York, NY 10010

Evaluation Practice, Vol. 11, No. 2, 1990, pp. 109-114
ISSN: 0191-8036
Copyright © 1990 by JAI Press, Inc. All rights of reproduction in any form reserved.


From this perspective, an evaluation should have a greater opportunity to be utilized when the choice of a methodology is based on the interests of the policy actors involved.

THE AUDIENCE FOR EVALUATION

Although evaluations are typically solicited by specific enterprises, increasingly the users come from an assortment of organizations and institutions (Walker, 1981). Currently, “policy community” is the favored term for the collection of individuals whose primary concern is the formulation and implementation of a set of ends (goals or missions) and means (programs or organized activities) in an area of public policy (Kingdon, 1984).

Policy communities exist in many areas, for example, health, housing, defense, environmental protection, criminal justice, economic development, and higher education. The members typically include legislators, administrators, interest group leaders, lobbyists, and public or private researchers. For the purposes of this paper, an important activity within policy communities is the routine exchange of specialized information and ideas to reconcile misunderstandings, defuse controversial proposals, resolve substantial political differences, and improve outcomes (Walker, 1981).

A FRAMEWORK FOR POLICY EVALUATION

The challenge for utilization-focused evaluators is to choose criteria and methods appropriate to the information needs of particular policy communities. The framework in Figure 1 identifies four cells, each of which represents a type of community characterized by the agreement and/or disagreement over policy ends and means.

Analytical criteria and methods are placed within particular cells because each produces information relevant to the focus of that policy community. This does not mean that the model can categorize all of the debates or methodologies that might exist. Instead the framework depicts how evaluators may think about the information needs of different audiences. It is not meant to be the ideal typology for policy-making and evaluation research, but rather an indicator of how methodologies contribute to certain arguments.

The policy community in Cell 1 is characterized by substantial agreement over both ends and means. The major problems are well-defined, the goals of policy are understood, and most programs are refined and clearly related to the agreed on goals. Unlike other types of communities, the patterns of interaction are stable and the influence of external actors is minimal. The focus is on monitoring outcomes and maintaining consensus.

Effectiveness is a justifiable criterion for analyzing the concerns of this community. The emphasis is on how well the agreed on means are achieving the desired ends. Effectiveness works for this type of community because the goals of policy are clearly specified.


The function of evaluation is to reveal which programs are not fully effective so the community can correct matters, such as by adjusting the design of policy or by increasing expenditures.

The ordinary methodology for measuring effectiveness is experimental research. This method helps determine, through before and after assessments, the extent to which clearly defined program goals are accomplished. The data can indicate, for example, how programs should be modified to maximize resource allocations and encourage the attainment of collective goals.
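
As a purely illustrative sketch of this before-and-after logic (the outcome scores below are hypothetical and not drawn from any actual program), an evaluator might compare the gains of program participants against those of a comparison group:

    # A minimal sketch of a before/after (pre/post) effectiveness comparison.
    # All scores are hypothetical illustrations, not data from any real program.

    from statistics import mean

    # Outcome measures before and after the program for participants
    participants_pre = [52, 47, 60, 55, 49, 58]
    participants_post = [61, 55, 66, 63, 57, 64]

    # The same measures for a comparison group that did not receive the program
    comparison_pre = [51, 48, 59, 54, 50, 57]
    comparison_post = [53, 50, 60, 55, 51, 58]

    # Average gain in each group
    participant_gain = mean(participants_post) - mean(participants_pre)
    comparison_gain = mean(comparison_post) - mean(comparison_pre)

    # A crude estimate of program effect: how much more participants gained
    # than the comparison group (a difference of mean gains)
    estimated_effect = participant_gain - comparison_gain

    print(f"Participant gain: {participant_gain:.1f}")
    print(f"Comparison gain: {comparison_gain:.1f}")
    print(f"Estimated program effect: {estimated_effect:.1f}")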

The policy community in Cell 2 agrees about the ends of policy but not the means. Such disagreements may arise, for instance, when admirable goals are thwarted by poorly designed programs. At other times, the problem may be a simple one of administration. That is, the differences among policy actors result from administration that is “sloppy, unorganized, inadequately trained, poorly staffed, and badly managed” (Baier et al., 1986: 198). Thus, as the administration of a program improves, so should the policy community’s agreement over the program’s worth.

The analytical focus for this community is efficiency. This criterion is measured as the ratio of outcome units to dollars expended, or by assessing the amount of administrative input needed to produce a desired level of output. The emphasis is on finding less costly (financial and otherwise) alternatives or better administering current programs, procedures, or activities.

Cost-benefit analysis furnishes the type of information necessary to compare the tangible costs of alternative means to achieve the agreed on ends. This analysis simply involves calculating the ratio of monetary costs and benefits of alternative programs and activities. The policy community can use cost-benefit evaluations to justify the replication of efficient programs or the replacement of inefficient activities.
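
The ratio calculation described here can be sketched in a few lines; the program names and dollar figures below are hypothetical and serve only to show how a community might compare alternatives:

    # A minimal sketch of comparing alternative programs by benefit-cost ratio.
    # Program names and dollar figures are hypothetical illustrations only.

    programs = {
        "Job training": {"benefits": 1_200_000, "costs": 800_000},
        "Wage subsidy": {"benefits": 950_000, "costs": 900_000},
        "Placement service": {"benefits": 700_000, "costs": 350_000},
    }

    # Benefit-cost ratio: monetized benefits divided by monetized costs
    for name, figures in programs.items():
        ratio = figures["benefits"] / figures["costs"]
        print(f"{name}: benefit-cost ratio = {ratio:.2f}")

    # Rank alternatives from most to least efficient
    ranked = sorted(
        programs,
        key=lambda p: programs[p]["benefits"] / programs[p]["costs"],
        reverse=True,
    )
    print("Most efficient alternative:", ranked[0])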

Cell 1 (agreement on policy ends, agreement on policy means)
  Criterion: Effectiveness
  Method: Experimental research

Cell 2 (agreement on policy ends, disagreement on policy means)
  Criterion: Efficiency
  Methods: Cost-benefit analysis, implementation analysis

Cell 3 (disagreement on policy ends, agreement on policy means)
  Criterion: Responsiveness
  Method: Survey research

Cell 4 (disagreement on policy ends, disagreement on policy means)
  Criterion: Equity
  Method: Participatory research

Figure 1. Policy evaluation and policy communities.


Implementation analysis (or process evaluation) is another method useful in this cell. Drawing, for example, on secondary data and ethnographic interviews, it focuses on identifying the administrative and political factors that affect the efficient implementation of programs and activities, and the actions needed to improve implementation.

The policy community in Cell 3 agrees about the means of policy but disagrees about the ends. This community formulates and implements programs without a clear set of goals. Often, the incremental adoption of programs or services obscures the overall strategic purpose of policy. The result is inconsistent goals that frequently create unintended consequences for potentially effective programs and activities.

For the community that is debating the goals of policy, responsiveness is a relevant criterion. Responsiveness refers to the degree to which a policy meets the preferences of particular groups or interests. The task of the analyst is to rank (and perhaps weigh) the comparative instrumental or symbolic significance of policy goals and community values. When the important values of the community are known, the members can more effectively work to make the ends of policy consistent with each other and with the overall interests of the individual members.

Survey research using the self-administered questionnaire is an appropriate method for analyzing social and political priorities. The sample is targeted to represent the different members of the policy community. The survey questions might focus on the community’s support and opposition to existing and proposed policy goals or only on the value preferences of different groups. A policy community can use such survey data to identify areas where goal consensus may be optimized.
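
As a hypothetical sketch of how such survey data might be summarized (the goals, groups, and ratings below are invented for illustration), an evaluator could compute each goal's overall support and the disagreement across groups:

    # A minimal sketch of summarizing survey ratings of policy goals across
    # segments of a policy community. Goals, groups, and ratings are hypothetical.

    from statistics import mean, pstdev

    # Mean support ratings (1 = strongly oppose, 5 = strongly support) by group
    ratings = {
        "Expand access": {"legislators": 4.2, "administrators": 4.0, "lobbyists": 3.8},
        "Contain costs": {"legislators": 4.5, "administrators": 3.1, "lobbyists": 2.4},
        "Raise quality": {"legislators": 3.9, "administrators": 4.4, "lobbyists": 4.1},
    }

    for goal, by_group in ratings.items():
        values = list(by_group.values())
        support = mean(values)          # overall support for the goal
        disagreement = pstdev(values)   # spread across groups (higher = less consensus)
        print(f"{goal}: mean support {support:.2f}, disagreement {disagreement:.2f}")

    # Goals with high support and low disagreement are candidates for consensus-building.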

The policy community in Cell 4 is one where there is agreement over neither the ends nor the means of policy. Instead of a coherent set of preferences, there is only a loose collection of ideas. The only uniformity is the conviction that policy is needed. This type of policy community may emerge when there are new or unique public problems. The focal point here is on estimating how untested ends and means will mirror social values.

The function of policy evaluation is to evaluate the equity (or fairness) of different ends and means arrangements. The analyst attempts to understand whether the distribution of policy outcomes is more or less fair to all of those concerned. This requires the analyst to comprehend the complex values of the affected parties and to estimate the impact of different means and ends arrangements. Although there is not a well-developed methodology for this level of analysis, many scholars have begun to discuss the potential of participatory research for evaluating equity. Participatory research entails a formally orchestrated discourse among those who affect and are affected by a given policy. It goes beyond the organizational focus of action research by emphasizing the participation of a wider range of people. The research involves public hearings, meetings, and other forums with prescribed rules of evidence and argumentation. The legal/judicial system is the obvious conceptual precedent and environmental impact statements are the operating analog (deLeon, 1988).

With participatory research, policy actors bring their separate ideas and interests to the public discussion. Participation is the motivation for each to accept the goals and duties of the ideal order and to further them as the legitimate base for equitable public policy. In speaking to each other, the community members presuppose that they hold beliefs, intentionally follow norms, and give reasons for their norms, values, and policy choices. At different points of the discourse, the analyst interprets the areas of political consensus and judges what policy design is proper.

IMPLICATIONS

There are several implications of this model for practicing evaluators. First, such an approach clarifies what relevance means to policy evaluation. For example, cost-benefit analysis of farm policy may be less useful than survey data that reveals the instrumental value of currently inconsistent goals. Likewise, a cost-benefit analysis of an AIDS program may be less useful than participatory research regarding the overarching values of an emerging policy community.

Second, this model implies that evaluation exists to clarify complexity. The analysis should be capable of arriving at progressively clearer pictures of ends and means. By doing so, policy evaluation provides a standard by which to judge policy. The purpose is to produce and disseminate information that provides objective insight into the underlying controversies in policy formulation and implementation.

Third, this model suggests that evaluation should incorporate an assortment of criteria and methods applicable to distinct policy debates. The criteria used to evaluate policies must be expansive, including efficiency, effectiveness, responsiveness, and equity. Most important, there should be an array of qualitative and quantitative methodologies in the evaluator’s tool box.

Finally, the approach supports a new step in the evaluation process, namely, the identification and description of policy communities. At a minimum, this step should entail the collection of documentation about the policy environment and structured interviews with key informants. It is by considering the audience for information that evaluation begins to confront the problem of utilization.

REFERENCES

Baier, V.E., March, J.G., and Saetren, H. (1986). Implementation and ambiguity. Scandinavian Journal of Management Studies, 12, 197-212.

Chelimsky, E. (1987). Linking program evaluation to user needs. In D. Palumbo (Ed.), The Politics of Program Evaluation (pp. 100-145). Newbury Park, CA: Sage.

deLeon, P. (1988). Advice and Consent: The Development of the Policy Sciences. New York: Russell Sage Foundation.

Fischer, F. (1987). Policy expertise and the “New Class”: A critique of the neoconservative thesis. In F. Fischer (Ed.), Confronting Values in Policy Analysis (pp. 94-127). Newbury Park, CA: Sage.

Goldstein, M.S., Marcus, A., and Rausch, N. (1978). The nonutilization of evaluation research. Pacific Sociological Review, 21(1), 21-44.

Kingdon, J.W. (1984). Agendas, Alternatives, and Public Policies. Boston: Little, Brown.

Nathan, R.P. (1988). Social Science in Government: Uses and Misuses. New York: Basic Books.

Walker, J. (1981). The diffusion of knowledge, policy communities, and agenda setting: The relationship of knowledge and power. In J. Tropman, M.J. Dluhy, and R. Lind (Eds.), New Strategic Perspectives on Social Policy (pp. 75-96). New York: Pergamon.