The Challenge of Assessing Policy and Advocacy Activities:
Strategies for a Prospective Evaluation Approach

October 2005

Funded by and prepared for: The California Endowment


Researched and written by: Kendall Guthrie, Justin Louie, Tom David and Catherine Crystal Foster

    Blueprint Research & Design, Inc.


Foreword from The California Endowment

During the last few years, The California Endowment has placed increased focus on policy and advocacy work. We believe that for many societal challenges, creating policy change can have a more systemic and lasting effect on improving the health care of Californians than solely funding direct services. However, we have discovered that documenting the impact of the foundation's work in this arena is complex, and the unpredictable nature of the policy environment poses unique challenges to the classic program evaluation strategies honed by assessing direct services.

Therefore, The Endowment asked Blueprint Research & Design, Inc. to gather information from evaluation experts across the country and the foundation's own stakeholders to guide us in developing an understanding of the issues in policy change evaluation, and to recommend an approach to strengthening the foundation's public policy change evaluation practice. Via in-depth interviews, stakeholders identified their needs, opinions and priorities around strengthening the foundation's policy change evaluation practice. Interviews with evaluation experts provided advice on conducting effective policy change evaluation in a foundation setting, and identified tools and written reports that might inform the foundation in developing an evaluation strategy.

As we continue to refine our evaluation approach in order to better understand the effectiveness of our grant making, as well as to help grantees improve their own work, we want to share the early learnings from this "state of the field" report. We invite you to share any thoughts and reactions on this important issue.

Robert K. Ross, M.D.
President and CEO

    The California Endowment


Funders and nonprofits involved in policy and advocacy work need more effective ways to assess whether their efforts are making a meaningful difference.


Part I: Background and Overview

Introduction

Foundations and nonprofits today face a growing need to measure the outcomes of their efforts to improve society. For the majority of nonprofits that focus on direct services and their funders, measuring impact has moved beyond merely counting the number of people served toward understanding the relevant changes in service recipients' attitudes, awareness and behaviors. Documenting the connections between increased services funded by a foundation and increased social change has been relatively straightforward.

Funders and nonprofits involved in policy and advocacy work, however, often struggle over ways to assess whether their hard work made a meaningful difference. It can take years of building constituencies, educating legislators and forging alliances to actually change public policy. Therefore, foundations, which typically fund in increments of one to three years, have difficulty assessing the progress of grantees pursuing policy work. Moreover, foundations often want to know what difference their money made. Yet because the murky world of public policy formation involves many players, teasing out the unique impact of any one organization is almost impossible.

During the last few years, a handful of foundations and evaluation experts have been crafting new evaluation methodologies to address the challenges of policy change work. Many more foundations and nonprofits are eager for help. At this moment in time, what methods, theories and tools look promising for evaluating policy change efforts?

Originally prepared for The California Endowment, this paper presents a recommended approach to policy change evaluation. The foundation, one of the largest health funders in the country, is increasingly funding policy change and advocacy work. This research began as an effort to inform the foundation's evaluation planning around those issues. However, The Endowment's goals and challenges are not unique.




The analysis and proposed strategies addressed here should prove useful to a variety of foundations, evaluators and policy analysts with an interest in advocacy and policy change.

The paper is designed to outline an approach to policy change evaluation grounded in the experience of experts and foundation colleagues. (See Appendix A for the research methodology.) This paper first posits three key priorities in evaluating policy change work, drawn from interviews with grantees and staff from The California Endowment on their needs concerning policy change evaluation. It also discusses the challenges inherent in monitoring and assessing these types of grants. It then provides a brief description of the state of the practice in evaluating policy change and advocacy work within the foundation community. Because the practice of evaluating a foundation's impact on policy advocacy is fairly new, our review did not unearth a commonly practiced methodology or widely used tools. However, seven widely agreed upon principles of effective policy change evaluation did emerge.

Based upon these seven principles and the advice of national evaluation experts, we recommend a prospective approach to policy change evaluation. Part II of the paper is devoted to outlining this recommended approach. A prospective approach would enable a funder to define expectations from the start of a grant, monitor progress and provide feedback for grantees on their work as the program is operating.


Clarification of Terms

Goals vs. Outcomes: For the purposes of this paper, goals are defined as broader, long-term visions of change, whereas outcomes are the incremental changes that indicate progress toward the larger goal.

Benchmarks vs. Indicators: Benchmarks are the outcomes defined in advance as those the evaluation is monitoring to see change or progress. Indicators are the specific ways in which the benchmarks will be measured. For example, if one of the benchmarks for a project is broader media exposure for the issue, one of the indicators may be the number of earned media pieces in major media outlets.

Theory of Change vs. Logic Model: Theories of change broadly describe a project, linking its activities or strategies with desired outcomes, and in particular describing the how and why of those linkages. Logic models tend to be more graphical in nature and attempt to map out all elements of a program, though they tend not to describe how and why the pieces are interconnected. Logic models are more useful for describing an existing program than they are for shaping new programs.



This paper outlines key components of this prospective approach, namely setting up an evaluation framework and benchmarks prospectively, at the beginning of a grant, and then assessing progress and impact along the way. As well, this paper highlights the articles, manuals and expert advice most relevant and useful in developing this prospective approach.

Key steps in a prospective evaluation approach, as discussed in this paper, include:

Adopting a conceptual model for understanding the process of policy change;

Developing a theory about how and why planned activities lead to desired outcomes (often called a theory of change);

Selecting benchmarks to monitor progress; and

Measuring progress toward benchmarks and collecting data.

Assumed Priorities for Policy Change Evaluation

To guide the research on strategies for policy change evaluation, Blueprint began by interviewing The Endowment's grantees and staff to understand their needs, challenges and priorities regarding this issue. This stakeholder needs assessment identified three broad priorities for policy change evaluation.1 It is likely that these priorities are widely shared by other foundations, and our recommended approach to policy change evaluation assumes so.

1 Note that these institutional priorities are distinct from the specific needs and desires of individual stakeholders, as identified in the needs assessment.

1. Setting Grant Goals and Monitoring Progress: When a grant starts, nonprofits and foundation staff need to agree upon goals for the grant. The grantee agrees to be accountable for working towards these goals. Particularly for one-year grants, actual policy change is very rarely a reasonable grant goal, even though it is the ultimate goal for both the grantee and funder. Therefore, grantees and program staff need to identify how grantees expect their work to contribute to policy change over the long run, but also identify a grant goal that will demonstrate progress towards the ultimate goal, without holding the grantee directly accountable for changing the policy. Both want milestones that indicate actual change in the policy environment (e.g., increased legislators' awareness of an issue), not just an inventory of activities or process indicators (e.g., number of flyers mailed to legislators). This goal identification process is especially challenging for the grant-making staff and the direct service providers with less experience in policy change work. The challenge is heightened by some grantees' concern that they and the funder will not see eye-to-eye on what is both achievable and impressive enough to be grant-worthy.


2. Assessing Impact at the Grantee, Program and Foundation Level: Both funders and grantees want more effective ways to assess and document their impact. Grantees want to show their funders, donors, colleagues and constituents that they are making a difference. Funders want to document for their board members and the public that their grant dollars were spent wisely. Since actual public policy change takes a long time, both grantees and program officers want methods to assess their contributions as they move along, not just when (or if) a policy victory is achieved. This requires a more sophisticated way to think about impact in the policy arena. Both grantors and grantees need new ways to show how they are building up to or making the environment more conducive to their desired policy change, or laying the groundwork for successful implementation of the policy change, before the policy change actually occurs.

3. Improving Programs and Developing Knowledge about Effective Strategies: Documenting impact is valuable, especially to the funder and to a grantee that wants to replicate successes and raise more money. In addition to assessing a program's impact, information about what worked and did not work can be valuable to a wider audience. This information can help both foundations and their grantees improve the specific project and identify strategies that can help other advocates increase their effectiveness. Sharing these lessons with colleagues outside the foundation can also serve to strengthen the field of philanthropy by improving the knowledge base on which grant-making decisions are made. It can even help policymakers in the future, by documenting models of positive and effective relationships with advocacy organizations.

Challenges in Evaluating Policy and Advocacy Grants

For the last 20 years, the social scientific method has served as the dominant paradigm for evaluations in the foundation world. A social services program identifies its change goal and the three to four factors or inputs that will stimulate that change. The program, for the most part, assumes there is a direct, linear, causal relationship between the inputs and the desired effect. Evaluation is then about quantifying those factors and measuring the impact.

However, as foundations have directed an increasing proportion of their grant dollars to public policy and advocacy work, they are finding that their traditional evaluation approaches do not work well. The seven key challenges are:

Complexity
Role of External Forces
Time Frame
Shifting Strategies and Milestones
Attribution
Limitations on Lobbying
Grantee Engagement

The challenges in evaluating policy and advocacy have led both to foundations' reluctance to fund policy and advocacy work and to an absence of documented evaluations, even among those foundations that do fund advocacy. The emerging community of practice around policy change evaluation is trying to find strategies to address these challenges.

Complexity

Policy and advocacy grantees are trying to advance their goals in the complex and ever-changing policy arena. Achieving policy change usually involves much more than just getting legislators to change a single law. In fact, effective policy work often doesn't even involve changing a law. It involves building a constituency to promote the policy change, and to monitor its effective implementation once achieved. It often requires developing research to assess the impact of different policy proposals. It requires public awareness campaigns and closed-door deal-making with elected officials. Sometimes, policy success even means the absence of change, when, for example, advocates are trying to stop policy changes they feel would be detrimental. Therefore, the path to policy change is complex and iterative. In determining what actions will create change or how to assess progress, linear cause and effect models are not particularly helpful in trying to understand the nonlinear dynamics of the system.

Role of External Forces

Numerous players and dynamics outside the grantee organization, such as an opposition organization or the political and economic environment, heavily shape policy and advocacy work. Unlike social services grantees, policy grantees often face players actively working to thwart their efforts. The influence of these external forces is hard to predict and often impossible for a grantee to control. It is more appropriate to hold a direct service grantee accountable for achieving certain outcomes because they have much more control over the key factors that influence their ability to achieve those outcomes, primarily the quality of their program. In contrast, policy grantees can do everything within their control right (e.g., generate good media coverage, speak with legislators, provide good research) and still not achieve their goal because of factors such as a change in the party in control of the legislature or an economic downturn. In advocacy work, a grantee's efforts can build potential for a policy outcome, but alone cannot necessarily achieve a specific outcome. Change often happens when an opportunity window opens.2

2 See Kingdon, J. Agendas, Alternatives and Public Policies. HarperCollins: New York. 1995.

Time Frame

The length of time necessary to achieve interim and ultimate goals differs for policy change evaluations. Policy goals usually are long-term, beyond the horizon of a typical one-to-two-year grant. In most cases, there will be little or no actual public policy change in one year. It is therefore inappropriate to measure the effectiveness of most foundations' one-year policy and advocacy grants by the yardstick, "Did policy change?" A more useful question might be "How did the grantee's work improve the policy environment for this issue?" or "How successful was the grantee in taking the necessary steps toward the policy change?" The field needs new ways to think about appropriate outcomes that can mark progress toward the ultimate policy goal but are achievable within the time frame of a grant.



Shifting Strategies and Milestones

Since dynamics in the policy arena can change quickly, advocates must constantly adjust their strategies to fit the current environment. In policy change evaluation, it is challenging to choose short-term outcomes and benchmarks at the onset of a grant and measure progress against them, because the grantee's strategies may need to change radically over the course of the grant, even as their ultimate policy goal remains the same or similar. This is partly due to the influence of external forces and the complexity of policy advocacy, discussed above. It requires discipline, attention and a deep understanding of the issues and the policy environment to craft an approach and a set of goals that are flexible without being merely reactive or haphazard.

Attribution

It is also important to note that most policy work involves multiple players, often working in coalitions, and, in fact, requires multiple players hitting numerous leverage points. In this complex system, it is difficult to sort out the distinct effect of any individual player or any single activity. Isolating the distinct contributions of the individual funders who supported particular aspects of a campaign is even more difficult. In the policy advocacy world, there is no way to have a control group. Neither the grantee nor the funder can ever measure how the advocacy effort would have played out if the grantee wasn't there, if a specific activity didn't happen or if one element of funding were missing or allocated differently. Even when an evaluator has the luxury of interviewing all key stakeholders involved in a policy development, there's rarely, if ever, clear agreement on why something turned out the way it did. That can be frustrating to funders who want to pinpoint their influence.

Limitations on Lobbying

Misperceptions regarding the federal guidelines on nonprofit lobbying, as well as a foundation's desire to avoid any chance of regulatory trouble, create challenges when trying to assess a funder's role in advocacy work. Federal regulations constrain foundations' ability to lobby directly and to provide financial support for nonprofits to lobby on specific pieces of legislation (although the limitations are not as strict as many foundation staff and board members believe). Therefore, nonprofits ask foundations to support other aspects of their policy work, such as public education, research or general community organizing. Traditionally, foundations have wanted grantees to draw very clear lines between their activities related to lobbying for legislation and work supported by the foundation. However, because foundation-supported activities are usually part of an overall campaign for policy change, program officers find it hard to understand the value of the work they supported without understanding how it contributed to the larger policy change goals. In many cases, it makes more sense to evaluate the entire campaign or the work of an advocacy organization overall rather than the distinct component funded by a foundation.


However, many program officers expressed concern that if grantees submit reports that describe how their foundation-funded work fits together with lobbying efforts to achieve policy change, the foundation might get into legal trouble or alienate foundation leadership. Continuously educating the foundation and nonprofit communities about the nature of the relatively generous federal limits on lobbying would help mitigate this problem.

Grantee Engagement

Grantee participation, commitment, enthusiasm and capacity are essential to collecting quality evaluation data. Experienced advocates have many informal systems for gathering feedback about their tactics as they move along, usually based heavily on feedback from their peers, and sometimes based on quantitative measures they have developed internally. However, most policy and advocacy grantees have little experience or expertise in formal evaluation. They have limited existing staff or logistical capacity to collect data. The data an external evaluator might want them to collect may seem burdensome, inappropriate or difficult to obtain, even if they do collect some data in their day-to-day work. Moreover, for many nonprofits, their past experiences with evaluation have often been less than satisfying. Based on these experiences, there is a general concern in the nonprofit field about evaluation. Many feel that evaluations occur after a grant is over and come as an additional, often unfunded, burden that provides little direct benefit to the nonprofit. Often they never see a final report, and if they do, it doesn't contain information that helps them improve their work, the information comes too late to put it to use, or they have no funding or capacity to act on it.

The Current State of Foundation Practice on Policy Change Evaluation

Foundations have a long history of evaluating the impact of policy changes, as exemplified by the many studies on the impact of welfare reform. However, numerous experts noted that even three years ago few funders were trying to evaluate their efforts in creating or supporting grantees to create policy change. This lack of evaluation was due in part to the challenge of measuring policy advocacy, as discussed above, and, in many cases, a desire to keep their involvement in this more politically controversial arena low profile.

There is now an increased desire from a number of funders to engage in a more rigorous unpacking of all the goals and tactics that grantees have used and to examine what really was effective in achieving policy change. However, after conducting 20 interviews with evaluation experts and reviewing a broad sampling of relevant reports, we concur with the consensus from a recent Grantmakers in Health gathering of more than 50 funders of advocacy. They concluded that there is no particular methodology, set of metrics or tools to measure the efficacy of advocacy grant making in widespread use.


In fact, there is not yet a real field or community of practice in evaluation of policy advocacy.3 The field (such as it is) is composed of a number of individual practitioners and firms who are employing generic skills to design evaluations on a project-by-project basis. There appears to be little communication between practitioners, and there is no organized forum to share their ideas.

In trying to assess the state of policy change evaluation, we were particularly challenged by the lack of written documentation. Despite a substantial amount of practical literature on evaluation, very little of it directly addresses the specific challenge of measuring policy change and advocacy. Much of what is known about how best to approach this challenge exists only in the heads of a few practitioners and researchers. The written documentation lies largely in internal and often informal evaluation reports, which foundations do not commonly share.

It is noteworthy that three of the tools or manuals we identified as most valuable for foundations interested in evaluating policy advocacy were published in the last year. (These are the Making the Case tool from the Women's Funding Network; Investing in Change: A Funder's Guide to Supporting Advocacy from Alliance for Justice; and the Practical Guide to Documenting Influence and Leverage in Making Connections Communities from the Annie E. Casey Foundation.)

While practitioners do not share a commonly practiced methodology, there did emerge from our review at least seven widely held principles for engaging in effective policy change evaluation. The prospective approach outlined in this paper is built on these principles.


3 McKinsey and Company describes a community of practice as "a group of professionals, informally bound to one another through exposure to a common class of problems, common pursuit of solutions, and thereby themselves embodying a store of knowledge." http://www.ichnet.org/glossary.htm.


    Guiding Principles for Policy Change Evaluation

1. Expand the perception of policy work beyond state and federal legislative arenas.
Policy can be set through administrative and regulatory action by the executive branch and its agencies, as well as by the judicial branch. Moreover, some of the most important policy occurs at the local and regional levels. Significant policy opportunities also occur during the implementation stage and in the monitoring and enforcement of the law or regulation.

2. Build an evaluation framework around a theory about how a group's activities are expected to lead to its long-term outcomes.
Often called a theory of change, this process forces clarity of thinking between funders and grantees. It also provides a common language and consensus on outcomes and activities in a multi-organization initiative.

3. Focus monitoring and impact assessment for most grantees and initiatives on the steps that lay the groundwork and contribute to the policy change being sought.
Changing policy requires a range of activities, including constituency and coalition building, research, policymaker education, media advocacy and public information campaigns. Each activity contributes to the overall goal of advancing a particular policy. Outcomes should be developed that are related to the activity's contribution and indicate progress towards the policy goal.

4. Include outcomes that involve building grantee capacity to become more effective advocates.
These should be in addition to outcomes that indicate interim progress. These capacity improvements, such as relationship building, create lasting impacts that will improve the grantee's effectiveness in future policy and advocacy projects, even when a grantee or initiative fails to change the target policy.

5. Focus on the foundation's and grantees' contribution, not attribution.
Given the multiple, interrelated factors that influence the policy process and the many players in the system, it is more productive to focus a foundation's evaluation on developing an analysis of meaningful contribution to changes in the policy environment rather than trying to distinguish changes that can be directly attributed to a single foundation or organization.

6. Emphasize organizational learning as the overarching goal of evaluation for both the grantee and the foundation.
View monitoring and impact assessment as strategies to support learning rather than to judge a grantee. In an arena where achieving the ultimate goal may rarely happen within the grant time frame, and public failures are more frequent, emphasizing learning should encourage greater grantee frankness. It should also promote evaluation strategies and benchmarks that generate information valuable to both the grantee and funder, increasing grantee buy-in and participation. Finally, the foundation will be able to document more frequent wins in learning than in achieving policy change.

7. Build grantee capacity to conduct self-evaluation.
Most advocacy organizations have minimal experience or skills in more formal evaluation methods. To date, most have relied primarily on informal feedback from their extensive network of peers to judge their effectiveness and refine their strategies. To increase their use of formal evaluation processes, grantees will need training or technical assistance as well as additional staff time to document what actually happened. This additional work should help the nonprofit become more reflective about its own work, as well as provide more useful information about change to funders.


Prospective evaluation can help a funder monitor progress of a grant and enable the grantee to use evaluation information to make improvements in the project.


Part II: A Prospective Approach to Evaluation

In brief, prospective evaluation involves four steps:

1. Agree upon a conceptual model for the policy process under consideration.

2. Articulate a theory about how and why the activities of a given grantee, initiative or foundation are expected to lead to the ultimate policy change goal (often called a theory of change).

3. Use the theory of change as a framework to define measurable benchmarks and indicators for assessing both progress towards desired policy change and building organizational capacity for advocacy in general.

4. Collect data on benchmarks to monitor progress and feed the data to grantees and foundation staff, who can use the information to refine their efforts.

Finally, at the end of the project, all of the progress should be reviewed to assess overall impact and lessons learned.
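The report describes these steps in prose only; as a purely illustrative aid, the sketch below shows one way the pieces of a prospective evaluation (a theory of change, benchmarks, indicators and ongoing data collection) might be represented by a funder or evaluator who tracks grants in simple software. All names and fields here are hypothetical assumptions, not something the report or The California Endowment prescribes.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Indicator:
    """A specific, measurable way to observe a benchmark (e.g., earned media pieces)."""
    name: str
    observations: List[float] = field(default_factory=list)  # data collected over time (Step 4)

@dataclass
class Benchmark:
    """An interim outcome, agreed upon at the start of the grant, that marks progress."""
    description: str
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class TheoryOfChange:
    """Links a grantee's strategies to the long-term policy goal (Step 2)."""
    long_term_goal: str
    strategies: List[str]
    benchmarks: List[Benchmark] = field(default_factory=list)  # defined prospectively (Step 3)

def record(indicator: Indicator, value: float) -> None:
    """Step 4: collect data on an indicator while the grant is under way."""
    indicator.observations.append(value)

def progress_report(theory: TheoryOfChange) -> str:
    """Summarize monitoring data so grantees and program staff can refine the work."""
    lines = [f"Goal: {theory.long_term_goal}"]
    for bench in theory.benchmarks:
        for ind in bench.indicators:
            latest = ind.observations[-1] if ind.observations else "no data yet"
            lines.append(f"  {bench.description} / {ind.name}: {latest}")
    return "\n".join(lines)

# Illustrative use, borrowing the report's media-exposure example:
media = Benchmark("Broader media exposure for the issue",
                  [Indicator("Earned media pieces in major outlets")])
toc = TheoryOfChange("Expand children's health insurance coverage",
                     ["media advocacy", "policymaker education"],
                     [media])
record(media.indicators[0], 4)  # e.g., four stories placed this quarter
print(progress_report(toc))
```

The point of the sketch is only that benchmarks and indicators are defined up front (Steps 2 and 3) and then updated and reported repeatedly while the grant is running (Step 4), rather than reconstructed after the fact.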

Benefits of a Prospective Approach

A prospective approach can be most effective in policy change evaluation for several reasons. It will help the foundation's staff and its grantees:

Define expectations from the start. Both grantees and program officers expressed a strong need for guidance in developing more clear expectations about outcomes for policy and advocacy projects from the beginning of a relationship. Prospective evaluation methods set up expected outcomes at the beginning of the project, allowing stakeholders to clarify expectations upfront.

Monitor progress. Program officers stated they want more effective strategies for monitoring the progress of policy advocacy grantees. The prospective approach would help program officers tell if a project is on or off track and enable them to describe to the foundation board what progress has been made, short of actual policy change, which may be years off. A prospective evaluation can deal with the challenge of the long timeframe required for policy change. By showing how short-term outcomes are leading to long-term policy change, a funder can document some impact from its grant making long before actual policy change occurs.

Provide feedback to refine projects. A key priority for grantees and program officers overseeing multiple grants is gathering information to refine their projects and make mid-course corrections in their strategies. As one program officer states, "We wanted evaluation results as they happen, not just at the end. That's what we need to refine our program." A prospective evaluation is designed to provide insights as a project progresses, when it can be used to strengthen a program, rather than just at the end of a grant.

Promote grantee engagement in the evaluation process. Evaluations that are able to provide timely findings are more useful to the grantee, and consequently grantees are more eager to help provide good data.


Many grantees mentioned that retrospective evaluations, which often provide information after a program is completed, are rarely useful to a grantee and seem as though they are only for a funder's benefit. Said one, "External evaluations tend to take a long time, and for a growing organization, by the time the evaluation was done, we had moved on."

Support program planning and implementation. The conceptual models and benchmarks for prospective evaluation also help foundation staff and grantees clarify their thinking about how their activities work together to achieve impact. This can increase the program's effectiveness before evaluation data even come in. Explained one grantee, "What I am interested in is evaluation for the sake of strategic development, to inform planning. This is much more valuable, but more difficult, than traditional evaluation that focuses on reporting what happened." In fact, one foundation program officer said the foundation always viewed its efforts to develop evaluation frameworks as supporting both planning and evaluation.

Increase transparency. By setting goals up front and making them transparent to all parties, prospective evaluation can be more thorough and is less likely to produce a self-serving evaluation. In retrospective evaluation, grantees and funders have a natural tendency to de-emphasize or overlook what went wrong and to put a good spin on past events. In fact, participants may not even remember the ways in which goals changed over time.

Promote a learning culture. Prospective evaluation can demonstrate to grantees how systematic and well-defined feedback can help them improve their practice. Often, it is only through experience that grantees are able to see the usefulness of evaluation feedback. As grantees begin to see the impact of this knowledge, they may make shifts to instill evaluation practice into their organizational culture.

The next section outlines the four basic steps in conducting a prospective evaluation, and includes practical advice with each step, drawn from the interviews with evaluators, foundation staff and grantees.

Step 1: Develop a Conceptual Model for the Policy Process

Prospective evaluation begins with all parties (advocates, funders and outside evaluators, if used) developing a clear understanding of the environment in which advocates are going to work, the change that the grantees and the funder want to make in the environment, and how they intend to make that change happen. Explained one evaluation expert, "They need to describe the problem, they need to show data on the problem, and they need to show how they're going to address the problem.... They need to identify the stakeholders they're going to target and why, and the contextual features that may challenge them and how they're going to address those."


This knowledge provides the structure for both planning and evaluation. Evaluation experts and several foundation staff members felt that too many policy projects and their evaluations are hampered by inadequate time devoted to understanding the environment in which people will work and nebulous thinking about the strategies required to make change happen. "When people undertake these initiatives, they start out with rather fuzzy ideas.... It may be the first issue is not the evaluation but clarifying upfront what people are proposing to do," one expert notes. Said another, "Time and again, there's a poor understanding of three things: the context you're working in, how change happens and is sustained over time, and power dynamics. Absent that information, poor planning results."

Stakeholders can begin to understand their policy environment through a process of research and information gathering. For a small project, this may involve something as simple as gauging community interest and reviewing previous advocacy efforts on this issue. For larger projects or initiatives, more detailed research is probably appropriate. Depending on the project and the players involved, the foundation may even want to commission research papers, literature reviews, focus groups or opinion polls. Furthermore, for those less experienced in the policy arena, research can help them comprehend the complexities involved in typical advocacy, policy change and social change processes. For more experienced advocates, who may already have a comprehensive understanding of their policy environment and the latest research, it can be very useful at this point to articulate and re-examine the assumptions they hold about the changes that are needed and approaches to change that will work best in the current environment.

Understanding the Policy, Social Change and Advocacy Processes

For those foundation staff and grantees less experienced in the policy arena, understanding the environment involves first understanding the basic elements or evolution of advocacy, policy change and social change processes. Funders and grantees with less experience in policy and advocacy work often struggle with understanding the process and where they can play a part. Typically, their definition of policy and advocacy work is too narrow, focusing only on adoption or passage of new policies and overlooking the work building up to policy change and the implementation of the policy once passed or adopted. And as one foundation staff member stated, it is important to get people "not to focus just on legislative solutions, but to identify local and regional advocacy opportunities, identify potential partners and allies." Or as another puts it, "We need to have a broad focus on the continuum of activities that go into the policy world."


Bringing grantees that have traditionally engaged in service provision into policy and advocacy work poses additional challenges. One foundation staff member states, "There are opportunities for policy and advocacy in every grant. [Program officers working with service grantees] couldn't always see that... and then they need help in getting their grantees to see that." Learning the processes of advocacy, policy change and social change helps program officers and grantees understand the potential opportunities available to them to engage in policy and advocacy work. Furthermore, once grantees that have traditionally done more service-oriented work get into policy work, they note that they often struggle with how to characterize their new work within a policy and advocacy framework. In the past, they have laid out their service work in a deliverables framework, specifically stating what will be accomplished by when. This system often does not work for policy activities.

Conceptual Frameworks

Based upon years of experience, researchers and practitioners have developed conceptual models for understanding the complex process involved in advocacy, policy change and social change. These models are simply ways of systematizing and organizing the typical processes and complex details of advocacy, policy change and social change, in the same way an organizational chart systematizes the internal relationships in a business or a city map lays out how the streets are aligned. These models are useful because they:

Provide a common language among all parties and expand the concept of policy work beyond a narrow focus on legislative change.

Assist program planners in identifying the numerous levers they can use to effect change. In particular, they help those with less understanding of the policy world (direct service grantees and some foundation program officers) grasp the complexity of the world they're working in and understand how they might best contribute to policy change.

Help planners understand the typical pathways and timeframes of change to set realistic expectations.

Determine the process and goals for evaluation, including helping brainstorm potential strategies, activities, outcomes and benchmarks.

Facilitate comparison across grants and initiatives and sum up impact across grants at the cluster or program level. Frameworks can be useful tools for understanding and aligning different projects when evaluating at a cluster, program or foundation level.

The three conceptual models presented below each describe one particular aspect of the universe of social change and policy change. As seen in Figure 1, the outside square represents social change, which encompasses both policy change and advocacy, but is also broader than either of those. The middle square represents policy change, of which the inner square, advocacy, is a component.

[Figure 1. Model of Framework Relationships: three nested squares, with Advocacy inside Policy Change inside Social Change.]


Because each of these models has a different focus, thinking through more than one of them can provide a broader understanding of the environment.

These three conceptual models are highlighted because they have the following characteristics:

Each describes a different aspect of the social change/policy change process.

They all provide a complex understanding of the world of policy change, getting beyond the simple "lobby for bill A" notion of policy work.

They are simple enough to be understandable to a wide range of players.

Social Change Model

The broadest level is a social change schema, which depicts how individuals and groups create large-scale change in society.4 Policy change is viewed as just one piece of a larger strategy for change. Other components of social change include changes in the private sector, expansion and investment in civil society and democracy (e.g., building infrastructure groups, involving and educating the public), and creating real material change in individual lives.

4 Based on the IDR framework, outlined in Chapman, J. and Wameyo, B., Monitoring and Evaluating Advocacy: A Scoping Study. 2001.

Policy Change Model

The policy change model focuses on the policy arena and presents the process through which ideas are taken up, weighed and decided upon in this arena.5 It outlines a policy environment that doesn't operate in a linear fashion and that often involves much time preparing for a short window of opportunity for policy change. The basic process for policy change includes:

1. Setting the agenda for what issues are to be discussed;

2. Specifying alternatives from which a policy choice is to be made;

3. Making an authoritative choice among those specified alternatives, as in a legislative vote or executive decision; and

4. Implementing the decision.

5 Based on Kingdon, J. Agendas, Alternatives and Public Policies. HarperCollins: New York. 1995.

But the linearity of these four steps belies the fact that at any point in time, there are three streams of processes occurring simultaneously and independently: recognition and definition of problems, creation of policy proposals and shifts in politics.

Problems are just conditions in the world that attain prominence and are thus defined as problems. This recognition and definition occurs through the development and monitoring of indicators (e.g., rates of uninsured Californians); occurrence of a focusing event (a disaster, crisis, personal experience or powerful symbol) that draws attention to some conditions more than others; or feedback (e.g., complaints from constituents).







Creation of policy proposals happens both inside and outside government, and a proposal is likely to survive and endure in the policy stream if it seems logical, rational and feasible, and if it looks like it can attract political support.

Shifts in politics occur because of things like political turnover, campaigns by interest groups, developments in the economic environment or changes in the mood of the population.

It is when these three streams come together, in what is termed a policy window, that policy change happens.

The separate streams of problems, policies and politics each have lives of their own. Problems are recognized and defined according to processes that are different from the ways policies are developed or political events unfold. Policy proposals are developed according to their own incentives and selection criteria, whether or not they are solutions to problems or responsive to political considerations. Political events flow along on their own schedule and according to their own rules, whether or not they are related to problems or proposals.

But there comes a time when the three streams are joined. A pressing problem demands attention, for instance, and a policy proposal is coupled to the problem as its solution. Or an event in the political stream, such as a change of administration, calls for different directions. At that point, proposals that fit with that political event, such as initiatives that fit with a new administration's philosophy, come to the fore and are coupled with the ripe political climate. Similarly, problems that fit are highlighted, and others are neglected.6

6 Kingdon, J. Agendas, Alternatives and Public Policies, p. 201.

Thus, the policy process is at its core an irrational and nonlinear process, where advocates' work is often building their own capacity to move quickly when a policy window opens. This way of thinking about the policy process fits well with how several program officers described how their foundation and its grantees engage in policy work. Said one, "While we are trying to make things happen in the policy arena, right now most often we are trying to respond to those opportunities that arise. Our challenge: how do we get grantee organizations prepared so that they can engage when an opportunity arises? This is different than going straight to advocacy or legal organizations and giving them concrete objectives, like changing a certain policy or enforcing a regulation."

One specific challenge highlighted by this model is that advocates must identify and enter into not just any window, but the right window. They are challenged to stay true to their own goals and act on what the research identifies as the need, rather than being reactive and opportunistic in a way that pulls the work away from the organization's mission.
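The coupling logic of this model can be summarized in a small, purely illustrative sketch; the class and field names below are assumptions made for this illustration and do not come from Kingdon or the report. Each stream evolves on its own, and a policy window is open only when all three happen to align, which is why so much advocacy work is really readiness work.

```python
from dataclasses import dataclass

@dataclass
class PolicyStreams:
    """Kingdon-style streams, each with a life of its own."""
    problem_recognized: bool    # indicators, a focusing event or feedback define a problem
    proposal_ready: bool        # a feasible proposal has survived in the policy stream
    politics_favorable: bool    # turnover, campaigns or the public mood create receptivity

def window_open(streams: PolicyStreams) -> bool:
    """A policy window opens only when the three independent streams converge."""
    return (streams.problem_recognized
            and streams.proposal_ready
            and streams.politics_favorable)

# Advocates largely build capacity so they can act quickly when this becomes True.
print(window_open(PolicyStreams(True, True, False)))  # False: politics not yet aligned
```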



Step 2: Develop a Theory of Change

At its heart, a theory of change lays out what specific changes the group wants to see in the world, and how and why a group expects its actions to lead to those changes.

...achieving those goals. Evaluation experts urge those interested in the theory of change idea not to get too caught up in the terminology and focus more on the basic concepts underneath.

A theory of change, no matter what it is officially called, is central to prospective policy change evaluation. In any prospective, forward-looking evaluation, a program's theory guides the evaluation plan. Reciprocally, the evaluation provides feedback to a program as it evolves; it becomes a key resource in program refinement. One foundation staff member notes this relationship in saying, "If we don't know where we want to go and how we are planning to get there, it is hard to evaluate our work."

Developing a theory of change forces clarity of thought about the path from a program's activities to its outcomes. As one evaluation expert states, "A theory of change makes the evaluation effort more transparent and efficient.... It breaks up the evaluation into both process implementation and outcomes. It also forces you to be specific about what kind of change you want in what period of time." The focus that a theory of change puts on outcomes or long-term change is crucial. A program officer from another foundation notes that, "Without a theory of change, use of indicators will lead to activity-driven monitoring."10 Finally, it helps all players develop a common understanding about where they are trying to go and how they plan to get there, be it a single grantee and program officer or members of a multi-organization coalition or strategic initiative.

10 Snowden, A. Evaluating Philanthropic Support of Public Policy Advocacy: A Resource for Funders. p. 11.

Practical Considerations in Developing a Theory of Change

Evaluation experts suggest that in successfully developing a theory of change for policy change evaluation, the following issues should be considered:

1. Get Out of Evaluation-Speak

Often the language of evaluation is daunting and mystifying to those who are not formal evaluators. Furthermore, many nonprofits have a fear of or distaste for formal evaluation processes and are reluctant to engage, even if they would welcome receiving useful information about the efficacy of their work. They often say the process distracts them from their work and provides them little that is useful.

Therefore, making a theory of change process accessible and relevant to grantees is essential. The Women's Funding Network has structured its grantee reports around questions that get at key concepts and walk grantees through outlining their theory of change in their grant reports without using technical terms. Key questions in the Women's Funding Network's Making the Case tool include:




    What is the situation you want to change?

    What is the change you want to produce?

What strategies are you using to make that happen?

    What helps you accelerate your efforts?

What gets in your way or inhibits your progress?

These questions are part of an interactive online tool that shows a graphic of a small town as a metaphor that may be particularly appealing to less sophisticated grantees. The situation needing change is represented as a collection of arrows impinging on the town. The change the grantee is looking to make is symbolized as a shield protecting the town. Individual strategies are represented as bolstering the shield. This format avoids using evaluation language, and its graphical layout is visually appealing and understandable to those not familiar with evaluation. The lack of rigidity in the format also allows the model to account for the complexity and non-linearity of advocacy work.

2. Provide Participants with Good Research

As noted above, research is crucial at the start of any policy change evaluation. In its work on theory of change, the Aspen Institute Roundtable states, "The process will go more smoothly and produce better results if stakeholders and facilitators have access to information that allows them to draw on the existing body of research and current literature from a range of domains and disciplines, and to think more systematically about what it will take to promote and sustain the changes they want to bring about." However, the amount and type of research that a strategic initiative should engage in is different from what an individual grantee might do to prepare for creating a theory of change. An initiative might spend many hours gathering data on its subject area, going over existing research, and possibly commissioning additional research. On the other hand, a grantee may just need to gather community input and review previous actions, being careful to assess those actions with a fresh eye and a clear understanding of both the latest research and the current environment.

3. Provide Enough Flexibility to Adapt the Model Over Time

The continual shift in environment and strategies is a hallmark of advocacy work. A rigorously detailed theory of change for policy projects will have to be modified frequently, requiring an unsustainable amount of time and energy. Evaluation experts recommend developing a more conceptual, and therefore more flexible, theory of change for policy projects. The theory of change created in the Kellogg Foundation's Devolution Initiative was set up with this challenge in mind.11 Evaluators designed the basic theory of change to be broad and conceptual, allowing the specifics of the program to change over time while not impeding the evaluation in any significant way. For example, new organizations joined the initiative, and new objectives were constructed without major revision of the theory.

11 Coffman, J., Simplifying Complex Initiative Evaluation. The Evaluation Exchange. 1999.





4. Move Beyond the Basic Manuals for Policy Change Evaluation

Existing evaluation manuals provide grounding in the basics of developing a theory of change.12 However, foundations with experience helping policy, advocacy and social change groups develop a theory of change found that the traditional framework advocated by these manuals was too rigid and linear to capture the fluidity of policy and advocacy work.

The Liberty Hill Foundation convened a group of grantees to develop a framework for evaluating policy work. The group began its effort using the formal logic model process developed for service-oriented nonprofits by the United Way. They soon found that the model was not appropriate for nonprofits engaged in advocacy or other broad social change work and ended up adapting it over 10 sessions.

12 There are many good resources and guide books on developing a theory of change. Both the Aspen Institute and the Kellogg Foundation have done extensive research on theory of change and provide useful tools on their Web sites: http://www.theoryofchange.org or http://www.wkkf.org. As a starting place to understand the basic concepts for developing a theory of change, these resources are very useful.

The Annie E. Casey Foundation has developed a guidebook for the evaluation of its grants that uses a theory of change. In developing a theory of change, one exercise the guidebook recommends is creating a simple "so that" chain. The goal of this exercise is to link program strategy with long-term outcomes. The chain begins with a program's strategy. The first step is to answer, "We're engaging in this strategy so that..." The chain then moves from short-term outcomes to longer-term outcomes. For example, a grantee might:

Increase media coverage on the lack of health insurance for children
so that public awareness increases
so that policymakers increase their knowledge and interest
so that policies change
so that more children have health insurance.

The exercise works well when participants have a good vision of what they desire their long-term outcomes to be and what their current strategies are. Participants then can either work forward from their strategies toward their long-term outcomes or backward from outcomes to strategies. Approaching from either direction will allow the participants to clarify the logic that links their work to their desired outcomes.

5. Choose a Process Appropriate for the Level

As stated above, the amount and level of research needed to develop a theory of change differs based on whether the theory is for an individual grant, an initiative or an entire foundation program. This differentiation holds true throughout the theory of change development process. The efforts that an initiative undertakes are different from what an individual grantee should undertake. As seen in the Devolution Initiative example mentioned earlier, an initiative may need a more conceptual model that can be adapted to different groups and over time. As well, the logical steps in the theory may need to be backed up by more solid evidence. Theories developed for individual grantees can be both more specific to their work (less conceptual) and also less detailed in evidentiary support.




Creating a theory of change for an individual grant poses unique challenges, especially as it relates to a funder's overall goal. In creating a theory of change for an individual grant, it will be relevant to understand the funder's goals and strategies in order for the grantee to see how it fits into that plan and the larger environment: that while an individual one-year project may be small, there is value in the contribution to a larger effort. As one evaluation expert states, "Foundations need to express their own theory of change and chain of outcomes and express to grantees where they fit into that chain. A foundation's work is a painting, and the grantees are the brush strokes. Each brush stroke needs to be applied at a particular time in a particular way. If you have 1,000 people applying brush strokes, each has to know what they're a part of." It allows foundations to make it clear that grantees need not position their project as something different or depart from their original mission just in order to fit all of the foundation's goals.

6. Provide Grantees and Program Officers with Support

Many program officers and grantees need advice or technical assistance to develop a good theory of change and associated benchmarks, since most have not been trained in program evaluation. For example, with an individual project, a funder might review a proposed theory of change and set of benchmarks or join in on some conference calls. For a multi-organization initiative, the group might engage a facilitator to help them through the process. One evaluation expert who has conducted several policy change evaluations suggests, "The challenge in theory of change is not having a long, drawn-out process. We always conduct a facilitated process. If you want grantees to create a theory of change on their own, that seems very challenging. I don't know how that would work." The Liberty Hill Foundation has provided this support to a cohort of grantees through a peer learning project that included group training sessions and some one-on-one consultations with an evaluation expert.

Step 3: Define Benchmarks and Indicators

Generally, the policy change goals in a theory of change are long-term and can take many years to achieve. Therefore, developing relevant benchmarks to track progress along the way is vital to an effective and useful policy change evaluation. A benchmark is a standard used to measure the progress of a project. Defining benchmarks at the beginning of an advocacy effort helps both funders and grantees agree upon ways to assess the level of progress towards achieving the ultimate policy goal. They serve like milestones on a road trip, telling travelers when they are getting closer to their destination and when they have veered off course. Indicators operationalize benchmarks, in that they define the data used to measure the benchmark in the real world.

Process Versus Outcomes Indicators

Experts and the research literature distinguish among several types of indicators. One key distinction is between process and outcomes indicators. Process indicators refer to measurement of an organization's activities or efforts to make change happen. Examples include the number of meetings held or educational materials developed and distributed. Outcomes indicators refer to a measurement of change that occurred, ideally due in part to an organization's efforts. Examples include quantitative or qualitative measures of strengthened partnerships that resulted from holding a meeting and measures of changed attitudes after reading educational material. Generally, process indicators lie largely within an organization's control, whereas outcomes indicators are more difficult to attribute to a particular organization's work.

To date, many funders and grantees say they have emphasized process indicators over outcomes indicators to monitor policy and advocacy grant activities. Staff without significant policy experience, in particular, said relevant process indicators are easier to identify. Moreover, they are easier to quantify and collect data on because they relate to activities, which are under the grantee's control. Many of the grantees interviewed as part of the stakeholder assessment also said they found process indicators most useful because they provide concrete feedback relevant to improving their organization's operations. However, a few grantees with significant expertise and experience in advocacy work described process indicators as "widget counting."

While process indicators are a useful tool in grant monitoring, they do not demonstrate that an organization's work has made any impact on the policy environment or advanced the organization's cause. Evaluation experts, as well as funders and grantees, emphasized the need to develop outcomes indicators that would demonstrate the impact of an organization's work. As one program officer leading a policy-oriented initiative put it, "We need to teach program officers how to monitor for change...not just monitor activities." Table 1 highlights the difference between a process indicator and a related outcome indicator that might logically follow from the process. Foundations' grant practices can inadvertently steer the focus towards process indicators, because many foundations require evaluation over a time period too short to measure real policy change (outcomes). Recognizing and acknowledging this issue would make the shift to outcomes indicators easier for everyone.


However, it is often challenging to conceptualize relevant outcomes that could show change for policy and advocacy projects, especially to show interim progress short of the actual target policy change or ultimate goal of the advocacy. Grantees and staff need an expanded way to conceptualize the policy and advocacy process, to understand how to talk about the steps that build up to and contribute to policy change. Traditionally, evaluations for direct services have broken down outcomes in a linear fashion. Because policy and advocacy work is rarely linear, these categories aren't always useful in identifying outcomes and associated benchmarks.

In its book, Investing in Change: A Funder's Guide to Supporting Advocacy, the Alliance for Justice makes a useful distinction between progress and outcomes benchmarks.13 Outcomes benchmarks demonstrate success related to the ultimate goal and usually take many years to achieve. Progress benchmarks are concrete steps towards the advocacy goal.


13 Alliance for Justice, Investing in Change: A Funder's Guide to Supporting Advocacy, p. 37.

Table 1. Process and Outcomes Indicators
(Process indicator, what we did -> Outcomes indicator, what change occurred)

Number of meetings organized -> Increase in proportion of community members exposed to the particular policy issue

Number of flyers mailed -> Increased awareness of the issue, as measured in public opinion polls

Number of people on mailing list -> Increase in the number of people using the organization's Web site to send emails to elected officials

Number of officials contacted -> Increase in the number of elected officials agreeing to co-sponsor a bill

Number of press releases sent -> Number of times the organization is quoted in the newspaper, or the organization's definition of the problem incorporated into the announcement of a hearing

Prepare amicus brief for a court case -> Material from the amicus brief incorporated into judges' rulings

Testify at a hearing -> Organization's statistics used in the formal meeting summary


The Alliance framework also identifies two types of progress benchmarks: activities accomplished and incremental results obtained. Activities accomplished (such as the submission of a letter to a legislator or to a newspaper) is analogous to process benchmarks. Incremental results obtained refers to interim accomplishments required to get a policy change, such as holding a public hearing, preparing draft legislation or getting people to introduce a bill.
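One way to keep these distinctions visible during grant monitoring is to tag each benchmark with its type when it is recorded. The sketch below is a hypothetical illustration of that bookkeeping, not a tool described in the Alliance guide; the benchmark descriptions, field names and counts are invented.

    from dataclasses import dataclass

    # Benchmark types follow the distinctions described above:
    # "activity" (process), "incremental_result" (interim progress)
    # and "outcome" (success related to the ultimate goal).
    @dataclass
    class Benchmark:
        description: str
        kind: str          # "activity", "incremental_result" or "outcome"
        achieved: bool = False

    benchmarks = [
        Benchmark("Submit letters to every legislator on the committee", "activity", True),
        Benchmark("Public hearing held on the issue", "incremental_result", True),
        Benchmark("Draft legislation prepared", "incremental_result", False),
        Benchmark("Bill enacted into law", "outcome", False),
    ]

    def progress_report(items):
        """Summarize achievement counts by benchmark type."""
        report = {}
        for b in items:
            done, total = report.get(b.kind, (0, 0))
            report[b.kind] = (done + (1 if b.achieved else 0), total + 1)
        return report

    if __name__ == "__main__":
        for kind, (done, total) in progress_report(benchmarks).items():
            print(f"{kind}: {done} of {total} achieved")

A report grouped this way shows interim movement even when the ultimate outcome benchmark remains unmet.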

Capacity-Building Benchmarks

Evaluation experts and funders with significant experience in policy and advocacy work all emphasized the importance of identifying both capacity-building and policy change benchmarks. Capacity benchmarks measure the extent to which an organization has strengthened its ability to engage in policy and advocacy work. Examples include developing relationships with elected officials and regulators, increasing the number of active participants in its action alert network, cultivating partnerships with other advocacy groups, building databanks and increasing policy analysis skills. Capacity-building goals can have both process or activity indicators as well as outcomes indicators.

These capacity outcomes are important markers of long-term progress for both the funder and its grantees. They indicate growth in an asset that can be applied to other issues and future campaigns. Said one program officer, "We need to take the focus off policy change and encourage more people to think about how they can be players." Said another with a long history in policy work, "Funders and advocacy organizations should work on building bases so that when the policy environment changes due to budget, political will or administration changes, there will be people in place who can act on those opportunities. Our challenge is to get grantee organizations prepared so that they can engage when an opportunity arises."

Moreover, it is more appropriate to hold grantees accountable for capacity-building goals because they have greater control over them. As described in the earlier section on the challenges in evaluating policy work, a grantee can implement a campaign flawlessly and still fail to achieve its goal due to environmental changes beyond its control (such as key contacts being voted out of office). However, a failed policy outcome can still represent a capacity-building achievement. For example, sometimes advocates are pleased to get a bill introduced, even when it fails, because the debate over the bill increases public awareness of the issue.

Frameworks for Benchmark Development

Benchmarks should grow out of and be selected to represent key milestones in an organization's or initiative's theory of change.


However, many grantees and staff also said it would be very helpful to have a list of potential indicators or benchmarks to consult for ideas so they needn't start with a blank slate. A number of groups have developed frameworks for developing benchmarks relevant to policy and advocacy work. These benchmark frameworks can provide examples of activities, strategies and types of outcomes associated with the policy process. In our literature review, we identified six useful frameworks for developing benchmarks, ranging from a simple model developed by the Liberty Hill Foundation to a framework and compendium of associated examples from the Alliance for Justice. Appendix D compares key aspects of these frameworks.

Using an existing framework to guide benchmark development has several key advantages. First, it allows funders and grantees to build on the experiences of others and, hopefully, reduce the time and effort required. Second, reviewing several frameworks highlights different aspects of policy work, which can both serve as a reminder of key strategies to consider and expand people's notion of what constitutes policy work. Finally, employing one of these frameworks will make it easier for foundation staff to compare progress and sum up impact and lessons learned across grants because they will be describing their outcomes using common categories.

Each of the six frameworks listed below highlights somewhat different aspects of policy work, and may be used in different contexts from the others.14 In some cases, policy change is a primary focus, while in others, it is one of several social change categories. Some are most relevant to policy advocacy around specific issues, while others focus on community-level change.

Collaborations that Count (primary focus on policy change, particularly community-level change)

Alliance for Justice (primary focus on policy change, most relevant to specific issue campaigns)

Annie E. Casey Foundation (applicable to a range of social change strategies, particularly community-level change)

Women's Funding Network (applicable to a range of social change strategies, most relevant to specific issue campaigns)

Liberty Hill Foundation (applicable to a range of social change strategies and a broad variety of projects)

Action Aid (applicable to a range of social change strategies, particularly community-level change, and a broad variety of projects)

Each framework has distinct strengths and weaknesses. For example, the Alliance for Justice framework includes the most extensive set of sample benchmarks. However, it seems to be more a collection of typical advocacy activities than a coherent theory about how activities lead to change, so it does not suggest sequence or relationship among the different outcomes. In contrast, the Women's Funding Network (WFN) framework grows out of a theory about key strategies that make social change happen. It was developed through extensive research and experience working with social change organizations around the country. Moreover, it is the basis for an online reporting tool that walks grantees through developing a theory of change and then selecting benchmarks. Using the tool across a set of grantees would help program officers identify trends and sum up impact across grantees. However, the WFN tool itself has far fewer examples of ways to measure those outcomes than the Alliance for Justice or Annie E. Casey frameworks.

14 Snowden (2004) has an appendix listing many useful examples of potential policy indicators. However, it is not organized around a well-conceived framework, so it was not included in the comparison of frameworks.

Four of the benchmark frameworks specifically call out advocacy capacity benchmarks (Alliance for Justice, Action Aid, Collaborations that Count and Liberty Hill). Capacity benchmarks could easily be added to benchmarks drawn from other frameworks.

Evaluation experts said that developing benchmarks involves a process both of adapting benchmarks from standardized frameworks and creating some very project-specific indicators. The process begins by identifying one or more of the benchmark frameworks that seem the best fit with the project's conceptual model for change. For example, consider a program to get local schools to ban junk food vending machines. Using the WFN framework, one might identify the benchmarks shown in Table 2.

Table 2. Example of Benchmark Development from Frameworks
(Benchmark Category: Project-specific Interim Benchmark of Progress)

Changing Definitions/Reframing: Change the official purpose of vending machines to include providing nutritious food for students

Community or Individual Behavior: Get 100 students to submit requests for healthy snack choices into the school suggestion box

Shifts in Critical Mass: Get four of seven school board members to make a motion to hold a hearing on the issue of vending machines in schools

Institutional Policy: School board passes a resolution banning sodas from being sold in school vending machines

Holding the Line: Stop the vending machine lobby from introducing a resolution to allow vending machines in junior high and high schools

Developing benchmarks that fit into these standardized categories will make it easier to compare progress among groups. For example, if a foundation funded 10 community coalitions working to ban vending machines in schools and all of them had benchmarks in these five categories from WFN, a program officer could more easily review the grantee reports and synthesize the collective progress of the group along these five strategies for social change.
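As a rough illustration of that kind of synthesis, the sketch below tallies hypothetical grantee reports against the five categories shown in Table 2. It is not drawn from the WFN tool itself; the coalition names and reported data are invented.

    from collections import defaultdict

    # The five social change categories used in the vending machine example
    # (from the Women's Funding Network framework as described in the text).
    CATEGORIES = [
        "Changing Definitions/Reframing",
        "Community or Individual Behavior",
        "Shifts in Critical Mass",
        "Institutional Policy",
        "Holding the Line",
    ]

    # Hypothetical grantee reports: each coalition lists the categories
    # in which it reported progress during the period.
    reports = {
        "Coalition A": ["Changing Definitions/Reframing", "Shifts in Critical Mass"],
        "Coalition B": ["Institutional Policy"],
        "Coalition C": ["Changing Definitions/Reframing", "Holding the Line"],
    }

    def synthesize(grantee_reports):
        """Group grantees by the categories in which they reported progress."""
        tally = defaultdict(list)
        for grantee, categories in grantee_reports.items():
            for category in categories:
                tally[category].append(grantee)
        return {c: tally.get(c, []) for c in CATEGORIES}

    if __name__ == "__main__":
        for category, grantees in synthesize(reports).items():
            print(f"{category}: {len(grantees)} grantee(s) reporting progress")

Because every grantee reports against the same category names, the roll-up is a simple count rather than a reinterpretation of each report.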

After identifying outcomes benchmarks, grantors and grantees together could add capacity benchmarks. They might include:

Develop a contact at each school PTA in the community that is concerned about the junk food issue;

Develop a relationship with the school cafeteria workers' union; or

Acquire and learn how to use listserv software to manage an online action alert network on this issue.

An Alternate Approach to Creating Benchmarks

An alternative approach to using any of these frameworks is to grow a database of benchmarks by using an online reporting tool to review benchmarks selected by previous grantees. (See Impact Manager from B2P Software, http://www.b2p.com/.) Grantees using these databases establish benchmarks for their grant and monitor themselves against these benchmarks. In the software program Impact Manager, when new grantees begin the process of establishing benchmarks, they are offered a list of all the previous benchmarks established by other grantees in an initiative or program (or they can create their own). By aggregating this information across many grantees and many foundations, the benchmark options increase dramatically.

Because of the differences among the available benchmark frameworks and the wide range of activities that fall under policy and advocacy work, we hesitate to recommend that a foundation steer all grantees towards a single existing framework, use multiple frameworks or try to build one framework for all based on these models. The diversity among different foundation grantees means that different models will speak to different grantees.

Practical Considerations in Selecting Benchmarks

When program officers, grantees and evaluation experts provided practical advice on selecting benchmarks, their comments revolved around five themes:

1. Balance Realistic and Aspirational Goals

Selecting appropriate benchmarks challenges grantors and grantees to identify benchmarks that are both achievable during the grant period and also meaningful enough to show some level of real progress or change. As an example, one expert cited a two-year grant to a group trying to eliminate the death penalty in Texas: "Let's be realistic. The possibility of that happening is zero to negative 100. If you use a chain of outcomes, there are a lot of things that need to happen to get to eliminating the death penalty. So an evaluation needs to look at those intermediate outcomes."

Program officers and a number of grantees felt that grantees themselves often aren't very realistic in setting benchmarks for policy projects. "Almost never will a proposal come in with outcomes or objectives we can use," said one program officer. Grantees said they often felt they needed to promise ambitious goals, beyond what they knew was attainable, in order to win the grant. So long as funding for policy advocacy work remains competitive, nonprofits will inevitably feel it necessary to promise something ambitious and sexy.

Program officers also felt that most grantees had difficulty articulating those intermediate outcomes for policy projects. This is one reason that sample benchmarks can be useful in generating ideas. Another grantee recommended that grantees and funders make clear distinctions between setting goals that you can fulfill versus those that are aspirational. Aspirational goals motivate people to stretch and remind them of the big picture, but using them to measure progress can set grantees up for failure.

A few grantees and program officers mentioned concerns about a natural incentive to set benchmarks low, especially on process indicators, in order to ensure they can meet and even exceed them. The incentive particularly comes into play when grantees feel that funding renewal decisions hinge on progress towards benchmarks. Said one experienced grantee, "When I write grants, I am likely to go for things I know I can accomplish because I don't want to come back later and say that I have failed."

2. Consider Tolerance for Risk

Funding policy change and advocacy is inherently a high-risk grant-making strategy in two ways: a) the risk of failure, at least in the short term, because policy change is so hard to achieve; and b) the risk of being viewed as having a political agenda and drawing fire from politicians and other influential actors who disagree with those goals. As part of establishing benchmarks of progress or indicators of ultimate success, an internal dialogue about the foundation's appetite for risk would be useful to achieve clarity on its expectations. Failing to achieve a policy change does not necessarily mean a wasted investment if knowledge about how and why the program failed is effectively captured, valued and, ideally, shared with others who can benefit.

Experts noted that if a foundation is inclined to take chances and places value on learning from mistakes, it has a different attitude toward crafting benchmarks of progress than does a foundation more hesitant to be publicly associated with failed endeavors. More risk-averse foundations will tend to set more manageable and potentially more process-oriented benchmarks, so there is less possibility that grantees fail to achieve designated goals. Embracing risk will both increase learning and provide challenges that could lead to more significant policy change.

3. Have Grantees and Funders Develop Benchmarks Together

Evaluation experts, grantees and foundation staff stressed the value of working together to develop benchmarks, even though it can take longer. The process ensures that grantees and program officers are aligned on their expectations of progress. Grantees also are more invested in reporting on benchmarks they have helped develop and feel serve their needs. On a small project, this process might involve several focused telephone conversations or a half-day planning meeting between a grantee and program officer. For larger initiatives, it may require several meetings with all stakeholders. In such groups, program officers may benefit from assistance from evaluation experts to make the iterative process more useful.

4. Choose an Appropriate Number of Benchmarks

As to the ideal number of benchmarks for a project, there is no consensus. Many evaluation experts emphasized simplicity and selectivity, opting for four or five benchmarks. The appropriate number is somewhat dependent on the grant. For a single organization with a single policy campaign, one or two process indicators, two capacity benchmarks and two-to-three interim outcomes benchmarks may be sufficient. However, a strategic initiative with multiple grantees will probably require more benchmarks in order for the program officer to assess progress. For example, one expert conducting an evaluation of a multi-partner initiative employed 35 benchmarks. In such situations, this expert felt it was important for all stakeholders to see some benchmarks that reflected their interests. However, these served as benchmarks for the entire initiative; most grantees were asked to track only the subset of the 35 most relevant to their work. This allowed individual grantees to demonstrate how their work fit into the initiative's theory of change and how it contributed to the progress of the initiative, without holding them accountable for changes outside their realm of influence.
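A simple way to manage such a split is to keep the full initiative benchmark list in one place and assign each grantee only the subset it will track. The sketch below is a hypothetical illustration of that bookkeeping; the benchmark list, codes and grantee names are invented.

    # Hypothetical initiative-level benchmark set: the initiative tracks the
    # full list, while each grantee reports only on its assigned subset.
    INITIATIVE_BENCHMARKS = {
        "B01": "Coalition formed with statewide membership",
        "B02": "Issue covered in three major newspapers",
        "B03": "Hearing held in the relevant legislative committee",
        "B04": "Bill introduced with bipartisan co-sponsors",
    }

    GRANTEE_ASSIGNMENTS = {
        "Media advocacy group": ["B02"],
        "Grassroots organizer": ["B01", "B03"],
        "Policy analysis shop": ["B03", "B04"],
    }

    def reporting_template(grantee):
        """List only the initiative benchmarks a given grantee is asked to track."""
        return [
            f"{code}: {INITIATIVE_BENCHMARKS[code]}"
            for code in GRANTEE_ASSIGNMENTS.get(grantee, [])
        ]

    if __name__ == "__main__":
        for grantee in GRANTEE_ASSIGNMENTS:
            print(grantee)
            for line in reporting_template(grantee):
                print("  -", line)

Keeping the assignments explicit makes it easy to see which parts of the initiative's theory of change each grantee is expected to advance, without holding any grantee to the full list.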

5. Be Flexible

Finally, all stakeholders stressed the importance of viewing benchmarks for policy projects with flexibility. This requires flexibility on the part of both the funder and the grantee. For multiyear policy grants, prioritized activities and strategies will need to evolve as the policy environment changes. Even in a one-year grant, an unexpected change in the policy environment can quickly render what was once seen as an easily achievable interim outcome impossible. Or it may require a change in strategy that can make even process indicators irrelevant. One grantee provided the following example: "We found out that the administration doesn't pay any attention at all to a particular guide. We had submitted an extremely specific objective for our grant that said that we would work with these agencies to get them to revise the guide and disseminate those revisions. Now we feel we have to follow through and complete it even though we know it won't be useful." For multiyear initiatives, it may be necessary to revisit benchmarks each year and revise as needed.

Step 4: Collect Data on Benchmarks

Once benchmarks are created, the next step is to develop methods to measure them on an ongoing basis. Data collection is often a challenging and tedious process for grantees. It requires time, energy, money and organization, and it often feels like a distraction from the grantee's real work. As one grantee states, "The tracking is really tough and it takes a lot of paperwork. That is not the strength of grassroots organizations."

Grantees and evaluation experts recommended several strategies to assist foundations in helping grantees collect data:

1. Help Build Learning Organizations

The most crucial aspect of getting grantees to collect evaluation information is to make sure they value learning in their organization, affirm several experts. If learning is valued, the organization will want to ask questions, gather information and think critically about how to improve its work. Foundations can support grantees in becoming learning organizations by providing support for grantees to build their internal evaluation capacity. Yet, as one grantee states, "Most foundations are not willing to do what it takes to add evaluation capacity. If they really want to build evaluation capacity, they must do longer-term grants and invest in ongoing monitoring." Grantees need the time, skills and money to collect and analyze data, and then think about the implications for their work. They also need help up front investing in good tools. One grantee notes, "It is most useful to invest in good data collection tools, especially databases, but it's hard to get the funder to support it."

2. Keep It Simple

The complexity of the process for measuring benchmarks should be commensurate with the complexity of the project. If the framework or data collection strategy gets too complicated, it won't get used. The Liberty Hill Foundation spent nine months working with grantees to design a tool to help them evaluate their policy and social change work. The length of the process caused grantees to lose interest, and their participation dwindled. Consequently, the foundation shifted its resources and thus left the grantees to implement the process with limited support. In the end, only one of the grantees chose to implement the process in its entirety. A few others chose to implement particular elements that they found especially useful to their organization.


3. Build on Grantees' Existing Data Collection

Pushing an organization to collect more data is challenging if the traditions and culture of the organization work against systematic data collection, as is true in most advocacy organizations. Evaluation experts recommend trying to identify the types of data organizations currently collect and building indicators from there. Another approach is to add data collection in grantee reports, a strategy employed in the Packard Foundation's ongoing advocacy effort to institute universal preschool in California. The evaluator worked with the foundation to modify its grant reports to ask two questions specifically geared to the initiative's evaluation. These questions detailed all the outcomes that the initiative had laid out in its theory of change and connected each with one or more measurable indicators. Grantees were then asked to report on any of the indicators they currently measured.
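One way to picture that report design is as a mapping from initiative outcomes to measurable indicators, from which report prompts are generated. The sketch below is a hypothetical illustration, not the instrument used in the Packard initiative; the outcomes, indicators and function names are invented.

    # Hypothetical mapping from initiative outcomes to measurable indicators,
    # used to generate grant-report questions that ask grantees which
    # indicators they already collect.
    OUTCOME_INDICATORS = {
        "Increased public awareness of preschool access": [
            "Share of poll respondents naming preschool as a priority",
            "Number of news stories citing the campaign",
        ],
        "Stronger legislative support": [
            "Number of legislators signed on as co-sponsors",
            "Number of committee hearings held",
        ],
    }

    def report_questions(mapping):
        """Turn the outcome-to-indicator map into report prompts for grantees."""
        questions = []
        for outcome, indicators in mapping.items():
            listed = "; ".join(indicators)
            questions.append(
                f"For the outcome '{outcome}', which of these indicators do you "
                f"currently measure, and what do they show? ({listed})"
            )
        return questions

    if __name__ == "__main__":
        for q in report_questions(OUTCOME_INDICATORS):
            print("-", q)

Because grantees answer only for indicators they already measure, the added reporting burden stays small while the evaluator still receives data keyed to the initiative's theory of change.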

The Annie E. Casey Foundation also provides enough flexibility to let grantees employ existing data when possible. The foundation has recently implemented a process that designates key benchmarks for a group of initiative grantees, and for each benchmark it identifies several measurement methods. Each outcome benchmark includes two or three options for data collection, falling into categories such as surveys, observations, interviews, focus groups, logs, content analysis or document review. This approach allows the grantee to choose options that it may be currently engaging in or could implement easily, while also learning about other techniques it might choose to employ later on. The Casey Foundation does not yet have enough of a track record with this approach to assess how this data variation will affect the quality of the evaluation, especially efforts to sum up across projects.

4. Draw on Examples Used in Other Studies

In many cases, grantees need to develop measurement strategies and data collection tools from scratch. A useful resource for developing measurement strategies is Annie E. Casey's Practical Guide to Documenting Influence and Leverage in Making Connections Communities. The Casey guide includes some possible benchmark indicators, strategies on how to gather data to measure those indicators, and even sample data collection tools, although many may seem very specific to that grant program.

Overall, there will likely be differences in measurement needs for individual grantees versus initiatives. While individual grantee data collection might be simple and work within grantees' existing systems, more rigor and data are appropriate at the initiative level. This may mean supplementing data collected by grantees with data collected independently by outside evaluators.


Policy change is a long-term endeavor, so emphasizing a long-term perspective is key.


Conclusion

In order for foundations to better monitor and measure the impact of their grants and other activities directed towards policy change, this paper recommends an evaluation strategy based on our synthesis of the current thinking of grantees and funders, as well as experts in evaluating this type of work. Drawing from these insights, we outline a promising approach to prospective evaluation of policy change activities. This approach breaks down into four steps:

Agree upon a conceptual model for the policy process under consideration.

Articulate a theory about how and why the activities of a given grantee, initiative or foundation are expected to lead to the ultimate policy change