
DONOR AGENCY EXPERIENCE WITH

THE MONITORING AND EVALUATION OF DEVELOPMENT PROJECTS

ANNETTE L. BINNENDIJK

U.S. Agency for International Development

This article reviews the experiences of the development assistance donor agencies with the monitoring and evaluation (M&E) of their projects in developing countries. First, the concepts and definitions of monitoring and evaluation are discussed. Then the article traces the donors' early efforts to develop appropriate methodologies and procedures for M&E of development projects and reviews the problems and issues that emerged during the 1970s. The final section discusses how these lessons from experience are leading to new M&E initiatives and reorientations among the donor agencies in the 1980s.

The purpose of this article is to review the past experiences of the major development assistance donor agencies with the monitoring and evaluation (M&E) of development projects in the developing countries, to highlight the lessons that were learned from these experiences, and to discuss the recently emerging trends and growing consensus among the donor agencies concerning more promising and practical approaches to M&E.

The article provides a donor agency perspective, drawing primarily on reports and experience from the U.S. Agency for International Development (AID), from the Development Assistance Committee (DAC) of the Organization for Economic Cooperation and Development, and from the World Bank and other international and regional development agencies.

The focus of the article is primarily on monitoring and evaluation at the project level, although the article also discusses the growing emphasis given during the 1980s to evaluation efforts beyond the individual project level and their relevance to broader sectoral programs and policy issues.

AUTHOR'S NOTE: The views and interpretations expressed in this article are those of the author and should not be attributed to the Agency for International Development.

EVALUATION REVIEW, Vol. 13 No. 3, June 1989, 206-222. © 1989 Sage Publications, Inc. DOI: 10.1177/0193841X8901300302

DEFINITION OF MONITORING AND EVALUATION

There has been considerable confusion and inconsistency among the donor agencies, and even within donor agencies, regarding the distinction and the relationship between monitoring and evaluation. Most agree that M&E refers to analyses or assessments of project performance during or after project implementation, and not to ex ante appraisals done during the design stage of a project. A review of the many different concepts and definitions used to distinguish monitoring from evaluation suggests that M&E might best be thought of as a continuum along several dimensions.

This article identifies four dimensions to be considered:

(1) the focus of the assessment, with monitoring primarily concerned with tracking project inputs and outputs and whether they are within design budgets and according to schedules, while evaluation focuses on measuring development results and impacts

(2) the timing of the data collection and assessment efforts, with monitoring typically involving a continuous, ongoing process of data gathering and review, while evaluations are often considered as periodic, or even "one-shot," efforts

(3) the implementers of the assessment, with monitoring viewed as a function implemented primarily by in-house project staff, while evaluation is frequently carried out by teams external to the project

(4) the management uses of the assessment results, with monitoring oriented primarily to serve the information needs of the project management to improve the project's implementation, while evaluation may serve the information needs of development managers above the project level, for example, to provide guidance for broader program and sector-level decisions

By considering monitoring and evaluation as a continuum along these dimensions, it becomes less urgent to attempt to clarify the two as distinct and separate functions, and easier to recognize the multiple levels of objectives, the variations in data collection techniques and timing, the structural and organizational mixes, and the levels of management users that may be appropriate in these related assessment processes.


EARLY HISTORY OF DONOR AGENCY M&E APPROACHES

During the 1960s the most popular approach to appraising and later evaluating development projects among the multilateral and bilateral donor agencies was estimating financial and economic rates of return. Given that most development projects in those days were large infrastructural and industrial projects seeking to maximize overall economic growth, this methodology was appropriate.
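For readers unfamiliar with the technique, estimating a rate of return amounts to finding the discount rate at which a project's net present value equals zero. The sketch below is illustrative only; the cash flows are hypothetical and the bisection routine is just one simple way to solve for that rate.

```python
# Illustrative sketch (not from the article): estimating a project's economic
# rate of return as the discount rate at which net present value reaches zero.
# The cash flows below are hypothetical.

def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Internal rate of return, found by bisection on the NPV function."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Hypothetical infrastructure project: a large up-front investment followed
# by a stream of net economic benefits over ten years.
flows = [-1_000_000] + [180_000] * 10
print(f"Estimated economic rate of return: {irr(flows):.1%}")  # roughly 12%
```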

However, by the early 1970s a major bilateral donor agency, AID, was shifting its emphasis toward meeting the basic human needs of the poor. This new objective of development efforts was outlined in 1973 by the U.S. Congress in its "New Directions" legislation. Others in the development community followed suit. For example, the World Bank began to shift emphasis and resources toward agricultural and rural development efforts in impoverished areas, including social services delivery, though not to the extent of AID's program. As a consequence, the evaluation technique of estimating financial and economic rates of return became increasingly insufficient. Problems with the methodology included its inability to deal adequately with the equity or distribution issue, and the difficulties involved in trying to assign a monetary value to all of the project's social benefits and other impacts.

While the World Bank and the other regional development banks began to adapt their economic analyses to accommodate the new equity objectives and the diversified program portfolios, the shift in AID goals and project approaches was so radical that for a while there was no clear preferred evaluation technique. An "investment banker's" approach to assessing development projects no longer seemed appropriate in the AID program context of the early 1970s.

What emerged in AID in the 1970s was a conceptual framework for guiding project design, implementation, and evaluation efforts known as the Logical Framework, or the "Logframe." The Logframe solved a major evaluation problem by clarifying at the design stage the specific development objectives of the project and how the elements of the project were hypothesized to affect those goals. The Logframe called upon designers to clearly identify project inputs, outputs, purposes (intermediate objectives), and goals (ultimate development impacts); to identify objectively verifiable indicators of progress in meeting these objectives; and to identify hypotheses about the causal linkages and assumptions about conditions in the project environment that must exist for the hypothesized linkages to occur. Some other members of the DAC that also emphasized poverty alleviation and basic social service delivery in their assistance programs adopted the Logframe approach to project design and evaluation.
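In essence, the Logframe is a matrix linking a hierarchy of objectives to indicators and assumptions. The sketch below is illustrative only; the field names and the example project are hypothetical, not AID's official format.

```python
# Illustrative sketch of a Logical Framework ("Logframe") as a data structure.
# Field names and example entries are hypothetical, not AID's official format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogframeRow:
    level: str                      # "input", "output", "purpose", or "goal"
    narrative: str                  # what the project intends at this level
    indicators: List[str]           # objectively verifiable indicators of progress
    assumptions: List[str] = field(default_factory=list)  # conditions needed for the
                                                          # causal link to the next level

# A hypothetical rural water-supply project, from inputs up to the development goal.
logframe = [
    LogframeRow("input", "Funds, equipment, and technical assistance provided",
                ["Disbursements vs. budget", "Staff and equipment in place on schedule"]),
    LogframeRow("output", "Village wells constructed and maintained",
                ["Number of functioning wells"],
                ["Local maintenance committees are formed"]),
    LogframeRow("purpose", "Households use safe drinking water",
                ["Share of households drawing water from project wells"],
                ["Households accept and can reach the new sources"]),
    LogframeRow("goal", "Improved health status of the target population",
                ["Incidence of waterborne disease among beneficiaries"],
                ["Other determinants of health do not deteriorate"]),
]

for row in logframe:
    print(f"{row.level:8s} | {row.narrative} | indicators: {'; '.join(row.indicators)}")
```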

The AID evaluation guidance of the early 1970s that accompanied the Logframe concept emphasized evaluation designs based on the "ideal" of experimental and quasi-experimental research designs, whereby impacts or changes in the living standards and behaviors of the project beneficiaries could be measured and held attributable to project interventions. This methodology required that statistically representative sample surveys be taken as a baseline and then periodically over the project's lifetime in order to track changes in outcomes (e.g., living standards, incomes, mortality rates, and so on) among project beneficiaries and control groups, and to prove scientifically that benefits were caused by the project. The academic research community and other donor agencies were also advocating this approach to the evaluation of social and basic human needs-oriented projects during the early 1970s.
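In its simplest form, the comparison this guidance called for contrasts the change in an outcome among project beneficiaries with the change in a control group across survey rounds. The following minimal sketch illustrates that arithmetic with invented figures; it is not an agency procedure.

```python
# Minimal sketch of the baseline/follow-up comparison described above: the change
# in an outcome among project beneficiaries is contrasted with the change in a
# control group (a difference-in-differences). All figures are invented.

# Mean household income (in local currency) from two hypothetical survey rounds.
baseline = {"beneficiaries": 1_200, "control": 1_180}
followup = {"beneficiaries": 1_550, "control": 1_300}

change_beneficiaries = followup["beneficiaries"] - baseline["beneficiaries"]  # 350
change_control = followup["control"] - baseline["control"]                    # 120

# The estimated project effect is the extra change seen among beneficiaries,
# assuming both groups would otherwise have followed the same trend.
estimated_effect = change_beneficiaries - change_control
print(f"Estimated project effect on mean income: {estimated_effect}")         # 230
```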

KEY PROBLEMS EMERGING FROM M&E EXPERIENCES OF THE 1970s

This "ideal" evaluation approach proved to be overly sophisticated, costly, and impractical for the evaluation of most development projects. In fact, there was a wide gulf between evaluation precepts and practice. In practice, most evaluative reports of many donor agencies were monitoring types of efforts, concentrating on implementation issues rather than on assessing development results. The emphasis upon this one "ideal" approach in the evaluation guidance of the 1970s unfortunately left a gap in terms of practical alternative methodologies for assessing development results and impacts that might have better served the information needs of development managers.

In the few cases where evaluation efforts attempted to assess project impacts, they tended to suffer from overly sophisticated designs and too much emphasis on obtaining statistically representative proof of impact. These efforts, often based on quasi-experimental designs and multiple rounds of sample surveys, were very expensive and long term, and ultimately suffered from methodological weaknesses, inconclusiveness, and an orientation of little practical usefulness or interest to project managers. Other complaints frequently heard about evaluation efforts were that they lacked focus, either collected too little data or too much data of poor quality, suffered from limited data processing and analytical capabilities, and were frequently left incomplete. In addition to the methodological and data collection problems encountered, M&E efforts of the donor agencies also encountered a host of management, organizational, and other procedural problems.

Some of the key findings about donor agency M&E experience, emphasizing the problems and shortcomings that occurred during the 1970s, are highlighted next.

CONCEPTUAL PROBLEMS

Several evaluation problems facing the donor agencies involved conceptual issues. These included:

- A frequent lack of clarity about the development objectives of a project and measurable indicators. The Logframe was helpful in overcoming these problems.

- A second problem was uncertainty concerning the purposes and end users of evaluation results. For example, evaluation purposes were not usually tied to managers' information needs for making operational decisions. As a result, evaluations were too frequently guided by the dictates of specific methodologies or frameworks rather than by management concerns.

- Other conceptual issues of concern to evaluators were how to deal with unexpected or unanticipated outcomes, for which there was usually little data; with changes in project objectives over time or with conflicting objectives; and with the overoptimistic targets typically found in project design documents.

METHODOLOGICAL AND DATA COLLECTION PROBLEMS

Some of the most serious problems that donors had with M&E of development projects during the 1970s were rooted in methodological and related data collection problems, in particular the use of the quasi-experimental design as the "ideal" standard. The following highlights donor experience with this approach.

- High costs: Formal impact evaluation designs based on quasi-experimental models and rigorous multiround surveys have been very costly, often costing hundreds of thousands or even millions of dollars to complete.

- Dependence on specialized skills: This approach required rigorous statistical and data collection skills beyond the capabilities of indigenous and even donor staff. Reliance on external "experts" often resulted in evaluation issues and findings being oriented away from management needs.

- Lengthiness: These evaluations required a long time frame to complete, frequently taking several years (often beyond the funding life span of the project) before evaluation results were available. Because of this, they were of little practical use to the project manager concerned with improving implementation, and were also hard to fund and complete within the context of a project. For these reasons, many of these efforts were never completed.
- Methodological weaknesses: There were inherent methodological weaknesses in attempting to apply experimental design standards to real-life development project situations, in which random assignment of treatments (e.g., project services) is typically infeasible and the alternative quasi-experimental design of carefully matching groups based on important characteristics is difficult to the point of being impractical. Furthermore, extraneous factors are constantly impinging on the project setting and differentially influencing the experimental and control groups. Because of difficulties such as these, the findings of some of these studies were inconclusive in terms of proving impacts and attribution, despite large expenditures on surveys.

- Missed management concerns: The findings of such evaluation designs (whether developmental impacts were statistically attributable to a project intervention) frequently missed many of management's concerns regarding the factors responsible for project success or failure. The design frequently treated these operational concerns (why and how processes occurred) as a "black box." Thus, while the results were perhaps useful for "accountability" purposes (e.g., the donor agency could give examples of "successful" projects with proven impacts), there was little of operational value in the evaluation findings regarding lessons for improving project performance or for the future design of similar projects.

- Overemphasis on quantification: The method's emphasis upon quantification of outcomes could not be easily applied to certain types of project goals, such as those emphasizing institution building or encouraging community participation as objectives. It also tended to result in too much emphasis on easily measured impacts and to ignore unanticipated and difficult-to-measure outcomes.

- Narrow scope: In its concentration on measuring impacts, narrowly defined, the design approach frequently ignored other important evaluation issues, such as the continued relevance of objectives, intermediate effects, cost-effectiveness, and sustainability.

- Not generalizable: The findings of such studies were not transferable beyond their particular contexts. In other words, because a particular type of project had a proven impact in one geographical setting, it could not be concluded that similar projects would be successful in other countries or even in other locations within the same country.

ORGANIZATIONAL AND MANAGEMENT UTILIZATION PROBLEMS

Different donors have taken different organizational approaches to the establishment of M&E capabilities at the project level. Even among projects supported by the same donor, organizational variation has been considerable. Not all donors have consistently followed the practice of planning and funding "built-in" evaluation as part of project designs; some have relied predominantly on centrally controlled, separately budgeted evaluations. This approach has sometimes suffered from disinterest on the part of project management in the evaluation function, which they perceived as separate from their information needs, or, worse yet, as an external oversight function to be feared and avoided. Also, the lack of ongoing M&E systems at the project level frequently meant inadequate data collection on project performance and impacts.

Various donor agency reports on project M&E have cited, as a major cause of failure of many M&E efforts during the 1970s, a misperception on the part of project management of their roles, responsibilities, and objectives with respect to monitoring and evaluation efforts. Project managers were not using M&E system findings.

In addition, project managers sometimes viewed M&E units as surveillance agents operating on behalf of concerned ministries or donor agencies, waiting to pass judgment on their performance, especially when the units operated outside of their direct control. Matters were made even worse when elaborate impact evaluation methodologies were imposed from the outside, which project management tended to view as irrelevant to their information needs. Under these circumstances, project managers typically took little interest in or responsibility for guiding the activities of M&E units or for using the evaluation information that was generated. They tended to leave M&E activities to the "experts."

Criticisms of M&E operations also indicated that insufficient attention and expertise were being given by project design teams to adequate planning of M&E systems, prioritizing information needs, and estimating M&E staffing and equipment requirements and costs. Follow-up technical assistance and training efforts for project M&E staff were also found to be lacking in many cases. Technical advisers assigned M&E responsibilities often lacked evaluation methods skills or pertinent experience. Even the so-called experts were a mixed blessing, since they often held unrealistic standards for rigorous statistical surveys and research design, and had little appreciation of the types of information required by management. Consequences of these shortcomings included massive data collection efforts with no focus, little capacity for data processing and analysis, and even less for presenting findings in a manner that drew management's attention and resulted in actions to improve project performance.

While some projects suffered from too much or indiscriminate collection of data, others suffered from a lack of data. This was typically the case for projects that relied primarily on outside experts to evaluate the project and had little or no ongoing M&E capability. The common refrain of returning AID evaluation teams that "there was no data" has led to an increasing emphasis in AID on data gathering internal to the project rather than external evaluation. Another limitation of external evaluators was their frequent lack of understanding of the local contextual, sociocultural, and policy aspects that can be important factors influencing project performance. On the other hand, some evaluation tasks of more interest to senior or central agency management, such as comparative studies across projects or longer-term impact and sustainability concerns, may be of little direct interest to internal project management and can best be coordinated at a central agency level. External evaluation teams may also be required where special methodological expertise or objectivity is of key importance.

DONOR COORDINATION PROBLEMS

Little formal coordination existed in the 1970s among the various donors in terms of sharing evaluation findings or experiences with M&E methodologies and procedures. In part, this may have been because most donors were themselves still struggling with their first M&E efforts and because findings from individual project evaluations had not yet been sufficiently aggregated and synthesized to be of broad interest at higher program, sectoral, and policy levels. Furthermore, the differences among donors in development orientations, project portfolios, and evaluation techniques made coordination difficult.

HOST COUNTRY COMMITMENT AND COLLABORATION PROBLEMS

Low host government understanding of and commitment to the M&E function was a frequently cited problem in donor agency reviews. Confusion of monitoring with an audit or "policing" function made it an activity to be avoided. Yet monitoring was, in general, more accepted by host governments than evaluation, which was frequently perceived as an expensive, externally imposed activity of little practical relevance to their operational concerns, except possibly in a threatening way when judgments on performance were negative. Such attitudes tended to result in effects such as resistance to establishing M&E units, delays in staffing, low priority given to such activities, and diversion of staff time to other functions. There were relatively few systematic efforts during the 1970s to educate host country personnel in M&E concepts, benefits, and utility. Other than routine internal monitoring and evaluation, most evaluation efforts were carried out by external teams; collaborative evaluations were far too rare. For example, most donors did not include host country personnel on their ex post evaluation teams. Thus valuable opportunities were lost for building local evaluation capacities, for drawing on their special contextual expertise, for increasing host country understanding of evaluation efforts, and for enhancing the likely utilization of evaluation findings and recommendations by the host government.

From the point of view of recipient countries, the different M&E data collection and methodological requirements of different donors were placing additional burdens on host country staffs. Moreover, the creation of separate M&E units at the project level was in some instances leading to competition among the donors for scarce local skills, drawing them away from government services.

M&E LESSONS FROM EXPERIENCE APPLIED TO THE 1980s

While the problems surrounding donor M&E methods and procedures of the 1970s were serious, there were growing signs of successful efforts by the 1980s. Lessons from past mistakes were leading the donors to new M&E initiatives and reorientations by the 1980s, including the following.

RECOGNIZING MULTIPLE PURPOSES FOR PROJECT M&E

The multiple purposes and intended users of evaluation efforts are beginning to be clarified in the 1980s. Experience indicates that an evaluation's focus should not be dictated on the basis of some research design or framework developed by "outsiders" but should instead be based upon a dialogue with the intended management users regarding their actual evaluation questions and information needs.

During the 1970s, the overreliance on the quasi-experimental design standard led to a tendency to focus evaluation efforts too narrowly on "impacts," usually defined as changes in the living standards or behaviors of the beneficiary population. In practice, this meant that other legitimate and important focuses of project evaluation were being ignored. By the early 1980s the focus of evaluations, and indeed the concept of impact evaluations themselves, was broadening considerably. It was increasingly being recognized that we knew little about concerns such as the longer-term sustainability of projects or about their impact on institutional capacity or on the natural environment. There was also growing emphasis upon evaluating the intermediate effects of projects, since ultimate goals were so difficult to trace to projects. Evaluations were increasingly assessing the factors responsible for project performance, including the very important sociocultural context and policy environment.

Today, the project evaluation efforts of many donors cover this diversity of performance issues and examine more carefully than in the past the factors responsible. Following are some of the typical evaluation issues currently being addressed by the donors:

- Relevance: The continued relevance of the project's objectives and approach may be assessed in light of changing development problems and the changing policies and priorities of the donor and host country.

- Effectiveness: Evaluations of project effectiveness usually examine whether the project's services, technical packages, or other products are actually being used by the intended target group; whether there is equity or bias in access; and whether coverage of the target group is as planned in the project design.

- Impact: Some evaluations gather evidence regarding accomplishment of the ultimate development impact goal of a project, for example, whether the beneficiary group's socioeconomic status or welfare has improved as a result of the project. Unintended as well as intended impacts may be studied, as may any differential impacts among subgroups. While traditionally such evaluations were done ex post, after project completion, donors have increasingly been seeking ways to assess initial impacts during project implementation, using intermediate "proxy" indicators of results and also more qualitative, process-oriented evaluation approaches emphasizing beneficiary feedback.

- Efficiency: Evaluations that examine the results of a project in relation to its costs are concerned with efficiency. Cost-benefit analyses of economic investment projects are typically done by the multilateral banks and, to a lesser extent, by the bilateral donors. Cost-effectiveness analyses are more frequently being done for projects with social objectives, whereby the costs of achieving the same social objective by alternative project approaches can be compared. However, these cost-effectiveness analyses are still relatively difficult to do (they require a quantification of the social benefits of a project as well as its costs) and are therefore rare. Intermediate indicators, such as cost per unit of output or cost per beneficiary reached, are more typically used as proxy measures of efficiency (a brief numerical example follows this list).

- Sustainability: There has been a growing concern among the donor community about the financial and institutional sustainability of a project's services and benefits after the donor's involvement ends. More evaluations are being undertaken several years after donors have completed projects to investigate their sustainability.

- Special performance issues: Evaluations are also increasingly addressing special concerns such as the impacts of the project on the environment, on women, or on the development of the private sector.
- Factors influencing performance: For evaluations to be operationally most useful, the factors that influenced a project's successful or unsuccessful performance should be identified. These factors may include aspects internal to the project, and thus more within the control of project management, such as organizational and management approaches, the distribution system, the appropriateness of the technology or services being promoted, the extent of community participation, and so on. Other factors influencing project performance may be external to the project, and thus more difficult to influence, such as host government commitment to providing recurrent budget and counterpart support to the project and to creating a favorable policy environment, sociocultural attitudes of the target group, weather, and other environmental factors.
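As a purely illustrative note on the proxy efficiency measures mentioned under Efficiency above, comparing cost per beneficiary reached across alternative project approaches is simple arithmetic; the approach names and figures below are invented.

```python
# Illustrative only: comparing cost per beneficiary reached, a proxy measure of
# efficiency, across alternative project approaches. All figures are invented.

approaches = {
    # approach name: (total cost in dollars, beneficiaries reached)
    "village health workers": (400_000, 16_000),
    "district clinics":       (750_000, 20_000),
}

for name, (cost, reached) in approaches.items():
    print(f"{name:22s}: ${cost / reached:,.2f} per beneficiary reached")

# Output:
# village health workers: $25.00 per beneficiary reached
# district clinics      : $37.50 per beneficiary reached
```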

ENCOURAGING A DIVERSITY OF METHODOLOGIES AND DATA SOURCES

The 1970s clearly taught the limitations of universal application of the quasi-experimental design approach to project evaluation. Such rigorous statistical designs still have their place in the 1980s, but they have been used far more selectively, for example, in experimental projects in limited "test" settings. It was recognized that for the vast majority of projects, monitoring and evaluation should be based on less rigorous and more practical techniques of data collection and analysis.

The 1980s saw growing experimentation with multiple data collection approaches. The concept of one standard or "blueprint" methodology was replaced by a philosophy of using multiple data collection techniques and sources to address a variety of management-oriented questions. Sample surveys were replaced or accompanied by less representative, low-cost, rapid reconnaissance techniques for gathering information about projects and their beneficiaries. These approaches include the following:

- Administrative records: Simple, yet carefully designed, records systems can be used to regularly assess project progress and costs compared to design plans, targets, and schedules. They can also be very useful for keeping basic information on acceptors of project services, such as their socioeconomic status, whether they become "repeat" users, their repayment profile, and other pertinent information.

- Small sample surveys: Inexpensive "mini" surveys with samples as small as 100 respondents can be used to measure the proportion of a population with access to project services, those adopting or not adopting the services, and their initial responses and perceptions (a brief numerical sketch follows this list).

- Proxy indicators: Rather than attempting to measure changes in project outcomes directly, sometimes more intermediate, or proxy, indicators provide sufficient information on results at lower cost. For example, proxies for agricultural production might include changes in the volume of commodities passing through markets, estimates of commodity supplies from traders or other key informants, and movements in prices as an indicator of changes in supply.

- In-depth beneficiary information: Project experience has shown again and again that failures are often due to not understanding the perceptions and the local context of the intended beneficiaries. Informal methods, such as the use of key informants, focus group or village meetings, and observation of participants as "case studies," can be inexpensive and rapid feedback approaches to gaining useful information for management's operational decisions. These methods emphasize understanding why and how the project implementation process is influencing beneficiary access, adoption, and response to project services. They are not overly concerned with measuring quantities; thus statistical representativeness is not critical, and in-depth interviews with 30-50 beneficiaries may be sufficient to support valid conclusions for management actions.
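To illustrate the kind of estimate a "mini" survey of roughly 100 respondents can support (see the small sample surveys item above), the sketch below computes an adoption proportion and a rough 95% confidence interval using a normal approximation; the counts are invented and this is only one of several ways the calculation could be done.

```python
# Illustrative sketch for the "mini" survey item above: estimating the share of
# sampled households adopting a project service, with a rough 95% confidence
# interval (normal approximation). The counts are invented.
import math

sample_size = 100      # respondents in the mini survey
adopters = 42          # respondents reporting adoption of the project service

p_hat = adopters / sample_size
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)

print(f"Estimated adoption rate: {p_hat:.0%} "
      f"(95% CI roughly {p_hat - margin:.0%} to {p_hat + margin:.0%})")
# With n = 100 the margin of error is about +/- 10 percentage points, which is
# often precise enough to guide operational decisions.
```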

LOCATING M&E RESPONSIBILITIES ORGANIZATIONALLY CLOSE TO THE MANAGEMENT DECISION-MAKING POINT

In the early 1980s there was a growing understanding, based on experience, that to increase the quality of evaluations and their relevance to management, the responsibility for various evaluation tasks should be located functionally and organizationally as close as possible to the appropriate management decision-making point. In practice this meant that responsibility for the monitoring and evaluation systems of individual projects should be decentralized to project management teams. Project M&E units were increasingly placed under the direct control of project management rather than "oversight" units. The 1980s saw a reorientation of M&E systems and reporting requirements to support project management's information needs. An emphasis was placed on continuous, ongoing evaluation as part of a project management information system and on special interim evaluations timed to key management decisions. More emphasis was being given to explicit planning and funding of M&E activities at the project design stage and to treating M&E as a special project component. In addition, there was growing recognition that M&E functions, just like other project components, required technical support and training to become institutionalized.

At the same time, there was growing awareness in the late 1970s that the senior management levels of the donor agencies were not getting evaluative information in the forms they required. The large volumes of individual project evaluation reports required further comparative analysis and synthesis to be useful at higher management levels. Similar comparative syntheses were required to meet the needs of project designers interested in learning from accumulated experience. Final and ex post evaluations tended to be more useful for these types of information needs. For implementation of these types of evaluation effort, with their broader program or policy focus, centralized evaluation staffs were generally found to be more appropriate.

Since the 1980s, some of the donors have increasingly been addressing evaluation concerns above the project level. Comparative evaluation studies that aggregate or synthesize findings across a number of projects have been useful to donor management for:


- influencing donor agency resource allocation decisions among sectors or project approaches

- influencing agency aid policies and procedures so that guidance reflects experience
- improving new project designs
- serving an accountability function of showing legislative bodies and constituents that foreign aid expenditures are achieving desired development results
- serving as a basis for policy dialogue between donor and host government officials

PROMOTING GREATER COORDINATION AND SHARING OF DONOR EVALUATION FINDINGS AND EXPERIENCES

During the 1970s most evaluations were at the individual project level and focused on implementation issues of little interest to other donors. However, as the focus of evaluation efforts broadened, the potential usefulness to other donors grew. Also, by the 1980s, lessons that some donors had learned about evaluation methodologies and procedures were emerging that were potentially useful to donors with less experience. By the 1980s, donors were less likely to be wedded to a particular evaluation technique and more open to searching for a variety of alternatives.

The perception in the 1970s of evaluations as judgments on performance meant that some donors treated them as confidential documents not to be openly shared. The subsequent trend toward focusing evaluations on practical management concerns and "lessons" with operational implications made them both less sensitive and of greater interest to other donors. Another trend that promoted sharing of evaluation experiences among donors in the 1980s was the growth in computer automation and database management systems. The larger donors had relatively sophisticated and easily accessible systems storing their evaluation documents.

A DAC expert group on aid evaluation was established in 1982 to strengthen members' exchange of information and experience, to contribute to improving the effectiveness of aid by drawing out the lessons learned from evaluation, and to seek ways of supporting developing countries' own evaluation capabilities. Recently this group has been making strides in the direction of further interagency sharing of experiences. For example, a compendium was published in 1986 that summarized the results of an internal survey of DAC participants concerning their evaluation methods and procedures. Also, the DAC group has led collaborative donor assessments of evaluation issues in nonproject assistance and of several cross-cutting policy concerns such as project sustainability and impacts upon women and the environment.


PROMOTING MORE COLLABORATIVE EVALUATION EFFORTS AND INSTITUTIONAL CAPACITY OF HOST COUNTRIES

The growing volume of evaluation literature produced by the donor agencies is a potential force for exerting influence on developing country programs and policies. However, host country use of donor evaluation findings has generally been limited because of their lack of involvement in and misperceptions of evaluation efforts.

Host governments have often been involved in monitoring-type functions at the individual project level. However, involvement has been less typical with evaluations, especially ex post project evaluations, which frequently have been done by foreign donor teams.

The 1980s have seen growing efforts among some of the donor agencies to encourage host country involvement in evaluation efforts. Reasons for this increased emphasis on collaborative evaluation efforts include:

- A realization that local expert involvement can improve the quality of evaluation results through superior understanding of important contextual and policy factors, as well as through local language skills.

- A recognition that host countries' involvement in evaluation efforts will enhance their institutional capacity to conduct evaluations in the future. Thus not only is the project M&E function more likely to be sustained, but the benefits may go beyond the immediate project being evaluated to benefit the evaluation efforts of other host country programs and policies.

- As donors undertake more "ongoing" evaluation efforts within project management structures, the use of local staff becomes more and more necessary. An increasing concern with keeping the costs of evaluation efforts down has also led to a greater emphasis on using local evaluation talent.

The commitment within many donor agencies to support the development of evaluation capacities in the developing countries is still limited, but collaborative evaluation activities have been growing. Some of these activities include the following:

- A variety of efforts to consult with recipients during the planning and implementation phases of an evaluation, or as part of the feedback process where results are shared and their implications are discussed.

- Some donors conduct joint evaluations, working with appropriate recipient agencies or having recipient staff participate on the evaluation team. AID, for example, includes recipients in about one-third of its evaluations.
- Some donors have built recipient-operated monitoring and evaluation units into projects from the outset, emphasizing evaluation as part of an ongoing redesign rather than an auditing process. Typically, technical assistance and training of counterparts were used to transfer skills in evaluation methods and concepts.

- In some instances, donors have supported the institutionalization of data collection and evaluation capacity within government ministries or centralized statistical agencies not tied to specific project M&E functions. These efforts have the advantages of serving multiple project information needs, of focusing on broader program and policy concerns of interest to the recipient country, and probably of being more sustainable than individual project M&E units.

- Donors may also support the evaluation capability of local nongovernmental institutions, such as universities, research institutes, and private consulting firms. This approach, which has been relatively underused, offers the advantages of cost-effectiveness, objectivity, and knowledge of local conditions and languages.

- Donors have recently encouraged greater collaborative dialogue on evaluation with counterparts, especially by holding special collaborative evaluation workshops.

SUMMARY

This article has traced some of the key problems and lessons learned by donor agencies in their efforts to monitor and evaluate development activities in the 1970s. In summary, these problems included the following.

CONCEPTUAL AND METHODOLOGICAL PROBLEMS

- Lack of clarity concerning the objectives of projects and the purposes of evaluation efforts caused conceptual problems for designing evaluations.

- Donor agencies tended to be at the extremes of too little or too much data collection, while the capacity for data processing and analysis frequently did not keep up with data collection.

- Experiences with rigorous statistical design approaches to evaluation, such as quasi-experimental models, were proving to be expensive, infeasible, and of low practical utility to donor agency management.

ORGANIZATIONAL AND MANAGEMENT PROBLEMS

- M&E functions were frequently not well integrated into project structures, and findings were not being used by management. Often M&E units were set up independently of project management and were consequently perceived as oversight units rather than as sources of useful information for project management.

- Management frequently failed to understand their role and responsibility regarding M&E, particularly in providing direction and defining the issues that the M&E system should address, and in applying evaluation findings to their operational decisions.


- M&E systems were not receiving enough attention or provision of resources at the project design stage. Implementation of M&E systems often suffered from inadequate follow-up technical assistance and training, late start-up, underfunding, and diversion of evaluation staff time to other project activities.

COLLABORATION PROBLEMS

- Insufficient sharing of M&E experiences and lessons among the donor agencies was another shortcoming. Also, donor coordination of M&E requirements at the country level was inadequate.

- Host country understanding of, interest in, and support for the M&E system components of donor projects was typically low, due to misperceptions of evaluation purposes and inadequate collaborative evaluation efforts on the part of donor agencies.

While the problems surrounding donor agency M&E methods and procedures of the 1970s were serious, there has been growing evidence of more successful evaluation approaches being taken in the 1980s. Lessons from past mistakes are leading the donors to new M&E initiatives and reorientations with greater promise.

CONCEPTUAL AND METHODOLOGICAL LESSONS

- A recognition now exists of the diversity of possible purposes of project monitoring and evaluation efforts. Evaluation designs and issues are more clearly linked with management's practical information needs, not with some "ideal" research design. Evaluation may legitimately be interested in assessing various aspects of project performance, such as impacts, effectiveness, relevance, efficiency, and sustainability, as well as in analyzing the factors responsible for successful or unsuccessful performance.

- Encouragement is now given for exploring a variety of evaluation methodologies and data collection approaches. Emphasis on using one standard methodology has been replaced by a philosophy of using multiple data collection techniques and sources that are more responsive to and focused on management-oriented questions. Sample surveys have been replaced or accompanied by less representative but lower cost, more rapid appraisal techniques such as key informant surveys, mini surveys, group interviews, and analysis of existing administrative records.

ORGANIZATIONAL AND MANAGEMENT LESSONS

- An understanding has developed that, to be effectively used, M&E responsibilities are best located organizationally close to the management level requiring the evaluative information for decisions.


- This includes a reorientation and reorganization of M&E systems at the project level in support of project management's information requirements for operational project decisions.

- There is more emphasis upon explicit planning and funding of M&E activities at the project design stage, and upon treating M&E as a special project component. As with other aspects of development projects, it is recognized that M&E functions require technical support and training to become institutionalized.
- Also, there is recognition that some broader evaluation efforts, of interest primarily at more senior donor agency management levels, can best be coordinated and implemented by a centralized evaluation staff.

- A growing number of donor agency evaluations are now addressing concerns above the individual project level. For example, comparative assessments aggregating and synthesizing experience across multiple projects within a particular sector, or focused upon a specific cross-cutting issue, are being undertaken to guide broad program, sector, or policy decisions facing a donor agency.

COLLABORATION LESSONS

- More emphasis is given to coordination of evaluation efforts among the donor community and to greater sharing of M&E experiences and lessons.

- A greater effort is being made by donor agencies to promote host country collaborative efforts in evaluation and to enhance local institutional capacity in data collection for monitoring and evaluation.

Annette L. Binnendijk is Division Chief of the Program and Policy Evaluation Division of the Center for Development Information and Evaluation, U.S. Agency for International Development, in Washington, D.C. She holds M.A., M.A.L.D., and Ph.D. degrees from the Fletcher School of Law and Diplomacy, Tufts University, Medford, Massachusetts.
